Getting Into Digital Forensics

Also known as a computer forensic investigator, computer forensics analyst, or digital forensics examiner, digital forensics jobs involve the investigation and analysis of evidence extracted from computers, networks, mobile devices and more to identify the "suspect behind the keyboard". Obtaining a bachelor's degree in computer forensics, computer science, cyber security or a similar field is generally considered an important step in getting into a digital forensics career. Certifications such as the GIAC (Global Information Assurance Certification) or CCE (Certified Computer Examiner) may also impress potential employers.

Found in both the private and government sectors, the requirements and responsibilities attached to digital forensics jobs will of course vary depending on the position you apply for and the area it sits in. The specific degree and qualifications needed will vary too, particularly between private and public roles; the public sector, for example, can be less stringent about qualifications.

Prescribed academia aside, working in digital forensics demands a thirst for continuous learning. Technology changes too quickly for forensics specialists to confine their expertise to one particular area. The reality is that your skills will always be driven by evolving digital trends, which currently means having a solid understanding of drones and devices that fall under the Internet of Things remit, as well as keeping pace with emerging AI-powered forensics tools. Fuel your learning by attending relevant conferences and subscribing to computer forensics and cyber security publications to stay up to date on emerging technologies. Given the ever-evolving nature of the industry, the best course of action for aspiring digital forensics professionals is to focus on their fundamental expertise.
If you come to interviews with an excellent technical understanding of computer-based systems and intricate knowledge of how to extract data from them, that is already a good start. Knowledge of operating systems including Windows, Linux, macOS, Unix and Android, as well as experience with reverse engineering and malware analysis, will also stand you in good stead. Beyond that, a real familiarity with how illicit material behaves tells you how it got onto a device and how it moves around that device. Knowledge of cyber security threats and what triggers different types of breaches is also essential. Digital forensics candidates should be able to take whatever digital evidence they find and accurately analyse and interpret it.

Written and oral communication skills rank highly on the list of underlying expertise. Being able to clearly explain the outcome of your investigation to non-technical people at various levels, such as jurors and lawyers in a court case, or business execs as you delve into a case of employee misconduct, is a vital skill to have. The work can place you alongside a legal team supporting their e-discovery for a lawsuit, or you could be part of an incident response team navigating cyber security breaches. Careers in digital forensics are extremely varied given the breadth of areas where that skillset is in demand.
Gartner forecasts that there will be 20.8 billion connected devices worldwide by 2020. The growing number of connected devices in the Internet of Things (IoT) provides an opportunity for service providers to develop new streams of revenue. For end users, IoT has the potential to provide solutions that can dramatically enhance productivity and quality in security, health, education, and many other areas. To capitalize on this opportunity, service providers must deliver an optimal customer experience while ensuring absolute data protection, privacy, and user safety.

However, this large number of diverse devices will also increase network vulnerability, with many devices carrying the potential to become targets for hackers and denial-of-service attacks. IoT devices are often constrained in memory, storage, and compute resources. These limitations make it a challenge to support complex and evolving security algorithms, which demand more processing power than these low-cycle CPUs can offer. Connected devices might also outlive the effectiveness of their encryption.

In a February 2016 survey and report on The Future of Mobile Service Delivery, Jim Hodges, senior analyst for Heavy Reading, pointed out that the top security concern for service providers is outages caused by distributed denial-of-service (DDoS) and botnet attacks. This fear is followed by threats related to system integrity in which traffic is manipulated by external attackers while spoofing a user's identity.

Three concepts are fundamental to IoT security: identity, authentication, and authorization. Identity is about naming the client; authentication means proving the identity of the client; and authorization refers to managing the rights that are given to the client. To further enable the collection of device data for real-time intelligence analysis, IoT communication protocols have to be agreed upon as well.
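The three concepts above can be separated in a minimal sketch. Everything here is hypothetical: the device IDs, shared secrets, and rights table are invented for illustration, and real IoT platforms typically use X.509 certificates or token services rather than raw shared secrets.

```python
import hashlib
import hmac

# Identity: each client has a name bound to a credential (hypothetical registry).
SECRETS = {"sensor-42": b"s3cret"}
# Authorization: the rights granted to each identity.
RIGHTS = {"sensor-42": {"publish:temperature"}}

def authenticate(device_id: str, nonce: bytes, mac: str) -> bool:
    """Authentication: prove the claimed identity via an HMAC over a server nonce."""
    secret = SECRETS.get(device_id)
    if secret is None:
        return False
    expected = hmac.new(secret, nonce, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, mac)

def authorize(device_id: str, action: str) -> bool:
    """Authorization: check the rights granted to the authenticated identity."""
    return action in RIGHTS.get(device_id, set())
```

The point of the separation is that a device can authenticate successfully and still be refused an action it has no right to perform.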
IoT message protocols must provide a set of high-level functions for device-to-platform messaging. Today, the popular IoT message protocol of choice is MQTT (Message Queue Telemetry Transport). MQTT is used as a primary transport protocol across industry verticals, including manufacturing, shipping/asset tracking, energy, and connected homes. Companies including IBM, Microsoft, Amazon Web Services, and Facebook Messenger all use MQTT for fast message delivery and to save battery life on devices.

IoT protocols like MQTT are used for secure and reliable communication between devices. MQTT is based on the Transmission Control Protocol (TCP) and can be secured with Transport Layer Security (TLS). The MQTT protocol was designed for low compute and memory requirements, with a much simpler and smaller packet structure than HTTP for certain IoT applications. MQTT was originally created in 1999 for remote sensors but has found a new life in IoT applications based on a publish-and-subscribe model. Devices and sensors "publish" data to an IoT platform (the broker), and an IoT application then "subscribes" to the data relevant to its use case. Each published-and-subscribed stream of data is called a "topic," and topics can be organized in a hierarchical structure based on use cases and the category of the data type. Connectivity and communications between the devices and their platform influence the load, control, and security required by each use case.

In order to keep the protocol as lightweight as possible for resource-constrained IoT devices, MQTT provides minimal security beyond TCP. MQTT only recommends that the TLS protocol be used for applications that require additional levels of authentication. As a result, MQTT communications that rely on TCP alone are unencrypted and susceptible to man-in-the-middle attacks. The MQTT protocol carries a number of potential vulnerabilities.
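The hierarchical topic structure can be illustrated with MQTT's standard wildcard rules: "+" matches exactly one topic level and "#" matches all remaining levels. The helper below is a simplified sketch, not a complete implementation of the specification's edge cases.

```python
def topic_matches(filter_: str, topic: str) -> bool:
    """Match an MQTT topic name against a subscription filter with + and # wildcards."""
    f_levels = filter_.split("/")
    t_levels = topic.split("/")
    for i, level in enumerate(f_levels):
        if level == "#":            # multi-level wildcard: matches everything below
            return True
        if i >= len(t_levels):      # filter has more levels than the topic
            return False
        if level != "+" and level != t_levels[i]:
            return False
    return len(f_levels) == len(t_levels)
```

A broker applies this kind of matching to decide which subscribers receive a message published to, say, "home/kitchen/temperature".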
For example, open ports can be used to launch denial-of-service (DoS) attacks as well as buffer overflow attacks across networks and devices. Depending on the number of IoT devices connected and use cases supported, the complexity of the "topic" structure can grow significantly and cause scalability issues. MQTT message payloads are encoded in binary, so corresponding application and device types must be able to interoperate. Another problem area is MQTT usernames and passwords, which are sent in clear text.

While not specifically part of the MQTT specification, it has become customary for IoT platform brokers to support client authentication with SSL/TLS client-side certificates. Transport encryption with SSL and TLS can protect data when implemented correctly. To protect against threats, sensitive data including user IDs, passwords, and any other types of credentials should always be encrypted. The downside of using TLS, SSL, and other methods of encryption is that they can add significant overhead. However, techniques such as TLS session resumption can compensate for some of the connection costs of TLS. Hardware acceleration is another method for reducing the performance penalty of encryption. For complex applications on constrained devices, an optimized encryption library can be very useful: when application code is large, such a library can reduce processing memory and increase performance.

The architecture of MQTT depends on brokers being highly available. Using X.509 certificates for client authentication can save resources on the broker side when many clients try to use broker services, such as database lookups or web service calls. Combining MQTT with state-of-the-art security standards like TLS and using X.509 certificates can thus improve both security and performance. The IoT represents an important new business opportunity for service providers.
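As a sketch of the TLS hardening described above, Python's standard ssl module can build a client context suitable for MQTT over TLS (conventionally port 8883). The certificate file names in the comment are hypothetical; a real deployment would load the broker's CA bundle and the device's X.509 client certificate.

```python
import ssl

def make_mqtt_tls_context() -> ssl.SSLContext:
    """Build a TLS client context that enforces server certificate validation."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse legacy protocol versions
    # X.509 client authentication (hypothetical file names):
    # ctx.load_cert_chain(certfile="client.crt", keyfile="client.key")
    return ctx

context = make_mqtt_tls_context()
```

The default context already sets hostname checking and required certificate verification, which is exactly what protects an MQTT session against the man-in-the-middle attacks mentioned earlier.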
Though securing the MQTT messaging protocol with TLS is an important security consideration, many other attack vectors can also be exploited. Service providers must ensure end-to-end security with tight authentication of devices, authorization, and network-enforced policy for access to communication paths. The encryption of data for safety and privacy is also critical to the revenue streams of service providers in delivering an optimal customer experience.
To ensure that data communication systems the world over are compatible and can communicate with each other, ISO created a standard called the Open Systems Interconnection (OSI) model. The OSI model has seven layers, the highest being the Application layer and the lowest being the Physical layer. The layers in the hierarchy are:
- Application
- Presentation
- Session
- Transport
- Network
- Data Link
- Physical

LAYER 7 – APPLICATION LAYER
- It is the topmost layer of the OSI model and provides the interface between the network protocol and the software.
- This layer mainly holds application programs. Some key application protocols which work on this layer are:
- File Transfer Protocol (FTP)
- Simple Mail Transfer Protocol (SMTP) and Post Office Protocol (POP)
- Internet Message Access Protocol (IMAP)
- Hypertext Transfer Protocol (HTTP)
- Simple Network Management Protocol (SNMP)
- Network News Transfer Protocol (NNTP)

LAYER 6 – PRESENTATION LAYER
The functions performed at the Presentation layer of the OSI model are:
- Protocol / data conversion
- Code translation
- Data encryption and decryption
- Data compression
- Character set conversion
- Interpretation of commands

Examples of functions performed at the Presentation layer include ASCII to EBCDIC translation and password encryption.

LAYER 5 – SESSION LAYER
The Session layer is responsible for gracefully closing sessions and for session checkpointing and recovery. It is used in applications that make use of remote procedure calls. The Session layer utilises the Transport layer to establish communication sessions. The major attributes and functions of the Session layer include:
- It is the 5th layer of the OSI model.
- It manages and synchronises the conversation between two applications.
- Source-to-destination streams of data are resynchronised suitably to avoid data loss and to prevent messages from ending prematurely.
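The Presentation-layer character set conversion mentioned above, ASCII to EBCDIC, can be sketched with Python's built-in codecs. The cp500 codec is one common EBCDIC variant, chosen here purely for illustration.

```python
# Presentation-layer style code translation: the same text, two encodings.
text = "HELLO"
ascii_bytes = text.encode("ascii")    # ASCII code points
ebcdic_bytes = text.encode("cp500")   # EBCDIC code points differ byte-for-byte
roundtrip = ebcdic_bytes.decode("cp500")
```

A receiving system that expects EBCDIC would misread the ASCII bytes entirely, which is why this translation is a distinct Presentation-layer responsibility.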
LAYER 4 – TRANSPORT LAYER
The Transport layer handles transport functions such as reliable or unreliable delivery of data from source to destination. The sending Transport layer breaks data into smaller segments, and the receiving Transport layer is responsible for reassembling those segments into complete messages. The key functions of the Transport layer for network communication are:
- Guaranteed data delivery
- Name resolution
- Flow control
- Error detection
- Error recovery

The common Transport protocols at this layer are:
- Transmission Control Protocol (TCP): a connection-oriented protocol that offers highly reliable data transport. Unlike with UDP, an application sending data over TCP receives acknowledgement that the data was actually received.
- User Datagram Protocol (UDP): a connectionless protocol that does not provide reliable data transport. No acknowledgements are received by an application using UDP.

LAYER 3 – NETWORK LAYER
The Network layer is primarily responsible for establishing the paths used to transfer data packets between devices on the network. One of the main functions performed at the Network layer is routing, which enables packets to be moved between computers that are more than one link apart. The functions performed at the Network layer of the OSI model are listed below:
- Traffic routing to the end destination
- Routing functions: route discovery and route selection
- Packet fragmentation: a router can fragment a packet for transmission, with reassembly at the destination station
- Packet sequence control
- End-to-end error detection, from the sender of the data to the receiver
- Congestion control
- Network layer flow control and Network layer error control
- Accounting functions to keep track of packets forwarded by Network layer devices

LAYER 2 – DATA LINK LAYER
The Data Link layer is primarily responsible for communications between adjacent network nodes. Network switches and bridges operate at this layer, which may also correct errors generated in the Physical layer. The responsibilities of the Data Link layer include:
- Frame addressing
- Making sure data transfer is error-free from one node to another
- Transmitting and receiving data frames sequentially
- Media access control
- Acknowledgement and resending of frames
- Managing frame traffic control: it signals the transmitting node to stop when the frame buffers are full

The Data Link layer receives packets from the Network layer and constructs them into frames to be sent over the Physical layer. A cyclic redundancy check (CRC) is added to each data frame to detect damaged frames. The Data Link layer can determine when a frame is lost and request that lost frames be retransmitted.

The Data Link layer is divided into two sublayers:
- Logical Link Control (LLC) sublayer: provides and maintains the logical links used for communication between the devices. The functions of the LLC sublayer include error checking, frame synchronisation, and flow control.
- Media Access Control (MAC) sublayer: controls the transmission of packets from one network interface card (NIC) to another over a shared media channel.

LAYER 1 – PHYSICAL LAYER
The attributes of the Physical layer are:
- It is the lowest layer of the OSI model and transmits raw bit streams over a physical medium.
- It activates, maintains and deactivates the physical connection.
- It is responsible for transmission and reception of data over the network.
- Data rates needed for transmission are defined at the Physical layer, which converts digital bits into electrical or optical signals.
- Devices at the Physical layer handle signalling in the form of bits (1s and 0s), represented by pulses of light or electricity.

Data encoding, bit synchronisation, multiplexing and link termination are also performed at this layer.

BENEFITS OF THE OSI MODEL
- Any errors that occur in one layer can be handled separately from other layers.
- Each layer can operate independently of the layers above or below it.
- OSI does not depend on any operating system or platform.
- Any hardware or software that meets the OSI standard will be able to communicate with any other hardware or software that also meets the standard.
- OSI is independent of country; it doesn't matter where the hardware or software is made. It crosses barriers of product localisation: it has become a universal standard, and vendors can follow it to produce products that can be used the world over.
- Products from different vendors are able to interoperate, since they follow OSI as the reference model for protocols and interface standards.
- By separating network communications into logical layers, the OSI model simplifies how network protocols are designed.
- Network designs become scalable, since following the layered OSI model increases modularity.
- Separating the communication process into layers makes software development, design, and troubleshooting easier.
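The Transport-layer distinction between TCP and UDP described earlier is visible directly in the socket API. A minimal loopback sketch using UDP, since it needs no connection handshake (a TCP version would use socket.SOCK_STREAM with connect() and accept()):

```python
import socket

# UDP is connectionless: a datagram is sent with no handshake and no
# acknowledgement visible to the application.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"sensor reading", ("127.0.0.1", port))

data, addr = receiver.recvfrom(1024)   # in general, delivery is not guaranteed
sender.close()
receiver.close()
```

On the loopback interface the datagram arrives reliably, but over a real network UDP offers no retransmission, which is exactly the trade-off the layer description above draws between TCP and UDP.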
What Is XAI? The Ethics of XAI

Artificial Intelligence is technology that can be used in many ways. It can fake images, fake videos, and make fake news. On the other hand, it can detect lies, detect cancer, and guide people using large amounts of data to make better decisions. AI technology can be used for good and for bad. This can't be a surprise, because the same holds true for many other technologies and tools that humans have invented. A simple tool such as a knife may be used to cut paper, eat a meal, or kill a person. A technology such as radioactive radiation may be used to cure cancer, create nuclear power, or make a nuclear weapon of mass destruction. Depending on the circumstances, we may judge the situation to be good or bad. As a society we have developed a framework of moral principles to guide our behavior and help us make judgments about good and bad. The resulting institutions and methods are used to protect humanity from harm.

Recent developments in AI have raised concerns over the ethical aspects of some of the technology. Even technology visionaries such as Elon Musk, Bill Gates, Steve Wozniak, and Stephen Hawking are warning that super-intelligence may be more dangerous than nuclear weapons. What makes AI so different that it raises such high concerns? Among the concerns are the extent to which our legal institutions are prepared to protect us and the robustness of the methods for dealing with the threats that this new technology may bring. There are political concerns as well, including the job market, skilled labor shortages, and shifts in the balance of power. Some workers fear losing their jobs because of shifts in the job market, while European startups, for example, cannot find the right resources internally and compete for expertise with the US and China.

Why would we consider that AI will take over the world? Since the beginning of the computing age, people have tended to treat the computer as an intentional being.
We can infer that from the language we use to reference a computer. Expressions like "the computer says no," "the computer is not cooperative today," "he must be cleaned," and "my computer loves to crash" are an indication that people attribute intentions to this technology. AI solutions that are designed explicitly to appear intentional strengthen this phenomenon. These systems exhibit complex behaviors usually seen as the territory of intentional humans, such as planning, learning, creating, and carrying on conversations. They seem to display beliefs, desires, and other mental states of their own. According to the philosopher Daniel Dennett, people assign intentionality to systems whose internal workings they do not understand in order to deal with the uncertainty about their behavior in a meaningful way.

The fact that we assign intentions to computers may explain why one-third of the population would like a computer to say "I love you," as revealed by a recent study by Pega. It may also explain why we feel an urge to protect ourselves from being guided, fooled, restricted, or manipulated by this same computer. Hopefully the group that wants a machine to say "I love you" coincides with the two-fifths of the population that believes "AI can behave morally" and the one-fourth of the population that believes "AI will take over the world."

Why would we even consider such ideas? Aren't we good enough decision makers? Is our reasoning capability so bad? Psychological research provides proof that human decision makers have many systematic decision biases. "Our reasoning is biased towards what we already believe" is a good summary of the research results, known as confirmation bias. Other, more recent research explains that these biases have evolved as strategies we use in order to distribute the work of 'good reasoning'.
Experiments show that our reasoning capability improves when we reason together, "with each individual finding arguments for their side and evaluating arguments for the other side." We can conclude that reasoning is a social process; the lonely thinker seeking truth and knowledge at the top of a high mountain is a romantic but false idea. To summarize:
- we tend to assign intentions to computers even though they are not moral beings,
- we are not very good decision makers, and
- our reasoning capabilities work best in a social process.

Given these observations, it is a natural development to use technology to help us make better, more objective decisions. Is AI our panacea? There are reasons for concern when AI technology is applied on a large scale. These reasons relate to the differences between humans and computers and to the effect on the pillars of our modern society:
- People attribute intentionality to AI systems, which allows us to predict potential behavior without needing to know its exact workings. But in contrast to humans, these centrally-programmed systems don't explain themselves and are not open to feedback. This prevents us from learning.
- AI learns from the historical data that we created. We are biased, so the data is biased, even if we do not know which biases are in the data. The resulting systems will have the same biases, and using them prevents us from unlearning our biases and from being egalitarian, violating the basic right of equality.
- Human reasoning capacities, collective intelligence, and society's ethical framework are based on conversation, independent information search, and diversity of opinions. AI is automating those essential capabilities; as a result, our societies' learning capacity will be narrowed to the tunnel view offered by individualized advertisements, suggested readings, and the repetition of our shared opinions. This limits another fundamental right in our society: freedom of choice.
Is our legal system fit enough to protect us from potential harm? The European Commission takes these concerns seriously and has set the example when it comes to ethics guidelines. It recognizes that "To gain trust, which is necessary for societies to accept and use AI, the technology should be predictable, responsible, verifiable, respect fundamental rights and follow ethical rules." It also warns about the 'echo chamber', where people only receive information that is consistent with their opinion, and about discrimination reinforced by biased training data. The resulting ethical guidelines for 'trustworthy AI' (published in the spring of 2019) are organized at three levels of abstraction:
- Ethical principles stated in general terms: AI algorithms should respect human autonomy, prevent harm, and be explicable and fair.
- Guidance in the form of seven requirements on the way AI solutions are constructed and on their resulting behavior. Two of these requirements, transparency and fairness, relate directly to XAI.
- An assessment list consisting of questions that can be used to establish the trustworthiness of an AI solution.

All users and technology providers of AI solutions should be aware of these developments and be prepared, as these guidelines may become the norm. They will require AI solutions that have:
- the ability to trace a dataset and an AI system's decision to documentation;
- the ability to explain the technical processes of the AI system to humans;
- the ability to provide a suitable explanation of the AI system's decision-making process when an AI system's decision has a significant impact on people's lives;
- no biases in the data set, no intentional exploitation of human biases, and no unfair competition;
- a user-centric design process, implemented by consulting stakeholders throughout the system's lifecycle, starting in the analysis phase and continuing with regular feedback after the system's implementation.

Figure 1.
Realizing Trustworthy AI throughout the system's entire life cycle (inspired by Figure 3 in ).

This feedback loop is a well-known measure to improve and control any engineering process. What else can we learn from traditional engineering? Let's consult the famous article "No Silver Bullet" by Brooks. He argued that "we cannot expect ever to see two-fold gains every two years in software development, as there is in hardware development." AI and XAI will not change this, because his arguments still hold today:
- The fundamental essential complexity of engineering requirements in software design has not changed. For example, selecting the right level of detail, the mode of interaction with the user, and the explanation model differs per industry, topic, technology, and decision characteristics. Therefore, good engineering also applies to AI, and the role of the business analyst remains important.
- The need for 'great' designers next to 'good' designers is still difficult to fulfill. Brooks' specific statement about AI provides a perfect example of this need. Techniques for expert systems, image recognition, and text processing have little in common; the work is problem-specific, and the appropriate creative and technical skills are required to transfer the technology into a good application.

I support his recommendation to "grow" software organically through incremental development. His recommendation also aligns with the agile methodologies that are popular today and with the feedback cycles suggested by the guidelines for trustworthy AI.

Do's and Don'ts

The development of a code of conduct for the digital industry is one measure that many researchers, politicians, and leaders in the industry have asked for. It would have to be followed by anyone working with sensitive data and algorithms that guide behavior: a kind of oath for IT professionals, similar to the way we certify professionals in project management, medicine, accounting, and law.
But such a code is not yet in place, and it will take time to implement it with sufficient support from society and professionals. I have found many suggestions listing elements that should be part of such a code, at different levels of abstraction. Based on these, I present my top three do's and don'ts. Follow them to decrease the concerns around using AI technology and to increase the ethical use of this powerful technology. But don't forget to continue, or start, following good engineering principles.

Top 3 Do's
- Do create a feedback loop between the human and the machine, and require an explanation for each decision that guides human behavior.
- Do de-centralize systems in a complex environment to make them more robust; balance different decision criteria, and anticipate changes.
- Do educate people about algorithms, raise awareness about trustworthy AI, and focus on education that develops critical thinking skills instead of 'repetitive' or 'white-collar' task skills that will get automated.

Top 3 Don'ts
- Don't use data that has not been collected and enhanced with the intention of being used by machine learning algorithms.
- Don't filter information without making the user aware of the filter and providing the ability to remove it.
- Don't hide the data that has been used to feed, train, and test algorithms. Needless to say, don't trust an oracle.

The deceased artist Dali is talking to you, explaining his life and making selfies with visitors in The Dali Museum (thedali.org) as if he were still alive. It's fun; people like it, and their learning experience in the museum has more impact. It is an improvement in the way we present and transfer knowledge and information. Artificial Intelligence (AI) technology made all this possible, the promotional video tells us. Yes, in some sense that is true. A neural network was trained on old videos and pictures of Dali.
The resulting model found patterns in Dali's speech and appearance that could be mimicked. The technology used is a result of AI research. This is AI technology used for good.

The same technology can be used with good intentions but an undesired result. The most cited example is Amazon's algorithm that was trained to help in selecting new hires. The result turned out to be highly gender-biased towards men. That was not the intention! Amazon's engineers are not stupid. They thought about this risk in advance and omitted gender as a characteristic in the dataset. But a CV may list 'chair of the Women in Chess Association' or extracurricular activities like 'cheerleader', and the model picked up these words. The result was a model biased towards male hires. Why? The biases of our past selections are reflected in the historical data and, in the end, a neural network is just a mathematical function based on statistics, recognizing patterns in the historical data used for training. This is AI technology not used, because we did not agree with the outcome.

Whether and how Cambridge Analytica influenced elections is still a question for many people. Setting that question aside, and supposing that Facebook's data was used legally, most people would still reject the idea of using that data to manipulate voting behavior, a form of 'big nudging'. And that is what happened. Deliberately. How? It is not hard to imagine that there are patterns that match users with psychological profiles based on the words they use and the kinds of Facebook items they like. And that is exactly what an AI algorithm learned. Combine this with a model that relates psychological profiles to people's reactions to certain information and to voting behavior, and the combination is a recipe for manipulating voting behavior. The only thing you need to add is an individualized advertisement targeting a specific profile with information that confirms its opinions in order to influence voting behavior.
This is AI technology used for bad.

Can AI ever make a moral decision? An often-cited example of a moral decision involves three cats, five dogs, and a person driving a car. In this scenario, the dogs follow the traffic rules and have a green light; the cats do not follow the traffic rules and pass through a red light. The person in the car can either hit the cats or avoid them by hitting the dogs. Most people will argue that hitting the cats and saving the dogs is the morally right choice. But small changes to the situation may change their assessment. What if the dogs did not follow the rules? What if the cats are doctors on their way to save many lives? What if the car is a self-driving car? When people make such moral decisions, we are not concerned. But it does concern people when a machine makes such a moral decision. Could it be that this concern is related to the fact that the machine cannot explain its decision?

References:
- Jichen Zhu & D. Fox Harrell, "System Intentionality and the Artificial Intelligence Hermeneutic Network: The Role of Intentional Vocabulary," @ https://pdfs.semanticscholar.org/2ddc/1444e94cb0741305ab930cb54a09bf2688bb.pdf
- "PegaWorld 2019 Keynote: Rob Walker — Empathetic AI," @ https://www.pega.com/insights/resources/pegaworld-2019-keynote-rob-walker-empathetic-ai
- Daniel Kahneman, @ https://kahneman.socialpsychology.org
- "Ethics Guidelines for Trustworthy AI," @ https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60419
- "Coordinated Plan on Artificial Intelligence," Brussels, 7.12.2018, COM(2018) 795 final, @ http://data.consilium.europa.eu/doc/document/ST-15641-2018-INIT/en/pdf
- Frederick P. Brooks, Jr., "No Silver Bullet — Essence and Accident in Software Engineering," Computer, 20(4), 1987, pp. 10-19, @ http://faculty.salisbury.edu/~xswang/Research/Papers/SERelated/no-silver-bullet.pdf
- "Amazon scraps secret AI recruiting tool that showed bias against women," Reuters (Oct. 9, 2018) @ https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
- "Cambridge Analytica helped 'cheat' Brexit vote and US election, claims whistleblower," Politico (Mar. 29, 2018) @ https://www.politico.eu/article/cambridge-analytica-chris-wylie-brexit-trump-britain-data-protection-privacy-facebook/
- "Will Democracy Survive Big Data and Artificial Intelligence?" Scientific American (Feb. 25, 2017) @ https://www.scientificamerican.com/article/will-democracy-survive-big-data-and-artificial-intelligence/
Originally published by New Context. No matter how large or small your organization is, the security of your data must be one of your top priorities. While regulations like HIPAA, GDPR, and CCPA provide basic privacy compliance standards for storing, accessing, and transferring information, data security overall is a broader issue that you'll need to tackle from multiple angles. Data privacy – which is the primary concern of these regulations – involves who is allowed to see or access sensitive information. Data security, on the other hand, is about protecting that data from malicious or unauthorized access. Essentially, you need effective data security in order to maintain your data privacy. The best way to approach data security is by designing your systems, infrastructure, and policies around security from the very beginning – a practice known as "shifting left". However, that doesn't mean your organization can't upgrade your existing security strategy now to ensure data protection and privacy compliance in the future. The amount and types of data you manage will determine which specific approach you take, but the following data security best practices can help organizations of any size protect sensitive information and prevent breaches.

Data Security Best Practices

It can be tempting to view data security as a problem that can be solved by implementing a one-and-done software solution, but the truth is there are many layers of technology, training, and processes involved in a truly effective data security strategy. Even the smallest organizations need to take a holistic approach to securing their important data. Sensitive data needs to be identified and clearly labeled so it can be stored in a secure location with the appropriate user access controls.
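User access controls of this kind are often implemented as a deny-by-default role lookup, following the principle of least privilege. A minimal sketch (the role names and permission strings are hypothetical):

```python
# Hypothetical role definitions: each role gets only the permissions
# its job duties require.
ROLE_PERMISSIONS = {
    "hr_analyst": {"read:employee_records"},
    "hr_manager": {"read:employee_records", "write:employee_records"},
    "dev":        {"read:app_logs"},
    "dba":        {"read:app_logs", "read:db", "write:db"},
}

def is_allowed(role, permission):
    """Deny by default: anything not explicitly granted is refused."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("hr_analyst", "read:employee_records"))   # granted
print(is_allowed("hr_analyst", "write:employee_records"))  # denied: not required for the job
print(is_allowed("intern", "read:db"))                     # denied: unknown role gets nothing
```

Real systems layer groups, auditing, and expiry on top of this, but the deny-by-default core stays the same.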
You can use data discovery technology to help you classify and store data according to applicable criteria, whether that means the data security regulations you must comply with or the relative value of the data to your organization. If you employ a data discovery solution to help you manage your data, you need to ensure there are controls in place to prevent unauthorized users from reclassifying data improperly. Once your data has been classified and secured, you must create security policies to restrict who has access to important data and implement barriers to prevent unauthorized users from gaining access. Your data usage policy should follow the principle of least privilege, which means users only get the privileges that are required for them to perform their job duties. Your access control barriers can be physical, such as biometric or card-swipe locks on doors, or technical, like Group Policies and multifactor authentication. You should be logging all your database and file server activities, including logins, changes, and moves. This will enable you to track any changes to critical data and spot any other unusual activity. Also, tracking how your sensitive data is being used and who's accessing it will help you build and manage more effective access controls in the future. All of your important data should be encrypted, whether it's on a file server, inside a database, or on a user's hard drive. You should be encrypting your data both while it's at rest and while it's in transit (i.e., via email, over the network, or on portable media, as well as during any data migrations). Most encryption is software-based, using either passwords or public-key cryptography (e.g., SSH, or HTTPS with PKI certificates), but you could also use hardware-based encryption, such as a TPM chip on the motherboard or a USB key that must be inserted before access is granted. Software-based password encryption will only protect your data if your passwords are secure.
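On that note, passwords themselves should never be stored in plain text. A common approach is a slow, salted key-derivation function such as PBKDF2, sketched here with Python's standard library (the iteration count is an illustrative choice, not a universal recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    """Derive a slow, salted hash; store salt and iterations with the digest."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password, salt, iterations, expected):
    _, _, digest = hash_password(password, salt, iterations)
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(digest, expected)

salt, iters, digest = hash_password("big dog small horse")
print(verify_password("big dog small horse", salt, iters, digest))  # True
print(verify_password("P@s$w0rd!2", salt, iters, digest))           # False
```

The per-password random salt means two users with the same password get different digests, and the iteration count makes bulk guessing expensive for an attacker who steals the database.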
Your organization should be requiring long, complex passwords that are changed on a regular basis. However, these password requirements can be frustrating to users, and can frequently lead to bad practices like writing passwords down on sticky notes or only changing one character of their password on each change cycle. To combat this, you should invest in a password manager that allows your staff to save and auto-fill their passwords. In addition, multi-factor authentication can provide another layer of security by requiring a secondary device (like a cell phone or a key fob) to confirm the user’s identity before they can access sensitive data. Data backups are absolutely critical no matter how big or small your organization is. You should be backing up all critical business assets, but especially the ones containing sensitive or important data. Having a robust backup policy in place will help ensure the security of your data even in a worst-case scenario, such as a natural disaster or ransomware attack. Your backup strategy will be determined by how much data you manage, how much storage space and other resources are available for the backups, and the regulations you must be compliant with. It’s important to note that backups are frequently targeted in cyberattacks as well, so that data should be encrypted to prevent data loss and add an extra layer of security. All of the applications and operating systems on your network need to be up to date to ensure any security vulnerabilities are patched. For endpoint devices, the best strategy is most likely going to be allowing automatic updates on antivirus software and operating systems. On critical infrastructure, you should be thoroughly testing any patches before you implement them to ensure that your functionality isn’t impacted, and no vulnerabilities are introduced into your network. 
Speaking of endpoints, it's important to have security software installed and updated on all of the endpoints on your network, even if you have a BYOD policy and/or employees working from home. Endpoint security software protects your data from unauthorized programs and malware, including rootkits and ransomware. Protecting your endpoints helps ensure that malware cannot find an entry point into your network, keeping your infrastructure safe from breaches.

Software Supply Chain Security

In software development, "supply chain" refers to any code, binaries, or other components that go into or affect your software at any stage of development. These dependencies are frequently open source, and too often they're implemented without being thoroughly vetted, so there is a huge potential for an unnoticed vulnerability to be introduced. These kinds of vulnerabilities, even if they're not intentional or malicious, are becoming common attack vectors, so you need to address them early in the development process. The most effective way to do so is with automated software supply chain scanning, for example with a SAST (static application security testing) tool. One of the biggest threats to your data security is human error. You need to ensure your staff is trained on the proper procedures for accessing, storing, and changing sensitive data, especially if your data is subject to regulations like HIPAA or PCI. You should also educate your users on how to spot phishing attempts and other social engineering tactics, so they don't inadvertently expose your sensitive data to malicious actors.

Fostering a Secure and Collaborative Culture

All of the technology in the world won't protect your organization's important data if you haven't fostered a culture of trust and security among your people. Most security vulnerabilities occur because of human-level error, so you must approach security as an issue beyond your endpoints and infrastructure.
Robust security policies and access control lists, plus comprehensive security awareness training for your staff, will go a long way toward preventing breaches. It’s crucial to find a balance between security and trust. If your organization views its staff as potential security threats and not as valuable team members, you’ll create a toxic environment that can damage your company faster than any security breach. Instead, you need to treat operational security as a collaborative goal that your whole business is working towards together.
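Returning to the supply chain scanning described earlier: at its simplest, a dependency check compares pinned versions against an advisory list. A toy sketch (the package names and advisory IDs are invented; real tools query live vulnerability databases):

```python
# Hypothetical advisory list; real scanners pull this from vulnerability databases.
KNOWN_VULNERABLE = {
    ("leftpadder", "1.0.2"): "CVE-XXXX-0001: arbitrary code execution",
    ("yamlparse", "3.1.0"): "CVE-XXXX-0002: unsafe deserialization",
}

def scan_requirements(lines):
    """Flag pinned dependencies that match a known advisory."""
    findings = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        name, _, version = line.partition("==")
        advisory = KNOWN_VULNERABLE.get((name, version))
        if advisory:
            findings.append((name, version, advisory))
    return findings

reqs = ["requests==2.31.0", "yamlparse==3.1.0", "# dev tools", "leftpadder==1.0.3"]
for name, ver, advisory in scan_requirements(reqs):
    print(f"{name}=={ver}: {advisory}")
```

Note that this is software-composition analysis of dependencies, which complements (rather than replaces) static analysis of your own code.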
Tethering is the process of connecting a non-mobile device, like a desktop or laptop computer, to a mobile device (or vice versa) for the purpose of accessing the internet. Many new mobile phones can share their internet connection with your computer, similar to the way a router works. For example, if you have a laptop and are not near your usual WiFi signal, you can use your mobile phone's data connection to gain access to the web. Many phones with VoIP capability also offer tethering features. VoIP calls can be made over WiFi, so it makes sense that a 3G phone like the iPhone or the Google G1 Android phone can be used for tethering.
Thermal Detection Monitoring in the Age of COVID-19

As states begin to ease COVID-19 restrictions, organizations must consider how and when they'll reopen and how to keep employees, students, patients, and customers safe. The CDC offers guidance in a "Re-Opening Workplaces During the COVID-19 Pandemic" decision tool. It suggests three key questions to guide re-opening decisions:
- Will reopening comply with state and local orders and protect high-risk employees?
- Are recommended safety measures in place?
- Is ongoing monitoring in place?

It is the last question, about ongoing monitoring, that will likely require re-thinking tools and processes. How will your organization monitor employees, students, customers, patients, and others?

Monitoring temperatures with thermal detection systems

One way, of course, is to run a good, old-fashioned digital thermometer across the forehead, an ideal solution if you have just a couple of employees. But what if you have to take the temperatures of more than ten people waiting in long lines, straining social distancing requirements and dragging productivity down? Thermal detection systems offer a faster, less obtrusive way. A thermal imaging camera detects and maps heat, making these cameras useful in all kinds of applications—from detecting leaks and failure points in manufacturing and industrial plants, to conducting surveillance in law enforcement and military operations, to diagnosing disorders and disease in healthcare. While these systems have been in use for a while, today they offer a meaningful way to detect high temperatures that can indicate serious illness, such as influenza and COVID-19. They can also be a deterrent for those who are sick, encouraging them to stay home. Purpose-built cameras are typically connected to computer workstations and monitors.
As people walk by, thousands of temperature readings are taken at a single spot on each person's face—which prevents false readings from hot objects such as a cup of coffee, for example—instantly producing a two-dimensional heat map. Screeners can then respond as appropriate.

What to look for in a thermal detection system

Thermal detection solutions come in many form factors—from high-end integrated systems to standalone portable systems that can be moved on demand. But not all solutions are created equal. When considering thermal detection systems, look for:
- High degrees of temperature accuracy to minimize false positives.
- Appropriate temperature sensitivity and range that enables a less intrusive experience for people moving in front of cameras.
- Ability to scan many people at once to reduce lines, waits, and productivity losses.
- Ambient temperature sensitivity that enables you to use the solution indoors or out—without interference from air temperatures.
- Fast response times to instantly detect high temperatures.

Some solutions also provide flexible options for recording and saving data (or not), as well as AI and data analytics capabilities that can extend your use beyond the current pandemic to maximize the value of the system.

Logicalis: Experts at thermal detection systems

Logicalis has successfully helped customers implement thermal detection systems for other applications, and has the expertise needed to help you identify, evaluate, implement, and support a thermal detection solution for COVID-19 monitoring and other applications. Our partners offer multiple form factors to fit your needs.

Case study: Manufacturing facility

A Logicalis customer, this manufacturer operates 10 facilities with approximately 200 employees in each facility.
They're considering re-opening and have estimated that it would take at least one hour to manually take employees' temperatures, both when they arrive at work and anytime they have to re-enter the building (e.g., after lunch). With multiple shifts operating, this scenario would cost thousands of dollars each day, along with a hit to productivity. Logicalis met with this customer to thoroughly understand their needs and recommend a solution for their organization—a solution that has already been implemented and is ready for the organization's re-opening. Bill Evans is Director of Architecture for IoT & Analytics for Logicalis US, responsible for helping customers solve their technical challenges.
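The face-region screening logic described earlier can be sketched in a few lines. This is a simplified illustration, not any vendor's actual logic; the 38 °C threshold and the grid values are assumptions:

```python
def max_face_temp(readings, face_region):
    """readings: 2D grid of temperatures in Celsius; face_region: (row0, row1, col0, col1)."""
    r0, r1, c0, c1 = face_region
    return max(max(row[c0:c1]) for row in readings[r0:r1])

def screen(readings, face_region, threshold_c=38.0):
    """Flag a person based on the face region only, so a hot coffee cup
    elsewhere in the frame does not trigger a false positive."""
    return max_face_temp(readings, face_region) >= threshold_c

frame = [
    [22.0, 22.5, 23.0, 60.0],   # 60.0 = coffee cup, outside the face region
    [22.0, 36.6, 36.9, 23.0],
    [22.0, 36.7, 38.4, 23.0],   # elevated forehead reading
]
print(screen(frame, face_region=(1, 3, 1, 3)))  # True: fever-range temperature detected
```

Production systems add face detection to locate the region automatically and ambient-temperature compensation, but the core decision is this comparison.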
Your company may have state-of-the-art monitoring and the latest anti-malware and anti-virus programs, but that doesn't mean you're not at risk for a breach, or that, as an employee, you're not putting your company at risk. Humans have always been the weakest link in the security chain. Phishing and social engineering schemes account for 93 percent of breaches, according to Verizon's 2018 Data Breach Investigations Report. And passwords are easier for hackers to obtain than ever: one recently discovered file on the dark web contained 2.6 billion of them for sale. With proper training and motivation, your employees can prevent phishing attacks and password hacks. Education is essential, but done wrong it can backfire, overwhelming workers with information and making them too worried about personal consequences to report problems. As guidance, here are some of the most important things you can teach employees about passwords and phishing, along with tips for presenting the information in a way that will encourage them to comply.

Choosing a password

We know a strong password means one with uppercase and lowercase letters plus numbers and symbols, and that passwords should be changed every 90 days. Everybody in security knows this because that's what the U.S. National Institute of Standards and Technology (NIST) said to do back in 2003. But your employees defeated these rules. Because they're human and have trouble remembering numbers and symbols not used in speech, they use combinations like P@s$w0rd!2 and 1L0v3U*7. It didn't take hackers long to figure out that symbols were being substituted for the letters they looked like, and that people who had to change passwords every three months simply added sequential numbers to the end. What about employees using those password meters that tell you whether the password you're creating is weak, medium, or strong? Researchers at Carnegie Mellon University found these to be inaccurate.
As a result, NIST recently admitted failure and revised its guidelines. Instead of a jumble of numbers and symbols, it says, you're better off with a longer password: the revised guidelines call for systems to accept passwords of up to 64 characters or more. Though this may sound difficult, it's actually easier, because you can create a passphrase with spaces between words. Choose words that don't normally belong together, like "big dog small horse." This is good news for employees, who are much more likely to remember words they have chosen than a string of numbers and symbols. Even better news for employees: once you have a good password, you may never have to change it, NIST now says. Just don't use it for anything else. However—and this is critical—NIST also says you shouldn't rely on passwords alone except for low-risk applications. In other words, don't run your business on them. NIST is right about that. We view the phrase "strong password" as an oxymoron. Serious cybercriminals use computers to do their password guessing, and as a human you can't keep up with that kind of computing power. Today, the average laptop has many times the processing power of the NASA Space Shuttle's computers. That's why it's important to use multifactor authentication (MFA). However, passwords are still an important component of that system. So, encourage your employees to choose a good password, and then move on to MFA and single sign-on (SSO), which will make their lives even easier while making your business more secure.

Spotting a phishing attack

Do your employees know not to click links or attachments from unknown senders, and to think twice even when they come from an insider or someone they know? Do they know to hover their mouse over a link to see whether the address differs from the hyperlink text? To notice whether the "Reply" line information matches the "From" line? After years of security training, you might assume that they do. But if you conduct a company-wide phishing test, the results may surprise you.
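Returning to password strength for a moment: the passphrase advice can be quantified with simple entropy arithmetic. The sketch below assumes words are chosen at random from a 7,776-word diceware-style list; human-chosen phrases carry less entropy than this, so treat the numbers as upper bounds:

```python
import math

def charset_entropy_bits(length, charset_size):
    """Entropy of a random string: length * log2(charset size)."""
    return length * math.log2(charset_size)

def passphrase_entropy_bits(num_words, dictionary_size=7776):
    """Entropy of randomly chosen words (7776 = diceware list size)."""
    return num_words * math.log2(dictionary_size)

# 10 characters drawn at random from ~95 printable ASCII characters:
print(round(charset_entropy_bits(10, 95)))   # ~66 bits, hard to remember
# 4 random words like "big dog small horse":
print(round(passphrase_entropy_bits(4)))     # ~52 bits
# 6 random words beats the 10-character jumble and is far easier to remember:
print(round(passphrase_entropy_bits(6)))     # ~78 bits
```

The point is not the exact numbers but the trend: adding words scales entropy linearly while staying memorable, which is why longer passphrases beat short complex strings.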
Workers are busy and sometimes careless. Many believe that if they do click a bad link, the company's antivirus and antimalware software will save them. A phishing test provides an opportunity to educate workers about real problems the company faces and gives internal security personnel an up-to-date picture of employees' phishing awareness company-wide. Training is more compelling and less dry when it deals with the here and now. Incorporate the latest real-life examples in your industry to help employees see how serious cybercrime is, and how they can be the heroes who stop it.

Spotting a spear phishing or social engineering attack

While many hackers still get results from traditional phishing attacks, others have moved on to spear phishing and social engineering, also known as business email compromise. In these schemes, a particular individual is targeted and asked to fulfill a request, usually providing data or wiring money. Unlike traditional phishing attacks, social engineering emails don't usually contain malware. Instead, they rely on tricking the employee into acting on the request. Some attackers hack the email account of the individual they're impersonating, while others rely on "spoofing," or using an email address that is a letter or digit off. Others have figured out how to edit the "From" field to make the fake address identical to the real one; but if you click "Reply," the addresses differ. Spear phishers comb LinkedIn and other business sites to learn about your company, its personnel and suppliers, and your relationships with colleagues or partners, and then use the information to craft plausible requests, such as wiring money to pay an invoice or handing over employees' W-2 information. Hackers go after small and large businesses alike, and even sophisticated companies have been fooled. For example, Merck & Co. reported an estimated loss of millions from the NotPetya cyberattack, stemming from disruptions of its worldwide operations, including manufacturing, research and sales. Outside the professional sphere, social engineers target people based on their Facebook profiles. If a friend's account is hacked, others in the network may be scammed through fake promotion schemes or tricked into downloading a keystroke logger. Training is important, and so is showing employees how their own personal information can be used or exposed in these types of attacks. Employees who realize that their personal information, as well as your business data, is at stake are likely to pay more attention to training and become more vigilant. Every company's employees are different. Through security training and secure solutions, your employees should be able to recognize attacks and help protect the company and their own assets.
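One of the checks described above, comparing the visible "From" domain against where replies actually go, can be sketched as a simple heuristic. This is illustrative only; production mail filtering relies on mechanisms such as SPF, DKIM and DMARC rather than header string comparison:

```python
from email.utils import parseaddr

def extract_domain(header_value):
    """Pull the domain out of a 'Name <user@domain>' style header."""
    _, address = parseaddr(header_value)
    return address.rsplit("@", 1)[-1].lower() if "@" in address else ""

def looks_spoofed(from_header, reply_to_header):
    """Flag mail whose visible From domain differs from where replies go."""
    reply_domain = extract_domain(reply_to_header)
    return bool(reply_domain) and extract_domain(from_header) != reply_domain

print(looks_spoofed("CFO <cfo@example.com>", "CFO <cfo@example.com>"))   # False: domains match
print(looks_spoofed("CFO <cfo@example.com>", "CFO <cfo@examp1e.com>"))   # True: lookalike domain
```

The second case shows the digit-for-letter trick mentioned above: "examp1e.com" is one character off from the real domain, which is exactly what a hovering eye (or a small script like this) can catch.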
Introduction to DDoS Attack

DDoS (Distributed Denial of Service) attacks are a dangerous class of attack that greatly concerns the cyber security community today. A DDoS attack targets a server rather than a client computer, overloading it with a flood of connections. DDoS attacks have a high success rate when they are coordinated by "botnets", which multiply their power. DDoS is among the most dangerous cyber attacks in use, and for a DDoS attack to succeed, the attacker must control a network of computers, commonly known as a "zombie net" or botnet. For these computers to become "zombies", they must first be infected by a virus or a Trojan. Once that is accomplished, the attacker can direct these computers as a single network. The result of an attack by such a "zombie net" is an overload of a webpage's bandwidth, leading to server failure and unavailability. Related – Zero Day Attack

DDoS Attack Tools

A famous tool for performing a DDoS attack is known as "Slowloris". It was created in 2009 by "RSnake" and has a few distinctive characteristics. The first is the ability to bring down a server with only one computer (client); the second is the low network bandwidth required to do so, which helps the attacker remain unnoticed. Slowloris works by opening as many connections to the server as possible and keeping them alive. Another very common DDoS attack tool is the "Low Orbit Ion Cannon (LOIC)", an open-source application. It is a very user-friendly program that carries out attacks at the TCP and UDP protocol layers. In addition, an updated version of LOIC is also available.
It is known in the IT community as the "High Orbit Ion Cannon (HOIC)", and it uses the HTTP protocol instead. The software is designed for a minimum of 50 people working together in a coordinated attack effort.

Types of DDoS Attack

The main forms of DDoS attack are the following:
- HTTP flood attacks: The goal of such an attack is to exhaust the resources of the target. By overloading it with HTTP connections, an attack can be extremely powerful and hard for a simple server to handle.
- Protocol attacks: Protocol attacks disrupt the services of a server by occupying most of the capacity the server can supply.
- Volumetric attacks: These attacks exploit the mismatch between the target's bandwidth and that of the "zombie" network. A huge amount of data is sent to the target all at once, as in the case of a botnet.
- DNS flooding: A DDoS attack on DNS overloads the server by opening multiple connections and waiting for it to respond, exhausting its bandwidth.

Related – DOS vs DDOS

Mitigation of DDoS Attacks

Experts in computer security are investigating ways to prevent DDoS attacks and face a real challenge: distinguishing the connections opened by a DDoS attack from legitimate ones. A practical example is Black Friday, when webpages are legitimately flooded with traffic; telling that load apart from a DDoS attack is extremely difficult. Thankfully, cyber security practice now offers several mitigations for DDoS attacks:
- Traffic isolation: The traffic of a web page is throttled using a rate limiter, increasing security at some cost to user experience.
- WAF: A Web Application Firewall is a firewall that works as a reverse proxy.
This technology shields the server and the web page from malicious traffic.
- CDN: Using a CDN (Content Delivery Network) keeps the web page operating stably because the content is distributed across multiple servers, making it much harder for an attack to take the service down.
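The rate limiter mentioned under traffic isolation is commonly implemented as a token bucket, which permits short bursts while capping the sustained request rate. A minimal sketch (the rate and capacity values are illustrative):

```python
class TokenBucket:
    """Allow `rate` requests per second with bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now):
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=2, capacity=5)   # 2 requests/s, burst of 5
results = [bucket.allow(now=0.0) for _ in range(7)]
print(results)  # the first 5 pass (the burst), the flood after that is dropped
```

Applied per client address, this lets normal visitors through while a flood from any single source quickly runs out of tokens.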
The impact of ransomware attacks

Ransomware attacks are an unwelcome, increasingly concerning, but unavoidable fact of life today. Industry experts estimate that by 2025, 75 percent of organizations will have faced one or more attacks [1], incurring an average of $1.85 million in recovery costs [2] along with potentially grave damage such as interruption of essential services. Criminal groups are now offering ransomware as a service to distribute tools and tactics to a growing cadre of affiliates, and many attacks have expanded beyond attempts to encrypt data and render systems unusable. Sophisticated threat actors are now exfiltrating data and threatening to publish it, sell compromised credentials or delete an organization's data altogether. Ransomware has become a lucrative activity, with businesses and government agencies making $5.2 billion in bitcoin ransom payments since 2018 [3]. In a ransomware attack, the ideal scenario is to hold the attacker at bay while security and IT teams work to contain, analyze and recover from the attack. Invariably, however, the question of ransom payments surfaces. DXC advises organizations not to pay a ransom, not only because of the potential ethical and legal implications but also because payment does not guarantee a safe or complete recovery. It has been shown that organizations that pay ransoms restore, on average, only 65 percent of their data [4]. Worse, some organizations have been recompromised after failing to identify root causes and secure their networks. As one of the world's leading security services providers, DXC has helped many global companies and public sector agencies recover from ransomware attacks. We also work with customers to harden their environments against the latest tactics and techniques of threat actors.
In this ransomware survival guide, we share lessons we've learned and best practices we've developed to help organizations coordinate their response to an attack and make timely, strategic decisions through all phases of the response. This guide is a follow-up to DXC's Ransomware defense guide: Prepare for an attack, which focuses on proactive attack prevention. The defense guide includes a checklist to help organizations identify weaknesses in their environments that could be exploited, reduce attack surfaces and lower the likelihood of successful attacks.

Life cycle of a ransomware attack – five phases

Many adversaries plan for weeks or even months before launching the actual attack. For the attacked organization, the full recovery process can take up to a year. DXC has developed a model of a ransomware attack that includes both the phases of the attack and the response (Figure 1). The order of the phases shown in Figure 1 is aligned with the preventive approach described in our ransomware defense guide; however, in the event of a ransomware breach where detection has already occurred, the top priority should be disrupting the attack. Therefore, this guide focuses on the following activities, in five phases:
- Disrupt and stop the adversary
- Understand the adversary
- Remove the adversary's presence
- Recover from the attack and avoid recompromise
- Perform post-incident activities and lessons learned

While this report is organized in five phases, it's important to note that remediation activities generally need to be done in parallel. Figure 2 shows how these phases overlap. This overlap creates staffing challenges, as key personnel are often involved in multiple activities. The ability to prioritize effectively in the face of unknown elements is key to surviving a ransomware attack. The task of beginning the recovery and remediation process can be daunting.
To help address this need, DXC has created a comprehensive blueprint that groups the commonly required activities into 180 pages of work packages (Figure 3), based on our proprietary Cyber Reference Architecture and informed by our work with large global organizations. This matrix helps prioritize activities based on an organization's cyber-maturity level and capabilities. Although the structure shown in Figure 3 is similar to that in the National Institute of Standards and Technology's Computer Security Incident Handling Guide [5], it's shaped by our real-world experience. Clearly, dealing with the aftermath of a ransomware attack is a complex task. In addition to the security-centric tasks shown in Figure 3, the response must entail a wide range of tasks related to external and internal communications, access control systems, business solutions, operational technologies (OTs) and many more potential issues specific to impacted systems and infrastructure. Effective action results only from effective coordination and control, which requires strong governance and highly organized recovery efforts. Figure 4 illustrates typical roles and functions involved in a ransomware crisis.

Ransomware attack scenario

The remainder of this paper brings the model to life. In the following scenario, the attack begins on a Friday evening. The IT help desk begins receiving reports from around the world of a ransomware message screen and services that are no longer functioning correctly. The victim is a global company that has many of the capabilities required to respond effectively but lacks proper preparation. Its incident response teams include more than 250 people from all regions and all technical domains, from administrative staff to senior leaders, working 24x7 in shifts. Rather than suffering fast, widespread impact and encryption by the perpetrator, the strike's impact on this company is slower and less pervasive because security preparedness has limited the impact.
There is still a chance to protect some resources and further reduce the impact if the organization reacts efficiently. The encryption process has begun, but it is occurring slowly and has affected only a few random systems. Let’s discuss the key measures that other organizations can take to respond to a ransomware incident like this.

1 Disrupt and stop the adversary

The organization’s first priority is to ensure that the ransomware does not spread further within the environment. The remainder of this section deals with two major topics – gaining visibility and forensic analysis. It’s worth noting that at this point in the response, senior management should ensure that governance and control mechanisms are established to manage many activities, including tracking the costs incurred by the incident.

1.1 Containment activities and visibility

In this phase, the adversary’s ability to move around in the environment is blocked. The activities are informed by findings from digital forensics activities.

- Secure crown jewels and crucial systems (agree on priority systems and services)
- Block network traffic
- Isolate network segments
- Block malicious/suspicious domains and emails
- Reset passwords for and reduce the number of privileged accounts
- Restrict highly privileged accounts where required
- Isolate known compromised hosts
- Monitor for indicators of compromise
- Collect malware samples and obtain a custom DAT file from the antivirus vendor

Additionally, a key activity in this phase is to get the best possible overview of the current situation. Anything learned can help management and response teams make more informed decisions for the activities to come. Increased visibility into the situation helps teams define configurations and tooling in later phases to better protect and more quickly clean or rebuild the impacted environment.
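As an illustrative sketch only (not a DXC tool), the blocking steps in the containment checklist can be driven from a list of indicators of compromise; the domains, host names and resulting actions here are hypothetical:

```python
# Hypothetical indicators of compromise gathered by the forensics team.
iocs = {
    "domains": ["malicious-c2.example", "drop-site.example"],
    "hosts": ["ws-0231", "srv-files-02"],
}

def containment_actions(iocs):
    """Translate IoCs into an ordered list of containment steps for the responders."""
    actions = []
    for domain in iocs["domains"]:
        actions.append(f"DNS sinkhole / firewall block: {domain}")
    for host in iocs["hosts"]:
        actions.append(f"Isolate host from network: {host}")
    # Always close with account hygiene, regardless of the specific IoCs.
    actions.append("Reset passwords on privileged accounts")
    return actions

for step in containment_actions(iocs):
    print(step)
```

In practice these steps map onto firewall rules, EDR isolation commands and identity-platform resets; the point of the sketch is only that containment should be driven systematically from the IoC list rather than ad hoc.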
Forensic services will lead efforts to find indicators of compromise, which will increase visibility and help the remediation team disrupt the adversary. Most of these activities follow each other closely, with focus switching from one phase to the other, while a low level of constant activity is performed continuously in the background. Concurrently, organizations must re-enable impacted systems and services, analyze how the environment was compromised, improve visibility into the incident, and plan and configure improvements that block the threat actor’s ability to recompromise the environment.

Required information includes:

- Where did the incident start?
- Who detected the event and how?
- Which systems, users and services are impacted?
- What are the recovery priorities, based on business impact, with attention to crucial services and their dependencies on other services and infrastructure?
- Are people’s health or lives at risk? Has the threat been mitigated?
- Are any countermeasures already applied? If so, what measures were taken? When and where?
- Can system elements be separated, such as blocking rules on routers and firewalls and pulling virtual network cables?
- Are IT and OT systems separated? If not, can they be? What are the dependencies between them?
- Does remote access still work?
- Has any suspicious activity been observed, such as account activity associated with privileged identities or password resets?
- Are log files available from suspicious systems?

Another important question to be answered is: Are backups secured, isolated, accessible and/or recoverable? Backups should be offline and encrypted, with access restricted, and organizations should conduct regular training and exercises to ensure backups are recoverable in case of an incident. An air-gapped vault solution is one useful approach.
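One hedged way to answer the "are backups recoverable?" question is to verify restored files against checksums recorded at backup time. This sketch assumes a manifest of file names to SHA-256 hashes; the manifest format is invented for illustration:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large backups don't need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(manifest: dict[str, str], restore_dir: Path) -> list[str]:
    """Return the files whose restored contents are missing or do not match the manifest."""
    mismatches = []
    for name, expected in manifest.items():
        restored = restore_dir / name
        if not restored.exists() or sha256_of(restored) != expected:
            mismatches.append(name)
    return mismatches
```

Run against a test restore, an empty result means the backup set is usable; any names returned flag files that were corrupted, tampered with or never restored.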
After the initial fact-finding phase, the organization and supporting vendors need to come together as soon as possible to plan and push for further enhancement and execution of governance responsibilities. The response must be organized and communicated to all stakeholders, including legal, management, IT and staff. Additionally, management should declare a crisis and initiate the relevant crisis management and communication processes, including standing up the relevant response organization (Figure 4).

2 Understand your adversary

The main goals of forensic analysis are to understand how the malicious code came into the environment and to isolate the affected devices from the network as quickly as possible to prevent encryption of mapped network drives and other systems. Indicators of compromise collected during this analysis will help the teams articulate the necessary countermeasures and prioritize their remediation work based on the tactics, tools and procedures of the threat actor and the breadth of the compromise.

The following forensic activities are typically required:

- Recommendations on visibility tools, deployment and placement (the typical minimum threshold for visibility is 80 percent)
- Forensic analysis of compromised systems, accounts, services, files, storage, memory and command-and-control channels, malware type analysis, reverse engineering, etc.
- Malware analysis, a complex and difficult exercise made even more challenging by threat actors’ attempts to hide their persistence and encrypt malicious payloads
- Collecting indicators of compromise and defining countermeasures and containment tools such as firewall blocks and sinkhole configurations, custom antivirus signatures, file blocks, software updates, component deactivation, etc.
- Support during the creation of cleaning strategies and rebuild approaches
- Participation in calls with cyber insurance organizations and other third parties

3 Remove the adversary’s presence

After the immediate threat has been contained and encryption largely stopped, the work begins on removing the adversary from the environment. This phase often starts in parallel with disrupting the adversary, with the degree of overlap driven by a number of factors, including the organization’s overall risk tolerance — balancing the need to restore critical business processes with the risk of recompromise. If the organization has been prepared for a crisis situation such as a ransomware attack, it can respond significantly faster and more effectively. Key steps include time-consuming activities such as:

- Synchronization and approval with various groups
- Definition of “crown jewels” and essential business services
- Definition of high-value target systems such as domain controllers, ADFS, PKI, AADsync, etc.
- All cyber defense activities (referenced in the Ransomware defense guide)

Organizations often need to prioritize restoring core identity functions such as Active Directory and DNS so they can begin the wider recovery. Technical workstreams required for recovery will depend on what was impacted by the attack.

Governance and control continue to accelerate in this threat actor removal phase. Communications with customers and partners will become more intense, and questions are likely to move from “What is happening?” to “Why did it happen?” and “How can we trust you?” Typically, authorities are notified in this phase, and communications teams prepare internal and external announcements. The overall volume of activity and related interactions rises, with a corresponding demand for management input. A project management framework must be put in place at this point to address the complexity of technical tasks.
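Restoring core identity functions first is, at bottom, a dependency-ordering problem. As a sketch (the service names and dependency map are hypothetical), a topological sort yields a workable recovery order and also surfaces circular dependencies that must be broken manually:

```python
from graphlib import TopologicalSorter, CycleError

# Hypothetical "service -> services it depends on" map for a recovery plan.
dependencies = {
    "Active Directory": set(),
    "DNS": set(),
    "Database": {"Active Directory", "DNS"},
    "ERP": {"Database"},
    "File shares": {"Active Directory"},
}

try:
    # static_order() emits each service only after everything it depends on.
    order = list(TopologicalSorter(dependencies).static_order())
    print("Recovery order:", order)
except CycleError as err:
    # e.g. "Active Directory requires database, and database requires Active Directory"
    print("Circular dependency must be broken manually:", err.args[1])
```

On a cycle, `graphlib` raises `CycleError` with the offending chain, which is exactly the "search for and fix circular dependencies" check that appears later in this guide's question list.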
For most organizations, recovery will impose a significant load on subject matter experts, and success requires a structured approach to managing priorities. Business impact and continuity should be addressed with analysis of non-IT-related and IT-dependent service issues.

3.1 Cyber security insurance

A growing number of organizations have purchased cyber security insurance coverage to recover costs; therefore, forensic teams need to meet insurers’ requirements for documentation and execution of incident response activities. Often these tasks must be approved or at least synchronized according to insurance policy requirements. Third-party service providers such as DXC should participate in discussions with insurers to explain initial findings and how to respond to the incident.

4 Recover from the attack and avoid recompromise

The most comprehensive and secure approach to recovery is to assume widespread impact of the malware and rebuild from scratch. But often this is not practical or possible. The clean-and-rebuild program includes a general set of activities that are more or less the same for all servers and clients independent of their specific role, plus a set of role-specific activities that are unique to the particular server, role, software, solution or product. Isolate, but don’t shut down, suspicious machines, so you don’t lose any important forensic data that is stored only in memory.

Typical activities that are common across a broad range of products and server roles include:

- Rebuild and clean the hardware-adjacent software and components such as BIOS, drivers, etc. Make sure you have a trustworthy software library and verified software hashes.
- Rebuild and clean the memory, operating system and registry. Ensure that you have scanned the server with an antivirus and/or endpoint detection and response tool that can detect the malware found during analysis.
- Harden the server following vendor recommendations.
- Perform credential hygiene following recommended practices:
- For service accounts, allow “logon as a service” and “logon as a batch job.” Deny “logon locally” and “logon through remote desktop services,” and make sure service accounts are not allowed interactive logon.
- Define a secure management client (secure admin client) that is exclusively able to access high-security systems.

Apply known good configuration settings that have been defined for the incident, such as:

- Custom antivirus pattern with the latest version
- Endpoint detection and response tools and agent installed
- Patch level and latest patches installed
- Privileged accounts reset
- Negative antivirus and endpoint detection and response scan results
- Participation in monitoring
- No other signs of malicious activity

During all of these activities, it is essential to actively monitor for potential suspicious activities and continuously update the tooling with information from the forensic investigation. The most common activities to clean, rebuild and secure servers and clients, as well as related assets (such as accounts, applications, etc.), are listed in the work packages shown in Figure 2 (remediation work package prioritization).

After the general base hardening, role-specific hardening and clean-up/rebuild should begin. The organization must decide whether to rebuild or clean a system based on the findings, availability of backups, complexity of rebuild, the overall risk appetite and the specific situation. Business dynamics can have a significant influence on these decisions and parameters. For example, the need to close year-end financials may put significant pressure on decision makers and influence the approach and priorities.

One of the most important requirements of the remediation phase is having a good understanding of the Active Directory design, domain model, trusts and the way the environment is accessed.
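A host should be allowed back online only when it satisfies the known good configuration. As a sketch (the field names and required values are invented for illustration), that gate can be expressed as a simple compliance check:

```python
# Known good configuration agreed for this incident (illustrative values).
KNOWN_GOOD = {
    "av_pattern_version": 202401,
    "edr_agent_installed": True,
    "patch_level": 15,
    "privileged_accounts_reset": True,
    "last_scan_clean": True,
}

def failed_checks(host: dict) -> list[str]:
    """List the known-good settings a host does not yet satisfy."""
    failures = []
    for key, required in KNOWN_GOOD.items():
        value = host.get(key)
        if isinstance(required, bool):
            ok = value is True
        else:
            # Versions and patch levels may exceed the agreed minimum.
            ok = value is not None and value >= required
        if not ok:
            failures.append(key)
    return failures

host = {"av_pattern_version": 202402, "edr_agent_installed": True,
        "patch_level": 14, "privileged_accounts_reset": True, "last_scan_clean": True}
print(failed_checks(host))  # ['patch_level']
```

In a real remediation the same idea would be fed by EDR and patch-management inventory data; the output is the list of gates a host must clear before it rejoins the network.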
Key areas of concern are remote access entry points and the location, type and patch level of security boundary devices, as these access points are regularly compromised by threat actors. Rebuild-and-clean teams also should review all privileged groups and strictly apply the least-privilege principle, remove members from groups such as enterprise admins or schema admins, and apply permissions only for the time required for administrative tasks. Use multifactor authentication for least-privileged access and apply strict monitoring for relevant entities.

Planning needs to take into account the time required to implement improvements and see benefits. Sometimes it can be faster to implement a tactical change to achieve some benefit, knowing that strategically this approach will need to be enhanced or replaced. Ideally the approach should support both short-term tactical and long-term strategic requirements.

When rebuilding servers, the time required to restore data can vary from several hours to several days based on system complexity and the nature and amount of data. Backups often must be restarted because the data is unusable. A trained and practiced team can significantly improve recovery quality. When scheduling rebuild activities and recreating the replication and authentication infrastructure, organizations should prioritize subsidiaries and locations that have the highest user populations, perform the most essential services and otherwise have the greatest business impact. Systems should be allowed online again only if they are hardened (following the known good configuration) and included in the monitoring and endpoint detection and response tooling.

Other typical challenges and questions include:

- How much memory is required?
- Should the same IP addresses be used for rebuilt systems?
- What additional systems are required?
- Which services and teams need to be recovered first to begin reactivating the infrastructure?
- What information goes into and out of the various teams? What are the dependencies?
- What are the dependencies among systems, services and networks? (Search for and fix circular dependencies such as “Active Directory requires database, and database requires Active Directory.”)
- Are gold-standard images available? Determine what “good” looks like (antivirus pattern version, endpoint detection and response tooling, indicators of compromise addressed, patch level, etc.), so that new or cleaned systems are protected.
- Is network bandwidth capable of sending large images? If not, how should images be distributed? Will image recipients know what to do with them?
- Is the available storage sufficient?

If an organization decides to pay a ransom in spite of the known risks and recommendations to the contrary, an experienced third party should be engaged to lead the negotiations. The organization must also define how it will verify the functionality of the key material received from the threat actor and how the decryption process should take place. Bear in mind that it can take more than 24 hours to decrypt a single system.

4.1 Typical ransomware response timeline

Every incident response differs, even if the various remediation activities are similar. The complexity of the response effort and the time required to repair the situation depend on many factors, such as organization size, structure, type of malware, scope of the breach, network topology, industry type and cyber-maturity. In DXC’s experience, it may take 1 to 3 weeks to regain production services after a major, large-scale attack, with a month to a year afterward spent on further recovery and security improvement initiatives. During all phases, teams should assume the breach is still active. Large organizations faced with significant remediation will have to find creative ways to streamline or speed up various tasks.
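A rough capacity calculation makes the scale of the client-rebuild problem concrete; the throughput figures below are illustrative:

```python
import math

def rebuild_weeks(impacted_clients: int, clients_per_week: int) -> int:
    """Weeks needed to rebuild all impacted clients at a given weekly throughput."""
    return math.ceil(impacted_clients / clients_per_week)

print(rebuild_weeks(100_000, 3_000))   # 34 weeks at 3,000 clients/week
print(rebuild_weeks(100_000, 10_000))  # 10 weeks if throughput can be roughly tripled
```

The only real lever in this arithmetic is throughput, which is why streamlining (imaging at scale, self-service reinstallation, regional staging) dominates large recoveries.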
For example, if the organization has 100,000 impacted clients to rebuild and deploy, and it takes a week to repair 3,000 clients, a full rebuild could take 34 weeks – a timeline that would probably be unacceptable to senior management and could threaten business health or even survival. An example of a ransomware timeline for incident response and recovery is shown in Figure 5.

5 Post-incident activities and lessons learned

DXC highly recommends initiating further strategic security improvement projects once services have been recovered and systems are up and running again. (In the words of Winston Churchill, “Never let a good crisis go to waste.”) It’s best to seek approval for these strategic projects immediately following a crisis, while senior leaders’ attention is still focused on the impact of the attack.

Post-incident activities should include the following:

- Conduct a cyber-maturity review of the entire security environment to identify gaps and prioritize projects.
- Analyze and update security governance practices.
- Improve operational security reporting capabilities.
- Perform adversary disruption exercises.
- Improve containment and remediation planning and execution.

Consider conducting a business impact analysis and related assessments to get an overview of your organization’s needs, and then prioritize the projects that will yield the best overall improvement. By following the recommendations in this ransomware survival guide, organizations can mitigate the impact of a ransomware attack on the business, speed up recovery and reduce the chance of another compromise. However, experience shows that investing in prevention and protection before an incident occurs is significantly easier and less expensive than conducting recovery and clean-up under attack conditions. Even better, organizations can avoid potential business interruption and reputational damages.
In today’s threat environment, proper planning, preparation and governance are the keys to survival.

1 “Detect, Protect, Recover: How Modern Backup Applications Can Protect You from Ransomware,” Gartner, January 2021.
2 “The State of Ransomware 2021,” Sophos, April 2021.
3 “Ransomware Trends in Bank Secrecy Act Data,” U.S. Treasury, Financial Crimes Division, Oct. 15, 2021.
4 “The State of Ransomware 2021,” Sophos, April 2021.
5 “Computer Security Incident Handling Guide,” National Institute of Standards and Technology, U.S. Department of Commerce, August 2012.
Circulating currents induced in conductive materials by any type of varying electromagnetic field.

Effective Ground-Fault Current Path. An intentionally constructed, low-impedance electrically conductive path designed and intended to carry current under ground-fault conditions from the point of a ground fault on a wiring system to the electrical supply source, to facilitate the operation of the overcurrent protective device or ground-fault detectors on high-impedance grounded systems.

(1) A conductor through which current transfers to another material, or (2) a conducting material through which current either enters or leaves a device, utilization equipment, a circuit, or a power distribution system.

A general term used to describe the material, fittings, devices, appliances, luminaires (fixtures), apparatus, and the like used as a part of, or in connection with, an electrical installation.

Equipment Bonding Jumper. The connection between two or more portions of the equipment-grounding conductor.

The grounding connection, normally with a non-circuit conductor, of the non-current-carrying metal parts of a wiring installation, or of the electrical distribution or utilization equipment supplied.

(1) A bare, green-covered, or green-insulated conductor run with the supply-circuit conductors to connect the non-current-carrying metal parts of equipment, raceways, and other enclosures to the electrical-power distribution system neutral or other grounded conductor and/or the grounding-electrode conductor, either at the service equipment or at the source of a separately derived system; or (2) a conductive path(s) installed to connect the normally non-current-carrying metal parts of electrical distribution/utilization equipment together and to the electrical-power distribution system grounded conductor or to the grounding electrode conductor, or both.
Interconnection of all the noncurrent-carrying metal parts of a building or other structure or premise electrical-power distribution system with the grounded service neutral conductor at the point where the service neutral conductor is referenced to earth ground.

Efficiency. As used with electric motors, the NEMA full-load efficiency basically lists as a percentage the amount of electrical energy supplied to the electric motor that is converted into kinetic energy. The remaining power is a loss that is converted mostly into heat.

Electromotive Force (EMF). Normally abbreviated to the acronym EMF, this term can be described in two different ways: (1) the electrical pressure or force that pushes electrons through a circuit conductor; (2) the force that causes electrical current flow when there is a difference of potential between two points. The most common definition of circuit or source voltage, an electromotive force is not the same as potential difference, which is the voltage developed across a circuit element as a result of the current in the element.

Explosion proof (EXP). Normally abbreviated to the acronym EXP, this term describes motor enclosures designed to be used in areas that have hazardous atmospheres; the end bells and the cylindrical motor housing are totally enclosed and non-vented.

Electromagnetic (Magnetic) Field. Comprised of lines of force, this field is the region surrounding a magnet through which magnetic forces act. The magnetic field is most intense near the poles of the magnet, and as the distance from the magnet increases, these lines of force become weaker and weaker (because of the surrounding air’s high reluctance) until they are finally nonexistent.

Excitation Current. The primary current that supports the continual reversal of the magnetic-pole polarities (core losses, dissipated as heat) in a transformer. This current contributes to the energy expended due to eddy-current and hysteresis losses.
The expended electrical energy of this current is referred to as iron core losses in the transformer, which are normally dissipated as heat (I²R losses).

A classification for representing large and small numbers by using a 1-, 2-, or 3-digit number times a power of ten that is a multiple of 3.

A symbol written above and to the right of a mathematical expression to indicate the operation of raising to a power.
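The power-of-ten classification described above is conventionally called engineering notation. A minimal sketch of formatting a value that way (the helper function is hypothetical, not from the glossary):

```python
import math

def engineering_notation(value: float) -> str:
    """Format a number as m x 10^k where k is a multiple of 3 and 1 <= |m| < 1000."""
    if value == 0:
        return "0e0"
    exponent = math.floor(math.log10(abs(value)))
    # Round the exponent down to the nearest multiple of 3.
    eng_exponent = 3 * (exponent // 3)
    mantissa = value / 10 ** eng_exponent
    return f"{mantissa:g}e{eng_exponent}"

print(engineering_notation(4500))     # 4.5e3
print(engineering_notation(0.00072))  # 720e-6
```

The multiple-of-3 exponent lines up directly with SI prefixes (kilo for 10^3, micro for 10^-6, and so on), which is why this classification is the usual one in electrical work.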
Quantum computing may be years away, but Atos is preparing enterprises for the age of quantum with benchmarking to help them measure computational power.

According to Atos, 2020 has become an inflexion point for quantum computing: real-life application areas have been identified that cannot be solved in the classical world but may be solvable in the quantum world. Atos said that the new metric, called Q-Score, measures the actual performance of quantum processors when solving an optimisation problem. Elie Girard, CEO of Atos, describes Q-Score as a universal metric for quantum computing, which aims to measure and assess the quantum computing performance of any type of qubit. Quantum computers require a huge amount of error correction.

He said that the Q-Score metric will help companies improve their understanding of the type of problems where quantum computers perform well. “The class of applications where Q-Score will have the largest impact is in optimisation,” he said.

Unlike artificial intelligence (AI), where a data model may not be easy to explain, Girard believes humans can explain the problems a quantum computer is being used to solve. “I think it’s a combinatorial problem. It’s not about humans being unable to understand what the machine does,” he said, adding that while humans can solve the problem, the way they do this cannot scale.

The travelling salesman problem is a well-known maths problem: deciding the optimal route a salesman has to take between cities, visiting each one once. For Girard, a human could easily draw an optimal route for the salesman to visit a handful of cities, but this becomes impossible as the number of cities increases. A quantum computer can scale far more, which means it can figure out the optimal route between cities as more and more are added. “Your brain understands the problem, but when you increase the dimensions, the brain can’t calculate the outcomes,” he added.
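Girard's scaling point can be made concrete: for a round trip through n cities, the number of distinct routes grows as (n - 1)!/2, which quickly exceeds what any brute-force search could enumerate. A small illustration:

```python
import math

def round_trip_routes(n_cities: int) -> int:
    """Distinct round trips visiting each city exactly once (direction ignored)."""
    return math.factorial(n_cities - 1) // 2

for n in (5, 10, 20, 50):
    print(n, round_trip_routes(n))
```

Five cities give only 12 candidate routes, easily drawn by hand; fifty cities give more than 10^60, which is the combinatorial wall Girard is describing.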
Another seemingly simple example, said Girard, is where a university has to house 400 students but has only 100 rooms in its hall of residence, and there are 50 pairs of students who hate each other. According to Girard, the number of combinations of pairs of students who could be housed together is larger than the number of atoms in the universe.

Girard said that today’s quantum computers measure 15 on the Atos Q-Score, an improvement of 5 over last year’s quantum computers. By next year, he expects quantum computers to have a Q-Score of at least 20. To reach so-called quantum supremacy – where a quantum computer can solve a problem not possible on a supercomputer – the Q-Score would need to be around 60.

For Girard, this shows that while quantum computing is a number of years away, it is something business leaders need to build into their mid- to long-term strategic planning. “In spite of the fact that quantum technology may take years, we have identified applications which quantum computers can solve,” he said.

Atos is collaborating on the European project NExt ApplicationS of Quantum Computing, which aims to boost near-term quantum applications and demonstrate quantum superiority. Projects include identifying molecules for carbon dioxide capture with Total, work with EDF on optimising the load of electric cars on fast-charging stations, and redefining Monte Carlo techniques for near-term quantum computers.
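Girard's student-housing claim can be checked directly. Ignoring the room and dislike constraints, merely splitting 400 students into unordered pairs can be done in 400!/(200! · 2^200) ways, which does exceed the roughly 10^80 atoms estimated in the observable universe:

```python
import math

def pairings(n_students: int) -> int:
    """Number of ways to partition n students (n even) into unordered pairs."""
    half = n_students // 2
    return math.factorial(n_students) // (math.factorial(half) * 2 ** half)

count = pairings(400)
print(len(str(count)))  # number of decimal digits -- hundreds of orders of magnitude past 10**80
```

As a sanity check, four students can be paired in exactly three ways ({AB, CD}, {AC, BD}, {AD, BC}), which the formula reproduces.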
Billions of Bluetooth devices could be affected by BlueBorne malware, say researchers.

Security researchers at Armis Labs have discovered a number of Bluetooth vulnerabilities that could affect millions of IoT devices around the world. More specifically, any device that uses Bluetooth connectivity – from smartphones to medical devices – could become the target of an attack vector that the researchers have named ‘BlueBorne’. In a blog post, they said the malware can “spread through the air and attack devices via Bluetooth.”

“BlueBorne allows attackers to take control of devices, access corporate data and networks, penetrate secure ‘air-gapped’ networks, and spread malware laterally to adjacent devices,” said the researchers.

Eight vulnerabilities so far

Researchers said the attack does not require the targeted device to be paired to the attacker’s device, or even to be set to discoverable mode. Armis has so far identified eight zero-day vulnerabilities, which, it said, indicate the existence and potential of the attack vector. It also said that many more vulnerabilities await discovery in the various platforms using Bluetooth.

“These vulnerabilities are fully operational and can be successfully exploited, as demonstrated in our research. The BlueBorne attack vector can be used to conduct a large range of offences, including remote code execution as well as man-in-the-middle attacks,” they said.

The Armis researchers added that BlueBorne can potentially affect all devices with Bluetooth capabilities, estimated at over 8.2 billion devices today.

Spread through the air

Researchers said they were concerned about the attack because of the medium it operates in. “Unlike the majority of attacks today, which rely on the internet, a BlueBorne attack spreads through the air. This works similarly to the two less extensive vulnerabilities discovered recently in a Broadcom Wi-Fi chip by Project Zero and Exodus.
“The vulnerabilities found in Wi-Fi chips affect only the peripherals of the device, and require another step to take control of the device. With BlueBorne, attackers can gain full control right from the start. Moreover, Bluetooth offers a wider attack surface than Wi-Fi, almost entirely unexplored by the research community, and hence contains far more vulnerabilities.”

The company said that flaws that can spread over the air and between devices pose a tremendous threat to any organization or individual.

“Current security measures, including endpoint protection, mobile data management, firewalls, and network security solutions, are not designed to identify these types of attacks, and related vulnerabilities and exploits, as their main focus is to block attacks that can spread via IP connections,” said the researchers.

“New solutions are needed to address the new airborne attack vector, especially those that make air gapping irrelevant. Additionally, there will need to be more attention and research as new protocols are used by consumers and businesses alike.”
UPDATE: SHA-1, the 25-year-old hash function designed by the NSA and considered unsafe for most uses for the last 15 years, has now been “fully and practically broken” by a team that has developed a chosen-prefix collision for it. The development means that an attacker could essentially impersonate another person by creating a PGP key that’s identical to the victim’s key. The technique that the researchers developed is quite complex and required two months of computations on 900 individual GPUs, so it is by no means a layup for most adversaries.

SHA-1 has been phased out of use in most applications: none of the major browsers will accept certificates signed with SHA-1, and NIST deprecated it in 2011. But the new result shows that SHA-1 is no longer fit for use. The new collision is the work of researchers Gaetan Leurent and Thomas Peyrin, and while SHA-1 isn’t widely used anymore, the result has potential consequences for users of GnuPG and OpenSSL, among other applications.

“Our work show that SHA-1 is now fully and practically broken for use in digital signatures. GPU technology improvements and general computation cost decrease will quickly render our attack even cheaper, making it basically possible for any ill-intentioned attacker in the very near future,” the researchers said in their new paper, published this week. “SHA-1 usage has significantly decreased in the last years; in particular web browsers now reject certificates signed with SHA-1. However, SHA-1 signatures are still supported in a large number of applications. SHA-1 is the default hash function used for certifying PGP keys in the legacy branch of GnuPG (v 1.4), and those signatures were accepted by the modern branch of GnuPG (v 2.2) before we reported our results.”

“We note that classical collisions and chosen-prefix collisions do not threaten all usages of SHA-1.”
There are several potential scenarios in which the new collision could be implemented in an attack, the most likely of which is someone impersonating another user by creating an identical PGP key. But the researchers said there are other possibilities as well.

“Another important scenario is the handshake signature in TLS and SSH, which were vulnerable to the SLOTH attack when MD5 was supported, and could now be attacked in the same way when SHA-1 is supported. However, the attack is still far from practical in this setting because we need to compute the collision in a few minutes at most,” Leurent said in an email. “There could also be attacks similar to the MD5 rogue CA or the attack used by the Flame malware to break Windows updates, but that only works if someone is still signing certificates with SHA-1 and using predictable serial numbers. We are not aware of a CA doing this, but it may still exist somewhere.”

The chosen-prefix collision is distinct from the SHA-1 collision developed in 2017 by a team of researchers from Google and the Cryptology Group at Centrum Wiskunde and Informatica in the Netherlands. That work showed that it was possible to create two distinct files that would have the same SHA-1 digest, and it resulted in the browser manufacturers deprecating SHA-1. In the new research, Leurent and Peyrin were able to show that SHA-1 should not be used for digital signatures, either.

“Using our SHA-1 chosen-prefix collision, we have created two PGP keys with different UserIDs and colliding certificates: key B is a legitimate key for Bob (to be signed by the Web of Trust), but the signature can be transferred to key A which is a forged key with Alice’s ID. The signature will still be valid because of the collision, but Bob controls key A with the name of Alice, and signed by a third party. Therefore, he can impersonate Alice and sign any document in her name,” the researchers said.
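To see why a chosen-prefix collision lets a signature transfer between keys, note that certifying a key in practice means signing a digest of the key material rather than the key itself. The sketch below is a simplification (not GnuPG's actual code path): any verifier that compares SHA-1 digests cannot distinguish two colliding inputs.

```python
import hashlib

def certify(key_material: bytes) -> bytes:
    """A signer effectively commits only to the digest of the key material."""
    return hashlib.sha1(key_material).digest()  # this digest is what gets signed

def signature_transfers(key_a: bytes, key_b: bytes) -> bool:
    """True when a signature over key_a would be equally valid for key_b."""
    return certify(key_a) == certify(key_b)

# Standard test vector: SHA-1("abc")
assert hashlib.sha1(b"abc").hexdigest() == "a9993e364706816aba3e25717850c26c9cd0d89d"
# With a chosen-prefix collision, key_a != key_b yet signature_transfers(...) is True,
# which is exactly the Alice/Bob forgery described above.
```

The fix is equally mechanical: certify with a collision-resistant function (SHA-256 or stronger), which is what GnuPG's countermeasure and the proposed OpenSSL change amount to.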
For many individual users, the new collision likely won’t have any practical effect, as the major browsers have already moved on from SHA-1, as have the major certificate authorities. However, the research does have implications for PGP users, because PGP keys could be forged under some circumstances. And any SHA-1 certificates with predictable serial numbers would also be vulnerable. "Currently, the concrete impact is mostly for people who use the PGP web of trust. If they trust SHA-1 signatures, an attacker could impersonate their contacts," Leurent said. "However, if there are still some automated systems (such as system updates) accepting and issuing SHA-1 certificates (either PGP certificates, or X.509 certificates issued with predictable serial numbers), this could become a more dangerous attack vector." Leurent and Peyrin notified the developers of GnuPG and OpenSSL of their findings; GnuPG has already implemented a countermeasure, while OpenSSL’s developers are considering removing support for SHA-1. This story was updated on Jan. 8 to add comments from Leurent.
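The practical defense for developers is to stop comparing SHA-1 digests and use a modern function instead. The sketch below is not from the article; the key material and helper name are invented for illustration. It uses Python's standard `hashlib` to show the core idea: a collision means two *different* inputs share a digest, so any check built on SHA-1 digests can be fooled, while no such collisions are known for SHA-256.

```python
import hashlib

def fingerprint(data: bytes, algo: str = "sha256") -> str:
    """Hex digest of `data`; defaults to SHA-256 rather than the broken SHA-1."""
    return hashlib.new(algo, data).hexdigest()

key_a = b"alice's public key material"   # hypothetical key bytes
key_b = b"bob's public key material"

# For honest, distinct inputs the digests differ. A chosen-prefix collision
# lets an attacker craft two different inputs whose SHA-1 digests match,
# which defeats any signature scheme that hashes with SHA-1.
assert fingerprint(key_a, "sha1") != fingerprint(key_b, "sha1")
print(fingerprint(key_a))  # the SHA-256 fingerprint, the safer comparison
```

The same one-line change (`"sha1"` to `"sha256"`) is roughly what affected applications have to make, along with rejecting old SHA-1 signatures.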
Liquid immersion cooling – Path to net-zero emission

In matters of environment, the United Kingdom is leading the way, becoming the first major economy to pass laws that will contribute greatly to reducing global warming. The UK has an ambitious plan to reduce greenhouse gas emissions to net zero by 2050, and other countries have followed suit with promises to significantly reduce carbon emissions and ultimately reach net zero. For those unfamiliar with the buzzword “net zero”, it means balancing emission and absorption of greenhouse gases – that is, emitting and absorbing an equivalent amount. With the world moving towards digitalisation, data centres will be an important component of this development. However, they pose a big threat of derailing countries in their quest to live up to their promises of substantially reducing carbon emissions and ultimately balancing emission and absorption by 2050. Luckily, we can have our cake and eat it too: continue the digitalisation quest while efficiently reducing the amount of carbon emitted by data centres and related infrastructure. One may ask, how do we achieve that? The answer is the adoption of a cooling technology that is efficient and cost-effective, and liquid immersion cooling fits this description perfectly. So how will liquid immersion cooling help the UK and other environmentally conscious economies achieve net-zero emission?

Innovative cooling technology

The answer lies in the efficiency of this innovative cooling technology. It reduces the amount of energy used to cool data centre hardware, including equipment such as GPUs and CPUs. Given that on average only ten per cent of global primary energy comes from renewable sources, it means a lot when we reduce the amount we consume at any given time.
Reduced energy consumption

Liquid immersion cooling in a data centre involves submerging IT components in a dielectric liquid. It is important to note that this liquid will not damage the IT components – it is a special fluid that conducts heat but not electricity. Traditional cooling systems' dependence on direct air cooling makes them inefficient, as they tend to consume a relatively high amount of energy, impacting both the cost of running data centres and the environment. Immersed in dielectric liquid, a heated IT component is cooled with far less energy. And because no fans are used in this cooling technology, the power consumed in the process is reduced further.

Answer to speeding the path to net-zero emission

With the adoption of this advanced cooling system, less power will be consumed, reducing power needs – especially the kind derived from fossil fuels. This kills two birds with one stone: the environment is conserved, and the money that would have gone towards power bills can be channelled into other activities that promote green and sustainable production. Liquid immersion cooling is the answer to speeding up digitalisation while ensuring that economies achieve net-zero emission faster than envisaged.
What is Redis and What Does It Do

Have you ever wondered how popular web applications achieve mind-blowing performance? Think about it. Some websites get more than a million views a day with thousands of active users at any given time. Even with multiple load-balanced servers, that's enough traffic to bring any web architecture to its knees. There's always a weak point in the chain, and that weak point is often database response time. Using a service like Redis to cache database queries can fix those response-time issues, though. Let's take a closer look at what Redis is and why you should use it in your web application.

What is Redis?

At its core, Redis is a caching service for databases. Some people might say Redis is a database itself – and it kind of is. But what makes Redis different from an actual database? Redis lives in system memory. System memory access is designed to be blazingly fast; in fact, the only component faster than RAM in a computer is the cache built into the CPU. In contrast to Redis, databases live on system storage, otherwise known as a hard drive. These drives can be mechanical or solid-state and come in a variety of configurations, but system storage is traditionally much, much slower than system memory. With specialized hardware under certain configurations, system storage can come close to matching the throughput of RAM, but that hardware can cost tens of thousands of dollars; RAM, on the other hand, only costs a few hundred. This is why calling a database for information is so much slower than making a call to a Redis cache. Redis stores information as key-value pairs, too. Traditional databases typically come in one of two flavors: relational or NoSQL. A good example of a relational database is something like MySQL, where information is stored in rows and columns of organized data.
MongoDB is a popular example of a NoSQL database, where information is stored as documents that may not necessarily correspond to other information stored in that database. Redis is closer to a MongoDB database built using Mongoose, but Redis doesn't have a concept of object IDs; it simply uses keys with corresponding values. We mentioned that Redis lives in system memory. That makes the information stored in a Redis cache instance volatile: if anything happens to that system memory, or if the computer running Redis is powered off, that information is lost. Because databases live on system storage, it is far less likely that information will be lost in the event of a power failure. For this reason, important information stored in a Redis cache needs to be properly persisted to a database, or another storage mechanism, for non-volatility.

An Overview of Redis [VIDEO]

In this video, Trevor Sullivan covers the Redis cache service: what it is, what it does, and how it helps. It's an open source tool that runs as a service in the background and lets you store data in memory for high-performance data retrieval and storage. That's the technical explanation, but watch on to hear how useful it is for all manner of use cases.

What is Redis Used for?

At its core, Redis is a caching service for databases. Databases can take a long time to respond with data. In a good scenario, a database can easily take 300+ milliseconds to respond – and that isn't counting the round-trip time it takes for a remote system to make the call, travel to a web service, have that web service process the request, and then respond with information from the database. Redis, on the other hand, can respond with information in as little as 20 milliseconds. The difference between 20 and 300 milliseconds might not seem like a lot of time, but that example is only a best-case scenario.
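The key-value model and the volatility described above can be sketched in a few lines of plain Python. This is a toy stand-in, not Redis itself: the dict plays the role of Redis's in-memory store (with the real redis-py client, `r.set(...)` and `r.get(...)` are the equivalent calls), and the `power_failure` method simply illustrates what happens to unsynced data when memory is lost.

```python
class VolatileStore:
    """Toy stand-in for Redis: key-value pairs held only in memory."""

    def __init__(self):
        self._data = {}  # lives in RAM, like a Redis instance

    def set(self, key, value):
        self._data[key] = value

    def get(self, key, default=None):
        return self._data.get(key, default)

    def power_failure(self):
        # Anything not synced to durable storage is gone.
        self._data.clear()

cache = VolatileStore()
cache.set("user:42:name", "Ada")
print(cache.get("user:42:name"))  # -> Ada
cache.power_failure()
print(cache.get("user:42:name"))  # -> None
```

This is exactly why the article stresses syncing important cached data to a database: the dict (like Redis's memory) offers speed, not durability.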
During peak traffic times, large web applications can take seconds to respond with information. That's not a good user experience. Likewise, that kind of traffic can impose an undue burden on a database. Instead of calling the database directly, a web application can first check a Redis cache instance to see if the information it needs is available there. This can drastically reduce the time it takes to respond to a client and reduce the workload on the database. In many cases, databases are not located on the same system as the web application, which means every database call also imposes load on the network; utilizing a Redis cache instance can reduce network overhead as well. So, at its core, Redis is used to make applications perform faster by caching data in a volatile storage mechanism (system memory) that is much faster than traditional storage mechanisms (hard drives).

How to Use Redis with Docker, Mac, Windows

There are multiple ways to install and use Redis. Depending on whether you are using a production or test system, and the type of system you are using, the installation steps will differ. The easiest solution is a Docker container: Redis has a container pre-made and ready to go, so all you need is Docker installed on your computer, and running the Redis container handles everything else. You can find more information about the Redis Docker container, including installation and usage instructions, on the Docker Hub. If you are using macOS or Linux, download and install Redis with your favorite package manager. For macOS, simply use Homebrew. For Linux, the package manager depends on which distribution you are using – for example, on a Debian-based OS, use APT.
Likewise, if you are using a Fedora or Red Hat distribution, use DNF or YUM. The Redis package in those package managers is simply called 'redis'. The Redis website also provides a tarball that can be used to install Redis from source on macOS, Linux, and Unix. Installing Redis on Windows is more complicated: there isn't a native Redis build for Windows, so Redis needs to be installed via the Windows Subsystem for Linux (WSL). Installing WSL takes a bit of time, but it is not difficult. First, the Windows Subsystem for Linux feature needs to be enabled in the Windows Features list under Programs and Features in the Control Panel. After the WSL feature is enabled, a Linux distribution needs to be installed from the Microsoft Store; you have the option of various distributions like Fedora, Ubuntu, SUSE, and more. Once your preferred version of Linux is installed, launch it from the Windows Start Menu, then use its native package manager to install Redis as you normally would on Linux. Redis is also available as a cloud service. The easiest route is from Redis itself: Redis Labs offers a free instance with Redis Cloud Essentials. Major cloud vendors like AWS, Azure, and Google offer Redis services, too. Using Redis from a cloud provider is the best path for a production service, as cloud providers will automatically handle moving data between Redis and its corresponding database while configuring high-availability instances. If you install and run your own Redis instance, you will need to handle these tasks yourself.

How Do You Implement Redis?

Because a Redis cache instance is volatile, information needs to be moved between the Redis cache and its corresponding database. This can be done in one of two ways. First, a function can be built that intercepts the web application's database calls.
That function checks the Redis cache instance for the requested data first. If the data is not in the cache, the function calls the database for it, responds to the client with the result, and then updates the Redis cache instance with that data. Likewise, the function may write data to the Redis cache instance as clients request writes to the database. Another option is to use a Redis worker. A Redis worker automatically moves data back and forth between the database and the cache: it monitors the database for changes and, once it detects one, takes the appropriate actions to keep the Redis cache instance and the database in sync. This is another reason that using a cloud service provider for Redis cache is the preferable method for production systems – cloud service providers will create and manage that Redis worker for you. We covered a lot of information in this article, so let's recap it quickly. Redis is a cache service typically used in conjunction with databases, most often for web applications. Data in Redis is volatile: it lives in system memory, so it must be synced with a longer-term storage mechanism if it should be saved. Redis can easily be installed on macOS and Linux using your favorite package manager, must be installed within the Windows Subsystem for Linux on Windows, and is also available through various cloud vendors. Finally, you can implement Redis either with a function that checks Redis for data first and, if it does not exist, fetches it from the database, or with a Redis worker that monitors the database for changes and syncs those changes between the database and the Redis cache instance.
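The first approach described above is commonly called the cache-aside pattern. Here is a minimal, self-contained sketch of it: the `database` dict and `slow_db_read` function are stand-ins for a real database (the 50 ms sleep simulates query latency), and the plain `cache` dict stands in for a Redis instance, so the whole thing can run without any servers.

```python
import time

DB_LATENCY_S = 0.05  # pretend the database takes 50 ms per query

database = {"user:1": "Grace Hopper"}   # stand-in for MySQL/MongoDB
cache = {}                              # stand-in for a Redis instance

def slow_db_read(key):
    time.sleep(DB_LATENCY_S)            # simulated query latency
    return database.get(key)

def cached_read(key):
    """Cache-aside: try the cache first, fall back to the DB, then populate."""
    if key in cache:
        return cache[key]               # fast path: served from memory
    value = slow_db_read(key)           # slow path: hits the database
    cache[key] = value                  # warm the cache for next time
    return value

first = cached_read("user:1")           # slow: cache miss, database read
second = cached_read("user:1")          # fast: cache hit
```

With a real Redis instance, the dict lookups would be replaced by redis-py's `get`/`set` calls, and a TTL would normally be set so stale entries expire.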
Researchers in MIT's Media Lab are 3D printing hair, which for many of us is quite the promising development. The group achieved this without using CAD software; instead they built a new software platform they call "Cilllia". Cilllia lets the user define the angle, thickness, density, and height of thousands of hairs in just a few minutes. In traditional CAD, this could take several hours to process and would likely crash the program. Using the new software, the researchers designed arrays of hairs about 50 microns in size, which is about the width of a human hair. Could the technology one day print wigs and hair extensions? Maybe, but that's not the end goal – and it would likely be incredibly expensive. Instead, they're exploring how 3D-printed hair could be used for sensing, adhesion, and actuation. To prove it could work as an adhesive, the team printed arrays that acted like Velcro. To demonstrate sensing for interactive toys, the team inserted an LED light into a fuzzy rabbit with 3D-printed hair, along with a small microphone that senses vibrations. With this setup, the bunny turns green when you pet it the right way, and red when you don't. This is IEN Now.
While organizations use applied data science and machine learning to keep their systems secure and to gather security data, hackers are using more sophisticated techniques like artificial intelligence to perform cyberattacks. Copyright by blog.eccouncil.org This is why most modern cybersecurity tools, from antivirus software to comprehensive proactive tactics, now use data science and machine learning. Data science can work hand in hand with machine learning techniques by searching through numerous patterns to help determine which vulnerabilities put an organization at risk. In this article, we will break down everything you need to know about data science and data security. You will also learn which AI algorithms are suitable for data science, how to form your data analysis strategies, and how these tie into cybersecurity.

What Is Data Science?

Data science is an expansive field that touches numerous disciplines. In products, it is used for forecasting, prediction, classification, anomaly detection, pattern finding, and statistical analysis.

What Is Machine Learning?

Machine learning is a branch of artificial intelligence capable of learning from the data provided, or from past experience, to help make informed choices. In other words, machine learning systems continually improve the accuracy of their results as they gather and analyze more data.

What Is Applied Data Science?

We are living in a world with abundant data, but we cannot learn anything from raw numbers alone. With data science techniques, both machine learning tools and humans can discover and understand findings in data and then put those findings to practical use. This is why most effective machine learning tools now use applied data science.

What Is CSDS?

Cybersecurity data science (CSDS) is an emerging profession that uses data science to detect, prevent, and mitigate cybersecurity threats.
Cybersecurity data science can also be regarded as the practice of using data science to keep digital devices, services, systems, and software safe from cyberattacks as well as operational, technical, economic, social, and political threats. Read more: blog.eccouncil.org
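To make the anomaly-detection theme above concrete, here is a deliberately tiny sketch (not from the article; the data and threshold are made up for illustration) that flags statistical outliers in a stream of security telemetry. A real CSDS pipeline would use far richer features and models, but the core idea – learn the normal pattern, flag deviations – is the same.

```python
from statistics import mean, stdev

def zscore_anomalies(samples, threshold=2.5):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(samples), stdev(samples)
    return [x for x in samples if abs(x - mu) > threshold * sigma]

# Hypothetical hourly failed-login counts; the spike suggests a brute-force attempt.
failed_logins = [4, 6, 5, 7, 5, 6, 4, 5, 6, 5, 90]
print(zscore_anomalies(failed_logins))  # -> [90]
```

The threshold of 2.5 standard deviations is an arbitrary illustrative choice; in practice it would be tuned against the false-positive rate an analyst can tolerate.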
Where I personally have a problem remembering names and birthdays, computers have a hard time "forgetting" things – even when we tell them to do so. If you ever unintentionally deleted a file, you may have been able to retrieve it from the Recycle Bin. Or, if it was past that stage and the file was really important, you may have used System Restore. You may even have looked for recovery software. But what's actually happening when you delete and recover those files? And are they ever truly gone? We examine the steps a forensic analyst would use both to recover deleted files and to permanently delete those they want gone forever.

Deleting a file in Windows

When you send a file to the Recycle Bin, nothing happens to the file itself. The only change is in a pointer record that showed the location of the file before you deleted it; this pointer now shows the file is in the Recycle Bin. Taking the removal one step further – by emptying the Recycle Bin or using Shift + Delete – deletes the pointer record itself. Windows will no longer "know" the physical location of the file, and the physical space it occupies on the hard disk is now free and ready to be used for a different purpose. But it's not immediately overwritten. This is by design: the data that was in the file stays in that same location until the operating system uses that physical location for something else.

How does that help us?

Let's for the sake of this article assume that System Restore or another backup method was not enabled, because if it were, that would be the second method to try to get those important files back. The problem is that with System Restore, we sometimes dread the other changes that may be undone in the process of using it – especially if the last usable restore point is an old one. Knowing how the deletion procedure in Windows works can help us if and when we want to recover important deleted files.
You should realize that every change you make after deleting that file diminishes the chance of getting it back in one piece. Defragmenting, for example, rearranges a lot of the physical locations files are in and can overwrite the "freed-up" space. The mere act of looking for recovery software, downloading it, and installing it may be the very thing that renders the file unrecoverable. This is where forensic analysts come into play. While most home users wouldn't go much further than the steps above, forensic analysts will take the drive they want to examine out of operation, attach it as a secondary drive on another system, and create an exact snapshot image of all the data contained on it. This method allows them to examine the data without making any changes to the drive. And if they make changes to the copy, there is no harm done, as they can make a new copy from the original.

What if I really want my files to be deleted?

Deleting a file may make space for other files, but is its content ever truly 100 percent gone? For example, is there an effective way of deleting the contents of a hard drive when you sell your computer? Well, the short answer is "no" – there is no method of deletion I would trust 100 percent. There are professional recovery tools that claim they will be able to recover files even when the drive has been re-partitioned and re-formatted. What a forensic analyst might do is overwrite the whole hard disk, filling every addressable block with zeroes (ASCII NUL bytes). There are secure drive-erase utilities for this purpose that can reach a high efficiency rate when used several times on the same drive. At that point, there is no way of recovering the overwritten data. There is also software that can erase specific files and folders by overwriting them.
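The overwriting idea can be sketched in a few lines of Python. This is a simplified illustration of what secure-erase utilities do, not a forensic-grade tool: on SSDs and journaling or copy-on-write filesystems the OS may write the zeros elsewhere, which is exactly why full-disk wipes or encryption are preferred in practice.

```python
import os
import tempfile

def overwrite_and_delete(path, passes=3):
    """Overwrite a file's bytes with zeros, then unlink it.

    Simplified sketch of a secure-erase pass; not a guarantee on SSDs
    or copy-on-write filesystems, where new writes may land elsewhere.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(b"\x00" * size)    # replace the contents with NUL bytes
            f.flush()
            os.fsync(f.fileno())       # push the zeros toward the device
    os.remove(path)                    # now delete the (zeroed) file

# Demo on a throwaway temp file
fd, demo = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"secret data")
overwrite_and_delete(demo)
print(os.path.exists(demo))  # -> False
```

Note the multiple passes mirror the article's observation that erase utilities are more effective "when used several times on the same drive."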
Take note that this procedure could turn out to be useless if you have any type of automatic backup system in place – though keeping backups is recommended, given the current number of ransomware threats out there. And if you want to keep using a drive but don't want anyone else to have access to your important files, we would advise you to use encryption. You can encrypt specific data or the whole drive to keep uninvited eyes from opening them. There are important differences between deleting, erasing, and overwriting. When it comes to recovering and deleting files, think like a forensic analyst: if you want to be able to recover a deleted file, the method you use will be very different from wanting to make a file virtually disappear. Choose wisely and you'll better protect your data in the long run.
Humans Can Detect a Single Photon Incident on the Cornea (TheNextWeb) Alipasha Vaziri, a physicist at the Rockefeller University in New York City who both conducted and participated in the experiments, has determined that humans can detect a single photon incident on the cornea with a probability significantly above chance. This was achieved by combining a psychophysics procedure with a quantum light source that can generate single-photon states of light. Vaziri reported, “The most amazing thing is that it’s not like seeing light. It’s almost a feeling, at the threshold of imagination.” Theoretically, if you were to entangle that photon with another and then shine it at a person’s eye, they should not be able to perceive a difference. Not being able to detect entanglement with the naked eye doesn’t actually confirm that quantum mechanics is correct. But as evidence continues to pile up for it, quantum mechanics remains the prevailing working theory of how our universe works. This is because, for quantum mechanics to work as a theory, it has to explain everything that happens – including why we don’t usually perceive quantum phenomena. We trust that quantum phenomena happen, and that we don’t have to work too hard to observe them.
Opportunity risk occurs whenever there’s a possibility that a better opportunity may become available after you have committed to an irreversible decision. We all experience opportunity risk at its most basic level several times a week. For example, imagine you have just enough cash on you for lunch in a new town and you’re trying to decide between two restaurants you’ve never tried. What if you spend your time and money on the first option and it’s terrible? Or maybe it’s not terrible, but the second option is just so much better? Opportunity risk is what you’re contemplating as you stand there, running out of time between meetings, with your stomach growling. In the context of financial business processes, opportunity risk is most often expressed as the time value of money. In other words, are you able to use cash in its most economically efficient way? Uses of funds that lead to a loss of economic value include: - Time value losses due to delays in invoicing, order processing, collections, claim processing, investment of funds, etc. The consequences of these delays could include some subsidiaries borrowing while others are investing. - Transaction costs due to inappropriate or inefficient management of cash flows – for example, the need to borrow high-cost funds or sell securities at a loss because of a failure to match the maturities of short-term investments to settlement dates on operational or financial obligations. - Indifference to yield-enhancement strategies and ineffective yield-curve management. Earnings exposure may exist when funds are invested in a manner that does not generate sufficient returns to cover costs, profits, and risk. Investment losses may result from the failure to obtain a return that compensates for the degree of risk incurred. Loss of value may also occur when cash moves through the financial system and/or is transferred across borders.
Economists use the term "opportunity cost" to describe the invisible loss that comes from missing out on a chance to generate a higher return. As expressed above, the biggest problem businesses face is time value loss. For instance, assume a company invested its money in bonds at a 6% interest rate. If it was found later that the money could have been invested in mutual funds with a 10% return, then the opportunity cost would be 4%.

BUSINESS RISKS RELATED TO OPPORTUNITY COST

Failure to manage opportunity cost risk can have the following impacts: - Loss of foregone economic funds - Time value losses - High or additional transaction costs - Earnings exposure - Declining sales or profits - Erosion of competitive position over time - Exposure to income loss - Missed business opportunities

ROOT CAUSES OF OPPORTUNITY COST RISK

Sourcing the root causes requires an analysis of the key business processes that influence the cash-to-cash cycle. Analysis of business processes can be comprehensive or selective, depending on management's view of where the risks and opportunities for improvement are. When analyzing business processes, look for areas where cash flow is delayed, funds are left idle, or cash transfer and handling costs are excessive. Take a look at the processes for order entry and processing, invoicing, receivables management, and collections. Are these all being handled efficiently, or is there lag that causes funds to sit idle or even fail to be collected in a timely fashion? For more information on the root causes and important questions to consider, check out the Opportunity Cost Risk Key Performance Indicators benchmarking tool and these related resources on KnowledgeLeader:
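The 6%-versus-10% example above can be made concrete with a short calculation. The figures below are illustrative (a hypothetical $1M principal over one year) and the helper name is invented for this sketch; the formula is just compounded future value under each rate.

```python
def opportunity_cost(principal, achieved_rate, best_rate, years=1):
    """Foregone value from choosing a lower-return use of funds."""
    achieved = principal * (1 + achieved_rate) ** years
    best = principal * (1 + best_rate) ** years
    return best - achieved

# Illustrative: $1M in bonds at 6% when mutual funds returned 10%.
lost = opportunity_cost(1_000_000, 0.06, 0.10)
print(round(lost))  # -> 40000, the value foregone in year one
```

Over multiple years the gap compounds, which is why time value losses from delayed invoicing or idle funds grow faster than a simple rate difference suggests.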
The Open Web Application Security Project (OWASP) has listed Insufficient Cryptography as the fifth most exploited risk in mobile applications. Insufficient Cryptography – the insecure usage of cryptography – is a common vulnerability in mobile apps that leverage encryption. Due to weak encryption algorithms or flaws within the encryption process, an attacker may be able to return encrypted code or sensitive data to its original unencrypted form. Exploitation of broken cryptography has both technical and business impacts. While the technical impact includes unauthorized access to and retrieval of sensitive information from the mobile device, the business impacts may include privacy violations, information theft, code theft, intellectual property theft, and reputational damage.

How to Assess Vulnerability to Insufficient Cryptography

There are two ways in which broken cryptography can be exposed within mobile apps: - The encryption/decryption process used by the mobile app is fundamentally flawed and can be exploited by the adversary to decrypt sensitive data. - The encryption/decryption algorithm employed by the mobile app is weakly built and can be directly decrypted by the adversary. The following scenarios of encryption misuse can result in such attacks:

Reliance Upon Built-In Code Encryption Processes

Heavy reliance on built-in code encryption can be bypassed by an adversary. In iOS applications, the app loader decrypts the app in memory and proceeds to execute the code after its signature has been verified by iOS.
This feature, in theory, prevents an attacker from conducting binary attacks against an iOS mobile app. On a jailbroken device, however, this protection can be bypassed: an adversary can download the encrypted app, use freely available tools like Clutch or GDB, and take a snapshot of the decrypted app once the app loader loads it into memory. The adversary can then use tools like IDA Pro or Hopper to easily perform static/dynamic analysis of the app and conduct further binary attacks.

Poor Key Management Processes

The best algorithms don't matter if you mishandle your keys. Many mistake implementing their own protocol for employing the correct encryption algorithm. Examples of the problems here include: - Including the keys in the same attacker-readable directory as the encrypted content; - Making the keys otherwise available to the attacker; and - Hardcoding keys within the binary, where they may be intercepted via binary attacks.

Creation and Use of Custom Encryption Protocols

Mishandling encryption becomes easier when an app uses its own encryption algorithms or protocols. It is therefore imperative that developers use modern algorithms that are accepted as strong by the security community and, whenever possible, leverage the state-of-the-art encryption APIs within the mobile platform.

Use of Insecure and/or Deprecated Algorithms

Cryptographic algorithms and protocols like RC2, MD4, MD5, and SHA-1, which have been shown to have significant weaknesses or are otherwise insufficient for modern security requirements, must not be employed.

Prevent Insufficient Cryptography

While the attack scenarios above educate developers on the dos and don'ts of building encryption and decryption into an app, the following best practices should be followed when handling sensitive data: - Avoid the storage of any sensitive data on a mobile device.
- Apply cryptographic standards that will withstand the test of time, at least 10 years into the future; and
- Follow the NIST guidelines on recommended algorithms.

Worried that your phone might be vulnerable to such threats? Protect your mobile now with Astra's Complete Security Suite for Android and iOS apps.
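As a small illustration of the deprecated-algorithms point, an application can enforce an algorithm allow-list before computing any digest. This is a hedged sketch: the DEPRECATED and APPROVED sets are illustrative policy choices for this example, not an official standard.

```python
import hashlib

# Illustrative policy sets -- align these with your own standard (e.g., NIST guidance).
DEPRECATED = {"md4", "md5", "sha1", "rc2"}
APPROVED = {"sha256", "sha384", "sha512"}

def safe_digest(data: bytes, algorithm: str = "sha256") -> str:
    """Refuse deprecated algorithms; hash only with approved ones."""
    name = algorithm.lower()
    if name in DEPRECATED:
        raise ValueError(f"{algorithm} is deprecated and must not be used")
    if name not in APPROVED:
        raise ValueError(f"{algorithm} is not on the approved list")
    return hashlib.new(name, data).hexdigest()
```

A wrapper like this makes it impossible for a stray call site to quietly fall back to MD5 or SHA-1.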
90% of teachers believe digital inking improves the quality of their curriculum. This IDC study of 685 teachers exposes the power of digital inking (the use of a stylus such as the Microsoft Surface Pen) in the classroom. With an ever-increasing range of classroom device options, school leaders and district decision makers should consider devices that will most directly support authentic and efficient student-teacher engagement and reduce the time teachers spend on non-teaching activities. See the transformation other schools are achieving with this eBook.

"Teachers without Surface and Digital Ink don't even consider working the way we do, because we do things that are otherwise impossible… this has probably been the most significant change in our classroom." – Director of Technology, K-12

The Good News

98% of classrooms frequently use computers, laptops or tablets.

The Bad News

More than half of all teachers are tethered to a desktop computer, and another 30% to a traditional notebook, making interaction with their instructional material while standing or walking around awkward.

Changing for the Better

Modern classrooms are starting to look different – about 10% use a tablet with a touch screen, which makes simultaneous interaction between teachers, students and the material more seamless and authentic. The benefits experienced by educators are extensive, as Microsoft Surface with digital inking has proven:

"Consider white boards: I erase and redo every period, six times a day. Now, I just complete a template on OneNote, and fill it in." – High School Science Teacher

"The Surface with Digital Ink gives me the ability to be anywhere and everywhere in class, at once. I'm not tied to the front of the room." – Teacher, Grades K-8

See the full family of Microsoft Surface devices, ideal for education. To request a quote from Microsoft's largest Australian Surface partner, contact Data#3 here.
This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI.

In 2018, a big fan of Nicolas Cage showed us what The Fellowship of the Ring would look like if Cage starred as Frodo, Aragorn, Gimli, and Legolas. The technology he used was deepfake, a type of application that uses artificial intelligence algorithms to manipulate videos. Deepfakes are mostly known for their capability to swap the faces of actors from one video to another. They first appeared in 2018 and quickly rose to fame after they were used to modify adult videos to feature the faces of Hollywood actors and politicians. In the past couple of years, deepfakes have caused much concern about the rise of a new wave of AI-doctored videos that can spread fake news and enable forgers and scammers.

The "deep" in deepfake comes from the use of deep learning, the branch of AI that has become very popular in the past decade. Deep learning algorithms roughly mimic the experience-based learning capabilities of humans and animals: if you train them on enough examples of a task, they will be able to replicate it under specific conditions.

The basic idea is to train a set of artificial neural networks, the main component of deep learning algorithms, on multiple examples of the actor and target faces. With enough training, the neural networks can create numerical representations of the features of each face. Then all you need to do is rewire the neural networks to map the face of the actor onto the target.

Deep learning algorithms come in different formats. Many people think deepfakes are created with generative adversarial networks (GANs), a type of deep learning algorithm that learns to generate realistic images from noise. And it is true that there are variations of GANs that can create deepfakes.
But the main type of neural network used in deepfakes is the "autoencoder." An autoencoder is a special type of deep learning algorithm that performs two tasks. First, it encodes an input image into a small set of numerical values. (In reality, it could be any other type of data, but since we're talking about deepfakes, we'll stick to images.) The encoding is done through a series of layers that start with many variables and gradually become smaller until they reach a "bottleneck" layer. The bottleneck layer contains the target number of variables. Next, the neural network decodes the data in the bottleneck layer and recreates the original image.

During training, the autoencoder is fed a series of images. The goal of the training is to find a way to tune the parameters in the encoder and decoder layers so that the output image is as similar to the input image as possible. The narrower the problem domain, the more accurate the results of the autoencoder become. For instance, if you train an autoencoder only on images of your own face, the neural network will eventually find a way to encode the features of your face (mouth, eyes, nose, etc.) in a small set of numerical values and use them to recreate your image with high accuracy.

You can think of an autoencoder as a super-smart compression-decompression algorithm. For instance, you can run an image through the encoding part of the neural network and use the bottleneck representation for compact storage or fast network transfer. When you want to view the image, you only need to run the encoded values through the decoding half to restore it to its original state. But there are other things an autoencoder can do, such as noise reduction or generating new images.

Deepfake applications use a special configuration of autoencoders. In fact, a deepfake generator uses two autoencoders, one trained on the face of the actor and another trained on the target.
After the autoencoders are trained, you switch their outputs, and something interesting happens. The autoencoder of the target takes video frames of the target and encodes the facial features into numerical values at the bottleneck layer. Then those values are fed to the decoder layers of the actor autoencoder. What comes out is the face of the actor with the facial expression of the target. In a nutshell, the autoencoder grabs the facial expression of one person and maps it onto the face of another person.

Training the deepfake autoencoder

The concept of deepfake is very simple, but training one requires considerable effort. Say you want to create a deepfake version of Forrest Gump that stars John Travolta instead of Tom Hanks. First, you need to assemble the training datasets for the actor (John Travolta) and the target (Tom Hanks) autoencoders. This means gathering thousands of video frames of each person and cropping them to show only the face. Ideally, you'll include images from different angles and lighting conditions so your neural networks can learn to encode and transfer the different nuances of the faces and environments. So you can't just take one video of each person and crop its frames; you'll have to use multiple videos. There are tools that automate the cropping process, but they're not perfect and still require manual effort.

The need for large datasets is why most deepfake videos you see target celebrities. You can't create a deepfake of your neighbor unless you have hours of video of them in different settings.

After gathering the datasets, you'll have to train the neural networks. If you know how to code machine learning algorithms, you can create your own autoencoders. Alternatively, you can use a deepfake application such as Faceswap, which provides an intuitive user interface and shows the progress of the AI model as the training of the neural networks proceeds.
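The output-swapping idea described at the top of this section can be illustrated with a deliberately tiny, non-neural stand-in: `encode` compresses a "frame" to a short bottleneck vector by block-averaging, and each person-specific `Decoder` expands it back while stamping that person's "appearance" (here just a numeric offset). A real deepfake uses trained convolutional encoders and decoders; every name and number below is illustrative only.

```python
def encode(frame, bottleneck=4):
    """Stand-in for the encoder: compress a frame (list of floats)
    into a short bottleneck vector by block-averaging."""
    block = len(frame) // bottleneck
    return [sum(frame[i * block:(i + 1) * block]) / block
            for i in range(bottleneck)]

class Decoder:
    """Stand-in for a person-specific decoder: expands a bottleneck
    vector back to a frame, adding that person's 'appearance'."""
    def __init__(self, appearance):
        self.appearance = appearance

    def decode(self, features, frame_len=8):
        block = frame_len // len(features)
        return [f + self.appearance for f in features for _ in range(block)]

actor_decoder = Decoder(appearance=0.5)    # "trained" on the actor
target_decoder = Decoder(appearance=-0.5)  # "trained" on the target

target_frame = [0.1] * 8                   # a frame of the target
features = encode(target_frame)            # target's expression, compressed
swapped = actor_decoder.decode(features)   # rendered with the actor's face
print(swapped)  # the target's expression carried by the actor's appearance
```

The swap is literally just routing the target's bottleneck values through the actor's decoder, which is exactly the rewiring the article describes.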
Depending on the type of hardware you use, deepfake training and generation can take from several hours to several days. Once the process is over, you'll have your deepfake video. Sometimes the result will not be optimal, and even extending the training process won't improve the quality. This can be due to bad training data or a wrong configuration of your deep learning models; in this case, you'll need to readjust the settings and restart the training from scratch. In other cases, there are minor glitches and artifacts that can be smoothed out with some VFX work in Adobe After Effects. In any case, at their current stage, deepfakes are not a one-click process. They've become a lot better, but they still require a good deal of manual effort.

Manipulated videos are nothing new. Movie studios have been using them in cinema for decades, but previously they required tremendous effort from experts and access to expensive studio gear. Although not yet trivial, deepfakes put video manipulation at the disposal of everyone. Basically, anyone who has a few hundred dollars to spare and the nerves to go through the process can create a deepfake from their own basement.

Naturally, deepfakes have become a source of worry and are perceived as a threat to public trust. Government agencies, academic research labs, and social media companies are all engaged in efforts to build tools that can detect AI-doctored videos. Facebook is looking into deepfake detection to prevent the spread of fake news on its social network. The Defense Advanced Research Projects Agency (DARPA), the research arm of the U.S. Department of Defense, has also launched an initiative to stop deepfakes and other automated disinformation tools. And Microsoft has recently launched a deepfake detection tool ahead of the U.S. presidential election. AI researchers have already developed various tools to detect deepfakes.
For instance, earlier deepfakes contained visual artifacts such as unblinking eyes and unnatural skin color variations. One tool flagged videos in which people didn't blink or blinked at abnormal intervals. Another, more recent method uses deep learning algorithms to detect signs of manipulation at the edges of objects in images. A different approach is to use blockchain to establish a database of signatures of confirmed videos and apply deep learning to compare new videos against that ground truth.

But the fight against deepfakes has effectively turned into a cat-and-mouse chase. As deepfakes constantly get better, many of these tools lose their efficacy. As one computer vision professor told me last year: "I think deepfakes are almost like an arms race. Because people are producing increasingly convincing deepfakes, and someday it might become impossible to detect them."
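The blink-interval heuristic mentioned above can be sketched very simply. This is an illustrative toy, not the actual detector: the normal-range bounds below are assumed round numbers (humans blink roughly 15–20 times per minute), not values from any published tool, and a real system would first need a vision model to find the blinks.

```python
def blink_rate(blink_times, clip_seconds):
    """Blinks per second observed in a clip, given blink timestamps."""
    return len(blink_times) / clip_seconds

def looks_suspicious(blink_times, clip_seconds, lo=0.1, hi=0.8):
    """Flag a clip whose blink rate falls outside an assumed normal range
    (lo/hi are illustrative thresholds in blinks per second)."""
    rate = blink_rate(blink_times, clip_seconds)
    return rate < lo or rate > hi

# A 60-second clip with 16 blinks looks normal; one with a single blink does not.
print(looks_suspicious([2.1 * i for i in range(16)], 60))  # False
print(looks_suspicious([30.0], 60))                        # True
```

As the article notes, better deepfakes now synthesize plausible blinking, which is exactly why such single-signal detectors age quickly.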
Cyber threat intelligence is evidence-based information or knowledge about the capabilities, techniques, infrastructure, motives, goals, and resources of an existing or emerging threat. This intelligence provides context to better understand and identify adversaries, and, as Gartner's definition of threat intelligence states, this information "can be used to inform decisions regarding the subject's response to that menace or hazard". Simply put, threat intelligence is the knowledge that allows you to prevent, identify, and mitigate cyberattacks.

Where Does Cyber Threat Intelligence Come From?

Threat intelligence comes from both internal and external sources, meaning data from both inside and outside your own network. Combining the two allows you to better understand the threat landscape and the individual profiles of threat actors.

Internal Threat Intelligence Sources

There is a wealth of threat data you can leverage from your organization's internal network, including log files, alerts, and incident response reports, to name a few. Organizations that use SIEM tools will also have access to several raw sources of internal network event data, such as event logs, DNS logs, and firewall logs, that can be used to identify and stop threats. Incident response reports, which are commonly used to maintain historical knowledge of past incidents, can also provide context and answer questions such as: who was attacking you, what were their motivations and capabilities, and which indicators of compromise (IOCs) should be monitored to prevent similar attacks in the future? Other valuable sources of internal intelligence are retained malware, packet captures, and NetFlow data.

External Threat Intelligence Sources

External sources of threat intelligence, as the name implies, come from outside your network. There is a wide variety of these sources, with different structures, intentions, and levels of trustworthiness.
Open-source intelligence, often referred to as OSINT, includes data from independent security researchers, vendor blogs, and publicly available threat indicator blocklists. Another common source of external threat intelligence is a private or commercial threat intelligence feed. These feeds are continuously updated sources of indicators or data derived from an outside organization and can include information on suspicious domains, malware hashes, IP addresses, and other IOCs. Unlike OSINT feeds, private or commercial feeds include more unique, higher-quality intelligence and can sometimes be focused on specific industry verticals. They often include more detailed intelligence, such as threat actor profiles or motivational insights, along with the more standard indicators.

Threat intelligence can also be collected from partners, peers, vendors, and clients in a sharing environment. This is usually seen in an information sharing community, such as an ISAC or ISAO, where intelligence is shared among member organizations in a similar industry. The increase in information sharing through such communities has emerged as a major milestone in the evolution of threat intelligence.

Why is Cyber Threat Intelligence Important?

Keeping a network and data secure is becoming increasingly difficult as the tactics, techniques, and procedures (TTPs) used by cyber threat actors continue to grow more sophisticated. To avoid a breach, a security team must be right 100 percent of the time, with no exceptions; for a threat actor to be successful, they just have to get lucky once. With their singular focus on working their way into organizations, attackers always have the upper hand. They can choose whether to target humans or unpatched vulnerabilities, purchase hacking tools on the dark web, and more.
To level the playing field, security teams can tap into available threat intelligence to gain greater visibility into which potential threats to prepare for and how best to prevent and mitigate as many of them as possible. By combining threat intelligence with internal telemetry, you can begin to understand not only what is happening within your network but also establish a proactive stance, informed and better prepared for potential threats or blind spots in your defense. Threat intelligence has become a necessity, and by leveraging actionable threat intelligence, organizations can improve their cybersecurity framework.

Types of Cyber Threat Intelligence

Cyber threat intelligence comes in many forms but can largely be divided into human-readable and machine-readable types. Human-readable threat intelligence is meant for consumption by security analysts, who analyze it to gather strategic and operational insights on cyber threats. Machine-readable threat intelligence, on the other hand, is leveraged directly by cybersecurity technology platforms such as threat intelligence platforms, SIEMs, EDRs, and NDRs for automated detection, analysis, and actioning.

Strategic Threat Intelligence includes identifying and inspecting risks that can affect an organization's core assets, such as employees, customers, vendors, and the overall infrastructure. Developing strategic intelligence requires highly skilled human analysts to gather proprietary information, follow up on trends, identify threats, and design defensive architecture to combat those threats. At the strategic level, threat intelligence presents highly relevant information in a clear and concise form while outlining mitigation strategies that can aid an organization's decision-making process. This form of intelligence includes historical trends, motivations, and key attributions of an attack.
It helps enterprises look at the bigger picture and set overarching goals for becoming more secure.

Tactical Threat Intelligence provides extensive and rich data on current or existing threats and is of most use to an analyst. Unlike strategic intelligence, tactical intelligence is micro in scope. It comes in the form of IOCs, which include information on malicious domains, malware files, malicious URLs, and virus signatures. Tactical intelligence is highly effective for analyzing a cyber kill chain and thereby containing an attack in progress. With tactical intelligence in hand, organizations can act quickly and minimize the impact.

Technical Threat Intelligence commonly refers to information derived from a threat data feed. It tends to involve information such as which attack vector is being used, which command-and-control domains are being employed, and which vulnerabilities are being exploited. Technical intelligence usually focuses on a single type of indicator, like malware hashes or suspicious domains.

Operational Threat Intelligence is knowledge about cyberattacks, events, or campaigns that provides more context and understanding of the nature, intent, and timing of specific attacks. This form of intelligence focuses mainly on how a threat actor is going to attack a company (who is most active, what the targets, capabilities, and intentions are) at the operational level. It also examines other elements, such as how the attack would impact the organization, and helps prioritize operational assets from a security perspective.

Top Cyber Threat Intelligence Use Cases

Security Operations Center (SOC)

A Security Operations Center, or SOC, can leverage cyber threat intelligence for security monitoring, alerting, and blocking. A threat intelligence platform (TIP) plays a great role in helping SOC teams create rules or signatures for indicators of compromise (IOCs). These rules drive alerts in SIEMs, IDS/IPS, or endpoint protection products.
A cyber threat intelligence feed or set of IOCs can also be used to block suspicious activity at firewalls or other security devices. A more advanced use case for SOC teams is using threat intelligence to help manage and triage the alerts generated by network monitoring. When alerts are combined with the context threat intelligence provides, a SOC analyst can more quickly determine their accuracy, relevance, and priority. This helps reduce false positives, speed up triage, and drastically cut the time spent on analysis and containment.

Incident Response

Like SOC analysts, incident responders are inundated with high volumes of alerts that make it difficult to know which to investigate first and how best to respond. Cyber threat intelligence can assist incident response teams in assessing alerts by reducing false positives, enriching alerts with context, and indicating where to look next to observe an ongoing intrusion. Threat intelligence can also help with the triage and prioritization of ongoing investigations based on which adversaries may be involved and which infrastructure is potentially at risk.

Vulnerability Management

Vulnerability management is often a very time-consuming process that can seem like a never-ending battle. It is also common practice for vulnerability patching to be delayed in favor of business continuity. However, there are times when organizations need to be aware of real, imminent risks that could be thwarted by a simple patch. Cyber threat intelligence can help bridge this gap and provide a smarter lens of risk-based analysis of vulnerabilities. Threat intelligence can provide key insights into the weaponization of vulnerabilities through different malware or exploits. It can also show what a threat actor's objectives might be and what they could possibly use to achieve them. By knowing the attackers' TTPs, security teams can evaluate the risk posed to specific internal systems and prioritize those vulnerabilities first.
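To make the IOC-matching idea concrete, here is a minimal sketch of checking raw events against an indicator set from a feed. All feed contents, field names, and values below are invented for illustration; a real TIP or SIEM rule engine does this at scale, with normalization, enrichment, and scoring on top.

```python
# Hypothetical IOC set, as might be pulled from a commercial or OSINT feed.
iocs = {
    "domain": {"bad.example.com", "c2.example.net"},
    "file_hash": {"44d88612fea8a8f36de82e1278abb02f"},
}

def triage(events, iocs):
    """Return only the events that match a known indicator,
    annotated with which IOC type fired."""
    alerts = []
    for event in events:
        for ioc_type, values in iocs.items():
            if event.get(ioc_type) in values:
                alerts.append({**event, "matched": ioc_type})
    return alerts

events = [
    {"src": "10.0.0.5", "domain": "bad.example.com"},
    {"src": "10.0.0.9", "domain": "intranet.local"},
]
print(triage(events, iocs))  # only the first event is flagged
```

Annotating each hit with the indicator type that fired is a small example of the context enrichment described above: the analyst sees not just an alert, but why it matched.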
Cyber threat intelligence has come a long way over the years and can be used by CISOs and other security leaders in planning for business and technical risk. This information can help inform decisions about the architecture and processes that best suit an organization, or the budget and team size that need to be justified. Threat intelligence can provide valuable insight into attack trends by geography, industry, software, hardware, and more. Gaining a better understanding of the threat landscape most relevant to you is an invaluable asset for any security leader.

Cyber Threat Intelligence and Cyber Fusion

In many ways, cyber fusion is the combination of all or many of the other use cases listed here. Cyber fusion is an approach to cybersecurity that unifies all security functions, such as threat intelligence, security automation, threat response, security orchestration, and incident response, into a single connected unit. This level of visibility and collaboration across all units for detecting, managing, and responding to threats gives security teams an advanced level of resilience and control. A key element is the continuous flow of analyzed, up-to-date threat intelligence automatically fed into all functional units of security operations, including deployed security and IT tools as well as human analysis and response teams, to foster visibility-driven security operations. Cyber fusion enhances an organization's defense posture and accelerates its response to cyber threats.

What is a Threat Intelligence Platform?

Threat intelligence platforms (TIPs) are software solutions that enable security teams to collect, organize, and manage threat data and intelligence. A more advanced TIP provides the ability to share and receive intelligence from multiple peers, TI providers, ISAC members, regulators, partner organizations, and subsidiary companies.
An advanced TIP can also automate the normalization, enrichment, and analysis of cyber threat intelligence to help security teams more quickly identify, manage, and act on cyber threats. In the cybersecurity marketplace you'll come across several cyber threat intelligence platform vendors, so it's important to choose the TIP that suits your business needs. When leveraged properly, a smart, bi-directional TIP gives security teams the ability to more accurately predict and prevent attacks as well as mitigate and respond to threats with faster, smarter actions.

Cyware Threat Intelligence Solutions

An innovative TIP to automatically aggregate, enrich, and analyze threat indicators in a collaborative ecosystem.

A mobile-enabled, automated strategic threat intelligence aggregation, processing, and sharing platform for real-time alert dissemination and enhanced collaboration between an organization's security teams or an ISAC and its members.
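The normalization step a TIP automates amounts to mapping indicators from differently-shaped feeds into one common record format. The feed layouts and field names in this sketch are invented for illustration; real platforms normalize to interchange standards such as STIX.

```python
def normalize(raw, source):
    """Map a raw feed entry into a hypothetical common schema.
    Only two invented feed layouts are handled here."""
    if source == "feed_a":    # e.g. {"ioc": "...", "kind": "..."}
        return {"value": raw["ioc"], "type": raw["kind"], "source": source}
    if source == "feed_b":    # e.g. {"indicator": "...", "itype": "..."}
        return {"value": raw["indicator"], "type": raw["itype"], "source": source}
    raise ValueError(f"unknown feed: {source}")

records = [
    normalize({"ioc": "bad.example.com", "kind": "domain"}, "feed_a"),
    normalize({"indicator": "198.51.100.7", "itype": "ip"}, "feed_b"),
]
# Both records now share the same shape, so they can be deduplicated,
# enriched, and pushed to SIEMs or firewalls uniformly.
print(records)
```

Once every feed lands in one schema, downstream steps (dedup, scoring, dissemination) only have to be written once.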
<urn:uuid:c491f9b6-3e72-4f1c-ac83-1c9f019a4ea2>
CC-MAIN-2022-40
https://cyware.com/educational-guides/cyber-threat-intelligence
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00537.warc.gz
en
0.934652
2,338
2.609375
3
While much of the work in VoIP systems and networks takes place in North America, there is substantial interest in this technology from both users and vendors on the other side of the Atlantic. Although North Americans may not want to admit it, many of the technical innovations of the last few decades have had their roots in Europe. One clear example is the Integrated Services Digital Network (ISDN), which for many years enjoyed much more popularity with both end users and carriers in Europe than in North America. Another example is next generation wireless systems, especially Global System for Mobile Communications (GSM) networks, which are much more widely deployed, and benefit from much stronger inter-carrier relationships, in Europe than in North America. Perhaps these are two examples of the European Union's ability to unite its member countries (at least from a communications perspective). In any event, many of the international standards bodies are headquartered in Europe, including the International Telecommunication Union (ITU), www.itu.int, and the International Organization for Standardization (ISO), www.iso.org (both in Geneva, Switzerland), as well as the Third Generation Partnership Project (3GPP), www.3gpp.org, in Sophia Antipolis, France, which may also influence the European community to invest more heavily in next generation technologies.

One example of a solid European communications company with over a century of technological innovation is Siemens AG, headquartered in Berlin and Munich, Germany. Siemens was founded in Berlin in 1847 by Werner Siemens and his business partner Johann Georg Halske. The foundation of the business was the pointer telegraph invented by Werner Siemens. With the discovery of gutta-percha as an insulating material for marine cables, the Siemens firm opened the way to transcontinental telecommunications.
It rapidly grew into a renowned corporation and was awarded the first major international telecommunications projects: in 1870, the Indian-European telegraph line was opened between London and Calcutta, and in 1875 the first transatlantic cable between the USA and Ireland was commissioned. A special cable-laying ship, the Faraday, was built for this project. Further milestones on the road to becoming one of the most important suppliers in the global telecommunications market included setting up the first automatic telephone exchange (1909) and the first telephotography line (1927). Today, with over 450,000 employees worldwide, approximately $100B in annual revenues, and customers in over 190 countries, Siemens develops products for the information technology, communications, power, automation and control, transportation, medical, and lighting industries.

The Siemens product portfolio for next generation switching is called SURPASS, and includes four building blocks: switching, access, options, and network management. The SURPASS switching products are focused on the carrier market and have been adopted by over 70 network operators worldwide. These products include:

- hiE 9200: a switching system that integrates the functionalities of the Siemens EWSD switching system, providing a migration path from TDM to next generation switching.
- hiQ 8000: a softswitch that provides the platform for next generation networks, while supporting multiple signaling protocols, including SS7, H.323, and SIP.
- hiS: a standalone Signaling Transfer Point that handles SS7 over TDM, SS7 over ATM, and SS7 over IP signaling, thus bridging the gap between wireline, wireless, and next generation networks.
- hiR: a family of resource servers, which provide announcements and user-interactive dialogues, and support the MGCP/MEGACO interfaces.
- hiG: a family of media gateways that interface between the existing TDM network and the next generation IP network, supporting voice, data, fax, modem, and ISDN traffic.
- hiD: high-speed packet switches that integrate ATM, IP, TDM, frame relay, and Ethernet technologies into a multi-service platform.
- hiX: multi-function DSLAMs (Digital Subscriber Line Access Multiplexers) for deployment in access and backbone networks.
- hiT: an optical platform for high-bandwidth transport within metropolitan and core networks.

As the product list above indicates, Siemens provides a very comprehensive solution that offers carriers a smooth transition from current to next generation IP technologies. Further details on the Siemens architecture and products can be found at www.siemens.com. Our next tutorial will continue our examination of vendors' architectures.

Copyright Acknowledgement: © 2006 DigiNet ® Corporation, All Rights Reserved

Mark A. Miller, P.E. is President of DigiNet ® Corporation, a Denver-based consulting engineering firm. He is the author of many books on networking technologies, including Voice over IP Technologies and Internet Technologies Handbook, both published by John Wiley & Sons.
Consumers are growing increasingly comfortable storing sensitive information on their computers, USB flash drives, and external hard drives, as well as using Web-based solutions to automate regular tasks such as shopping for holiday gifts, paying bills, and tracking financial portfolios. The push from vendors encouraging customers to move toward e-billing has also played a major role in more personal information being stored locally on personal computers.

To put the magnitude of this problem into perspective, consider this: over 600,000 laptop thefts occur annually in the U.S. alone, resulting in an estimated US$5.4 billion loss of proprietary information, according to the Ponemon Institute. Over 90 percent of these laptops are never recovered. At the same time, cybercriminals are developing increasingly savvy techniques to access and exploit sensitive information, such as usernames, passwords, and credit card details, for personal gain.

There are two very easy, relatively inexpensive ways for consumers to protect themselves from identity theft: encrypt any data containing personal information, and use password manager tools to store online logins, passwords, and banking information.

Exposing Your Data

There are two common situations in which people expose themselves on a regular basis: using systems that rely on automated antivirus software protection, and using public or borrowed PCs to connect to the Internet. Most consumer-facing Web sites have now implemented robust security features, such as SSL certificates that display an "https" URL instead of "http," to alert users that their e-commerce pages are secure. However, the proliferation of public WiFi hotspots and online social networks has created new opportunities for thieves to spread Trojan viruses such as keyloggers, to phish for passwords, and to sniff out packets of sensitive information as they pass through a network.
All too often, I hear from consumers who have picked up viruses on their PCs because they relied on their antivirus software to update automatically in the background or used free shareware antivirus programs to protect themselves. These approaches can provide a false sense of security: protection can be compromised if the antivirus application runs past its expiration date or stops updating. To remedy this, I recommend that everyone do manual software updates on a regular basis and thoroughly review any errors they receive while performing this task.

The other common complaint I hear from customers is that they picked up a virus on their USB drive while using a public or borrowed PC on a vacation or business trip, which then infected their personal PC. This can be avoided by encrypting the data on your USB flash drive, as viruses can't penetrate encrypted data.

Some Scary Facts About Data Theft

- Business travelers lose more than 12,000 laptops per week in U.S. airports alone;
- 1 laptop is stolen every 53 seconds;
- Computer viruses cost U.S. businesses $55 billion annually; and
- 25 percent of all PC users suffer from data loss each year.

Common techniques used by hackers and thieves for data theft include harvesting information from stolen laptops and USB flash drives, and employing keystroke logging and phishing to steal sensitive online passwords. Keystroke logging, often used to steal information such as online bank account credentials, accounts for 76 percent of all online threats, according to a recently published Symantec Internet Security Threat Report. In this scheme, hackers use software capable of recording an unsuspecting victim's keystrokes, which can reveal online passwords and credit card numbers, as well as information being passed by email or typed into Word documents.
Lock Down Your Data

The great news for consumers is that data encryption software solutions are available to address these security concerns by enabling the user to lock down sensitive information in secured folders (vaults) on their computers, removable hard drives, and USB memory sticks. These data security products use pairs of complex algorithms, known as “ciphers” in the field of cryptology, capable of quickly encrypting and decrypting just about any type of data file, whether it’s a document, video or photo. Essentially, these algorithms scramble the data so that it is unintelligible, and therefore useless, to a hacker or thief. Once encrypted, these files cannot be infected by viruses or opened without knowing the user’s personal password. In the event that your laptop or desktop crashes and needs repair, these types of data encryption tools can prevent the people at your local computer repair shop from accessing your personal information, photos, videos, and medical and financial records. When you’re at the coffee shop using the wireless network to get online, these same tools prevent would-be snoops from gaining access to sensitive files stored on your machine. Hackers are always developing new tools and techniques to crack passwords and exploit vulnerabilities in weaker encryption software. I recommend that people exercise due diligence and investigate the encryption software they are using, to ensure it has not been compromised and that tools to crack it aren’t readily available through search engines like Google.

Secure Your Passwords

The more advanced encryption software solutions also enable the user to securely log into sensitive Web sites, providing advanced algorithmic protection while sensitive passwords are entered. The data entered into a password manager should be encrypted in case of theft or loss of the PC, laptop or USB flash drive it is stored on.
These types of password-protection features are also capable of storing and managing secure passwords so you can maintain unique IDs for each Web site, without having to remember them each time you log on to do online banking, surf the social networks, or check your email. With the increasing instances of physical theft and cybercrime, it’s imperative that we all understand the potential threats of data theft in our personal and professional lives. By using simple data encryption and password protection tools, you can ensure that your personal information and online identities remain secure and private. Mark Smail is CTO of Onix International, the distributor of EncryptStick, a data encryption software solution.
Increasingly, businesses rely on algorithms that use data provided by users to make decisions that affect people. For example, Amazon, Google, and Facebook use algorithms to tailor what users see, and Uber and Lyft use them to match passengers with drivers and set prices. Do users, customers, employees, and others have a right to know how companies that use algorithms make their decisions? In a new analysis, researchers explore the moral and ethical foundations of such a right. They conclude that the right to such an explanation is a moral right, and then address how companies might provide it. “In most cases, companies do not offer any explanation about how they gain access to users’ profiles, from where they collect the data, and with whom they trade their data,” explains Tae Wan Kim, Associate Professor of Business Ethics at Carnegie Mellon University’s Tepper School of Business, who co-wrote the analysis. “It’s not just fairness that’s at stake; it’s also trust.”

Calling for transparency under the concept of algorithmic accountability

In response to the rise of autonomous decision-making algorithms and their reliance on data provided by users, a growing number of computer scientists and governmental bodies have called for transparency under the broad concept of algorithmic accountability. For example, the European Parliament and the Council of the European Union adopted the GDPR in 2016, part of which regulates the use of automatic algorithmic decision systems. The GDPR, which took effect in 2018, affects businesses that process the personally identifiable information of residents of the European Union. But the GDPR is ambiguous about whether it includes a right to explanation of how businesses’ automated algorithmic profiling systems reach decisions. In this analysis, the authors develop a moral argument that can serve as a foundation for a legally recognized version of this right.
Informed consent as an assurance of trust for incomplete algorithmic processes

In the digital era, the authors write, some say that informed consent — obtaining prior permission for disclosing information with full knowledge of the possible consequences — is no longer possible because many digital transactions are ongoing. Instead, the authors conceptualize informed consent as an assurance of trust for incomplete algorithmic processes. Obtaining informed consent, especially when companies collect and process personal data, is ethically required unless overridden for specific, acceptable reasons, the authors argue. Moreover, informed consent in the context of algorithmic decision-making, especially for non-contextual and unpredictable uses, is incomplete without an assurance of trust. In this context, the authors conclude, companies have a moral duty to provide an explanation not just before automated decision-making occurs, but also afterward, so the explanation can address both system functionality and the rationale of a specific decision.

Attracting clients by providing explanations on how they use algorithms

The authors also delve into how companies that run businesses based on algorithms can explain their use of them in a way that attracts clients while maintaining trade secrets. This is an important decision for many modern start-ups, involving such questions as how much code should be open source and how extensive and exposed the application programming interface should be. Many companies are already tackling these challenges, the authors note. Some may choose to hire “data interpreters,” employees who bridge the work of data scientists and the people affected by the companies’ decisions. “Will requiring an algorithm to be interpretable or explainable hinder businesses’ performance or lead to better results?” asks Bryan R. Routledge, Associate Professor of Finance at Carnegie Mellon‘s Tepper School of Business, who co-wrote the analysis.
“That is something we’ll see play out in the near future, much like the transparency conflict of Apple and Facebook. But more importantly, the right to explanation is an ethical obligation apart from bottom-line impact.”
Automation Using a DevOps Toolchain

In a virtually connected world, software technologies have raised the bar for organizations to meet customer expectations. The internet has given customers the power to demand better service performance, reliability, and quality, forcing enterprises to respond faster to queries, fix issues, and improve product features. To streamline their processes, businesses have embraced modern Software Development Lifecycle (SDLC) strategies such as DevOps, which let organizations automate workflows and push quality software features to end-users swiftly and effectively. Replacing manual tasks with automation reduces the associated errors and frees up employee potential.

Intro to DevOps Toolchain

The key principles of DevOps revolve around continuous integration, continuous deployment, and continuous delivery, achieved through automation and better collaboration within teams. For all stages of software development, DevOps offers a combination of the most effective tools to automate development, maintenance, and deployment of the software in line with agile principles. Standardization and consistency are vital functions of the DevOps toolchain. Toolchains are created, orchestrated, and stored within the cloud. Developers can simplify and accomplish complex development processes because the toolchain integrates seamlessly with other DevOps tools and processes.

Role of Automation in DevOps Toolchain

The main highlight of DevOps practice is automating infrastructure setup and configuration, and software deployment. DevOps depends heavily on automation to foster speed, accuracy, consistency, reliability, and faster delivery across platforms. Automation in DevOps spans everything from building and deploying to monitoring: it removes performance bottlenecks and communication gaps, breaks down silos between Devs, Ops, and QA, and facilitates agility through a standardized SDLC process.
The role of automation further extends to the following key tasks of the DevOps SDLC pipeline:

Code Development – Automation allows developers to stay on top of the DevOps SDLC pipeline. It simplifies the development of large, complex software projects, defines certain changes or process activities, and saves time.

Visibility – Automating traceability and issue tracking helps the Ops team keep up with code changes and issues, which results in better achievement of project goals. Devs can fix the root cause of an issue without delaying any progress made.

Continuous Testing – Continuous testing helps to identify risks, address them, and improve standards. This methodology focuses on achieving continuous quality and improvement. Automation is critical to support, execute, and manage continuous testing in a DevOps environment.

Continuous Integration & Continuous Delivery – Automating Continuous Integration and Continuous Delivery enables teams to find and fix bugs every time changes are made to the code. It speeds up the release process and reduces the risk and effort associated with it, delivering each update to the market more frequently.

Effective Monitoring and Incident Management – Automation in monitoring and incident management leads to improved communication between Dev teams and IT operations teams, resulting in faster incident responses and a more resilient system. It intelligently prioritizes events, identifies root issues, and proactively delivers an actionable strategy.

Benefits of Streamlining Automation with a DevOps Toolchain

Businesses incorporating DevOps practices perform better, with cross-functional members working together to deliver maximum speed, functionality, and innovation.

1. Better Coordination
IT automation helps companies coordinate and consolidate IT operations within a consistent, common interface so that disparate systems and software can self-act and self-regulate.

2. Maximizing Efficiency
Automation using a DevOps toolchain removes human dependency and improves accuracy, consistency, reliability, and speed. Automated deployments and standardized production environments relieve employees from routine, repetitive tasks so they can spend more time on critical processes that require a higher level of intelligence. It provides change management tools that allow high efficiency and targeted modifications.

3. Optimize Business Processes
Beyond repetitive tasks, automation covers many of the processes required to move from code to production. It provides innovative and cost-effective ways for development and operations to enable frequent, faster releases with high quality. It makes the production process agile, empowering team members to accelerate new services through continuous improvement and operational flexibility. If there are any barriers in the process, automated monitoring and alerting can proactively notify the concerned parties to step in and take necessary action.

4. Better Management
Automation in DevOps helps organizations meet customer and management expectations for quick, accurate rollouts. It helps in executing processes, managing teams, and ensuring adaptability for noticeable and lasting ROI.

iLink Helps You Select the Right DevOps Toolchain

While DevOps is implemented everywhere, it’s important to choose the right DevOps toolchain for a homogeneous pipeline. This includes selecting tools from categories like planning, collaboration, configuration, and deployment. At iLink, we help you get started on your DevOps toolchain and deliver high-quality software.
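The stage-gated pipeline behavior that CI/CD automation provides can be sketched in a few lines. This is a minimal illustration, not tied to any particular CI product; the stage names and pass/fail stubs are made up:

```python
from typing import Callable, List, Tuple

def run_pipeline(stages: List[Tuple[str, Callable[[], bool]]]) -> List[str]:
    """Run stages in order and stop at the first failure, as a CI server would."""
    log = []
    for name, stage in stages:
        ok = stage()
        log.append(f"{name}: {'ok' if ok else 'FAILED'}")
        if not ok:
            break  # a failed stage gates everything after it
    return log

pipeline = [
    ("build",  lambda: True),
    ("test",   lambda: False),   # a failing test blocks the release
    ("deploy", lambda: True),    # never reached in this run
]
print(run_pipeline(pipeline))    # → ['build: ok', 'test: FAILED']
```

In a real toolchain each stage would shell out to a build tool, test runner, or deployment script, but the gating logic is the same: automation enforces that broken code never reaches production.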
Your autonomous car needs to avoid an accident. Does it swerve left towards two children or right towards 10 adults? Or does it keep going and risk killing you? Tim Green mulls over the moral quandaries of our driverless future… Imagine if your driverless car had an ethics dial. You would sit in the vehicle, program the SatNav to your required direction, key in your music choice and set the temperature. Finally, you would decide how selfish you want the car to be. More precisely, you would tell the car whether – in the event of an emergency – to save you or the hapless pedestrians you are speeding towards. It’s a fanciful idea. But it’s one that has been discussed as part of a growing debate about the ethical and legal implications of living in an era of autonomous automobiles. The first conversations about connected cars were mostly technical. Would they be safe? Would they save fuel? However, now that the concept is moving closer to reality, people are thinking more deeply about ethical questions. Specifically, they are thinking about the ‘trolley problem’. This is a familiar subject in philosophy textbooks. It asks people to decide what they would do in the following situation: a runaway trolley is speeding down the railway tracks. It is headed straight for five people. But there is a lever. If you pull it, the trolley will re-direct and kill just one person. Would you do it? And then it gets murkier. What about if you could throw a fat person in the way of the train to stop it killing five people. Would you do that? Apply the trolley problem to the driverless car, and you have to ask: what should the car’s AI be programmed to do in such a situation? Maybe the car’s human ’driver’ should decide.
Which brings us back to that ethics dial. Here, the motorist could choose to favour pedestrians if she was alone in the car. But with her kids in the back, she could re-set the dial to prioritise the car’s occupants. This is just a thought experiment, but it’s already been the subject of research. In one survey, 44 per cent of people said they would like to have control over an ethics setting, while 12 per cent said they would want the manufacturer to pre-set it. It’s all made more complicated by the fact that we might apply different judgements to human drivers and AI. After all, a human might be forgiven for making a fatal decision in a split second. But AI programmers have time to consider their decisions when they are coding the software. Tricky, isn’t it? It’s fair to say the idea of robots ‘driving’ cars is fraught with moral quandaries. And a lot of commercial ones. At present there are no firm answers. But there are lots of questions, such as:

Should driverless cars break the law? Sometimes people mount the pavement or break the speed limit to avoid accidents. Should AI do the same? What about setting off with a broken light? No? But what if it’s the daytime? Where’s the line?

OK to hit squirrels, but not dogs? This is related to the trolley question. Should driverless cars have a hierarchy of animals to swerve for?

How ‘irresponsible’ can occupants be? Is it OK to be drunk inside a driverless car? How incapacitated can occupants be? Can a blind person be in control of an autonomous vehicle? A child?

What happens when self-driving cars inspire road rage? In a world where autonomous cars co-exist with regular ones, might cautious, law-abiding ‘robots’ infuriate impatient humans? Who would be held to blame for disputes?

Should driverless cars make judgements about occupants? If drinking limits, for example, are set on occupants, should cars be programmed to detect alcohol? And should they de-activate if the humans are under the influence?
How much should a car know about its driver? Should it track your visits to a known crime scene and pass them to police? Can its systems be made to disclose location information in a divorce case?

Who pays when there’s a collision? If a driverless car malfunctions and causes an accident, whose fault is that? The occupant could be to blame. But he might blame the car maker, who might blame the software provider, who might blame the OS maker…

Who pays when there’s a fine? A driverless car gets a parking ticket because a no-parking zone has just changed. Whose fault is that? Should the mapping systems have known? Did the local authority inform the car maker? Should it have to?

Can a human driver be penalised for failing to take control? A self-driving car malfunctions, and alerts its occupant to take the wheel. What if she doesn’t, and there’s an accident?

One wonders how much attention will be paid to these questions when the trials start in earnest – as they already are, in fact. Just weeks ago, Jaguar and Land Rover revealed a 41-mile ‘living laboratory‘ around Coventry and Solihull in the UK to assess real-world autonomous driving conditions. They’ll be testing ‘vehicle-to-vehicle’ and ‘vehicle-to-infrastructure’ systems to see how well 100 connected cars perform on real roads. It’s hard to imagine how they will factor in ethical concerns like driver drunkenness and stray dogs. But they should.
In the first part of this series, we addressed why HIPAA was so important, where it came from, and the punitive measures it incorporated as a means to ensure compliance. More importantly, we discussed the general requirements of HIPAA. Today, we’re going to do a deep dive into those requirements, focusing on the most important and weighty aspect of HIPAA legislation – the Privacy Rule. The Privacy Rule is the fundamental result of HIPAA that defined how data should be handled, and the expectations set for data providers. While HIPAA oversaw great reform in terms of insurance coverage and portability, data providers should be much more concerned about the Privacy Rule, as this group of concepts is the chief governing agent when it comes to data. Let’s take a look at this rule and everything it entails.

What is the Privacy Rule?

In its most simple form, the Privacy Rule, officially called the “Standards for Privacy of Individually Identifiable Health Information”, is the first national standard passed in the United States concerning medical and health data. Before this point, data of this nature was covered broadly under a range of identity protections – this proved woefully inadequate, leading to a high incidence of identity theft while also making providers nervous to transfer data amongst themselves, lest they be snared in the complex web of law. When designing the Privacy Rule, the U.S. Department of Health and Human Services had a single goal in mind – meet all the stated requirements of the HIPAA legislation. In doing this, they covered a wide range of rules and laws pertaining not only to privacy rights but to the disclosure of activities and use of data to the owners of that data (in other words, it established that patients have the right to know where their data is and what it’s being used for).
This data was specifically defined as “protected health information”, and was considered the joint domain of the citizen and the entities handling the data, with the entities known as the “covered entities”. While the Privacy Rule specifically addresses client protection, a good portion of this rule is also dedicated to ensuring that medical information can flow from provider to provider while ensuring the general well-being of the public at large. Because of this, the “Privacy Rule” is very much a balance between protection and usage. It should be said that the bulk of the systems covered under the Privacy Rule are not technical in nature. These include health plans, medical providers, clearinghouses, and insurance adjusters. That being said, there is a very specific classification for businesses that interact with medical data but are not in themselves a medical organization. The documentation for the Privacy Rule calls these organizations “Business Associates”, and designates them as “a person or organization, other than a member of a covered entity’s workforce, that performs certain functions or activities on behalf of, or provides certain services to, a covered entity […]”. While many have argued that this definition is overly broad, the breadth was intentional – the takeaway from the definition should be that any company handling medical data in any form from a covered entity is subject to legislation under the Privacy Rule. Because of this, data providers are in the unique situation of having to adhere to conceptually complex legislation for an industry that they are not specifically part of, but may indeed do business with. More on these specific regulatory concerns later. Now that we know who is covered, what is covered?
The protected information under the Privacy Rule is defined as all “individually identifiable health information”, including:

• Past, present, or future medical and mental conditions, disorders, or other such data; and
• Records of care provided and the methods of delivery and remuneration.

What makes this a complex issue is that a data provider often has no idea what data they’re handling, especially if said data is encrypted. If a destruction provider receives encrypted hard drives, all they know is that each drive is scrambled and obfuscated – there is often no way to know whether a given hospital hard drive is simply the workstation of a janitor or security guard, or whether the drive is from a payment processing center. With this in mind, providers should always approach data as if it contains the most highly sensitive information that could possibly be processed. By assuming the data being destroyed is highly sensitive while erasing drives, you ensure that the drives are handled for the most extreme case. In the worst case, you’re being overly careful, but the data is still being handled in a proper and legal way. In the best case, you are protecting yourself and handling data appropriately. Either way, you’ve done right by the law and your consumers.

A major provision of the Privacy Rule is the idea of safeguarding data. This is perhaps the second most important provision in the entire HIPAA guidelines (the most important being the Security Rule – more on that later). Under the data safeguarding provision, a covered entity and its business associates must at all times maintain reasonable and appropriate administrative, physical, and technical safeguards. While this section was obviously designed for healthcare providers specifically, it has serious implications for data providers. It might be easy to say “it’s their problem, not mine”, but these provisions cover data providers as well.
When handling medical data, regardless of form, the items must be secured physically behind a lock and key or other such mechanisms. The data must be encrypted or secured in another technical form. There must be security policies in place, as well as handling and chain-of-custody considerations. All of this must be done at each stage of data handling, including during the data transmission or destruction stages that data providers often engage in. This requirement does not expire simply because you are not a medical company – in fact, given how easily these safeguards are implemented, failing to adopt them could easily be ruled willful negligence, incurring an even greater fine than if you were judged to be accidentally negligent. This was a brief summary of the Privacy Rule and the expectations it sets forth for all data providers, medical or otherwise. These rules are intense, but they get even more stringent down the line. In the next piece of this series, we’re going to tackle the Security Rule and what it means for data providers and destruction experts. While this might seem like a lot to remember, you have to consider the type of data that you’re handling. This data is someone’s life, literally – exposing even some of this data, accidentally or otherwise, can cause huge personal and financial problems for patients, and could result in serious repercussions for the provider in question. The easiest way to think about the Privacy Rule is that old adage, the “Golden Rule” – do unto others as you would have them do unto you. Think of your private data – would you want it handled differently? Secure others’ data as you would your own, and follow these common-sense procedures.
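For a destruction provider, "assume every drive holds highly sensitive data" translates into verifiable handling steps. The sketch below shows an overwrite-and-verify pass over a file as an illustration of the idea; it is not a compliant sanitization procedure (real media sanitization follows a standard such as NIST SP 800-88 and must account for flash wear-leveling, which a file-level overwrite cannot defeat):

```python
import os
import tempfile

def overwrite_and_verify(path: str, passes: int = 3) -> bool:
    """Overwrite a file's contents in place, then confirm the final pattern.

    Illustration only: a file-level overwrite demonstrates the verify step,
    but standards-based sanitization operates on the whole device.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))      # random overwrite passes
        f.seek(0)
        f.write(b"\x00" * size)            # final, verifiable pattern
        f.flush()
        os.fsync(f.fileno())
    with open(path, "rb") as f:
        return f.read() == b"\x00" * size  # verification, not just trust

# treat the file as if it held PHI: wipe, then verify before release
fd, path = tempfile.mkstemp()
os.write(fd, b"patient-record: sensitive")
os.close(fd)
assert overwrite_and_verify(path)
os.remove(path)
```

The verification read-back is the part worth imitating: a chain-of-custody record can then assert that destruction was checked, not merely attempted.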
Point-to-Point Tunneling Protocol (PPTP) is a network protocol that enables the secure transfer of data from a remote client to an enterprise server by creating a VPN across TCP/IP-based data networks. PPTP encapsulates PPP packets into IP datagrams for transmission over the Internet or other public TCP/IP-based networks.

PPTP establishes a tunnel for each communicating PPTP network server (PNS)-PPTP Access Concentrator (PAC) pair. After the tunnel is set up, PPP packets are exchanged using enhanced generic routing encapsulation (GRE). A call ID present in the GRE header indicates the session to which a particular PPP packet belongs.

Network Address Translation (NAT) translates only the IP address and the port number of a PPTP message. Static and dynamic NAT configurations work with PPTP without the requirement of the PPTP application layer gateway (ALG). However, Port Address Translation (PAT) configuration requires the PPTP ALG to parse the PPTP header and facilitate the translation of call IDs in PPTP control packets. NAT then parses the GRE header and translates call IDs for PPTP data sessions. The PPTP ALG does not translate any embedded IP address in the PPTP payload.

The PPTP ALG is enabled by default when NAT is configured. NAT recognizes PPTP packets that arrive on the default TCP port, 1723, and invokes the PPTP ALG to parse control packets. NAT translates the call ID parsed by the PPTP ALG by assigning a global address or port number. Based on the client and server call IDs, NAT creates two doors at the request of the PPTP ALG. (A door is created when there is insufficient information to create a complete NAT-session entry. A door contains information about the source IP address and the destination IP address and port.) Two NAT sessions are created (one with the server call ID and the other with the client call ID) for two-way data communication between the client and server. NAT translates the GRE packet header for data packets that comply with RFC 2637.

PPTP is a TCP-based protocol. Therefore, when NAT recognizes a TCP packet as a PPTP packet, it invokes the PPTP ALG parse-callback function. The PPTP ALG fetches the embedded call ID from the PPTP header and creates a translation token for the header. The PPTP ALG also creates data channels for related GRE tunnels. After ALG parsing, NAT processes the tokens created by the ALG.

PPTP Default Timer

The default timer for PPTP is 24 hours. This means that a generic routing encapsulation (GRE) session will live for 24 hours when deploying static and dynamic NAT. Based on your PPTP configuration and scaling requirements, you can adjust the PPTP default timer. Some PPTP clients and servers send keepalive messages to keep GRE sessions alive. You can adjust the NAT session timer for PPTP sessions.
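The call-ID translation described above can be illustrated with a short parser for the enhanced GRE header defined in RFC 2637, which places a 16-bit call ID at byte offset 6, after the flags/version, protocol type (0x880B for PPP), and payload length fields. The sample flag bits and call-ID values below are made up for illustration:

```python
import struct

def parse_enhanced_gre_call_id(packet: bytes) -> int:
    """Extract the call ID from an enhanced GRE header (RFC 2637).

    Layout: flags/version (2 bytes), protocol type (2 bytes, 0x880B
    for PPP), payload length (2 bytes), call ID (2 bytes).
    """
    _flags, proto, _length, call_id = struct.unpack("!HHHH", packet[:8])
    if proto != 0x880B:
        raise ValueError("not an enhanced GRE / PPP packet")
    return call_id

def rewrite_call_id(packet: bytes, new_call_id: int) -> bytes:
    # the per-data-packet rewrite a PPTP-aware NAT performs under PAT
    return packet[:6] + struct.pack("!H", new_call_id) + packet[8:]

# K bit set, version 1, carrying PPP; call ID 0x1234
pkt = struct.pack("!HHHH", 0x2001, 0x880B, 0, 0x1234) + b"ppp-payload"
assert parse_enhanced_gre_call_id(pkt) == 0x1234
assert parse_enhanced_gre_call_id(rewrite_call_id(pkt, 0x5678)) == 0x5678
```

Because the call ID plays the role a port number plays for TCP or UDP, this rewrite is what lets a PAT device multiplex several PPTP clients behind one global address.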
A data leak is when information is exposed to unauthorized people due to internal errors. This is often caused by poor data security and sanitization, outdated systems, or a lack of employee training. Data leaks could lead to identity theft, data breaches, or ransomware installation.

What's the Difference Between a Data Leak and a Data Breach?

It's important to distinguish between a data leak and a data breach. These terms are often used interchangeably, but they do have one notable difference. While data leaks and data breaches both involve the unauthorized exposure of data, the cause of the exposure determines whether it's a leak or a breach. A data leak occurs when an internal source exposes information. Meanwhile, a data breach is caused when an external source breaches the system in a cyberattack. Criminals can use a variety of methods to try and break into a network. In other words, a data leak is usually an accident, while a breach is often intentional and malicious. Sometimes the line blurs between a leak and a breach because criminals use information in a data leak to launch a large-scale data breach. Take an email password leak, for example. If one email account is compromised, a criminal can then use that account to commit business email compromise scams like invoice fraud or ransomware attacks. Criminals only need one data leak to turn it into a massive data breach. Leaks are as much a serious threat to organizations as data breaches. That's why organizations should understand what causes data leaks and how to prevent them.

How Do Data Leaks Happen?

Data leaks occur because of an internal problem. They don't usually happen because of a cyberattack. This is encouraging news for organizations since they can proactively detect and remediate data leaks before they are discovered by criminals. Let's review some of the most common causes of data leaks.

Bad infrastructure: Misconfigured or unpatched infrastructure can unintentionally expose data.
Having the wrong settings or permissions, or an outdated software version, may seem innocent, but it can potentially expose data. Organizations should ensure that all infrastructure is carefully configured to protect data.

Social engineering scams: While data breaches are the result of a cyberattack, criminals often use similar methods to create a data leak. Then the criminal will exploit the data leak to launch other cyberattacks. For example, phishing emails may successfully gain access to a person's login credentials, which could result in a bigger data breach.

Poor password policies: People tend to use the same password for multiple accounts because it's easier to remember. But if a credential stuffing attack happens, it could expose several accounts. Even something as simple as having login credentials written in a notebook could lead to a data leak.

Lost devices: If an employee loses a device containing a company's sensitive information, it qualifies as a potential data leak. If a criminal gains access to the device's contents, it could lead to identity theft or a data breach.

Software vulnerabilities: Software vulnerabilities can easily turn into a huge cybersecurity issue for organizations. It's possible for criminals to take advantage of outdated software or zero-day exploits and turn them into a variety of security threats.

Old data: As businesses grow and employees come and go, companies can lose track of data. System updates and infrastructure changes can accidentally expose that old data. Legacy data storage practices create ideal conditions for a data leak. This can compound in an organization with infosec employee turnover: losing institutional knowledge of archaic data systems can lead to vulnerabilities and accidents.

Cybersecurity systems need to ensure that data leaks are prevented. Criminals can easily use data leaks to perpetrate further crimes.
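The "poor password policies" cause is one of the easier ones to audit for. Below is a toy check that flags accounts sharing a password, which is exactly the exposure a credential-stuffing attack exploits; the account names and passwords are made up, and a real audit would compare salted hashes or query a breach-notification service rather than handle plaintext:

```python
import hashlib
from collections import defaultdict

def find_reused_passwords(accounts: dict[str, str]) -> list[set[str]]:
    """Group accounts that share a password.

    Each group is credential-stuffing exposure: leaking any one account's
    password compromises every other account in the same group.
    """
    by_password = defaultdict(set)
    for account, password in accounts.items():
        # hash so the grouping key is never the raw password itself
        digest = hashlib.sha256(password.encode()).hexdigest()
        by_password[digest].add(account)
    return [accts for accts in by_password.values() if len(accts) > 1]

accounts = {"email": "hunter2", "bank": "hunter2", "forum": "xkcd-936"}
assert find_reused_passwords(accounts) == [{"email", "bank"}]
```

A check like this, run against an organization's own credential vault, turns the vague advice "don't reuse passwords" into something measurable.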
How to Prevent Data Leaks

Most data leaks are caused by operational problems, including technical and human error. Preventing data leaks starts with a strong, multi-layered cybersecurity approach and respect for data privacy. While security teams should provide a robust defense, they should also maintain an incident response plan so the organization can recover quickly from a cyberattack. Here are a few tactics to prevent data leaks:

- Assess and audit security: Verify that your business has the necessary safeguards and policies in place to protect data. This is especially crucial for regulatory compliance. If you find any weak points, it's imperative to fix them.
- Restrict data access: Employees should only have access to the data they need to do their jobs.
- Evaluate and update data storage: Antiquated data storage practices create vulnerabilities, so regularly review the data you collect and how you store it.
- Delete old data: Regularly practicing data sanitization goes a long way toward reducing your organization's risk of a leak.
- Train employees on cybersecurity awareness: Employees should receive regular security awareness training, including how to spot malicious emails and report them to the security team. Think of employees as another line of defense against data breaches.
- Never trust, always verify: IT systems should not inherently trust any device or account on company networks. Adopt a zero-trust security approach to prevent unwanted access to sensitive data.
- Use multi-factor authentication: A strong password policy for employees is good, but don't rely on it alone. Multi-factor authentication ensures that a leaked password is not, on its own, enough to cause a data breach.
- Monitor third-party risk: A compromised third-party vendor account, such as an email account, can open the door to a supply chain attack and a large-scale data leak.
- Properly off-board employees: Ensure you're fully removing access to any software, systems, and files when an employee leaves. This includes disabling accounts and repossessing company equipment.

To learn more about how Abnormal Security can prevent data leaks, request a demo of the platform today.
Why Solid State Drives or SSDs are not always fastest for Bit9

I had a client who believed that something was wrong with our application because the tests performed worse on sequential writes on an SSD array than on a spinning disk array. He said he wanted a “technical no-BS answer” — so I gave him one. Hopefully it can help someone else as well.

#1) It's The Manufacturers' Fault! — SSDs have their own drive interface, bridge, buffers, and gates that can be configured in a number of ways by the manufacturer. Every time a read is performed on an SSD, nothing changes, so the impact on the NAND flash memory is minimal. A write, however, causes the gates to block current to a chain of transistors, changing its value. Without going into an explanation of the role of electrons in electrical fields, this change creates significant wear on the memory cells of the drive. In fact, the larger the write, the greater the wear on the drive. Therefore, a standard was reached early on that set 90%-95% of an SSD's interface bandwidth to be used for reads, and 5%-10% to be used for writes, to limit simultaneous electrical changes. This limitation alone can slow a large-block sequential write stream.

#2) You Can Not Change The Laws Of Physics, Jim! — When a hard drive has “free space” to use and wants to write to it, it can just write the magnetic signature. One step — easy peasy — magnetism rocks. But you can't just change the orientation, relative location, or charge of a group of electrons. Without a supercollider, your only point of change is called the helicity, which is affected by changing the flow of electricity. In an SLC (single level cell) SSD, your capacity is lower, but you are modifying one electrical path containing one bit, flipping it between a 0 and a 1. These cells form a grid, and these grids contain your data. SLC drives usually show lower performance specs, but this can be misleading.
They generally appear slower because they have to change more cells to store the same amount of data, but each cell can only hold two values: 1 (powered) or 0 (unpowered). Now let's take the far more common type of SSD known as a Multi-Level Cell (or MLC) SSD. In this case, each cell holds two bits of information, Bit-A and Bit-B, each controlled by a separate electrical path. So instead of 0 or 1, that one cell can be 00, 01, 10, or 11. Needless to say, that allows a lot more data to be stored in the same number of cells, but writes involve a LOT more electrical changes, creating far faster wear on the cells. (And as those of you familiar with particle decay will realize — YES — cells DO wear out! Just because you can't SEE moving parts without an electron microscope doesn't mean they don't exist!)

So why do we care? Here's how writes happen to these three types of media:

Hard disk:
- Change the magnetic state of each bit/byte/block in one pass. Done. (Yes, the arm needs to be moved and articulated — but we're talking sequential writes here.)

SLC SSD:
- Examine the grid of cells needed.
- If the grid contains any zeroes, send power to all of the cells required to change their values to ones. (This is what a TRIM command does, by the way.)
- Now that the grid is “reset”, cut the power to the cells that need to go to zero.
Those three steps take time — especially with larger blocks, and ESPECIALLY when they have to go through a drive interface like SAS, which was built for hard disks!

MLC SSD:
- Examine the grid of cells needed, which takes longer because of the extra bit per cell.
- If any value other than 11 exists, send power to all of the cells to change them to 11 in one burst.
- With the grid reset, cut the current to Bit-A of each cell that needs a value of zero.
- Now cut the current to Bit-B of each cell that needs a value of zero.
- Due to the cells' proximity, confirm the value of each two-bit cell.
So when working with write-intensive applications, especially sequential ones where the drive needs a sequential grid of cells to take the data, SSDs tend to perform worse, because it is simply more commands taking more time through a serial drive interface. The extra “gotcha” in all of this is that in write-heavy environments, the SSDs will wear out far sooner than even 15k rpm hard disks (especially the MLC SSDs)!

Why PCI-E Flash is faster than everything else, even for sequential writes: it sits directly on the data bus and does not have to go through a serial drive interface, allowing it to perform FAR more actions at one time. The reason PCI-E flash is so much more expensive has to do with the processing and sorting of all of those commands without a physical controller to drive it.
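The SLC/MLC capacity trade-off described above can be illustrated with a toy model. This is a deliberate simplification: real drives add error correction, over-provisioning, and wear leveling on top of the raw cell math.

```python
def cells_needed(num_bytes: int, bits_per_cell: int) -> int:
    """Cells required to store a payload at a given cell density."""
    total_bits = num_bytes * 8
    # Round up: a partially used cell is still a cell.
    return -(-total_bits // bits_per_cell)

payload = 4096  # one 4 KiB block
slc = cells_needed(payload, bits_per_cell=1)  # SLC: one bit per cell
mlc = cells_needed(payload, bits_per_cell=2)  # MLC: two bits per cell
print(f"SLC cells: {slc}, MLC cells: {mlc}")  # MLC needs half the cells
```

Halving the cell count is exactly why MLC dominates on price per gigabyte, while the extra per-cell states are why each write touches more electrical levels and wears the cells faster.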
In a previous post, we contrasted the two prominent cloud deployment technologies – virtual machines and containers. The ability to virtualize server hardware kicked off the cloud revolution by making computing power just another utility available on demand. But containerization has taken cloud-based deployment to the next level. The trend in software development these days is to build applications to run on cloud platforms from the start – “cloud native” applications, as they are called. And containers are the means to do so. Consider just one example of their uptake: Google says it runs everything – from Gmail to YouTube to Search – on containers, and spins up billions(!) of containers each week.

Containerization is a logical confluence of several trends – most notably, Agile- and DevOps-based software development, which favors an approach that modularizes applications into small, independent microservices (see our blog post on this). Microservices make it easy to add, modify or remove features without disturbing the whole application. And, as small, self-contained and portable units of code, containers form the natural deployment units for microservices. This allows different microservices to be scaled up, depending on demand, by spinning up more container instances. Such containerized microservice-based workflows lead to increased velocity of application development and deployment.

To better see the advantages of containers, it is easiest to contrast them with the older cloud deployment technology – virtualization. As we explained in our earlier post, each virtual machine (VM) runs its own operating system (OS), cleverly partitioned by the hypervisor software to give the impression to each VM that it has exclusive access to the server hardware on which it runs. But therein lies a problem. A VM is heavyweight, because each comes with its own OS.
Not only does the size of this OS often top several gigabytes, but it can take up all available RAM assigned to the VM – even if the applications inside that VM don't need that much. Thus, if there is a surge in demand for an application, new VMs may have to be deployed. This takes time, as the VM has to be created and the OS and application code initialized. Moreover, each new VM takes up more space and memory, which may not all be used. This leads to poor server hardware utilization.

By contrast, a container is a self-contained piece of software which contains the application code (for a microservice) together with any dependent libraries and binaries needed for it to run. A container is built to run on a given OS – usually Linux – and thus does not need a separate OS instance for each new image. Also, the container runtime, which takes the place of the hypervisor in VMs, abstracts away any dependencies on the chosen OS, such as the differences between Linux distributions.

It is now possible to see why containers have become so popular. We'll spell out 10 reasons why:

- Size: Containers are sized in megabytes, or less. Thus, one can spin up thousands of containers on a server without incurring additional overhead for each instance. Consequently, one can grow the number of containers running on a server by a very large number before adding servers – a large savings in capital expense (CAPEX) and operating expense (OPEX).
- Uniformity: Most development environments are built around a given OS – usually Linux – and its associated tools. As containers are written to work with a specific OS, you can build once for that OS environment.
- Portability: The phrase “build once” must be paired with “run everywhere” to complete it. And a containerized microservice can be run on another Linux machine with minimal or no changes. That's because a container carries all its dependencies with it, wherever it goes.
Thus, a containerized microservice can be moved from a developer's laptop running Ubuntu, to an on-premise server on SUSE Linux, to a public cloud – with little or no friction.
- Consistency: DevOps teams typically use a particular programming language (or a small set) with its associated tools and frameworks. As a container is a self-contained piece of code, so long as it can run on the chosen OS, the team does not have to worry about different deployment environments and can concentrate on building their specific microservice using their preferred language and tools.
- Choice: A corollary of portability is that a container runs just as well on any cloud platform, so the choice of cloud provider can be made on the basis of business factors such as cost, geographical reach, etc. This freedom to choose is very important for many IT managers.
- Elasticity: As demand for a microservice grows (or falls), the number of its containerized instances can be automatically grown (or reduced) with minimal overhead – a key advantage of using a cloud platform. It takes seconds to add or remove a container, in contrast to minutes to spin up a VM.
- Upgradeability: If a microservice needs to be replaced by a newer version, or its container image is found to be faulty (perhaps because a supporting library has a newly discovered security flaw), it can be gracefully removed and replaced with the new version. And, if for some reason a container crashes, it does not affect the other containers on that server. Container orchestration tools such as Kubernetes – which we'll cover in a separate post – make this changeover easy.
- Agility: Current Agile and DevOps based software development has greatly reduced the time between coding, testing and deployment – often called “continuous deployment.” Starting with containers as the unit of deployment right from the start makes these workflows uniform and frictionless, and many steps can be automated using a variety of tools.
- Standardized: Google, Docker and other early proponents of containers open-sourced their technology under the governance of the Open Container Initiative (OCI). It has standardized container image formats and the runtime to allow a compliant container to be portable across all major operating systems and platforms.
- Training: Despite the trepidation with which cost-conscious CIOs confront new technologies, container technology does not impose any unusual burden on finding and training developers with the right skill set. Almost all developers know Linux, and the isolation primitives behind containers are a built-in part of the Linux kernel. The underlying technology has been around for years, although its usefulness for deploying applications was only widely realized much later.

These ten items have hopefully motivated the growing trend for container-based development. On a cautionary note, not every application can be containerized, but many can. For those that can – and it is the growing trend to code new applications as containerized microservices from the start – containers provide the agility needed for today's continuous build, integrate and deploy environment. Contact us today with any questions!
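The elasticity benefit listed above is what orchestrators automate. As a sketch, the scaling rule used by Kubernetes' Horizontal Pod Autoscaler boils down to a one-line proportion; the real controller adds tolerances, stabilization windows, and min/max bounds on top of this simplification:

```python
import math

def desired_replicas(current_replicas: int, current_metric: float,
                     target_metric: float) -> int:
    """Scale replica count in proportion to observed load vs. the target."""
    return max(1, math.ceil(current_replicas * current_metric / target_metric))

# Load doubles: 4 containers at 200% of target CPU -> scale to 8.
print(desired_replicas(4, current_metric=200, target_metric=100))  # 8
# Load drops: scale down, but never below one replica.
print(desired_replicas(4, current_metric=40, target_metric=100))   # 2
```

Because each new replica is just another container instance starting in seconds, this loop can run every few seconds; the same rule applied to VMs would be throttled by minutes-long boot times.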
A DDoS, or Distributed Denial of Service, attack involves cybercriminals intentionally overwhelming a server with data in order to use up all its bandwidth. With all the server's bandwidth occupied, VoIP activity and all other internet activity grind to a halt. Disruptions like this can seriously affect a company's day-to-day operations, as well as its bottom line.

Unfortunately, DDoS attacks are only becoming more common. The equipment needed to carry out a DDoS attack is becoming more advanced, which makes executing these attacks cheaper and faster for cybercriminals. In fact, 70 percent of organizations surveyed by Corero said that they experience approximately 20-50 DDoS attacks per month. And according to the security company Cloudflare, the average cost of a successful DDoS attack is around $100,000 per hour.

So, what can you do to adequately address these attacks when they happen? First and foremost, it's important to identify DDoS attacks early. The sooner you're able to recognize a problem, the sooner you can work to fix it, right? Set yourself up for success by appointing a DDoS czar at your company, a.k.a. someone whose responsibility it is to act should you come under attack.

Once an attack starts, there are several steps you can take to mitigate the damage:
- Overprovision bandwidth: Though keeping a reserve of bandwidth for emergency situations is unlikely to halt a DDoS attack in its tracks, it can buy you the valuable time you'll need to contact security experts.
- Contact your ISP: Generally, your ISP (Internet Service Provider) is responsible for the security of your network connection and will have staff on hand who can help to mitigate the damage of a DDoS attack. Contacting your ISP and making them aware of the attack should be one of your top priorities.
- Reach out to a DDoS specialist: Because DDoS attacks are so complex, you'll need the help of an experienced expert to get things back under control.
Part of the planning you can do before cybercriminals strike is establishing a partnership with a credible DDoS specialist, like the experts at MindPoint Group, who'll be able to come to your aid should you experience an attack.
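Identifying an attack early, as recommended above, usually starts with watching the request rate. A minimal sliding-window sketch; the one-second window and 100-request threshold are placeholders, and real detection combines many more signals (source diversity, protocol mix, geography):

```python
from collections import deque

class RateMonitor:
    """Flag traffic bursts: too many events inside a sliding time window."""

    def __init__(self, window_seconds: float, threshold: int):
        self.window = window_seconds
        self.threshold = threshold
        self.events = deque()

    def record(self, timestamp: float) -> bool:
        """Record one request; return True if the rate looks anomalous."""
        self.events.append(timestamp)
        # Drop events that have aged out of the window.
        while self.events and self.events[0] <= timestamp - self.window:
            self.events.popleft()
        return len(self.events) > self.threshold

monitor = RateMonitor(window_seconds=1.0, threshold=100)
# Simulate a burst: 500 requests inside one second trips the alarm.
alerts = [monitor.record(t / 1000) for t in range(500)]
print(alerts[-1])  # True
```

A monitor like this is what would page the "DDoS czar": it cannot stop the flood by itself, but it shrinks the time between attack onset and the calls to your ISP and mitigation specialist.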
This article shares the process of setting up Wireshark and using it to capture network traffic.

Prerequisites:
- Administrator privileges
- Intermediate understanding of network and systems administration

Follow these steps:
- Navigate to the Wireshark download page.
- Download the version compatible with your operating system.
- Install Wireshark, and then open the application.
- In the top menu, go to Capture > Options.
- Click on Manage Interfaces.
- Check the boxes for the network interfaces you would like to capture, for example:
  - The network interface card(s) used by the FOIP/VoIP fax device to transmit packets.
  - The Local Area Network Connection. Most dedicated fax servers have more than one Local Area Connection; if you are unsure, check all of them or verify with a systems administrator.
- Click OK.
- Verify the interfaces to capture by selecting and highlighting them. To select multiple lines, hold down the CTRL button while clicking the interface name. Only the highlighted interfaces will be captured.
- When all desired interfaces are highlighted, click Start to begin the capture.
- Reproduce the problem:
  - If troubleshooting faxes, send or receive a fax on the problematic FaxMaker line.
  - If troubleshooting an HTTP address, navigate to the URL.
- After the transmission has finished, with or without errors, navigate back to the Wireshark application and click the red square to stop (Capture > Stop).
- In the menu, click File > Save As.
- Select Wireshark /tcpdump/ ... pcap from the 'Save as type' drop-down menu.
- Name the file something like capture.pcap.
- Click Save.
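The capture.pcap file saved in the last step has a simple, documented binary layout. As an aside, a minimal sketch of reading its 24-byte global header in Python (classic pcap format only; Wireshark can also save the newer pcapng format, which this does not parse):

```python
import struct

PCAP_MAGIC_BE = 0xA1B2C3D4  # written by a big-endian host, microsecond stamps
PCAP_MAGIC_LE = 0xD4C3B2A1  # same magic as seen when the writer is little-endian

def read_pcap_header(data: bytes) -> dict:
    """Parse the 24-byte classic-pcap global header."""
    magic = struct.unpack(">I", data[:4])[0]
    if magic == PCAP_MAGIC_BE:
        endian = ">"
    elif magic == PCAP_MAGIC_LE:
        endian = "<"
    else:
        raise ValueError("not a classic pcap file")
    # version major/minor, thiszone, sigfigs, snaplen, link-layer type
    major, minor, _tz, _sigfigs, snaplen, linktype = struct.unpack(
        endian + "HHiIII", data[4:24])
    return {"version": (major, minor), "snaplen": snaplen, "linktype": linktype}
```

Checking that the file starts with the pcap magic number and reports version 2.4 is a quick way to confirm the capture saved correctly before emailing it to a support team.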
For most of us, cybersecurity is a part of our daily lives. We have to be aware of how to act responsibly online to keep our own data safe and protect ourselves from malicious activity. It's no different for organizations handling and utilizing customer data, except that with such strict compliance targets and financial penalties to contend with, failing to invest in cybersecurity can have far more disastrous consequences.

More recently, however, there has been an increased drive for 'cyber resilience'. While some take the two terms as synonymous, they are, in fact, quite distinct and can even occur independently of one another. Both cybersecurity and cyber resilience are essential for keeping businesses secure, but what are the real differences between them? In this article, we go over what you need to know about cybersecurity vs. cyber resilience.

Cybersecurity vs. Cyber Resilience

To put it simply, 'cybersecurity' is about protecting yourself from digital threats. This often means:
- Securing devices and services against viruses, theft, cybercrime, and other malicious activity
- Updating all software and investing in antivirus protection, firewalls, and so on
- Training members of staff on their own responsibilities and how to support security with daily best practices
- Meeting compliance standards to protect user information
- Protecting value-generating products and services

'Cyber resilience', on the other hand, is about working to mitigate damage. No security setup is 100% foolproof, especially with threats evolving all the time. Cyber resilience specialists acknowledge that, at some point, some kind of failure will occur, and businesses must prepare themselves to bounce back, continue generating value, and minimize damage as much as possible.
Cyber resilience often involves:
- Upskilling staff to avoid internal issues relating to human error
- Preparing plans to recover from PR issues resulting from breaches
- Creating backups of critical data
- Ensuring essential functions like customer service have offline backups in case of an emergency
- Regularly reviewing the organization's preparedness with analysis and simulations

In essence, cybersecurity is about securing your company against threats, while cyber resilience is about preparing your company to deal with and recover from them.

Optimizing Cybersecurity and Cyber Resilience

While cybersecurity and cyber resilience may differ, they are equally essential. Both contribute towards business continuity in the face of serious cyber threats, and it's important for organizations to make them both ongoing priorities. The best way to create an optimized solution is to defer to a specialist. The precise security requirements of individual organizations can vary. Cybersecurity or cyber resilience experts can assess the needs of an organization and create ongoing plans to put the necessary protection in place.

RESILIA Cyber Resilience is the world's only dedicated cyber resilience framework. Created by AXELOS, the organization behind frameworks such as ITIL 4 and PRINCE2, it offers an approach based on the knowledge and insight of leading cyber resilience specialists. The framework itself demonstrates the importance of cyber resilience not only for a business's ongoing operations but also its strategic goals. It also outlines steps for implementing cyber resiliency, with insight and best practices regarding resilience strategy, design, transition, operations, risk management, and continuous improvement.

How Can Good e-Learning Help Me Optimize My Cyber Resilience Strategy?

Good e-Learning is an award-winning online training provider with a diverse portfolio of fully accredited courses.
We cover a number of essential corporate domains, including cybersecurity and cyber resilience. Our in-house team of e-learning specialists works with leading subject matter experts to deliver courses that package certification training alongside unique practical insight. This not only helps candidates pass their exams but also leaves them equipped to begin applying their training in practice. The courses themselves come with a range of engaging online and blended training assets, including instructor-led videos, regular knowledge checks, and downloadable whitepapers. Our support team can provide free exam vouchers and resits, and candidates can access our courses any time and from any web-enabled device thanks to the free Go.Learn app. Good e-Learning also specializes in corporate training. We offer bespoke LMS platforms designed to suit the exact goals and training requirements of our clients. As our LMS offers dynamic reporting, we also take a proactive approach to helping teams and individuals succeed. Each client also receives a direct point of contact to discuss their learning plans as they scale and evolve. Are you interested in writing for Good e-Learning? We are currently accepting guest contributions and content exchanges in areas like ITSM, DevOps, and Cyber Resilience. Visit our Write for Us page to find out more, or contact a member of our team today!
The fleet of satellites that enable accurate weather forecasting is aging, and the program intended to replace them is both behind schedule and over budget. As a result, a lengthy gap without the satellites' data is looking more likely.

Artist's rendering of a polar-orbiting weather satellite. (Image: NOAA)

Aging weather satellites critical to forecasts in the United States – like those that so accurately predicted the path of Hurricane Sandy – are living on borrowed time, and there is increasing concern over an impending gap in satellite coverage that could leave meteorologists without some of their most important tools. “We are looking at, potentially, a 17-month gap,” said David Powner, director of IT management issues for the Government Accountability Office. “Right now, there is a high probability of some gap occurring.”

Why the potential gap? Meteorologists at the National Oceanic and Atmospheric Administration, a scientific agency under the Commerce Department, have been using polar orbiting satellites to forecast weather for decades. These satellites circle the earth regularly at low orbit and provide real-time data like storm direction, speed and intensity, all of which feed into forecast models. But those satellites have aged, and in 2002 a joint program managed by the Department of Defense, NOAA and NASA, called the National Polar-orbiting Operational Environmental Satellite System (NPOESS), was supposed to replace them. By 2010, Powner said – eight years after a development contract for NPOESS was awarded – launch dates had been delayed five years and cost estimates had more than doubled to about $15 billion. The program was disbanded, and key management responsibilities were transferred to the separate agency offices. The DOD established its Defense Weather Satellite System. The responsibility for replacing the aging polar orbiting satellites transferred to NOAA, which established the Joint Polar Satellite System (JPSS) program.
Even with NASA assisting NOAA on the project, a true replacement polar orbiting satellite will not be operational until at least 2017. "The projected loss of observing capability will have profound consequences on science and society, from weather forecasting to responding to natural hazards," said Dennis Hartmann, in a report he chaired for the National Research Council on the polar orbiting satellites. In it, he said "budget shortfalls, cost-estimate growth, launch failures and changes in mission design and scope have left U.S. earth observation systems in a more precarious position than they were five years ago. Our ability to measure and understand changes in Earth's climate and life support systems will also degrade," Hartmann said in the report.

Over the summer, the $13 billion JPSS program was further criticized in reviews by the GAO, the Commerce Department's inspector general and a third-party review team led by retired aerospace executive A. Thomas Young. Powner said the GAO recommended in June that NOAA establish mitigation plans because a gap in satellite data is highly likely. “Until these risks are mitigated and resolved, civilian and military satellite data users may not have the information they need for timely weather forecasting, thereby risking lives, property, and commerce,” the GAO report reads.

Powner said potential contingency plans could include a combination of utilizing European satellite data, a deal with the DOD to harness its satellite data, or increased ground-based observational data, though he added those options could be costly. “They are going to need detailed procedures with contingency plans,” Powner said. “The contingency plans should have different scenarios because we're not sure what the gap is going to be.
Our point is that it looks pretty clear there will be some gaps. What are you going to do to fill that gap? What are your options?”

NOAA, which did not respond to calls from FCW for comment, has publicly addressed some concerns highlighted by the reviews. In September, Jane Lubchenco, Under Secretary of Commerce for Oceans and Atmosphere, acknowledged JPSS had become a “national embarrassment” and vowed an improved, restructured program and the creation of contingency options if a gap in polar orbiting satellite observations does occur.

Whether a gap in polar orbiting satellite observation occurs, and how long it lasts, may depend on a “converted demonstration satellite” being used for “operational purposes,” according to Powner. The satellite, originally called the NPOESS Preparatory Project (NPP), was launched by the JPSS program office in October 2011. Powner said it was not designed for long-term operations – its life expectancy is three to five years – yet NOAA meteorologists began putting data from the satellite to use in May 2012. “If NPP lasts a full five years, and we stay on track for a 2017 launch of the new satellite, we're looking at a 17-month gap,” Powner said.
A patent document recently made public, entitled “Local Device Awareness”, describes a number of electronic devices within close proximity being able to communicate with each other with little to no user input, Apple Insider reports. With the new patent in place, Apple could make device connections simpler, as well as less restrictive than the current need for devices to be on the same wireless network in order to communicate. With that in mind, iPhones and Macs, as well as Wi-Fi compatible printers, could easily communicate with each other over wireless or Bluetooth technology. Moreover, the patent also makes reference to RFID, a short-ranged wireless standard currently found in very few devices. According to the patent, the system could even use GPS to locate pieces of hardware and display them on a map. Not only would the invention offer great connectivity, it would also greatly simplify interactivity between such devices, such as using an iPad a great distance away to display documents on a projector. It could also enable interesting innovations in multiplayer gaming: one could, for example, use an accelerometer-equipped iPhone to roll a set of virtual dice, or even use RFID-enabled physical dice, among numerous other possible applications of the technology.
When looking for a new television provider, it becomes clear that the landscape is dominated by one basic choice – satellite or cable? We all know that cable is just what it sounds like: television service provided by transmitting radio frequencies through cables. In contrast, DISH Network provides premium television services through direct broadcast satellite. But what exactly is a direct broadcast satellite?

At the most basic level, a direct broadcast satellite is a satellite system that transmits television signals to the end user via a satellite in geosynchronous orbit with the Earth. A direct broadcast satellite relies on microwave and radio frequencies to transmit television data in a digital format. This means that all of DISH Network's television services are digital, not analog.

Let's break that down a little more. A geosynchronous orbit means that the satellite is rotating 22,247 miles above the equator, in the Clarke Belt, at roughly the same speed that the Earth is rotating. Since the satellite and the Earth are rotating together, the satellite is able to maintain a constant connection with a stationary dish at a specific geographical location on Earth. This allows one satellite, or a series of satellites, to provide constant coverage in designated areas.

An important aspect of DISH Network's direct broadcast satellites is the EchoStar spot beam technology that their satellites utilize. This technology enables DISH to deliver local channels to a specific geographical area via a satellite “spot beam”. The best way to understand spot beams is to think of a satellite signal as a flashlight: wherever the flashlight is shining, the local channels are transmitted. This allows DISH Network to use the same radio frequencies to deliver different channels to you, depending on which spot beam your area is being targeted with.

DISH Network's digital receivers are TVRO (TV receive only), so they receive information from the satellites but don't send any back.
A customer’s satellite dish receives the downlinked signal from a DISH Network satellite through a low-noise block feedhorn (LNBF), which passes the signal on to the customer’s receiver and television. Because the receiver only listens, this keeps your information private and ensures stable service. Consumers today expect exemplary picture quality and seamless high-speed service. DISH Network surpasses consumer expectations by utilizing cutting-edge technology like direct broadcast satellites to provide superior service. Buy DISH Network today and experience the power of modern technology in your everyday life.
<urn:uuid:9aebfaff-a6b1-45e1-95ac-323210d9184f>
CC-MAIN-2022-40
https://deals.godish.com/direct-broadcast-satellite/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00737.warc.gz
en
0.909571
554
2.859375
3
Man-in-the-middle (MITM) attacks, in which an attacker secretly relays and possibly alters the communication between two parties who believe they are communicating directly with each other, are a very real threat, especially when it comes to authentication. Various solutions have been put forward to prevent, or at least manage, this threat. These have met varying levels of success, although it is somewhat surprising that some of the most common technologies for preventing MITM authentication attacks are less effective than most people think. Do Public Keys Hold the Key? Public key cryptography, the basis of public key infrastructure (PKI), introduced to the world of information technology the concept of “asymmetric” encryption: a way in which a message can be encrypted by anyone but unlocked by only one particular user. The public key is accessible to all and can be used to encrypt a message, but only the intended recipient, the one who possesses the private key, can actually read it. This seemed to offer a great solution to encryption and decryption without the need to pre-share a common key, one which found the balance between security and usability. Public key encryption, however, rests on a very critical step, a step that exposes a serious weakness in the entire PKI scheme. Although there is no need to share a symmetric key, there is still a need for trust between the two parties. An initial interaction needs to take place between the two parties at the beginning of any session. Before the keys needed to encrypt data can be generated, the server must present the client with a digital certificate verifying its identity. This is what occurs, for instance, every time a user logs in to their Gmail account. The certificate issued by Google allows the user’s browser to know that it is, in fact, Gmail it is “talking” to, and not a digital imposter. Currently, the PKI approaches to certification fall into two categories.
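The asymmetric idea described above can be demonstrated with textbook RSA. The primes below are deliberately tiny so the arithmetic is visible; real keys are thousands of bits long, so treat this purely as an illustration of "encrypt with the public key, decrypt with the private one":

```python
# Textbook RSA with toy primes -- insecure, for illustration only.
p, q = 61, 53
n = p * q                       # public modulus (3233)
phi = (p - 1) * (q - 1)         # 3120
e = 17                          # public exponent, part of the public key
d = pow(e, -1, phi)             # private exponent (2753), kept secret

message = 65
ciphertext = pow(message, e, n)    # anyone can encrypt with (e, n)
decrypted = pow(ciphertext, d, n)  # only the private-key holder can reverse it
print(ciphertext, decrypted)       # 2790 65
```

Note that nothing here establishes *whose* public key you are using; binding a key to an identity is exactly the job the certificates discussed below are meant to do.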
For a long time, the most common method used to implement this has been the use of Certificate Authorities, or “CAs”, trusted third parties that “sign” digital certificates, confirming the identity of the parties. It soon became clear, however, that Certificate Authorities presented a weak link in the chain of security. Certificate Authorities have been shown to be vulnerable to attacks. Once these companies are compromised, the certificates can no longer be trusted, completely undermining the public key encryption they support. In fact, private keys themselves are also susceptible to the same danger. One of the more important revelations of former CIA contractor and whistleblower Edward Snowden was the effort of Western intelligence agencies to breach communications companies in order to steal crypto keys and take advantage of certificate authority vulnerabilities. Spreading the Risk Such a compromise would be disastrous; the very element that ensures trust in the system would be undermined, calling into question the integrity of the system in its entirety. Understanding the risk that a hacked Certificate Authority poses, some organizations have recently offered at least a partial solution to this problem: instead of storing certificates in one location, which makes them vulnerable to hacking and alteration, they are spread out to a worldwide community of users. While this is certainly a step in the right direction, it doesn’t address the other fundamental problem with the Public Key Infrastructure (PKI) system: certificates and private keys themselves are vulnerable to being hacked. As long as an encryption system is founded on something cybercriminals can steal, it will remain vulnerable. It’s just a matter of time before someone compromises the system. A blockchain-type answer has been proposed to mitigate this; however, it in no way addresses all the logistical burdens on businesses involved in maintaining certificates.
The premise of blockchain technology, decentralization, is a great idea for the actual server infrastructure, but what about the credentials? If the route by which the credentials are transferred is intercepted (MITM), then anyone can use those credentials to impersonate the party to the verifier. In fact, the word “decentralization” has been floating around the authentication industry for some time, and usually refers to the credentials being stored on devices (especially mobile) to reduce the risk of a single breach of a repository and lower the cost of IT maintenance. However, especially with blockchain solutions, duties such as lifecycle management, submitting to validation checks, and archiving services for certificates present huge hassles for enterprises, which need to divert large amounts of resources to these tasks. Even more importantly, though the goal of removing the target and decentralizing credentials is a noble one, if a breach occurs through a man-in-the-middle attack, the compromise of one user’s credentials can lead to a breach that affects the entire system.
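The core danger the article describes, intercepting credentials in transit, is easy to see in an unauthenticated key exchange. Here is a toy Diffie-Hellman run with an active attacker (tiny numbers chosen for readability; real exchanges use large groups and, crucially, authenticated public values):

```python
# Unauthenticated Diffie-Hellman with a man in the middle.
p, g = 23, 5                      # toy public parameters

a, b, m = 6, 15, 13               # Alice's, Bob's, and Mallory's secrets
A = pow(g, a, p)                  # Alice's public value
B = pow(g, b, p)                  # Bob's public value
M = pow(g, m, p)                  # Mallory's public value

# Mallory intercepts A and B in transit and forwards M to both sides,
# so each honest party unknowingly establishes a key with Mallory.
alice_key = pow(M, a, p)          # Alice believes she shares this with Bob
bob_key = pow(M, b, p)            # Bob believes he shares this with Alice
mallory_with_alice = pow(A, m, p)
mallory_with_bob = pow(B, m, p)

# Mallory can now decrypt, read, re-encrypt, and relay every message.
print(alice_key == mallory_with_alice, bob_key == mallory_with_bob)  # True True
```

This is exactly why certificates exist: without an authenticated binding between a public value and an identity, the mathematics of the exchange cannot tell Bob from Mallory.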
<urn:uuid:6918b922-bafa-411e-90d3-2c0418f48962>
CC-MAIN-2022-40
https://doubleoctopus.com/blog/threats-and-attacks/stopping-man-in-the-middle-attacks-with-cryptography/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00737.warc.gz
en
0.950278
986
3.5
4
Antibiotic Has Been Discovered In The Dirt Alexander Fleming, the British scientist who discovered penicillin, found the world’s first antibiotic in a petri dish. Ninety years later, the world faces a crisis of antibiotic resistance: superbugs are outpacing the drugs in the doctor’s arsenal, making infections increasingly difficult to treat. Antibiotic resistance is predicted to cause 10 million deaths globally per year by 2050. Scientists around the world are therefore working hard to discover new microbial molecules that can destroy these resistant pathogens. The microbiologist Sean Brady, however, decided on a tactic closer to Fleming’s: look for antibiotics in the ground rather than growing microbes in petri dishes. Brady, an associate professor in New York, says that every step you take covers at least 10,000 unseen bacteria, most of which cannot be grown in the lab. Their idea, he adds, is to make use of the antibiotics already present in the environment. The approach has delivered results: a recently published study reports a new class of antibiotics extracted from microorganisms in soil. The new antibiotics, called malacidins, are capable of killing plenty of superbugs. Brady cautions that a drug from this class of molecules will not arrive next week; developing, testing, and approving a novel molecule takes many years. The discovery nonetheless demonstrates a powerful principle. Antibiotics are prized for their ability to fight the microbes that make humans sick, and most such drugs are derived from bacteria. Bacteria have been fighting among themselves for eons, and, surprisingly, that arms race is the source of some of our greatest weapons.
Most microbes brought into the laboratory, however, do not grow well. Because of this, Brady says, we have missed much of the chemistry bacteria produce; the best way to find interesting molecules is to derive them directly from the environment, without culturing. The researchers focused on exactly that: Brady’s team cloned a huge quantity of DNA from hundreds of soil samples provided to the scientists, sorted the material, and searched it for interesting sequences. Brady says they do not yet know everything that is in there, and that is where the future lies. Brady and his colleagues are now searching for genes that produce calcium-dependent antibiotics, molecules that attack bacteria only when calcium is present around them, a mechanism that makes it hard for microbes to evolve resistance. When these genes are inserted into microbes, the microbes produce malacidins. Applied to the skin of infected rats, the compound sterilized the wounds successfully without causing any side effects, and Brady said it does not appear to be toxic to people.
<urn:uuid:f7da0a01-ec2e-4da4-9b93-e17fc181ee7a>
CC-MAIN-2022-40
https://areflect.com/2018/02/17/a-more-powerful-antibiotic-has-been-discovered-in-the-dirt/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00737.warc.gz
en
0.980324
642
2.890625
3
The impact of artificial intelligence in business is rising day by day. You probably engage with artificial intelligence (AI) regularly without even realizing it. There are a lot of use cases for artificial intelligence in everyday life. But what about artificial intelligence in business? It is even more than you can imagine. Are you scared of AI jargon? We have already created a detailed AI glossary for the most commonly used artificial intelligence terms and explained the basics of artificial intelligence as well as the risks and benefits of artificial intelligence for organizations and others. Table of Contents What is the role of artificial intelligence in business? Although the acceptance of AI in modern society is recent, the idea is not. The field of artificial intelligence was founded in 1956, but decades of labor were required to make an AI system a technological reality. Artificial intelligence is used in a variety of ways in business. In actuality, most of us engage with AI regularly in one way or another. Artificial intelligence is already upending practically every business activity in every industry, from the routine to the astonishing. To keep a competitive edge, AI technologies are becoming more and more important. According to Semrush, AI is predicted to increase business value and worker skills significantly. “Increased AI adoption in organizations will generate $2.9 trillion in corporate value and 6.2 billion hours of worker productivity in 2021,” according to the study. In various industries, including healthcare, sales, HR, operations, manufacturing, marketing, and technology, AI technology can be applied to a wide range of use cases.
Like: - Self-driving cars and other autonomous technology, - The internet of things (IoT), - Medical diagnostics, - Robotic aid in manufacturing, - Contactless purchasing, - Candidate selection for jobs and many more are frequently explored use cases for AI. Did the precursors of artificial intelligence dream of it? There are countless business opportunities. However, to integrate AI and machine learning into business, we need a staff trained to handle the technology. Alongside data architect, cloud computing, data engineer, and machine learning engineer jobs, artificial intelligence careers are hot and on the rise. Check out the best masters in artificial intelligence online How is artificial intelligence used in business? What does artificial intelligence mean in business? Let’s find out. Among the most popular uses of AI are automation, data analytics, and natural language processing (NLP). How do these three domains improve operational efficiency and streamline processes? They have the following effects on a variety of businesses: - Automation: Automation means people are no longer needed for monotonous tasks. By taking over tedious or mistake-prone activities, it gives employees more time to concentrate on work of higher value. - Data analytics: Identifying novel patterns and connections in data enables organizations to uncover previously unreachable insights. - Natural Language Processing (NLP): NLP improves accessibility for people with disabilities, such as hearing impairments, makes search engines smarter, and makes chatbots more helpful.
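As a rough sketch of how a chatbot routes questions, here is a hypothetical keyword-overlap matcher. Production chatbots use trained language models rather than word counting, and the intent names below are made up for the example:

```python
def route_query(message, intents):
    # Pick the intent whose keyword set overlaps the message the most;
    # fall back when nothing matches at all.
    words = set(message.lower().split())
    best = max(intents, key=lambda name: len(words & intents[name]))
    return best if words & intents[best] else "fallback"

intents = {
    "billing": {"invoice", "bill", "charge", "payment"},
    "shipping": {"delivery", "track", "shipping", "package"},
}
print(route_query("please track my delivery", intents))  # shipping
```

Even this toy version shows the shape of the task: map free-form text to a small set of actions, and hand off to a human when confidence is low.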
Several other present-day business applications of AI include: - Cross-referencing, data transfer, and file updates - Predicting customer behavior and suggesting products - Detection of fraud - Individualized advertising and marketing communications - Customer support via phone or chatbot After giving a brief outline, let’s examine some benefits of artificial intelligence in business. Benefits of artificial intelligence in business Here are some of the best benefits of artificial intelligence in business: AI is being used in pricing in more advanced ways. Choosing the right price for a product or service is difficult for most businesses. You will need a ton of data to calculate a product’s price, including consumer activity, rival pricing, production costs, customer reviews, customer willingness to pay, and more. Using artificial intelligence, businesses can easily track and analyze client behavior across many channels, both online and offline. Additionally, AI can combine and analyze these data to improve conversions. Amazon, Uber, Zara, and more businesses are using AI to set prices.
Additionally, AI may offer data on costs per hire and how long a candidate stays in a job or organization. PepsiCo, Google, and more businesses are using AI for automated recruitment. Enhanced customer support Customer service comes in second in importance to marketing in determining a company’s standing and future success. And it takes a lot of time. The organization and the customers will save a lot of time if AI is integrated with conventional customer support. Chatbots offer useful quick questions and point customers to the appropriate website or executive to quickly resolve their doubts and inquiries. Offering the appropriate links or information during chats also reduces the need for pointless contact with customer care. Almost every business that is adopted AI is using the enhanced customer support. A single coding mistake could result in cybercrimes that allow someone to steal vital information from your app or website. However, since AIs are machines, the inaccuracy is manageable. It increases cyber security and reduces the chance of cybercrimes. Most consumers are concerned about the security of their personal data. Do you know how employees ignore cybersecurity training? Today, your customer’s trust in you is largely based on the strength of your cyber security. Therefore, integrating AI into your security system will undoubtedly strengthen trust and cyber security. Check out the cybersecurity best practices in 2022 One of AI’s most valuable business benefits is processing massive amounts of data and understanding it in real-time. This strategy enables firms to make important decisions and take action much more quickly, maintaining the company’s competitive position. For instance, in the transportation sector, drivers can quickly modify their routes depending on data on traffic jams based on their location. Further development of IoT Widespread adoption of IoT devices built on platforms with AI components may soon result in ground-breaking advancements. 
Customers and businesses will gain from this trend. A business that operates in a very competitive market may find that even a small knowledge and qualification gap among its staff members costs it money. As a result, businesses worldwide invest a significant amount of money in training activities to raise the qualifications and skills of their workforce. By applying an individualized approach to each person, AI can significantly lower the cost of such processes and increase their efficiency. Employees will also find learning more pleasurable as a result of this. AI systems can manage huge amounts of data, spot trends, and make future predictions. People are constantly curious about what will happen next, which is essential for business. Among the greatest benefits of artificial intelligence are its self-learning capabilities, which include the quick identification of significant and pertinent conclusions while processing data and the capacity to make specific predictions based on them. These tools help companies identify possibilities and ideas that can be used to gain market-competitive advantages. Is artificial intelligence better than human intelligence? Before you decide, let’s look at the disadvantages of artificial intelligence in business. Disadvantages of artificial intelligence in business Here are some of the disadvantages of artificial intelligence in business: Cost is a major factor when purchasing AI technologies. Businesses that lack in-house expertise or are unaccustomed to AI frequently have to outsource, which presents problems with cost and upkeep. Smart technologies can be expensive due to their complexity, and you may incur additional fees for continuous maintenance and repairs. Additional costs may include the computational costs associated with building data models, etc. 
Dependency on machines We might be moving toward a day when it will be challenging for humans to work without the aid of machines, given all the automation that is taking place around us. Due to the development of AI, we will become exponentially more dependent on machines. As a result, human reasoning and mental faculties may deteriorate with time. The lack of technical people with the requisite expertise and training to efficiently deploy and manage AI technologies is another major barrier to AI adoption. There is a shortage of experienced data scientists and other specialized data workers proficient in machine learning, building strong models, etc. Even with the wealth of data already accessible to businesses, using artificial intelligence is still difficult in several ways. Most business applications of artificial intelligence rely on machine learning, which needs a lot of data to train the model. This restricts the application of AI in new business sectors where there is a lack of data. The enormous amount of data we currently have is frequently completely unstructured and unlabeled. As most AI applications require supervised training on labeled data, this presents a problem for implementing AI in business. Artificial intelligence in business examples We are surrounded by artificial intelligence (AI). Most certainly, you have used it to conduct web searches, check your most recent social media feed, or use it throughout your daily commute. Whether you realize it or not, AI significantly impacts your personal and professional life. Here are some artificial intelligence in business examples that you could already be utilizing regularly. 
Artificial intelligence in business management In business management, artificial intelligence is used in: - Spam detectors, - Speech-to-text tools, - Smart personal assistants like Siri, Cortana, and Google Now, - Automated insights, especially for data-driven sectors, - Automated responders, - Online customer assistance process automation, - Sales and business forecasting, - Security surveillance (e.g. financial services or e-commerce). Artificial intelligence in e-commerce In e-commerce, artificial intelligence is used in: - Intelligent searches and relevancy tools, - Services that offer customization, - Purchase forecasts and product recommendations, - Online transaction fraud detection and prevention, - Optimizing prices in real time. Artificial intelligence in marketing In marketing, artificial intelligence is used in: - Curation of content and recommendations, - Individualized news feeds, - Image and pattern recognition, - Language recognition to process unstructured data from customers and sales prospects, - Ad targeting and real-time, optimized bidding, - Segmenting customers, - Sentiment analysis and social semantics, - Electronic web design, - Anticipating customer needs. Artificial intelligence in business operations In business operations, artificial intelligence is used in: - Process automation, - Cognitive insight, - Cognitive engagement Use cases/examples of AI in different industries Over the past few years, all industries have experienced a significant surge in using artificial intelligence (AI) in business. Like: Artificial intelligence in the oil and gas industry The 2019 investment by BP in Belmont Technology serves as an illustration of how AI is being used in the oil and gas sector. To strengthen its AI skills, BP collaborated with the digital start-up to create a cloud-based platform known as “Sandy.” The platform made it possible for BP to gather useful insights from data on geology, reservoir projects, history, and geophysics.
BP can then consult the data, using neural networks to interpret the simulation findings. Artificial intelligence in the renewable energy industry Generation from older sources, such as coal, is declining, reducing the grid inertia provided by large rotating machinery like steam turbines. Without grid inertia, we risk having electrical systems that are less stable and more prone to blackouts. Using real-time data acquired by sensor technology and satellite photos, AI helps us understand these dangers better. Organizations can take appropriate action thanks to AI’s ability to estimate capacity levels and downtime windows. Artificial intelligence in the mining industry AI is increasingly used in mining to optimize processes, increase safety, improve decision-making, and extract value from data. Learning more about the environment is one way mining companies use AI. Artificial intelligence (AI) can map and forecast topography more precisely than humans, avoiding potential mistakes. With computer vision systems, pattern matching, and predictive data analytics, AI is also being used to find new resources to mine. These enable data analysts in the mining industry to forecast where the best resources will be found by analyzing vast amounts of data. Artificial intelligence in the engineering industry Engineers do a lot of work across many different businesses. By using AI, they may spend less time on time-consuming jobs. They use machine learning algorithms to find patterns so they can eventually make reliable decisions. As technology advances, machines can support production lines and manufacturing operations. Vehicle engineers have used robotics to manage precise maneuvers on the assembly line without requiring human assistance. Additionally, AI supports effective data management and the dismantling of departmental silos.
Artificial intelligence in the software engineering industry At every stage of the software development cycle, human developers work on various procedures and can use AI. AI has the tools to convert spoken language into computer code and machine language, providing correct results automatically. AI algorithms provide intelligent software analysis, testing, development, and decision support systems. These tools can assist with existing software development procedures that were designed for labor-intensive program development. Artificial intelligence in the manufacturing industry AI and machine learning models have been implemented by 60% of manufacturing organizations. Furthermore, Global AI in Manufacturing Market Trends projects that the market will grow to $16.7 billion by 2026. Check out the process of controlling digital manufacturing with AI General Electric is one business that has used AI in the production process. The 125-year-old energy company has started integrating AI into every aspect of its business. Artificial intelligence in the fintech industry Fintech and the broader banking industry may benefit from the practical applications of artificial intelligence. According to Autonomous Research, by 2030, AI technology may enable the financial services industry to cut operational expenses by 22%. One example is banks using data to assess a customer’s creditworthiness. Institutions can use AI to analyze client data and decide on credit rates without worrying about charging too much or too little. Fraud detection is another excellent application in the fintech sector. Machine learning tools can respond in real time to data to identify fraudulent behavior and uncover patterns and linkages. What are the current trends in Artificial Intelligence? IT professionals would do well to watch a few more emerging trends in AI.
Like: - Data wrangling - Robotic process automation (RPA) - Customer-facing AI tools - AI-boosted supply chains - Natural Language Generation (NLG) - Autonomous vehicles - Convergence of IoT and AI - Augmented intelligence - Ethical AI AI and machine learning have revolutionized businesses and will continue to do so for a long time. Check out the best real-life examples of machine learning AI use in commercial environments reduces time spent on repetitive processes, increases staff productivity, and improves the entire customer experience across IT operations and sales. It also assists in avoiding errors and spotting impending catastrophes at a level humans are incapable of. It is understandable why businesses are using it to enhance various operational aspects, from oil and gas to fintech. Businesses at the forefront of AI will benefit financially and win the competition in the future.
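The fraud detection mentioned earlier can be illustrated with the crudest possible statistical check: flag transactions that sit unusually far from the historical mean. Real systems use learned models over many features; the threshold here is an arbitrary choice for the example:

```python
import statistics

def flag_outliers(amounts, threshold=2.5):
    # Flag amounts more than `threshold` sample standard deviations
    # from the mean -- a toy stand-in for a trained fraud model.
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    return [a for a in amounts if abs(a - mean) / stdev > threshold]

history = [20, 25, 22, 19, 24, 21, 23, 20, 22, 5000]
print(flag_outliers(history))  # [5000]
```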
<urn:uuid:5d6bad15-7da5-4bbc-aee5-739dc1c3361d>
CC-MAIN-2022-40
https://dataconomy.com/2022/08/artificial-intelligence-in-business/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00737.warc.gz
en
0.932666
3,338
2.859375
3
An introductory article on the different technologies emerging in the wireless space can feel like an introduction to a foreign language. For some, the language makes sense; they’ve been around it, taken a couple of classes and maybe even traveled to a place where that’s the only language spoken. For others, it can be intimidating the first few times you hear the words and phrases. Technology-speak can be the same way. Accountants, lawyers, doctors, and technology experts all have their own “language”, and professional conversations might seem foreign to anyone outside of these fields. But wireless technology is everywhere and it’s impacting the way we live our daily lives, so it could be quite useful to get acquainted with wireless lingo. This article is meant to introduce readers to emerging wireless technologies and provide a high-level understanding of the concepts. Treat this article as a window into the world of emerging trends, specifically in the wireless space. The ideas are grand, and they need to be. Think of wireless as a large building. The most complicated processes, algorithms and concepts are in the attic, but the foundations (lower levels) of the concepts are what most people see, and all a consumer really needs to understand. We’re going to open the front door and take a tour of the ground floor. First, it’s important to understand that wireless and cellular networks are different. Though the terms are used interchangeably, wireless is the broader one, covering any technology that communicates without wires, for example WiFi, Bluetooth, contactless or NFC, 4G, and 3G. Cellular, on the other hand, refers to wireless technologies specifically used to make calls or surf the internet from mobile phones, using 4G, 3G, etc. For starters, it’s important to understand the digital landscape we’re all living in. According to YouTube, 100 hours of footage is uploaded every minute.
It’s watched by younger demographics more than any other cable network. The Washington Post reports that 70% of today’s college students send at least one Snapchat every single day, and most are heavy users. People are connected everywhere they go and the number of connected devices per person is growing. Wireless technology has evolved significantly, and network capabilities need to evolve with it. This leads to a need for what’s called data offloading. What is data offloading? The surge in mobile data traffic creates an overpopulated cellular network that needs to offload users, similarly to how a boat bails water. These users have to go somewhere, and that’s where you see wireless networks like WiFi taking some of the traffic. Expect to see more wireless/WiFi infrastructure being built because it’s typically much cheaper to build than cellular networks, and can take some of the load off (offload!). Pretty interesting concept, right? For more information on offloading, check out this page on our site. No, a femtocell is not a character from Star Wars. Think of a femtocell as similar to the WiFi router we have at home, except that instead of WiFi it uses cellular technologies like 4G or 3G. Sticking with the previous analogy, these cells are the bail-buckets cellular networks use to remove overflow users from their boat. Femtocells are also known as small cells, and small cell technology is commonly used for homes and small businesses. Think of the femtocell as a hub for a home or office. The cells help to increase wireless reach in areas with low signal, which improves everything from call clarity to the battery life of your device. If you own and operate a small business, there might be floors or departments that receive a much weaker signal than others. With small cells (or femtocells) you can extend that wireless connection to the more removed parts of your business.
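The bail-bucket idea can be sketched as a greedy assignment: keep users on the cellular network until its capacity is reached, then push the overflow to WiFi. Real offloading decisions also weigh signal quality, policy, and cost, so the names and capacity numbers here are invented for illustration:

```python
def assign_users(user_demands, cell_capacity):
    # Greedy offload: fill the cellular network first, overflow goes to WiFi.
    cellular, wifi, load = [], [], 0
    for user, demand in user_demands:
        if load + demand <= cell_capacity:
            cellular.append(user)
            load += demand
        else:
            wifi.append(user)
    return cellular, wifi

demands = [("alice", 30), ("bob", 40), ("carol", 50)]
on_cell, on_wifi = assign_users(demands, cell_capacity=80)
print(on_cell, on_wifi)  # ['alice', 'bob'] ['carol']
```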
We have significant experience in leveraging small cell technology for businesses; check out our main page on the subject for more information. Just when you thought you had the latest and greatest in 4G technology… introducing 5G. Fortunately, 5G isn’t expected to reach the pockets of consumers everywhere until 2020. While there isn’t necessarily a problem with 4G, the evolution of mobile, combined with the internet of things, has created a need for more power and faster internet speeds than 4G will likely be able to handle in the near future. So although you can currently control your thermostat, garage door and microwave from your cell phone, the sheer volume of appliances and devices that your phone can handle is limited… at least with 4G. Ever hear of beamforming? At first thought, it might make you scratch your head, but try to think of it like this… you’ve been in a wireless network with a broad reach, but what if that network could pinpoint the exact location of the highest demand? Imagine, for example, a room with the ability to focus heat on the ten square feet surrounding the people in the room. Without wasting energy heating the entire house to the same temperature, you’d be warm and cozy in the area you need it most. Beamforming works the same way, targeting wireless service like a laser instead of throwing a wide blanket to provide fast, efficient connections. One barrier to a wireless utopia is consistency. Similar to electric cars, the next phase of wireless technology won’t reach a tipping point until a couple more things happen. First, more people need to adopt the technology. And second, the technology will need to be readily available for everyone, meaning that access to the technology needs to be convenient and affordable. If only 6% of phones have the capabilities required by new technology, the investment of time and resources is not going to be spent building the infrastructure to support it.
This is just a brief explanation as to why many of these impending advances are happening on the slow side. At HSC we pride ourselves on staying on top of the latest developments in the wireless industry, so don’t worry. We’ll keep you posted on the developments of these technologies, and the services we’re able to offer because of them. Do you have an upcoming project and want us to help speed up your time to market?
Joanna Smith-Griffin launched a programme which utilises artificial intelligence to predict children at risk of truancy across the US. The programme has since raised US$8mn of venture capital financed by Spero Ventures, with investors including Rethink Education, Gratitude Railroad, and Boston Impact Initiative, to name a few.

AllHere, the EdTech start-up headed by Joanna, utilises artificial intelligence through mobile messaging to prevent chronic absenteeism in K-12 schools. The attendance tracking tool aims to increase academic performance by providing unique, tailored interactions with students. According to the AllHere website, two-way texting chatbots encourage students to attend school by utilising “a customised knowledge base to ensure each family and student receives the right support at the right time.”

Joanna says: “Most families tend to underestimate the number of days that their kids miss, and how that impacts student achievement. “We foresee that post-COVID, the scale of this problem will increase dramatically. We envision a world in which nearly every student will need some kind of assistance to re-engage in brick and mortar learning.”

AllHere has so far reached over two million students in 8,000 schools across 34 states. The programme has proven to reduce truancy by 17% and to lower course failures by 38%. Next on Joanna’s agenda is to develop a feature to fight truancy in remote learning, after AllHere’s next round of investment.
The Olympics and Burner Phones: Are You Sure About the Safety of That QR Code?

The Olympic and Paralympic Games represent the highest levels of achievement in the athletic world, celebrating the hard work and perseverance of their participants. But at the 2022 Beijing Games, a different type of persistent activity is being spotlighted: cyber threats. To mitigate against what they’re calling “malicious cyber activities,” the Federal Bureau of Investigation (FBI) urges U.S. athletes to leave personal smartphones at home and instead use temporary devices. “Burner” phones are actually more commonly used than we think, and we're not just talking about unlawful activities. Organizations often ask employees to use different devices when traveling to higher risk regions. In fact, as an alternative to using temporary devices, a global news organization is currently providing Lookout Mobile Endpoint Security to its reporters in China to protect them.

While there are countless types of threats, one of the most common is phishing. And threat actors have found that QR codes are one of the most effective ways to deliver malicious links. Whether you’re a journalist covering the Olympics or just going to a restaurant in San Francisco, you need to understand that while QR codes make contactless interactions seamless, they also make it easy for attackers to send you malicious links. Once a credential is stolen, it is easy for attackers to steal personal and corporate data alike.

QR codes are becoming a part of everyday life

Historically, attackers relied on sending phishing URLs via email to desktop users with hopes of stealing corporate data — either by tricking users into installing malware or unknowingly handing over login credentials. But this changed with the proliferation of mobile devices.
Nearly all mobile applications with messaging functionality, such as social media, third party messaging apps, gaming and dating platforms, can be used to deliver phishing attacks. Within popular apps like Snapchat and WhatsApp, QR codes are used to sign into accounts, exchange contact information and make money transfers. As businesses try to create a contactless experience amid the pandemic, many have turned to QR codes. For example, it's now common practice for restaurants to use QR codes to link to their menus or provide contactless pay options.

At the Beijing games, QR codes are a huge part of everyday life. The Chinese have relied on them for years, and now they’re using QR codes at the Games for everything from accessing training centers and hotel facilities to testing for COVID-19. While the codes make navigating activities at the Games easy and contactless, this creates opportunities for them to be abused for phishing purposes.

A low-tech, yet highly effective phishing method

QR phishing attacks are on the rise because they require so little effort to be successful. For one, the codes are physical displays, meaning a harmless one can easily be covered with a nefarious one that brings users to a malicious website. This makes it easy for cybercriminals to “display” a seemingly legitimate site that steals login credentials or installs malware. QR phishing is not just an effective method for attacking individuals; it can also be used to steal corporate data. For example, your employee could scan a code that leads to a fake bank login page. Once their login credentials are entered, an attacker can use software that crawls the internet for other sites with that employee’s username. If your employee uses the same login credentials across multiple accounts, including ones related to work, an attacker could gain access to your organization’s infrastructure.
How to safeguard against QR code phishing

Since the beginning of the pandemic, we’ve all become accustomed to using QR codes as part of our daily lives. In fact, the FBI just warned about QR code phishing in January. To ensure we protect ourselves and our organizations, education is the first line of defense. For the Olympians and journalists in Beijing, using temporary phones and recognizing the dangers of QR codes can lower the risk of encountering these phishing attacks.

We recommend thinking about QR codes the same way you think about other phishing tactics like email scamming and social engineering. Always check the URL in the notification before clicking to be redirected. If the URL does not look like a trusted source or differs from the known company’s URL, exit out of the notification.

But beyond that, organizations also need to look into solutions that can protect their users and data from all internet-based attacks regardless of where they are in the world. Lookout Mobile Endpoint Security and its Phishing and Content Protection module secure your data from threats such as malicious sites, spyware, adware, ransomware, phishing attacks and botnets. It only allows sites that are safe for your users, while blocking phishing and malicious content. To learn more about why internet-based attacks are a huge issue, check out the Phishing Spotlight Research Report.
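The advice above, checking a decoded URL before following it, can be sketched as a small filter. This is an illustrative example only, not Lookout's product logic: the `TRUSTED_HOSTS` list and the lookalike domain are hypothetical, and a real deployment would use a maintained reputation service rather than a hand-written allowlist.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of hostnames the user actually trusts.
TRUSTED_HOSTS = {"www.example-bank.com", "menu.example-restaurant.com"}

def is_suspicious(decoded_url: str) -> bool:
    """Flag a URL decoded from a QR code if it is not served over
    HTTPS or its hostname is not on the trusted list."""
    parts = urlparse(decoded_url)
    if parts.scheme != "https":
        return True
    return parts.hostname not in TRUSTED_HOSTS

print(is_suspicious("https://www.example-bank.com/login"))  # False
print(is_suspicious("http://www.example-bank.com/login"))   # True (no TLS)
print(is_suspicious("https://examp1e-bank.com/login"))      # True (lookalike)
```

Note that the exact-hostname comparison is deliberate: substring checks would let `examp1e-bank.com.evil.net` slip through.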
Clouds, Fog and the IoT – Isolation or Collaboration?

IoT = Internet of Connected Things?

The idea of transmitting sensor data over the Internet is becoming quite popular – we call it the “Internet of Things”. But the quality of Internet connectivity is not the same across all locations. To have maximum impact on improving business processes, sensor data granularity is important to ensure accurate monitoring. Thus data gathered from sensors should be collected at high frequency, and it should never be lost. Sensors are often situated in remote, unmanned sites which, for many real-world applications, will be quite numerous (e.g. Telecom Radio Base Stations). Cellular connectivity provided by mobile network operators is often used to transmit data from remote locations, but can quickly become very expensive for high volumes or repetitive data transmission. Thus transmission frequency should be kept at a minimum, but without losing the level of data accuracy obtained by polling sensors at high frequency. Additionally, and in any location, connectivity might be affected by unplanned, unpredictable interruptions, introducing the risk of losing collected data if this is not somehow ‘stored’. Last, the variable nature of Internet connectivity makes it especially difficult – and risky – to deploy ‘sense & respond’ scenarios based on data transmission over the Internet.

The need to address these issues has led to the emergence of the concept of ‘Fog Computing’. Cisco uses the term ‘fog nodes’ to define tools capable of collecting real time sensor data from any device and processing it locally, transiently storing it for periodical transmission to the cloud; these tools also provide the data intelligence required to send control commands to local actuators.
Fog nodes typically make use of local mass storage (non-volatile memory) to decouple the frequency of data polling from sensors from that of its onward transmission, as well as to prevent sensor data loss caused by unplanned connectivity interruptions. However, collecting sensor data at high frequency for storage in non-volatile memory prior to internet transmission could easily become very expensive, as it requires high frequency write & delete operations, causing fast memory deterioration and thus increasing the maintenance needs of computers that could be located in remote, difficult to reach areas.

IoT = Interoperable connected Things

The Internet of Things is more than just connecting devices to the Internet. According to McKinsey it is device interoperability – i.e. sharing device data across multiple applications – that will unlock up to 60% of IoT value in business environments. McKinsey defines an individual IoT system as ‘sensors and/or actuators connected by networks to computing capabilities that enable a single IoT application’. An operational definition of multi-stakeholder, collaborative IoT scenarios could then be ‘sensors and/or actuators connected by networks to distributed computing capabilities that enable multiple and diverse IoT applications’.

The requirement for a collaborative IoT, still capable of addressing Internet connectivity and security issues, suddenly makes relevant a ten year old technology paradigm, the In-Memory Data Grid (IMDG). Gartner defines an IMDG as “a distributed, reliable, scalable and … consistent in-memory NoSQL data store[,] shareable across multiple and distributed applications.” Combining the concepts of Fog Computing and In-Memory Data Grid might provide the architectural basis to leverage ‘edge’ intelligence for distributed, collaborative IoT scenarios.
The future of collaborative IoT

The architecture of collaborative IoT will need to be based on grids of remote intelligent Feeders (fog nodes) collecting real time sensor data, processing and storing it in volatile memory (e.g. RAM) as/if required, and sending it to brokers in the cloud using industry standard protocols (MQTT, AMQP, DDS etc.). Brokers will in turn relay sensor data to any ‘listening’ application. As they do not require non-volatile memory to process and store data, Feeders can be installed on small footprint computers in remote locations, or on virtual networks in urban and business environments. The use of intelligent Feeders will decouple the frequency of data collection and data transmission, adapting the flow to the quality of available connectivity – and accommodating its sudden interruptions.

In the presence of optimal connectivity, all applications in collaborative IoT scenarios will simultaneously receive a real time data stream. For example, in a food retail environment, energy usage and temperature data will be immediately available to applications for e.g. energy management, predictive maintenance, and hazard analysis and critical control points (HACCP). In a smart city traffic scenario, all moving vehicles’ position data could be used at the same time to provide drivers with traffic information and advice on alternate routes, public transport users with updates on bus timetables, traffic wardens with advice to move to congestion areas, automated traffic lights with commands to adjust their timing, pollution management systems with real time trend analysis and, of course, car insurers and their customers with risk related real time information. Additionally, Feeders will provide the intelligence required to respond to changes locally with the lowest possible latency – for example by slowing down pumps in an oil rig in response to a trend showing a decrease in pressure.
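The core idea of a feeder, polling at high frequency while transmitting upstream in far fewer, batched messages held only in RAM, can be sketched in a few lines of Python. This is a minimal illustration, not EcoSteer's actual API: the class name, batch size, and the list standing in for an MQTT broker are all assumptions.

```python
import random
from collections import deque

class Feeder:
    """Sketch of a fog-node 'feeder': poll a sensor at high frequency,
    buffer readings in volatile memory, and flush them upstream in
    batches, decoupling collection frequency from transmission."""

    def __init__(self, batch_size=10):
        self.buffer = deque()       # volatile, in-memory storage only
        self.batch_size = batch_size
        self.sent_batches = []      # stands in for an MQTT/AMQP broker

    def poll(self, reading):
        """Called at high frequency; just appends to RAM."""
        self.buffer.append(reading)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        """Called at low frequency (or when the buffer fills): one
        upstream transmission carries many readings."""
        batch = [self.buffer.popleft() for _ in range(len(self.buffer))]
        if batch:
            self.sent_batches.append(batch)

feeder = Feeder(batch_size=10)
for i in range(25):                 # 25 high-frequency sensor polls...
    feeder.poll({"t": i, "temp": 20 + random.random()})
feeder.flush()                      # flush the final partial batch
print(len(feeder.sent_batches))     # ...become only 3 transmissions
```

A production feeder would add trend logic (e.g. the pump-slowdown example above) inside `poll`, since the fresh readings are already in memory at the edge.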
Feeders will daily collect, process in-memory and push to the cloud millions of data points, connecting to any device, using any communication protocol and running on small footprint edge gateway devices or computers, or on virtual platforms. They will provide the intelligence required to control local actuators, while avoiding the security pitfalls inherent in traditional request & reply communication. Intelligent Feeders will eliminate the scaling and performance issues arising from the complexity and volumes of sensors, devices and data in large scale IoT projects, at the same time ensuring complete interoperability and resilience of data flow – dramatically cutting the cost of any IoT deployment.

This article was written by Elena Pasquali, the CEO of EcoSteer, an IIoT software company with offices in Italy and the US. EcoSteer’s key software product, the EcoFeeder, converts any kind & number of sensors and industrial devices into real-time data streams instantly accessible to multiple applications.
Right now, more than 100 million pieces of space debris smaller than 1 cm orbit Earth. The intelligence community’s key research and development unit is on the search for innovative approaches to pinpoint and monitor currently nontrackable space junk that’s increasingly placing the nation’s space assets at risk.

“Since the retirement of the Space Shuttle program in 2011, the [United States government] no longer has a dedicated, calibrated on-orbit orbital debris detection sensor,” Intelligence Advanced Research Projects Activity officials wrote in a request for information released Thursday. “IARPA seeks to understand the interaction of orbital debris with the surrounding space environment, and whether the resulting phenomena can be used for the detection, mapping and tracking of currently nontrackable orbital space debris.”

Commercial space activity is rapidly rising, alongside calls to reduce the growing buildup of shreds of rockets and satellites that are placing Earth-orbiting spacecraft at serious risk. Even the tiniest pieces of trash in orbit can cause major impacts—due to an average impact velocity of roughly 22,500 MPH. “Currently, there are over 500,000 pieces of debris between 1 and 10 cm in diameter, and over 100 million particles smaller than 1 cm orbiting the Earth,” officials confirm in the RFI.

Although traditional ground-based sensor detection techniques continue to improve, the capabilities are not at a point where they can successfully track debris that’s on the small side, or junk in certain orbits beyond low Earth orbit, or LEO. Via the RFI, IARPA invites interested entities to submit white papers describing “promising approaches for the detection and tracking of orbital debris in the size range from 0.1 cm to 10 cm, traveling in any orbital plane around planet Earth.” For context, 0.1 cm objects can be as small as a grain of sugar.
Approaches may include existing or newly proposed sensor technology, probabilistic detection techniques for low signal-to-noise objects and more. Responses to the RFI are due by March 11, and IARPA intends to host a virtual invitation-only workshop on this effort later in the spring.
Ok, this is strange! At least that was my first reaction when I saw that, in one of the CCIE labs I was trying to solve, all the links between routers were addressed with a /31 subnet. Isn’t it weird to see something like this for the first time after a couple of years in networking? For me it was. It blew my mind. I later asked my more experienced networking colleagues, but it seemed new to them too. At first they said: Ok man, that’s not possible! Well, try to type it on a router interface and you will see that it is possible. It’s strange for sure, but it’s possible.

The router OS (Cisco IOS in this case) will try to make sure that you use this kind of subnetting only for point-to-point links. That’s why it will issue a warning message if you apply this subnet mask on an Ethernet interface. On a serial interface it will go without the warning.

The idea behind this is simple if you put it this way: on a point-to-point link we don’t actually need a special broadcast address for the subnet, because there’s only one way a packet can go across the link. All we have is the IP address on the other side. If we send a broadcast, it will go there no matter whether the destination is a separate broadcast address or any other address. There cannot be more than one destination, and the router knows that a broadcast will be directed over the same link as a normal unicast to the link’s destination address. And why should the network name be defined as the first address of a range without our being able to use it on an interface? We want to use that address too.

A /24 range has 256 different addresses. Why divide it into 64 subnets with 4 addresses each if we only want to use two addresses, one on each side of the link? This is the idea: from one /24 subnet we can carve out /31 subnets for point-to-point links, and with that double the number of point-to-point links we can cover.
R1(config)#int fa 0/0 R1(config-if)#ip add 192.168.0.0 255.255.255.254 % Warning: use /31 mask on non point-to-point interface cautiously R1(config-if)#
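To make the arithmetic concrete, here is a quick check with Python’s standard `ipaddress` module (my own illustration, not from Cisco documentation; the 192.168.0.0/24 range matches the example above):

```python
import ipaddress

net = ipaddress.ip_network("192.168.0.0/24")

# Classic point-to-point addressing: /30 subnets, where each tiny
# subnet's network and broadcast addresses go unused.
links_30 = list(net.subnets(new_prefix=30))
print(len(links_30))    # 64 point-to-point links from one /24

# /31 addressing (RFC 3021): both addresses are usable, so the same
# /24 yields twice as many links.
links_31 = list(net.subnets(new_prefix=31))
print(len(links_31))    # 128 point-to-point links from the same /24

# Each /31 holds exactly the two addresses used on the link.
print([str(a) for a in links_31[0]])   # ['192.168.0.0', '192.168.0.1']
```

So the /31 scheme really does double the number of router-to-router links one /24 can cover, which is exactly why the lab used it.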
Where there are advantages, there are risks—learn about the elements of Edge Computing.

Many Cloud-based applications use a Data Center with servers as a central location to process information generated by devices such as smartphones, tablets, and IoT devices for big data processing and analytics. This model imposes increasing demands on the communication and computing infrastructure, with an inevitable problem that has to do with the quality of service and the user experience. The concept of Edge Computing is based on moving part of this computational load towards the Edge of the network rather than using centralized servers, in order to take advantage of the currently untapped computational capacity at the Edge nodes, such as stations, routers and switches. This article considers the challenges, opportunities and benefits of Edge Computing.

Cloud computing is not the best option for the new Internet of Things applications (among other reasons, because you cannot know with certainty where the public Clouds are located or whether they are simply on another continent). These “things” can be a person with a heart monitor implant, a farm animal with a biochip, a car that has built-in sensors to alert the driver when maintenance is necessary, or any other object that can be assigned an IP address and given the ability to transfer data over a network. In these cases, computing must be done closer to the source so that data traffic can improve the service that is delivered. This benefit can also be generalized to any web-based application which depends on the processing of information, since the computing is done at peripheral nodes closer to the application users, and this could be exploited as a platform for application providers to improve their services. Edge devices generate large data streams; on the other hand, it is not possible to make decisions in real-time when the analysis is carried out with data in the Cloud, which is almost always at a distance.
Using current Cloud infrastructure poses high latency, and this is a challenge between a connected device and the Cloud. The travel time from Sydney to Los Angeles is approximately 175 ms (milliseconds), which is far from the needs of latency-sensitive applications.

Advantages of Edge Computing

By bringing data analysis tools and applications closer to the real data source:
- It reduces the physical distance data must travel, and the time required for the transfer.
- It greatly reduces network congestion and periods of inactivity or lag.
- It increases responsiveness, speed, and overall service quality.

Unlike Cloud computing, which is based on a single Data Center, Edge Computing works with a more distributed network:
- It eliminates the round trip to the cloud, thus reducing latency and offering real-time responsiveness.
- It keeps the heaviest traffic and processing closer to the application and the devices of the end-user to dramatically reduce latency, and leads to automated, real-time decision-making, improving the user experience.

Edge Computing offers local Edge Data Centers for data storage and processing.
- Businesses can depend on reliable connectivity for their IoT applications, even when cloud services are affected.
- Edge computing allows IoT applications to use less bandwidth and operate normally under conditions of limited connectivity.

Businesses can reduce their costs considerably.
- Decrease required bandwidth.
- Replace Data Centers with Edge Device Solutions.
- Reduce data storage requirements.

Disadvantages of Edge Computing

Where there are advantages, there are risks, and Edge computing is no exception. That is why the disadvantages must be taken into account. A distributed system is much more complex than a centralized Cloud architecture.
An edge computing environment is a heterogeneous combination of various components using new technology, produced by different manufacturers, and communicating with each other through a variety of interfaces.

IoT devices often transmit trivial information, such as temperature and humidity. Manufacturers have neglected to implement strong security measures on their devices, and some IoTs are susceptible to malicious attacks (mostly denial of service).

Less robust infrastructure

These Data Centers do not have the complete infrastructure that we can usually find at a Core Data Center, which means needing to overcome some technical challenges.

Remember that having a technology partner with the necessary experience and knowledge will help you achieve your business goals. We invite you to visit https://www.kionetworks.com/es-mx/
The OSI Model is Academic, but Useful

The OSI model of a networking protocol stack is a long-lived academic tool. It isn't practical, it doesn't describe exactly how the real protocols work, but it's useful. It definitely helps people quickly understand at least a little about the various protocols. Here it is in its simplest form:

7 Application
6 Presentation
5 Session
4 Transport
3 Network
2 Data Link
1 Physical

When I teach short courses, I work to teach people what they need to know to get things done. I don't try to teach people formal models or fancy jargon. But I find the OSI model, or at least the bottom 4 layers, to be very helpful for giving people a practical understanding. In linguistic terms, the OSI model should be descriptive and not prescriptive. That is, if it provides descriptions that help you understand, that's great. But don't waste effort trying to conform to it as a formal model or requirement. Let's look at what really matters. We'll start at the bottom and work our way up.

Layer 1: Physical

When I need to introduce networking concepts, I try to use example scenarios that people will understand. So, let's imagine that the Internet hasn't been invented yet. We're talking about the mid to late 1960s at the latest. Someone has a dream of a world-wide network that would allow you to watch funny cat videos from the other side of the world, whenever you wanted. You could watch these on your computer, and you could download and store them as computer files to watch them again. Where do we start? Well, we said watch them on computers and store them as files. So, we're talking about digital data, not analog video signals. Binary communications. Bits. Ones and zeros. The very first task is to define just what makes up a one and a zero. How will we look at a signal and decide "This is a one" versus "This is a zero"? Layer 1, the Physical layer, defines ones and zeros.
We need some electronic or electro-optical or other mechanism for transmitting and receiving bits, and we need some test for the signal to decide whether it's a one or a zero. This is an area for electrical engineers and physicists. Our bits might be defined by voltages on wires, or currents flowing along wires, or phase shifts in high frequency signals. Or, we might define bits by light pulses instead of electrical signals. There's no single right answer. Some technology might be very appropriate in one setting, but very inappropriate in another. For example, light pulses through glass fiber make sense for point-to-point links under oceans. But it would be expensive and inconvenient to use fiber to connect several laptop computers in a coffee shop. In that setting we would prefer microwave radio signals with a useful range of no more than several tens of meters (that is, WiFi).

All signals degrade as they travel. Electrical signals and light pulses lose their sharp definition. Radio signals weaken as they spread out. Beyond some range you can no longer reliably tell the difference between what started out as a zero and what started out as a one. So, every so often out across your network, you need something to clean up the bit and relay it onward as an unambiguous signal. We can call such a device a repeater if it's in long point-to-point media like a cable or fiber. We might call it a hub if our network is built in a star shape, with every node connected to a central device that cleans up and forwards signals. Good designs can reliably move a lot of data per second. That's good, because the video files will be large, and we don't want to wait too long.

Layer 2: Data Link Layer

Now we can transmit and receive a stream of bits. But we want to have more than two computers on our newly invented Internet, and we don't want to force all of them to do the same thing at the same time. So, we need a way to direct bits to one specific device.
And, in order to share the network, we need to limit how many bits will be sent at a time. That is, limit the length of any one transmission. Layer 2 groups bits into frames, and directs frames to destinations on the local network. We're building on top of Layer 1, where we defined how much time per bit. Now we will group bits into frames, each with a maximum number of bits and thus taking no more than some maximum amount of time to transmit. That means that if the network link is currently busy with another device's bits, our device will soon get a chance to use the network. Each frame will start with a header including two addresses, specifying which piece of hardware it's coming from, and which one it's meant for. We might call those Layer 2 addresses hardware addresses, or physical addresses, or MAC addresses, standing for "Media Access Control".

Just two layers in, and we already see how the OSI model is academic but not practical. Ethernet defines Layer 1 signaling, but it also defines Layer 2 frames and addressing. They're both done by the same chips on the Ethernet card. A repeater or hub is a dumb Layer 1 device; all it knows is how to clean up signals. Then it sends clean copies everywhere except where it came from. A switch is a device that cleans up signals like a hub, but it also uses Layer 2 hardware addresses to decide where to forward frames. It can learn by observing traffic, so at first it has to send copies everywhere because it hasn't yet learned where anything is. As its knowledge of the network improves, more and more it is sending frames only to where they need to go. That makes the network more efficient, it means fewer interruptions for hosts attached to the network, and it makes things a little harder for would-be intruders to capture network traffic they shouldn't see.

So far, this looks promising! We can send data at high speed to specific devices. However, we have two big problems.
First, many of the network technologies we want to use are severely limited in how far they can reach. We have to be polite, we have to allow other nodes to insert packets into the network traffic. But signals can't move faster than some fraction of the speed of light (for Ethernet, about 65-75% the speed of light). So, Layer 1 and 2 design imposes limits such as "Ethernet cables can be no more than 100 meters long." Ethernet (and WiFi, and Token Ring, and many other technologies) are limited to use as a local area network or LAN. But remember that we want to watch cat videos from the other side of the world. We need to be inventing a true "internetwork", a large-scale network made up of smaller networks, some of them small LANs with several attached hosts, others long point-to-point links. We must interconnect these networks with relaying devices.

Second, even if we had a network technology to which we could connect every computer in the world, like one enormously large Ethernet switched network, that wouldn't work. A device couldn't accomplish anything, as there would almost never be a clear time for it to send a packet. We can only have a limited number of devices on any of these networks.

There's another severe limitation to what we have so far. Hardware addresses are unique, but they're meaningless. Yes, for Ethernet and now WiFi, the first half is a manufacturer code and the second half is basically a serial number. The first half specifies the manufacturer, but that isn't useful for deciding how to forward the packet. We need a logical addressing system, one with some meaning at various scales. That leads us to...

Layer 3: Network Layer

This layer adds a new header holding Network Layer source and destination addresses. The first part of a Network Layer address is called the netid; it answers the question "Where?" or "On which network?"
Devices called routers use the netid to decide how to forward the packet hop by hop by hop, as many times as needed, to get it to a router that is directly connected to the destination via a switch. That last-hop router then uses the hostid, the remainder of the address, to answer the question "Which host on this network?" and make the final delivery. There are several families of network protocols, but the group we call TCP/IP won out for the world-wide Internet. In fact, IP stands for "Internet Protocol". IP is the Layer 3 or Network Layer protocol on the Internet. Another page of mine explains how IP addresses implement logical addressing with netid and hostid, and how subnet design provides a multi-level architecture. Wow! Now we can get a packet to any device anywhere in the world! However, while that's very impressive for sending a short message, our design goal of a video file won't fit into a single packet. A packet is probably limited to no more than about 1,500 bytes, while a video file is easily one thousand to one million times that long. Also, these are supposed to be multi-purpose devices. We want to do more than just watch cat videos on our computers. We want our computers to do multiple things at the same time: send and receive email, submit print jobs, share files, and more. We need a way of delivering data more specifically to one activity or one program on the destination host. So we need the Transport Layer on top of this. But first, notice that we have passed two more signs that the OSI model is just that, an impractical academic model that has its limited uses. A host uses ARP, the Address Resolution Protocol, to find the hardware or MAC address for a destination host plugged into the same switch. ARP straddles Layer 2 and Layer 3.
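The netid/hostid split described above is easy to demonstrate with Python's standard ipaddress module. The address and prefix length below are arbitrary examples, not anything from a real network.

```python
# Splitting an example IP address into its netid and hostid parts
# using Python's standard ipaddress module.
import ipaddress

iface = ipaddress.ip_interface("192.168.10.77/24")
net = iface.network

print(net.network_address)  # the netid part: 192.168.10.0
print(net.prefixlen)        # 24 bits of netid leaves 8 bits of hostid
print(iface.ip in net)      # the "is this host on my network?" test a router makes
```

A router asks that last question for each of its interfaces; when the answer is True, it can make the final delivery instead of forwarding to another router.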
The ICMP or Internet Control Message Protocol is used for error reporting, plus some limited network information retrieval, plus echoing messages back and forth. It's mostly about IP and Layer 3, but it's purely meta-level and unneeded as long as things are working. Even when it is needed, it doesn't make Layer 3 work. It just helps people figure out what they need to fix to get Layer 3 working again. Don't get hung up on the OSI model being some ideal. It's just a tool to help you understand.

Layer 4: Transport Layer

Some people describe Layer 4's job as "software addressing". The idea is that a Network Layer protocol like IP gets the packet to the correct host, and then the layer above that directs the information to the appropriate software or process running on that host. But there's even more to it than that. Different applications need their own type of communication. Some need short messages, some need two-way streaming connections. I always describe this by comparing network communication to postcards and telephone calls. Postcards are cheap to buy, and they're the cheapest thing for individuals to mail (junk mail from businesses and charities is the only mail cheaper than postcards). A postcard is small, with only a very limited space on which to write a message. But that's fine, you don't have too much to say. If you really wanted to, you could send a long letter via postcards by sending several with numbers explaining the order in which to read them. A postcard isn't guaranteed to get there, but the postal service does a pretty good job. Almost all postcards make it to their destination. If you send one to the same person on each day of your trip, the cards might arrive out of order. But that's still probably good enough for what you want to accomplish. Again, you could write the date and time on your postcards, or number them, if you really cared about the receiver reading them in order. On the other hand, telephone calls cost more in money and effort.
People usually don't have to pay more for long-distance calls these days, and quite likely all your calls are "free", or at least your phone provider wants you to think they are. But you have to own the phone itself, and you have to pay a monthly fee to make those "free" calls. How many postcards would you have to send in a month to spend more on postage than you pay for your phone contract? Postcards are easy, you get home and say "Oh, I got a card!" That's it. Telephone calls require both people to be available at the same time. Then there's all the back-and-forth to start a call. — "Hi, this is Joe, is this Jane?" — "Yes, this is Jane." — "Do you have a few minutes to talk about our project?" — "Yes, I suppose so." And only after that do they get around to really communicating. Then there's more back-and-forth at the end of the call, both of them verifying that the other person is really done, nothing more to add or ask, and we'll talk later, and you take care, OK? However, the great thing about a phone call is that it's a two-way stream of communication. Everything one person says gets to the other end in the correct order. The other person can interact, or even interrupt. Some of the time the higher cost and hassle of a telephone call is well worth it. Transport Layer protocols are just like this. Sometimes you have a quick question that will have a short answer. You know that YouTube has funny cat videos, you type youtube.com into your browser, and then your computer has to ask "What's the IP address for youtube.com?" The answer is also very short, just that IP address. Other times you want something that streams both ways indefinitely like phone calls do, like Skype, or an interactive command-line session, or a remote desktop session. Or you want to transfer a large data file, like that video you want to download and watch later. Then you need to set up a connection, stream the data, and eventually shut the connection down.
UDP or the User Datagram Protocol is the message-oriented protocol at the Transport Layer. UDP is like postcards. Short message-oriented protocols like DNS (to look up IP addresses) and NTP (to synchronize clocks) are carried as UDP messages. TCP or the Transmission Control Protocol is the stream-oriented protocol at the Transport Layer. TCP is like telephone calls. Connection-oriented protocols like HTTP and HTTPS (web pages) and SSH (secure remote command-line access and file transfer) are carried in TCP connections. But there's more to specify: send a short message to which process, or make a connection to which process? Both UDP and TCP use port numbers. Each of those protocols provides a 16-bit field for both the source port (which program it's coming from) and destination port (which program it's going to). 2^16 = 65,536, so there's support for plenty of independent network clients and services on any one host. Let's take my server, cromwell-intl.com, as an example. You try to make some sort of connection, and the IP routing process delivers your packet to my server. Once it arrives, the operating system looks inside the IP header to find a TCP header (TCP because you're making a connection, not sending a message with UDP). Three out of the possible 65,536 destination ports could work. My web server listens on both TCP port 80 and TCP port 443. HTTP runs on TCP/80, HTTPS on TCP/443. Yes, you can connect to HTTP on TCP/80, but my server will immediately redirect your browser to disconnect and connect to HTTPS on TCP/443 instead. The third "open port", or TCP port with a service program accepting connections, is TCP port 22. SSH or Secure Shell runs on TCP/22. That's how I upload web page updates, and connect to interactively work on the server. Anyone can make a TCP connection to port 22, but to authenticate and get a session you need the appropriate ECDSA or Ed25519 or RSA key.
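Those two 16-bit port fields can be seen directly by packing the first four bytes that begin both UDP and TCP headers. The port numbers below are just examples: an arbitrary ephemeral client port talking to HTTPS.

```python
# The first four bytes of a UDP or TCP header: source port and
# destination port, each a 16-bit unsigned integer in network byte order.
import struct

src_port, dst_port = 54321, 443   # example ephemeral client port -> HTTPS

header_start = struct.pack("!HH", src_port, dst_port)
print(header_start.hex())   # d43101bb

# Unpacking recovers both ports; 16 bits is why they top out at 65,535.
recovered = struct.unpack("!HH", header_start)
print(recovered)            # (54321, 443)
```

The `!` in the format string requests network byte order (big-endian), which is how both protocols put the ports on the wire.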
We didn't originally have devices that made forwarding decisions above Layer 3. Layer 3 got the packet to the destination host, and that host's operating system would decide what to do. But the Internet became less academic and more dangerous, and now we need to apply further restrictions. Some packets that we could forward based on pure IP logic should instead be discarded for safety's sake. Don't let people from the far side of the Internet connect to your internal file and print sharing, for example. So we have firewalls, devices that decide what not to forward for security reasons. They base those overriding policy decisions on some combination of Layer 3 and Layer 4 information. Here's one more debunking of the supposed accuracy of the OSI model: TCP makes connections, so it clearly deals with sessions. But the OSI model insists that sessions are the entire point of the next layer up the stack. Which leads to...

Layers 5-7: The Application Layers

And now to make the purists cringe: everything above Layer 4 is the application. Yes, I know the formal model says Layer 5 is Session, Layer 6 is Presentation, and Layer 7 is Application, but that academic hair-splitting just doesn't matter. OSI Layers 1 through 4 are done by the operating system. Layers above that are done by the application. If an application developer really wants to implement three layers formally following the OSI model, go ahead. Very few do. Old-school NFSv2 over UDP seems to have been the only popular application protocol that really did all seven layers, more or less. Most application-layer protocols sit directly inside the transport-layer protocol, either TCP or UDP. And look at this, we finally have a solution for our design project! Our dream of watching funny cat videos from the other side of the world can be done with HTTP, the HyperText Transfer Protocol.
Our web browser can connect to TCP port 80 at youtube.com, send a request (part of the HTTP application-layer protocol), and receive a 200 code meaning "OK" followed by a large stream of video data. It could also be done with HTTPS, on TCP/443. That verifies the server's identity, then sets up encryption on the bidirectional data stream. Then the application-layer protocol works the same way. As for making forwarding decisions, this is the latest addition to our networking model. We want systems that can quickly and accurately decide about application-specific issues:
- "Does this data file contain malicious software?"
- "Is this email message spam?"
- "Is this email message a spearphishing attack?"
- "Does this outbound email message contain sensitive information?"
- "Does this request from a web client constitute a web-specific attack such as cross-site request forgery, SQL injection, PHP injection, buffer overflow, directory traversal, or another syntactical or semantic attack?"
...and so on... These can be enormously difficult questions. Two intelligent and well-motivated people might never agree on whether a specific message should be considered spam, or whether specific message contents were truly sensitive. Automated systems will let inappropriate content pass, or block what really was appropriate, or, most likely, commit a mix of both error types. These tasks require computing resources, to the point that we usually have a dedicated system for each application-layer protocol. We might say Application-Layer Gateway or Application Proxy as a general term for a device that attempts to regulate connections and messages and data transfers in application-aware ways. Or, we might use terms more specific to their tasks, such as Malware Detection or AV for Anti-Virus, Spam Filter and Anti-Phishing for email, Data Loss Prevention or DLP to keep sensitive data from leaking, and Web Application Firewall or WAF for the wide range of web-specific threats.
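Stepping back to the browser request that opened this section: an HTTP/1.1 exchange is just text, and its shape can be sketched without sending anything. The request path here is made up; only the structure of the protocol matters.

```python
# Building (not sending) a minimal HTTP/1.1 request, and parsing
# the status line of a "200 OK" style reply.
request = (
    "GET /watch?v=cat-video HTTP/1.1\r\n"   # hypothetical path
    "Host: youtube.com\r\n"
    "Connection: close\r\n"
    "\r\n"                                  # blank line ends the headers
)
print(request.splitlines()[0])   # GET /watch?v=cat-video HTTP/1.1

reply_status = "HTTP/1.1 200 OK"
version, code, reason = reply_status.split(" ", 2)
print(code, reason)              # 200 OK
```

This is exactly the kind of text an application-layer gateway or WAF inspects: the request line, the headers, and eventually the body.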
What Really Matters

Here is the result of leaving out unneeded layers and adding in short explanations of what happens and what we call the connecting devices:

- Application: jobs software programs do (ALG, AV, spam filter, DLP, WAF, etc.)
- Transport: UDP sends messages to numbered ports; TCP makes connections to numbered ports (firewall)
- Network: relay packets hop by hop to anywhere by IP address [netid|hostid] (router)
- Link: send frames to HW/MAC addresses (switch)
- Physical: send and receive 0 vs 1 bits (repeater on a link, or hub on a star)

And of course, they keep moving the goalposts!

Software-Defined Networking

Software-Defined Networking or SDN allows a device to request a "flow" from the network infrastructure. It could be a TCP connection, or it could be a flow of UDP packets, or it could be just about anything else. The flow request can include Quality of Service or QoS performance metrics, such as bandwidth, latency, latency jitter, and so on. It could include security requirements, to compartmentalize traffic flows and keep sensitive traffic off less-trusted network links. With OpenFlow, the request could be anything you can dream up. The flow definitions can be based on everything from physical topology up through the Application Layer, as voice and video streaming quality is an application-layer issue. People working in SDN need to call their highly controllable forwarding boxes something. So, within SDN, switch means an SDN switch, making forwarding decisions at least up through the Transport Layer.
Standards are a set of published documents used as a measure or a model in relative evaluation. They are used to establish specifications and methodologies intended to produce predictable outcomes, guaranteeing the reliability of the materials, products, processes, and services that professionals use every day. Standards address many different issues and establish shared conventions that encourage product versatility, acceptance, compatibility, and interoperability.

Why do we need standards?

Standards function like building blocks for product development, as they are modeled on conventions that can be easily adopted and understood. Standards enable us to compare product offerings and know about their weaknesses. At the same time, they give us the scope to accommodate those limitations. As standards are adopted and applied in numerous markets, they likewise fuel worldwide trade.

Different forms of standards:
- Quality management, to work more effectively and reduce product failures
- Environmental management, to decrease ecological impact, reduce waste, and improve sustainability
- Health and safety management, to mitigate risks of danger at the workplace
- IT security management, to keep sensitive data secure
- Energy management, to optimize energy consumption
- Food safety, to sustain food quality and prevent contamination
- Accessibility management, to make structures accessible to disabled clients

Standards give a platform to both IT experts and specialists (including administration and clients/customers) for the development of the organization. Following are a few advantages:

1. Business alignment with IT: Allows IT to act as a service provider and become a core, pivotal aspect of the business.

2. Systems integration: Ensures compatibility with third-party project management systems to enhance work processes and improve coordination and visibility among diverse groups within the organization.

3.
Reliability: Monitors incident and problem management processes within the framework-enabled business. Problem management includes activities such as auditing, root cause analysis, and issue resolution to help prevent future incidents from recurring.

4. Quality of service: Incorporates business relationship and service-level management processes that give insight into the customer experience, and identifies client expectations by guaranteeing that IT services are easily accessible.

5. Change management: Provides an agile environment that gives businesses the ability to respond to changing requirements quickly and without disruption to services, thereby enabling continuous improvement.

6. Transparent costs: Ensures transparency of cost in a service-oriented model, where expenses can add up quickly, much to a customer's dismay. Through defined standards, you can track costs accurately and optimize expenses.

7. Business agility: Empowers organizations with pre-defined procedures and best practices to respond with agility to rapid innovation, centered around development, cutting-edge technologies, and customer satisfaction.

Limitations of implementing standards

Nothing gives a hundred percent result when we talk about business improvement or managing costs and generating value, since there is always scope for improvement. Sticking to standards also comes with some drawbacks:

1. Slow returns: Minimal short-run benefits can be a demotivating factor for the business. Returns from implementing standards are generated at a higher rate initially, but gradually they level off at a saturation point. To avoid this:
- Gather regular feedback and make necessary improvements and adjustments.
- Implement a phased approach, i.e., quick wins that generate savings alongside long-term process implementation.

2.
Time consuming: Execution is tedious and time-consuming, as the overall design and flow of the organization need to change. The process can be driven from the top or from the bottom of the organization. A top-down policy for implementing standards means that policies are pushed down from the top of the company. The advantage of a top-down approach is that it ensures the policy is aligned with company strategy. On the downside, it is a slow process to implement. The second approach is a bottom-up implementation policy. Relatively faster, the bottom-up approach addresses the operational concerns of employees because it starts with their inputs and concerns, so standards are built on calculated risks.

3. Trained professionals: Implementing defined standards requires experts or a board to adjust the structure and get acquainted with it. This can be achieved with the help of external vendors specialized in advising organizations, who can assist with quick implementations.

4. Disruptive changes: Forced implementation may have a negative impact on everyday operations in the organization. To avoid this, prepare for change with a systematic, thorough process to recognize and mitigate risks throughout the life cycle of the change. This should be coupled with transparency, which is the key to success and stakeholder engagement.

5. Short-term returns: There are no immediate profits; sustained effort is needed to accomplish outcomes in the long haul. One needs to weigh costs carefully against returns. It shouldn't be seen as just ROI, but also as how it impacts the organization if not implemented. For example, in today's world, cyber security is a critical aspect that no CIO/CSO can ignore.
Apr 23 2019 Before we explore how edge computing can benefit from computational storage, it is worth defining what edge computing is. Edge computing is an approach that places computational resources close to where data is generated. Edge computing is most often discussed in the context of the Internet of Things (IoT), and for good reason. IDC has stated that 25% of the data generated in 2025 will be real-time data, and 95% of that will be generated by IoT devices. Shipping all of that data from the source to a data center or a cloud is usually prohibitive, which is where edge computing comes into the picture. Edge computing accomplishes two tasks in this context: i) it provides the potential for real-time analysis of data; and ii) it significantly reduces the amount of data that has to be shipped upstream. So how does computational storage fit into edge computing? In addition to requiring both computing and storage resources, edge computing also presents additional challenges for conventional data processing architectures; these include limited space, power, and cooling. Just think about putting a standard 1U "pizza box" server into a self-driving vehicle or into an automated machine in an industrial environment: challenging would be an understatement. In contrast, computational storage solutions such as the NGD Systems Newport U.2 solid-state drive (SSD) pack 16TB of data storage, a multi-core ARM processor, and dedicated acceleration hardware for AI, encryption, and other capabilities into a 2.5" SSD that is only 15mm thick and consumes less than 10 watts on average. TECHunplugged recently wrote an independent analysis of computational storage. It includes an exploration of use cases for computational storage in this market, as well as an overview of the vendors currently building products for this market. You can either download the paper by going to the "bit.ly" listing in the illustration, or by downloading it here.
It has some great points about why computational storage makes sense for edge computing. Enjoy!
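To make the "reduce data shipped upstream" point concrete, here is a small sketch of edge-side aggregation: raw sensor readings are summarized locally, and only the summary travels to the cloud. The numbers and field names are invented for illustration.

```python
# A sketch of the data-reduction idea behind edge computing: instead of
# shipping every raw sensor reading upstream, the edge node collapses a
# window of readings into one small record. Values are illustrative.

def summarize_window(readings):
    """Collapse a window of raw readings into one upstream record."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": round(sum(readings) / len(readings), 2),
    }

raw = [21.0, 21.2, 20.9, 35.7, 21.1]   # one spike among normal readings
print(summarize_window(raw))
# Five readings go in; one summary goes upstream, an 80% reduction in
# records, and the max still reveals the spike worth investigating.
```

Real deployments pick summaries (or anomaly triggers) appropriate to the application, but the trade-off is the same: less upstream bandwidth in exchange for local compute.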
Five Eyes Alliance Advises on Top 10 Initial Attack Vectors

Cybersecurity Companies Weigh in on Pros and Cons of the Latest Alert

Misconfigured or unsecured security settings, weak controls and a lack of proper authentication protocols are among the 10 most common initial access vectors "routinely exploited" by threat actors, says Five Eyes, the alliance of cybersecurity authorities from the United States, the United Kingdom, Australia, New Zealand and Canada.

Initial Access Attack Vectors

Malicious threat actors regularly use techniques such as exploiting public-facing applications, external remote services, phishing and trusted relationships to gain initial access into victims' systems, according to the joint advisory from the Five Eyes alliance. To leverage these techniques, the advisory says, they use the following top 10 initial access vectors:
- Unenforced multifactor authentication: MFA, particularly for remote desktop access, can help prevent account takeovers.
- Incorrectly applied privileges or permissions and errors within access control lists: These mistakes can prevent the enforcement of access control rules and allow unauthorized users or system processes to be granted access to objects.
- Outdated or unpatched software: Unpatched software may allow an attacker to exploit publicly known vulnerabilities to gain access to sensitive information, launch a denial-of-service attack or take control of a system.
- Use of vendor-supplied default configurations or default login usernames and passwords: Many software and hardware products come out of the box with overly permissive factory-default configurations intended to make the products user-friendly and reduce the troubleshooting time for customer service.
- Lack of sufficient controls in remote services: These include virtual private networks, to prevent unauthorized access. During recent years, malicious threat actors have been observed targeting remote services.
- Lack of strong password policies: Malicious cyber actors can use a myriad of methods to exploit weak, leaked or compromised passwords and gain unauthorized access to a target system.
- Unprotected cloud services: Misconfigured cloud services are common targets for threat actors. Poor configurations can allow for sensitive data theft and even attacks such as cryptojacking.
- Internet-facing open ports and misconfigured services: This is one of the most common vulnerability findings. Threat actors use scanning tools to detect open ports and often use them as an initial attack vector.
- Lack of phishing detection and mitigation measures: Threat actors send emails with malicious macros, primarily in Microsoft Word or Excel files, that often go undetected due to inadequate technology adoption.
- Poor endpoint detection and response: Threat actors use obfuscated malicious scripts and PowerShell attacks to bypass endpoint security controls and launch attacks on target devices.

If organizations are aware of these attack vectors, they can be better prepared to counter the threats and reduce risks. Rob Joyce, director of cybersecurity at the National Security Agency, says that there is "no need for fancy zero-days when these weak controls and misconfigurations allow adversaries access." CISA Director Jen Easterly concurs and advises users, via Twitter, to review and act on the Five Eyes advisory: "Malicious cyber actors don't need to use zero-days to compromise your data—they just need to exploit poor security configs, weak controls & a range of bad cyber practices. Let's make it A LOT harder on them." (@CISAJen, May 17, 2022)

The mitigation measures offered in the joint advisory include:
- Use of control access: This includes adoption of the zero trust security model and role-based access control.
- Credential hardening: This includes implementation of MFA and changing vendor- or device-specific default passwords. - Centralized log management: Each application and system should generate sufficient log information. This plays a key role in detecting and dealing with attacks and incidents. - Employment of antivirus and detection tools: This includes intrusion detection and prevention systems. - Initiating and maintaining configuration and patch management programs: Always operate services exposed on internet-accessible hosts with secure configurations and implement asset and patch management processes. Useful Advice - But Tough to Implement As lists go, this is a very good one and enumerates the most common reasons why organizations fall victim to cyberattacks, says Chris Clements, vice president of solutions architecture at security firm Cerberus Sentinel. "By following CISA's recommendations, organizations can drastically improve their security posture and resilience to cyberattack," he says. But, he adds, many of these recommendations can be difficult to implement, especially at organizations that don't already have a strong cybersecurity culture. "It's also difficult for an organization without an existing culture to know where to begin. For example, the mitigations list starts with 'Adopt a zero trust security model.' Zero trust can be an incredibly effective approach to network defense, but can also be a significant undertaking to implement," he says. This is particularly true for organizations with large environments, legacy dependencies or limited resources for staff or budget, Clements tells Information Security Media Group. "It's critical for every organization to adopt a true culture of security to evaluate their individual risk, which best practices can be implemented quickly, and from both a short- and long-term strategy for defense. 
There should also be a candid assessment of areas where it makes sense to partner with outside organizations for assistance," he says. "An SOC is a great thing to have in this case, but not all organizations will have the resources to build and staff their own." Focus on Social Engineering While it's a good effort, this list, like many others, doesn't acknowledge that phishing and social engineering make up 50% to 90% of the cybersecurity problem, says Roger Grimes, data-driven defense evangelist at KnowBe4. "Like most warnings, it mentions phishing and social engineering almost in passing," he says. "None of the mitigations mention ... better training employees to recognize and defeat phishing attacks," he adds. Grimes says social engineering is the biggest threat by far, but "no one who is reading the document would know that defeating it is the single best thing you can do." Preventing social engineering "is better than firewalls, antivirus, MFA, zero trust defenses and everything else added up all together," he says and warns that "if defenders do not concentrate on and do more to defeat social engineering, they just are not going to be successful in keeping hackers and malware out." Is Passwordless the Future? The joint advisory also highlights just how frequently weak passwords and user credentials appear in attacker exploits. Whether it's exploitation of default passwords, phishing, guessing insecure passwords, failure to deploy MFA or use of stolen login credentials, passwords are clearly a key enabler behind many cyberattack scenarios, Mike Newman, CEO of My1Login, tells ISMG. Organizations need to take action against this threat, and one of the best remedies, Newman tells ISMG, is to remove passwords from the hands of users and enable the transition to passwordless security. "This limits the chances of passwords being stolen and phished for and also means users are not forced to employ insecure password practices," he says. 
Last month, the Five Eyes alliance published a similar list of the most routinely exploited vulnerabilities in the past year, which included CVEs for the Log4Shell, ProxyShell, ProxyLogon, ZeroLogon, Zoho ManageEngine AD SelfService Plus, Atlassian Confluence and VMware vSphere Client vulnerabilities (see: The Top 15 Most Routinely Exploited Vulnerabilities of 2021).
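The "internet-facing open ports" finding above boils down to a simple test: does anything answer a TCP connection attempt on a given port? Here is a sketch of that check. It probes only localhost, since scanning systems you don't own is not acceptable; the port is chosen at runtime rather than being any real service's port.

```python
# A sketch of the check behind "internet-facing open ports": attempt a
# TCP connection and see whether anything is listening. This example
# probes only localhost, using a throwaway listener it creates itself.
import socket

def port_is_open(host, port, timeout=0.5):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0  # 0 means the connect succeeded

# Open a listener on an ephemeral port so the check has something to find.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

print(port_is_open("127.0.0.1", port))   # True: a service is listening
listener.close()
print(port_is_open("127.0.0.1", port))   # False once it's closed
```

Scanning tools like the ones the advisory mentions repeat this kind of probe across many ports and hosts, which is why closing or firewalling unneeded ports shrinks the attack surface.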
The increased availability of Internet of Things (IoT) devices has allowed facility managers to implement automation that was previously impossible with legacy facilities management systems. IoT devices have also allowed facilities managers to gather a rich data set that allows for more data-driven decisions.

Utilization-Based Facilities Management

Traditional equipment maintenance involves scheduled inspections. Regardless of usage, equipment is serviced at regular intervals or cleaning is arranged at regular frequencies. Clearly, for equipment or spaces where use is irregular, such an approach is inefficient. IoT sensors allow granular monitoring of equipment utilization. For example, people counters can be deployed to count persons entering a toilet, and cleaners deployed only when a fixed number of persons have used it.

Tracking Asset Locations in Real Time

IoT technology can make assets location-aware. Indoor positioning technologies utilize RFID or Bluetooth to enable facilities managers to track the indoor positions of critical assets in real time. Without IoT, location tracking of assets can be very tedious and error-prone, with users having to report the latest locations manually. Accurate real-time knowledge of asset location often also translates to the need to store less inventory. For example, in a hospital with portable ultrasound machines, knowing where these machines are reduces handover time to the next user and optimizes their utilization.

Automated Fault Triggering

IoT sensors affixed to equipment can automatically trigger maintenance requests. When a piece of equipment stops functioning, these sensors can alert a workflow management system, which can immediately alert the appropriate technician. The result is lower equipment downtime and increased building user satisfaction. The concept of automated fault triggering applies to a varied range of facilities management processes.
In the context of Covid-19, for example, temperature sensors can alert technicians when refrigeration levels of vaccines have exceeded prescribed thresholds, and indoor air quality sensors can trigger alerts when ventilation is insufficient.

Improving Employee Well-Being

Companies have started paying more attention to the welfare of their most valuable assets: their employees. Sensors and smart building systems allow organizations to improve the well-being of building users by tracking the indoor environment and regulating it to better suit users' needs. Instead of constant disagreements over building temperature, spaces can be adjusted to fit the demands of their occupants: continuously monitoring a space's temperature and humidity makes it easy to tune the environment to user preferences. Air quality can likewise be monitored and modified to look after the health and welfare of employees. These benefits are some of the biggest reasons driving IoT and analytics deployment in modern facility management. Another factor boosting the implementation of IoT is predictive maintenance, which leads to the next point.

Moving Towards Predictive Maintenance with IoT Integration

More often than not, facility management teams wait to fix something until it has broken, but this reactive attitude is very costly. Facility management can instead be proactive and nip problems in the bud before they arise. Predictive maintenance, the holy grail of facility management, becomes possible by leveraging IoT: by constantly monitoring asset conditions and letting assets self-monitor, facility management services can determine whether an asset is likely to fail, giving them the chance to act before a failure occurs.
Moreover, assets that communicate with each other in an interconnected system can warn other assets that they are about to fail, stopping the process before the failure spreads to the entire system. Altogether, these predictive measures can increase asset performance and prolong an asset's lifespan through optimized operation.

Practically every asset, equipment item, and system generates data. Tracking this data in real time will raise an alert if something is out of order, or about to be, so the problem can be addressed in its infancy before it turns into an expensive repair or replacement. Assets and machinery will inevitably deteriorate over time; still, with predictive analysis and machine learning techniques, historical data can be used to predict when a specific asset requires, for instance, a refurbishment or a major overhaul. IoT thus gives way to improved building operating systems, averting failures and keeping building users happy.

Building management is no simple activity. Facilities vary, team members are spread out, and assets function independently of each other, which leads to operating in silos, hindering management from making informed decisions and producing repetitive tasks. Taking advantage of the unutilized data sitting within these assets and systems is needed to create an efficient building operation. Now is the time for connected facility management. Fortunately, it takes no wizard-like figure to create a fully digitized IoT smart building in today's world; it requires a collaborative effort from management, armed with comprehensive and actionable goals. When done right, facility management can transform into an automated system that benefits occupants, management, and owners.
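The "predict when an asset will fail" idea can be illustrated with a deliberately simple model: fit a linear trend to a degrading health metric and extrapolate to the failure threshold. Real predictive-maintenance systems use far richer models; the data and threshold below are made up.

```python
# Minimal sketch of predictive maintenance: estimate when a degrading
# health metric will cross its failure threshold, via a least-squares line.

def hours_until_failure(history, threshold):
    """history: list of (hour, health) samples; returns the hour at which the
    fitted trend crosses threshold, or None if the metric is not degrading."""
    n = len(history)
    mean_x = sum(h for h, _ in history) / n
    mean_y = sum(v for _, v in history) / n
    slope = (sum((h - mean_x) * (v - mean_y) for h, v in history)
             / sum((h - mean_x) ** 2 for h, _ in history))
    if slope >= 0:
        return None  # not degrading; no failure predicted
    intercept = mean_y - slope * mean_x
    return (threshold - intercept) / slope

# Illustrative data: bearing health drops ~0.5 per hour; failure below 60.
samples = [(0, 100.0), (10, 95.0), (20, 90.0), (30, 85.0)]
eta = hours_until_failure(samples, 60.0)  # → 80.0 hours
```

With the estimate in hand, a work order can be scheduled well before hour 80 rather than after the asset fails.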
What is robotic process automation (RPA)?

Robotic process automation (RPA) is technology that cuts down employees' workloads by automating repetitive, high-volume steps in processes. Software robots, or "bots", emulate the actions of human workers to execute rote tasks within applications via their UI. When RPA software handles this manual work, employees can devote their time and skills to more rewarding and valuable activities.

How RPA works

Robotic process automation is principally designed to automate repetitive analytical tasks and processes. This is accomplished by programming a desktop RPA interface that you can "teach" to mimic clicks and keystrokes, replicating a human typing information into various fields, accessing systems, compiling information in spreadsheets, or transferring data between disparate systems. Robotic process automation can be applied to high-volume, business-rules-driven, repetitive processes across your organization.

Save time and costs
Improve productivity, speed up results, and save money by investing fewer working hours.

Customer and employee satisfaction
Fulfill requests in a timely manner and reduce employees' manual workload. By eliminating human error, RPA increases data integrity and makes every analytical activity more impactful.

What processes are relevant to RPA?

Financial services operations
Implement automation solutions to perform mass customer account maintenance, monitor and flag accounts for suspicious activity, issue credit cards, open or close checking accounts, and more.

Lending and loan processing
Process loans and payment plans, automatically update accounts, scrape data to find interest due, execute batch transactions between banks and mortgage companies, and more.
IT system maintenance
Perform regular file and system backups, migrate data from legacy systems quickly and precisely, test new software, automate IT staff notifications for consistent monitoring, and more.

Employee data management
Leverage bots to accurately and quickly process payroll checks, automate leave request notifications, perform audit trails, and automatically add and delete employee information in your HR system.

The honest truth about robotic process automation

The market hype around RPA is warranted, but it is often sold as a one-stop solution for all process challenges when, in reality, traditional RPA software is limited. It can be expensive and time-consuming to deploy, and inflexible when processes require human interaction. To clear these digital transformation roadblocks, it's important to tailor your automation toolset to your organization's needs.

"Nintex has an ROI report that analyzes all workflows in your environment and the time spent on developing the workflows. According to this report, we paid off our investment in the first 12 months."

"My organization was able to use Nintex workflows to compress processes that previously took multiple steps (with multiple points of opportunity for human and technical errors) into a single step from a user point of view. Being able to eliminate the possibility of any one step being forgotten or bottlenecked was great. Additionally, the automation allows for the system to function independent of any one employee being at work, eliminating the fear of the "person who knows how it works" leaving and taking the functionality with them."
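As a rough illustration of the kind of rules-driven task RPA automates, consider flagging accounts for suspicious activity, one of the financial services examples above. The record format and rule thresholds below are assumptions for illustration, not Nintex's actual API or logic.

```python
# Minimal sketch of a rules-driven "bot": flag transactions that breach
# a per-transaction limit or push an account's daily total over a cap.

def flag_suspicious(transactions, single_limit=10_000, daily_limit=25_000):
    """transactions: dicts with 'account', 'day', 'amount'. Returns flagged ones."""
    flagged, daily_totals = [], {}
    for t in transactions:
        key = (t["account"], t["day"])
        daily_totals[key] = daily_totals.get(key, 0) + t["amount"]
        if t["amount"] > single_limit or daily_totals[key] > daily_limit:
            flagged.append(t)
    return flagged

txns = [
    {"account": "A1", "day": "2022-09-01", "amount": 12_000},  # over single limit
    {"account": "A2", "day": "2022-09-01", "amount": 9_000},
    {"account": "A2", "day": "2022-09-01", "amount": 9_000},
    {"account": "A2", "day": "2022-09-01", "amount": 9_000},   # daily total 27,000
]
suspicious = flag_suspicious(txns)  # → first and last transactions
```

In a real RPA deployment this rule set would be maintained in the automation platform and the flagged records routed to a human reviewer.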
Multimedia applications typically involve programming code, enhanced user interaction, and multiple different types or forms of media. There are five core categories of media that can be used as part of your multimedia system.

Text
This seems so obvious that many people forget about it. Text is the most common type of media used throughout multimedia systems and applications. Chances are, your multimedia system uses text and at least one other type of media to function. Whether your text relays information or reinforces it, it is a crucial part of any multimedia system.

Images
Many multimedia systems include digital images as part of the application. Many applications use custom buttons or interactive elements to further customize the application. Other images can include basic digital image files like JPEGs or PNGs; these file types allow for good image quality without a large file size.

Audio
In many multimedia systems, audio provides a crucial link between text and images. In applications, many audio files play automatically. If you are using audio on the web, the end user might need a plug-in media player installed to access it. Common audio formats include MP3, WMA, and RealAudio.

Video
Another common type of media found in multimedia applications is video. Digital video can be streamed or downloaded, and compressed as needed to reduce the file size. The most common file formats are Flash, MPEG, AVI, WMV, and QuickTime. Just like audio files, the end user might need a plug-in installed before they can watch the video.

Animation
Animation is a fun and common part of both online and desktop multimedia systems. Whether it means an interactive element that invites the user to engage with the application or simply a fun animation to watch, animation is a unique multimedia system element. Adobe Flash is commonly used to create animations viewable online.
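One routine task in a multimedia system is deciding how to handle a file based on its media type. The sketch below maps file extensions to the five categories discussed above; the extension lists are illustrative, not exhaustive.

```python
# Map common file extensions to the five core media categories.

import os

MEDIA_CATEGORIES = {
    "text":      {".txt", ".html", ".md"},
    "image":     {".jpg", ".jpeg", ".png", ".gif"},
    "audio":     {".mp3", ".wma", ".ra"},          # .ra = RealAudio
    "video":     {".mpeg", ".avi", ".wmv", ".mov", ".flv"},
    "animation": {".swf"},                          # Flash animation
}

def classify_media(filename):
    """Return the media category for a filename, or 'unknown'."""
    ext = os.path.splitext(filename)[1].lower()
    for category, extensions in MEDIA_CATEGORIES.items():
        if ext in extensions:
            return category
    return "unknown"

print(classify_media("lecture.mp3"))  # → audio
print(classify_media("button.png"))   # → image
```

A production system would typically also inspect magic bytes or MIME types rather than trusting extensions alone.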
Multimedia Systems from FiberPlus Whether you need an electronic security system installation, audio/visual support, structured cabling, or distributed antenna systems, FiberPlus has the expertise and experience to get the job done. We pride ourselves on offering our customers the best possible products, the best possible customer service, and the best possible prices. Are you ready to get connected to the future? Contact us online or give us a call at 800-394-3301 for more information. For more tips and tricks, follow us on Facebook, Twitter, Google+, LinkedIn, and Pinterest.
We have all heard the saying, "A picture is worth a thousand words." In that case, how many words is running code worth? Based on recent IETF Hackathons, I would estimate the number to be much higher.

The first thing that typically comes to mind when thinking of the IETF is internet standards, commonly known as RFCs. RFCs are the basis on which the internet as we know it is built. They are evidence of the importance of words, especially carefully crafted and reviewed words that specify what each network protocol does and how it works. The IETF Hackathon promotes running code to complement existing RFCs and accelerate the creation of new network protocols that improve the functionality, manageability, and security of the internet.

IETF 105 was a return trip to Montreal, having been here at the same time last year for IETF 102. As has become tradition, the IETF meeting kicked off with an IETF Hackathon. The first IETF Hackathon, at IETF 92 in Dallas, had around 40 participants working on 6 projects related to a few IETF working groups. At the IETF 105 Hackathon, July 20-21 in Montreal, over 300 participants worked on over 40 projects representing most IETF working groups. These included projects on QUIC, MUD, TLS 1.3, YANG, NETCONF, RESTCONF, DNS, DNS over HTTPS, WebRTC, and many more.

Here's a short video recorded at the hackathon:

Writing and reviewing internet drafts is still a critical activity before, during, and after each IETF meeting, but thanks to the IETF Hackathon, there is now an agreed-upon time and place to come together to focus on running code: code that tests theories, implements experimental algorithms, and verifies interoperability (or the lack thereof). This has not only improved the pace and quality of IETF standards, it has opened the doors to new forms of contribution and a new group of contributors, people who excel in software development and now find it easier and more rewarding to participate in the standardization process.
IETF hackathons are collaborative events with a shared goal of moving IETF work forward. Commemorative t-shirts and laptop stickers are prized takeaways from the event. IETF veterans work side by side with newcomers, exchanging ideas and collaborating on code. This is a great way for newcomers, especially developers interested in networking, security, and other IETF technologies, to have a welcoming experience and start making significant contributions immediately.

"This was our first year participating in the IETF hackathon, and we had a great time. We were able to improve the interoperability of LiveSwitch with WebRTC simulcast in large part due to being able to consult with Harald Alvestrand from Google, Nils Ohlmeier from Mozilla, and Lorenzo Miniero from Meetecho. Many thanks to everyone who helped to put this together!" – Anton Venema, WebRTC 1.0 improvements and testing team

"The IETF hackathon is a good way for us to get in touch with Internet draft/RFC authors and discuss the ideas that they are considering. We were able to work with the SCE group, some of the TLS group, and also reach out to open source projects who gave us feedback on our proposed implementations. Without the IETF hackathon, this would have been hard." – Loganaden Velvindron, cyberstorm.mu team working remotely from Mauritius

"Although I was only testing another participant's code, it was great to be able to informally work with other folks who were also interested in the tool being created. The talk around the table about different projects definitely helped everyone out." – Paul Hoffman, DNS team

"The Manufacturer Usage Descriptions (MUD) table brought together people from three continents to do interop testing against some four MUD managers with several different devices, and to develop a new reporting capability to further improve IoT device protections." – Eliot Lear, MUD draft author and implementor

"Eliot Lear, the inventor of MUD, was right there at the table. We were working jointly on a draft. The Hackathon enabled us to quickly iterate through ideas. It was a very productive weekend. This was my first IETF. Just wanted to thank the organizers for the experience. Thanks especially to the Hackathon organizers." – M. Ranganathan, MUD draft author and implementor

Insights gained during the hackathon are captured and shared in short presentations at the end of the hackathon. These presentations are recorded, and the materials are available via the IETF Hackathon GitHub. Relevant findings are brought back into working group meetings that run throughout the following week. This accelerates the standardization process and leads to better standards that are more complete, more precise, and easier to implement.

Hackdemo Happy Hour

In the interest of time, presentations at the end of the hackathon are limited to three minutes. They are meant to be conversation starters that highlight:
- What problem you tried to solve
- What you achieved
- Lessons learned, feedback to the working group
- Collaboration with open source communities and other SDOs

The following evening, we held Hackdemo Happy Hour. This provided an opportunity for teams to polish their projects and presentations, and then engage in more detailed conversations with the IETF community, including people not at the hackathon. This is a fun way to extend the reach of the hackathon by helping people connect, share information, and discover areas of common interest. More information about all the projects at the hackathon can be found on the hackathon wiki.

Thank you to sponsors and supporters

Huge thanks to ICANN, for both sponsoring this hackathon and stepping up to sponsor the next two hackathons as well. We greatly appreciate this sponsorship, and we welcome and encourage additional sponsors for future hackathons to ensure the event remains free and accessible to everyone.
Cisco DevNet sponsored and ran the first several hackathons and continues to be a big hackathon supporter. NoviFlow provided valuable support as well, making software licensing available for P4 programming tools.

"We are very happy that NoviFlow was able to contribute to the IETF105 Hackathon. As a local Montreal-based programmable-networking company, we appreciate the IETF's efforts to globally promote the evolution of networks and to drive awareness of new technologies such as P4 to the broader networking community. We also want to thank IRTF's COINRG and Barefoot Networks for helping us make our participation in this event possible." – Marc LeClerc, NoviFlow

- DevNet Automation Use Case Library
- Network automation Dev Center
- Network automation and programming learning paths
- Network programmability basics video course

* Photos and video thanks to © Stonehouse Photographic / Internet Society

We'd love to hear what you think. Ask a question or leave a comment below. And stay connected with Cisco DevNet on social! Visit the new Developer Video Channel
A quantum physics breakthrough could turn pipe dreams, such as ultra-high-speed quantum computers and teleportation, into real-world technologies. Eugene Polzik and his co-researchers at Denmark's University of Aarhus have managed to raise the mysterious concept of quantum entanglement, a link between two or more particles that have no physical contact, to an unprecedented scale.

The team gathered two clouds of cesium gas, each containing about a trillion atoms, into separate, sealed vessels. They then shined a laser through both clouds. For a split second, the clouds became entangled, and magnetic changes in one instantly affected the other. The previous entanglement record was a mere four atoms.

The development could lead to the creation of computers and communications networks that operate much faster than anything available today, says Peter Handel, a physics professor at the University of Missouri in St. Louis. "Information encoded in photons could be transmitted to places without sending them across space," he says. Quantum entanglement could also allow matter to be transported from one location to another by instantly duplicating the properties of one object in another place.

Other researchers, however, are skeptical about quantum entanglement's sci-fi aspects. "You can't transfer information faster than the speed of light; that's an immutable law of physics," warns Randall Hulet, a physics and astronomy professor at Rice University in Houston. Yet Hulet is confident that quantum science will eventually be able to provide tangible IT benefits. "Quantum mechanics' promise lies in things like unbreakable codes and computers that run exponentially faster by operating in multiple states rather than step-by-step," he says. "Quantum entanglement is significant, but it's also important not to get carried away by things."
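For readers unfamiliar with the concept, the "link" between particles can be written down exactly for the simplest two-particle case. The Bell state below is a textbook illustration of entanglement, not the collective atomic-spin entanglement achieved in the cesium experiment:

```latex
% A maximally entangled (Bell) state of two qubits A and B: measuring
% one qubit instantly determines the measurement outcome of the other,
% even though neither qubit has a definite state on its own.
\[
  \lvert \Phi^{+} \rangle \;=\; \frac{1}{\sqrt{2}}
  \bigl( \lvert 0 \rangle_{A} \lvert 0 \rangle_{B}
       + \lvert 1 \rangle_{A} \lvert 1 \rangle_{B} \bigr)
\]
```

Note that, consistent with Hulet's caution, measuring such a state yields correlated but random outcomes, which is why entanglement alone cannot carry information faster than light.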
Study finds effective cooling improves data centre energy efficiency by 64%

As data centers continue to swell in size, an issue growing in parallel is energy efficiency. This hot topic affects everyone involved with the design and operation of data centers. Schneider Electric says cooling takes a large chunk of a data center's total power usage, with the water chillers in particular being very hungry and as such requiring focused efforts to improve overall energy efficiency. In fact, research from Schneider Electric reveals water chillers account for between 60 and 85 percent of total cooling system energy consumption.

This has led to data centers being designed (where possible) to minimise the usage of chillers by maximising the usage of 'free cooling' with less power-hungry systems like air coolers and cooling towers that can keep the temperature of the IT space at a satisfactory level. Schneider Electric recently released a new whitepaper (How Higher Chilled Water Temperature Can Improve Data Center Cooling System Efficiency) that details the various strategies and techniques data center operators can deploy to permit satisfactory cooling at higher temperatures.

"One approach to reducing water chiller energy consumption is to design the cooling system so that a higher outlet water temperature from the chillers can be tolerated while maintaining a sufficient cooling effort. In this way, chillers consume less energy by not having to work as hard, and the number of free cooling hours can be increased," the report states.
Of course there is a trade-off involved with this approach, as it requires capital investment: more air-handling units must be installed inside the IT space to offset the higher water-coolant temperatures; equipment such as coils must be redesigned to provide adequate cooling when the chilled water (CHW) temperature exceeds 20 degrees Celsius; and adiabatic or evaporative cooling may be added to further improve heat rejection efficiency.

The whitepaper presented two case studies to illustrate its findings with real-world examples. The first was in a temperate region (Frankfurt, Germany), while the second was located in a tropical monsoon climate (Miami, Florida). In both cases, data was collected to assess the energy savings accrued by deploying higher CHW temperatures at various increments, while comparing the effect of deploying additional adiabatic cooling. The study found that an increased capital expenditure of 13 percent for both examples resulted in energy savings of between 41 and 64 percent, with improvements in total cost of ownership (TCO) of between 12 and 16 percent over a three-year period.

"Another inherent benefit of reducing the amount of energy expended on cooling is the improvement in a data center's PUE (Power Usage Effectiveness) rating. As this is calculated by dividing the total amount of power consumed by a data center by the power consumed by its IT equipment alone, any reduction in energy expended on cooling will naturally reduce the PUE figure," the report states. The PUE for the Frankfurt data center was reduced by 16 percent, while Miami saw a 14 percent decrease.
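The PUE arithmetic the report describes is straightforward. The sketch below works through it with made-up facility numbers (not the whitepaper's raw data) to show how trimming cooling overhead lowers PUE.

```python
# Worked illustration of PUE = total facility power / IT power,
# using illustrative loads rather than the case studies' actual figures.

def pue(total_facility_kw, it_kw):
    """Power Usage Effectiveness: total facility power over IT power."""
    return total_facility_kw / it_kw

it_load = 1000.0                          # kW consumed by IT equipment
before = pue(it_load + 500.0, it_load)    # cooling + other overhead = 500 kW
after  = pue(it_load + 290.0, it_load)    # overhead trimmed by higher CHW temps
reduction = (before - after) / before     # fractional PUE improvement
print(f"PUE {before:.2f} -> {after:.2f} ({reduction:.0%} lower)")
```

With these assumed numbers the PUE falls from 1.50 to 1.29, a 14 percent reduction, the same order of improvement the Miami case study reports.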
Automated car technology has been booming within the last few years. From companies like Uber and Lyft reimagining the way we look at public vehicle transportation, to companies like Google attempting to redefine how we use our vehicles, smart cars have been a talking point for anybody interested in technology. While all of this flashy technology is new and exciting, it leaves a lot of us wondering whether smart cars and automated vehicles pose a transportation security threat. Read on to learn what automated smart cars mean for the state and future of transportation security.

Open to Hacking

Smart vehicles are susceptible to being hacked, which is a big problem for transportation security. Hackers have the ability to breach the internal system of smart cars if they're not protected properly. They could potentially disengage brakes, tamper with transmissions, and see real-time GPS coordinates of smart cars as they're moving, all through a single loophole in a car's software. The solution might be a bit of a trade-off, as smart car manufacturers can bolster security by collecting a bit more of your information. Take, for example, how Lyft and Uber collect customer GPS information for a few minutes even after a ride is over.

Terrorist and Threat Accessibility

If somebody with the wrong intent gets their hands on a driverless vehicle programmed in a specific way, it might have disastrous results. Terrorists might potentially attach an improvised explosive device, or IED for short, to the underside of a smart vehicle and program it to cause severe damage to secure facilities and heavily populated spaces. The best way to stop this is through the use of Gatekeeper's Automatic Under Vehicle Inspection System. Our automatic under-vehicle inspection system is capable of quickly scanning under a vehicle for any anomalies or odd modifications, like an IED.
This makes transportation security easy and stress-free for both the manufacturer and the owner of the vehicle.

A problem that comes along with the many benefits of driverless cars is the removal of accountability: there is no driver to blame if a problem were to arise. This can be a serious issue for those of us worried about transportation security. One way to address it is to utilize a license plate reader, which can be used to figure out who a car is registered to and where it came from originally. Swift and extensive license plate reading can be tricky, but our own license plate reader system can scan plates quickly to provide you or your facility with all the information you need.

Groundbreaking Technologies with Gatekeeper

Gatekeeper Security's suite of intelligent optical technologies provides security personnel with the tools to detect today's threats. Our systems help those in the energy, transportation, commercial, and government sectors protect their people and their valuables by detecting threats in time to act. From automatic under vehicle inspection systems and automatic license plate reader systems to our on-the-move automatic vehicle occupant identifier, we offer full 360-degree vehicle scanning to ensure any threat is found. In 36 countries around the globe, Gatekeeper Security's technology is trusted to help protect critical infrastructure. Follow us on Facebook and LinkedIn for updates about our technology and company.
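To illustrate one small piece of an automatic license plate reading pipeline, the sketch below applies a format check to OCR output before any registry lookup. The plate pattern is a generic assumption, not any jurisdiction's real format or Gatekeeper's actual algorithm.

```python
# Hypothetical sketch: after OCR extracts characters from a plate image,
# a format check filters obvious misreads before a registry lookup.
# The "ABC-1234" style pattern below is purely illustrative.

import re

PLATE_PATTERN = re.compile(r"^[A-Z]{2,3}-?\d{3,4}$")

def is_plausible_plate(ocr_text):
    """Return True if the OCR text matches the assumed plate format."""
    return bool(PLATE_PATTERN.match(ocr_text.strip().upper()))

print(is_plausible_plate("abc-1234"))  # → True
print(is_plausible_plate("A1"))        # → False
```

Real systems also handle per-jurisdiction formats and OCR confusions (such as O vs 0) before querying a registration database.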
Thank you for Subscribing to CIO Applications Weekly Brief

Transforming Industries with Machine Vision

Sanghamitra Deb, Staff Data Scientist, Chegg

Artificial intelligence is transforming education, healthcare, retail, automobiles, and most other industries. Machine vision, or computer vision, is a remarkable part of AI that gives computers the ability to see and comprehend, similar to a human being. Vision is a more fundamental means of communication than language; a picture is worth a thousand words. With the widespread use of smartphones, people want to interact with the digital world via images. Instead of typed instructions, computers will receive information from cameras or sensors. For example, we unlock our phones with our faces, we want our fitness app to know what food we ate and the calories associated with it, and we want our education app to infer what we need to learn without our having to type it. Computer vision gives machines this ability.

The past decade has been revolutionary for computer vision, with the success of deep learning algorithms and advancements in hardware dedicated to graphics and image processing. This has led to a huge uptick in investments in the field.

Fig 1: Investment in Computer Vision since 2011

Machine vision solutions require fast processing of computationally expensive algorithms on large numbers of images. The processors might also need to be embedded in drones, smartphones, and other portable devices. While processing on computers, we have a large range of hardware options: GPUs, hybrid GPU+CPU setups, and high-performance clusters (HPCs). High-performing GPUs and CPUs are readily available from cloud compute providers. Companies are no longer limited by equipment setup costs and can easily start machine vision projects. Recent advances in embedded vision systems have produced miniature cameras and processors, and there is huge potential demand in this space, which will be shaped by the evolving imaging capacities of embedded vision.
Visual understanding is difficult since it requires knowledge beyond the objects present in an image; it represents the ability to comprehend the actions and goals of the subjects in an image or video. This is very straightforward for humans but immensely complex for an algorithm. These achievements have been possible because of well-timed hardware and algorithmic advancements, and the availability of examples, or what is more commonly known in machine learning as annotated data.

Machine Vision Applications

To understand the utility of machine vision, let us dive into a few industries being transformed by computer vision.

• Automobile Industry: Every year, traffic accidents account for more than 2% of deaths, and a large percentage of people also get severely injured due to human error. Currently, machine vision techniques are being used to incorporate safety features that make sure the driver stays within the lane and does not get too close to other cars. There are also features that help with tight parking situations. Autonomous vehicles are expected to use cameras and sensors all around the car and perform multiple tasks simultaneously to operate the vehicle without human intervention.

• Healthcare: Machine vision is being used in healthcare in a number of applications, ranging from predicting heartbeat rhythm disorders to tracking blood loss during childbirth. The advantages of machine vision techniques are precision and timely detection of rare diseases. Precision medicine can prevent unnecessary invasive surgeries or expensive medications. Machines are able to attain precision from numerous examples covering patterns that humans might miss due to sensory limitations. Timely detection of fatal diseases such as cancer, using the pattern recognition employed by machine vision techniques, will have a large impact on saving lives.
Machine vision can also assist doctors by taking over tedious tasks and making them more efficient. The ability to make predictions at an individual level will make personalized medicine achievable.

Machine vision is also fundamentally changing areas such as quality assurance, where defects on the production line are automatically detected. In retail, customers could potentially walk into a store, pick the items they want, and walk out, with a seamless transaction performed by an automated system using machine vision. Other industries experiencing a transformation include robotics, agriculture, fitness, education, finance, and marketing, to mention a few.

What does this mean for your business?

The possibilities for machine vision solutions to improve a business can be numerous. It is important to identify the problems that can be solved by machine vision and get stakeholder buy-in. The next step is to decide whether to build a solution in-house or to buy services. If the problem can be cast as a standard problem, such as facial recognition or object detection, then buying services is a good place to start. If the problem is custom or domain-specific, building a solution makes more sense. This requires expertise, which can be achieved through in-house training and external hires. Another important part of achieving an accurate model is having the examples, or annotated data, that deep learning models need in order to learn patterns. It is important to collect good-quality annotated data by identifying reliable third-party data vendors with a track record of dependable subject matter experts. The secret to successfully implementing emerging technologies such as machine vision to add value to a company lies in its ability to invest in all these components. Research and development is another important part of success in machine vision, since the field is rapidly evolving.
The future of machine vision lies in the enhancement of processing speed integrated with improved modeling capabilities. An important factor in adopting vision technology is ease of use for the end user, who could be a factory operator or medical staff. In the future, building machine vision capabilities will also get easier with the availability of higher-level abstractions of algorithms and accessible solutions.
Agriculture is arguably one of the largest industries in the world, and one which is always at the forefront of adopting technological advances to improve its efficiency and future preparedness. To this end, precision agriculture, artificial intelligence (AI), and machine vision all share the same goal: to enhance and improve farming operations. In this article, we'll be looking at how both artificial intelligence and machine vision are being used to bring precision agriculture, and smart farming in general, to life. So, before we look into AI and machine vision, let's take a brief look at what precision agriculture actually is.

What is Precision Agriculture?

Precision agriculture is widely regarded as the practice of using satellite, cellular networks, GPS, sensor and IoT devices, robotics, control systems, automation and various other technological and communications innovations. The concept came into practice in the early 1990s with the advent of GPS-guided tractors and is now a booming part of agriculture and farming thanks to advances in the Internet of Things, wireless communications, automation and various other smart technologies. In fact, such is the growth in this aspect of farming that a report from Hexa Reports suggested that precision agriculture could grow by up to $43.4 billion by 2025. This is in part due to an explosion over recent years of IoT and smart devices that are now available to the market and currently being adopted. This adoption in turn drives further innovation and development, and it is hoped that, with the help of technologies such as AI and machine vision, precision agriculture will become much more beneficial and, therefore, increasingly widespread. So, how is it that artificial intelligence will aid in the growth of precision agriculture? Let's now turn our attention to exactly that.

Artificial Intelligence Within Precision Agriculture

Artificial intelligence is not a new technology on the agricultural scene.
Many farms and farmers are using AI systems in a variety of different roles, and the technology is slowly becoming more widespread among smart farms; according to a recent market research report, agricultural artificial intelligence will grow by up to $2.6bn by 2025. This means that, within precision agriculture, artificial intelligence will most likely become more and more commonly used as an increasing number of farms become smart farms through the adoption of technologies such as IoT sensors and devices and more advanced AI systems. Alongside this, the use of artificial intelligence within precision agriculture could tackle one of its biggest challenges: the huge amounts of data that are produced through precision agriculture and smart farming. Human farmers are often so inundated with data that it can be impossible for them to work with. AI systems would be able to interpret, understand, and relay that data in a way that is significantly more useful for farmers and those working in industrial agriculture. AI systems could also assist in the automation and operation of other technologies, such as robots equipped with machine vision, an area that we will now turn our attention to.

Machine Vision Within Precision Agriculture

Machine vision, like artificial intelligence, is not new in industrial operations. The manufacturing industry, for example, has been using machine vision systems for several years, and its benefits to that industry are easy to see. However, what about precision agriculture? The use of robotics within agriculture has made a steady climb over the past few years as new advances improve the technology and make it more feasible for farmers to introduce. Advances in machine vision systems could allow for much more independent robotic systems, as well as equip them with the means to gather highly valuable data from across an enterprise's area of operations.
For example, facial recognition systems are currently being built into robotic machine vision systems to allow individual animals to be identified, enhancing herd monitoring and minimizing the need for human intervention. High-throughput plant phenotyping is another example, whereby machine vision systems are able to collect and measure plant characteristics in order to build databases that could prove crucial in plant genetics and the development of rugged crops. As smart farming, precision agriculture, AI, machine vision and other technological advances such as automation continue to enhance and improve the way we work, it seems likely that data will eventually run the vast majority of our systems, leaving minimal requirements for human intervention. Human oversight will most likely not be done away with entirely; however, current trends do suggest that it will be technologies such as AI and automation that do the majority of the work. We've now seen that the adoption of advanced technologies such as artificial intelligence and machine vision systems can bring a plethora of benefits to both the practice of precision agriculture and smart farming as a whole. With an increase in the use of these technologies, the agricultural industry is sure to see widespread transformation, and precision agriculture will likely play a big part.
Three different cities we studied illustrate the power of this multilayered ecosystem approach.

Singapore: Singapore’s officials have said they want it to be a “45-minute city”—meaning that people can travel from their home to their place of work in less than 45 minutes. The government has built infrastructure for bus rapid transit (BRT), light-rail transit (LRT), and mass rapid transit (MRT). (Because sustainability is a key goal, municipal leaders have committed to having a 100% clean-energy public bus and taxi fleet by 2040.) Singapore has also collaborated with French transportation company Bolloré to develop an electric car–sharing program, called BlueSG. Meanwhile, the Singapore Economic Development Board, through various public–private partnerships, is working to create an innovation pipeline to take advantage of new mobility offerings such as on-demand autonomous shuttles—in collaboration with Alliances for Action (AfA), an industry-led coalition—and air taxis, in collaboration with Volocopter. Already a technology leader among cities, Singapore has been using advanced tech, including smart sensors, connectivity, and cloud computing, to enable a centralized bus fleet management system, which has improved service efficiency.

What the city is doing well: To achieve its vision of becoming a 45-minute city, Singapore is focusing on building its infrastructure (e.g., it is building intermodal mobility hubs to allow commuters to move seamlessly from one mode of transportation to another). The city is developing a robust innovation ecosystem, collaborating with many private-sector players. Singapore has proactively shaped both the demand side (e.g., congestion fines, vehicle quotas) and the supply side (e.g., nonmotorized transportation policy), and has provided guidance for forward-looking technologies (e.g., technical references for autonomous vehicles).
Istanbul: The city is focused on providing citizens with multiple ways to travel efficiently (MRT, LRT, and BRT), while expanding roads, highways, and bridges. It is experimenting with technologies such as an electronic tolling system, and is even looking into the possibility of developing flying cars. By adopting an ecosystem approach, the city has made inroads into tackling its mobility challenges.

What the city is doing well: Istanbul is focusing on its modes of mobility/B2C offerings and mobility assets to provide multiple options to its citizens (e.g., MRT, LRT, and BRT). To tackle its unique traffic challenge—the Bosphorus Strait separates the city’s Asian and European sides—Istanbul is building underground road tunnels as well as an underground metro line to mitigate congestion on bridges (the infrastructure layer). It has used the financing and insurance layer to finance capital-intensive infrastructure projects through public–private partnerships.

Brisbane, Australia: On average, Brisbane residents travel farther for work than they do for any other purpose—in fact, double the distance. To alleviate this burden on commuters, the city is developing a new public bus network of more than 1,200 vehicles and 6,200 stops. Queensland is currently trialing hydrogen fuel cell buses, which local authorities want to become as ubiquitous as mobile phones. Through an investment of AU$5.4 billion (US$3.8 billion), the Queensland Government is working on a new high-speed, high-frequency rail link, the Cross River Rail. The in-progress metro project and a provision for water taxis, coupled with the existing shared mobility and micromobility modes—such as electric bikes and scooters—aim at making the city highly accessible and connected. Brisbane places great importance on improving technology and developing infrastructure.
The Brisbane Metropolitan Transport Management Centre, operated in partnership with the Queensland Government, provides real-time monitoring and operation of the city’s road and busway networks. Smart parking and smart traffic lights, along with an integrated payment system, are helping it move ahead on the path of smart mobility. To support these smart mobility initiatives, the Brisbane city council aims to harness innovation by bringing together government, industry, research partners, and the private sector to share ideas, technologies, and data.

What the city is doing well: Brisbane is prioritizing its mobility infrastructure via an extensive network of high-frequency buses along major routes that connect the city with the outer suburbs. Brisbane is also focused on enhancing the modes of mobility/B2C offerings and the mobility assets, developing multiple modes of public transit such as rail, metro, and water ferries to make the city accessible and connected. Finally, Brisbane is employing data and technology enablement (e.g., one payment method that can be used across all public transit modes).
The H.264 standard for video encoding is also referred to as MPEG-4 AVC. H.264 was released as an open standard by telecommunications and IT industry organisations. It is often associated with high-definition applications, including HD DVD. The H.264 standard is a dominant format for recording video and compressing it into manageable files for storage and distribution. It is useful in higher-resolution surveillance applications where specific frame rates are required, such as in the recording of public environments. Cameras with the H.264 video compression format are beneficial in systems with significant network storage and archival requirements. H.264 includes a number of profiles, so even when the same H.264 compression is used, the quality and bit rates of the output may differ.

H.264 Video Compression and Security

Using H.264 video compression reduces the file size of a digital video significantly (by up to 80%) in comparison to more traditional formats, such as Motion JPEG. This reduction is achieved without sacrificing image quality. After encoding, video files occupy less bandwidth and demand less storage. These reduced requirements often lead to savings on commercial surveillance operations. From standard to miniature cameras, surveillance devices are available that offer multiple streams in H.264 to meet quality video requirements.

H.264 Encoder for Video Surveillance Applications

Latency refers to the amount of time needed for encoding, transmitting and decoding a video for display. H.264 has a low latency that is key to live monitoring and directional recording. An H.264 encoder for video surveillance can also be connected to numerous analogue cameras for an integrated, IP-based surveillance system. The encoded files can be generated without compromises to image quality. H.264 encoders for surveillance are available that support both H.264 and Motion JPEG applications.
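The storage arithmetic behind that 80% figure is easy to make concrete. The sketch below uses illustrative bit rates, not figures from any particular camera or profile, to compare archival storage for a Motion JPEG stream and an H.264 stream encoded at roughly 80% lower bit rate:

```python
def storage_gb(bitrate_mbps, hours, cameras=1):
    """Storage (in gigabytes) needed to archive continuous recording."""
    seconds = hours * 3600
    total_bits = bitrate_mbps * 1_000_000 * seconds * cameras
    return total_bits / 8 / 1e9  # bits -> bytes -> gigabytes

# One camera recording for 30 days: an assumed 10 Mbit/s Motion JPEG stream
# versus an assumed 2 Mbit/s H.264 stream (an 80% reduction in bit rate).
mjpeg_gb = storage_gb(10, hours=24 * 30)
h264_gb = storage_gb(2, hours=24 * 30)
```

At these assumed rates the H.264 archive needs one fifth of the space, which is where the savings on storage and bandwidth in surveillance operations come from.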
This course provides an overview of the security measures that need to be considered when implementing an organization's cloud environment. It discusses the importance of security and compliance, and highlights some of the tools and security mechanisms that can help ensure an organization's data integrity and security are maintained.

Audience: Cloud security specialists, cloud administrators and system programmers seeking to gain further skills and understanding of cloud security concepts, and the tools and mechanisms available.

Prerequisites: Completion of "Introduction to Cloud Computing" or equivalent knowledge, and basic IBM Z hardware and z/OS knowledge.

After completing this course, the student should be able to:
- Identify Some Common Security Threats Associated with Cloud Computing
- Identify Aspects of a Cloud Security Framework and Cloud Security Architecture
- Describe What Cloud Encryption Is and How It Is Used to Protect Data
- Describe How Identity and Access Management Plays a Part in Cloud Computing

Topics covered include:
- Understanding and Managing Cloud Security
- Security and Compliance
- Data Privacy and Data Protection
- Cloud Security Architecture
- Security as a Service (SECaaS)
- Cloud Security Best Practices
- Cloud and Data Encryption
- Identity and Access Management
- Cloud Access Security Broker
- Cloud Workload Protection Platform
- Cloud Security Posture Management
The EFTA, Innovative Financial Regulations

The Electronic Funds Transfer Act (EFTA) is a federal law that was passed in 1978. The EFTA was passed to provide American consumers with protection regarding the transfer of funds through electronic means, including ATM withdrawals, debit card usage, and automatic withdrawals from bank accounts. As such, the EFTA outlines the requirements that banks and other financial institutions must abide by when errors occur in relation to the electronic transfer of funds. Moreover, the EFTA also limits the liability that can result from the errors that can occur when conducting electronic transactions and transfers. The EFTA was passed in large part due to the rise of ATM usage across the country during the 1970s. As financial transactions were largely conducted via physical documents such as checks prior to the rise of ATM usage in the late 1960s and early 1970s, errors that occurred during financial transactions were typically limited to the parties directly involved. However, as errors involving ATM transactions were not directly connected to a particular individual or employee, legislation was needed to regulate the newfound use of ATMs, as well as the subsequent forms of electronic transactions that would later arise. To this end, the EFTA provides American consumers with a means to challenge errors, have said errors corrected, and receive some level of financial penalties as it pertains to electronic transactions.

What are the requirements for banks and other financial institutions under the EFTA?

Under the EFTA, banks, financial institutions, and other third parties who are involved in the transfer of electronic funds are required to disclose the following information to American consumers:

- A summary of liability as it pertains to unauthorized transfers and transactions.
- The contact information for the individuals who should be contacted in the event of an unauthorized transfer or transaction, as well as the procedure that consumers must follow when looking to file a claim.
- The types of transfers and transactions that consumers are permitted to make, any fees that are associated with these transfers or transactions, as well as any limitations that may exist.
- A summary of the rights that consumers have under the law, including the right to receive periodic financial statements and point-of-sale purchase receipts.
- A summary of a particular bank or financial institution’s liability in the event that they fail to either make or stop certain transactions or transfers.
- The specific circumstances under which a particular financial institution will share personal information relating to a consumer’s account and account activities with third parties.
- A notice detailing the process that consumers must follow to report an error, request further information, as well as the timeframe in which a report must be filed.

What’s more, while the EFTA was passed largely in the context of physical money and checks being deposited into ATMs, the law also covers a wide range of financial services. These services include ATMs, direct deposit, pay-by-phone transactions, internet banking, debit card transactions, and electronic check conversions. Moreover, the EFTA defines an “error” to include the following:

- An unauthorized electronic funds transfer.
- An incorrect electronic funds transfer from a consumer’s account.
- The omission from a periodic statement of an electronic fund transfer to or from a consumer’s account that should have been included.
- A bookkeeping or computational error made by a financial institution relating to an electronic funds transfer.
- A consumer’s receipt of an incorrect amount of money from an electronic terminal.
- An electronic funds transfer “not identified in accordance with the requirements of sections 205.9 or 205.10(a)” of the EFTA.
- A consumer’s request for any documentation required by sections 205.9 or 205.10(a), or for additional information or clarification concerning an electronic funds transfer.

Conversely, the EFTA does not consider the following actions to be errors under the law:

- A routine inquiry about the balance in the consumer’s account, or a request for duplicate copies of documentation or other information that is made only for tax or other recordkeeping purposes.
- The fact that a financial institution does not make a terminal receipt available for a transfer of $15 or less, in accordance with 205.9(e) of the EFTA.

What are the penalties for violating the EFTA?

Individuals, banks, and other financial institutions who fail to comply with the provisions of the EFTA are subject to a variety of penalties. The EFTA is enforced by the Federal Trade Commission (FTC), and failure to comply with the law “may result in liability for the actual damages sustained by the consumer, statutory damages of $100 – $1000, class action damages in the lesser of $500,000 or 1% of net worth, as well as reasonable attorney’s fees and costs as determined by the court”. Additionally, failing to comply with the provisions of the EFTA also constitutes a criminal offense. Through the passing of the EFTA in 1978, American consumers were provided with both protections and an avenue of recourse with respect to errors that can occur during the electronic funds transfer process. As reporting such errors has become second nature to many American citizens due to the digital and internet-based character of our current age, many people may not know that the EFTA protects their right to challenge financial errors that occur.
Because of the EFTA and its provisions, consumers can rest assured that if an error does occur at a bank or financial institution with which they conduct business, they will be able to remedy the issue.
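The statutory class-action cap quoted from the FTC above, the lesser of $500,000 or 1% of net worth, is simple enough to sanity-check numerically. The sketch below is purely illustrative; the dollar figures come from the quoted FTC language, while the example net-worth amounts are hypothetical:

```python
def class_action_cap(net_worth):
    """EFTA class-action damages cap: the lesser of $500,000 or 1% of net worth."""
    return min(500_000.0, 0.01 * net_worth)

# Hypothetical institutions: a small one is capped by the 1%-of-net-worth
# prong, while a large one hits the flat $500,000 ceiling instead.
small_cap = class_action_cap(20_000_000)    # 1% of $20M is $200,000
large_cap = class_action_cap(100_000_000)   # 1% of $100M exceeds the ceiling
```

The crossover sits at a net worth of $50 million, above which the flat $500,000 ceiling always applies.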
Install one layer of cyber security protection, and suddenly there’s a whole new way bad actors have discovered to penetrate it. Staying on top of the latest password security methods can sometimes feel like one big game of whack-a-mole. The truth is that cybersecurity is an ongoing, ever-evolving practice. Part of that practice is staying up-to-date on all the tactics criminals use to get into your systems. With this in mind, we’ve grouped the top six methods hackers use to steal your passwords. Using this list, we hope you can create a strong web of strategies and tools to help secure your business from a broad spectrum of attacks.

1. Phishing

We’ve talked about phishing recently, when scammers began taking advantage of the COVID-19 pandemic to target their victims. Criminals would send emails impersonating legitimate government organizations, attempting to trick users into clicking embedded links or downloading attachments that would take over the user’s system or act as a hidden backdoor to steal credentials. These types of phishing scams are one of the most common ways hackers steal your passwords. Phishing can occur through email or SMS – really any electronic communication where the sender can’t be readily identified.

2. Malware

Malware is another common tool criminals use to steal credentials. There is a broad range of malware families out there that do everything from secretly capturing your movements to outright locking up systems or destroying files. Keylogging malware will track the strokes typed directly onto a keyboard or pin pad. Spying malware might hack into webcams to watch and record you. Ransomware is a malware attack that blocks access to a business’s data or systems until that business pays up – typically costing a company millions of dollars. Then, of course, there’s the malware that sits quietly in the background collecting data, like passwords, from browser caches.
3. Brute Force

Bad actors use many tactics to make brute force attacks less time-consuming and expensive. Dictionary attacks utilize lists of unique words, common passwords, and compromised credentials, called cracking dictionaries, to quickly guess the passwords users are most likely to choose. Password spraying is similar, but the hacker typically already knows the victims’ usernames and is attempting to break into their accounts by more slowly running down a list of commonly used passwords. Credential stuffing takes this one step further: the attacker has already obtained lists of stolen credentials, password and username combos, and then tests those against other accounts to see if they match. This tactic works well even when sites have suitable security measures, because employees are reusing passwords that were compromised in data breaches of other sites. Mask attacks occur when hackers know something about a password, like whether a special character is required, and tailor the brute force guesses to that criterion. All of these approaches involve brute force guessing campaigns to hack into your systems.

4. Data Breaches

Data breaches are slightly different, because hackers can take advantage of password vulnerabilities, a configuration flaw or another vulnerability to gain network access to your system. Once they do, they can obtain the user table from your identity and access management system (like Windows Active Directory) that holds all your usernames and passwords. Good cybersecurity hygiene means that your business isn’t storing these lists of passwords in clear text but hashing and salting them. (You’re doing that, right?) However, as we’ve talked about before, hashing and salting aren’t foolproof. The dark web makes this kind of password attack viable by sharing tools like rainbow tables that can quickly decipher stolen credentials.
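To make the rainbow-table point concrete, here is a minimal sketch of salted, deliberately slow password hashing using only the Python standard library. The iteration count and salt length are illustrative assumptions, not a policy recommendation. Because each user gets a fresh random salt, identical passwords produce different stored digests, so a precomputed rainbow table is useless:

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # deliberately slow, to blunt offline brute-force guessing

def hash_password(password, salt=None):
    """Return (salt, digest) for storage; generates a fresh random salt if none given."""
    if salt is None:
        salt = os.urandom(16)  # unique per user: this is what defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, expected_digest):
    """Recompute the digest and compare in constant time to avoid timing leaks."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected_digest)
```

Two accounts using the same password end up with different salts and therefore different digests, which is exactly the property a cracking dictionary or rainbow table cannot precompute around.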
5. Technical Hacks

Outside of malware, other technologies make it easier for bad actors to get their hands on your passwords. Network analyzers, for example, allow interlopers to monitor and intercept data from your network, including plain-text passwords. All a hacker needs is access to a network switch or your wireless network, either by way of malware or by being there in person, and they can use an analyzer to search for and capture password traffic. A VPN can help close this kind of vulnerability, but with more employees working from home than ever before, many systems remain unguarded from this threat.

6. Targeted Personal Attacks

There are quite a few password-stealing methods at a criminal’s disposal when they can be somewhere in person. Targeted personal attacks are advantageous if a hacker is going after a specific, high-value individual. Spidering is a process where a hacker studies their target, gleaning intimate details about their work and home environments to socially engineer their way to the right username and password combo. Shoulder surfing is exactly what it sounds like: someone simply looking over your shoulder to ascertain your company login information or an MFA security code sent via text. Of course, there’s always snooping around an employee’s desk for a password scribbled on a sticky note!

We’ve done our best to group the main methods hackers use to steal your passwords. Now, the question is, what can you do about it? Businesses need to take proactive steps to mitigate their exposure to these tactics. Multi-layered cybersecurity strategies are the best defense for your organization. From implementing training for all of your employees to embracing admin tools that prevent users from creating compromised passwords, there are many methodologies you can use to defend against these password-stealing attacks.
The ASEC analysis team has recently discovered the constant distribution of malware strains that spread infection when an Excel file is opened. Besides infecting normal Excel files, they can also perform additional malicious behaviors, such as acting as a downloader or performing DNS spoofing, so users need to take great caution. The common trait of these malware strains is that they spread through VBA (Visual Basic for Applications) code included in Excel files. Upon opening an infected Excel file, a file containing the virus VBA code is dropped into the Excel startup path. Then, whenever any Excel file is opened, the malicious file dropped in the Excel startup path is automatically executed, infecting the file being viewed and performing additional malicious behaviors. After infection, malicious behaviors such as downloading or DNS spoofing occur, depending on the malware type.

Downloader Type Malware
– MD5: f8886b0d734c5ddcccd2a0d57d383637
– Alias: Virus/X97M.Downloader

This Excel file is infected with the virus, and as shown in the figure below, it has VBA code defined for the virus and additional malicious behaviors. The malicious code inside the file performs its activities by calling the “d2p” procedure for spreading malware and the “boosting” procedure containing the downloader logic from the Workbook_Open() procedure, which runs automatically when a workbook is opened. The d2p procedure, which contains the logic for spreading the virus, creates an Excel file named “boosting.xls” in the Excel startup path (see Figure 3). When any document is opened, the malware dropped in the path “%AppData%\Microsoft\Excel\XLSTART\boosting.xls” is automatically executed, infects the Excel file that is currently being viewed, and performs malicious behaviors. As shown in Figure 4, the “boosting.xls” file spreads the malware after a certain time has passed. When the infection spreads, the original code defined in the file is deleted.
The code then defines the code for infection and additional malicious behaviors in the Workbook_Open procedure of the Excel file. The downloader-type malware downloads and runs miner-related executables from the C2 after infection (see Figure 5). The C2 URLs for downloading are as follows:

Additionally, Excel virus strains of this type scan for the existence of the “%AppData%\Microsoft\Excel\XLSTART\boosting.xls” file. Only if the file does not exist do they spread the virus and perform additional malicious behaviors. This means that if a dummy file with a 0-byte size exists in the path, the malicious behaviors can be prevented in advance.

DNS Spoofing Type Malware
– MD5: 97841a3bf7ffec57a2586552b05c0ec5
– Alias: Virus/MSExcel.Xanpei

This type is also a normal Excel file infected with the virus, with VBA code defined for the virus and additional malicious behaviors. Unlike the downloader type mentioned earlier, this type drops a malicious Excel file with a different name (accerlate.xls) into the Excel startup path. Also, instead of downloading files, it performs DNS spoofing by changing the hosts file. The DNS spoofing C2 URL is as follows:

AhnLab is detecting the malicious document files and downloaded executables as shown below. Furthermore, AhnLab is using the ASD network to block the C2 URLs that the malicious Excel files connect to.

– Virus/XLS.Xanpei (2022.03.14.02)
– Virus/X97M.Downloader (2018.12.11.07)
– Virus/MSExcel.Xanpei (2022.03.14.03)
– Trojan/Win64.BitMiner (2017.11.13.03)

Subscribe to AhnLab’s next-generation threat intelligence platform ‘AhnLab TIP’ to check related IOC and detailed analysis information.
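The 0-byte dummy-file mitigation described above can be scripted. The sketch below is a hypothetical illustration: the XLSTART location and the boosting.xls name come from the analysis, while extending the same placeholder to accerlate.xls and everything else here is an assumption. It creates empty placeholder files so that the dropper's existence check finds them and skips infection:

```python
import os

def create_dummy_files(xlstart_dir, names=("boosting.xls", "accerlate.xls")):
    """Create 0-byte placeholder files that satisfy the droppers' existence check."""
    os.makedirs(xlstart_dir, exist_ok=True)
    paths = []
    for name in names:
        path = os.path.join(xlstart_dir, name)
        if not os.path.exists(path):
            open(path, "wb").close()  # touch: create the file empty (0 bytes)
        paths.append(path)
    return paths

if __name__ == "__main__":
    # On Windows the real Excel startup path is %AppData%\Microsoft\Excel\XLSTART;
    # falling back to the home directory here is only for demonstration.
    appdata = os.environ.get("APPDATA", os.path.expanduser("~"))
    xlstart = os.path.join(appdata, "Microsoft", "Excel", "XLSTART")
    print(create_dummy_files(xlstart))
```

This is a stopgap, not a substitute for an up-to-date anti-malware product with the detections listed above.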
It’s time for the Moon to get online. NASA is embarking on an ambitious project to build an Internet for the Moon, opening up its far side, setting the groundwork for human habitation, and preparing us for connected civilizations on Mars. We don’t have much in the way of operating technology on the Moon, but what little we have keeps in touch by direct communication with Earth. "So far, all of the existing lunar stuff has been direct-to-Earth,” NASA exploration and space communications projects division architect David Israel told DCD. This article appeared on the cover of Issue 39 of the DCD>Magazine.

The dark side of the Moon

There are two notable exceptions - China’s Chang'e 4, which achieved humanity's first soft landing on the far side of the Moon in 2019, and last year’s follow-up, which both used the Queqiao Relay Satellite to pull off their feats. “But for our missions, right now, we don't have any relay capability,” Israel said. That's a problem if the US ever wants to explore the far side of the Moon, which never faces Earth. Even on the side that faces us, craters and valleys can block direct line of sight, while the need for every mission and every device to have its own direct-Earth connection capabilities can limit their scope and ambition. It’s clear we need infrastructure to communicate with people and equipment on our nearest neighbor, because new missions are planned. After nearly five decades of neglect, the US plans to return to the Moon in a big way. The Artemis program will take the first woman and the next man to the Moon in 2024, if everything goes to plan, and the Biden administration doesn’t make any changes. By the end of the decade, NASA hopes to have set up sustainable operations on the planetary satellite, with an eye to a manned Moon Base.
To pull all of this off, potentially along with fleets of rovers, sensors, and exploration projects on both sides of the Moon, direct-Earth communication for every system just isn't going to cut it. Instead, it needs the LunaNet. "The way to think of the LunaNet is to substitute the word Internet for LunaNet, and then that's the mindset," Israel, who heads the project, explained.

The plan is to deploy a whole interconnected network of lunar science orbiters, lunar exploration orbiters, lunar surface mobile and stationary systems, Moon and Earth orbiters that provide relay and PNT (positioning, navigation, and timing) services to lunar systems, lunar ascent and descent vehicles, and associated Earth ground stations and control centers.

"Once we have relays in place, that's where you have the ability to give the network connection to the lunar South Pole, or the lunar far side," Israel explained, with both regions currently set to be visited in 2024. Equally important is PNT, with GPS-like technologies crucial for human navigation and autonomous systems on the Moon.

Of course, putting anything on and around a body some 250,000 miles from us is prohibitively expensive, so a core aspect of LunaNet is to ensure it can be expanded modularly, bringing connectivity to areas only when it is needed. "The analogy that I use is that when the mobile networks started, you could get your phone coverage when you were in the city," Israel said. "But when you went out to the country you didn't have coverage anymore. You didn't need a new phone, they just needed to put base stations out there. So the build-up of the LunaNet is very analogous to the build-up of mobile networks and the Internet."

Much like the terrestrial Internet, which sprung out of US military labs but has since grown into an international endeavor, the aim is to make LunaNet a joint effort.
Standards are built from existing efforts by the Interagency Operations Advisory Group (IOAG) and the Consultative Committee for Space Data Systems (CCSDS), which include all major space agencies. "We could start to have different types of providers for LunaNet," Israel explained. "International partners, commercial partners, NASA things, European Space Agency things, etc., all part of the larger infrastructure that provides LunaNet service to build out the larger LunaNet."

The more LunaNet Service Providers and technology providers the better, Israel believes. "We need to make sure that this doesn't become one commercial provider with proprietary things where everybody that's going to the Moon has to buy this company's systems." In procurement documents, NASA notes that "LunaNet relies on standards and conventions to achieve interoperability among Service Providers and Service Users. As a result, no one organization owns LunaNet. The LunaNet community includes government, commercial and academic entities; it could eventually include individuals as well."

The Moon's first telco

Part of that played out in late October 2020, when Nokia was awarded a contract to deploy a 4G network on the Moon by the end of 2022. Part of the Tipping Point program, the effort is "really its own thing," Nokia Bell Labs VP and head of the project Thierry Klein explained. "The ambition of the Tipping Point program is really to explore advanced new technologies to support future missions on the Moon or ultimately Mars."

The $14.1m contract will deploy cellular technologies similar to those used by telcos on Earth, to see if they work just as well on the Moon. "So the mission is to put the equipment on a lunar lander, and what would be the base station and the network side of the solution is integrated into the lunar lander," Klein said. Then the lander will deploy a rover, which will act as the equivalent of a phone user down here.
"A cellular link will be established between this rover and the equipment on the lander, and we will both explore short range surface communication at 1-300 meters as well as much longer range, where the rover can go up to 2-3 kilometers away from the lander."

While mission details are still being finalized, Nokia expects to have to run the cellular network for several weeks to validate that it is space-hardened. But Klein is confident the system will do well, as work began long before the Tipping Point contract was awarded. "We've been working on this for several years," he said. "We have built a unit that is space hardened already," with the company putting it through trials akin to lunar operation. First there's the shock, vibration, and acceleration of the journey; then there's the temperature changes, vacuum, and radiation to handle on the surface itself. "We put the equipment through those tests, as much as we can on Earth."

Nokia built a simulation model to capture the RF performance on the Moon, where there are no buildings or trees, but there are craters and rocks. "And then we found a place on Earth that has similar lunar landscape characteristics, the island of Fuerteventura in Spain," Klein said. "And we set up our entire system exactly in a configuration that we would expect the system to be on the Moon, and validated it from a communications perspective, from an RF perspective, as far as throughput, latency, coverage, and so forth is concerned."

Should the Nokia trial prove successful, along with a possible 5G follow-up, it could serve as a stepping stone to the wider LunaNet. "That's where if you're an astronaut, and you're cruising around on the surface of the Moon, then you get your network access through the equivalent of your cell phone through the Nokia cell tower," Israel said. "That's your LunaNet access point, and if your data is all sitting on the Moon, then maybe it all stays within that local lunar surface network.
But if you're trying to get data back to Earth or back around to the far side of the Moon, then maybe it's going to go from that base station, and work itself straight back to Earth through a relay or any combination thereof - the same way that Internet traffic is kind of bouncing around."

Just like our earthly efforts, building a network will not just be about connection points; it will require compute and storage. "Somebody could land something on the far side of the Moon, that takes raw data from all the different things around it," Israel explained. "All the sensors themselves don't have to be that smart, you could just have an Edge computing device on the Moon. Cloud computing and storage, and all this stuff that we see here which was enabled by networking protocols and then network access between Edges and devices, then becomes possible in our lunar scenarios."

Some of that lunar communication will likely be carried out with standard TCP/IP communications protocols, but for the perilous voyage to Earth that simply won't suffice. For all its comparisons to the Internet, network infrastructure back on Earth is primarily designed to be static: data centers, submarine cables, and cell towers all stay put. LunaNet has to be designed for space systems moving at different orbital speeds.

An Internet built for disruption

Satellites can come into view of potentially connecting systems, and move in and out of view within minutes, requiring rapid establishment and teardown of connections. Then there's potential radio signal interference and other issues that can cause data loss. With this in mind, LunaNet will rely on the disruption tolerant networking (DTN) bundle protocol, which uses a store-and-forward mechanism along with automatic retransmission to ensure data makes it to its destination.
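LunaNet's real DTN stack is specified by CCSDS (standardised for the Internet as the Bundle Protocol, RFC 9171); the toy Python sketch below, with all names invented for illustration, shows only the core store-and-forward idea: a node keeps custody of each bundle and retransmits it until the next hop acknowledges receipt, so an intermittent link delays data rather than dropping it.

```python
from collections import deque

class DTNNode:
    """A node that keeps custody of data bundles until the next hop
    acknowledges them, so a broken link delays rather than drops data."""
    def __init__(self, name: str):
        self.name = name
        self.custody = deque()

    def accept(self, bundle) -> bool:
        self.custody.append(bundle)
        return True  # custody acknowledgement sent back to the sender

def flaky_link(failures_before_success: int):
    """Simulate a contact window: the link is down for the first N attempts."""
    state = {"down": failures_before_success}
    def up() -> bool:
        if state["down"] > 0:
            state["down"] -= 1
            return False
        return True
    return up

def forward_all(sender: DTNNode, receiver: DTNNode, link_up) -> int:
    """Store-and-forward with automatic retransmission: a bundle leaves
    the sender's custody only once the receiver acknowledges it.
    (Real DTN bounds retries by a bundle lifetime; omitted here.)"""
    delivered = 0
    while sender.custody:
        if link_up() and receiver.accept(sender.custody[0]):
            sender.custody.popleft()
            delivered += 1
        # otherwise the bundle stays stored and is retried at the next contact
    return delivered

# A rover hands bundles to a relay despite the link being down at first.
rover, relay = DTNNode("rover"), DTNNode("relay")
rover.custody.extend(["image-001", "telemetry-017"])
print(forward_all(rover, relay, flaky_link(3)))  # 2: both bundles delivered
```

Here `flaky_link(3)` models a contact that is down for the first three attempts; both bundles still arrive because the sender never discards them, only retries.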
DTN came out of work started by NASA back in 1998 for an 'Interplanetary Internet' which, after a series of false starts and changed priorities, has re-formed as a plan to build a Solar System Internetwork (SSI) with DTN at its core.

This is where LunaNet may prove to be most valuable. There is a huge scientific benefit in Moon operations - the recent discovery of significant quantities of water shows just how little we know. But the Moon also serves as an important staging ground for operations elsewhere. "Working out things at the Moon is difficult, but it's more accessible than going to Mars," Israel said. If we hope to reach the goal of a man on Mars by the 2030s, such steps to iron out all the kinks will be vital.

If and when we do make it there, however, network connectivity becomes even more interesting. At some 140 million miles away, speed-of-light latency really starts to become apparent. "If you went to the Moon on vacation 20 or so years from now, certainly exchanging emails and posting stuff on the web would be fine," Israel said. "A phone call would be possible, but it would be difficult and annoying because of the few-second delay." On Mars, however, "it's measured in minutes," with delays compounded by "data interleaving, where you store up a buffer of data, and you kind of shuffle it in a predictable way. It's a powerful way to help deal with certain types of errors in the system, but there's a penalty of buffering time."

Should we build civilizations on Mars, as Elon Musk claims we will, their connection to Earth will be inherently slow. That will require more local storage and communication - and, yes, data centers on Mars. Such notions are a long way off, of course. Before we bring connectivity to Mars, Venus, or even asteroids, we must first focus on our closest neighbor. That's no mean feat - and one that could unlock significant scientific advances in its own right.
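The delays quoted above follow directly from the speed of light. A quick sanity check in Python, using commonly cited mean distances (the figures are assumptions, not from the article):

```python
C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

def one_way_delay_s(distance_km: float) -> float:
    """One-way signal delay imposed by the speed of light."""
    return distance_km / C_KM_PER_S

moon_rtt = 2 * one_way_delay_s(384_400)        # mean Earth-Moon distance
mars_min = one_way_delay_s(54_600_000) / 60    # Mars at closest approach
mars_max = one_way_delay_s(401_000_000) / 60   # Mars near conjunction

print(f"Moon round trip: {moon_rtt:.2f} s")          # ~2.56 s: the 'few-second delay'
print(f"Mars one way: {mars_min:.0f}-{mars_max:.0f} min")  # ~3-22 min
```

The Moon's round trip is merely annoying for a phone call; Mars's one-way delay of roughly 3-22 minutes rules out interactive protocols entirely, which is why DTN-style buffering becomes mandatory.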
Connecting the far side of the Moon will help us explore space well beyond our solar system, says Israel, by giving astronomers access to the clearest signals ever from space: "We don't have anything on the far side of the Moon, but that side has been the dream of radio astronomers forever, because it's like the quietest place,” Israel said. “All of the racket coming from Earth is blocked by the whole Moon." LunaNet will also expand the vision of agencies like NASA, Israel hopes. He pointed out that until the 1990s, software developers had to individually work out how to exchange data between computers for their projects. "Once the Internet hit, then suddenly, all those clever people didn't have to spend any brain power or time at all wondering how to get the data from here to there, they could just spend it all on just dreaming up their applications,” he said. "My goal with LunaNet is that it'll be just as enabling as the Internet was to the Earth. Once this whole network-based mindset gets into the user side, the people planning the missions, then there'll be all sorts of new types of missions and applications that just grow out of it.”
https://direct.datacenterdynamics.com/en/analysis/building-internet-moon/
Everyone who uses the internet is exposed to cyberthreats, which is unfortunate because we practically live online. We express ourselves on social media, communicate with family and colleagues via email and messaging apps, and even operate web-enabled gadgets, also known as Internet of Things (IoT) devices.

IoT devices like smartphones and wearables are internet-connected devices that use embedded systems to collect and act on data. Businesses use various IoT devices because these automate processes and make operations more efficient. This technology has grown massively in popularity; IoT devices are even expected to outnumber conventional computers in the next few years.

While IoT has made us even more connected and more efficient, security remains a crucial talking point when it comes to IoT adoption. Here are some of the biggest threats to IoT users today.

#1. IoT device manufacturing vulnerabilities

Manufacturers are responsible for creating secure devices, and yet many remain vulnerable to cyberattacks due to insufficient security design. Common vulnerabilities include the lack of a patching system, unsecured hardware, and weak passwords. Ironically, security cameras are among the most vulnerable devices because many models have little to no built-in security mechanisms. Wearable devices such as smartwatches are also vulnerable to threats; according to HP, common security issues in these devices are "insufficient user authentication and authorization." This may be attributed to the fact that manufacturers prioritize functionality and design over security. What's more, they don't follow a single standard for securing IoT devices, and many tend to neglect their products' security.

#2. Difficulties updating security patches

While some manufacturers do factor in security, hackers move fast. They can find zero-day vulnerabilities to exploit in new gadgets. One of the cardinal rules in cybersecurity is to apply patches as soon as they become available.
But it's not as simple with IoT software, which is more difficult to upgrade than a desktop or mobile OS. For instance, medical devices can be challenging to patch because the Food and Drug Administration (FDA) requires time-consuming processes to change these devices' software. IoT devices such as those used in farms and factories are relatively easier to patch, but they need to be taken offline, which may affect production. As a result, keeping these devices' software patched may not always be done on time.

#3. Employees' lack of security awareness

Cybersecurity awareness training teaches users to spot common threats such as phishing scams and to apply OS updates as soon as they become available. Unfortunately, not many users are properly trained to detect and prevent risks on IoT devices. Again, in the case of connected medical devices, one way that healthcare organizations can secure them is to inventory all devices, identify the different types being used, and have a working knowledge of how their systems work. This way, you can determine how these devices normally function and use that as a reference point to detect suspicious activity.

#4. Physical security

Criminals can also tamper with or steal IoT devices installed on office premises or at any remote location where they may be left unattended. Hackers can also plug a USB drive into a device to install malware and steal data. Smart home devices aren't completely safe from this threat either and can be hacked remotely. For example, hackers can exploit vulnerabilities in Wi-Fi routers, which connect all smart devices to your network. To know if a router is secure, check its privacy policies and how its updates are enabled.

A botnet, or a network of infected computers that attack and overwhelm a target with traffic, can render large portions of the internet inoperable.
This was the case with the 2016 Mirai botnet, which shut down big websites like CNN and Twitter and crippled an entire country's internet infrastructure. Unsecured IoT devices can cause this much devastation because of insufficient security updates. Behind IoT attacks are people who understand and exploit humans' propensity to commit errors. And since most IoT devices aren't designed with complex security mechanisms, they aren't difficult to hack. So beware.

More than ever, your New York small- or mid-sized business needs a comprehensive cybersecurity infrastructure that covers your entire network's security risks. Secure your business today: get a FREE consultation from Healthy IT's security experts. If you want to learn more about how hackers get around network security, read our FREE eBook, The Top 10 Ways Hackers Get Around Your Firewall And Anti-Virus To Rob You Blind.
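The "reference point" approach suggested under #3 above (inventory devices, learn how they normally behave, then flag deviations) can be sketched in a few lines. This is a hypothetical illustration, not a production intrusion-detection system: it baselines a device's hourly outbound traffic and flags readings far outside its usual range, the kind of spike a Mirai-style bot generates.

```python
import statistics

def build_baseline(samples: list[float]) -> tuple[float, float]:
    """Learn a device's normal behaviour as (mean, standard deviation)."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_suspicious(value: float, baseline: tuple[float, float], k: float = 3.0) -> bool:
    """Flag a reading more than k standard deviations from normal."""
    mean, stdev = baseline
    return abs(value - mean) > k * stdev

# Hypothetical hourly outbound traffic (MB) for one smart camera.
normal_hours = [2.1, 1.9, 2.4, 2.0, 2.2, 1.8, 2.3, 2.1]
camera_baseline = build_baseline(normal_hours)

print(is_suspicious(2.5, camera_baseline))    # False: ordinary fluctuation
print(is_suspicious(250.0, camera_baseline))  # True: flood-like traffic spike
```

Real monitoring tools use far richer features (destinations, ports, timing), but the principle is the same: you can only spot abnormal behaviour if you first know what normal looks like.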
https://www.myhealthyit.com/the-biggest-threats-to-internet-of-things-iot-devices/
A study from King's College London has found that people who drank red wine had increased gut microbiota diversity (a sign of gut health) compared with non-red wine drinkers, as well as an association with lower levels of obesity and 'bad' cholesterol.

In a paper published today in the journal Gastroenterology, a team of researchers from the Department of Twin Research & Genetic Epidemiology, King's College London explored the effect of beer, cider, red wine, white wine, and spirits on the gut microbiome (GM) and subsequent health in a group of 916 UK female twins. They found that the GM of red wine drinkers was more diverse than that of non-red wine drinkers. This was not observed with white wine, beer, or spirits consumption.

First author of the study, Dr. Caroline Le Roy from King's College London, said: "While we have long known of the unexplained benefits of red wine on heart health, this study shows that moderate red wine consumption is associated with greater diversity and a healthier gut microbiota that partly explain its long debated beneficial effects on health."

The microbiome is the collection of microorganisms in an environment and plays an important role in human health. An imbalance of 'good' microbes compared with 'bad' in the gut can lead to adverse health outcomes such as a weakened immune system, weight gain, or high cholesterol. A gut microbiome with a higher number of different bacterial species is considered a marker of gut health.

The team observed that the gut microbiota of red wine consumers contained a greater number of different bacterial species compared with non-consumers. This result was also observed in three different cohorts in the UK, the US, and the Netherlands. The authors took into account factors such as age, weight, regular diet, and socioeconomic status of the participants, and continued to see the association. The authors believe the main reason for the association is the many polyphenols in red wine.
Polyphenols are defence chemicals naturally present in many fruits and vegetables. They have many beneficial properties (including antioxidants) and mainly act as a fuel for the microbes present in our system.

Lead author Professor Tim Spector from King's College London said: "This is one of the largest ever studies to explore the effects of red wine in the guts of nearly three thousand people in three different countries, and provides insights that the high levels of polyphenols in the grape skin could be responsible for much of the controversial health benefits when used in moderation."

The study also found that red wine consumption was associated with lower levels of obesity and 'bad' cholesterol, which was in part due to the gut microbiota. "Although we observed an association between red wine consumption and gut microbiota diversity, drinking red wine rarely, such as once every two weeks, seems to be enough to observe an effect. If you must choose one alcoholic drink today, red wine is the one to pick as it seems to potentially exert a beneficial effect on you and your gut microbes, which in turn may also help weight and risk of heart disease. However, it is still advised to consume alcohol in moderation," added Dr. Le Roy.

Journal information: Gastroenterology. Provided by King's College London.

Microbiome refers to the collective genomes of the micro-organisms in a particular environment, and microbiota is the community of micro-organisms themselves (box 1). Approximately 100 trillion micro-organisms (most of them bacteria, but also viruses, fungi, and protozoa) exist in the human gastrointestinal tract1 2; the microbiome is now best thought of as a virtual organ of the body.
The human genome consists of about 23 000 genes, whereas the microbiome encodes over three million genes producing thousands of metabolites, which replace many of the functions of the host,1 3 consequently influencing the host's fitness, phenotype, and health.2

Box 1
- Microbiome – the collective genomes of the micro-organisms in a particular environment
- Microbiota – the community of micro-organisms themselves
- Microbiota diversity – a measure of how many different species and, dependent on the diversity indices, how evenly distributed they are in the community. Lower diversity is considered a marker of dysbiosis (microbial imbalance) in the gut and has been found in autoimmune diseases and obesity and cardiometabolic conditions, as well as in elderly people
- Operational taxonomic unit – a definition used to classify groups of closely related organisms. DNA sequences can be clustered according to their similarity to one another, and operational taxonomic units are defined based on the similarity threshold (usually 97% similarity) set by the researcher
- Colonocytes – epithelial cells of the colon
- Germ-free animals – animals that have no micro-organisms living in or on them
- Short chain fatty acids – fatty acids with two to six carbon atoms that are produced by bacterial fermentation of dietary fibres

Studying the gut microbiota

Twin studies have shown that, although there is a heritable component to gut microbiota, environmental factors related to diet, drugs, and anthropometric measures are larger determinants of microbiota composition.4 5 Animal models can help identify gut microbes and mechanisms, though the degree to which findings translate to humans is unknown. In humans, observational studies can show cross-sectional associations between microbes and health traits but are limited by the inability to measure causal relations. The strongest level of evidence is obtained from interventional clinical studies - in particular, randomised controlled trials.
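The diversity measures defined in box 1 weigh both richness (how many taxa are present) and evenness (how equally they are distributed). One commonly used diversity index, the Shannon index, can be computed directly from OTU counts; the counts below are invented for illustration.

```python
import math

def shannon_diversity(counts: list[int]) -> float:
    """Shannon index H = -sum(p_i * ln p_i) over taxon abundances;
    it rises with both species richness and evenness."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Two hypothetical communities with identical richness (4 OTUs each):
even = [25, 25, 25, 25]    # maximally even community
skewed = [97, 1, 1, 1]     # one taxon dominates (dysbiosis-like profile)

print(round(shannon_diversity(even), 3))    # 1.386 (= ln 4, the maximum for 4 taxa)
print(round(shannon_diversity(skewed), 3))  # 0.168
```

The skewed community scores far lower despite containing the same number of species, which is why low diversity is read as a marker of dysbiosis rather than simply a short species list.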
The composition of gut microbiota is commonly quantified using DNA based methods, such as next generation sequencing of 16S ribosomal RNA genes or whole genome shotgun sequencing, which also allow inference of microbiota functions.14 15 Metabolic products of the microbiota are now measurable in stool and serum using metabolomic methods.16

What does the gut microbiota do?

The gut microbiota provides essential capacities for the fermentation of non-digestible substrates like dietary fibres and endogenous intestinal mucus. This fermentation supports the growth of specialist microbes that produce short chain fatty acids (SCFAs) and gases.17 The major SCFAs produced are acetate, propionate, and butyrate.

Butyrate is the main energy source for human colonocytes, can induce apoptosis of colon cancer cells, and can activate intestinal gluconeogenesis, having beneficial effects on glucose and energy homeostasis.18 Butyrate is essential for epithelial cells to consume large amounts of oxygen through β oxidation, generating a state of hypoxia that maintains oxygen balance in the gut, preventing gut microbiota dysbiosis.19 Propionate is transferred to the liver, where it regulates gluconeogenesis and satiety signalling through interaction with the gut fatty acid receptors.18 Acetate – the most abundant SCFA and an essential metabolite for the growth of other bacteria – reaches the peripheral tissues, where it is used in cholesterol metabolism and lipogenesis, and may play a role in central appetite regulation.20 Butyrate and propionate, but not acetate, seem to control gut hormones and reduce appetite and food intake in mice.21

Gut microbial enzymes contribute to bile acid metabolism, generating unconjugated and secondary bile acids that act as signalling molecules and metabolic regulators to influence important host pathways.23 Other specific products of the gut microbiota have been implicated directly in human health outcomes.
Examples include trimethylamine and indolepropionic acid. The production of trimethylamine from dietary phosphatidylcholine and carnitine (from meat and dairy) depends on the gut microbiota, and thus its amount in blood varies between people. Trimethylamine is oxidised in the liver to trimethylamine N-oxide, which is positively associated with an increased risk of atherosclerosis and major adverse cardiovascular events.24

The gut microbiota and obesity

The gut microbiota seems to play a role in the development and progression of obesity. Most studies of overweight and obese people show a dysbiosis characterised by a lower diversity.31-39 Germ-free mice that receive faecal microbes from obese humans gain more weight than mice that receive microbes from healthy weight humans.4 A large study of UK twins found that the genus Christensenella was rare in overweight people and, when given to germ-free mice, prevented weight gain.4 This microbe and others such as Akkermansia correlate with lower visceral fat deposits.12 Although much of the confirmatory evidence comes from mouse models, long term weight gain (over 10 years) in humans correlates with low microbiota diversity, and this association is exacerbated by low dietary fibre intake.28

Gut microbiota dysbiosis probably promotes diet induced obesity and metabolic complications by a variety of mechanisms, including immune dysregulation, altered energy regulation, altered gut hormone regulation, and proinflammatory mechanisms (such as lipopolysaccharide endotoxins crossing the gut barrier and entering the portal circulation29 30; fig 1).

Microbiota diversity and health

Lower bacterial diversity has been reproducibly observed in people with inflammatory bowel disease,31 psoriatic arthritis,32 type 1 diabetes,33 atopic eczema,34 coeliac disease,35 obesity,36 type 2 diabetes,37 and arterial stiffness38 than in healthy controls.
In Crohn's disease, smokers have even lower gut microbiome diversity.39 The association between reduced diversity and disease indicates that a species-rich gut ecosystem is more robust against environmental influences, as functionally related microbes in an intact ecosystem can compensate for the function of other missing species. But recent interventional studies indicate that major increases in dietary fibre can temporarily reduce diversity, as the microbes that digest fibre become specifically enriched, leading to a change in composition and, through competitive interactions, reduced diversity.22

The functional role of the gut microbiome in humans has been shown using faecal microbiota transplantation.42 This procedure is effective in cases of severe drug-refractory Clostridium difficile infection and is now routinely used for this purpose around the world.43 For other pathologies, faecal transplants are not yet clinical practice but have been explored.44 For example, transplanting faeces from a lean healthy donor (allogeneic) to recipients with metabolic syndrome resulted in better insulin sensitivity, accompanied by altered microbiota composition, than using autologous faeces.45

Effects of food and drugs on the gut microbiota

Specific foods and dietary patterns can all influence the abundance of different types of bacteria in the gut, which in turn can affect health (table 1).
Table 1: Examples of foods, nutrients, and dietary patterns that influence human health linked to their effect on the gut microbiota

| Dietary element | Effect on gut microbiome | Effect on health outcomes mediated by gut microbiome | Human observational studies | Human interventional studies |
| --- | --- | --- | --- | --- |
| Low FODMAP diet | Low FODMAP diet increased Actinobacteria; high FODMAP diet decreased abundance of bacteria involved in gas consumption58 | Reduced symptoms of irritable bowel syndrome56 | Yes | Yes |
| Cheese | Increased Bifidobacteria,97 98 which are known for their positive health benefits to their host through their metabolic activities.99 Decrease in Bacteroides and Clostridia, some strains of which are associated with intestinal infections98 | Potential protection against pathogens.100 Increased production of SCFA and reduced production of TMAO99 | Yes | Yes |
| Fibre and prebiotics | Increased microbiota diversity and SCFA production22 101 102 | Reduced type 2 diabetes22 and cardiovascular disease103 | Yes | Yes |
| Artificial sweeteners | Overgrowth of Proteobacteria and Escherichia coli.104 Bacteroides, Clostridia, and total aerobic bacteria were significantly lower, and faecal pH was significantly higher47 | Induced glucose intolerance105 | No | No |
| Polyphenols (eg, from tea, coffee, berries, and vegetables such as artichokes, olives, and asparagus) | Increased intestinal barrier protectors (Bifidobacteria and Lactobacillus), butyrate producing bacteria (Faecalibacterium prausnitzii and Roseburia), and Bacteroides vulgatus and Akkermansia muciniphila.107 Decreased lipopolysaccharide producers (E coli and Enterobacter cloacae)106 | Gut micro-organisms alter polyphenol bioavailability, resulting in reduction of metabolic syndrome markers and cardiovascular risk markers108 | Yes | Yes |
| Vegan | Very modest differences in composition and diversity in humans and strong differences in metabolomic profile compared with omnivore diet in humans50 | Some studies show benefit of vegetarian over omnivore diet,109 others fail to find a difference110 | Yes | Yes |

FODMAP = fermentable oligosaccharides, disaccharides, monosaccharides, and polyols; SCFA = short chain fatty acids; TMAO = trimethylamine N-oxide

High-intensity sweeteners are commonly used as sugar alternatives, being many times sweeter than sugar with minimal calories. Despite being "generally recognised as safe" by regulatory agencies, some animal studies have shown that these sugar substitutes may have negative effects on the gut microbiota.46 Sucralose, aspartame, and saccharin have been shown to disrupt the balance and diversity of gut microbiota.46 Rats given sucralose for 12 weeks had significantly higher proportions of Bacteroides, Clostridia, and total aerobic bacteria in their guts and a significantly higher faecal pH than those without sucralose.47 Mice given sucralose for six months had an increase in the expression in the gut of bacterial pro-inflammatory genes and disrupted faecal metabolites.48

Food additives, such as emulsifiers, which are ubiquitous in processed foods, have also been shown to affect the gut microbiota in animals.49 Mice fed relatively low concentrations of two commonly used emulsifiers (carboxymethylcellulose and polysorbate-80) showed reduced microbial diversity compared with mice not fed emulsifiers. Bacteroidales and Verrucomicrobia were decreased, and inflammation-promoting Proteobacteria associated with mucus were enriched.49

Other areas of concern include the side effects of popular restrictive diets on gut health. These include some strict vegan diets, raw food or "clean eating" diets, gluten-free diets, and low FODMAP (fermentable oligosaccharides, disaccharides, monosaccharides, and polyols) diets used to treat irritable bowel syndrome. Vegans are viewed by some as healthier than omnivores.
A study of 15 vegans and 16 omnivores found striking differences in serum metabolites generated by the gut microbes but very modest differences in gut bacterial communities.50 A controlled feeding experiment in which 10 human omnivores were randomised to receive either a high fat, low fibre diet or a low fat, high fibre diet for 10 days found very modest effects on gut microbiome composition and no difference in short chain fatty acid production. Together these data support a greater role for diet in influencing the bacterially derived metabolome than just the short term bacterial community.50

Animal and in vitro studies indicate that gluten-free bread reduces the microbiota dysbiosis seen in people with gluten sensitivity or coeliac disease.51 52 But most people who avoid gluten do not have coeliac disease or proved intolerance, and a recent large observational study showed an increased risk of heart disease in gluten avoiders, potentially because of the reduced consumption of whole grains.53 One study showed that 21 healthy people had substantially different gut microbiota profiles after four weeks on a gluten-free diet; most showed a lower abundance of several key beneficial microbe species.54

The low FODMAP diet has been shown in six randomised controlled trials to reduce symptoms of irritable bowel syndrome.55 56 It is associated with a reduced proportion of Bifidobacterium in patients with irritable bowel syndrome, and responsiveness to the diet can be predicted by faecal bacterial profiles.57 Low FODMAP diets lead to profound changes in the microbiota and metabolome, the duration and clinical relevance of which are as yet unknown.58 59

In addition to diet, medication is a key modulator of gut microbiota composition.
A large Dutch-Belgian population study showed that drugs (including osmotic laxatives, progesterone, TNF-α inhibitors, and rupatadine) had the largest explanatory power on microbiota composition (10% of community variation).13 Other studies have shown major effects of commonly prescribed proton pump inhibitors on the microbial community, which could explain the higher rates of gastrointestinal infection in people taking these drugs.60 Antibiotics clearly have an effect on gut microbes, and low doses are routinely given to livestock to increase their growth and weight. A large proportion of antibiotic use in many countries is for agriculture—particularly intensive farming of poultry and beef.61 Several observational human studies, as well as many rodent studies, have pointed to an obesogenic effect of antibiotics in humans, even at the tiny doses found in food.61 But humans have very variable responses to antibiotics, and intervention studies have not shown consistent metabolic consequences.62 Pesticides and other chemicals are commonly sprayed on foods, and although levels can be high, solid evidence of their harm to gut health, or of any benefit of organic food, is currently lacking.63 Insufficient clinical evidence exists to draw clear conclusions or recommendations for these or other dietary preferences based on the gut microbiota. But future studies of food additives, drugs, and the safety and efficacy of dietary modifications must take into account these advances and their effect on the gut microbiota.
This is becoming clear in patients with cancer treated with immunochemotherapy, bone marrow recipients, and patients with autoimmune disorders on biologics, where small changes in their microbiota can cause major changes in their response.64 Moreover, animal experiments have shown that the protective effects of phytoestrogens on breast cancer depend on the presence of gut microbes (such as Clostridium saccharogumia, Eggerthella lenta, Blautia producta, and Lactonifactor longoviformis) that can transform isoflavones into the bioactive compounds.65 Box 2 summarises our current knowledge on the interactions between gut microbiota, nutrition, and human health.

Consensus and uncertainties

What we know
- Probiotic supplementation has several beneficial effects on human health
- The microbes in our gut influence human energy metabolism22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45
- Diet and medication have a strong influence on gut microbiota composition
- Microbiota composition influences response to chemotherapy and immunotherapy96
- Microbiome composition defines glucose response to foods and can be used to personalise diet94
- Dietary fibre intake influences gut microbiota composition and is related to better health86 87 104

What we don’t know
- Are natural probiotics in food better than probiotic supplements? Should we take them preventively?
- Can microbes influence food choices and appetite?
- Do low dose antibiotics in food affect human health?
- What is the effect of pesticides in food on the gut microbiome? Is organic food better for the gut microbiota?
- Should all new drugs and food chemicals be tested on the gut microbiota?
Manipulating the gut microbiota through diet

Changes to the gut microbiota can occur within days of changing diet; remarkable differences were found after African Americans and rural Africans switched diets for only two weeks.66 Increased abundance of known butyrate producing bacteria in the African Americans consuming a rural African diet caused butyrate production to increase 2.5 times and reduced synthesis of secondary bile acid.66 Another study comparing extreme shifts between plant and animal protein based diets showed these changes after only five days.67 But healthy microbiota are resilient to temporal changes by dietary interventions, meaning that homeostatic reactions restore the original community composition, as recently shown in the case of bread.68

Prebiotic foods and dietary fibre

Most national authorities define dietary fibre as edible carbohydrate polymers with three or more monomeric units that are resistant to the endogenous digestive enzymes and thus are neither hydrolysed nor absorbed in the small intestine.69 A subset of dietary fibre sources is fermentable, which means that they serve as growth substrates for microbes in the distal bowel.70 Some non-digestible carbohydrates have been referred to as “prebiotics,” which are defined as food components or ingredients that are not digestible by the human body but specifically or selectively nourish beneficial colonic micro-organisms (box 3).71 The prebiotic concept has been criticised for being poorly defined and unnecessarily narrow,72 and some scientists prefer the term “microbiota accessible carbohydrates,”11 which are essentially equivalent to fermentable dietary fibre in that they become available as growth substrates for gut microbes that possess the necessary enzymatic capacity to use them.70

What are prebiotics and probiotics?

Dietary amounts of protein, saturated and unsaturated fats, carbohydrates, and dietary fibre influence the abundance of different types of bacteria in the gut.
The microbiota can also be modified by adding live micro-organisms to food or by periods of fasting.
- Probiotics are live bacteria and yeasts that, when administered in a viable form and in adequate amounts, are beneficial to human health. They are usually added to yoghurts or taken as food supplements.
- Prebiotics are defined as a substrate that is selectively used by host micro-organisms conferring a health benefit. Although all compounds considered prebiotics are microbiota accessible carbohydrates or fermentable dietary fibre, the reverse is not true. The prebiotic concept is an area of current debate70
- Synbiotics contain a mixture of prebiotics and probiotics

Consuming resistant starches has been shown to enrich specific bacterial groups (Bifidobacterium adolescentis, Ruminococcus bromii, and Eubacterium rectale) in some people.74 75 The taxa enriched differ depending on the type of resistant starches and other dietary fibres,75 indicating that shifts are dependent on the carbohydrate’s chemical structure and the microbes’ enzymatic capacity to access them. Microbes also need to “adhere” to a substrate and tolerate the conditions generated by fermentation (such as low pH).76 The effect of microbiota accessible carbohydrates on the gastrointestinal microbiome composition can be substantial, with specific species becoming enriched to constitute more than 30% of the faecal microbiota.75 77 Thus, microbiota accessible carbohydrates provide a potential strategy to enhance useful minority members of the microbiome. These changes last only as long as the carbohydrate is consumed, and they are highly individual, which provides a basis for personalised approaches.
Many short term feeding trials with purified dietary fibres or even whole plant based diets either have no effect on microbiota diversity or reduce it,22 but can still have clinical benefits, potentially through metabolites such as short chain fatty acids.22 67 Low fibre intake reduces production of short chain fatty acids and shifts the gastrointestinal microbiota metabolism to use less favourable nutrients,78 leading to the production of potentially detrimental metabolites.79 80 Convincing evidence shows that the low fibre Western diet degrades the colonic mucus barrier, causing microbiota encroachment, which results in pathogen susceptibility81 and inflammation,82 providing a potential mechanism for the links of the Western diet with chronic diseases. Two recent studies showed that the detrimental effects of high fat diets on penetrability of the mucus layer and metabolic functions could be prevented through dietary administration of inulin.83 84 Overall, these findings, together with the role of butyrate in preventing oxygen induced gut microbiota dysbiosis,19 provide a strong rationale for enriching dietary fibre consumption to maintain intact mucosal barrier function in the gut.85

Considerable observational evidence shows that fibre intake is beneficial for human health. Two recent meta-analyses found clear links between dietary fibre and health benefits in a wide range of pathologies,86 87 and a recent intervention study found dietary fibres significantly reduced insulin resistance in patients with type 2 diabetes, with clear links to shifts in the microbiota and beneficial metabolites (such as butyrate).45

Probiotics are live micro-organisms that, when administered in adequate amounts, confer a health benefit on the host.88 Probiotics (mostly Bifidobacterium and Lactobacillus species) can be included in a variety of products, including foods, dietary supplements, or drugs.
There are concerns that most microbe supplements are unable to establish themselves in the gut and fail to exert an effect on the resident community.89 90 But probiotics can affect health independently of the gut microbiota through direct effects on the host; for example, through immune modulation or the production of bioactive compounds. The therapeutic effect of probiotic supplementation has been studied in a broad range of diseases. We searched the Cochrane library of systematic reviews for “probiotic*”, yielding 39 studies, and searched Medline for “systematic review” or “meta-analysis” and “probiotic*”, yielding 31 studies. We included information on systematic reviews of randomised controlled trials published in the past five years where the main treatment was probiotics (not dietary supplements in general). We included only studies that focused on comparisons of probiotics with a control group and that contained at least some moderate or high quality randomised controlled trials (in the estimation of the authors of the systematic review), resulting in a total of 22 systematic reviews (table 2). The analysis of 313 trials and 46 826 participants showed substantial evidence for beneficial effects of probiotic supplementation in preventing diarrhoea, necrotising enterocolitis, acute upper respiratory tract infections, pulmonary exacerbations in children with cystic fibrosis, and eczema in children. Probiotics also seem to improve cardiometabolic parameters and reduce serum concentrations of C reactive protein in patients with type 2 diabetes. Importantly, the studies were not homogeneous and were not necessarily matched for type or dose of probiotic supplementation or length of intervention, which limits precise recommendations.
Emerging areas of probiotic treatment include using newer microbes and combinations, combining probiotics and prebiotics (synbiotics),91 and personalised approaches based on profiles of the candidate microbes in inflammation, cancer, lipid metabolism, or obesity.92 Stable engraftment of a probiotic Bifidobacterium longum, for example, has been shown to depend on individualised features of the gut microbiota, providing a rationale for the personalisation of probiotic applications.93

Summary of systematic reviews analysing the role of probiotics on clinical outcomes

|Outcome||Reference||No of studies/participants||Evidence of benefit?||Results/conclusions|
|Clostridium difficile associated diarrhoea in adults and children||Goldenberg et al (2017)111||39/9955||Yes||Moderate quality evidence that probiotics are safe and effective for preventing C difficile associated diarrhoea (RR 0.30, 95% CI 0.21 to 0.42)|
|Necrotising enterocolitis||Al Faleh et al (2014),112 Rees et al (2017)113||17/5338||Yes||Enteral supplementation of probiotics prevents severe necrotising enterocolitis (RR 0.43, 95% CI 0.33 to 0.56) and all cause mortality in preterm infants (RR 0.65, 95% CI 0.25 to 0.81)|
|Antibiotic associated diarrhoea in children||Goldenberg et al (2015)114||26/3898||Yes||Moderate evidence of a fall in the incidence of antibiotic associated diarrhoea in the probiotic v control group (RR 0.46, 95% CI 0.35 to 0.61; I2=55%, 3898 participants)|
|Probiotics for preventing acute upper respiratory tract infections||Hao et al (2015)115||12/3720||Yes||Probiotics were better than placebo in reducing the number of participants experiencing episodes of acute upper respiratory tract infections, the mean duration of an episode, antibiotic use, and related school absence (12 trials, 3720 participants including children, adults, and older people)|
|Urinary tract infections||Schwenger et al (2015)116||9/735||No||No significant benefit for probiotics compared with placebo or no treatment|
|Prevention of asthma and wheeze in infants||Azad et al (2013)117||6/1364||No||No evidence to support a protective association between perinatal use of probiotics and doctor diagnosed asthma or childhood wheeze|
|Prevention of eczema in infants and children||Mansfield et al (2014)||16/2797||Yes||Probiotic supplementation in the first several years of life did have a significant impact on development of eczema (RR 0.74, 95% CI 0.67 to 0.82)|
|Prevention of invasive fungal infections in preterm neonates||Agrawal et al (2015)119||19/4912||Unclear||Probiotic supplementation reduced the risk of invasive fungal infections (RR 0.50, 95% CI 0.34 to 0.73, I2=39%) but there was high heterogeneity between studies. Analysis after excluding the study with a high baseline incidence (75%) showed that probiotic supplementation had no significant benefits (RR 0.89, 95% CI 0.44 to 1.78)|
|Prevention of nosocomial infections||Manzanares et al (2015)120||30/2972||Yes||Probiotics were associated with a significant reduction in infections (RR 0.80, 95% CI 0.68 to 0.95, P=0.009; I2=36%, P=0.09). A significant reduction in the incidence of ventilator associated pneumonia was found (RR 0.74, 95% CI 0.61 to 0.90, P=0.002; I2=19%)|
|Treatment of rotavirus diarrhoea in infants and children||Ahmadi et al (2015)121||14/1149||Yes||Probiotic supplementation resulted in a mean difference of −0.41 (95% CI −0.56 to −0.25; P<0.001) in the duration of diarrhoea.
Probiotics exert positive effect on reducing the duration of acute rotavirus diarrhoea compared with control|
|Prevention and treatment of Crohn’s disease and ulcerative colitis||Saez Lara et al (2015)122||14/821 ulcerative colitis; 8/374 Crohn’s disease||Yes||The use of probiotics and/or synbiotics has positive effects in the treatment and maintenance of ulcerative colitis, whereas in Crohn’s disease clear effectiveness has only been shown for synbiotics (no meta-analysis was performed)|
|Pulmonary exacerbations in children with cystic fibrosis||Ananathan et al (2016)123||9/275||Yes||Significant reduction in the rate of pulmonary exacerbation (two parallel group randomised controlled trials and one crossover trial: RR 0.25, 95% CI 0.15 to 0.41; P<0.00001)|
|Type 2 diabetes (fasting glucose, glycated haemoglobin test)||Akbari et al (2016)124||13/805||Yes||Probiotics significantly reduced fasting blood glucose compared with placebo (8 studies; standardised mean difference −1.583; 95% CI −4.18 to 4.18; P=0.000).
Significant reduction in HbA1c was also seen (6 studies; SMD −1.779; 95% CI −2.657 to −0.901; P=0.000)|
|Type 2 diabetes (insulin resistance, insulin levels)||Zhang et al (2016)125||7/425||Yes||Probiotic therapy significantly decreased homeostasis model assessment of insulin resistance (HOMA-IR) and insulin concentration (WMD −1.08, 95% CI −1.88 to −0.28; and WMD −1.35 mIU/L, 95% CI −2.38 to −0.31, respectively)|
|Necrotising enterocolitis in preterm neonates with focus on Lactobacillus reuteri||Athalye-Jape et al (2016)126||6/1778||Yes||Probiotic reduced duration of hospitalisation (mean difference −10.77 days, 95% CI −13.67 to −7.86; 3 randomised controlled trials), and late onset sepsis (RR 0.66; 95% CI 0.52 to 0.83; 4 RCTs) were reduced in the|
|Reduction of serum concentration of C reactive protein||Mazidi et al (2017)127||19/935||Yes||Significant reduction in serum C reactive protein after probiotic administration (WMD −1.35 mg/L, 95% CI −2.15 to −0.55, I2 65.1%)|
|Cardiovascular risk factors in patients with type 2 diabetes||Hendijani et al (2017)128||11/641||Yes||Probiotic consumption significantly decreased systolic blood pressure (−3.28 mm Hg; 95% CI −5.38 to −1.18), diastolic blood pressure (WMD −2.13 mm Hg; 95% CI −4.5 to 0.24), low density lipoprotein cholesterol (WMD −8.32 mg/dL; 95% CI −15.24 to −1.4), total cholesterol (WMD −12.19 mg/dL; 95% CI −17.62 to −6.75), and triglycerides (WMD −24.48 mg/dL; 95% CI −33.77 to −11.18) compared with placebo|
|Reduction of total cholesterol and low density lipoprotein cholesterol||Wu et al (2017)129||15/976||Yes||Lactobacillus consumption significantly reduced total cholesterol by 0.26 mmol/L (95% CI −0.40 to −0.12) and LDL-C by 0.23 mmol/L (95% CI −0.36 to −0.10)|
|Depressive symptoms||Wallace and Milev (2017)79 130||6/1080||Yes||No quantitative analysis was performed.
Most studies found positive results, and the authors conclude that compelling evidence shows that probiotics alleviate depressive symptoms|
|Vulvovaginal candidiasis in non-pregnant women||Xie et al (2018)131||10/1656||Yes||Probiotics increased the rate of short term clinical cure (RR 1.14, 95% CI 1.05 to 1.24, low quality evidence) and mycological cure (RR 1.06, 95% CI 1.02 to 1.10, low quality evidence) and decreased relapse rate at one month (RR 0.34, 95% CI 0.17 to 0.68, low quality evidence)|
|Chronic periodontitis||Ikram et al (2018)132||7/220||Yes||The overall mean difference for clinical attachment level gain between probiotics and placebo was significant (weighted mean difference 1.41, 95% CI 0.15 to 2.67, P=0.028)|

RR=risk ratio, SBP=systolic blood pressure, DBP=diastolic blood pressure, TC=total cholesterol, TG=serum triglycerides, SMD=standardised mean difference, WMD=weighted mean difference, CI=confidence interval
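The pooled estimates in the table (risk ratios and weighted mean differences with 95% confidence intervals) are typically produced by inverse-variance meta-analysis. As a rough illustration of the arithmetic only (the study numbers below are made up, not taken from any cited review), a fixed-effect pooling can be sketched as:

```python
import math

def pooled_effect(effects, std_errs):
    """Fixed-effect inverse-variance pooling of per-study effect sizes.

    Returns the pooled estimate and its 95% confidence interval.
    """
    weights = [1.0 / se ** 2 for se in std_errs]         # inverse-variance weights
    total_w = sum(weights)
    pooled = sum(w * e for w, e in zip(weights, effects)) / total_w
    se_pooled = math.sqrt(1.0 / total_w)                 # standard error of the pooled estimate
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    return pooled, ci

# Three hypothetical trials, each reporting a mean difference and its standard error
pooled, (lo, hi) = pooled_effect([-1.2, -0.8, -1.5], [0.4, 0.5, 0.6])
print(f"WMD {pooled:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Random-effects models, used when heterogeneity (the I2 values in the table) is high, add a between-study variance term on top of this.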
The General Data Protection Regulation (GDPR) compliance deadline of May 2018 is approaching quickly, and it impacts not only businesses that are located in the European Union, but organizations all over the world. Organizations that fall under its requirements are having to change the way they think about and handle the personal data they collect, store and process. GDPR mandates are aimed at protecting the privacy of personal data, and specific responsibilities toward that end are laid squarely on the shoulders of both the data controllers (organizations that determine how personal data is processed) and the data processors (organizations that perform the processing of the data on behalf of the controller). Locating and protecting data at rest is relatively easy, but data doesn’t stay still. It moves across the network, from controller to processor to third parties and back, even in and out of the country and the EU. Digital data can be copied and those copies can end up in unexpected places. GDPR compliance will require a strategy for dealing not just with stored data, but with data that’s always on the move. With data moving around and changing format, just finding the personal data in order to apply protections can be a challenge. That’s where a good data identification and classification system comes in. The details of implementing data classification are beyond the scope of this article, but keep in mind that your data classification scheme lays the foundation on which protection of personal data is built. Encryption is key In Article 32, Security of Processing, the GDPR requires that both controller and processor “implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk.” Then in subsection 1.(a), it specifically calls out encryption as one of those measures. 
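The classification step mentioned above can be illustrated with a toy scanner. This is a hedged sketch only: the regex patterns and category names are illustrative assumptions, not a GDPR-sanctioned taxonomy, and real data-discovery tooling is far more thorough.

```python
import re

# Illustrative patterns for two common kinds of personal data
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def classify(text):
    """Return the set of personal-data categories detected in a text field."""
    return {name for name, rx in PATTERNS.items() if rx.search(text)}

record = "Contact Jane at jane.doe@example.com or +44 20 7946 0958"
labels = classify(record)
print(labels)  # e.g. {'email', 'phone'}
```

Once fields are labelled like this, protection policies (encryption by default, restricted access) can be applied per category rather than per file.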
Encryption is a key element in protecting both data at rest and data in transit, but different encryption technologies are used, and it’s important to remember that not all encryption is created equal. Weak cryptographic algorithms, such as the broken MD5 and SHA1 hash functions, don’t offer the level of protection needed for personal data. Some virtual private networking (VPN) protocols are more secure than others. When symmetric password-based encryption is used, it is essential that the passwords be strong ones: the longer, the better; avoid dictionary words and numbers that are easily guessed; use strong password generators or random number generators. Remember, too, that encryption itself only provides for confidentiality; it does not ensure authenticity or integrity. Encryption must be combined with strong authentication methods to effectively protect personal data. Don’t rely on users to encrypt data. Technological solutions can apply encryption by default and enforce your encryption policies for data according to its classification. If you store or process personal data in the cloud, choose your cloud vendor carefully. Know what data security measures the cloud provider has in place by default, and what optional security measures you can select to enable. For example, Microsoft Azure allows you to encrypt the virtual disks on which your Windows or Linux VMs (virtual machines) run. Key management is a crucial factor in the security of encrypted data. Use strong key management solutions to protect encryption keys. Hardware Security Modules provide added security for personal data when used to store the cryptographic keys because they are separate devices that act as protective vaults for your encryption keys, so they aren’t exposed to the same risks as keys stored in the computer’s software.
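On the password point above, a slow, salted key derivation function is the standard way to turn a password into an encryption key, and a cryptographically secure random generator is the right source for passwords and salts. A minimal Python sketch; the iteration count and salt size are illustrative assumptions, not a recommendation for any specific product:

```python
import hashlib
import secrets

def derive_key(password: str, salt: bytes, iterations: int = 600_000) -> bytes:
    """Derive a 256-bit key from a password with PBKDF2-HMAC-SHA256.

    A slow, salted KDF resists brute force far better than a bare
    MD5/SHA1 hash of the password. The iteration count here is an
    assumption; follow current guidance for your threat model.
    """
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = secrets.token_bytes(16)                      # unique random salt per password
key = derive_key(secrets.token_urlsafe(24), salt)   # strong random password, then KDF
print(len(key))  # 32 bytes = 256 bits
```

The derived key can then feed an authenticated cipher; as the article notes, encryption alone gives confidentiality, so integrity still needs a MAC or an AEAD mode.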
Retail establishments and services operators maintain databases with contact and payment information from customers. Health care institutions keep detailed medical records pertaining to patients under their care. Companies of all types purchase and store lists of potential customers used in marketing and targeted sales, and their Human Resources departments keep on file personal information about their employees. This data may be stored in a number of different formats – in database files, spreadsheets, word processing documents, PDFs, electronic forms, email, instant messages, and so forth. It may be stored in many different locations, including copies of the same data in backup files and archives, or carried over into summaries or reports, or saved to portable removable drives, or written to optical media. Personal data can also sometimes be found in log files, temp files, and other unexpected locations. Data at rest is generally protected via access controls and encryption. Data stored on disk (including removable drives) can be encrypted via full-disk/full-volume encryption technologies or file-level encryption. Permissions can be set on files based on access control lists. Documents and spreadsheets can be password protected. Email can be encrypted with public key encryption or symmetric key encryption based programs or services. It’s important to note that for purposes of the GDPR, all personal data must be protected, and that includes unstructured data as well as structured. It’s possible for personal data to reside in images or even videos, and these too must be protected. Structured data, such as that in a database, is easier to protect because it’s all in one place. Unstructured data may be spread across multiple file servers, kept on the hard drives of individual users, copied to thumb drives or SD cards, and so forth. 
To protect personal data in general, but especially in regard to unstructured personal data, clear policies and adequate training of users in the handling of personal data are vital. Protecting personal data in transit In theory, it’s also fairly easy to protect data in transit. As with data at rest, encryption plays the major role. Personal data should be encrypted prior to sending it across the network, and encrypted connections should be used to protect the contents of the personal data while it is being transferred. As with data at rest, some methods of encrypting data in transit are better than others. For example, SSL (Secure Sockets Layer) is no longer considered adequately secure; it has been replaced by TLS (Transport Layer Security). In addition to encrypting the data, you can protect it while in transit by implementing best network security practices. That means good firewalls, strong (preferably multi-factor) authentication to access the network, antimalware and regular system updating to prevent personal data from being exposed through malicious software and vulnerability exploits. Use technological means to make encryption automatic when, for example, a user attaches a data file to an email message or copies it to a removable storage device. Use rights management to restrict what authorized users can do with the personal data files they work with; rights management can prevent them from forwarding email messages or copying or printing Word documents or Excel spreadsheets. When personal data needs to be sent to or from a remote location, it can be protected by using a VPN to create a secure encrypted tunnel for it to move through. The protocols used to create the tunnel should be chosen carefully to ensure maximum protection of personal data. For example, PPTP provides only basic 128 bit encryption and has numerous documented vulnerabilities, whereas L2TP/IPsec provides 256 bit encryption and includes data integrity checks.
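On the SSL-versus-TLS point above, applications can refuse legacy protocol versions outright. A minimal sketch using Python's standard ssl module (client side; the minimum version chosen here is an assumption — follow current guidance for your environment):

```python
import ssl

# Build a client-side TLS context that refuses legacy SSL/early TLS.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # reject SSLv3, TLS 1.0, and TLS 1.1
print(ctx.verify_mode == ssl.CERT_REQUIRED)    # certificate checking is on by default
print(ctx.check_hostname)                      # hostname verification is on by default
```

A socket wrapped with `ctx.wrap_socket(sock, server_hostname=host)` will then fail the handshake against peers that only speak the older, weaker protocols.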
OpenVPN uses an SSL/TLS tunnel and can use different algorithms to offer different levels of security. It is considered highly secure when a strong cipher is used. Making personal data less personal: pseudonymization In addition to specifying encryption as a means of protecting personal data, the GDPR also calls out the process of pseudonymization, which refers to the process of keeping data apart from personal identifiers, to decrease the risk of an individual’s privacy being violated if the data itself were exposed. When data has been pseudonymized, it can’t be linked to a specific identifiable person without the use of additional information that is kept in a separate location. Pseudonymized data is still considered personal data under the GDPR and still must be protected, but the regulation offers incentives for using pseudonymization, and it can help organizations meet the GDPR security requirements. If complying with the GDPR feels like trying to hit a moving target, that’s because personal data is, in fact, frequently moving – and that makes it more difficult to identify and protect. You can use a number of technologies, such as encryption and pseudonymization, to help keep data private whether it is at rest or in transit, but it’s essential that you know your options and choose carefully, work closely with your cloud provider, train users, and implement technological enforcements in order to meet your compliance goals.
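Pseudonymization as described above can be sketched with a keyed hash: each identifier is replaced by an HMAC tag, and the key is the separately stored "additional information" needed to re-link records. This is an illustrative sketch, not a complete pseudonymization scheme (it ignores key rotation and re-identification controls, and the key and field names are made up):

```python
import hmac
import hashlib

# Keyed pseudonymisation: identifiers are replaced by HMAC tags.
# The key must be stored separately from the pseudonymised data;
# without it, the tags cannot be linked back to a person.
SECRET_KEY = b"store-this-key-separately"   # illustrative; use a managed secret

def pseudonymise(identifier: str) -> str:
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"patient": pseudonymise("jane.doe@example.com"), "diagnosis": "T2D"}
print(record["patient"][:16])  # stable tag, meaningless without the key
```

Because the tag is deterministic under one key, records about the same person can still be joined for analysis while the raw identifier stays out of the data set.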
The z/VM Concepts, System Initialization and Shutdown course describes how virtualization, and in particular z/VM, has become more popular in data centers, and examines the processes used for z/VM start-up and shutdown. It is aimed at any IT staff requiring an overview of virtualization, how it relates to the z/VM system, and the steps involved in z/VM, service, and production guest system start-up and shutdown. A basic understanding of data processing concepts is assumed.

After completing this course, the student will be able to:
- Identify the characteristics of a virtualized system
- Describe virtualization as it relates to z/VM
- Identify the roles of IT people using z/VM
- Start and stop z/VM, service and production virtual machines
- Check the status of initialized environments
- Communicate with z/VM users

Topics covered: Introduction to Virtualization; Virtualization and Cloud Computing; Data Center Virtualization Benefits; Introduction to z/VM; What is z/VM?; Live Guest Relocation; z/VM Job Roles; Starting z/VM, Service and Production Virtual Machines; SYSTEM CONFIG and LOGO CONFIG Files; z/VM Warm, Force, Cold and Clean Start-ups; Commonly Used Service Virtual Machines; Starting Service Virtual Machines; IPLing the Production System Virtual Machine; Ensuring z/VM System Readiness; What is the Control Program (CP) and How is it Used?; Interacting with CP; Introduction to Privilege Classes; Performing Post Initialization Tasks; Communicating with Users; Shutting Down z/VM; Informing Users of a Shutdown; Shutting Down the Production System Virtual Machine; Shutting Down the Service Virtual Machines; Performing a Controlled Shutdown; Performing an Immediate Shutdown
Internet use is ubiquitous in this day and age. Pretty much everyone has a home internet connection, or easy access to some place with free Wi-Fi. Whether for business, school, or entertainment, most households use the internet on a daily basis. Despite this, many seem unaware of the dangers that can lie in wait for you if you’re not careful when browsing, and how surprisingly easy it can be to protect yourself. Here are some of the things you can do to make sure you keep yourself safe from viruses, identity theft, and other nefarious acts out there.

Layers of the Internet

You can vaguely divide the internet into “layers”. There are some sites at the top of the heap, backed by major corporations with a vested interest in making sure that, at the very least, malware attacks aren’t originating from their sites. Just below those is where the majority of the internet sits. Popular community sites (like forums and message boards) and smaller store pages and the like sit here. Both of these layers can typically be trusted to provide what they say they will, whether it be a product, a place to discuss a specific topic, or educational info on various topics. Below that, however, are sites that can be described as shadier. These are usually exceptionally niche sites (like very small-scale single topic blogs) or ones that sit in some kind of gray area in terms of savoriness or legality. Torrent sites and the like, which are usually safe on their own, can nevertheless be used to inadvertently access harmful programs. These types of sites are usually a complete crapshoot in terms of safety, because they’re largely unregulated. This is where you’ll need to practice some of the other best practices we’ll talk about below. Below even this is what is typically referred to as the “deep web”. Not to be confused with the “dark web”, these are simply sites that are not casually accessible.
They don’t show up on standard search engines, and can be pretty much anything, not all of which is nefarious; most email servers are kept here, for instance. You need the exact URL to access one of these, so it’s impossible, or at least exceptionally unlikely, to do so by accident. And finally, the “dark web”. Like the deep web, you don’t access the dark web by accident. You need a specific type of browser to access these specific networks. These sites require extreme caution to operate on, and typically, an average citizen should not have a need to access dark web content. While not all of it is illegal or immoral content, enough of it is, and the encrypted, secretive nature of the dark web makes it rife with people who can use the near total and assured anonymity to wreak havoc on unsecured, unprepared systems. Use extreme caution if using Tor or a similar system to access the dark web, or better yet don’t do it at all.

What Does This Mean?

It means you should, primarily, be sticking to the top two layers of the internet in your day-to-day browsing. These sites can mostly be trusted not to infect your system with malware or collect your information to disseminate it to other people (though even these trusted sites will often collect your information and sell it to other corporations). A lot of the other practices below are going to be helpful, mostly with this rule of thumb in mind, because honestly, the biggest thing you can do to protect yourself from cyber-attacks is to simply never make yourself a target in the first place. Many sites make you choose a username for a reason. Not only does it allow for self-expression, it allows for anonymity. While this anonymity does come with its own issues, primarily in that people are more prone to being rude or unhelpful in interactions without moderation, it does serve to protect people from casual attempts to harass them in more direct ways. When interacting on a website, give as little information as possible.
Many forums and the like will let you choose a number of personal details that you can share or keep private. These generally include things like your real name, gender, birthday, location, and so on. My advice is to share none of this. Your username should suffice for most interactions. Keep in mind that even larger, more reputable companies (like Google) do not necessarily have your best interests in mind. Your information is a commodity, and you have no real reason to give them this information unless it's an absolute requirement.

It's also good to keep in mind that simple lists of your information are not the only way for bad actors to get ahold of a surprising amount of it; they can also pore over your past posts. Try to avoid giving out any more information than necessary in a conversation. This goes double for chat and instant messaging apps like Discord or WhatsApp, as you have less ability to seek out and remove these posts if you regret posting something.

Depending on the website, it may be unavoidable. If you are, for example, part of a forum dedicated to woodworking techniques, establishing your credentials as a 15-year veteran of the industry may be necessary to lend credence to your advice. In isolation, these bits of info are largely useless. However, given enough of them, people could piece together information across multiple posts, or even multiple SITES if you use the same username, to paint a picture of exactly who you are. This information can be used to track you down in real life, or to help crack the passwords on your various accounts and access more detailed information. Speaking of which:

Password Security is Paramount

Using personal information in your password is something any security expert will tell you is a bad idea.
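A quick aside on how to follow that advice: rather than inventing a password at all, you can generate one. A minimal sketch using Python's standard secrets module (the 16-character default length is an illustrative choice, not a recommendation from this article):

```python
import secrets
import string

def random_password(length: int = 16) -> str:
    """Generate a password containing no personal information at all."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())
```

A password manager does the same job and remembers the result for you, which is why generated passwords and managers tend to go together.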
The prevalence of people who simply use "password" as their password is astounding; it is the 4th most popular password in the world (behind similarly simple passwords like 123456), and passwords that use personal information aren't much better. Things such as your daughter's name, spouse's birthdate, wedding anniversary, or pet's name are exceptionally easy passwords to crack for anyone who knows what they're doing, and getting that information from you is depressingly easy, due to the prominence of one of the biggest cyber-security threats out there: social media. It doesn't help that many people use the same password, or a variant of it, for every website they use instead of making them completely different.

Whether you use Facebook, Twitter, Instagram, or are even one of the few holdouts for MySpace out there, any information you put on any social media site is at risk. Not only is it up for grabs for any corporation that wants it (if you've ever wondered why you get so many emails and phone calls about certain types of product, you can likely blame Facebook and similar sites), it's a treasure trove of information for hackers and crackers of all varieties, who can use posts you've made to create a complete profile of your information. Not only can they use that profile to crack into your accounts and get all kinds of usable information (even bank account or credit card info if you're unlucky), they can use it to target scams directly at you. These scams often come in the form of emails, and while spam filters are much better now than they used to be, sometimes scams still get through.

Never Click Suspicious Links

If you click a random link from an email you got from someone you don't know, chances are you've been had, and you need to immediately start taking steps to protect your information. It's as simple as that.
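One habit that helps here is checking where a link actually points before clicking it. As an illustrative sketch, Python's standard urllib.parse can pull out the hostname a link really targets; the domains below are made up for the example:

```python
from urllib.parse import urlparse

def link_hostname(url: str) -> str:
    """Return the hostname a link actually points at (empty if none)."""
    return (urlparse(url).hostname or "").lower()

# The classic phishing tell: the visible text looks like your bank,
# but the real target is a lookalike domain someone else controls.
displayed = "https://www.yourbank.com/login"
actual = "https://yourbank.com.example-login.net/session"

print(link_hostname(displayed))  # www.yourbank.com
print(link_hostname(actual))     # yourbank.com.example-login.net
```

The second hostname ends in example-login.net, not yourbank.com, which is exactly the mismatch that mousing over a link is meant to reveal.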
Links can have all sorts of malware attached to them, some of which is incredibly insidious and can lie in wait for months before springing a surprise on you in the form of a blackmail or ransom request, where they threaten to lock down your system or release sensitive information to the public if you don't pay.

Banks and similar groups with a vested interest in keeping your money safe will often warn you about these. Take these warnings seriously, but use the same amount of caution even when interacting with an email that looks legitimate. The scam warning can sometimes also be a scam in disguise. Always double-check that the email address these warnings are sent from matches up with official emails you've gotten from that entity before. If it looks like a private email, steer clear. Official correspondence from a bank will always be from a company email, not a private one (like a Gmail account).

This doesn't apply to just emails, either. If you're tooling around one of the "middle layers" of the internet, you need to hone the skill of weeding out fake links from the real deal. If you're lucky, some will just be advertisements, designed to farm clicks for a pay-per-click advertising campaign. If you're unlucky, clicking an errant link on a site like Mediafire or Mega Upload can result in the same situation as above, with ransomware installed on your computer.

Always, always, always mouse over links before you click on them for any reason. Make sure the URL matches what you would expect and will lead where you need it to go. This is a good practice to get into anywhere (for links posted on forums or social media, for example), but especially on these less regulated sites.

Use an Ad Blocker

You know one of the best ways to ensure you never click on an unwanted link? Reduce the number of them you even see. A good ad-blocking program is going to block a lot of static ads on websites, as well as most popups out there.
Not only is it a good safety feature that most antivirus programs (which you should also have) won't offer you, it's an excellent quality-of-life feature that will save you a ton of time loading pages or getting to where you want to be. This is great if you have slower internet, as it could save you hours over the course of the day that would otherwise be spent loading advertisements you have no desire to see. While some sites will throw up an error if you have an ad blocker on, generally speaking it's either easy to temporarily disable the ad blocker when needed, or simple to find the same information elsewhere on a site that ISN'T going to attempt to undermine your own safety and convenience for its profits; journalistic websites that started out as print media (like newspapers) are the biggest offenders here.

It's Okay to Lie

Sometimes, you're going to need or want to sign up for a site you may not be sure about. Maybe it gives you a bad vibe, or you know there's something wrong with it but, in some ways, it's worth the risk. In these cases, it's easy to circumvent many of the inherent issues brought on by sites like this by simply giving them nothing useful to work with.

If they ask you for a username? Use one you don't use anywhere else. An email? Create a new one expressly for this purpose (bonus points if it's with a provider you don't usually use at all). Listing your name, birthday, or some other personal info is a mandatory part of the process? Make up every single one of these "facts". Randomly generate a password instead of using one you would normally think of (this helps cover any unconscious biases you might have in password creation). That last one is especially good if you only ever need to use a site once for whatever reason.

Even if you nominally trust a site, this is a pretty good habit to get into. There is absolutely no reason the average site needs to know your actual name to function.
Chances are, they simply want that info so they can sell it to another corporation for a bit of extra cash. And if a site requires your credit card info to continue, it's usually not worth it unless it's a service you absolutely need for work or something similar.

These are some good practices, but not the only ways to keep yourself safe out there. In general, it's best to think of the internet the same way you would think of something like a mixer, or going to a party with a bunch of people you don't know. There's a lot of opportunity to make new friends, connections, and even business deals, but you have to be careful who you trust, and your default mode of interaction with any new website or person should be wariness bordering on paranoia. If you get into the habit of initially distrusting and vetting EVERYTHING online, you'll find it's pretty easy to keep yourself safe when you go surfing the Web.
Spear phishing is a specific cyber-attack aimed at an individual or individuals associated with an organisation. The US Federal Bureau of Investigation (FBI) gave the following example: "Customers of a telecommunications firm received an e-mail recently explaining a problem with their latest order. They were asked to go to the company website, via a link in the e-mail, to provide personal information—like their birthdates and Social Security numbers. But both the e-mail and the website were bogus."

The key to spear phishing is that the criminal knows something about the recipient. In the FBI's example, the criminal knows that the recipients were customers of a telecommunications company. It's that small piece of information that lends credibility to the scam.

The dangers of spear phishing

Imagine your staff getting an email from a criminal who says they would like to place an order at your restaurant. The email includes a Word document with instructions to enable editing, which opens the floodgates for malware. This is exactly what happened at the restaurant chain Chipotle, when millions of customers' credit card numbers were stolen.

Spear phishing attacks differ from phishing attacks in that they are targeted at a specific group. In a traditional phishing attack, there is nothing to show that the sender knows who they're reaching out to.

How to prevent spear phishing

Educate and train employees

Education is the most important way to prevent spear phishing in your business. Teach your staff what to look for and make sure that they understand the dangers of spear phishing. Here are some of the guidelines that you can teach your employees to prevent spear phishing:

● Simply never use links in emails
Teach your employees to never click a link in an email. If a bank, or even your own company, requests that they log in or make changes, they should go to their browser and type in the URL themselves.
● Verify URLs
Every hotlink in an email, or even on a website, redirects someplace else. Teach employees to look at the URL more than once before clicking anything. One of the tricks that criminals use is to create a close approximation of a domain. For example, to trick someone into clicking a page, they will change www.usbank.com to www.usbenk.com. The name is close enough to fool someone who is not reading closely.

● Never give out personal data
One simple rule to institute is to tell employees never to share information like passwords or account numbers. Unless they are instructed to do so by management, they should never share any such information. Moreover, they should never share it via email or any other electronic medium. Anything typed into a computer connected to the internet is susceptible to being stolen.

● Be careful with social media
The more information that employees put on social media, the easier it can be for criminals to spear phish them. Criminals can use online information to increase the recipients' confidence in a message.

Spear phishing prevention with software

There are several steps that you can take using software to protect your company.

● Keep your software up to date
Spear phishing relies on malware to infect your system. By having the most recent patches and security software, you can minimize the risk from malware if it arrives. Antivirus software is, and always will be, a necessity. Look for software that scans and updates itself constantly. It could prevent malware from getting a foothold on your server.

● Encrypt sensitive data
File and data encryption is a great way to keep spear phishers from being able to use the data. All of the sensitive data on your network should be encrypted. This keeps any data that a criminal receives from being useful for anything.

● Multi-factor authentication
If someone asks for an employee's password, but there are multiple layers of protection, the password is useless.
For example, if your system is protected with passwords and biometrics, a password is useless on its own.

Staying safe from spear phishing

All spear phishing is based on human behavior. Therefore, the best way to make sure that your system stays safe from spear phishing is to teach your staff what to avoid. The technological solutions are powerful, but education and security awareness training are the most important elements.
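The look-alike domain trick described above (www.usbenk.com standing in for www.usbank.com) can also be flagged in software, for example by measuring how similar an incoming sender's domain is to a trusted one. A rough sketch using Python's standard difflib; the 0.8 threshold is an arbitrary assumption for illustration:

```python
from difflib import SequenceMatcher

def looks_like(domain: str, trusted: str, threshold: float = 0.8) -> bool:
    """Flag domains suspiciously similar to a trusted one without
    being an exact match (e.g., usbenk.com vs usbank.com)."""
    if domain == trusted:
        return False
    return SequenceMatcher(None, domain, trusted).ratio() >= threshold

print(looks_like("usbenk.com", "usbank.com"))   # True
print(looks_like("example.com", "usbank.com"))  # False
```

Real mail-filtering products use more sophisticated checks (homoglyphs, punycode, registration age), but the underlying idea is the same: near-misses against known-good domains deserve scrutiny.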
Today we mark Earth Day, and hopefully remind everyone that they can play their part, not just today, but every day, to help preserve what is left of our planet and its natural resources. IT can do a lot to operate in a greener fashion: it will benefit the planet, and can save the company some hard-earned money too. To mark this year's Earth Day, here are 18 tips for greener IT.

1. Use Energy Star rated hardware

Energy Star rated servers, workstations, monitors and printers use less power. Less power means lower energy bills, and less impact on the environment as well.

2. Use things for longer

There are times when you really need to upgrade hardware, but there are times when that four-year-old printer is still working fine. If a piece of hardware is in good working order, parts or consumables are still available for it, and it can be serviced when necessary or covered by an extended warranty, you can stretch its shelf life for another year or two. You'll save money, avoid environmental taxes (if they apply in your country) and you won't have to dispose of the device.

3. Recycle

Recycle as much as you can, from the cardboard boxes in which new gear is shipped, to the hardware that you are retiring. Many schools, charities and other groups can make good use of retired but functioning hardware. What cannot be given a second life can be sent for recycling to retrieve precious metals, eliminate toxins and reduce waste destined for a landfill.

4. OEM packaging

When you do order hardware, get it shipped in OEM packaging whenever possible. This reduces the amount of paper, plastic and other packaging materials that you will have to deal with, and the plain cardboard most OEM packaging uses is much easier to recycle.

5. Enable power-saving settings on workstations

Use a Group Policy Object to enforce power-saving features on all systems in your domain.
You can set monitors to dim and then power off, spin down hard drives, and put workstations into sleep mode, along with many other settings to help reduce power consumption. See http://technet.microsoft.com/en-us/library/cc731700.aspx for some tips on setting up a power-saving GPO for your company.

6. Use power-saving mode on printers

Printers, especially laser printers, consume tons of electricity every year. Enable power-saving mode to cut their power consumption in half (or more), which will also reduce their heat output, which means the air conditioner won't have to work as hard either.

7. Take things off their chargers

Cellular phones, headsets, cordless mice and keyboards and other portable electronics that recharge should be taken off their charger when they are fully topped off, to save electricity and heat. Per device it's not much, but if everyone did it every day, that little bit can add up quickly.

8. Run your datacenter just a little bit hotter

Believe it or not, you don't have to run your datacenter as cold as a walk-in deep freezer. Many of the largest names in IT are running their datacenters warmer than their employees' work areas, saving millions of dollars a year in cooling costs, and greatly reducing their carbon footprint. What really counts is not how hot or cold it is, but whether the temperature is constant. If you check your fans regularly and have monitors on systems in case their fans stop, you can run systems at temperatures people would consider hot and be just fine. You probably don't want to run things at 35C, but 25C is just fine and can save you significantly in cooling and electricity costs.

9. Virtualize

If you need another reason to start virtualizing systems, it's to be greener. Fewer hardware devices means lower space requirements, lower electricity consumption, less cooling, and less material to dump when they are retired.
Virtualizing systems is a great way to still provide teams or project workloads with dedicated systems, while taking advantage of the greater processing power available in host systems. Consolidating jobs on physical servers can also save on power consumed, cooling required, space in the racks and, ultimately, waste in the landfill.

10. Consolidate workloads

If you aren't a fan of virtualization, you can still go green by consolidating workloads. Do you really need a dedicated print server, or can your file server do that too? Domain controllers are typically very underutilized, and can run DNS, DHCP, IPAM, and WINS services without having to grant any additional user rights. Look around and see what other services you could consolidate. You will find lots of ways to save money and go green.

11. Smart printing

Tons of paper are wasted each year on cover sheets, duplicate jobs, unclaimed jobs, and so on. Reduce printing waste with "sign-in required" printers that won't print a job until the owner is there to collect it. Push out draft mode and double-sided mode as default profiles to save on toner/ink and paper. Use recycled paper as much as possible and make sure there are recycle bins for unused or abandoned print jobs. Come down heavily on those who print PowerPoint decks or lots of emails unless they really need to. Printing less saves trees, toner and electricity.

12. Switch to fax server software

One of the least smart printing jobs is to print faxes. Traditional fax machines print received jobs to paper. To send a fax, users have to print it out and then feed it into the fax machine. In many cases, all that paper is thrown into the bin, but in other circumstances it then has to be filed and kept forever. Switch to fax server software so that incoming faxes are delivered directly to users' inboxes.
Employees can also send faxes as easily as they send an email, saving all that paper and toner/ink, and getting rid of all those legacy fax machines that just take up space and consume electricity.

13. Use power strips with switches

Tell employees to not only shut down their machines but also to push that little button on the monitor. It is such a waste of electricity for nothing. If no one heeds the instruction, use a power strip with an on/off switch, and make it policy to plug everything into that. Then, at the end of the day, they can hit one button and power off their monitor, lava lamp, cellphone charger and all the other little things that suck power all night long.

14. Carpool

Whether you are going to work, to lunch, to training, to the datacenter, or to another office, carpooling is a great way to reduce your carbon footprint and impact on the environment. A car with four people generates a lot less pollution than four cars with one driver each. In many locations you will even be able to take the HOV or carpool dedicated lanes, lowering commute times and helping you save on gas. Every bit helps you and the environment.

15. Allow telecommuting

A great way to reduce environmental impact and boost employee morale is to allow telecommuting. You can allow your staff to work from home on some days, or whenever they want. Not only will they be happier, but it saves on gas, car running costs and the need to get up early. You're also reducing the number of cars on the road. Two cars may not make a huge difference on paper, but if most companies did the same, you'd see how the overall environmental impact would be reduced.

16. Use web conferencing

Many airlines are now publishing the CO2 load of flights, just to let you know the impact that business trip has on the environment. Onsite meetings are important and sometimes cannot be avoided. However, why not use web conferencing if both parties agree and there is no need to be in one office? If you have a good internet connection, the meeting should be a breeze.
You just don't shake hands with them.

17. The cloud

Between economies of scale, new technologies and orders of efficiency, cloud service providers can do the job just as well as an on-premise product can, but at a lower cost and overall impact on the environment. Sometimes you can't do away with your on-premise software, but if certain tasks can be managed using a cloud service, it's worth doing so. Save time, save money, save on hardware, costly licensing… the list goes on.

18. Preventative maintenance

Never overlook the importance of good preventative maintenance. Whether you are blowing dust out of systems, replacing air filters on HVAC units, keeping drain tubes unclogged, or verifying the integrity of the fuel tanks on your generators, preventative maintenance can help you avoid costly repairs and unanticipated outages, and ensure that all equipment runs as cleanly and efficiently as possible. This too has a tangible positive impact on the environment.

If you have any other tips to share on how to keep your IT office as green as possible, leave a comment below!
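Several of these tips trade a one-time settings change for a recurring saving, and it is easy to estimate how big that saving is. A back-of-the-envelope sketch in Python; the wattages and electricity price below are made-up illustrative figures, not measurements:

```python
def annual_cost_usd(watts: float, hours_per_day: float,
                    price_per_kwh: float = 0.15) -> float:
    """Yearly electricity cost of a device at a steady power draw."""
    kwh_per_year = watts * hours_per_day * 365 / 1000
    return kwh_per_year * price_per_kwh

# Assumed figures: a laser printer idling at 300 W around the clock
# versus power-saving mode averaging 30 W.
always_on = annual_cost_usd(300, 24)
power_save = annual_cost_usd(30, 24)
print(f"Saved per printer per year: ${always_on - power_save:.2f}")
```

Multiply a figure like this across a fleet of printers, monitors and chargers and the "little bit adds up quickly" point becomes concrete.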
CORE: In the core (private and public cloud), the data forwarded via the edge nodes and aggregated from various sources is stored, processed and made available centrally. This creates the highest possible level of agility, scalability and response speed.

EDGE: Edge nodes or edge gateways serve as nodes between the IoT devices and the core. There, the data generated by the IoT devices is analyzed intelligently in advance and in real time (edge analytics) and filtered, so that only the most relevant data migrates to the core, i.e., to the private or public cloud. This significantly reduces the volume of data leaving the edge. Furthermore, analyses and local applications, such as a machine dashboard, can be operated at the edge.

DATA SOURCES: The number of possible data sources is increasing, primarily due to the growing number of networked devices (IoT). IoT devices such as sensors, mobile devices, machines or even barcode scanners produce data continuously, and sometimes in real time; this data is often business-critical and therefore particularly worth protecting. For example, sensors measure environmental values such as speed, rotation, force, light, position, climate or acoustics. New sensors can be attached to existing machines, so that it is not necessary to purchase new machines (retrofitting).
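The edge filtering described above can be as simple as forwarding only readings that fall outside a normal operating band. A minimal illustrative sketch; the sensor values and thresholds are invented for the example:

```python
def edge_filter(readings, low, high):
    """Keep only readings outside the normal band -- the 'most
    relevant data' worth forwarding from the edge to the core."""
    return [r for r in readings if r < low or r > high]

# Assumed sensor stream (e.g., temperature in deg C) and normal band.
stream = [21.0, 21.4, 22.1, 48.7, 21.9, -3.2, 22.0]
to_core = edge_filter(stream, low=0.0, high=40.0)
print(to_core)  # [48.7, -3.2]
```

Here five of seven readings never leave the edge, which is exactly how edge analytics reduces the data volume sent to the private or public cloud.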
The data center is in a seemingly constant state of change, with one exception: the utility bill inevitably grows. A single hyperconverged server can easily burn 1,500 watts, and nearly all of that draw (around 1,400W) ends up as heat that must be chilled away. Here are a few ways to reduce your data center energy use, as well as the associated costs.

1. Shut Down Hosts That Aren't Operating

Sleeping hosts use only nominal power and can wake using either vendor APIs or standard Wake-on-LAN signals. That requires everyone — network admins, sysadmins and the hosts' regular users — to know that the hosts are asleep, and how to wake them (with any patches or fixes issued in the interim). That, in turn, requires coordination and interdepartmental communication to maintain schedules of the inventory available for sleep.

2. Focus on the Operating System

A few years ago, most virtualization platforms pushed the power pedal to the floor. But much has changed. Like hypervisor/virtualization deployments, discrete power settings can often be managed within server OS profiles. If a server runs an OS installation of Red Hat, Ubuntu Server, xBSD or Windows Server, then settings are available that balance performance and power consumption. Using these settings correctly can save as much as 50 percent of the operating electrical cost; they are usually easy to find and well documented.

3. Use Zoned Chilling

Many data centers are built for chilling. As equipment racks become denser, temperature offsets are raised to chill the anticipated power. But higher-density systems have also become more power-efficient. Using zoned chilling where possible allows data center temperatures to be adjusted in some configurations, saving both power and chilling costs. Many vendors now have hot- and cold-aisle solutions, modular in design and easily installed, that can optimize airflow to create density zones.

4. Put Flexible Loads in the Cloud

It's plausible to move entire workloads into the cloud and simply shut down sections of an aging data center altogether. Forward-looking options, such as solar-power generation, energy generation colocation, thermal cooling and chilled-water cooling, are on the horizon. Until they materialize, the more practical steps of unblocking vents, renegotiating supply contracts, using systems' power management tools, and simply moving workloads to subcontractors can have an immediate effect.
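The Wake-on-LAN signal mentioned in tip 1 is just a small UDP broadcast, so waking sleeping hosts on a schedule is easy to script. A sketch in Python using only the standard library; the MAC address shown is a placeholder:

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a standard Wake-on-LAN magic packet:
    6 bytes of 0xFF followed by the target MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the local network."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

# wake("00:11:22:33:44:55")  # the MAC here is a placeholder
```

The host's NIC and BIOS/firmware must have Wake-on-LAN enabled for the packet to have any effect, which is part of the coordination the article calls for.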
Many companies aren't equipped to help customers who are deaf, hard of hearing, or have a speech impediment. For people with hearing or speech difficulties, a last-minute change to an appointment or picking up the phone to make a call isn't always feasible.

While there are multiple text-based communication channels available, the most common tools like Facebook Messenger and WhatsApp are intended for personal use, not business use. There are risks for businesses that elect to use personal communication tools for business purposes, including:

• The potential transmission and distribution of private or sensitive company data.
• Apps often collect personal user data, which may clash with corporate privacy policies.
• Restricted company control and visibility over the conversation, hampering support of end users.
• Inability to investigate electronic communication Code of Conduct complaints or concerns.

Organizations often assume that their consumers will easily be able to interact with them using their existing communication channels, but that isn't always the case for people with a hearing impairment.

• Phone Calls. Often difficult and frustrating for the customer, and important moments are missed.
• Email. Delayed response time and no immediate resolution.
• Website Chat Bots. Limited capacity to respond and assist customers because they are unable to adapt to unique situations, which can create a negative customer experience.
• Telephone Typewriter (TTY). Not agile or mobile, and can require external hardware.
• National Relay Service. Time consuming and doesn't allow for real-time communication due to the translation delay from user to agent to business.
• In-Person. Employees are likely ill equipped to assist, leading to a frustrating experience for everyone.

• 1 out of 5 men and 1 out of 8 women (37.5 million adults) report at least some trouble hearing.
• 1 in 8 people in the United States (30 million) aged 12 years or older experience hearing loss in both ears, based on standard hearing examinations.
• About 40 million US adults aged 20–69 years have noise-induced hearing loss.
• Men are almost twice as likely as women to have hearing loss among adults aged 20–69.
• More than 1 in 2 US adults with hearing damage from noise do not have noisy jobs, meaning the exposure is likely recreational.
• 1 in 4 US adults who report excellent to good hearing already have hearing damage.
• 28.8 million US adults could benefit from the use of hearing aids.

Source: National Institute on Deafness and Other Communication Disorders, Healthy Hearing Report

Using a text messaging solution, SMBs can easily resolve customer inquiries, send reminders, promote events, automate common responses, and more, all from the business phone number their customers already know and trust.

• Clear Communication. No miscommunications or missed context.
• Real-Time Communication. No call captioning agents needed, so customers get fast replies and instant service.
• Extend Your Customer Base. Reach more customers with ease. Send texts in any language to engage with multilingual customers.
• Increase Sales. Add calls to action, links to your website, enticing messages, customer satisfaction surveys, and more. SMS messages with rich media links can achieve a 27% click-through rate!
• Reduce Costs. SMS conversations are 8x cheaper for a business than phone calls.
• Increase Effectiveness. SMS is 28x more effective than email. Increase audiences by 11.4% and response times by 60x.*
• Enhance Branding. Keep messaging consistent with your brand's voice.
• Reduce Effort & Time. Schedule campaigns and automatically reply to messages based on keyword triggers for frequent customer questions, reducing repetitive message actions and increasing employee productivity.

How Do Email vs SMS Compare?
Text messaging makes it easy for people who are deaf or hard of hearing to interact with a business without needing another device. Consumers already use their smartphones to communicate with friends and family, and most would prefer to communicate with businesses the same way. SMS messages are an inclusive communication channel that lets SMBs effectively interact with their customers, regardless of hearing capacity. All customers get the support they need, saving businesses time and money. It's a vital tool for SMBs to drive exceptional customer experiences with their entire customer base.

As a SaaS application, business text messaging doesn't require end users to download another application or maintain software updates. It's device agnostic and user friendly for all age ranges, regardless of technological proficiency. Best-in-class SMS and MMS solutions offer an intuitive web app, bulk messaging, customizable response templates, message bots, and more. Many readily available communication tools lack mobility and require a learning curve, leaving businesses to wonder how they will communicate with customers who can't hear them.

Business text messaging solutions are cloud-based tools with an intuitive admin portal for provisioning customers and managing accounts; some even offer APIs to integrate back-office ordering, billing, and support systems. The benefits are substantial:

• Text messages have a 98% open rate.
• 78% of Gen Z consider their mobile phone their most important device to get online.
• The average click-through rate for email marketing is 2.5%. In comparison, the SMS marketing click-through rate is around 19%.
• 61% of Gen Z purchased a product via mobile in the last month.
• Consumers who get SMS marketing messages are 40% more likely to convert than those who don't.
• 90% of SMS messages are read within three minutes.
• 95% of text messages are delivered and read within 3 minutes.
* Statistics from PR Newswire Business Text Messaging is the ideal solution for any business that seeks to streamline inbound and outbound communications. With limitless applications, service providers can target a wide range of industry verticals to meet SMB needs with ease. Many consumers not only prefer text messages to phone calls for short exchanges but also prefer companies that offer text messaging as a communication channel. In fact, almost 63% of consumers would switch to a company that offers it. SMBs and enterprises are seeking out text message solutions to meet their customers’ expectations, creating a substantial market opportunity for service providers. By incorporating an SMS solution into an existing solution suite, service providers can improve market differentiation, increase customer stickiness, and expand commercial sales opportunities. As a cloud-based solution, Alianza’s Business Text Messaging can be launched quickly and delivers a profitable business texting service while allowing providers to maintain control over pricing, feature bundling, and customer relationships. A 2021 Independence Research survey asked over 500 small business decision makers how they use or would use business text messaging. Deliver a compelling new communication service to SMB customers with Alianza’s Business Text Messaging. Sell it standalone or bundled with your business voice products to drive greater revenue, market footprint, and customer lifetime value (CLV). It can also be sold anywhere — even outside your broadband or voice footprint — and over the top of any voice solution.
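Keyword-triggered auto-replies like those described in this guide can be sketched in a few lines. The keywords and reply text below are purely hypothetical, and a real deployment would send replies through the messaging provider's API and hand unmatched messages to a human agent:

```python
# Sketch of keyword-triggered SMS auto-replies.
# Keywords and reply text are illustrative, not from any real product.
KEYWORD_REPLIES = {
    "hours": "We're open Mon-Sat, 9am-6pm.",
    "pricing": "See current pricing on our website, or reply HELP to chat with us.",
    "stop": "You have been unsubscribed. Reply START to re-subscribe.",
}

def auto_reply(inbound_text):
    """Return a canned reply if the message contains a known keyword."""
    words = [w.strip(".,!?") for w in inbound_text.lower().split()]
    for keyword, reply in KEYWORD_REPLIES.items():
        if keyword in words:
            return reply
    return None  # no keyword matched; route to a human agent

print(auto_reply("What are your hours today?"))  # → We're open Mon-Sat, 9am-6pm.
```

Scheduling and bulk sending would layer on top of the same matching logic.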
Ransomware. It's a very hot topic in today's current affairs, and with cyberattacks on the rise, network security is the number one issue on IT executives' minds. In this 3-part blog series, we will present the three types of network vulnerabilities: Hardware, Software, and Humans. With each type, we will discuss what makes a network vulnerable, how it can be breached, how to prevent it, and what to do if a data breach occurs. Part 1. Hardware Let's break down the different categories of hardware and how they can be subject to vulnerabilities. The physical devices that connect to your network come in many form factors, and they each carry their own risks. In the early days of IT, the types of devices were limited. Servers, routers, computers, printers, fax machines (are they still around?), and firewalls were all installed and managed by IT. In the modern network, IT has relinquished a lot of control and management with the onset of IoT devices and BYOD (bring your own device) programs. Smartphones, tablets, and laptops brought from home have opened new vectors of attack to the enterprise network. USB drives can be scattered in the parking lot with autoloading malware on them. Wireless and IoT devices present their own problems. Smart thermostats, smart door locks, baby video monitors, and anything else that connects to a network are now becoming top targets. Though convenient, many do not have strict security controls in place - making them highly vulnerable to hackers. It should go without saying, but if any devices are left physically unprotected at a business - meaning doors aren't locked or hardware is left unsupervised - it's very easy for someone to gain access to the physical network. Unlocked computers and KVM consoles to servers give malicious actors pre-authenticated access to applications and resources throughout the network. Because it is such a simple thing, physical security is often overlooked. 
No one expects the next cyber attack they deal with to be a physical break-in. However, a quick search on YouTube will turn up hours of footage showing physical pen-testers taking advantage of this mindset repeatedly. Physical assets must be secured against theft and unauthenticated access. Physical networks must be secured against unauthorized usage. The basic premise is simple: lock it up, all the time. Physical security systems that accomplish this can be incredibly complex, but taking basic precautions doesn't mean you have to utilize a complicated system. A simple setup might look something like this: - Auto-locking metal doors with keyed or keyless access. - Security cameras around the perimeter and critical interior of the building. - Separately keyed locks on critical infrastructure areas (switch closets, server rooms, etc.). - Group policies that enforce auto-locking of screens. - MAC authentication to the network (port security). - Removable device security. (Don't let USB drives on your network.) Of course, if you want to get more complex and ensure that only the right people are getting access, there is some really cutting-edge tech out there! Biometrics are quickly being adopted for hardware and physical security at the enterprise level, and there are a lot of options out there. Many smart devices, including laptops and phones, already have this type of security in place, so many users already have experience using it. Some have hesitations about biometrics (privacy, integration issues, and cost), but it looks like it could become a standard for small-device authentication in the future. Hardware security keys in conjunction with passphrase authentication are a favorite among security experts. Pair a person's credentials with a physical object that has to be present for authentication to happen and you increase the difficulty of compromising a machine exponentially. 
Questions to ask yourself and your team: - Can someone come into an empty office and gain unsupervised access? - Can anyone plug in and utilize a USB drive? - Can anyone plug in to a network port and gain access to the network? - Are computers locked automatically? - Can the people who have access to open the front door get access to the servers and switches? Getting a good idea of the physical threat is step one to hardware security, so do a walkthrough of your business and determine where you need to improve. How do you know your network has been breached? At first, it might be difficult to detect. A hack might seem chaotic to you as the victim, but to the perpetrator it is a very methodical and purposeful plan. Typically, when a network is breached, threat actors have spent a considerable amount of time researching the network they are attempting to access. Reconnaissance and scanning are done before the first attempt at access. Once the threat actors have gained access, they will quietly observe the network to learn information, patterns, and behavior. Over time they move into actively scanning the network and attempting to gain access to systems via privilege escalation and vulnerabilities. Once access is gained, they may begin locking legitimate users out in order to maintain their access and prevent the business from fighting back. If you don't have a way to detect a breach, it may take time for visible problems to reveal themselves. And the longer a threat actor has access, the more damage can be done to your business. Fortunately, there are many different ways to detect unauthorized recon, scanning, and access to the network. The most important of these is SIEM, or Security Information and Event Management. There are many software packages that can take events and logs from myriad sources and bring them into a centralized system for analysis and detection. 
They enable a sysadmin to know when someone who is not authorized is attempting to gain access and give them a chance to react proactively. Once a threat actor has gained access, technologies like EDR, application whitelisting, and network monitoring can offer a chance to block the vulnerability or access attempt while it is happening. However, these are generally reactive in nature, meaning a vulnerability has been identified and the attack has already begun. This might be too late. How to prevent physical network vulnerabilities Firewalls are the first line of defense in a network. They set up the perimeter and allow or deny access based on rules the business sets. Their primary purpose is to restrict access to only what needs to be accessed. Most of these devices run software and need to be kept on the latest code to prevent them from being exploited and supplying access to the entire network. For extremely critical infrastructure, hardware firewalls exist and can provide a very secure solution to preventing access. Publicly facing applications and services that are allowed to be accessed through the firewalls must be kept up to date and constantly monitored for unauthorized access. Next-generation firewalls and web application firewalls (WAFs) can be effective in mitigating attacks on these vectors; however, the applications themselves should be updated and managed as well. At the end of the day, good firewalls don't excuse bad code or poor vulnerability management. Wireless and IoT devices are important to protect but are often overlooked. With Wi-Fi, avoid default configs and widely used passwords. For example, a password posted on a coffee shop wall for a Wi-Fi network with default settings is an easy target for hackers. IoT devices should be bought from reputable vendors and then segregated into a subnet with restricted access to the network and no access to the internet. 
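The centralized detection idea described above can be illustrated with a toy example. The event format and the five-failures-in-sixty-seconds threshold are assumptions for illustration only; a real SIEM correlates far richer data from many sources:

```python
# Toy SIEM-style detector: flag source IPs with a burst of failed logins.
# Event format and thresholds are illustrative assumptions.
from collections import defaultdict

def find_bruteforce(events, max_failures=5, window_seconds=60):
    """events: iterable of (timestamp_seconds, source_ip, outcome) tuples."""
    recent = defaultdict(list)
    flagged = set()
    for ts, ip, outcome in sorted(events):
        if outcome != "FAILED":
            continue
        # keep only failures inside the sliding window, then add this one
        recent[ip] = [t for t in recent[ip] if ts - t <= window_seconds]
        recent[ip].append(ts)
        if len(recent[ip]) >= max_failures:
            flagged.add(ip)
    return flagged

events = [(i, "10.0.0.7", "FAILED") for i in range(6)] + [(10, "10.0.0.9", "OK")]
print(find_bruteforce(events))  # → {'10.0.0.7'}
```

The same sliding-window pattern generalizes to port-scan detection, repeated privilege-escalation attempts, and other burst anomalies.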
BYOD programs for employees are becoming more common, and IT departments need to set boundaries and standards. Though employees may work more efficiently on the device of their choice, the IT department needs to manage those devices to protect the network. MDM solutions such as Microsoft Intune can provide a method for IT to maintain controls on these devices while still giving a user the liberty to utilize the device of their choice. What to do if your network is hacked? Unfortunately, it's not a matter of if, but when. At the point a breach is recognized, it's important to take swift action to mitigate the threat. Below are the steps to take to recover from a data breach in the network: - Secure the area. If a physical breach has happened, secure the area and immediately change access protocols, including key codes and keys. Restrict access to essential personnel only. - Secure the network and isolate the breach. Disconnect any potential methods of access from a threat actor. Unplug the internet cable or switch uplink, but do not turn machines off until forensic experts have had an opportunity to examine them. If only a portion of the network has been breached, ensure it is isolated from known good areas. - Contact your cybersecurity insurer. Insurance companies have exact protocols for how to proceed during and after a security breach, and you could lose money and leverage if you don't contact them right away. - Don't go searching the internet for anti-malware or forensics tools, as these could be potential traps for you to download additional malware. Trust products you know and are familiar with. - If possible, replace affected machines with known good images from backup or a disaster recovery product. Run A/V or EDR on all assets, whether known clean or not. - It is incredibly important to change ALL passwords and authentication methods. Even the one that breaks applications when you change it. - Do not destroy evidence. 
Treat affected devices like a crime scene, because that is what they are. - Once the infrastructure is restored and confirmed clean, fix vulnerabilities in software and firmware. Keeping network hardware secure can be a daunting task. It's important to note that the main defense you can take is to have a vulnerability program and patch often. With so many devices to keep secure, it's also vital to have a Network Asset Inventory document. Having a living document that shows every piece of hardware, who has access to it, how old it is, etc. leaves little room for hardware security vulnerabilities. If you need help getting started with a vulnerability program, download this Network Security eBook as a free resource which includes an asset inventory template. If you wish to speak with a network engineer for further assistance, contact us at email@example.com and we'd be happy to connect you with one of our certified Net3 Sales Engineers.
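The living asset inventory recommended above can start as simple structured records. The fields below are an illustrative minimum rather than any standard template:

```python
# Minimal sketch of a network asset inventory with a staleness check.
# Field names and the 180-day review threshold are illustrative assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class Asset:
    hostname: str
    asset_type: str          # e.g. "switch", "server", "printer"
    owner: str               # who has access / is responsible
    purchased: date
    last_reviewed: date

    def review_overdue(self, today, max_age_days=180):
        """True if the record hasn't been reviewed within max_age_days."""
        return (today - self.last_reviewed).days > max_age_days

inventory = [
    Asset("core-sw-01", "switch", "netops", date(2019, 3, 1), date(2021, 1, 15)),
    Asset("file-srv-02", "server", "it-admin", date(2020, 6, 1), date(2022, 9, 1)),
]
today = date(2022, 10, 1)
overdue = [a.hostname for a in inventory if a.review_overdue(today)]
print(overdue)  # → ['core-sw-01']
```

Keeping records in a structured form like this makes periodic audits a query rather than a manual hunt.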
The recently announced Facebook breach directly affected 29 million users (down from the original estimate of 50 million), and others like it continue to occur nearly every day. Consider the Google+ user data exposure announced just recently, which triggered the eventual shutdown of the Google+ social network. The regulatory scrutiny and reputational damage incurred by companies is real and must be addressed at the board room level. Preventing breaches can be a significant challenge, however, because modern web application and software design has become increasingly complex, and most security programs don’t take a holistic approach to managing all the points of software exposure. Defining software exposure We live in a world of massive digital transformation. The technical backbone of this transformation is software. Software can be found everywhere. It is in our homes, in our phones, and in our businesses. Over 80 per cent of the code in today’s software applications is open source. There will be 30 billion connected IoT devices by 2020. 85 per cent of customer interactions will be managed without any human interaction by 2020. Software is everywhere and it has become incredibly complex. Furthermore, accelerating “time-to-market” has become the new name of the game to bring products to market faster. Amazon deploys to production every 11.6 seconds. Facebook on Android alone does between 50,000 and 60,000 builds a day! DevOps has changed the way software is built and has led to new risk factors in the form of “software exposure.” Anatomy of the Facebook breach Three software flaws in Facebook’s systems enabled this attack. Oddly enough, the first two bugs were introduced via an online tool intended to improve user privacy. The third flaw was in a tool that allows users to easily upload birthday videos. Attackers used Facebook’s “View As” feature in combination with the video uploading program to steal access tokens. 
This allowed attackers to take control of a user's profile, which could lead to much greater consequences in the future – blackmail or phishing, for instance, or the exposure of highly private information. "It’s important to say—the attackers could use the account as if they were the account holder." - Facebook’s vice president of product management Guy Rosen Complexity and privacy risks People may or may not be aware that every app they open is potentially a privacy risk. And when using a known social media platform, they are most likely not aware of the origin of the app or link they open. This is especially the case on social media platforms such as Facebook, where third-party applications and websites intertwine with the main social media user interface. You think you are still on Facebook, but you actually are not. This illustrates how the complexity of the application allows somewhat benign issues, by themselves, to combine into a significant breach. In this case, the security issue in the “View As” functionality combined with the token issue in the video uploading program allowed the breach to happen. This is yet another prime example that breaches are not always self-contained issues but are often a series of events or actions working together. Consequences of the breach Although the degree of impact is still under investigation, Facebook logged around 90 million people out of their accounts when they discovered the breach. That may not be sufficient, however, as the company confirmed that attackers may have gained access to third-party applications and websites – those that use Facebook Login for authentication. Facebook Login makes it easier for people to verify their identity via their Facebook profile across the web, in different sites and services; it is designed for convenience, however, rather than security. Since the Facebook breach, it’s possible that accounts relying on Facebook for authentication have also been compromised. 
This puts pressure on those third-party apps and services to verify the security of accounts that use Facebook Login for access – and to notify those users if there has been suspicious activity on or changes to those accounts. It’s important to note that although hackers could have used the flaw to steal information belonging to third-party apps that use Facebook as a login method, Facebook said that no outside apps appear to have been affected. What can you do after a breach? All of these factors make the Facebook breach particularly alarming for end users – certainly for the millions of users affected by this most recent breach, but also for the 2.23 billion monthly active users worldwide. Users worried about the security of both their Facebook account and any accounts accessed by Facebook Login may be looking for ways to lock down their accounts, and rightly so. Here are four initial steps to take: - Make sure your passwords are complex and unique - Enable two-factor authentication whenever possible - Review authorised logins to third-party applications – log out of them and back in to reset the access token - Verify whether your account was impacted in the Facebook breach Facebook is taking this breach seriously, which is an important step. The approximately fourteen months that these flaws were present in the software are cause for concern, however. Google, by contrast, did not disclose its data exposure publicly for months before announcing the shutdown of the Google+ network. Software security must be top of mind for organisations that reach so deeply into so many lives. Seemingly small flaws like the ones responsible for the attack at Facebook can have a huge impact, showing once again how critical a holistic approach to software security is in modern software development. Matt Rose, Global Director Application Security Strategy, Checkmarx Image Credit: Katherine Welles / Shutterstock
Data drives business decisions that determine how well business organizations perform in the real world. Vast volumes of data are generated every day, but not all data is reliable in its raw form to drive a mission-critical business decision. Today, data has a credibility problem. Business leaders and decision makers need to understand the impact of data quality. In this article, we will discuss: - Data quality, particularly in the enterprise - Measuring data quality - Enforcing data quality, including getting started and key roles - Improving data quality Let’s get started! What is data quality? Data quality refers to the characteristics that determine the reliability of information to serve an intended purpose (in business, these often include planning, decision making, and operations). Put another way, data quality is the utility of data as a function of attributes that determine its fitness and reliability to satisfy the intended use. These attributes—in the form of metrics, KPIs, and any other qualitative or quantitative requirements—may be subjective and justifiable only for a particular set of use cases and context. If that feels unclear, that’s because data is perceived differently depending on the perspective. After all, the way you define a quality dinner, for instance, may be different from how a Michelin-starred chef would. In order to understand the quality of a dataset, a good place to start is to understand the degree to which it compares to a desired state. For example, a dataset free of errors, consistent in its format, and complete in its features may meet all requirements or expectations that determine data quality. (Understand how data quality compares to data integrity.) 
Data quality in the enterprise Now let’s discuss data quality from a standards perspective, as it is widely used particularly in the domains of: - Database management - Big data - Enterprise IT Let’s first look at the definition of ‘quality’ according to the ISO 9000:2015 standard: Quality is the degree to which inherent characteristics of an object meet requirements. We can apply this definition to data and the way it is used in the IT industry. In the domain of database management, the term ‘dimensions’ describes the characteristics or measurable features of a dataset. The quality of data is also subject to external and extrinsic factors, such as availability and compliance. So, here’s a holistic and standards-based definition for quality data in big data applications: Data quality is the degree to which dimensions of data meet requirements. It’s important to note that the term dimensions does not refer to the categories used in datasets. Instead, it refers to the measurable features that describe particular characteristics of the dataset. When compared to the desired state of data, you can use these characteristics to understand and quantify data quality in measurable terms. For instance, some of the common dimensions of data quality are: - Accuracy. The degree of closeness to real data. - Availability. The degree to which the data can be accessed by users or systems. - Completeness. The degree to which all data attributes, records, files, values and metadata are present and described. - Compliance. The degree to which data complies with applicable laws. - Consistency. The degree to which data across multiple datasets or ranges complies with defined rules. - Integrity. The degree of absence of corruption, manipulation, loss, leakage, or unauthorized access to the dataset. - Latency. The delay in production and availability of data. - Objectivity. The degree to which data is created and can be evaluated without bias. - Plausibility. 
The degree to which the dataset is relevant for real-world scenarios. - Redundancy. The presence of logically identical information in the data. - Traceability. The ability to verify the lineage of data. - Validity. The degree to which data complies with existing rules. - Volatility. The degree to which dataset values change over time. DAMA-NL provides a detailed list of 60 data quality dimensions, available in PDF. Why quality data is so critical OK, so we get what data quality is – now, let’s look at why you need it: - Cost optimization. Poor data quality is bad for business and has a significant cost in time and effort. In fact, Gartner estimates that the average financial impact of poor data quality on organizations is around $15 million per year. Another study by Ovum indicates that poor data quality costs businesses at least 30% of revenues. - Effective, more innovative marketing. Accurate, high-velocity data is critical to making choices about who to market to—and how. This leads to better targeting and more effective marketing campaigns that reach the right demographics. - Better decision-making. A company is only as good as its ability to make accurate decisions in a timely manner—which is driven by the inputs you have. The better the data quality, the more confident enterprise business leaders will be in mitigating risk and driving efficient decision-making. - Productivity. According to Forrester, “Nearly one-third of analysts spend more than 40 percent of their time vetting and validating their analytics data before it can be used for strategic decision-making.” Thus, when a data management process produces consistent, high-quality data, more automation can occur. - Compliance. Collecting, storing, and using data poses compliance regulations and responsibilities, often resulting in ongoing, routine processes. 
Dashboard-type analytics stemming from good data have become an important way for organizations to understand, at a glance, their compliance posture. How to measure data quality Now that you know what you expect from your data—and why—you’re ready to get started with measuring data quality. Data profiling is a good starting point for measuring your data. It’s a straightforward assessment that involves looking at each data object in your system and determining if it’s complete and accurate. This is often a preliminary measure for companies who use existing data but want to have a data quality management approach. Data Quality Assessment Framework A more intricate way to assess data is to do it with a Data Quality Assessment Framework (DQAF). The DQAF process flow starts out like data profiling, but the data is measured against certain specific qualities of good data. These are: - Integrity. How does the data stack up against pre-established data quality standards? - Completeness. How much of the data has been acquired? - Validity. Does the data conform to the values of a given data set? - Uniqueness. How often does a piece of data appear in a set? - Accuracy. How accurate is the data? - Consistency. In different datasets, does the same data hold the same value? Using these core principles about good data as a baseline, data engineers and data scientists can analyze data against their own real standards for each. For instance, a unit of data being evaluated for timeliness can be looked at in terms of the range of best to average delivery times within the organization. Data quality metrics There are a few standardized ways to analyze data, as described above. But it’s also important for organizations to come up with their own metrics with which to judge data quality. Here are some examples of data quality metrics: - Data-to-errors ratio analyzes the number of errors in a data set relative to its size. 
- Empty values measures how much of the data set contains empty values. - Percentage of “dark data”, or unusable data, shows how much data in a given set is unusable. - The time-to-value ratio represents how long it takes you to access and use important data after input into the system. It can tell you if data being entered is useful. (Learn more about dark data.) How to enforce data quality Data quality management (DQM) is a principle in which all of a business’ critical resources—people, processes, and technology—work harmoniously to create good data. More specifically, data quality management is a set of processes designed to improve data quality with the goal of actionably achieving pre-defined business outcomes. Data quality requires a foundation to be in place for optimal success. These core pillars include the following: - The right organizational structure - A defined standard for data quality - Routine data profiling audits to ensure quality - Data reporting and monitoring - Processes for correcting errors in bad and incomplete data If you are like many organizations, it’s likely that you are just getting settled in with big data. Here are our recommendations for implementing a strategy that focuses on data quality: - Assess current data efforts. An honest look at your current state of data management capabilities is necessary before moving forward. - Set benchmarks for data. This will be the foundation of your new DQM practices. To set the right benchmarks, organizations must assess what’s important to them. Is data being used to super-serve customers or to create a better user experience on the company website? First, determine business purposes for data and work backward from there. - Ensure organizational infrastructure. Having the proper data management system means having the right minds in place who are up for the challenge of ensuring data quality. For many organizations, that means promoting employees or even adding new employees. 
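Metrics like these can be computed with a short profiling pass. The sample records and the notion of an "error" (here, a negative age) are assumptions for illustration:

```python
# Sketch of two data quality metrics: empty-value rate and error count.
# The sample records and the error rule (negative age) are illustrative.
records = [
    {"name": "Ada", "age": 36, "email": "ada@example.com"},
    {"name": "", "age": -1, "email": "bob@example.com"},   # empty name, bad age
    {"name": "Cy", "age": 54, "email": None},              # missing email
]

total_values = sum(len(r) for r in records)
empty_values = sum(1 for r in records for v in r.values() if v in ("", None))
errors = sum(1 for r in records if isinstance(r["age"], int) and r["age"] < 0)

print(f"empty-value rate: {empty_values / total_values:.1%}")   # 2 of 9 values
print(f"errors: {errors} in {len(records)} records")
```

In practice the same counts would be tracked over time per dataset, so a sudden jump in either metric triggers investigation rather than a one-off report.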
DQM roles & responsibilities An organization committed to ensuring its data is high quality should ensure the following roles are part of its data team: - The DQM Program Manager sets the tone with regard to data quality and helps to establish data quality requirements. This person is also responsible for keeping a handle on day-to-day data quality management tasks, ensuring the team is on schedule, within budget, and meeting predetermined data quality standards. - The Organization Change Manager is instrumental in the change management shift that occurs when data is used effectively, and this person makes decisions about data infrastructure and processes. - The Data Analyst/Business Analyst interprets and reports on data. - The Data Steward is charged with managing data as a corporate asset. Data quality solutions can make the process easier. Leveraging the right technology for an enterprise organization will increase efficiency and data quality for employees and end users. Improving data quality: best practices Data quality can be improved in many ways, and the right approach depends on how you’ve selected, defined, and measured the quality attributes and dimensions. In a business setting, there are many ways to measure and enforce data quality. IT organizations can take the following steps to ensure that data quality is objectively high and is used to train models that produce a profitable business impact: - Find the most appropriate data quality dimensions from a business, operational, and user perspective. Not all 60 data quality dimensions are necessary for every use case; likely, even the 13 included above are too many for one use case. - Relate each data quality dimension to a greater objective and goal. This goal can be intangible, like user satisfaction and brand loyalty. The dimensions can be highly correlated to several objectives—IT should determine how to optimize each dimension in order to maximize the larger set of objectives. 
- Establish the right KPIs, metrics, and indicators to accurately measure against each data quality dimension. Choose the right metrics, and understand how to benchmark them properly. - Improve data quality at the source. Enforce data cleanup practices at the edge of the network where data is generated (if possible). - Eliminate the root causes that introduce errors and lapses in data quality. You might take a shortcut when you find a bad data point, correcting it manually, but that means you haven’t prevented what caused the issue in the first place. Root cause analysis is a necessary and worthwhile practice for data. - Communicate with the stakeholders and partners involved in supplying data. Data cleanup may require a shift in responsibility at the source, which may be external to the organization. By getting the right messages across to data creators, organizations can find ways to source high quality data that favors everyone in the data supply pipeline. Finally, identify and understand the patterns, insights, and abstractions hidden within the data instead of deploying models that churn raw data into predefined features with limited relevance to the real-world business objectives.
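Improving quality at the source usually means validating records at ingestion rather than patching them downstream. The field rules below are hypothetical examples of the kind of checks a team might derive from its chosen dimensions:

```python
# Sketch of rule-based validation at the point of ingestion.
# The field rules are illustrative; real rules come from your quality dimensions.
import re

RULES = {
    "email": lambda v: isinstance(v, str)
        and re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", v) is not None,
    "age": lambda v: isinstance(v, int) and 0 <= v <= 130,
}

def validate(record):
    """Return the (field, value) pairs in a record that break a rule."""
    return [(f, record.get(f)) for f, ok in RULES.items() if not ok(record.get(f))]

good = {"email": "ada@example.com", "age": 36}
bad = {"email": "not-an-email", "age": -1}
print(validate(good))  # → []
print(validate(bad))   # → [('email', 'not-an-email'), ('age', -1)]
```

Rejecting or flagging records at this point keeps the downstream error metrics low and makes root cause analysis a matter of reading the validation log.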
Secure Sockets Layer

Secure Sockets Layer (SSL) is a protocol for establishing a secure channel between two devices connected over the Internet or an internal network. Until 1999, when Transport Layer Security (TLS) replaced it, SSL was the leading cryptographic protocol for ensuring that web traffic between a user and a service provider is secure. Even today, TLS is often referred to as "SSL" because of its predecessor's influence.

SSL guarantees that all information traveling between the two devices is private. This makes it useful for securing online communications such as email, as well as bank card transactions. Web browsers show an SSL-protected website with a padlock in the window where the URL is displayed, and the URL prefix changes from HTTP to HTTPS.

SSL connections are established by purchasing an SSL certificate from a certificate authority and associating it with a web server. The certificate authority conducts an inquiry first, so applicants must correspond with the authority and submit documentation to it. Once this process is complete, the authority grants the service provider the ability to use SSL. Certificates are subject to expiration dates and must be renewed with the certificate authority.
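Because certificates expire and must be renewed, operators commonly automate expiry checks. The sketch below uses Python's standard `ssl` module to test whether a certificate's `notAfter` timestamp (in the string format Python's `getpeercert()` returns) has passed; the sample date is hypothetical and chosen to be in the past.

```python
# Sketch: checking whether a certificate's 'notAfter' date has passed,
# using the string format Python's ssl module uses for certificate dates.
# The sample date below is hypothetical.
import ssl
import time

def is_expired(not_after, now=None):
    """True if the certificate's notAfter timestamp is in the past."""
    expiry = ssl.cert_time_to_seconds(not_after)  # epoch seconds (UTC)
    current = now if now is not None else time.time()
    return current > expiry

# A certificate that expired at the start of 2020:
print(is_expired("Jan  1 00:00:00 2020 GMT"))  # True
```

A real deployment would fetch the live certificate over a TLS connection first (e.g. with `ssl.create_default_context()` and a wrapped socket) and alert well before the expiry date rather than after it.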
The smell of cut grass, freshly baked bread, childhood memories, lost loved ones, Christmas. What happens when it's all gone?

A new study from the University of East Anglia reveals the huge range of emotional and practical impacts caused by a loss of smell. It finds that almost every aspect of life is disrupted – from everyday concerns about personal hygiene to a loss of sexual intimacy and the breakdown of personal relationships. The researchers hope that their findings will help motivate clinicians to take smell problems more seriously, with better help and support offered to patients.

Prof Carl Philpott, from UEA's Norwich Medical School, said: "Smell disorders affect around five per cent of the population and cause people to lose their sense of smell, or change the way they perceive odours. Some people perceive smells that aren't there at all.

"There are many causes – from infections and injury to neurological diseases such as Alzheimer's and as a side effect of some medications.

"Most patients suffer a loss of flavour perception which can affect appetite and can be made even worse if distortions in their sense of smell also co-exist.

"Previous research has shown that people who have lost their sense of smell also report high rates of depression, anxiety, isolation and relationship difficulties.

"We wanted to find out more about how a loss of smell affects people."

The researchers worked with the Smell and Taste Clinic at the James Paget University Hospital, Gorleston-on-Sea. The clinic opened in 2010 and was the UK's first clinic dedicated to taste and smell. The study involved 71 participants aged between 31 and 80 who had written to the clinic about their experiences. It was carried out in collaboration with Fifth Sense, the charity for people affected by smell and taste disorders.

The research shows that sufferers experience wide-ranging impairments to their quality of life.
These included a negative emotional impact, feelings of isolation, impaired relationships and daily functioning, impacts on physical health and the difficulty and financial burden of seeking help.

Prof Philpott said: "One really big problem was around hazard perception – not being able to smell food that had gone off, or not being able to smell gas or smoke. This had resulted in serious near misses for some.

"But smell is not just a life-saving sense – it is also life-enhancing.

"A large number of the participants no longer enjoyed eating, and some had lost appetite and weight. Others were eating more food with low nutritional value that was high in fat, salt and sugar – and had consequently gained weight.

"Participants had lost interest in preparing food and some said they were too embarrassed to serve dishes to family and friends, which had an impact on their social lives.

"The inability to link smells to happy memories was also a problem. Bonfire night, Christmas smells, perfumes and people – all gone. Smells link us to people, places and emotional experiences. And people who have lost their sense of smell miss out on all those memories that smell can evoke.

"We found that personal hygiene was a big cause of anxiety and embarrassment because the participants couldn't smell themselves.

"Parents of young children couldn't tell when their nappies needed changing, and this led to feelings of failure. One mother found it difficult bonding with her new baby because she couldn't smell him.

"Many participants described a negative impact on relationships – ranging from not enjoying eating together to an impact on sexual relationships," he added.

All of these problems led to a diverse range of negative emotions including anger, anxiety, frustration, depression, isolation, loss of confidence, regret and sadness.
And the problems were compounded by a lack of understanding about the disorder among clinicians. Prof Philpott said: "The participants described a lot of negative and unhelpful interactions with healthcare professionals before coming to the James Paget Smell and Taste Clinic. Those that did manage to get help and support were very pleased – even if nothing could be done about their condition, they were very grateful for advice and understanding."

Duncan Boak, Founder and Chair of Fifth Sense, said: "Anosmia can have a huge impact on people's quality of life in many ways, as this research demonstrates. An important part of Fifth Sense's work is giving our beneficiaries a voice and the opportunity to change the way society understands smell and taste disorders, whether through volunteering or participating in research studies like this one. The results of this study will be a big help in our ongoing work to improve the lives of those affected by anosmia."

The study was undertaken at The Smell & Taste Clinic, ENT Department, James Paget University Hospital NHS Foundation Trust. No funding was required.

Smell accounts for 95% to 99% of chemosensation, while taste accounts for the rest. Anosmia is the inability to perceive smell/odor. It can be temporary or permanent, and acquired or congenital. There are many causes. For example, any mechanical blockage preventing odors from reaching the olfactory nerves can cause the loss of the sense of smell. This blockage can be due to inflammatory processes like simple infections causing mucus plugs, or nasal polyps. Neurological causes can include disturbances to the sensory nerves that make up the olfactory bulb, or anywhere along the path by which the smell signal is transferred to the brain. To better understand this, it is helpful to understand how people perceive smell.
When a particle with odorant molecules in the air is present, it travels up through the nasal canals to the nasal cavity, where olfactory receptor neurons extend from the olfactory bulb that sits on the cribriform plate of the brain. Each nasal cavity contains about 5 million receptor cells or neurons. There are 500 to 1000 different odor-binding proteins on the surface of these olfactory receptor cells, and each olfactory receptor cell expresses only one type of binding protein. These afferent olfactory neurons (cranial nerve I) facilitate the transfer of a chemical signal (particles in the air) to an electrical signal (sensed by afferent receptor neurons), which is then transferred to and ultimately perceived by the brain. From the olfactory bulb, the signal is further processed by several other structures of the brain, including the piriform cortex, entorhinal cortex, amygdala, and hippocampus. Any blockage or destruction of the pathway along which smell is transferred and processed may result in anosmia.

As stated in the introduction, any problem that disturbs the pathway leading to the perception of smell, whether mechanical or along the olfactory neural pathway, can lead to anosmia.

Inflammatory and Obstructive Disorders (50% to 70% of cases of anosmia)

These are the most common causes of anosmia, and they include nasal and paranasal sinus disease (rhinosinusitis, rhinitis and nasal polyps). These disorders cause anosmia through inflammation of the mucosa as well as through direct obstruction.

Head trauma is another common cause of anosmia, as trauma to the head can damage the nose or sinuses, leading to mechanical blockage and obstruction. Injury can also cause anosmia through trauma to or destruction of the olfactory axons at the cribriform plate, damage to the olfactory bulb, or direct injury to the olfactory areas of the cerebral cortex.
Central nervous system (CNS) trauma leading to anosmia can be temporary or permanent depending on the area and extent of the injury. Olfactory neurons have regenerative capabilities that other CNS nerves in the body do not. This unique ability is the center of much current stem cell-related research.

Aging and Neurodegenerative Processes

These processes are associated with a loss of smell that can eventually result in anosmia. Normal aging is associated with decreased sensitivity to smell. As individuals age, they lose cells in the olfactory bulb as well as olfactory epithelium surface area, which is important in sensing smell. Interestingly, studies have associated impairment of the ability to smell with neurodegenerative disorders such as Alzheimer disease, Parkinson disease, and Lewy body dementia, linking a lowered ability to perceive smell with an increased risk of developing neurodegenerative disease. The highest association is between anosmia and the later development of an alpha-synucleinopathy, including Parkinson disease, diffuse Lewy body disease, and multisystem atrophy.

Congenital conditions associated with anosmia include Kallmann syndrome and Turner syndrome.

Other Traumatic or Obstructive Conditions

Other causes of anosmia include toxic agents such as tobacco, drugs, and vapors that can cause olfactory dysfunction; post-viral olfactory dysfunction; facial traumas involving nasal or sinus deformity; neoplasms in the nasal cavity or brain that obstruct the olfactory signal pathway; and subarachnoid hemorrhages. Olfactory groove meningioma can present with slowly worsening impaired olfaction. Common conditions that can uncommonly cause a decreased sense of smell or anosmia include diabetes mellitus and hypothyroidism. Medications can sometimes lead to olfactory defects as an unwanted side effect.
These medications include beta blockers, anti-thyroid drugs, dihydropyridines, ACE inhibitors, and intranasal zinc.

In the United States, anosmia afflicts 3% of the adult population older than 40. The prevalence of impaired olfaction increases with age. In 2016, the National Health and Nutrition Examination Survey (NHANES) measured olfactory dysfunction in 1,818 participants. Data showed olfactory dysfunction in 4% of participants aged 40 to 49, 10% at 50 to 59, 13% at 60 to 69, 25% at 70 to 79, and 39% of those over 80. Anosmia affected 14% to 22% of those over 60.

History and Physical

When taking a history to identify the possible causes of anosmia, it is important that the clinician keep the possible etiologies (listed above) in mind when asking relevant questions. Sudden smell loss is often associated with head injuries or viral infections, while a gradual loss is more associated with allergic rhinitis, nasal polyps, and neoplasms. An intermittent loss is common in allergic rhinitis and with the use of topical drugs. It is important to ask about preceding events and the patient's medical history, as the most common causes of anosmia are chronic rhinitis and head trauma. The patient's age can be helpful: if the patient is very young and has other symptoms, the clinician might investigate congenital causes such as Kallmann syndrome. Under such circumstances, careful examination of the gonads and neurological exams are very important. If the patient is elderly, the clinician may investigate whether the loss of smell is due to normal aging or whether there are other symptoms to suggest an early stage of a neurodegenerative disorder like Parkinson disease. Social history is also important in assessing occupation-associated exposures to toxins or allergens that can lead to anosmia.
Medication history is always important, and sometimes the causal relationship can only be established by stopping the suspected offending agent. Clinicians should pay attention to associated symptoms, as anosmia is a symptom and not a diagnosis. Headaches and behavior disturbances may indicate problems with the CNS.

During the physical examination, clinicians should closely examine the nasal cavity and paranasal sinuses. Findings may be important depending on information retrieved from the patient's history. A neurological examination may be useful in revealing other neurological deficits that can suggest a larger neurological problem causing the loss of smell. Fundoscopy for evidence of raised intracranial pressure will help to pave the way for neuroimaging. Examination and skin testing by an allergist might play an important role in evaluating whether rhinitis (if the cause) is allergic or non-allergic.

Simple office testing of smell with chocolates or coffee is sometimes conducted informally by a primary care provider; this testing is subjective. If the clinician is concerned about any findings, detailed smell testing can be conducted at specialized smell centers. Tests include chemosensory testing and the butanol threshold test, among others. These formal tests can give a more accurate measure of the loss of smell, in that the minimum concentration of a chemical the patient can detect is determined and compared to the average threshold for that patient's age group. The University of Pennsylvania Smell Identification Test (UPSIT; Sensonics, Inc., Haddon Heights, NJ) is the most widely used odor identification test and can be administered in about 10 minutes.

Other evaluations can be performed depending on the clinician's suspicion of the underlying cause of the patient's anosmia. Based on the history and physical examination, if the clinician is suspicious of head trauma, sinus disease, or neoplasm, they may order an MRI or CT.
If there is concern about allergic rhinitis, a referral to an allergist and subsequent allergen skin testing might be revealing. If the patient has other symptoms suggestive of inflammatory disease, a sedimentation rate might be helpful. Other labs that can be considered, depending on the suspected etiology, include a complete blood count (CBC), plasma creatinine, liver function, thyroid profile, ANA, and measurements of heavy metals, lead, and other toxins. It is important to note that imaging (MRI) in those with idiopathic olfactory loss is often unrevealing. In a study of 839 patients with olfactory loss, MRI was used to evaluate idiopathic olfactory loss 55% of the time, but found an imaging abnormality that would explain the loss only 0.8% of the time.

Treatment / Management

Treatment and management depend on the etiology, as anosmia is not a diagnosis but a symptom. As stated above, inflammatory and obstructive diseases (paranasal and nasal sinus diseases) are the most common cause of anosmia, and intranasal glucocorticoids can often manage these causes. Other medications that can be given include antihistamines and systemic glucocorticoids. Antibiotics such as ampicillin can be prescribed for bacterial sinus infections. Surgery can be an option for those with chronic sinus problems and nasal polyps that fail conservative medical management.

For olfactory impairment caused by damage to the olfactory neurons due to trauma, there is no specific treatment. However, olfactory neurons do have the ability to regenerate, though the time and degree of regeneration depend on the extent of the damage, and regenerative ability differs between individuals. Regeneration can span days to years, and complete recovery is not guaranteed. For all causes of anosmia, treatment and management depend on the treatment and management of the underlying disease and whether that disease is refractory to medical intervention.
Pearls and Other Issues

Anosmia can have safety implications, as those without the ability to smell might miss important warning odors such as smoke from a fire or natural gas leaks. In the evaluation of anosmia without an initially clear cause (sinus disease, head trauma), it is important to assess for other neurological deficits so as not to miss a CNS hemorrhage, aneurysm, or neoplasm.

Enhancing Healthcare Team Outcomes

Because of the diverse causes of anosmia, an interprofessional team should be involved that includes an internist, endocrinologist, neurologist, ENT surgeon, rheumatologist and an infectious disease specialist. Anosmia is a symptom of a disease process, which needs to be treated. Inflammatory and obstructive diseases (paranasal and nasal sinus diseases) are the most common cause of anosmia. Surgery can be an option for those with chronic sinus problems and nasal polyps that fail conservative medical management. For olfactory impairment caused by damage to the olfactory neurons due to trauma, there is no specific treatment; however, olfactory neurons do have the ability to regenerate. Regeneration can span days to years, and complete recovery is not guaranteed. The overall prognosis for patients with anosmia is good as long as the primary condition has a cure or can be treated. (Level V)

University of East Anglia
The damage caused by a virus infecting a home computer or a corporate network can vary widely – from an insignificant increase in outgoing traffic (if the computer is infected by a Trojan sending out spam) to complete network breakdown or the loss of critical data. The scale of the damage depends on the targets of the virus, and sometimes the results of its activity are imperceptible to the users of a compromised machine.

Operability of computers and computer networks

The catastrophic failure or dramatic slowdown of an individual computer or network can be premeditated or accidental. A virus or a Trojan may delete critical system elements, thus disabling the OS; overload the network with a DDoS attack; or otherwise negatively affect the system's operability.

Fatal problems are often caused by a bug in the virus's code or principle of operation. Bugs can be found in any software product, including viruses. In addition, it's most unlikely that viruses are thoroughly tested before they are launched, a practice that is mirrored by some commercial products too. Sometimes malware is incompatible with the software and hardware of the system on which it runs, resulting in server failure or drastic increases in spam traffic, thereby paralyzing a company's network.

From time to time more disastrous events occur. For example, in 1988 in the USA, the Morris Worm caused an epidemic on Arpanet, the ancestor of the modern-day Internet. Over 6,000 machines, or about 10% of all the computers on the network, were infected. A bug in the virus code caused it to replicate and distribute itself across the network, resulting in complete system paralysis. In January 2003 the Slammer worm caused a geographically rotating Internet blackout across the USA, South Korea, Australia and New Zealand. As a result of the worm's uncontrolled spread, network traffic increased by 25%, leading to serious problems with banking operations for Bank of America.
Lovesan (Blaster, MSBlast), Mydoom, Sasser and other network worm epidemics also caused tremendous damage to airlines, which had to cancel flights, and to banks, which had to temporarily cease operations.

A virus seldom causes hardware failure, as modern computers are relatively well protected from software faults. However, in 1999 the CIH virus, also known as Chernobyl, disrupted the operation of any infected system by deleting the data in the Flash BIOS, making it impossible even to boot the computer. Home users had to visit a service center to get the Flash BIOS rewritten in order to restore the machine to working condition. On many laptops the Flash BIOS was soldered directly to the motherboard, along with the drive, the video card and other hardware. This meant that in most cases the cost of the repair exceeded the cost of a new laptop, resulting in damaged computers being simply thrown away. Several hundred thousand computers fell victim to the CIH 'bomb'.

Sometimes a Trojan can open and close the CD/DVD tray. Though modern hardware is fairly reliable these days, this could theoretically cause drive failure on computers that are continuously on.

Data loss or data theft

The damage caused by a successful attack that erases a user's data can be measured in terms of the value of the erased information to the user. If the attack targeted a home computer used for entertainment, the damage is probably minimal. But lost data can also mean many years' work, a valued photo archive or cherished correspondence. The oft-neglected way to prevent data loss is taking regular backups.

If data is stolen as the result of a targeted attack on a specific individual, the damage can be tremendous, particularly if the data belonged to a company or even the state: client databases, financial and technical documentation or even banking details can end up in the wrong hands, and the possibilities are endless.
We live in the information age, and the loss or leakage of information can sometimes have disastrous consequences.

Even if there is no visible damage

Many Trojans and viruses do not advertise their presence in the system. Viruses can surreptitiously infiltrate the system while both the files and the system remain operable. Trojans can hide themselves in the system and secretly go about their business; on the face of it everything seems fine, but it is only a front.

A virus on a corporate network can be considered a force majeure, with the damage it causes equal to the losses associated with the network downtime required for disinfection. A Trojan's presence is also highly undesirable, even if it does not constitute a direct threat to the network. The Trojan may only be a zombie server sending out spam, but it consumes network and Internet resources, and the compromised computers can distribute a great deal of spam, which may well be directed at the company's own corporate mail server.

Unfortunately, a considerable number of home users do not recognize the problem and do not protect their computers. Our survey from December 2005 showed that 13% of the Russian respondents had no antivirus program installed on their machines. Most of these users were completely unaware that their computers could become a base for spam distribution and attacks on other network elements. Let's leave it to their conscience.
Date and time can be represented in many ways. The Romans had a numbering system where letters from the Latin alphabet signified a value. Roman numerals are still used around the clock and often for expressing the year something was built, written or made. This year, being 2012 in Arabic numerals, is MMXII in Roman numerals. Next year is MMXIII and the year after is of course MMXIIII. No wait, it is MMXIV.

The 12-Hour Clock

A day consists of 24 hours. So naturally 5 hours into the day will be 5:00 and 17 hours into the day will be 17:00. But no. Several countries around the world still stick to the 12-hour clock, writing 5:00 AM and 5:00 PM. And in most countries verbal use of the 12-hour clock is common.

The American Date Format

A date consists of three elements: day, month and year. So to most of the world, yesterday, the 1st of June 2012, will be:

01/06/2012

If you insist on using an ISO standard, you'll do it backward:

2012-06-01

However, if you are from the United States, you'll do it awkward:

06/01/2012

Even if you are a US data quality tool vendor selling to the whole world, you will still do it awkward:

Blog post published 1st June 2012. Flip that date! – as it will be 6th January to the rest of the world.

Best practice is writing June 1st 2012, or otherwise avoiding ambiguity.
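The ambiguity is easy to demonstrate in code: the same string parses to two different dates depending on whether it is read with the US (month-first) or the day-first convention, while the ISO 8601 form is unambiguous. A small Python sketch:

```python
# The same string read under two conventions yields two different dates.
from datetime import datetime

s = "06/01/2012"
us = datetime.strptime(s, "%m/%d/%Y")  # US reading: June 1st, 2012
eu = datetime.strptime(s, "%d/%m/%Y")  # day-first reading: 6th January, 2012

# The ISO 8601 form (year-month-day) is unambiguous either way:
print(us.date().isoformat())  # 2012-06-01
print(eu.date().isoformat())  # 2012-01-06
```

This is exactly the "flip that date" trap described above: without knowing the convention, "06/01/2012" cannot be parsed safely.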
In Daw Blog's previous article "Dedicated Server Hosting. How Did It Change With The Cloud?" we talked about the significant developments the web hosting industry has experienced within the last 10 years. They have changed the terminology we use today to describe IT hosting services. That article explained:

The world of web hosting looked pretty simple for most of the last two decades. Most of the terms and niche hosting services were easy to explain even to someone with very limited technology awareness. Most IT hosting and computing services used to be delivered from physical, bare-metal servers or from virtualized environments (virtual hosts) that ran on top of stand-alone physical servers. Compute clusters and other networked models of computer systems had been around for a long time, but were immature as technologies, lacked automation, and most were infrastructure models built for internet use. All this changed from 2009 on.

Without losing more time on analysis, let's move on to the new terms used in today's Cloud-driven web hosting industry.

Bare-Metal Server

This is the good old physical server. When you see a provider offering "Bare-Metal Servers", it means you get a physical dedicated server connected to the Internet, with or without an Operating System (OS) installed on the server.

Virtual Dedicated Server

This is a dedicated server created through a full-virtualization technology. Full virtualization is a technique that creates a virtual computer environment which is a complete simulation of the underlying physical server. Full-virtualization technologies include Kernel-based Virtual Machine (KVM), VMware ESXi, Microsoft Hyper-V, Oracle VM, IBM PowerVM, Citrix XenServer (paravirtualization), Parallels Server Bare Metal, Wind River Linux, etc.

Cloud Server

The Cloud Server is a "Virtual Dedicated Server" hosted on a cluster infrastructure.
This means that the compute (processing operations) and the data storage are delivered from different physical server groups. The data of any Cloud-based server (Cloud Server) usually resides on a storage area network. There are three types of Cloud Servers – Public Cloud, Private Cloud and Hybrid Cloud. The terms differentiate between the cloud servers open for public use and the private cloud instances created for internal use.

A "Public Cloud" is a computing instance (Virtual Server) accessible through the Internet and available for public use. A Public Cloud can be created through any virtualization technology but must reside on top of a Cloud computing infrastructure. Processing operations must be physically separate from storage operations for any Public Cloud.

Private Clouds are computing instances which are not publicly accessible. Some of them don't even use public IP addresses. A "Private Cloud" would be a dedicated Cloud computing infrastructure, hosted on the Cloud but managed and operated internally. Like any Public Cloud, the Private one has to reside on top of a Cloud computing infrastructure, with processing operations physically separate from storage operations.

The term "Hybrid Cloud" refers to an integrated cloud computing environment set up to offer both services for public use and a private computing environment for internal use. The public and private segments of any Hybrid Cloud are bound together, but they are still separate installations, which are connected on premise.

(… to be continued and updated with more web hosting terms)

To learn more about how the web hosting industry changed from physical dedicated servers to the Cloud, read the article "Dedicated Server Hosting. How Did It Change With The Cloud?".
Improved traffic flow, reduced traffic congestion and funding for road maintenance are just a few of the reasons more and more countries are adopting smart toll collection systems.

This is an update to the original article published in 2019.

Tolling systems have become more efficient and smarter over the years. Not only are these systems reducing the need for physical installations on roads, but smart toll collection systems also treat data as an asset, utilizing it to make cashless tolls more efficient and secure.

Toll collection is an important element of highway operations, as the toll collected can help fund highway construction and maintenance. Increasingly, manual collection has given way to electronic toll collection, where various detection and payment technologies are used to make collection efficient and less time-consuming. Nowadays, modern toll systems generally consist of a way to track the vehicle, some identification mechanism and a registration and billing system.

"Tracking is either done through a global navigation satellite system (GNSS) or at checkpoints along the tolled road section," said Michael Leyendecker, Director of Tolling Sales for Europe at Vitronic Machine Vision. "For identification, most systems use vehicle-to-infrastructure communication or license plate recognition (LPR)."

José Luis Añonuevo, GM of Traffic Management Systems Operations at Indra, explained that electronic tolls based on RFID technology can be implemented in a unified way across all road corridors. This works by having vehicles affix a tag (a device that allows the toll to be charged electronically while the vehicle is in motion) to their windshield, which exchanges a signal with antennas arranged in the toll area. The toll is then immediately charged to the user's preferred payment method (for example, debit card, credit card or recharge card).

Increased use of video and AI

Emovis is another company that has developed various smart toll collection and payment solutions.
Its offerings include:

- Cashless, barrier-free roadside tolling using a combination of laser, RFID, video image processing and thermal cameras to track and charge traveling vehicles.
- A pay-by-plate solution combining optical character recognition (OCR) with machine learning software that continuously improves automated vehicle recognition performance.
- A satellite-based pay-per-kilometer solution that records and charges motorists according to the actual distance traveled on certain roads while ensuring full privacy.
- A cloud-based interoperability platform allowing seamless travel across multiple tolled infrastructures with a single tag.
- An electronic payment back office covering multiple payment modes (credit cards, debit cards and so on) with full PCI DSS compliance.

In particular, more and more video-only tolling systems are being used, according to Benoît Rossi, Director of Business Development and Marketing at Emovis. He attributes this to improved optical character recognition (OCR) technology, which is decreasing the need for tags. Rossi also notes the rise of embedded devices such as OBD-II devices and connected odometers, as well as the use of artificial intelligence (AI) to improve the performance of road sensors.

The company has participated in various projects including tolling on the U.K.’s Mersey Gateway Bridge, where Emovis uses a solution that combines loops and laser detection to trigger both front and rear license plate reading, as well as thermal imaging and laser for classification and vehicle counting for auditing purposes. “We are committed to finding solutions to the mobility challenges of the future. With a forward-thinking strategy, we lead on innovation in the digitalization of road payment methods and mobility solutions through the implementation of free flow toll projects in numerous countries,” Rossi said.
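A pay-per-kilometer scheme like the satellite-based one described above ultimately reduces to summing the distances between recorded GNSS fixes and applying a per-kilometer rate. A minimal sketch — the flat great-circle model and the rate are illustrative; real systems map-match fixes to the tolled road network:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two GNSS fixes."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def trip_charge(fixes, rate_per_km=0.15):
    """Sum the segment distances along the recorded (lat, lon) fixes
    and apply a flat per-kilometer rate (illustrative value)."""
    km = sum(haversine_km(*a, *b) for a, b in zip(fixes, fixes[1:]))
    return round(km * rate_per_km, 2)
```

One degree of longitude at the equator is roughly 111 km, so a single-segment trip from (0, 0) to (0, 1) comes to about 16.68 at the example rate.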
It should be pointed out, though, that the type of toll system and the necessary components depend entirely on the requirements of the implementing authority. This includes the road-level equipment, sensors and antennas suitable for the classification and identification of vehicles, as well as the software solutions for the connection with banking entities.

Why you should use it

According to Leyendecker, one of the benefits of a smart tolling system is that it introduces a usage-based fee for using the road network. “This is a very just way of refinancing as the user only pays for what he or she consumes. At the same time you often see a shift toward car-pooling or public transportation which has beneficial effects on congestion and emission values,” he said.

Justin Hamilton, Product Manager at Kapsch TrafficCom, points to other benefits of deploying smart, all-electronic tolling systems, which include:

- Lower operating costs for the toll charger.
- Greater ability to link additional services to the tolling system.
- Lower risk of revenue “leakage,” which is often observed in manual, particularly cash-based, tolls.
- The ability to set flexible pricing based on traffic levels as a means to influence traffic patterns and reduce congestion.
- The ability to better understand traffic patterns and identify crash hot spots through the collection and analysis of toll data, leading to fewer incidents and safer roads.
- The ability to link road tolling and revenue collection with other, complementary, intelligent transport solutions, such as weigh-in-motion (WIM) for heavy vehicles, vehicle-to-infrastructure communication (V2X) and congestion management.

Some countries in Asia have even deployed a natural disaster early-warning system through the smart tolling network.

How to select a system

Regardless of the toll solution chosen, Hamilton said there are common necessities, which can be sorted into two groups: hardware and software.
“For both, the chosen solution should be robust and reliable. There is no one-size-fits-all solution within tolling, so the chosen system will likely differ depending on whether tolls will apply to all vehicles or just a particular segment, such as heavy goods vehicles (HGVs),” Hamilton explained. Ultimately, any system selected will need to strike a balance between cost and efficacy. “In order to strike the best possible balance an authority should consider all potential technological solutions and in-vehicle devices, which typically consist of either RFID (TDM, 6C), European Committee for Standardization dedicated short-range communications (CEN DSRC), LPR or GNSS,” Hamilton said.

Hamilton further emphasizes that enforcement is crucial. “Without the ability to fully enforce all tolls, both technically and legally, the system will not function at anywhere near peak efficiency. Typically enforcement requires the use of LPR cameras; however, this must also be twinned with a capable roadside and back office set-up,” Hamilton said.

As an example, Vitronic has a case study in Poland, where in 2021 the government converted paper-ticket payment to a modern GNSS-based electronic payment system. While this facilitates the flow of traffic and toll payment, it also presents a significant challenge to toll handling: the new system makes it easy to bypass payment, whether intentionally or accidentally. To this end, Vitronic supplied 102 patrol cars equipped with its Enforcement Bar, a fully automatic, mobile automatic number plate recognition (ANPR) system that can be installed on top of a patrol car. The solution enables reliable enforcement of toll payment regardless of weather and visibility conditions or vehicle type, allowing the patrol cars to be part of free-flowing traffic or to enforce from the roadside, combining precision and flexibility.
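At its core, enforcement of the kind described above reduces to matching plates read by the ANPR unit against the set of plates with a registered toll payment. A toy sketch — the function and plate formats are illustrative; real systems also handle OCR confidence scores and legal evidence requirements:

```python
def find_violations(observed_plates, paid_plates):
    """Return the plates seen by the ANPR unit that have no matching
    toll payment. Plates are normalized (uppercase, spaces stripped)
    before comparison, since OCR output and payment records rarely
    agree on formatting."""
    def norm(plate):
        return plate.replace(" ", "").upper()
    paid = {norm(p) for p in paid_plates}
    return sorted({norm(p) for p in observed_plates} - paid)
```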
The design of sidewalks, streets, buildings and even traffic lights could change once computers are in the driver’s seat. Self-driving cars may not be on the road in full force yet, but some researchers believe the cars could alter the way entire cities look.

“We’re looking at the broader urban effects, and urban opportunities, of this technology,” Illinois Institute of Technology architect Marshall Brown told Wired. “It’s in the news a lot, but nobody’s been discussing what it will actually do to cities.”

Many questions surrounding autonomous cars are still unanswered, including how they’ll impact traffic volume. Car-sharing using autonomous vehicles has the potential to cut traffic, but at the same time, inexpensive autonomous vehicles could put more people on the road.

One of the bigger challenges facing autonomous vehicles is the roads they’ll be driving on. The Department of Transportation estimates that 65 percent of the nation’s roads are in poor condition, Reuters reported. Faded lane markers, damaged road signs, broken streetlights and many other roadway infrastructure issues are forcing automakers to develop more sophisticated sensors and maps to compensate, industry executives told the news agency.

“If the lane fades, all hell breaks loose,” Christoph Mertz, a research scientist at Carnegie Mellon University, told Reuters. “But cars have to handle these weird circumstances and have three different ways of doing things in case one fails.”

Automakers are considering radar as well as Light Detection and Ranging (LiDAR) technology, which bounces light pulses off objects, giving the car information about its bearings. Mercedes told Reuters its new drive pilot system incorporates 23 sensors to see guard rails, barriers and other cars to keep the vehicle in a lane at speeds up to 84 miles per hour.
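The LiDAR ranging mentioned above rests on simple time-of-flight arithmetic: a pulse travels to the object and back, so the range is d = c·t/2. A minimal sketch:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def lidar_range_m(round_trip_seconds):
    """Range to the reflecting object: the pulse covers the distance
    twice (out and back), so divide the round trip by two."""
    return C * round_trip_seconds / 2
```

A one-microsecond round trip corresponds to a target roughly 150 meters away, which is why LiDAR timing must resolve nanoseconds to achieve centimeter precision.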
Additionally, international companies such as Germany's HERE and the Netherlands’ TomTom are working to create three-dimensional maps that can plot a vehicle's location on the road within centimeters, Chris Warrington, CEO of mapping technology company GeoDigital, told Reuters. The Transportation Research Board, an independent group that advises the government, plans to issue recommendations for standardized lane markings for machine vision by 2017.
The need for cloud computing security methods has changed drastically in the past decade, yet many enterprises are still stuck with outdated practices and hence are vulnerable to security threats. The way businesses approach and deal with essential matters like passwords and security questions is a prime example of this outdated thinking. Risks have changed, and there has been a paradigm shift in the security technologies that must be adopted to keep pace with newfound cybersecurity threats.

With a changing technology landscape, organizations have stronger means to enforce strict password policies for every individual. Hence, there is no need to rely on older verification techniques such as date of birth or mother's maiden name, which are obsolete in the changing times. Text messages, emails, and messaging apps have taken over these verification challenges, but very few enterprises have shifted their approaches. The main problem lies in a lack of knowledge about the consequences of data breaches and their after-effects for enterprises.

Cloud Computing Security methods

Cloud computing security and accountability can be increased by imparting quality education about their necessity and implementation. Legislation has a vital role to play, though the issuing of fines for non-compliance with security regulations has historically had little effect. That is changing: compliance is taken very seriously now, and the EU's General Data Protection Regulation (GDPR) allows fines of up to €20m or 4% of annual turnover, which should act as a real deterrent pushing companies to take their security issues seriously. Organizations cannot hide behind excuses of increased expenditure or a lack of return on investment. There should be increased awareness and education on cybersecurity for the people who build the IT infrastructure.
Organizations should know that it is wiser to invest in imparting the right security knowledge to their staff than to cough up hefty fines when data leakages are discovered at later stages.

Cloud Data Security linked to data maximization

Enterprises have full control over the data they collect and retain. Herein lies the problem of data maximization: enterprises tend to keep even unnecessary data over extended periods, which aggravates the impact of security leaks. Organizations hoard data thinking it might be useful later if things go wrong. But the basic principle is that you don't lose what you don't have. So the sensible approach for enterprises is to retain only the data that is essential and let go of the rest. And when a data breach does occur, enterprises need to know the processes for dealing with it and putting things right.

About CASB Solutions

Using Cloud Access Security Broker (CASB) solutions is a cost-effective way of increasing cloud data security. When employees are trained to use them effectively, data remains safe and secure. The advantage of educating staff on cybersecurity is that it pays off in the long run: employees apply the knowledge gained through training across multiple projects. The most sensible and fundamental step is to educate people about CASB solutions as they relate to the cybersecurity issues in their job roles. Fixing defects here is far cheaper than fixing them after a breach has taken place. Education is also needed in the data collection and retention aspects. CASB solutions should be implemented effectively across the enterprise, and employees should be trained in their practical use.
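The retention principle above ("you don't lose what you don't have") can be enforced mechanically by purging anything older than the retention window. A minimal sketch; the record shape and default window are illustrative:

```python
from datetime import datetime, timedelta, timezone

def purge_expired(records, retention_days=365, now=None):
    """Keep only records younger than the retention window.

    Each record is a (created_at, payload) pair; anything older than the
    window is dropped, on the principle that you can't leak what you no
    longer hold."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=retention_days)
    return [(ts, payload) for ts, payload in records if ts >= cutoff]
```

In practice a job like this runs on a schedule, with the retention window set per data category by the organization's retention policy.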
Most of us have seen enough prophecies fail that we recognise predicting exact events is a fool’s errand. Nevertheless, we can use lessons from the past to accurately determine which trends are most likely to impact our future, even in the age of Covid-19.

While it may feel as though the pandemic is driving unrivalled disruption, this isn’t the first time a health crisis has collided with tremendous innovation that transformed society. For example, London’s cholera epidemic in the 19th century led to several changes, such as public health reforms and improved sanitation systems. But wider shifts, including advances in manufacturing, electrification, water and gas supply, communication, and transport networks, also followed in the industrial revolution.

Now, as then, calamity has accelerated progress, especially when it comes to digitalisation. But, like previous periods of technological evolution, change won’t happen all at once; it will grow in speed and scale over multiple decades. Paradoxically, this means the seemingly rapid pace of digital transformation is the slowest it will ever be. Current innovation is only the beginning of an ongoing journey that will incrementally transform and improve everyday life, work, and play.

Through 2021, we can expect digital transformation to blossom, offering a taste of what’s to come. Following are three examples of where we’re seeing digital innovation occur most prominently: healthcare, supply chain and customer experience.

Bringing Covid-19 under tighter control:

At its heart, digital transformation is about making life and society better, and in the current climate, that means the focus will centre on reducing the suffering caused by Covid-19. Amid the ongoing need for robust and agile defences, artificial intelligence (AI) is set to play a bigger role in managing the pandemic and protecting the global population.
From a vaccine point of view, AI-assisted analysis will enhance allocation efficiency. For example, the UK’s Medicines and Healthcare Products Regulatory Agency (MHRA) uses AI software to process reports of potential adverse side effects of the Covid-19 vaccine. Machine learning (ML) helps ensure the agency misses no details and can more easily identify patterns. As worldwide distribution grows, machine learning can monitor large-scale distribution, proactively predicting where bottlenecks might occur and helping identify actions to prevent them. The same could also apply to limiting safety risks, with ML harnessed to constantly watch for patterns in individual reactions that might indicate storage or manufacturing issues.

When it comes to minimising spread, sophisticated tracking and data evaluation will prove equally valuable. By quickly processing data about new cases, AI-enabled tracking tools will help identify emerging hotspots before medical facilities are overloaded. AI can also guide smart preparation, such as ramping up localised testing and redirecting specialist equipment, including ventilators. Combined with vaccination, data-driven responses to outbreaks will go a long way towards bringing Covid-19 under control in the near term and provide a multi-pronged mechanism for dealing with future health challenges.

Driving supply chain resilience:

Beyond directly tackling the virus, implementation of transformational technology will continue to reconfigure the way businesses function. During the last year, extreme turbulence has led organisations across sectors to accelerate adoption not just of AI, but also of cloud platforms and robotics. According to McKinsey research, global company executives say internal digitalisation is running up to four years ahead of expectations, especially in areas such as supply chain optimisation.
So far, modernisation of supply systems and procedures has mostly revolved around improving short-term resilience. Companies have strived to limit potential complexity and wastage by using fewer product stock-keeping unit (SKU) codes and bolstering data capabilities. This strategy ensures teams align stock availability to real-time needs, be that consumer goods or urgent healthcare supplies.

In the next 12 months, attention will move to how businesses can leverage technology for the long-term benefit of everyone: suppliers, sellers, employees, and customers. In particular, development will be about redesigning supply chains as more intuitive, user-centric, and streamlined processes. For example, greater use of intelligent analytics will create instant visibility into each stage of the chain and current demand, giving suppliers and dealers the insight needed to keep deliveries flowing smoothly. Meanwhile, automation of labour-intensive tasks will ease pressure on planners, leaving them with more capacity to concentrate on creating the best possible service for customers and fuelling lasting loyalty.

Elevating the digital customer experience:

An essential part of meeting today’s customer requirements is bolstering online offerings. Thanks to cloud technology, various businesses have now mastered the fundamentals of ecommerce, including facilitating simple online ordering, estimating precise preparation and packaging times, and enabling easy collection or delivery. But as online shopping continues to expand (with sales due to rise by another 16 percent in 2021), companies will start setting their sights on boosting digital success by enhancing customer experience. A good experience will depend on the right blend of granular customer understanding and convenience.
Alongside technology capable of quickly resolving customer queries before they turn into larger issues (such as advanced chatbots), businesses will start integrating tools that equip them with the insight needed to provide consistently engaging interactions. The ability to generate and instantly utilise actionable data about who customers are and what they do will become a necessity. This, in turn, will spark an uptick in technology demand and supply. As interest in real-time decisioning and granular behavioural analytics rises, so will the accessibility of AI tools. In other words, increased need will spur the mass uptake and democratisation of AI. Advanced technology will go from leading edge to commonplace, providing a more level playing field for companies of all shapes and sizes, while simultaneously ensuring they have the means to enhance customer experience persistently.

Last year brought a colossal wave of innovation, with companies in multiple industries ramping up their digital presence and power. But this was just the first step in an online ‘land rush’ that will keep on rolling. In 2021, we will see an even bigger tide of companies using new technology to address current issues and unlock broader possibilities, including strengthening Covid-19 defences, ensuring greater supply sustainability, and delivering the personalised experiences customers want.

Over the coming decades, companies keen to make the most of digital transformation must prepare for the long haul, readying themselves for steady yet unrelenting evolution that will require constant adjustment, agility, and an up-to-date store of insight. As Dennis Gabor, the inventor of holography, once said: “The future cannot be predicted, but futures can be invented.”

Sanjay Srivastava, Chief Digital Officer, Genpact
The COVID-19 pandemic has affected the world in many different ways, from bringing about a digital transformation and changing the way we do business, to creating a labor shortage fueled by fear of returning to the workforce. Even now, society is still dealing with the repercussions of the worldwide lockdown, and perhaps the largest impact of all is the set of issues created within the global supply chain.

Now that the world is beginning to recover from the COVID-19 pandemic, demand for goods is running high, but supply chain disruptions are causing a global shortage of goods. Anything from household appliances and electronics to automobiles is becoming increasingly difficult to acquire, and this has left many wondering how it will impact the holidays.

These shortages are largely the result of pandemic mitigation strategies that reduced production, combined with the economy’s effort to return to its former operational capacity. Demand for goods and services is exceeding supply, creating order delays and supply shortages around the world.

The recent labor shortage is another contributing factor. Warehouse workers and truck drivers are key to a smooth supply chain, and many are quitting their jobs in pursuit of higher-paying careers. Large companies such as Walmart, Target and Amazon are increasing benefits to attract workers, and retail companies are trying to keep up with the competition by increasing wages. Retirements are also contributing to the shortage, leaving new truck drivers needing to be trained after closures due to the pandemic.

Many people are concerned about these shortages occurring so close to the holidays, but supply chain experts suggest that consumers practice patience and give the economy time to level out, or they could end up paying more for goods and services.
There is no question of how important passwords are. They are the simplest method of securing data and ensuring network security, whether internal or external. Your business thrives on data, no matter what industry you belong to, and that data, including backups, needs to be protected against hackers, phishers, and other intruders who may be attempting to gain unlawful access to your systems. Here is a look at two factor authentication.

What is Two Factor Authentication?

Passwords are so common these days that we often take them for granted. However, password strength is something that shouldn’t be overlooked. Depending on what passwords you choose, they might be easy to guess or crack. This is an obvious security flaw that needs to be addressed. TFA adds another layer of security and is virtually standard on all smart devices these days. You will be asked to input a key code or password, but a second security measure will then be triggered, even if it isn’t Face ID. Fingerprint and retinal scans are common forms of authentication. Getting into work accounts or accessing certain areas of your workplace is much different than unlocking your smartphone or work computer to begin using it for the day.

How Exactly Does It Work?

With all of that in mind, let’s take a closer look at how it is all supposed to work. Authentication factors fall into three categories based on the layer of complexity associated with them. The first layer prompts you for a username and password; most online services will ask you for this, whether you are getting into an email account or attempting to do some online banking through a mobile app. The second layer is tied to a device or a specific item. Bank cards, smartphones, and smartwatches often fall under this category. The third and final layer brings biometric security into play.
Fingerprint scans, facial recognition, and retinal scans are occasionally combined with voice recognition technology to enable this level of two factor authentication and grant access to the information or portal you are attempting to reach.

There are some other things worth knowing when it comes to setting up two factor authentication. There is a clear distinction between authentication and authorization: authentication has to do with identity verification, while authorization has to do with permissions. Being aware of phishing attempts and other efforts to violate data integrity, whether to cause an embarrassing leak or to steal trade secrets, is vitally important. You and everyone on your team should know how to secure your browsers and your credentials so that nothing happens.

Trust the Professionals at ARK Systems

Located in Columbia, Maryland, ARK Systems provides unsurpassed quality and excellence in the security industry, from system design through to installation. We handle all aspects of security with local and remote locations. With over 30 years in the industry, ARK Systems is an experienced security contractor. Trust ARK to handle your most sensitive data storage, surveillance, and security solutions. Contact ARK Systems at 1-800-995-0189 today.
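To make the second authentication layer described above concrete: a minimal sketch of issuing a short numeric one-time code for out-of-band delivery (SMS, email, authenticator app) and checking it with a constant-time comparison, so response timing leaks nothing about the expected value. Function names and the code length are illustrative:

```python
import hmac
import secrets

def issue_code(digits=6):
    """Generate a short numeric one-time code to deliver out of band."""
    return f"{secrets.randbelow(10 ** digits):0{digits}d}"

def verify_code(expected, submitted):
    """Compare in constant time so timing reveals nothing about the code."""
    return hmac.compare_digest(expected, submitted)
```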
5 Basic Data Security Protocols Developers Must Know

Despite being more tech-savvy than the average employee, software developers and engineers are still susceptible to security threats. In fact, software developers are a very appealing target for hackers. “Software developers are the people most targeted by hackers conducting cyberattacks against the technology industry, with the hackers taking advantage of the public profiles of individuals working in the high-turnover industry to help conduct their phishing campaigns,” reported ZDNet.

Software developers often have administrator privileges across systems, which makes them a prime target for a cybercriminal looking to steal a company’s data. And, while devs are tech-savvy, they are still human and susceptible to phishing and other data security risks. Not only are software developers personally under threat of hacking, they are also responsible for building secure apps and tools that protect a user’s valuable data.

Unfortunately, the majority of successful cyberattacks are caused by human error. Software developers make mistakes like anyone else, but these mistakes can cause millions of dollars of damage if not caught in time. In this case, writing good code means writing secure code. This guide will break down some of the basic data security protocols developers must know to protect the integrity of their app and, ultimately, the data of the end user.

The basics: CIA

No, not the intelligence service. In this case, CIA stands for Confidentiality, Integrity, and Availability. Software developers must follow security protocols to make sure data maintains these three qualities.
Here’s what this looks like in practice:

- Confidentiality: Data should only be available to those who are authorized to view it. Maintaining data confidentiality leads to security mechanisms such as authentication, authorization, and encryption.
- Integrity: Data should only be changed by those who are authorized to change it. This involves the use of security measures such as hashing, authorization, accountability, and auditing.
- Availability: This refers to the assurance that those authorized to do so can access data when needed. It encompasses security areas such as disaster recovery, business continuity, and resiliency.

By keeping these three data security tenets front of mind, developers can begin to prioritize the data security protocols that protect a business’s valuable information.

5 Important data security protocols

These data security protocols not only help keep your data safe; they can also satisfy PCI application security requirements.

Encryption

Credentials and secrets are in danger of being exposed or shared on cloud systems daily: credentials may be embedded directly in code repositories, for instance, or shared via email or chat among developers. Developers must be careful to only share credentials and other data once they have been encrypted. Encryption is a major requirement for PCI DSS compliance. Developers should know how to build encryption for any data that is transferred in and out of a platform, as well as any PII that’s stored by the company. Passwords, API keys, and other credentials should also be encrypted to protect this valuable information.

Firewalls

“As simple as they sound, firewalls are one of the most efficient tools in battling cyber criminals and malicious attackers. An efficient and up-to-date firewall keeps various threats away, such as malware, viruses and spam,” writes one expert. Firewalls may be common, but they are often overlooked by devs and left out of date by security teams.
Maintaining a firewall is the first of the 12 PCI DSS requirements. Developers should know how to set up a firewall and keep it secure.

Cryptography

“Cryptography is highly relevant to any software that holds sensitive data, PII, PHI, or anything that is beholden to industry standards such as PCI-DSS or HIPAA,” wrote the Simple Programmer. “Since this data is extremely sensitive, it’s important for developers to understand which algorithms to use in which situation, as well as which algorithms are stronger than others.” Cryptography can help enforce the security, privacy or confidentiality of messages sent over an insecure channel. For developers, knowing the ins and outs of cryptography is crucial to making sure your data is kept safe.

2FA or MFA

Two-factor or multi-factor authentication helps developers satisfy data confidentiality and integrity. These processes require a user to verify their credentials using more than one method. For instance, two-factor authentication may require someone to click a link in an email or enter a code received via SMS. For developers, working with MFA or 2FA in place helps reduce the risk of insider threat. It’s a basic identity and access management step that can ensure anyone accessing valuable data is who they say they are. Implementing 2FA for applications like GitHub can be critical to preventing attackers from stealing data from code, regardless of whether it’s in production or not.

Phishing and other threats

Software developers need to be aware of the risks associated with their role in the organization. Phishing scams are on the rise, as is whale phishing: a phishing attack in which a hacker poses as a senior member of an organization and attempts to steal sensitive information or gain access to a computer system to steal data. Devs need to know how to prevent email spoofing, but they should also be aware that they could be the target of a whale phishing attack or an ordinary one.
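The SMS-style codes used for a second factor are increasingly replaced by time-based one-time passwords (TOTP, RFC 6238), the scheme behind most authenticator apps. A minimal sketch using only the standard library, for illustration; in production, use a vetted library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, step=30, digits=6):
    """RFC 6238 time-based one-time password over HMAC-SHA1.

    The shared secret is base32-encoded, as in authenticator-app QR
    codes; the moving factor is the number of `step`-second intervals
    since the Unix epoch."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return f"{code:0{digits}d}"
```

Against the RFC 6238 test secret (ASCII "12345678901234567890", base32 "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"), the 6-digit code at t = 59 seconds is 287082.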
Even with the right security protocols in place, the threat of human error will still lead to vulnerabilities in the system.
Vulnerability scanning/application security
Vulnerability scans perform regular systemic check-ups to make sure data is kept secure. Vulnerability scans are particularly critical in the age of remote work, as the resources organizations might be relying on can have vulnerabilities that leave data insecure. However, organizations should scan and manage not just the vulnerabilities in the software resources they use to work, but also those within their own applications that are in production. This is critical to ensuring that the apps they build for their customers are secure and compliant with data protection laws. In addition to vulnerability scanning applications in production, there are other things developers can do to ensure the security of their users. Companies rely on Nightfall DLP, for example, to scan their applications in real time for any sensitive data they might collect from customers. Using our API, companies that work with us can classify strings of text or files like documents and images to ensure their own customers don’t inadvertently overshare PII, PHI, or other sensitive information while using their services. Nightfall has more than 150 detectors that can scan more than 100 file types in order to identify instances of improper data sharing. Nightfall can then redact, quarantine, and delete text, strings, messages, or files containing sensitive tokens. Learn more about cloud DLP and setting up your organization for secure remote work in our complete 2021 Security Playbook for Remote-first Organizations. And learn more about Nightfall by scheduling a demo at the link below.
Nightfall is the industry’s first cloud-native DLP platform that discovers, classifies, and protects data via machine learning.
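As a rough illustration of what a DLP scan and redact step does, here is a toy regex sketch. These patterns are invented for illustration only; real detectors such as Nightfall's use machine learning and far more robust logic:

```python
import re

# Toy patterns for illustration only; production DLP engines rely on ML
# and much more careful matching (checksums, context, file parsing, etc.).
DETECTORS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan(text: str) -> dict:
    """Map detector name -> list of sensitive tokens found in the text."""
    hits = {name: pat.findall(text) for name, pat in DETECTORS.items()}
    return {name: found for name, found in hits.items() if found}

def redact(text: str) -> str:
    """Replace every detected token with a placeholder before it is shared."""
    for pat in DETECTORS.values():
        text = pat.sub("[REDACTED]", text)
    return text
```

A scan of `"SSN: 123-45-6789"` flags the SSN detector, and `redact` swaps the token for a placeholder before the message leaves the application.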
Nightfall is designed to work with popular SaaS applications like Slack, Google Drive, GitHub, Confluence, Jira, and many more via our Developer Platform. You can schedule a demo with us below to see the Nightfall platform in action.
What is Cyber Liability Insurance?
Also known as cybersecurity insurance or cyber risk insurance, cyber liability insurance protects businesses against property losses and liability associated with cyber attacks. These include hacks, virus attacks, devices and systems infected with malware, data breaches, and denial of service (DoS) attacks. As businesses increase their Internet usage and continue to adopt digital technologies, the risk of a cyber attack is only going to grow, and a successful attack could cost businesses millions in damage. In the event of an attack or data breach, cyber liability insurance can cover your business losses, regardless of whether they are first-party losses or losses from third-party providers. It is an essential risk management tool for many IT companies, tech companies, and all other companies that deal with a large amount of sensitive information and conduct a lot of business online. The cyber insurance industry is rapidly expanding. The structure of the industry is designed for businesses to pool cybersecurity risks together as a way to internalize risks associated with operating over the Internet. There is a wide and diverse risk pool spanning the industry, which makes it difficult for smaller companies looking to break into the industry. Currently, cybersecurity insurance remains a niche product even though virtually every modern business utilizes the Internet.
How Cybersecurity Insurance First Emerged
The cybersecurity insurance industry first emerged all the way back in the early 1990s, as cybersecurity was becoming an increasingly important factor for many major businesses at the time, even though the Internet was a relatively unknown entity. During this time, the two biggest online threats were copyright infringement and theft of intellectual property. The big players in the computer industry at the time were worried that rivals or cybercriminals would steal their innovations and claim them as their own.
However, by the turn of the new century, many industry experts began to realize that the scope of cybersecurity is much greater than just copyright theft. At this point in time, industry leaders were creating risk management tools to better cope with the emerging threats they faced. It took two major events for all major players in the industry to take notice: Y2K and the 9/11 attacks. These events helped convince governments that small-to-medium sized businesses need cybersecurity insurance for protection. After the 2008 financial crisis, it became clear that larger corporations need cybersecurity insurance too. One of the first industries to give serious consideration to cybersecurity insurance was banking in the early 2000s, which was a necessary move for an industry that was rapidly digitizing and dealing with an immense amount of sensitive information. Initially, the insurance only covered third-party costs and some in-house business interruptions. However, insurance soon expanded to cover both first- and third-party elements, which we will discuss in greater detail later in the article.
Further Innovations in Cybersecurity Insurance
The increase in the number of cyber attacks and potential losses for businesses both small and large has led many consumers and business owners to rethink their insurance needs. It is important to note that the nature of cybersecurity threats is constantly changing and evolving. Therefore, cybersecurity insurers have to constantly adapt, change their policies, and provide different services to keep pace with the emerging threats. Much of this part of the industry is still in the developmental stage, so some level of experimentation can be expected from insurance providers. One newly formed option is bundling cybersecurity insurance with IT security services.
Companies who specialize in creating customized Internet security products for clients are also partnering with cybersecurity insurers to address the needs expressed during the consultation process. Not to mention, the government also has a significant stake in the cybersecurity insurance industry due to the amount of losses that could potentially arise without insurance. If an insurance provider can help a business recoup some of its losses, it prevents the government from stepping in and dishing out the necessary amount of money. It is imperative to expand the existing pool of both insurance providers and clients. There are compounded risks if there are too few players in the system. For one, having too few insurance providers can lead to an oligopoly or even a monopoly, which could create higher-than-market-value insurance premiums, putting greater financial strain on businesses. On the other hand, having too few clients increases the risk of losses for providers because the cybersecurity risks are not efficiently dispersed. Insurance providers also have to minimize "free-riders" who get generalized protection without paying for the coverage.
What Does Cybersecurity Insurance Cover?
Cybersecurity insurance covers many, but not all, of the losses associated with a data breach. Providers reimburse businesses for the costs they already incurred from the attack. Here is some of the coverage you are likely to find in an insurance policy:
- Data Restoration: This covers the cost to restore or replace software, electronic data, and other programs that were damaged or destroyed by a malware attack, hacker attack, denial of service (DoS) attack, or any other form of cyber attack.
- Loss of Income and Other Expenses: This covers income lost from an attack as well as other expenses required to restore operations following a shutdown from a virus, hacker attack, etc.
Some providers include policies that cover losses to suppliers, distributors, and others that were forced to shut down because of a data breach at a company they rely on to maintain their own operations.
- Notification Costs: This covers the cost of notifying parties who were impacted by the data breach. Coverage here is important because many states have laws that require businesses to notify customers or employees if their personal information is compromised. Policies could also cover the costs associated with credit monitoring as well as establishing a call center for those impacted.
- Cyber Extortion: This covers the ransom paid to a hacker who breached a company's network and threatened to commit more damage to the company, such as releasing sensitive data, infecting the system with a virus, initiating a DoS attack, etc. These policies generally reimburse any extortion payments paid to an attacker as well as any expenses related to the incident, such as hiring a negotiator to try and reason with the attacker.
- Crisis Management: A majority of cyber insurance policies cover some crisis management expenses. This could include the cost of hiring a lawyer, cybersecurity expert, forensic accountant, or public relations manager to assess the situation, determine the scope of damages, pinpoint whose data was compromised, and help mitigate the losses to salvage a company's reputation.
Many cybersecurity insurance policies also cover some liability claims. These usually pertain to settlements or damages as well as defense costs, which can fall within the original policy limit or outside it. Some liability examples typically covered include:
- Electronic Media Liability: Electronic media liability insurance covers lawsuits against a company for claims like libel, defamation, and slander. It also covers copyright infringement, domain name infringement, and invasion of privacy.
These claims are only covered if the policyholder publishes electronic data on the internet.
- Privacy Liability: Privacy liability insurance is important for businesses with information risk or privacy risks. Whenever a cyber criminal exposes sensitive customer or employee information, it also exposes businesses to liability. This liability coverage protects businesses from liabilities arising from a cyber attack or privacy law violation. These could cover everything from liabilities in a contractual obligation to regulatory investigations conducted by the government.
- Network Security Liability: This covers claims against a company accused of negligent acts, critical errors, or omissions. Omissions could include failure to provide notification of a data breach, failure to protect sensitive business information, or failure to prevent a DoS attack or the introduction of a virus or malware into the system.
- Regulatory Proceedings: This covers penalties, fines, or hearings pressed upon businesses by regulatory agencies that enforce data breach laws. It also helps cover the cost of hiring an attorney to help you respond to and represent you in regulatory proceedings.
What Cyber Insurance Policies Do Not Cover
As is the case with all insurance contracts, cyber policies do not cover everything and exclude certain types of claims. Here are a few typical exclusions:
- Property Damage and Bodily Harm: Cybersecurity insurance does not cover claims of bodily harm or property damage. That is where general liability insurance is essential.
- War and Terrorism: Acts of war and terrorism do not fall under the scope of a cybersecurity insurance policy claim.
- Utility Failure: Insurance providers are not responsible for attacks caused by utility failures, such as electrical grid shortages.
- Intentional Dishonesty by the Insurance Holder: If a business intentionally lies to or withholds critical information from insurance providers, it is not only liable for damages but could face possible legal action from the insurance provider.
- Contractual Liability: Each contract is different, but policy holders generally assume some type of liability in a contractual agreement with insurance providers.
- Cyber Attacks Committed Before the Retroactive Date: A cybersecurity insurance policy is only applicable after the policy goes into effect, so all damages that occurred before the implementation date are not covered.
- Restoring Computer Systems to a Higher Level of Functionality than Before the Attack: A cybersecurity insurance policy is not responsible for upgrading a business's operations. It is only responsible for the system in place at the time of the attack.
Who Needs Cybersecurity Insurance?
Cybersecurity insurance is ideal for businesses that store confidential, sensitive, and proprietary information online. If your business stores any of the following information, you should seriously consider adding cybersecurity insurance to better protect your business:
- Credit Card Numbers
- Social Security Numbers
- Driver's License Information
- Phone Numbers/Email Addresses
- Medical Records, Health Information, Medical Expenses
- Patent Applications, Trade Secrets, Copyright Claims
Even if you're a smaller business and do not deal with nearly as much sensitive information as larger businesses, it is still important to invest in cybersecurity insurance. The truth is, you never know when an attack could occur, and you should always be prepared, even if you believe the chances of it occurring are low. As you will see in the next section, there are many different ways cybercriminals can attack your business. Fortunately, there are safeguards to mitigate the chances of a significant data breach.
How Do Cyber Breaches Occur?
There are a multitude of ways a cyber breach can occur. For example, many cybercriminals use social engineering tactics to manipulate users into clicking infected links. Cyber criminals can send phishing emails or texts to unsuspecting employees or customers pretending to be your company. Once they click the email or text link, the cyber criminal can steal their personal information, or even use a virus to infect company data files. The best way to protect your company is through robust internal safeguards. For instance, businesses should limit the number of employees who have access to sensitive business files and information. Likewise, you should have a thorough password policy, with periodic password updates, and employees should under no circumstances share their passwords with anybody. There should also be regular software updates, because outdated software is an immense security risk to a company. With proper safeguards in place coupled with cybersecurity insurance, businesses can mitigate the risk of a data breach while also protecting themselves financially in the case of an attack occurring. Both measures ultimately safeguard the business and its reputation for the future. Security should always be a boardroom agenda for any business, and cybersecurity insurance adds an extra layer of protection to a company's security policy.
What Types of Cybersecurity Insurance Do You Need?
There are two types of cybersecurity insurance businesses may need. These include:
- First-Party Coverage: Covers expenses related to a data breach or stolen data
- Third-Party Coverage: Provides protection to businesses being sued by customers for failing to prevent a cyber attack
Below we discuss both types of coverage in greater detail, as well as why your business should consider both.
First-Party Cybersecurity Coverage
First-party coverage is insurance that deals with the costs that directly impact your business in the event of a cyber attack.
These include expenses for restoring a breached network or recovering compromised data. It is sometimes referred to as data breach insurance, and you can usually add it to your general liability insurance if the insurer allows it. Additionally, first-party coverage helps offset the costs of notifying clients about an attack and providing credit monitoring services to those impacted by the breach. First-party cybersecurity insurance can usually cover the following:
- Crisis Management
- Public Relations
- Cyber Extortion Payments
- Hiring Expert Investigators
- Cost of Hiring Additional Staff
- Renting Equipment
- Purchasing Third-Party Services
- Custom Credit and Fraud Monitoring Services
Third-Party Cybersecurity Coverage
Third-party cybersecurity insurance helps cover lawsuits related to a business's cybersecurity risks. These are the claims made against businesses by third-party providers impacted by a data breach. In essence, it is liability coverage that protects businesses that fail to prevent a cyber attack or data breach at their company. Third-party insurance is particularly valuable for IT consultants, tech professionals, and software developers who provide software recommendations to clients. Third-party insurance will help protect those individuals and their employers who recommended software that was later responsible for a cyber attack or data breach. This type of cybersecurity insurance generally covers the following:
- Legal Defenses
- Legally Binding Judgements of the Case
- Any Settlements Agreed To (Both In and Out of Court)
- Any Other Legal Expenses
Businesses can also bundle their third-party cybersecurity insurance with their errors and omissions insurance, which covers lawsuits relating to work that was late, inaccurate, or never delivered. When paired together, these are known as technology errors and omissions insurance, and provide companies with robust third-party liability coverage.
How Much Does Cybersecurity Insurance Cost?
Cybersecurity insurance is not a one-size-fits-all type of coverage. There are many different factors that determine the cost of coverage. Depending on the size of the business and the scope of the insurance coverage, cybersecurity insurance can range anywhere from a few hundred dollars a year to well over $50,000. However, if you work with policy providers to tailor coverage that matches your business needs, you should be able to get a rate that fits within your budget. There are a few key criteria businesses and insurance providers must factor in to determine the cost of your cybersecurity insurance. These include:
- Coverage Limits: The more complex a cybersecurity network is, the more expensive coverage will be. For instance, businesses with multiple servers will have higher insurance premiums than those operating with one.
- Security Measures: A great way for businesses to lower their insurance premiums is to have robust cybersecurity measures in place. This could include a company-wide cybersecurity policy, employee training, regularly running system maintenance checks, updating software, updating passwords, etc.
- Industry: Companies that primarily operate online and deal with large amounts of sensitive/personal data will have to pay considerably more than a small business with a low-traffic website. Industries that generally have higher cybersecurity insurance premiums include healthcare, financial/banking, and tech because they deal with so much sensitive information on a day-to-day basis.
- Data Access: Businesses can save money by limiting the number of people who have access to sensitive business/client information. If companies limit access to certain departments or senior officials, the risk of a data breach is reduced significantly. Likewise, hiring an in-house or third-party cybersecurity expert can lower premium rates.
- Claims History: If a company has a history of claims, the insurance company will generally charge higher premiums because of the perceived risk of providing cover. That is why it is essential for businesses to minimize claims and protect their business as best as possible.
When compared to other types of business insurance, cybersecurity insurance generally has higher premiums because of the scope and impact a data breach can have on not only a business but also its clients and third-party providers. The costs of a cyber attack can add up very quickly. That is why it is essential to contain the crisis quickly and respond to customers in an honest manner. Likewise, companies need to fix the damaged hardware and immediately update software, have a public relations correspondent to publicly address the situation, and be prepared for any legal proceedings ahead.
Partner With Cybersecurity Experts to Better Protect Your Business and Lower Your Insurance Premiums
Cybersecurity insurance is a crucial asset for SMBs all the way up to Fortune 500 companies. However, one distinct advantage of many larger enterprises is that they generally have the resources to build an in-house IT and cybersecurity team. Creating a full-time cybersecurity team is an undeniably expensive endeavor, which is why many SMBs are unable to implement one. Fortunately, there is a cost-effective alternative many SMBs can use to manage cybersecurity risks that aligns with their budgets. Instead of building a team in-house, companies can partner with third-party cybersecurity experts who can help businesses navigate the increasingly complex cybersecurity landscape. At BitLyft, we are the cybersecurity risk management experts that you want in your corner. Our team consists of highly trained cybersecurity analysts, developers, and strategists. We can handle the day-to-day tasks of helping you achieve your cybersecurity goals, while you can focus on growing your business.
Not to mention, when we're a part of your team, we can help lower your company's cybersecurity insurance premiums. If you would like to learn more about BitLyft, the cybersecurity services we provide, and how we can help your business, feel free to visit our website and contact us today!
Europe’s first exascale supercomputer is to be located in Forschungszentrum Jülich facilities in Germany. The European High Performance Computing Joint Undertaking (EuroHPC JU) has selected the German research institution as a partner in Germany's Gauss Center for Supercomputing to host the future machine, to be named Jupiter. Jupiter, which stands for "Joint Undertaking Pioneer for Innovative and Transformative Exascale Research", will be installed in a purpose-built building on the campus of Forschungszentrum Jülich from 2023 and operated by the Jülich Supercomputing Center (JSC). As with Forschungszentrum Jülich’s Juwels system, Jupiter will be based on a modular supercomputer architecture that the research institution developed as part of the European DEEP research projects. The system will reportedly require just 15MW of power, which will be sourced from renewable projects. It will utilize hot water cooling and may be connected to a district heating scheme. The system is to be procured by the European supercomputing initiative EuroHPC JU. The total costs for the system amount to €500 million ($519.3m); half will be provided by EuroHPC JU and the other half will be provided by the German Federal Ministry of Education and Research (BMBF) and the Ministry of Culture and Science of the State of North Rhine-Westphalia (MKW NRW). Located in Jülich, in North Rhine-Westphalia’s Düren district, the Jülich Supercomputing Center at Forschungszentrum Jülich launched as Germany's first high-performance computing center in 1987. It currently operates the Gems, Jureca, Jusuf, and Juwels supercomputers and largely uses Atos-based hardware. It has also procured a D-Wave quantum computer. Oak Ridge's exascale 'Frontier' system in the US is named the world's most powerful supercomputer in the most recent Top500 list of supercomputers. On the main High-Performance Linpack (HPL) benchmark used by Top500, Frontier reached 1.102 exaflops of sustained performance.
It has a theoretical peak performance of 1.686 exaflops - although Oak Ridge believes it can reach 2 exaflops in time. For the mixed-precision computing benchmark, useful for calculating AI performance, it hit 6.88 exaflops - also the world's fastest. However, China has operated two exascale systems for over a year - it has just kept them relatively secret, and not submitted them to the Top500. A third Chinese exascale system is reportedly in development.
EuroHPC JU picks new locations for more supercomputers, MareNostrum 5 back on track
As well as Germany, the EuroHPC JU has selected a further four sites to host mid-range supercomputers with petascale or pre-exascale capabilities:
Daedalus will be hosted by the National Infrastructures for Research and Technology (GRNET) in Greece.
Levente will be hosted by the Governmental Agency for IT Development (KIFU) in Hungary.
Caspir will be hosted by the National University of Ireland Galway (NUI Galway) in Ireland.
EHPCPL will be hosted by the Academic Computer Centre Cyfronet AGH (Cyfronet) in Poland.
Specifications of the new systems have not been shared yet. Professor J-C Desplat of the Irish Centre for High-End Computing (ICHEC) at NUI Galway, said: “A new supercomputer, expected to be around 25 times more powerful than the current national supercomputer Kay, would provide a national competence development platform for both numerical modelling and for the next generation of data-centric techniques and platforms and, as such, accelerate the adoption of powerful new hybrid techniques embedding machine learning within mainstream computational science models and Grand Challenges.” The machines will be co-funded by the EuroHPC JU – with JU co-funding up to 50 percent of the total cost of the high-end supercomputer and up to 35 percent of the total cost of the mid-range supercomputer – with budget stemming from the Digital Europe Programme (DEP), Horizon Europe (HE), and by contributions from relevant participating states.
The exact funding arrangements for the new supercomputers will be reflected in hosting agreements that will be signed soon. The most powerful supercomputer in Europe is currently Lumi, a EuroHPC JU system launched earlier this month and located in a former paper mill in Kajaani, Finland. The HPE Cray EX supercomputer is currently capable of 152 sustained petaflops (Linpack), but that is expected to grow to more than 375 in the coming weeks - and a peak performance potentially above 550 petaflops. Five of EuroHPC JU's systems are currently operational, including LUMI. Along with it is Vega, in Slovenia; MeluXina, in Luxembourg; Karolina, in Czechia; and Discoverer, in Bulgaria. Three more supercomputers are also underway: Leonardo in Italy, Deucalion in Portugal, and MareNostrum 5 in Spain, the latter of which is seemingly back on track after delays. The future of MareNostrum 5 had looked uncertain last year, with the public procurement process for the system canceled in July 2021 after the parties involved couldn’t agree on a vendor to supply the system. According to the document published on the EuroHPC site, voting results at a meeting “did not achieve the needed majority to reach an agreement to adopt the selected tender.” At the time, Politico reported involved parties were divided over whether to focus on sovereignty and rely on local ‘made in Europe’ supply chains or buy the best available technology to better support research; a decision between a more Euro-centric Atos system vs a joint IBM-Lenovo bid couldn’t be reached. EuroHPC JU put out a new tender call at the start of the year, and this week announced that it has selected Atos as the winning vendor. To be hosted in Barcelona, the system will have a peak performance of 314 petaflops, more than 200 petabytes of storage, and 400 petabytes of active archive. “The EuroHPC Joint Undertaking continues to lead the way in European supercomputing. 
MareNostrum 5 will provide European scientists and industry with access to cutting-edge HPC infrastructures and services,” said Anders Dam Jensen, the Executive Director of the EuroHPC Joint Undertaking. “It will power medical innovation but also climate research, engineering and earth sciences while supporting our objective to promote green and sustainable technologies.” Director of BSC-CNS Mateo Valero added: “The acquisition of MareNostrum 5 will enable world-changing scientific breakthroughs such as the creation of digital twins to help solve global challenges like climate change and the advancement of precision medicine. In addition, BSC-CNS is committed to developing European hardware to be used in future generations of supercomputers and helping to achieve technological sovereignty for the EU’s member states.” The Barcelona Supercomputing Center – Centro Nacional de Supercomputación (BSC-CNS), which all through the uncertainty seemed sure the system was still going ahead, inaugurated the new 12,000 sqm HQ building due to host MareNostrum 5 in October 2021.
During last fall’s wildfires in California, the largest electric utility provider in the state was forced to shut off power for millions of customers. In early October, Pacific Gas and Electric (PG&E) curtailed power to more than 30 counties in Central and Northern California. California is prone to more wildfires, natural disasters, and inevitably, more power shutdowns, making microgrids a critical part of the infrastructure to support future operations. This is why PG&E is planning to build 20 new microgrids near utility substations that could be affected by future power shutoffs. Communities, cities, schools, universities, and, yes, data centers are looking to microgrids to deploy more resilient power solutions above and beyond generators and traditional backup solutions.
There’s More to Microgrids Than You Think
Microgrids can bring together multiple sources of energy, offering a holistic approach rather than relying on a single utility provider. Microgrid solutions are entering mainstream use-cases and are delivering some serious benefits to resiliency, uptime, and power delivery. And the use-cases just keep growing. Outside of the California wildfires, there are other use-cases as well. One of the nation’s largest microgrids helps power Alcatraz Island and its 1.5 million annual visitors, helping save more than 25,000 gallons of diesel a year while reducing the island’s fuel consumption by more than 45% since 2012. How did the Texas A&M RELLIS Campus, boasting a growing list of multimillion-dollar state and national research facilities, testbeds, and proving grounds, deliver a high-availability power supply for its mission? Microgrids. In a traditional sense, a microgrid acts as a self-sufficient energy system capable of serving a discrete geographic footprint. These locations and geographies include college campuses, hospital complexes, business centers, or entire neighborhoods.
Here’s what’s changed: Microgrid architecture has advanced from merely delivering power to doing so intelligently. Advanced microgrids are smart and leverage data-driven solutions for software and their control plane. Rob Thornton, president and CEO of the 105-year-old International District Energy Association, often says that microgrids are “more than diesel generators with an extension cord.” In other words, a microgrid is not just a backup generation mechanism but should be a robust, 24/7/365 asset. Also, an advanced microgrid may provide grid and energy management services. Consider this list of microgrid capabilities: - Produce on-site generation, and, in some cases, thermal energy. - Sell capacity, energy and ancillary services to the grid and participate in demand response — activities that create a potential revenue stream for the asset owner. - Optimize energy resources to priorities set by the host. - Manage load to reduce energy waste and achieve superior efficiency. A fundamental feature of a microgrid is its ability to island — meaning it can disconnect from the central grid and operate independently and then reconnect and work in parallel with the grid. So, for example, whenever there is a significant storm or another natural weather event that potentially causes an outage on the power grid, the microgrid islands and activates its on-site power generators. When the power outage ends, the microgrid reconnects to the grid. A microgrid controller gives the microgrid its islanding capability as well as new, data-driven capabilities. Also known as the central brain of the system, the controller can manage the generators, batteries, and nearby building energy systems with a high degree of sophistication. The controller orchestrates multiple resources to meet the energy goals established by the microgrid’s customers by increasing or decreasing the use of any of the microgrid’s resources – or combinations of resources. 
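The orchestration the controller performs can be sketched very roughly in code. This is a purely illustrative toy; real controllers weigh safety interlocks, economics, forecasts, and many more signals, and the resource names and thresholds below are invented:

```python
# Toy sketch of a microgrid controller's islanding decision (illustrative only).
def choose_mode(grid_ok: bool, battery_pct: float,
                solar_kw: float, load_kw: float) -> str:
    """Decide how to serve the load given grid health and local resources."""
    if grid_ok:
        return "grid-connected"       # run in parallel with the utility
    if solar_kw >= load_kw:
        return "island-solar"         # on-site generation covers the load
    if battery_pct > 20:
        return "island-battery"       # draw down storage, keeping a reserve
    return "island-generator"         # fall back to on-site generators

# During a storm outage with little sun and a healthy battery:
print(choose_mode(grid_ok=False, battery_pct=80, solar_kw=2, load_kw=10))
```

When the outage ends (`grid_ok` becomes true again), the controller reconnects and the microgrid resumes operating in parallel with the grid.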
These types of solutions can also enable microgrid-as-a-service. Microgrid-as-a-service delivers a fully managed, data-driven solution for your power delivery requirements. Advanced data gathering from numerous operational microgrid deployments allows leading partners to make better decisions and service units proactively. With this type of managed offering, customers never have to worry about their microgrid unit; it is serviced, monitored and maintained by the microgrid provider.

3 Myths Regarding Microgrids and Data Centers

In researching microgrids and learning about their capabilities, I quickly ran into three myths that still surround this technology.

- Microgrids are too expensive. Yes, there is an upfront cost to building a microgrid, which will vary depending on your use case and the scale of the project. Some design costs are in the thousands of dollars, while more complex systems may cost several million dollars. However, look at it from a healthcare data center perspective for a second. “The extreme case would be for your medical device to stop working,” says Dave Carter, the managing research engineer at the Schatz Energy Research Center and the lead technical engineer on microgrid projects. “The value of the power that the microgrid can provide when the rest of the county [in California] is de-energized is high.”
- Microgrids are way too complicated and challenging to manage. Modern microgrids are smarter, more automated and more data-driven than ever before. Plus, the microgrid-as-a-service model enables enterprises, healthcare providers, cities and even data center operators to focus on what they’re good at and what their business requires. Today’s microgrid is far easier to manage, has more integration points with power solutions and can significantly improve resiliency.
- Microgrids are basically the same as a generator.
Microgrids are certainly not the same as a traditional generator. First, if you have a diesel generator, there is a chance you might be limited in how much you can test it due to environmental regulations. Second, microgrids can be wholly independent and need not rely on diesel fuel; remember, they can source power from multiple locations. Finally, you can absolutely use a generator alongside a microgrid. Here is a specific example: a diesel-backed microgrid operated during Super Bowl 50 in San Francisco, California. Using Tier-4 technology, the microgrid powered Super Bowl City using renewable diesel fuel, as opposed to petroleum diesel fuel. A big difference is that this fuel is not biodiesel. Instead, it’s Neste renewable diesel, created from renewable raw materials, including any organic biomass, such as vegetable oil.

Getting Started Means Asking the Right Questions

The realm of power delivery in the data center and IT space continues to become more interesting. Although power consumption is becoming more efficient, we are seeing more compute instances deployed. These instances translate to edge computing, remote locations, more distributed computing and a larger ecosystem that will require access to reliable and secure power solutions. To shift your paradigm around microgrids and power delivery, start by asking some essential questions:

- Is my power delivery as efficient as I need it to be?
- Am I worried about power outages?
- How much do I really trust my current generators?
- When was the last time I reviewed my power solution?

If you’ve never looked at microgrids as a real option for your data center, enterprise or specific use case, it might be an excellent moment to explore these solutions.
These systems are supporting major hyperscale data centers, critical healthcare facilities, cities and towns, and even the island of Alcatraz.
Executing custom scripts for Windows

Organizations may sometimes use script commands to execute routine, time-consuming operations on Windows machines that are not natively supported by the MDM platform. A script is a series of commands, written in a scripting language, that details the operations you want to perform on a machine and executes them automatically, without manual intervention. These tasks would otherwise have to be administered manually, one by one, on each device.

To execute custom scripts on Windows machines, create script files with custom configurations that suit your needs and upload them to Hexnode. Then push the script files to the target devices. It is recommended to validate the script execution manually on a single device before associating it with devices in bulk. This helps you analyze and remediate any issues without heavily impacting business continuity or productivity.

Create and Run scripts

Hexnode supports .ps1, .bat and .cmd script files. PowerShell scripts (.ps1) and batch files (.bat or .cmd) can be created with various code editor tools; the easiest option is the Notepad editor. Here is how PowerShell and batch script files can be created and executed using Notepad on a Windows machine:

- On your Windows 10 PC, open the Notepad app.
- Enter the commands required for your script in the text file. For example, the following PowerShell script creates a new folder called “Hexnode Folder” in the location D:\temp on the device:
New-Item -Path 'D:\temp\Hexnode Folder' -ItemType Directory
A batch script is written the same way; for example, one that creates a new text file named Hexnode in the C drive on the device.
- Save the file with the extension “.ps1” for PowerShell scripts. Use the extensions “.cmd” or “.bat” for batch scripts.
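For illustration, the two example script files above could be generated programmatically before upload. This is a hedged sketch: the file names are invented, and the batch command body is an assumption (the original text describes the batch example without showing its code).

```python
from pathlib import Path
import tempfile


def write_example_scripts(out_dir: Path) -> dict:
    """Write a sample PowerShell (.ps1) and batch (.bat) script to out_dir."""
    scripts = {
        # PowerShell example from the text: create D:\temp\Hexnode Folder.
        "hexnode.ps1": "New-Item -Path 'D:\\temp\\Hexnode Folder' -ItemType Directory\n",
        # Batch example (assumed body): create an empty C:\Hexnode.txt.
        "hexnode.bat": "type nul > C:\\Hexnode.txt\n",
    }
    paths = {}
    for name, body in scripts.items():
        p = out_dir / name
        p.write_text(body, encoding="utf-8")  # Hexnode accepts .ps1, .bat, .cmd
        paths[name] = p
    return paths


if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        for name, path in write_example_scripts(Path(d)).items():
            print(name, path.suffix)
```

The generated files would then be uploaded through the Hexnode console exactly as described in the following steps.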
- To run the script, open the command prompt, enter the full path of the script in the format c:/scripts/scriptname.ps1, and press Enter.

Running the script manually on selected devices before bulk deployment lets you identify and remove errors pre-emptively, before they cause severe business continuity issues.

Upload Script to Hexnode

Scripts are used to proactively set up configurations or execute operations on the devices. Before these script files can be deployed to devices via Hexnode, they must first be uploaded to the MDM console.

To upload a script file to Hexnode:

- On your Hexnode MDM console, navigate to Content.
- Choose My Files and click on Add.
- Enter an appropriate file name and upload the script file.
- Save the file in Hexnode.

Execute Custom Scripts

Once the script is created and uploaded, it can be pushed remotely to deployed devices via Hexnode. To execute a custom script on Windows:

- On your Hexnode portal, navigate to Manage > Devices.
- Select the Windows PCs or tablets to which the script is to be deployed.
- Click Actions > Execute Custom Script.
- Choose the platform as Windows.
- Choose the script file source as either Upload file or Hexnode repository.
- If Upload file is chosen, click on Choose file… next to the Upload File field and choose a script file that resides on your machine.
- If Hexnode repository is selected, you will be asked to select a script file previously uploaded to the Content tab in Hexnode.
- The Script name field will be auto-populated with the name of the script file.
- Enter the Arguments for the chosen script file. Arguments are the variables provided on the command line while executing a script; they can be deployed remotely to the device via the MDM. Add spaces between arguments if you are entering more than one argument for a single script file.
- Provide a time duration in the Timeout field.
If the script execution does not finish within the specified period, it will be forcefully terminated. The minimum time duration is 15 minutes; however, it is recommended that the default value of 30 minutes be left unchanged.

- Click on Execute.

Navigate to the Action History tab of the device to view the script execution status. Hexnode administrators can check the output of the pushed script file (only if the script returns an output) by navigating to Manage > Device > the device to which the action is pushed > Action History, then clicking Show Output next to the script execution action.

If the action is successful and the configuration of the script file is correct, the script will be executed automatically on the device. In the example shown, the uploaded script file creates a new document named “par” in the C drive of the device with contents 1, 2 and 3, the parameters passed through Hexnode.
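The timeout behavior described above (run a script, then forcefully terminate it if it overruns the limit) can be sketched in a few lines. This is an illustrative model, not Hexnode's actual implementation; the function name is invented, and seconds are used instead of minutes for brevity.

```python
import subprocess


def run_with_timeout(cmd, timeout_s):
    """Run cmd; return 'completed', or 'terminated' if the timeout expires."""
    try:
        # subprocess.run enforces the timeout and kills the child
        # process before raising TimeoutExpired.
        subprocess.run(cmd, timeout=timeout_s, check=False)
        return "completed"
    except subprocess.TimeoutExpired:
        return "terminated"


if __name__ == "__main__":
    import sys
    quick = [sys.executable, "-c", "print('hi')"]
    slow = [sys.executable, "-c", "import time; time.sleep(5)"]
    print(run_with_timeout(quick, 10))   # completed
    print(run_with_timeout(slow, 0.5))   # terminated
```

On Windows, an MDM agent would invoke the uploaded .ps1 or .bat file the same way, passing any configured arguments on the command line and abandoning the run once the timeout elapses.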