Cybersecurity: Always keep in mind its human component
Protecting hardware and software against cyber threats may demand highly technical skills, but it is fairly straightforward work because the causal relationship between IT vulnerabilities and data breaches is so direct. To illustrate, if information security (infosec) experts discover a zero-day vulnerability, developers must find a way to patch it before cybercriminals can exploit it.
Indeed, IT departments have countless hardware and software protection tools at their disposal, such as anti-malware programs and network firewalls. However, they must always keep in mind that their biggest vulnerability by far is the human user. This is primarily due to three reasons: people make mistakes, people can be lazy, and people may not feel that they are part of the organization’s cybersecurity efforts.
People are prone to making mistakes
Fraudsters take advantage of staff members’ weaknesses all the time. For example, a phisher may send employees a fake email saying that company accounts may have been compromised in a hacking campaign. The email will go on to say that account holders must log in and change their access credentials to keep their accounts safe from takeovers. Out of sheer worry, some email recipients click on the link provided and arrive at a spoofed login page.
Unbeknownst to them, as soon as they submit their login details, they’re actually handing over their credentials to the phisher. That cybercriminal will then go to the real login page, sign in using the stolen credentials, then change the username and/or password to lock the original user out of the account. The hacker is then free to pose as the victim, roam around the company network, and steal as much data as they can get their hands on.
Organizations must always keep in mind that their biggest cybersecurity vulnerability by far is the human user.
Zero trust: A way to cover people’s fallibility
There are plenty of ways to fool people, so one way to manage this risk is by minimizing the consequences when staff members fall for fraudsters’ tricks. In a zero trust security model, the organization assumes that its network has already been infiltrated, which means that mere entry no longer signifies trustworthiness.
Therefore, users who enter the network are only granted access to the data and apps they need to accomplish their tasks. This means that if a hacker takes over a marketer’s account, they won’t be able to dive into the accounting department’s drives and steal from their folders. Rather, the hacker will be limited to what the authentic user has access to.
Machine learning-powered tools
Another way to cover for people’s fallibility is by being smarter at nipping hacking attempts in the bud. To illustrate, identity and access management (IAM) programs can now identify the IP addresses of the devices from which logins are made. Thus, if a user normally logs in from Salt Lake City but suddenly pops up in Melbourne, Australia, the IAM program can flag that instance as suspicious.
Additionally, there are now many machine learning-powered network monitoring tools that can be trained to identify normal and innocuous behaviors over time. Once behavioral baselines are established, the tools can identify suspicious activities that the IT department must investigate.
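To make the baseline idea concrete, here is a minimal Python sketch of how a per-user behavioral baseline might be learned and used to flag outliers. The user names, event counts, and the three-standard-deviation threshold are illustrative assumptions, not features of any particular monitoring product:

```python
# Illustrative sketch: learn a per-user baseline of hourly event counts,
# then flag observations that deviate sharply from that baseline.
from statistics import mean, stdev

def build_baseline(history):
    """Map each user to the (mean, stdev) of their historical counts."""
    return {user: (mean(counts), stdev(counts))
            for user, counts in history.items()
            if len(counts) >= 2}  # stdev needs at least two observations

def is_suspicious(user, observed, baseline, threshold=3.0):
    """Flag counts more than `threshold` standard deviations above normal."""
    if user not in baseline:
        return True  # no baseline yet: route to the IT department for review
    mu, sigma = baseline[user]
    if sigma == 0:
        return observed != mu
    return (observed - mu) / sigma > threshold

history = {"alice": [4, 6, 5, 7, 5], "bob": [40, 45, 50, 42, 47]}
baseline = build_baseline(history)
print(is_suspicious("alice", 60, baseline))  # True: far above her norm
print(is_suspicious("bob", 48, baseline))    # False: within his norm
```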
People can be lazy
There are many small things that require the barest of effort but that staff members fail to do out of sheer laziness. For instance, they’ll forget to lock their computers when they leave their workstations. This lets unauthorized users take over the station, launch browsers, and open tabs for email and other accounts that the authentic user is signed into.
At other times, people just tend to use the most convenient methods available to them. They’ll use short and easy-to-crack passwords or reuse passwords across multiple accounts if they can. And even when they’re required to change their passwords regularly, they may just use a base phrase for all of their passwords, then append the month and year to make each one unique. While this may look ingenious at first, it actually introduces predictability: if a hacker gets hold of an expired password, they can easily guess what the current one may be.
Multi-factor authentication (MFA): Require users to submit more proof of identity
The most popular solution to the problem of passwords is tacking on more steps during the login process. One may be asked to submit a one-time passcode from an authenticator app, or to have their fingerprint scanned. Because password-based systems remain the most prevalent identity authentication tools, building on top of them is the logical next step; developing and implementing entirely new systems would require much more effort.
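For illustration, here is a standard-library Python sketch of how the one-time passcodes produced by authenticator apps (per RFC 6238) can be computed and verified. The base32 secret is a made-up example; production systems should use vetted MFA libraries and tolerate small clock-drift windows:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time passcode."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(submitted: str, secret_b32: str) -> bool:
    """Second factor: constant-time comparison against the expected code."""
    return hmac.compare_digest(submitted, totp(secret_b32))

secret = "JBSWY3DPEHPK3PXP"  # hypothetical secret shared at enrollment
print(verify(totp(secret), secret))  # True when the codes match
```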
As previously mentioned, it’s easier to add extra steps to existing processes, but MFA runs counter to the frictionless login experience that users want. This is why passwordless authentication methods, such as hardware security tokens and advanced biometrics, have been introduced.
People feel they’re not a part of the company’s cybersecurity efforts
According to a 2014 study, staff members often see themselves as outside of an organization’s cybersecurity efforts and are therefore lazy about cybersecurity or tend to do things without the company’s information security in mind. Reversing this mindset requires overhauling corporate culture, which is no easy feat. Another study suggests that companies need to take these five steps to improve their infosec culture:
- Pre-evaluation: Analyze existing infosec policies and determine how aware employees are of such policies and infosec as a whole.
- Strategic planning: Set clear metrics and targets when creating an infosec awareness program.
- Operative planning: Involve managers so that security awareness and training programs become a regular part of their responsibilities. They must strategize with IT experts so that infosec becomes integral to the company’s culture.
- Implementation: The steps laid out during the prior stages are executed. Actual performance metrics are recorded during this stage.
- Post-implementation evaluation: Actual metrics are compared against expected results or targets to see if the organization is on track and where they must improve their efforts. Henceforth, the process of evaluation, planning, and implementation becomes cyclical.
What we’ve shown you so far is just the tip of the cybersecurity iceberg, which is why countless organizations in Salt Lake City and beyond rely on NetWize for their infosec needs. Let our IT specialists take care of your company, too. Request a FREE consultation today or call us at 801-747-3200.
Network encryption operates at Layer 2 or Layer 3 of the OSI reference model. It ensures the confidentiality and integrity of transmitted data, as well as the authenticity of communication partners, and is transparent to higher-layer protocols and applications.
In today’s digital age, communication over the internet is becoming increasingly prevalent. However, this ease of communication also comes with potential security risks such as data breaches, hacking, and unauthorized access to sensitive information. Network encryption is a vital tool that helps protect against these risks.
In this blog post, we will explore what network encryption is, how it works, and why it’s essential in ensuring secure communication over the internet. Whether you are a business owner, an IT professional, or simply interested in learning more about cybersecurity, this post will provide valuable insights into the world of network encryption.
- What is Network Encryption?
- History of Network encryption
- Network Encryption: Pros & Cons
- Network Encryption Devices
- Network Encryption Protocols
- Goals of network encryption
- Layer 2 encryption
- Network encryption in a WLAN
- Layer 3 encryption
- FAQs about Network Encryption
- What is network encryption?
- Why is network encryption important?
- How does network encryption work?
- What are the different types of network encryption?
- What devices are used for network encryption?
- Can network encryption be hacked?
- What are the benefits of network encryption?
- What are the downsides of network encryption?
- How do I know if my network traffic is encrypted?
- How do I implement network encryption?
What is Network Encryption?
Network encryption is the process of encoding data sent between two devices over a network, such as the internet, to make it unreadable by unauthorized users. The goal of network encryption is to ensure that sensitive information, such as passwords, financial data, and personal information, remains secure and protected from potential cyber threats.
Encryption works by using a mathematical algorithm to scramble the data, making it unintelligible to anyone who does not have the decryption key. The encrypted data is transmitted over the network to the intended recipient, who can then use the decryption key to decode and read the information.
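As a concrete illustration, the following sketch uses the third-party Python `cryptography` package for symmetric (shared-key) encryption; the message content is made up:

```python
# Minimal symmetric-encryption sketch (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # the shared secret; anyone holding it can decrypt
cipher = Fernet(key)

token = cipher.encrypt(b"account=12345; balance=9870.00")
print(token)                  # scrambled bytes, unintelligible without the key
print(cipher.decrypt(token))  # the recipient recovers the original data
```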
There are several different encryption methods available, including symmetric key encryption, public key encryption, and hybrid encryption. Each method has its own strengths and weaknesses, and the choice of encryption method will depend on the specific needs and requirements of the user.
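A brief sketch of the hybrid pattern, again assuming the `cryptography` package: a fast symmetric key encrypts the bulk data, and the recipient’s public key wraps that symmetric key so only the holder of the private key can recover it:

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Recipient: publish the public key, keep the private key secret.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: symmetric key for the bulk data, public key to wrap that key.
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"a large payload ...")
wrapped_key = public_key.encrypt(session_key, oaep)

# Recipient: unwrap the session key, then decrypt the bulk data.
recovered_key = private_key.decrypt(wrapped_key, oaep)
print(Fernet(recovered_key).decrypt(ciphertext))
```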
In summary, network encryption is a critical component of modern cybersecurity, helping to protect against data breaches and cyber attacks by securing the transmission of sensitive information over the internet.
History of Network encryption
The history of network encryption can be traced back to ancient civilizations, where encryption was used to protect confidential information during times of war. For example, the Spartans used a cryptographic device called a scytale to encrypt messages sent between commanders. The scytale consisted of a rod with a strip of leather wrapped around it, and the message was written on the leather. The leather strip was then unwrapped, and the message could only be read if it was wrapped around a rod of the same diameter.
During World War II, encryption played a critical role in the war effort. The Enigma machine, developed by the Germans, was used to encrypt and decrypt messages sent between military commanders. The Allies were eventually able to crack the Enigma code, which gave them a significant advantage in the war.
In the 1970s, public key encryption was developed independently by Whitfield Diffie and Martin Hellman, and by Ralph Merkle. Public key encryption allowed for secure communication over a public channel without the need for a shared secret key, which was a significant breakthrough in the field of encryption.
As the internet became more widespread in the 1990s, the need for network encryption grew, and various encryption protocols were developed to secure communication over the internet. The Secure Sockets Layer (SSL) protocol was developed by Netscape in 1994 and later became the basis for the Transport Layer Security (TLS) protocol, which is widely used today to secure online transactions and communication.
In recent years, the development of quantum computing has posed a new challenge for encryption, as it has the potential to break many existing encryption methods. Researchers are currently working on developing new encryption techniques that can withstand quantum attacks.
Network Encryption: Pros & Cons
Network encryption is an essential tool for securing sensitive information transmitted over the internet. Here are some of the key pros and cons of network encryption:
- Increased security: Encryption makes it much harder for hackers or unauthorized users to intercept and access sensitive information.
- Protects privacy: Encryption ensures that only authorized users can access and read sensitive information.
- Builds trust: The use of encryption can help build trust between parties that are communicating sensitive information, such as customers and businesses.
- Regulatory compliance: Many industries, such as healthcare and finance, are required to comply with regulatory standards for data security, and encryption can help meet those requirements.
- Can slow down network performance: Encryption can require additional processing power and increase the transmitted data’s size, which can slow down network performance.
- Key management: Encryption requires careful management of encryption keys, which can be a complex and time-consuming task, particularly in large organizations.
- Vulnerable to attacks: While encryption can provide strong protection, it is not foolproof, and there have been instances of encryption being compromised through attacks such as side-channel attacks or brute force attacks.
- Limited protection against some types of attacks: Encryption is effective against attacks that involve interception of data, but it does not provide protection against attacks that involve social engineering, phishing, or malware.
In summary, while there are some downsides to network encryption, the benefits of increased security and protection of sensitive information generally outweigh the potential drawbacks. Careful implementation and management of encryption protocols can help ensure that organizations and individuals can communicate securely over the internet.
Network Encryption Devices
There are various types of devices used for network encryption, each designed to provide different levels of security and encryption methods. Here are some of the most commonly used network encryption devices:
- VPN (Virtual Private Network) devices: VPN devices create a secure connection between two networks, such as a remote worker’s computer and a company’s network. VPNs use encryption to protect the data transmitted over the connection.
- Firewalls: Firewalls are security devices that control incoming and outgoing network traffic based on predefined security rules. Firewalls can also perform encryption and decryption of data to enhance security.
- SSL/TLS accelerators: SSL/TLS accelerators are hardware devices that offload the processing required for SSL/TLS encryption and decryption from servers, freeing up server resources to handle other tasks.
- Hardware Security Modules (HSMs): HSMs are specialized devices designed to securely store and manage encryption keys. They can be used to generate, store, and protect cryptographic keys used in network encryption.
- Network encryption appliances: Network encryption appliances are hardware devices that provide encryption for network traffic, typically using IPSec or SSL/TLS protocols. These appliances can encrypt data in transit between networks or endpoints, such as servers and clients.
- Routers with encryption capabilities: Many routers today have built-in encryption capabilities, such as IPSec or SSL/TLS, allowing for secure internet communication.
There are various devices used for network encryption, each with different capabilities and encryption methods. The choice of device will depend on the specific needs and requirements of the user, such as the level of security required and the type of data being transmitted.
Network Encryption Protocols
Network encryption protocols are used to secure data transmitted over the internet by encrypting the data and providing a secure channel for the transmission. Here are some of the most commonly used network encryption protocols:
- Transport Layer Security (TLS): TLS is the successor to the Secure Sockets Layer (SSL) protocol and is used to provide secure communication over the internet. TLS provides authentication, data integrity, and encryption of data transmitted over the network.
- Internet Protocol Security (IPsec): IPsec is a suite of protocols used to secure IP communication over the internet. IPsec provides encryption, authentication, and data integrity, and can be used to secure communication between two hosts or between a host and a network.
- Secure Shell (SSH): SSH is a protocol used to provide secure remote access to a computer or server. SSH provides encryption and authentication of data transmitted between the client and the server.
- Pretty Good Privacy (PGP): PGP is a protocol used for secure email communication. PGP provides encryption and authentication of email messages, ensuring that only the intended recipient can read the message.
- S/MIME: S/MIME is a protocol used for securing email communication, similar to PGP. S/MIME provides encryption and authentication of email messages, as well as digital signatures for message integrity.
- Datagram Transport Layer Security (DTLS): DTLS is a protocol used for securing real-time communication, such as voice and video, over the internet. DTLS provides encryption, authentication, and data integrity for the communication.
There are various network encryption protocols used to secure communication over the internet, each with different capabilities and strengths. The choice of protocol will depend on the specific needs and requirements of the user, such as the type of data being transmitted and the level of security required.
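As a hands-on illustration of TLS, the most widely deployed of these protocols, the following Python standard-library sketch opens an encrypted connection and inspects the negotiated protocol version, cipher suite, and server certificate. The hostname is a placeholder:

```python
import socket
import ssl

hostname = "example.com"  # placeholder host
context = ssl.create_default_context()  # verifies certificates by default

with socket.create_connection((hostname, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=hostname) as tls_sock:
        print(tls_sock.version())       # e.g. 'TLSv1.3'
        print(tls_sock.cipher())        # negotiated cipher suite
        cert = tls_sock.getpeercert()
        print(cert["subject"])          # identity the server proved
```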
Goals of network encryption
Network encryption initially pursues the usual goals of encryption. These are to ensure the:
- Confidentiality of the transmitted data
- Integrity of the transmitted data
- Authenticity of sender and receiver
In addition, network-level encryption provides other benefits. Because it is transparent to higher-level protocols, it can be combined with any application; the encryption goes unnoticed by the applications themselves.
Network encryption ensures that all data is encrypted within the transport networks used, even if higher level protocols do not use encryption. Combining this with other application-level encryption increases the level of protection for transmitted data.
Layer 2 encryption
Layer 2 encryption works on the data link layer of the network. It secures the transmission section by section on a Layer 2 link between sender and receiver. Depending on the layer 2 transmission protocol used, different encryption methods can be used.
They can be used together with transmission protocols such as Ethernet, MPLS, Frame Relay, PPP, Wireless LAN, SDH, SONET or ATM. With Ethernet encryption, communication is encrypted at the MAC level between the switches or between the switches and the end devices.
One advantage of Layer 2 encryption, such as Ethernet encryption, is better performance compared with encryption at higher layers, since most of the encryption and decryption is handled by hardware modules.
Network encryption in a WLAN
Network encryption in a WLAN is also implemented at Layer 2. WPA2 encryption (Wi-Fi Protected Access 2) is the most common. It is based on the Advanced Encryption Standard (AES) and ensures that all wirelessly transmitted user data is encrypted. Only participants who can authenticate themselves to the WLAN and know the network key are allowed to use the wireless network for communication.
Layer 3 encryption
Layer 3 encryption operates at the network layer. It secures the connection end-to-end from the sender to the receiver through the entire network, not just on a link section as Layer 2 encryption does. In the TCP/IP environment, IPSec (Internet Protocol Security) is often used for Layer 3 encryption. In the four-layer TCP/IP reference model, IPSec sits on the internet layer (the second layer from the bottom, which corresponds to OSI Layer 3). A frequent application of IPSec is the implementation of virtual private networks (VPNs).
FAQs about Network Encryption
What is network encryption?
Network encryption is the process of securing data transmitted over a network by encrypting it, making it unreadable to unauthorized users.
Why is network encryption important?
Network encryption is important because it provides a secure channel for transmitting sensitive information, such as financial data, personal information, or confidential business information.
How does network encryption work?
Network encryption works by using algorithms to scramble data in such a way that it can only be unscrambled by authorized recipients who have the key to decrypt the data.
What are the different types of network encryption?
There are various types of network encryption, including SSL/TLS, IPsec, SSH, PGP, S/MIME, and DTLS.
What devices are used for network encryption?
Devices used for network encryption include VPN devices, firewalls, SSL/TLS accelerators, hardware security modules (HSMs), network encryption appliances, and routers with encryption capabilities.
Can network encryption be hacked?
While network encryption can provide strong protection, it is not foolproof, and there have been instances of encryption being compromised through attacks such as side-channel attacks or brute force attacks.
What are the benefits of network encryption?
The benefits of network encryption include increased security, protection of privacy, building trust between parties, and compliance with regulatory standards for data security.
What are the downsides of network encryption?
The downsides of network encryption include the potential for slower network performance, complex key management, vulnerability to attacks, and limited protection against some types of attacks.
How do I know if my network traffic is encrypted?
You can check if your network traffic is encrypted by looking for the “https” in the URL or the padlock icon in the browser address bar, or by using network monitoring tools that can detect encrypted traffic.
How do I implement network encryption?
Implementing network encryption involves selecting the appropriate encryption protocol, choosing the right encryption devices, configuring the devices and protocols, and managing the encryption keys. It is best to consult with a cybersecurity professional for proper implementation.
In conclusion, network encryption is a critical tool for securing data transmitted over a network, protecting sensitive information from unauthorized access. There are several types of network encryption protocols, including SSL/TLS, IPsec, SSH, PGP, S/MIME, and DTLS. Various devices, including VPNs, firewalls, SSL/TLS accelerators, hardware security modules (HSMs), network encryption appliances, and routers, can be used for network encryption.
While network encryption provides several benefits, such as increased security and protection of privacy, it also has potential downsides, such as slower network performance and vulnerability to attacks.
To implement network encryption, it is important to select the appropriate encryption protocol, choose the right encryption devices, configure the devices and protocols, and manage the encryption keys. It is recommended to consult with a cybersecurity professional to ensure proper implementation and maximize the benefits of network encryption while minimizing the risks.
Network encryption is a critical component of any comprehensive cybersecurity strategy, helping to ensure the confidentiality, integrity, and availability of sensitive data transmitted over a network.
Information Security Asia is the go-to website for the latest cybersecurity and tech news in various sectors. Our expert writers provide insights and analysis that you can trust, so you can stay ahead of the curve and protect your business. Whether you are a small business, an enterprise or even a government agency, we have the latest updates and advice for all aspects of cybersecurity.
An Intro to Data Mining Part 2: Analyzing the Tools and Techniques
The conclusion of this two-part series looks at the tools and techniques used in data mining and the issues surrounding implementations.
Data mining is one of the hottest topics in information technology. The first part (April 2000, Enterprise Systems Journal, page 32) of this two-part series focused on: What data mining is (and isn’t), why it is important, and how it can be used to provide increased understanding of critical data relationships in rapidly expanding corporate data warehouses. This second part looks at the tools and techniques used in data mining, and the issues surrounding implementations.
Data mining applications can be described in terms of a three-level architecture: applications, approaches, and algorithms and models. These three layers are discussed in the following subsections; the characteristics of the data repository are addressed under "Implementation Issues."
The application level maps the domain-specific attributes into the application space. An appropriate approach is selected, consistent with the application’s goals. Finally, one or more models or algorithms are selected to implement the selected approaches.
Data mining applications group by class into sets of problems that have similar characteristics across different application domains. Table 1 lists sample applications by class for each of the traditionally IS-intense industries. While the fundamental problem characteristics are similar across the application domains, the parameterization of the applications is distinct from industry to industry, and from application to application. Hence, the same approaches and underlying models used to develop a fraud detection capability for a banking enterprise can be used to develop medical insurance fraud applications. The difference lies in how the approaches and models are parameterized (i.e., which of the domain-specific attributes in the data repository are used in the analysis, and how they’re used).
Table 1: Some Representative Applications of Data Mining Technology by Industry
| Industry | Applications |
|---|---|
| Retail | Profile-based ("targeted") marketing |
| Financial | Credit risk analysis; customer retention; targeted marketing; portfolio analysis/risk assessment |
| Telecommunications | Fraud detection; customer retention; targeted marketing |
| Transportation | Customer retention; targeted marketing |
| Healthcare & Insurance | Fraud detection; best practices; customized care |
Each data mining application class is supported by a set of algorithmic approaches used to extract the relevant relationships in the data. These approaches differ in the classes of problems they are able to solve.
Association. Association approaches address a class of problems typified by market basket analysis. Classic market basket analysis treats the purchase of a number of items as a single transaction. The desire is to find sets of items that are frequently purchased together, in order to understand and exploit natural buying patterns. This information can be used to adjust inventories, modify floor or shelf layouts, or introduce targeted promotional activities to increase sales or move specific products.
These approaches had their origins in the retail industry, where they were used to analyze the purchase of goods. The techniques are more general and can be applied equally well to the purchase of services, to develop targeted marketing campaigns or determine common (or uncommon) practices. In the financial sector, these approaches can be used to analyze customers’ account portfolios and identify sets of financial services that people often purchase together. This might be used, for example, to create a service "bundle" as part of a promotional sales campaign.
Association-based approaches often express the resultant item affinities in terms of confidence-rated rules, such as "80 percent of all transactions in which beer was purchased also included potato chips." Confidence thresholds can typically be set to eliminate all but the most common trends. The results of the association analysis (i.e., the attributes involved in the rules themselves) may trigger additional inquiries.
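As a toy illustration, the following Python sketch counts item pairs across a handful of made-up baskets and prints the confidence-rated rules that clear a threshold:

```python
from collections import Counter
from itertools import combinations

transactions = [
    {"beer", "chips", "salsa"},
    {"beer", "chips"},
    {"beer", "diapers"},
    {"chips", "salsa"},
    {"beer", "chips", "diapers"},
]

item_counts, pair_counts = Counter(), Counter()
for basket in transactions:
    item_counts.update(basket)
    pair_counts.update(combinations(sorted(basket), 2))

n = len(transactions)
for (a, b), count in pair_counts.items():
    support = count / n                  # how often a and b co-occur overall
    confidence = count / item_counts[a]  # P(b is present | a is present)
    if confidence >= 0.6:                # threshold prunes weak rules
        print(f"{a} -> {b}: support={support:.2f}, confidence={confidence:.2f}")
```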
Sequence-based Analysis. Traditional market basket analysis deals with a collection of items as part of a point-in-time transaction. A variant of this problem occurs when there is additional information to tie together a sequence of purchases (e.g., an account number, a debit or credit card, or a frequent buyer/flyer number) in a time series. In this situation, it may not only be the coexistence of items within a transaction that is important, but also the order in which those items appear across ordered transactions and the amount of time between transactions.
In healthcare, such methods can be used to identify routine and exceptional courses of treatment by identifying the common and uncommon succession of multiple procedures over time.
Clustering. Clustering approaches address segmentation problems. These approaches support the assignment of records with a large number of attributes into a relatively small set of groups or "segments." This assignment process is performed automatically by clustering algorithms that identify the distinguishing characteristics of the dataset, then partition the n-dimensional space defined by the dataset attributes along natural cleaving boundaries. There is no need for the user to identify, a priori, the groupings desired or the attributes that should be used to segment the dataset.
Clustering is often one of the first steps undertaken in data mining analyses. It identifies groups of closely related records that can be used as a starting point for exploring further relationships of interest. This technique is used to support the development of population segmentation models, such as demographic-based customer segmentation. Additional analyses can be used to determine the characteristics of these segments with respect to some desired outcome.
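A minimal segmentation sketch, assuming scikit-learn is installed; the customer records and attributes are synthetic:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical records: [age, annual_spend_in_thousands]
customers = np.array([
    [22, 3], [25, 4], [27, 3],     # young, low spend
    [45, 30], [48, 28], [52, 35],  # middle-aged, high spend
    [70, 8], [68, 10], [72, 7],    # older, moderate spend
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_)           # segment assignment for each record
print(kmeans.cluster_centers_)  # the distinguishing characteristics found
```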
Classification. Classification is the most commonly applied data mining technique. It employs a set of classified examples (a training set) to develop a model that can be used as a classifier over the population of records at large. Fraud detection and credit risk applications are examples of the types of problems well suited to this type of analysis. The use of classification algorithms begins with a training set of preclassified example transactions. For a fraud detection application, this would include complete records of activities, classified as either valid or fraudulent on a case-by-case basis. The classifier training algorithm uses these preclassified examples to determine the set of parameters required for proper discrimination, encoding these parameters into a model called a classifier. The classifier is then tested to determine the quality of the model.
Once an effective classifier has been developed, it is then used in a predictive mode to classify new records into these same predefined classes. Classifiers can be developed using one of a number of algorithms that fall into two categories: decision trees and neural networks. The selection of a particular approach has implications for a number of factors relating to the training and use of the classifier, including the number of examples required for training, susceptibility to "noisy" data, and the explanation capability of the system.
For decision tree-based classifiers, the parameters that define the classification model are the set of attributes that comprise the input vectors, and hence are in the same vocabulary as the domain attributes. With syntactic transformations, they can be put into sets of "if-then" rules that allow for "explanation" of the classifier’s reasoning in human terms.
For neural network-based classifiers, model parameters are a set of weighting functions on the network layer interconnections – a purely mathematical formulation, which is opaque, when it comes to explaining the basis for classification. In some domains, the explanation capability can be critical to the acceptance of the classifier as a useful tool; in other domains, the classifier’s success rate and effect on the business is the only metric of success.
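The following sketch, again assuming scikit-learn and using synthetic pre-classified transactions, trains a small decision-tree classifier, renders its reasoning as human-readable if-then rules, and applies it in predictive mode:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Training set: [amount, hour_of_day, foreign_ip] with known outcomes.
X = [[20, 14, 0], [35, 10, 0], [900, 3, 1],
     [15, 12, 0], [750, 2, 1], [40, 16, 0]]
y = ["valid", "valid", "fraud", "valid", "fraud", "valid"]

clf = DecisionTreeClassifier(max_depth=2).fit(X, y)

# Decision-tree parameters live in the domain vocabulary, so the model
# can be rendered as if-then rules for explanation:
print(export_text(clf, feature_names=["amount", "hour", "foreign_ip"]))

# Predictive mode: classify a new, unseen transaction.
print(clf.predict([[820, 4, 1]]))  # -> ['fraud']
```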
Estimation. A variation on the classification problem involves the generation of metrics, often referred to as scores, along various dimensions in the data. Rather than employing a binary classifier to determine whether a loan applicant is a "good" or "bad" risk, this approach would generate a credit worthiness "score" based on a pre-scored training set. The method differs from algorithmic scoring approaches in that scoring functions are derived from attributes that occur in the data but may not directly support computation according to an arithmetic formula.
Other Techniques. The techniques previously described are those predominantly used in today’s data mining tools. Other approaches include case-based reasoning, fuzzy logic, genetic algorithms, and fractal-based transforms.
Algorithms and Models
Within each of these general approaches, there are a number of specific algorithms or models that may be applied. Each has its own strengths and weaknesses regarding the problem characteristics best addressed, discrimination capabilities, performance and training requirements. For example, different classification solutions are available for 2-class and N-class problems, and a variety of algorithms (both statistical and neural network-based) may be employed. The algorithms are often tunable using a variety of parameters aimed at providing the right balance of fidelity and performance.
Implementation issues. Having introduced a number of the approaches and algorithms employed in data mining applications, we can now turn to the many considerations associated with implementing them.
The development of large data warehouses is driving the need for automated data mining approaches. The data warehouse is the typical repository for which today’s data mining applications are developed, often implemented using large parallel relational database engines on open systems platforms (large symmetric multiprocessors [SMPs], SMP clusters, and massively parallel processors [MPPs]). Today’s data warehouses range from a few gigabytes to a few terabytes and are spread across many relational tables, with the largest tables in excess of 100 GB. These systems are typically configured with sufficient concurrent I/O capability to ensure very strong relational scan performance.
Today’s data mining tools have evolved from pattern recognition and artificial intelligence research efforts of government- and industry-funded research laboratories. These tools have a heavy algorithmic component, and are often rather "bare" with respect to user interfaces, execution control and model parameterization. They typically ingest and generate UNIX flat files (both control and data files), and are implemented using a single-threaded computational model.
This state of affairs presents a number of challenges to users, which can be summed up as a "tools gap." The gap is caused by a number of factors and requires significant pre- and post-processing to get the most out of a data mining application. Preprocessing, or conditioning activities, include selection of appropriate data subsets for performance and consistency reasons, and sometimes complex data transformations to bridge the representational gap. Post-processing often involves subselection of voluminous results and the application of visualization techniques to provide added understanding. These activities are critical to effectively address key implementation issues on multiple levels.
Users, data mining tools, and SQL-based relational databases each "speak their own language" when it comes to describing the fundamental aspects of data mining applications.
Some of the more common implementation issues include results interpretation, data selection and representation, and system implementation and scalability considerations, as described below.
Inability to "explain" results in human terms. Many of the tools employed in data mining analysis use complex algorithms that operate in an N-dimensional space very different from the original data’s attribute space. The ability of these systems to "explain" their results in human terms is minimal. Even with approaches that are capable of analyzing the underlying attributes, the volume and format may be unusable without additional post-processing.
Data Selection and Representation
Ensuring appropriate attribute "coverage." As with all analytical techniques, meaningful results are obtained only if one has sufficient data to answer the question posed. The attraction of a data warehouse as a source for data mining applications is that it includes information from a wide variety of sources. Despite the theoretical ability of data mining tools to automatically ferret out relevant attributes, the task of determining which source attributes to analyze remains a practical implementation issue. This may require iterative approaches with different data subsets.
Ensuring adequate training sets. For algorithms requiring training sets (classification problems), it is critical these records adequately cover the population at large, lest they introduce a misleading bias into the development of the classifier. This may lead to iterative approaches as users strive to find reasonably sized training sets that ensure adequate population coverage.
Data representation and transformation decisions. Most of the source data for today’s data mining applications reside in large, parallel relational database systems. The information is somewhat normalized so the attributes being used in a data mining application may span multiple tables. Normalization is a process in database design in which attributes, such as customer address, account balance, and other account information, are separated into multiple tables for a variety of implementation considerations. These considerations include efficiency of access to relevant attributes, minimal replication of data, and appropriate concurrency control during update. Almost universally, the data mining tools tend to look at long vectors of attributes that have been "put back together again," or denormalized.
The data mining engines operate over a set of attribute "vectors" presented through a UNIX flat file. Conditioning code must be used to provide the denormalized representation the tools need. This may require multiple passes over the input data if complex aggregations are required, or if the database engine does not support an OUTER JOIN operation. (Traditional join operations return result rows only if there are rows matching the join predicate in all contributing tables. An outer join generates a result row if any of the contributing tables have an entry, and a zero or a blank space fills the remaining columns.)
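A small pandas sketch of this denormalization step, using hypothetical tables; note how the outer join preserves a customer with no matching account row instead of dropping it:

```python
import pandas as pd

customers = pd.DataFrame({"cust_id": [1, 2, 3],
                          "region": ["west", "east", "south"]})
accounts = pd.DataFrame({"cust_id": [1, 2],
                         "balance": [5400.0, 120.0]})

# Outer join: customer 3 survives with a missing (NaN) balance, where a
# traditional inner join would silently drop the row.
flat = customers.merge(accounts, on="cust_id", how="outer")
print(flat)
```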
Many of the tools are constrained by the types of data elements with which they can work. Users may have to make continuous variables discrete, or remap categorical variables in order to satisfy the input constraints of a particular data mining tool. Because most data mining tools operate only on time invariant attributes, analysts often have to transform time series data into a parameterized representation that can be inserted into the analysis "vector."
It is impossible to know a priori which parameterizations retain the critical information, so the utility of a particular parameterization must be examined in context. Additionally, the encoding of time series information often requires transformations not easily described in SQL. Generation of the analysis vectors may be more complex as well.
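To illustrate both kinds of conditioning, here is a short pandas sketch, with made-up values, that discretizes a continuous attribute into categorical bands and collapses a time series into fixed summary features for the analysis vector:

```python
import pandas as pd

# Make a continuous variable discrete:
balances = pd.Series([120.0, 5400.0, 87000.0, 240.0])
bands = pd.cut(balances, bins=[0, 1_000, 10_000, 100_000],
               labels=["low", "medium", "high"])
print(bands.tolist())  # ['low', 'medium', 'high', 'low']

# Parameterize a monthly-spend time series as time-invariant features:
spend = pd.Series([210, 230, 260, 290, 320, 400])
features = {"mean": spend.mean(),
            "max": spend.max(),
            "trend": spend.diff().mean()}  # average month-over-month change
print(features)
```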
Susceptibility to "dirty" data. Data mining tools have no higher level model of the data on which they operate and no application-oriented (semantic) structure. As such, they simply take everything they are given as factual. Users must ensure that the data being analyzed is "clean." This may require significant analysis of the attribute values being supplied to the discovery tools.
Exploiting parallel computing resources. Many of today’s data warehouses and datamarts are implemented on top of parallel relational database systems, storing data across numerous disks and accessing it through multiple CPUs. These engines perform well when doing selection, projection and aggregation operations within the database. However, current database architectures are such that result sets generated by the database engine are eventually routed through a single query coordinator process. The single coordinator process can be a significant bottleneck in making efficient use of parallel database resources. Even if one is able to generate multiple partitioned parallel output streams from the database (which can be done through "parallel client" implementations), the data mining applications themselves are typically single threaded implementations operating off of a UNIX flat file, and are unable to process partitioned datasets. Even if you are able to extract large datasets from the database, processing them can be a computationally intense operation. Although most data mining tools are intended to operate against data from a parallel database system, many have not yet been parallelized themselves.
Even though relational database systems have parallelized "engines," query results are almost universally routed through a single coordinator process working on behalf of the user’s session. This makes extraction of large datasets from the database inefficient.
This performance issue is most frequently mitigated by aggressively sampling the input dataset. The sampling required for performance reasons must be considered for potential impact on the quality of the results and an appropriate balance found. Once again, this may introduce iterative investigations into the overall analysis effort.
The 2 GB file limit. Although this is becoming less important with the advance of 64-bit operating systems, many UNIX implementations still limit the size of a single file to 2 GB. For flat file-based data mining tools, this effectively limits the size of the datasets that can be operated upon, making sampling a necessity for analysis, training and testing.
Changing business environments and evolving information technology landscapes are driving the need for more capable data analysis tools. Data mining promises increased understanding of large data repositories via automated, computer-based discovery techniques. However, effective implementation and use of the tools still requires significant expertise in extracting, manipulating and analyzing data from large data warehouses. Nevertheless, these tools are currently producing significant strategic and tactical advantages for businesses in highly competitive environments. As tools continue to mature, advances in server-side connectivity, the development of business-based analysis models, user interface improvements, and a larger population of analysts familiar with these tools will bring data mining into the mainstream of corporate information technology.
About the Author: Mitch Haskett is with the Business Intelligence/Data Warehouse Practice for Keane Inc.
Hackers do not discriminate between big and small enterprises, which is one reason cybercrime is expanding so rapidly. The rise of data breaches, ransomware attacks, and cyberterrorism incidents is unprecedented. Recent publications of high-profile attacks are a testament that adversaries are unrelenting in their malicious intentions. For example, malware variants such as ZCryptor, Petya, and WannaCry have caused untold reputational and financial damage to organizations all over the world.
As cybercriminals leverage emerging technologies to advance their malicious campaigns, companies are increasingly exposed to cybersecurity threats. Moreover, digital innovations are being applied in critical sectors on a large scale. In turn, hackers have exploited digital technologies’ opportunities to gain high payoffs from the proceeds of cybercrime. The rapid expansion of cybercrime requires organizations to implement stringent precautions to eradicate vulnerabilities that can cause attacks. Various reasons have led to the rapid increase of global cybercrime.
Common Types of Cybercrime
Cybercrime consists of all activities that use or target a networked device, computer network, or any I.T. infrastructure. Cybercriminals use computer technologies to commit illegal actions, such as stealing user identities, violating personal privacy, or trafficking in intellectual property and child pornography. They exploit security weaknesses in digital systems to attack information assets via the Internet.
The following are some of the most popular types of cybercrime:
1. Identity theft
Identity theft is a scam practice where criminals use the identification credentials of another person for malicious reasons. For example, hackers may gain unauthorized access to a person’s banking account or credit card information and use it to steal funds or make purchases using the owner’s identity.
Although the identity theft concept has been around even before the Internet advancements, the increased use of digital information makes it easier for adversaries to steal a victim’s identity. Identity theft crimes are prevalent in various online deals and often come in forms like ad pop-ups, spam emails, and phishing attacks.
2. Phishing scams
Cybercriminals use phishing attacks to trick victims into revealing sensitive information, such as passwords, bank account information, social security number, and other personal information types. Phishing scams have proved to be highly effective since criminals require minimal resources to execute the attacks.
Hackers can create a phishing website, which mimics a real website to trick users into providing sensitive information. Criminals may also send email messages in bulk containing links to malicious websites or attachments, hoping that users will click them.
3. Malware attacks
Malicious cyber actors use malware attacks to infect a computer network or system with viruses, trojans, ransomware, and spyware. Malware is any program developed to harm a computer. A malware infection can enable cybercriminals to compromise an organization and steal highly confidential information, such as intellectual property and competition strategies.
One of the most popular types of malware is ransomware. This attack enables cybercriminals to lock a victim’s computer systems and provide a decryption key only after the victim pays a ransom. An example of a ransomware attack is the global WannaCry attack, in which cybercriminals infected thousands of computer systems across the world.
4. Distributed Denial of Service (DDoS) attacks
Cyber adversaries use DDoS attacks to take down organizational networks and computer systems. Hackers target a company with an overwhelming amount of network traffic to prevent authorized users from accessing or using the network resources. DDoS attacks overwhelm a computer system by abusing standard communication protocols to spam it with numerous connection requests.
Cybercriminals often deploy the strategy in cyber-extortion schemes, threatening a DDoS attack unless they are paid a certain amount of money. Malicious actors may also use DDoS tactics as a distraction while they commit other types of cybercrimes. A recent example is the 2017 DDoS attack that impacted the U.K. National Lottery Website. The unavailability of the lottery’s mobile application and website prevented online users from playing.
Recent Cybercrime Statistics
There has been an unprecedented increase in cybercrime threats in recent years. Despite this, many people and organizations fail to take cybersecurity seriously, with individuals using common credentials to secure their accounts and devices while others use devices with inadequate security.
The following cybercrime statistics indicate the severity of the cybercrime threat:
1. There is an attack every 39 seconds: A University of Maryland study revealed a computer attack occurs every 39 seconds. The adversarial incident could be in the form of a phishing attack, malware attack, or direct hacking.
Screenshot: A live threat map showing more than 27 million attacks have occurred in a single day (Source: Check Point Software Technologies)
2. 78% of U.S. organizations have been victims of attacks: Most hackers target companies that process personal or financial information due to monetary gains. Financial motivation is among the reasons why cybercrime is expanding rapidly. Cybercriminals usually go for small- and medium-sized enterprises. They often lack the resources to implement robust cybersecurity measures. Such businesses are the majority and, therefore, form the majority of the victims.
3. There has been a 54% increase in mobile malware variants: The increase of mobile malware indicates how cybercriminals have continually enhanced their attack techniques. An increase in the usage of mobile and IoT technologies has seen malicious adversaries develop newer sophisticated malware variants.
4. 63% of businesses have been victims of data breaches: A Dell survey found that the data of 63% of companies were compromised due to a software or hardware-level security breach. The same survey indicated that only 28% of organizations are satisfied with vendor-implemented security measures.
5. There was a 14% increase in unique malware programs: According to Kaspersky, its web antivirus solution detected 24,610,126 unique malware programs in 2019, a 14% increase from 2018. The sharp rise of malware advancements subjected almost 20% of internet users to various malware attacks.
Why is cybercrime expanding rapidly? The 6 reasons
1. An unprecedented rise of cyber-stuff
The prefix cyber has become common in virtually all crimes involving digital technologies. We have become accustomed to words like cyberwar, cybercriminals, and cybercrime. Therefore, it is vital to stop perceiving cyber-related attacks as sophisticated concepts and instead think of them as crimes hackers commit through easy tactics.
Today, it is much easier to steal personal information or compromise the security of a company remotely. Numerous automation tools with A.I. and ML capabilities have advanced, enabling criminals to commit cybercrimes without the need for high skills or technical expertise. The tools are readily available on the dark web for a small amount of money. Anyone with trivial technical inkling can easily find and use them. As a result, there have been higher levels of cybercrime compared to yesteryear.
2. The Internet architecture
The Internet infrastructure’s original architects focused more on durability and stability and gave little thought to security. They were not security-conscious when designing and building network infrastructure. Besides, the architects never thought that the Internet would provide a platform for transmitting millions of dollars or information worth a lot more than it is today.
As the Internet advanced to become more of a social and commercial space than for academic purposes, measures to make it more secure continue to be developed. Nevertheless, most of the underlying design depends on insecure transportation methods that can be hijacked with ease.
Cybercriminals have continued to exploit the security shortcomings to carry on their malicious campaigns. The Internet has also become central to most vital processes, including controlling critical assets and infrastructures. Hackers continue to capitalize on the Internet’s insecurity to ramp up attacks, resulting in the continued rise of cybercrime.
3. The role of hackers in information security
Most people today are paid to be professional hackers, professionally known as security researchers or ethical hackers. Their roles include enumerating security vulnerabilities in information systems and creating tools for demonstrating and detecting the flaws. The researchers then release the tools to the general public, most of which end up in malicious individuals’ hands.
Many cyber criminals use legitimate hacking tools to compromise systems and steal sensitive information. Also, other black hat hackers develop similar tools to facilitate the expansion of cybercrime activities. Since hackers have become more experienced and continuously gain access to newer technologies, there has been an explosion of hacking tools. Therefore, the cyberspace and information security field has become a race between the adoption of protective technologies and advancements of hacking tools and processes. The result is a rising wave of cybercrime.
4. Companies are slower in adopting strong security.
The reality of the current cybercrime landscape is that most companies don’t deem it profitable to overhaul their security systems unless the need arises. Profit-minded organizations usually hold off redesigning their security systems until they suffer an attack or their customers demand better security. A prime example is Facebook, which failed to implement secure sessions until the account of its CEO, Mark Zuckerberg, was hacked. Facebook only took user security seriously once the company deemed it a personal problem.
Many other companies have the same security approach. Some may be aware that their systems or networks are insecure or vulnerable but fail to remedy the issues in time. Furthermore, most private and public entities have poor security practices, which further contributes to the continued rise of cybercrime.
5. Targeting people
For the longest time, humans have been the weakest link in the cybersecurity chain. Many computer users and company employees are untrained on the best security practices and secure system usage. While numerous users focus on security and software tools to detect and eliminate malware, cybercriminals have channeled their efforts on humans.
Most of the successful attacks begin by tricking unsuspecting victims into clicking on malware-laden attachments and websites. Cyber adversaries are adept at exploiting human trust through social engineering methods and other similar scams. Tricking users to volunteer information, such as passwords, banking details, healthcare information, and personal data, has caused cybercrime to rise significantly.
6. Internet of Things (IoT) proliferation
The current global IoT market is valued at $82.4 billion and is estimated to register a compounded annual growth rate of 21.3% between 2020 and 2028. IoT comprises devices that can connect to the internet. Each IoT device represents an attack surface, and the high usage of IoT systems has contributed to the rise of cybercrime.
Many businesses permit employees to use IoT devices since they are known to enhance productivity and streamline crucial operations. With so many endpoints introduced to a network, hackers can easily detect a vulnerable device and exploit it to commit a cybercrime. Besides, IoT systems are increasingly being used to control critical infrastructure and factory operations, thus attracting more adversaries. Vendors are also racing to release the most products due to the large market. The rush to outdo competitors causes manufacturers to include security as an afterthought, resulting in devices with exploitable vulnerabilities.
How can businesses protect themselves?
Since cybercrime is expanding rapidly, businesses should take proactive measures to protect themselves. The following recommendations can help in reducing cybercrime levels:
- Regularly update software: Updating software and operating systems regularly deny cybercriminals the opportunity to exploit vulnerabilities. Patching security flaws make one a less likely target, which is essential to lowering cybercrime.
- Outsource security services: Outsourcing security is the best strategy for small- and medium-sized businesses that lack the resources to strengthen their cybersecurity posture. Managed service providers have access to the latest and most effective security practices, tools, and professionals. Outsourcing security reduces cybercrime significantly.
- Protect against identity theft: Using VPNs in a home or corporate network can help prevent identity theft. It is essential to securely share personal information and passwords to prevent cybercriminals from intercepting the communication.
- Normalize training: Cybersecurity training and awareness should be a common occurrence for businesses and individual computer users. Being conversant with the best security
- Use robust antivirus/antimalware tools: Antivirus software programs enhance cybersecurity since they detect and eliminate harmful programs. Users must ensure to update the antimalware solutions regularly to gain access to the latest threat definitions. | <urn:uuid:6313ce80-8715-4478-93f1-4de5397f8291> | CC-MAIN-2024-38 | https://cyberexperts.com/why-is-cybercrime-expanding-rapidly/ | 2024-09-07T21:26:08Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650920.0/warc/CC-MAIN-20240907193650-20240907223650-00841.warc.gz | en | 0.930913 | 2,509 | 3.3125 | 3 |
Body scans performed by experts using remote control software on personal computers can produce better quality images than those obtained by on-site operators, according to a study published in the November issue of the journal Radiology.
Patients, unfortunately, still need to visit the hospital to get strapped into sophisticated scanning equipment. Technologists must also be present to give patients instructions and to administer any necessary contrast agents, intravenous fluids that make internal anatomy more visible to the imaging device.
The study evaluated images from some of the most complicated body scans, such as those looking for subtle heart deformations in children or those scanning blood vessels for difficult-to-detect problems. Scans of 30 patients were performed by an on-site operator, and scans of another 30 patients were performed by an off-site operator who remotely controlled all the imaging parameters using a personal computer.
For all exams, an on-site technologist made sure patients were comfortable and properly positioned within the machine. The on-site and remote technologists communicated over a hands-free telephone connection.
All scans were performed on the same MRI (magnetic resonance imaging) machine, and the images were then ranked for quality by other experts blinded to patient details and whether images were taken by an on-site or remote operator.
Ninety percent of remote scans received an “excellent” rating, as opposed to 60 percent of scans performed with the operator onsite. The study authors concluded that the difference could be explained because of experience.
The remote operator had over 20 years of experience in radiology, while most scans performed by local operators had 2 to 15 years of experience. (The remote operator performed local scans on nine of the 30 local patients, but the study did not compare his performance between the remote and local scans.)
Teleradiology is not new. Scores of experts in India analyze images from patients in America. In this case, teleradiology lowers costs and speeds diagnosis because radiologists in India charge less, and their workday occurs after U.S. working hours.
However, this application is different because it would control the scans itself and allow a facility to perform specialized procedures even without specialized staff.
“As the speed and reliability of the Internet increases, it seems inevitable that distance will provide no barrier to the global application of this technology,” said J. Paul Finn lead author and chief of diagnostic cardiovascular imaging at the David Geffen School of Medicine at the University of California at Los Angeles. He noted that remote control of sophisticated scans should work for computed tomography as well as MRI.
Ultra-thin client software allowed the MRI to be controlled remotely. It used a derivative of VNC (virtual network computing), an open-source platform-independent protocol for controlling a computer from a remote console with a standard transmission control protocol. Software on the local computer scans for updates to images on the screen.
The connection between the remote and operator systems must be real-time with very little latency. However, a high-bandwidth connection is probably not necessary because large image files do not need to be transferred in real time.
For this study, information sent between the computers was not encrypted because it was a single, closed-network institution. The software has not been evaluated in situations where encryption would be necessary. | <urn:uuid:fb715a73-84a8-444d-87c1-0ba0efe36beb> | CC-MAIN-2024-38 | https://www.eweek.com/news/remote-body-scans-can-produce-better-images/ | 2024-09-07T21:51:50Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650920.0/warc/CC-MAIN-20240907193650-20240907223650-00841.warc.gz | en | 0.963988 | 671 | 2.578125 | 3 |
Companies today rely on a range of different systems to complete everyday tasks. But as business functions expand, companies can quickly become overwhelmed by the range of different tools at their disposal.
When these solutions aren’t aligned, they can’t share data and work together as they should. This can lead to disconnected data, lost productivity, and even security issues.
System integration is crucial for preventing this from happening, and many organizations today rely on it to stay on top of the expanding toolset they use for day-to-day business.
This article tells you everything you need to know about system integration, including what it is, its components and types, and examples of the process in action.
What is System Integration? Definition
System integration is the process of connecting multiple different systems into a single larger system that functions as one. This allows businesses to share information between different sub-systems autonomously by translating data from different sources in the technology stack.
The goal of system integration is to break down communication barriers among disparate systems by ensuring that data flows seamlessly between different platforms.
In the tech world, this typically involves linking software applications, databases, and other IT systems so they can communicate and share data seamlessly.
This allows for a more streamlined workflow and eliminates the need for manual data entry between different programs.
How does system integration work?
System integration is all about connecting different computer systems and software applications to function as a single, unified system. There are various ways to achieve this, but the core dea is to enable data to flow freely between different systems.
This might involve customer information from an e-commerce platform being transferred to a customer relationship management (CRM) system. This way, the CRM system has the latest data to provide better customer service
By integrating systems, you eliminate data silos, which are isolated pockets of information within an organization. This improves efficiency and avoids situations where different departments have conflicting information.
Components of System Integration
There are different approaches to system integration, but a common thread is ensuring all the connected systems work together smoothly to achieve the desired outcome. This process involves various components working together seamlessly to achieve a unified view of an organisation's different systems.
The key components of system integration include:
1. Data Integration
The foundation of system integration lies in ensuring seamless data exchange between different systems through data integration. This involves ensuring that data is accurate, reliable, and can be shared and utilized effectively across different systems.
Data integration focuses on combining data from various sources into a single, unified view to create a consistent and complete dataset for further analysis, reporting, or data warehousing. This data can come from different types of systems like databases, CRM, ERP, and even social media platforms.
Data integration is often a key step in system integration because once systems are connected, their data needs to be combined and standardized for effective use. This allows you to break down data silos and gain valuable insights from a holistic view of your information.
2. Application Integration
Connecting different applications to communicate and operate in harmony is essential for system integration. Application integration involves establishing a unified interface that allows applications to exchange data and interact without disruptions. These applications may be from different vendors, reside on-premise or in the cloud, and have entirely different functionalities.
The key goal of application integration is to break down information silos between applications. This enables smoother workflows, improves data consistency, and allows users to leverage functionalities from various applications without switching between them.
3. Process Integration
Process integration is the glue that binds the functionalities of different systems within a larger integrated system. This involves simplifying workflows, eliminating redundancies, and automating tasks to enhance productivity.
Process integration ensures that individual tasks within different systems are connected and flow smoothly, eliminating the need for manual data entry or switching between applications and creating a more streamlined workflow. This makes it easy for data generated in one system to be automatically transferred and used by another, reducing errors and improving efficiency.
4. Infrastructure Integration
Infrastructure integration focuses on managing and connecting the underlying hardware and network components that support all the software applications within the system. It ensures these resources are managed effectively to support the integrated system's demands, ensuring the infrastructure can handle the processing power and data flow required by the integrated system.
It also establishes secure and reliable network connections, such as Local Area Networks (LANs) or Wide Area Networks (WANs), depending on the system's needs. This ensures smooth communication, efficient resource utilization, and overall system stability, which are all crucial for a successful system integration project.
Types of system integration
The types of system integration can vary depending on who you ask. Some experts categorise system integration by the methodologies used to connect systems, such as APIs, webhooks, Integration Services Components and Orchestration.
On a basic level, however, there are four main types of system integration:
1. Enterprise application integration (EAI)
Enterprise Application Integration (EAI) is a specific type of system integration that focuses on connecting various software applications within a single organization. It specifically targets enterprise applications used by different departments, like CRM, ERP (Enterprise Resource Planning), and inventory management systems.
EAI aims to establish a standard way for these applications to communicate and share data, regardless of their underlying technology. This makes integration smoother and reduces the need for extensive modifications to each individual application.
2. Point-to-point Integration
Point-to-point integration is a type of system integration where individual systems are directly connected to share data or functionalities they need to work together. Unlike other methods that rely on a central platform, point-to-point integration establishes a dedicated link between the two systems involved, enabling faster data exchange and potentially less complexity in the initial setup. Data exchange can be facilitated through custom code written specifically for the integration or by leveraging APIs provided by the systems themselves, which offer a standardized way for applications to communicate and share data.
Point-to-point integration is well-suited for scenarios where you only need to connect a few specific systems. It's a relatively simple and straightforward approach to achieve data exchange between a limited number of applications.
3. Vertical integration
Vertical integration in system integration focuses on connecting various systems within a specific department or business function within an organization. It aims to create a streamlined workflow and improve efficiency by unifying data and functionalities within a particular area.
Vertical integration concentrates on systems used within a specific function like HR, finance, manufacturing, or supply chain. For example, integrating HR management systems, payroll processing systems, and benefits administration systems within the HR department.
By connecting these departmental systems, vertical integration allows for smoother data flow and eliminates the need for manual data entry between them. This reduces errors, saves time, and facilitates better decision-making within the department.
4. Horizontal integration
Horizontal integration brings together functionalities and data from different departments within an organization, breaking down departmental silos and fostering a more unified view of operations. Unlike vertical integration which connects systems within a single department, it bridges the gap between different departments.
By sharing data and functionalities across departments, horizontal integration allows for better collaboration and communication. Departments can gain insights into each other's operations, leading to more informed decision-making and providing a more holistic view of the customer journey. For instance, integrating a customer service platform with a sales CRM allows customer service representatives to see a customer's past interactions and purchase history, enabling them to provide more personalized and efficient service.
Benefits of System Integration
System integration is a complex process, but it can offer a wide range of benefits for businesses of all sizes. Some of the major benefits of system integration include:
- Increased Efficiency and Productivity. By connecting disparate systems, data silos are eliminated, and manual data entry is minimized. This streamlines workflows, reduces errors, and frees up employees to focus on more strategic tasks.
- Improved Data Visibility and Accuracy. System integration ensures consistent data flows across the organization. This provides a more comprehensive and accurate view of operations, leading to better decision-making based on real-time data.
- Enhanced Collaboration. Integration fosters communication and collaboration between departments. With shared data and functionalities, departments can work together more effectively to achieve common goals.
- Better Customer Experience. A horizontally integrated system allows for a 360-degree view of the customer journey. Businesses can personalize interactions, anticipate customer needs, and provide a more consistent and positive customer experience.
- Reduced Costs. System integration eliminates redundancy in data storage and reduces the need for manual processes. It can also lead to increased productivity and reduced operational costs.
- Improved Competitive Advantage. By streamlining operations, gaining better customer insights, and making data-driven decisions, businesses can gain a competitive edge in the market.
- Scalability and Flexibility. A well-designed integrated system can be easily scaled to accommodate future growth and changing business needs.
- Simplified IT Management. System integration can simplify IT management by consolidating tools and reducing the complexity of managing multiple isolated systems.
Examples of systems integration
Companies today have a range of methods available they can access to connect systems and tools. Some of the most common connectors include things like middleware for connecting disconnected data, application programming interfaces (APIs), and webhooks or HTTP call-backs. They can also use electronic data exchange systems for the same purposes.
Systems integration strategies can also involve various models. For example, a point-to-point model involves extracting data from one system and submitting it to another environment automatically. Meanwhile, a hub-and-spoke model uses a central hub to sort through the data collected from each environment and deliver it in a useful format for business leaders.
The unified environment would pull data from each tool leveraged by the company, without requiring them to access the software solutions separately, allowing for better end-to-end visibility, and improved productivity for the team.
System integration touches various aspects of an organization, so there are many real-world examples across different industries. Here are a few to illustrate the concept:
1. Inventory Management & Point-of-Sale (POS) Integration
Imagine a retail store connecting its inventory management system with its POS system. This allows real-time updates on stock levels whenever a sale is made. The system can automatically trigger purchase orders when inventory falls below a certain threshold, preventing stockouts and lost sales.
2. CAD & Manufacturing Execution Systems (MES) Integration
In a manufacturing setting, integrating Computer-Aided Design (CAD) software with the MES can streamline production processes. The MES receives product design data directly from CAD, eliminating errors and ensuring production follows the exact specifications.
3. Banking Systems & Accounting Software Integration
Banks can integrate their core banking systems with accounting software used by their corporate clients. This allows for automatic data exchange between the two systems, such as account balances and transaction details. This streamlines reconciliation processes and reduces manual data entry errors.
4. Electronic Health Records (EHR) & Appointment Scheduling Systems Integration
Hospitals can integrate their EHR systems with appointment scheduling systems. This allows patients to view their medical history, book appointments, and manage their healthcare information online. Additionally, doctors can access a patient's complete medical record during appointments, leading to better-informed treatment decisions.
5. CRM & Marketing Automation Integration
E-commerce businesses can integrate their CRM systems with marketing automation platforms. This allows them to target marketing campaigns based on customer data stored in the CRM. They can send personalized emails, recommend relevant products, and improve the overall customer experience.
Want to know who are the best system integration companies for you business? Check out our Top 10 System Integrator companies for 2024 to learn more! | <urn:uuid:73356563-bae2-4d21-99c7-fcd8c77183ca> | CC-MAIN-2024-38 | https://em360tech.com/tech-article/system-integration | 2024-09-09T01:26:12Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651053.52/warc/CC-MAIN-20240909004517-20240909034517-00741.warc.gz | en | 0.926131 | 2,442 | 3 | 3 |
As time goes by, in order to meet the need for higher bandwidth, faster speed and better utilization of fiber optics, FTTH access networks designs have developed rapidly. And there are two basic paths of FTTH networks: active optical network (AON) and passive optical network (PON). However, how much do you know about the them? Do you know what’s the differences between the two systems? Now, this article will give a detailed comparison between them.
Active optical network, also called point-to-point network, usually uses electrically powered switching equipment such as a router or switch aggregator, to manage signal distribution and direct signals to specific customers. This switch opens and closes in various ways to direct incoming and outgoing signals to the proper place. Customers can have a dedicate fiber running to his or her home, but it needs many fibers.
Different from AON, PON doesn’t contain electrically powered switching equipment, instead it uses fiber optic splitters to guide traffic signals contained in specific wavelengths. The optical splitters can separate and collect optical signals when they run through the network. And powered equipment is needed only at the signal source and the receiving ends of the signals. Usually, the PON network can distribute signals into 16, 32 and 64 customers.
As data travel across the fiber connection, it needs a way to be directed so that the correct information can arrive at its intended destination. And AON and PON offer a way to separate data and set it upon its intended route to arrive at the proper place. Therefore, these two networks are widely applied in FTTH systems. However, each system has their own merits and shortcomings. Here is a simple comparison between them.
In AON networks, subscribers have a dedicated fiber optic strand. In another word, each subscriber gets the same bandwidth that doesn’t be shared. While the users share the fiber optic strands for a portion of the network. These different network structures also lead to different results. For example, if something goes wrong in a PON network, it will be difficult to find the source of the problem. But this problem does not exist in AON.
As we have noted above, AON directs optical signals mainly by powered equipment while PON has no powered equipment in guiding signals except for two ends of the system.
When running an existing network, it’s known to us that the main source of cost is the maintenance and powering equipment. However, PON uses passive components that only need less maintenance and do not need power, which contributes to that PON building is cheaper than that of AON.
AON networks can cover a range to about 100 km, a PON is typically limited to fiber cable runs of up to 20 km. That is to say, subscribers must be geographically closer to the central source of the data.
Of course, apart from what have been listed above, there are other differences between these two networks. For instance, AON network is currently the industry standard. It’s simple to add new devices to the network. And there are numbers of similar products on the market, which are convenient for users to select. Besides, AON is a powered network, which decides it’s less reliable than PON. However, since the bandwidth in PON is not dedicated to individual users, people who use a passive optical network may find that their system slows down during peak usage times.
In summary, AON and PON have their own advantages and disadvantages, but both of them provide practical solutions for FTTH network connection. There is no right or wrong answers when it comes to choose which one of them. FS.COM provides several kinds of PON equipment such as PON splitters and OLT/ONT Units. If you want to find out more, please visit Fiberstore website. | <urn:uuid:35f82435-24b3-4b58-b934-eec8b7c05a00> | CC-MAIN-2024-38 | https://www.chinacablesbuy.com/comparison-active-passive-optical-network.html | 2024-09-09T01:25:07Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651053.52/warc/CC-MAIN-20240909004517-20240909034517-00741.warc.gz | en | 0.961627 | 784 | 3.125 | 3 |
Returns a handle to a speech parse tree, representing the sentence structure of what was decoded by the Speech Engine, according to the active grammars.
- LVParseTree GetParseTree(int VoiceChannel, int index)
The audio channel containing the input audio
It is possible to have more than one parse tree for an utterance (for instance if the grammar is ambiguous); this is the index of the tree
The return type is a parse tree object.
A parse tree and the parse string returned are logically the same. However, an LVParseTree object makes it easy to search the parse tree for useful information.
The LVParseTree object can be manipulated using the methods described in the LVParseTree C++ API. | <urn:uuid:392dfdf0-a2fd-47b3-9861-2fd47cd971c5> | CC-MAIN-2024-38 | https://www.lumenvox.com/knowledgebase/index.php?/article/AA-00786/0/LVSpeechPort%3A%3AGetParseTree.html | 2024-09-09T02:42:29Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651053.52/warc/CC-MAIN-20240909004517-20240909034517-00741.warc.gz | en | 0.824561 | 156 | 2.546875 | 3 |
What If suddenly one day, someone comes to you and says, “the phone you are holding in your hand, has no longer got the things that will get you going with the old applications?”
You will feel shocked and will ask the reason behind it and when you will get to know that suddenly in the past few months technology has changed so much that you can’t exist in a technical world without a timely upgrade, you will buy a new phone.
That’s just an example of how technology has been changing and how fast new trends are emerging. You can learn about top emerging technologies from here. Talking about emerging technological trends, one such trend or we should say, one such new kind of technology is, the Internet of Behaviour.
Sounding similar to one of the parts of technology that we use in our daily lives, the Internet of Behaviour is our complete area of focus, for example, technology is vastly used in businesses, technology in yoga, and many more.
Internet of Behaviour (IoB)
IoB can’t be talked about without the mention of IoT. The Internet of Things (IoT) is an interconnected network of physical devices that gather and share data and information via the Internet.
The IoT is continually increasing and changing in terms of its complexity, i.e. the way devices are interconnected, the calculations that these things can perform on their own, and the data that is stored in the cloud are all evolving.
The Internet of Behaviour refers to the gathering of data (BI, Big Data, CDPs, etc.) that offers important information on client behaviours, interests, and preferences (IoB).
(Must read: Top 10 examples of IoT)
From a behavioural psychology standpoint, the IoB tries to comprehend the data acquired from users' online activities. It aims to answer the question of how to interpret data and how to use that knowledge to develop and promote new goods, all from the perspective of human psychology.
The term "IoB" refers to a method of analyzing user-controlled data from a behavioural psychology standpoint. The findings of that study influence new ways to create a user experience (UX), search experience optimization (SXO), and how to advertise a company's final products and services.
As a result, while doing IoB is technically easy, it is psychologically challenging. For ethical and legal reasons, it is necessary to perform statistical studies that record everyday routines and behaviours without totally revealing customer privacy.
(Must catch: Internet of Robotic Things)
Benefits of IoB
The following are some of the Benefits of IoB:
(Also read: Basics of Product Positioning)
Combined Impact of IoB and IoT
Internet of behaviour is an extension of IoT. Let us try to know more about it. It's not about the "things" at all when companies use the Internet of Things to persuade us to change our habits. We've crossed over into the Internet of Behavior as the IoT connects individuals with their activities.
Consider the IoB as a mash-up of three disciplines:
Emotions, choices, augmentations, and companionship are the four areas of behavioural science that we examine when we utilize technology.
Companies that know us through the data provided by IoT, can now influence our behaviour using the data provided by IoB. Consider using a smartphone health app to check your nutrition, sleep habits, heart rate, or blood sugar levels. The app can warn you about potentially dangerous circumstances and propose behaviour changes that would lead to a more positive or desirable outcome.
(Most related: IoT in healthcare)
For the time being, corporations are mostly using IoT and IoB to watch and attempt to influence our behaviour to reach Allstate behavior their intended goal—typically, to purchase.
Working of IoB
How Data Is Collected?
Consumer data may be gathered from a range of sites and technologies, including a company's website, social media profiles, sensors, telematics, beacons, health monitors (such as Fitbit), and a variety of other devices.
Each of these sites gathers various types of information. For example, a website may keep track of how many times a person visits a certain page or how long they remain on it. Furthermore, telematics may track how hard a vehicle's driver brakes or the vehicle's typical speed.
(Suggested blog: How is IoT influencing the Human Body?)
What Happens to the Information Gathered?
Data is collected and analyzed by businesses for a variety of purposes. These reasons include assisting businesses in making educated business decisions, customizing marketing techniques, developing products and services, and driving user experience design, among others.
Companies establish standards to aid in the analysis of this data. When a user performs a specific action(s), the firm then begins to convince the user to modify their behaviour. For instance, if a user visits a company's page selling men's slim jeans three times, the digital shop may show them a pop-up ad offering them 25% off a pair of jeans.
Using Data from a Variety of Sources
Combining data from many sources and evaluating it to make a decision is another component of the Internet of Behaviors. Companies may develop in-depth user profiles for each user by combining data from a variety of sources. These profiles may then be looked at to see what the best course of action is for the person.
For example, on the brand's Instagram page, a customer called Ted comments on a photo of a new sneaker. Ted visits the brand's website a few days later and looks at the identical sneaker. After a week, Ted is watching an ad for the sneaker on YouTube. In the meanwhile, the brand is keeping track of all of Ted's digital content touchpoints.
Because Ted has expressed an interest in the brand's shoe, the brand may synthesize this information and devise a strategy for converting Ted into a customer. Remarketing display advertising or emailing Ted a discount coupon are examples of actions the brand might do.
(Also read: Network Marketing)
Use of IoB in Various Sectors
IoB in Business
Online advertising is increasingly being used by a variety of businesses to reach out to their clients. They may discover and target certain persons or groups that could benefit from their products or services with the help of IoB.
Both Google and Facebook utilize behavioural data to provide ads to users on their sites. This enables companies to interact with their target consumers and measure their behaviour in response to advertisements via "click rates."
Similarly, Youtube uses behavioural analytics to enhance the viewer's experience by only recommending or highlighting videos and subjects that they are interested in.
IoB During the Covid-19 Pandemic
The epidemic has increased our awareness of the precautions we must take during this period. Employers might use sensors or RFID tags to see if there are any inconsistencies in following safety standards. Restaurants and food delivery applications, for example, utilize the protocol information to guide their decisions.
Swiggy and Zomato, for example, both exhibited and promoted restaurant safety procedures. They also recorded and broadcast the temperature of the delivery person to reassure consumers that they were safe.
IoB for the Insurance Industry
In the insurance industry, IoB may be quite beneficial. Driver tracking tools are already used by insurance companies like Allstate and StateFarm to track and secure a driver's conduct. With the help of IoB, they may evaluate the behaviour and perhaps determine if a certain occurrence was an accident or a misjudged assumption on the part of the insured.
This can help prevent incidents of drunk driving, driving under the influence of drugs, and even underage or retired persons from getting behind the wheel and causing an accident.
(Related reading: Types of insurance)
The Internet of Behaviors offers businesses cutting-edge methods for marketing products and services as well as influencing user and employee behaviour. This technology is highly useful to organizations since it allows them to optimize their customer relationships depending on the data acquired.
Behavioral data technology is still developing. However, as new IoT devices proliferate, the argument over what constitutes critical data and ethical use is only beginning. | <urn:uuid:e412bf29-12e4-4585-8ba9-4c6884e8cf7c> | CC-MAIN-2024-38 | https://www.analyticssteps.com/blogs/introduction-internet-behaviour-iob | 2024-09-12T18:24:43Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651491.39/warc/CC-MAIN-20240912174615-20240912204615-00441.warc.gz | en | 0.945356 | 1,693 | 3.171875 | 3 |
In our previous article on Risk Assessment, we explored the importance of identifying and analyzing risks to safeguard your organization and comply with ISO 27001 standards. Now, we advance to the critical phase of Risk Treatment, where we implement strategies to manage identified risks effectively.
What is Risk Treatment?
Risk Treatment is the process of selecting and implementing measures to address the risks identified during the risk assessment phase. The goal is to reduce risks to an acceptable level, ensuring that your Information Security Management System (ISMS) aligns with organizational objectives and meets ISO 27001 requirements. This phase involves choosing appropriate risk responses and ensuring that residual risks are within acceptable levels.
Step 1: Defining your Risk Appetite
Risk appetite is the level of risk your organization is willing to accept in pursuit of its objectives.
Defining your risk appetite is a crucial step in the risk management process, providing a foundation for making informed decisions during risk assessment and treatment. It ensures that your Information Security Management System (ISMS) aligns with your organization's goals and that the controls you implement are appropriate to the level of risk deemed acceptable. A clearly defined risk appetite guides your organization in prioritizing risks, selecting suitable controls, and managing residual risks effectively. This clarity not only supports compliance with standards like ISO 27001 but also facilitates better communication with stakeholders about the organization's approach to risk management.
How to define Risk Appetite:
- Understand Organizational Objectives: Begin by reviewing the organization's strategic goals and objectives. Understanding these will help determine how much risk is acceptable to achieve these goals.
- Determine Risk Tolerance Levels: Define the maximum acceptable levels of risk for different risk categories (e.g., operational, financial, reputational). This includes setting thresholds for both the likelihood and impact of risks.
- Align Risk Appetite with Resources: Ensure that the defined risk appetite aligns with the organization's resources, including budget, personnel, and technology. The organization should have adequate resources to manage risks within the defined appetite.
- Define organizational risk limits by determining the level of risk your organization is willing to accept to achieve its goals.
- Guide risk management decisions by establishing clear risk appetite statements that align with organizational objectives and priorities.
- Ensure alignment and adaptability by regularly reviewing and adjusting the risk appetite to reflect changes in the risk landscape and organizational strategy.
Step 2: Plan and Implement Risk Treatment Controls
The next step in risk treatment involves planning and implementing controls to mitigate identified risks. This includes selecting the most suitable risk response strategies such as mitigation, transfer, avoidance, or acceptance.
Types of Risk Treatment Actions:
- Mitigation: Implement security controls to reduce the impact or likelihood of risks. This might include deploying firewalls, encryption, and access controls to protect sensitive data.
- Avoidance: Change processes to eliminate risk exposure entirely. For example, discontinuing the use of vulnerable technologies or practices that pose significant risks.
- Transfer: Shift risk responsibility to third parties through contracts or insurance, particularly for risks associated with third-party vendors and suppliers.
- Acceptance: Decide to accept the risk if the cost of mitigation exceeds the potential impact. This is typically applied to low-impact risks where the cost of control implementation outweighs the benefits.
- Control Selection: Choose controls that effectively address specific risks, guided by the controls outlined in ISO 27001 Annex A.
- Action Plan Development: Develop a detailed action plan with steps, timelines, and responsibilities for implementing each control. Include milestones to track progress and ensure accountability.
- Integration with Business Processes: Ensure that controls are integrated seamlessly into existing business processes to enhance security without disrupting operations.
- Select controls based on risk priorities and business objectives.
- Assign clear responsibilities and timelines for implementing controls.
- Ensure ongoing measurement and monitoring of control effectiveness.
Step 3: Calculate and Manage Residual Risk
Once controls are implemented, it's essential to calculate residual risk—the risk that remains after controls are applied. This step ensures that all significant risks are managed within acceptable levels and is a requirement for ISO 27001 certification.
Residual Risk Calculation:
- Evaluate Control Effectiveness: Measure how well controls reduce the likelihood and impact of risks. Use metrics and key performance indicators (KPIs) to quantify effectiveness.
- Estimate Residual Risk: Reassess the risk level post-control implementation and compare it to your organization’s risk appetite.
- Document and Report: Clearly document residual risks, providing justification for their acceptance or further treatment. Share this documentation with stakeholders.
- Assess whether implemented controls sufficiently mitigate risks.
- Determine if additional measures are needed to manage residual risks.
- Ensure residual risks align with the organization's risk tolerance.
Step 4: Monitor and Review Risk Treatment
Continuous monitoring and review of risk treatment efforts are crucial to ensure ongoing effectiveness and compliance. Regularly evaluate control performance and adjust strategies as necessary to respond to new threats and changes in the business environment.
- Performance Metrics: Use KPIs to measure control effectiveness and identify areas for improvement.
- Regular Audits: Conduct periodic audits to evaluate the implementation and performance of risk treatment measures.
- Feedback Mechanisms: Establish channels for receiving feedback from stakeholders to identify potential issues and areas for enhancement.
- Implement regular monitoring to ensure control effectiveness over time.
- Adapt risk strategies based on emerging threats and changing business environments.
- Conduct audits and reviews to maintain compliance and improve security posture.
Engaging Stakeholders Through Tabletop Exercises
A practical approach to refining risk treatment strategies is to conduct tabletop exercises with key stakeholders from various departments. These exercises involve simulating incidents and evaluating the effectiveness of implemented controls in a controlled environment. By bringing together diverse perspectives from HR, IT, operations, and security teams, organizations can test their response plans, identify weaknesses in controls, and enhance their overall risk management capabilities. This collaborative method not only improves preparedness but also fosters a culture of continuous improvement in risk management practices.
How Brainframe Enhances Risk Assessment and Treatment
Brainframe offers powerful tools to streamline risk assessment and treatment, helping organizations efficiently achieve ISO 27001 certification.
- Automated Risk Assessment: Leverage AI and machine learning to quickly identify and assess risks, offering insights into potential threats.
- Centralized Risk Register: Maintain an up-to-date risk register for comprehensive documentation and analysis.
- Control Management: Manage and track control implementation with an intuitive interface, ensuring timely completion of risk treatment plans.
- Residual Risk Calculation: Use analytical tools to accurately calculate residual risks, aligning them with organizational risk appetites.
- Real-time Monitoring: Access real-time monitoring and alerting capabilities to continuously track control effectiveness and respond to emerging threats.
Benefits of Using Brainframe:
- Increased Efficiency: Automate and streamline risk processes, saving time and resources.
- Enhanced Accuracy: Ensure thorough and accurate risk evaluations using advanced analytics.
- Improved Compliance: Align practices with ISO 27001 standards, facilitating certification.
- Proactive Risk Management: Gain real-time insights to stay ahead of potential risks.
Risk Treatment is a critical component of a robust ISMS, involving the careful selection and implementation of controls to effectively manage risks. By calculating and managing residual risks, organizations ensure compliance with ISO 27001 requirements and maintain a strong security posture. Tools like Brainframe can enhance these processes, making risk management more efficient and effective.
Next Steps: Exploring the Statement of Applicability
With a comprehensive risk treatment plan in place, our next focus will be on the Statement of Applicability (SOA). This phase involves selecting and justifying the controls needed to address identified risks, ensuring that the ISMS is tailored to your organization's specific needs. Stay tuned as we delve into the SOA, exploring how it links risk management with the practical application of security controls to strengthen your overall security posture. | <urn:uuid:d0074500-368f-4ba6-8341-e9fec4a9a212> | CC-MAIN-2024-38 | https://www.brainframe.com/blog/security-compliance-professionals-1/building-an-effective-isms-part-4-risk-treatment-22 | 2024-09-12T18:02:34Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651491.39/warc/CC-MAIN-20240912174615-20240912204615-00441.warc.gz | en | 0.911595 | 1,643 | 2.5625 | 3 |
One of the key technology developments that enables the IoT to reach its vision is much better battery life in wireless technologies. It’s important because it allows for devices to stay out in the field, serving its purpose without needing to be serviced (which costs a lot and reduces the product’s overall ROI). That’s why so many in the low-power, wide-area (LPWA) space are clamoring to claim long battery life.
Many try to look at the transmit power used to send a signal as a single metric for comparing battery life. Battery life is one of those complicated beasts that unlike coverage, which can be summed up using link budget, requires the whole system to be in place before you can know what it really is.
When it comes to battery life, it is better to transmit quickly at a higher power, than to transmit slowly at a lower power. Why? Well, that’s calculus my dear fellow! If battery usage is the area under the curve, then you want to minimize the area under that curve. So sending one acknowledged message at high transmit power very quickly (RPMA) uses far less battery than sending a single message three times because it isn’t acknowledged using less transmit power (e.g., Sigfox & LoRa technologies). Here’s a picture to demonstrate: | <urn:uuid:4023cec7-77e7-4a6f-b0e2-5993bb7eb29d> | CC-MAIN-2024-38 | https://www.ingenu.com/tag/sigfox/ | 2024-09-12T18:20:46Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651491.39/warc/CC-MAIN-20240912174615-20240912204615-00441.warc.gz | en | 0.936793 | 276 | 2.5625 | 3 |
19 Feb 16 Common Types Of Cyberattacks And How To Prevent Them
This week in cybersecurity from the editors at Cybercrime Magazine
Sausalito, Calif. – Feb. 19, 2024
Today’s cybercriminals are not part-time amateurs or script kiddies but rather state-sponsored adversaries and professional criminals looking to steal information and make large amounts of money.
TechTarget reports that disruption and vandalism are still prevalent, and espionage has replaced hacktivism as the second main driving force behind cyberattacks — after financial profit. With these different motives and the increasing sophistication of attackers, many security teams are struggling to keep their IT systems secure.
Cybersecurity Ventures has predicted that the global cost of cybercrime would hit $8 trillion in 2023 and increase to $9.5 trillion in 2024.
The costs of cyberattacks are both tangible and intangible, including not only direct loss of assets, revenue and productivity, but also reputational damage that can lead to loss of customer trust and the confidence of business partners.
Security managers and their teams must be prepared for all the different attacks they might face. To help with that, TechTarget lists 16 of the most damaging types of cyberattacks and how they work.
The list includes obvious attacks such as Malware, Ransomware, Password, DDoS, Phishing, and Botnet, and some that you might not be thinking about. TechTarget provides details on each one, and how to prevent them.
Cybercrime Magazine is Page ONE for Cybersecurity. Go to any of our sections to read the latest:
- SCAM. The latest schemes, frauds, and social engineering attacks being launched on consumers globally.
- NEWS. Breaking coverage on cyberattacks and data breaches, and the most recent privacy and security stories.
- HACK. Another organization gets hacked every day. We tell you who, what, where, when, and why.
- VC. Cybersecurity venture capital deal flow with the latest investment activity from various sources around the world.
- M&A. Cybersecurity mergers and acquisitions including big tech, pure cyber, product vendors and professional services.
- BLOG. What’s happening at Cybercrime Magazine. Plus the stories that don’t make headlines (but maybe they should).
- PRESS. Cybersecurity industry news and press releases in real time from the editors at Business Wire.
- PODCAST. New episodes daily on the Cybercrime Magazine Podcast feature victims, law enforcement, vendors, and cybersecurity experts.
- RADIO. Tune into WCYB Digital Radio at Cybercrime.Radio, the first and only round-the-clock internet radio station devoted to cybersecurity.
Contact us to send story tips, feedback and suggestions, and for sponsorship opportunities and custom media productions. | <urn:uuid:d91406ff-76f2-4370-8cf2-12e59b79d74a> | CC-MAIN-2024-38 | https://cybersecurityventures.com/16-common-types-of-cyberattacks-and-how-to-prevent-them/ | 2024-09-16T13:42:39Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651697.32/warc/CC-MAIN-20240916112213-20240916142213-00141.warc.gz | en | 0.937432 | 577 | 2.546875 | 3 |
Examples of Legal Documents That Can be Compromised and How It Can Happen
Here are some examples of the types of legal documents that can be compromised and how it can happen:
-Contracts: Contracts are legally binding agreements that contain confidential information such as pricing, payment terms, and non-disclosure agreements. Contracts can be compromised by hackers who gain unauthorized access to email accounts or file-sharing systems. For example, in 2019, a ransomware attack on a legal services provider exposed over 1 million confidential contracts.
-Intellectual Property Documents: Intellectual property documents, including patents, trademarks, and copyrights, contain confidential information about a company’s products, services, and inventions. These documents can be compromised by cybercriminals who steal them for their own financial gain or to sell them to competitors. For example, in 2020, a Chinese national was charged with stealing trade secrets from a Houston-based energy company.
-Client Files: Law firms often store confidential client information, including personal data, financial information, and medical records. Client files can be compromised by insiders who have access to these documents or by cybercriminals who gain unauthorized access to law firm networks. For example, in 2016, a cyberattack on a large law firm resulted in the theft of over 2.5 million files, including confidential client information.
–Court Documents: Court documents, including briefs, pleadings, and motions, often contain sensitive information such as trade secrets and personal data. Court documents can be compromised by hackers who gain access to court systems or by insiders who leak the information. For example, in 2019, a court clerk in Georgia was sentenced to prison for leaking confidential court documents to a defendant.
-Corporate Records: Corporate records, including shareholder agreements, financial statements, and tax records, contain confidential information about a company’s operations and financials. Corporate records can be compromised by insiders who have access to these documents or by cybercriminals who gain unauthorized access to company networks. For example, in 2020, a ransomware attack on a law firm resulted in the theft of over 756 GB of corporate records.
Tools for Legal Document Security
-Encryption: Encryption is the process of encoding information in such a way that only authorized users can read it.
-Password Protection: Password protection is an easy and effective way to secure legal documents. Passwords should be strong and complex, with a mix of upper- and lower-case letters, numbers, and special characters. Password managers can be used to manage multiple passwords securely.
-Secure File Sharing: Secure file sharing tools provide a secure platform for legal document sharing. These tools allow users to set access permissions, add expiration dates, and track document activity.
-Digital Signatures: Digital signatures provide an additional layer of security to legal documents. They ensure the authenticity and integrity of the document and help prevent fraud. Tools can be used to add digital signatures to legal documents.
-Anti-Malware Software: Anti-malware software can be used to protect against malware, viruses, and other types of cyberattacks. This software can detect and remove malware that could compromise legal documents.
-Virtual Private Network (VPN): A VPN provides an encrypted connection between a user’s device and the internet, making it difficult for hackers to intercept and access data. VPNs can be used to secure connections when accessing legal documents from outside the office.
-Data Loss Prevention (DLP): DLP software can be used to detect and prevent data breaches. These tools can identify sensitive data, track its movement, and block unauthorized access to legal documents.
Invisible Labels: A Tool for Personalized and Secure Legal Document Management
In addition to the tools mentioned above, there is another useful tool for securing legal documents: invisible labels. These labels can help to create personalized copies of legal documents each time somebody accesses them, making it possible to detect the source of a leak if one occurs.
Invisible labels are essentially unique markers that are added to each copy of a legal document. These labels are not visible to the naked eye but can be detected using special software or tools. Each time somebody accesses a document, the labels are applied, creating a new, personalized copy of the document that is unique to that individual. This makes it much easier to track down the source of a leak if one occurs, as each copy of the document can be traced back to the person who accessed it.
Invisible labels can be especially useful in the legal sphere, where confidentiality and privacy are paramount. By using these labels, law firms and other organizations can ensure that their legal documents remain secure, and that any leaks or breaches are quickly identified and addressed. Additionally, invisible labels can help to deter unauthorized access to legal documents, as they make it much more difficult for individuals to share or distribute copies of a document without being detected.
It’s worth noting that a solution incorporating invisible anti-leak labels is already available in the form of LeaksID. LeaksID is a software tool that provides advanced security measures for legal documents, including the use of invisible labels to prevent leaks and unauthorized access. With LeaksID, law firms and other organizations can rest assured that their legal documents are protected and secure, and that any attempts to leak or share confidential information will be detected and tracked.
Overall, invisible labels are an important tool for securing legal documents and protecting sensitive information. By using these labels, law firms and other organizations can ensure that their legal documents remain confidential and that any leaks or breaches are quickly identified and addressed. | <urn:uuid:55eb9da4-4fd9-408a-a256-231aa0e07e0f> | CC-MAIN-2024-38 | https://leaksid.com/security-of-legal-documents/ | 2024-09-20T03:59:22Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652130.6/warc/CC-MAIN-20240920022257-20240920052257-00741.warc.gz | en | 0.914775 | 1,134 | 2.53125 | 3 |
Catfishing Meaning: 7 Cybersecurity Tips, History and Examples
Author: Carter | 09 Jan 2024
From individuals to large-scale businesses, almost everyone has at some point fallen victim to digital fraud, be it phishing, spyware, or another route to a data breach. Getting a notification with a follow-up or message request from a person you already know is nothing new; people readily accept and connect with those they recognize. Fraudsters and bad actors now exploit this habit as a scam method to lure people into their traps.
Today, we will introduce catfishing, another technique attackers use, this time not to breach business networks but to manipulate people for personal gain. Despite the name, a catfish here is not a marine species but a person who impersonates someone else online in order to carry out illicit activities.
- Catfishers exploit online users by posing as someone else for their own benefit.
- Catfishes use sensitive personal information for financial gain, including taking out loans in the victim’s name or defrauding them directly.
- An online harasser builds a close relationship with the victim whilst pretending to be someone else, usually by using another person’s data or images to fabricate a fake persona.
Catfishing Meaning: What Is Its Role in Online Platforms and Dating Apps?
Catfishing is the practice of posing as somebody else by copying a real person’s personal data, such as their photos and social media profiles. Catfishers build a fake persona of a real person and use it to interact with other people online. With the rise of AI tools across industries, this technique is spreading to every sector, as creating multiple images of someone is just a single prompt away. People catfish for several reasons; however, it is a serious scam that can lead to fraud and other crimes.
What is Catfishing Online?
Catfishing, the act of creating fake online personas, is unfortunately common in online dating. Recognizing a catfish involves noticing certain red flags. Typically, they come across as too good to be true, tailoring their personality to match the person they’re talking to. Catfish tend to have limited photos on their social media profiles, often stolen from others. They avoid video calls and meeting up, offering excuses like shyness or technical issues. Dodging personal questions and withholding information, such as a phone number, is another warning sign. If asked for a candid selfie, they struggle to provide one, as they’d need to find a matching photo from their stolen profiles.
What is Catfishing in Dating?
Online impersonation, digital deception, or catfishing is a kind of online romance fraud in which the harasser creates a fake identity to manipulate the victim. The primary purposes are to harass or troll victims, steal their identities, and scam them.
Catfishing is common on dating apps, where scammers get close to people by playing with their feelings and gaining access to their sensitive information. On dating websites in particular, catfishes approach people, build a relationship, and then use their sensitive data to blackmail, bully, and embarrass them. Victims often don’t realize they are being catfished, as the catfish poses as someone they are not in order to trap them in a romance scam. The motives vary, ranging from loneliness and sexual exploration to revenge or financial gain.
How Catfish Originated – A Brief History
The word 'catfish' comes from an old fishing story. Shippers transporting cod would place catfish in the tanks with them to keep the cod active, which preserved their taste and quality. In the modern sense, a catfish keeps people on their toes much like the fish in the tale. The term was popularized by a 2010 documentary, but this type of scam is older than that: catfishes have been around since the beginning of online forums, and some have been arrested by police for trying to exploit users, particularly minors.
MTV's Catfish is a well-known TV show, but its origin story may be less familiar. Host Nev Schulman experienced catfishing firsthand when he fell in love with a girl online, only to discover she wasn't who she claimed to be. This personal experience led to the 2010 documentary Catfish, which gave rise to the term.
Is Catfishing Illegal?
It's essential to understand whether catfishing is legal, and the answer is tricky. Pretending to be someone else isn't, by itself, against the law, but extortion, cyberbullying, and fraud are. People must also be careful when sending nude or intimate images to someone digitally; if either party is underage, the material is considered child pornography.
Catfishing is manipulative and dangerous, to the extent that it can cost someone their life. That's why it's a crime if the harasser:
- Commits identity theft
- Commits scams, such as asking victims to send goods or money
- Uses trademarked or copyrighted material
- Takes or records images of others without their consent
- Gains illegal access to a network or system
- Involves a minor in fraud
- Introduces viruses or damages computer systems
Catfishing and Cyberbullying – Understanding Why People Catfish
Cyberbullying involves embarrassing, humiliating, and harming someone on online platforms through illegal means. Catfishing is a form of cyberbullying because it harms targets and plays with their minds. Catfish lure people into fake relationships and harvest their sensitive information so they can use this data against them for financial gain or other illicit activities.
Online harassers use a person's emotions against them, collecting details of the victim's personal history, physical traits, and any other sensitive information that can make them sad, scared, or depressed. Catfishers usually target isolated people who long for a relationship, because such people can be exploited easily.
Digital imposters also invent financial needs, claiming to be in dire need of money, facing an emergency, or ill. They may portray themselves as travelers who have run out of cash to cover their expenses. Catfishers will pose with several different scenarios to acquire funds from the same target.
Catfishing has many harmful effects, but the motivation behind it isn't always nefarious. Sometimes the perpetrator has a psychological condition that drives them to claim a fake identity. Reports suggest that people who catfish have a higher fear of rejection and higher anxiety levels, and that the behavior goes hand in hand with poor mental health. For some perpetrators it is just a game, and statistically, men catfish more often than women.
People who feel bad about their lives or lack confidence often seem drawn to catfishing. They create fake identities of someone they want to be by taking that person's images, or pretend to have a successful career just to enjoy the feeling of accomplishment. Catfishers who idealise attractive identities also use them to grab more attention and feel more confident and famous.
Catfishing crimes are also more common than ever among scammers seeking revenge on the people they impersonate. They use another individual's name and face to create an online presence, then say or do offensive things purely to damage that person's reputation.
Catfishing often leads to identity theft, as catfishers use the targeted person's personal information to commit fraud or build an entirely new identity.
What are the Examples of Catfishing?
Imposters manipulate and deceive individuals for multiple purposes, from financial gain to terrorism. Catfishing also impacts organisations and businesses, making it more than just a peer-to-peer threat targeting individuals on a personal level. Some of the best-known catfishing cases are below:
The Manti Te’o Case
In 2012, Manti Te'o, a famous college football player, was catfished by a persona known as Lennay Kekua. "Kekua" was in fact a man who made a fake social media profile and started a relationship with Te'o that went too far; the catfisher even drew sympathy from Te'o by faking Kekua's death. The story was documented by Netflix under the title The Girlfriend Who Didn't Exist.
The Military Imposter
John Edward Taylor was imprisoned for 14 years for catfishing women on social media platforms, particularly dating sites. He pretended to be a retired CIA agent or Navy SEAL, using the fake identity to impress his targets, earn their trust, and defraud them financially. He was eventually caught and sentenced on multiple counts of identity theft and fraud.
The ISIS Recruiter
Mohamad Jamal Khweis, a Virginia man, was sentenced to prison for attempting to join ISIS in Syria in 2015. He was recruited by Umm Isa al-Amrikiya, whom he believed to be a young woman interested in Islam. In reality, the persona belonged to an ISIS recruiter working to draw in Western recruits, with Khweis slated to serve as a suicide bomber. He was jailed for 20 years.
How to Restrict Catfishers this Valentine’s?
Catfishing scandals have become widely known because imposters target specific individuals for fraud or deception, making the practice especially hazardous for minors. It is generally used on dating websites and in romance scams to compromise a victim for financial gain. In a recent survey, 78% of Indians couldn't differentiate between a letter written by ChatGPT and one written by a human being, which shows the potential for catfishers to misuse the technology. Over the last five years, in the US alone, people have reported losing $1.3 billion to dating scams.
Catfishers use matrimonial and online dating sites to build relationships with victims and then extract money through blackmail or false pretexts such as medical emergencies. Socialcatfish.com is a useful resource that assists victims of online scams. The following points, adapted from that website, can help you stay away from imposters this Valentine's Day.
- Don't respond if someone connects out of nowhere; a catfisher usually has a hidden agenda to fulfil and will try to deceive you with a fake identity for money or some other illegal purpose.
- Never share sensitive and personal information with anyone online, as it can be used for blackmail. It also gives catfishers leverage if the victim tries to file a report against them, and can let them hack personal accounts, including email, social media, and bank accounts.
- If someone starts asking about your credit card or bank account information by the second or third chat, consider it a warning sign and stay away from them.
- When chatting with someone online, don't dismiss your instincts; trust your gut when something seems or feels weird, and do thorough research.
- Start a video call with the person and pay attention to details, such as their background.
How to Discontinue a Catfish Relationship?
Ending a relationship with a catfisher is obviously difficult, but the following steps can prove to be lifesavers.
- Think back over all the incidents that happened in the past; the pattern will confirm the deception.
- Immediately block the catfisher, and any mutual friends, to protect your mental peace.
- Seek therapy if the experience is taking a toll. Handing over financial data and being scammed out of it isn't easy to process.
7 Cybersecurity Tips to Outsmart a Catfish
Catfishing scandals are increasing alongside technological expansion, as imposters now know how to deceive users on digital platforms. The following tips are effective for avoiding and restricting catfishers.
1. Stay Vigilant
Individuals must be cautious when talking to someone over the internet, as the chances of being spoofed are high. As the saying goes, 'prevention is better than cure', a phrase worth remembering whenever you step into the online world.
2. Consider Reverse Picture Search
To check whether a person is authentic, always run a reverse image search. In many cases, a fake picture will show up in stock image libraries or on the real user's social media accounts.
3. Do a Video or Audio Call
Video or audio calls are among the most effective ways to tell whether a user is authentic or a scammer. Stay vigilant if they always refuse; it could be a sign that they are hiding and don't want to reveal their identity.
4. Try to Ask Specific Questions
If you spot red flags, ask questions from different angles and at different times; imposters can't keep a fabricated identity consistent for long. Question them specifically about their background, education, and work. For example, if they claim to live nearby, ask about a local place and judge their response.
5. Efficiently Conduct Research
Do thorough research when engaging with new users on social media platforms. A simple Google search can go a long way toward exposing a catfisher: search the username and check whether the profile image is fake or taken from elsewhere on the internet.
6. Take Advice From Friends
Don't cut friends out; reach out to them as soon as you notice threats. Their advice and support will help you move on from this phase, and they can give you a clear, outside view of the situation that leads to better decisions.
7. Update the Privacy Settings
Catfishing can be reduced by keeping social media accounts private, which prevents imposters from reaching you or seeing what is going on in your life. Users can control who sees their profiles, and that control limits catfishing on social media platforms.
How Conventional Verification Techniques Are Lagging
Facebook, the biggest social platform offering a dating service, verifies identities, yet catfishers still create fake profiles that look authentic. Imposters use credential-stuffing tactics that employ bots to hack account logins. Manual verification methods are therefore no longer sufficient and can actually put users at high risk; digital platforms have to provide advanced user-protection features that imposters can't tamper with.
Why Facial-Centric Identity Verification Is the Future
To restrict imposters, dating sites could grant a verified badge to authentic users after facial verification. Users could then decide whether they want to communicate with a given person or simply scroll on to the next. In this way, sites can secure user information and protect people from being catfished.
Face recognition helps dating sites verify authentic users and weed out bots and scammers, securing users and their sensitive information from traps. Users simply scan their faces with a webcam or smartphone, and an AI-powered system stores the facial data in a database. Every time a user logs in to a dating site or social media platform, the system verifies them by comparing their facial imprint with the stored data in less than one second, confirming that the user is genuine and not a catfisher.
How Facia Counters Catfishing with Technology?
Virtual socialisation and online dating have been on the rise since the pandemic, so social applications and sites have to facilitate digital meetups and conversations. To provide authentic connections, social media platforms and dating sites must protect users from catfishers and ensure that all users are verified. This is where Facia steps in.
Facia's white-label technology helps businesses reduce spoofing, phishing, and cyber-attacks, securing users' sensitive information and personal details. We aim to make the internet a safer place for users and continuously monitor imposter activity to restrict it.
Prevent catfishing on your platform today: contact us now for a free demo of our service and ensure security with Facia.
Frequently Asked Questions
What is an AML compliance program?
An AML (anti-money laundering) compliance program helps financial institutions prevent money laundering activities. It includes:
- Internal policies & procedures
- Employee training
- Customer due diligence (CDD)
- Transaction monitoring & reporting
Can AI-powered solutions improve AML compliance?
Yes, AI-powered solutions play a crucial role in enhancing AML compliance. AI can analyze large volumes of transactions to identify patterns that may indicate money laundering. It helps in real-time fraud detection, improves the accuracy of identifying suspicious activities, and reduces false positives, thereby enhancing operational efficiency.
What does 'high risk' mean in AML?
In AML terminology, 'high risk' refers to situations or jurisdictions that pose a greater risk of money laundering or terrorist financing. FATF regularly updates its list of high-risk jurisdictions that have strategic deficiencies in their AML/CFT (Combating the Financing of Terrorism) regimes. Financial institutions operating in these areas are required to apply enhanced due diligence measures.
What are sanctions in AML?
In AML, sanctions refer to restrictive measures imposed by governments or international bodies to prevent entities or individuals from engaging in activities like money laundering, terrorism financing, or other financial crimes. These sanctions can include trade restrictions, financial prohibitions, and travel bans.
Can biometric face recognition speed up customer onboarding?
Certainly, integrating biometric face recognition into onboarding processes dramatically enhances both the speed and security of these procedures. It allows companies to perform quick, secure identity verifications, drastically cutting down the time typically required for manual identity checks and significantly bolstering security measures. | <urn:uuid:651e5890-46d7-4694-a708-3c48409a160d> | CC-MAIN-2024-38 | https://facia.ai/blog/catfishing-meaning-7-cybersecurity-tips-history-and-examples/ | 2024-09-08T03:30:01Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650958.30/warc/CC-MAIN-20240908020844-20240908050844-00041.warc.gz | en | 0.953462 | 3,705 | 2.53125 | 3
The purpose of artificial intelligence is to create human-like learning abilities inside a machine. It is the practice of building software that can respond to stimuli much the way human beings do.
In simple terms, artificial intelligence means creating an artificial mind with technological tools, one that can think, judge, and react like a human. AI can mimic the decision-making ability of intelligent beings, and a well-built system's responses can make it hard to believe you are talking to a gadget.
Idea of Artificial Intelligence
AI became possible only after sufficient research into human behavior and thought. Successful results emerged when programmers found ways to link specialized programs with natural human behavior. For example, if we ask a person to introduce himself, he can quickly give his name, place, age, profession, and so on. Following this pattern, programmers load corresponding data into a machine so that it can relate itself to a question and produce output similar to a human's. The more a system can connect information with human behavior, the more intelligent it will appear when answering questions.
How does Artificial Intelligence work?
The functioning of AI depends on the combination of large data sets, processing units, and intelligent algorithms, implemented with advanced hardware processors and software. The programs inside an AI automatically build connections from input data and deliver humanlike output. The quality of the system's intelligence depends on the processing approach used to build it. Programmers focus on neural networks, cognitive computing, and natural language processing to establish brain-like information flow.
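As a loose illustration of the "building connections from input data" idea, here is a minimal sketch of a single artificial neuron trained on example data. It is a toy written for this article, not code from any particular AI product; the data, learning rate, and epoch count are all illustrative assumptions.

```python
# A toy artificial neuron that "learns" the logical OR function from examples.
# Everything here (data, learning rate, epoch count) is an illustrative choice.

examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w1, w2, bias = 0.0, 0.0, 0.0  # connection strengths start at zero
lr = 0.1                      # learning rate: how fast connections adjust

def predict(x1, x2):
    # Fire (output 1) when the weighted evidence crosses the threshold.
    return 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0

for _ in range(20):  # show the examples repeatedly
    for (x1, x2), target in examples:
        error = target - predict(x1, x2)
        # Strengthen or weaken each connection in proportion to the error.
        w1 += lr * error * x1
        w2 += lr * error * x2
        bias += lr * error

print([predict(x1, x2) for (x1, x2), _ in examples])  # expected: [0, 1, 1, 1]
```

Real systems scale this same adjust-connections-by-error loop to millions of parameters, which is what the neural networks mentioned above do.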
Types of artificial intelligence
AI can also be divided into categories based on how it functions and the range of tasks it can handle. Some of the classes are the following:
1. Narrow AI: These systems are specialized for a single task, such as self-driving cars, image recognition, or assistants like Google Assistant and Siri. Narrow AI's scope is very limited, but within that scope its accuracy can be very satisfactory. Narrow AI is widely applied in manufacturing, where companies hire developers to build intelligent systems that perform specific tasks accurately.
2. General Intelligence: Another term for general intelligence is strong AI, meaning a system that can perform work the way a human can. Such systems could clean floors, deliver food to tables, recognize a person by name, and give satisfying answers in conversation; people enjoy talking to them because of their polite behavior. Robots built with AGI techniques in mind show remarkable accuracy in retrieving information, though truly general AI remains a research goal rather than a finished product.
Abilities of Artificial intelligence
The capabilities of artificial intelligence have given some unexpected results. Some of its notable abilities are the following:
• Quick response: AI processing units work with techniques such as voice recognition, so they can reply within a fraction of a second of receiving input.
• Personal views: Artificial intelligence can reason in ways that resemble human thought, so it can offer suggestions about a product or piece of information and maintain a natural interaction with humans.
• Declining impossible requests: Humans can't do everything, and some questions have no workable answer. Faced with such a problem, an AI declines the request and politely says no. For example, asking an AI assistant to fly into the sky is an impossible task, so it refuses politely.
• Detecting visuals and voices: This feature has broad applications across many fields of science. Voice and face recognition now make unlocking devices simple, and assistants like Google Assistant and Siri are available on Android and iOS devices to handle everyday chit-chat.
The development of AI gives hope of doing the impossible in the near future. Technology has not yet reached the level of creating a fully advanced artificial brain, but the task may become feasible within a few decades. For now, humans have deliberately restricted the reasoning scope of AI for safety, and those barriers will be lifted only as we learn to deal with the consequences of artificial intelligence. | <urn:uuid:718d62e8-77a5-4563-b9ed-1d49bfac6c1b> | CC-MAIN-2024-38 | https://beyondexclamation.com/artificial-intelligence-making-machines-behave-like-humans/ | 2024-09-09T05:29:11Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651072.23/warc/CC-MAIN-20240909040201-20240909070201-00841.warc.gz | en | 0.930661 | 857 | 3.78125 | 4
What is a dinosaur?
A. Dinosaurs are imaginary creatures in stories.
B. Dinosaurs are extinct reptiles that lived millions of years ago.
C. Dinosaurs are creatures that still roam the Earth today.
The answer is B
Dinosaurs were reptiles that lived millions of years ago during the Mesozoic Era. They ruled the Earth for over 160 million years before becoming extinct. These fascinating creatures came in various sizes and shapes, from the massive long-necked sauropods to the fierce carnivorous theropods.
Scientists have discovered and studied numerous dinosaur fossils, providing us with a glimpse into the world of these ancient creatures. The study of dinosaurs, known as paleontology, helps us understand their behavior, physiology, and the environments they lived in.
Despite their extinction, dinosaurs continue to capture the imagination of people around the world through movies, books, and museums. Their legacy remains alive in our curiosity and fascination with these incredible prehistoric animals. | <urn:uuid:60f804d3-50c6-4635-bbde-ec24b5bd5b59> | CC-MAIN-2024-38 | https://bsimm2.com/world-languages/the-amazing-world-of-dinosaurs.html | 2024-09-09T05:55:17Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651072.23/warc/CC-MAIN-20240909040201-20240909070201-00841.warc.gz | en | 0.959973 | 200 | 3.578125 | 4 |
Managing Disaster Recovery Scenarios with Two Data Centers
How does a data collection device cluster deal with disaster recovery scenarios?
The BIG-IQ system uses high availability and zone awareness functions to maintain data collection device (DCD) operations even when a node or an entire data center goes down. DCDs in each data center are assigned to the appropriate zone. This zone awareness enables the system to manage the distribution of your data and maintain DCD operation in all but the most severe outages.
For a better understanding of how this process works, consider the following example. A hypothetical company named Acme has two data centers; one in Seattle and the other in Boston. Acme wants to ensure data reliability, and has set up these data centers so that if one goes offline, the DCD data it was receiving is routed to the other data center. To achieve this, Acme has an HA pair of BIG-IQ systems for viewing and managing the data, and six DCDs divided equally between the two data centers. Two BIG-IP devices are used to load balance data and configure Fraud Protection Services and Application Security Manager settings for both data centers.
The HA pair ensures that one BIG-IQ system is always available for managing configuration data. The standby system is available for viewing configuration data, and managing data. The DCDs are treated as one large cluster that is split between two sites. Each data node is replicated, so that if one goes down, or even if an entire data center goes down, the data is still available.
Two data centers, one DCD cluster example
The DCD cluster logic that governs the distribution of data between your DCDs identifies one node in the cluster as the master node. The master node monitors the cluster health and manages the cluster operation. It is elected by the cluster, and can reside on any node. (The BIG-IQ system, however, does not store any data.) If a master node goes down, a new master is elected from among all the nodes in the cluster.
How does the minimum master eligible devices setting work?
One parameter of special significance in determining the behavior of the data collection device (DCD) cluster is the minimum master eligible devices (MMED) setting. All of the devices in the cluster (including the primary and secondary BIG-IQ systems) are eligible to be the master device.
When a device is added or removed from the DCD cluster, the system performs a calculation to determine the optimum default value. You can override the default value to suit your requirements.
This setting determines how many DCDs in the cluster must be online for the cluster to continue to process alert data. If your goal is to keep operating regardless of device failures, it might seem like the obvious choice would be to set this number to as low a value as possible. However, you should keep in mind a few factors:
- The BIG-IQ system is counted as a device in the cluster, so a cluster size of 1 does not make sense.
- Similarly, a cluster size of 2 (a DCD and the BIG-IQ console) is not a good idea. Because the DCD cluster logic uses multiple DCDs to ensure the reliability of your data, you need at least two logging devices to get the best data integrity.
- It might also seem like a good idea to set the MMED to a higher value (for example, one less than the number in the entire cluster), but actually, best practice is to not specify a value larger than the number of devices in one zone. If there is a communications failure, the devices in each zone compose the entire cluster, and if the MMED is set to a lower value, both clusters stop processing data.
How is alert data handled when data collection devices fail?
Here are some of the most common failure scenarios that can occur to a data collection device cluster, and how the cluster responds to that scenario.
What failed? | How does the cluster respond? |
One of the data collection devices fails. | All alert data, including the data that was being sent to the failed node, is still available. When a node is added, removed, or fails, the cluster logic redistributes the data to the remaining nodes in the cluster. |
The master node fails. | The cluster logic chooses a new master node. This process is commonly referred to as electing a new master node. Until the new master is elected, there may be a brief period during which alert processing is stopped. Once the new master is elected, all of the alert data is available. |
All of the data collection devices in a zone fail. | Just as when a single data collection device fails, the cluster logic redistributes the data to the remaining nodes in the cluster |
How is data handled when communication between the two data centers fails?
This scenario is a little more complex than the case where data collection devices fail, so it needs a little more discussion to understand. However, it's also much less likely to occur. The cluster behavior in this scenario is controlled by the Minimum Master Eligible Devices (MMED) setting. The default MMED setting is determined by a simple formula: the (total number of nodes / 2) + 1. This setting should handle most scenarios. Before you consider changing the default setting, you should learn more about how the Elasticsearch logic uses it. Refer to the article on discovery.zen.minimum_master_nodes in the Elasticsearch Reference, version 6.3 documentation.
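As a quick sanity check on that formula, here is a small Python sketch (purely illustrative; the function name is mine, not part of any F5 or Elasticsearch API) of how the default MMED works out for the example cluster:

```python
def default_mmed(total_nodes: int) -> int:
    """Default minimum master eligible devices: (total number of nodes / 2) + 1.

    Integer division implements the strict-majority quorum described above.
    """
    return total_nodes // 2 + 1

# Acme's deployment: 2 BIG-IQ systems + 6 DCDs = 8 master-eligible nodes.
print(default_mmed(8))  # 5 -- larger than the 4 nodes in either zone, so a
# split between the data centers would stop alert processing in both zones
# until quorum is restored. The worked scenario below instead uses an
# overridden value of 3, in line with the best practice of not exceeding
# the number of devices in one zone.
```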
You do not control which node is the master, but the master node is identified on the Cluster Settings tab of the BIG-IQ Data Collection Configuration page. Considering the two-data-center scenario discussed previously, let's assume that the master node is in the Seattle data center. If communication goes down between the data centers, the Seattle data center continues to function as before, because with four nodes (the BIG-IQ system and three DCDs) it satisfies the MMED setting of 3. In the Boston data center, a new master node is elected because without communication between the two data centers, the Boston data center has lost communication with the master node. Since the Boston data center also has four master eligible nodes, it satisfies the MMED setting. The Boston data center elects a new master and forms its own cluster. So, once the two new master nodes are elected, BIG-IP devices in each data center send their alerts to the DCDs in their own cluster, and the master node in each zone controls that zone.
Two data centers form two DCD clusters following a communications failure
When communication resumes, the cluster that existed before the failure does not reform on its own, because both data centers have formed their own independent clusters. To reform the original cluster, you can restart the master node for one of the clusters. However, reforming the cluster without first doing a couple of precautionary steps is not generally the best practice because you will lose some data.
What happens to your data if you just reform the cluster
- If you decide to restart the master node in the Boston data center when communication is restored, the cluster logic sees that the Seattle data center already has an elected master node, so the Boston cluster joins the Seattle cluster instead of forming its own. The Seattle master node then syncs its data with the Boston nodes in the cluster. The data sync overwrites the Boston data with the Seattle data. The result is that Boston data received during the communication failure is lost.
- If you decide to restart the master node in the Seattle data center when communication is restored, the cluster logic sees that the Boston data center already has an elected master node, so the Seattle cluster joins the Boston cluster. The Boston master node then syncs its data with the nodes in the Seattle cluster. That data sync overwrites the Seattle data with the Boston data. The result is that Seattle data received during the communication failure is lost.
How to preserve the maximum amount of data
To preserve as much data as possible, F5 recommends that, instead of just reforming the original cluster by restarting one of the clusters, you perform these two precautionary steps.
- When a communication failure occurs, change the target DCDs for the BIG-IP devices in the zone that did not include the original master node (Boston, in our example) to one of the DCDs in the zone that housed the original master node (Seattle, in our example).
- When communication is restored, in the zone that did not include the original master node (Boston, in our example), use SSH to log in to the master node as root, type bigstart restart elasticsearch, and press Enter. Restarting this service removes the node from the election process just long enough for the original (Seattle) master node to be elected.
After you perform these two steps, all alerts are sent to nodes in the zone that contained the original master node. Then when communication is restored, the DCDs in the zone where the master node was restarted (Boston in our example) rejoin the cluster. The resulting data sync overwrites the Boston data with the Seattle data. The Seattle data center has the data that was collected before, during, and after the communications failure. The result is that all of the data for the original cluster is saved and when the data is synced, all alert data is preserved. | <urn:uuid:e93939ae-2ad7-4897-9f93-fb547a2fdefe> | CC-MAIN-2024-38 | https://techdocs.f5.com/kb/en-us/products/big-iq-centralized-mgmt/manuals/product/big-iq-centralized-management-plan-implement-deploy-6-0-1/02.html | 2024-09-09T06:19:35Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651072.23/warc/CC-MAIN-20240909040201-20240909070201-00841.warc.gz | en | 0.91794 | 1,918 | 2.515625 | 3 |
You need to tell your friend something, it’s urgent! But you’re at a crowded party and you’ve become separated.
Suddenly, you see them across the room.
The music is loud, people are in the way, and it’s the 90s so no, you can’t just text them. What do you do?
Talking to your friend
To try to communicate with your friend across the room, maybe you try shouting as loudly as you can.
Shouting means you’re putting more energy into the message. Similarly with machines, increasing the energy of a signal means that it can be heard over greater distances. Connectivity examples: Cellular and Satellite.
But what if your lungs aren’t strong enough to shout that loudly? Instead of shouting, maybe you try walking across the room, decreasing the distance.
Walking towards your friend means you don’t have to expend as much energy to talk, but now you have shorter range. For machines, decreasing power consumption means the messages have decreased range. Connectivity examples: Bluetooth and WiFi.
Is there another option? Is it possible to communicate with your friend without increasing the energy and without decreasing the range?
What is LPWAN?
As the name implies, Low-Power Wide-Area Networks (LPWANs) allow for low power consumption over a wide area, aka long range. So how is this accomplished?
With your friend, perhaps all you want to communicate is that you’re done with the party and want to leave. Instead of shouting or moving closer, you might simply point to the exit.
Because of its simplicity, this message can be communicated over the distance without shouting. For machines, decreasing the amount of data sent (the bandwidth) means lower energy at range.
This is what LPWANs do: they send and receive small packets of information at infrequent intervals. Sensors and devices can send data over miles of range instead of feet, and can last for years on an AA battery instead of weeks or months.
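To make "small packets of information" concrete, here is an illustrative Python sketch of how a battery-powered sensor might pack a reading into just a few bytes before handing it to an LPWAN radio. The field layout and scaling are assumptions invented for this example, not part of any LPWAN standard:

```python
import struct

def pack_reading(sensor_id: int, temp_c: float, battery_pct: int) -> bytes:
    """Pack a sensor reading into a 4-byte payload.

    - sensor_id: 0-255 (1 byte)
    - temp_c: stored as tenths of a degree in a signed 16-bit int (2 bytes)
    - battery_pct: 0-100 (1 byte)
    """
    return struct.pack(">BhB", sensor_id, round(temp_c * 10), battery_pct)

payload = pack_reading(sensor_id=17, temp_c=22.4, battery_pct=93)
print(len(payload), payload.hex())  # 4 bytes instead of a verbose text message
```

Sending four bytes a few times a day, instead of kilobytes continuously, is what lets these devices run for years on a small battery.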
LPWANs aren’t without downsides. When you point to the exit, your friend might not be looking; the message is transmitted but it isn’t received. Similarly, LPWANs aren’t very reliable and messages sometimes need to be sent several times to make sure that it’s received. In the analogy, it would be like pointing to the exit a bunch of times before you leave because you’re not sure if your friend saw you.
Despite the low reliability and high latency associated with LPWANs, they do place an essential role in IoT.
Enabling the Internet of Things
IoT applications can vary greatly, but many applications need tons of sensors spread over big areas, such as Smart Agriculture or Industrial IoT (for a deeper look at potential applications, check out last week’s #askIoT post).
As discussed in a previous #askIoT post, there are many ways for these sensors and devices to communicate, each with varying pros and cons. When you have thousands of sensors spread over a big area, you need wireless communication with long-range and low power consumption. After all, it would suck to have to replace the batteries in thousands of sensors all the time.
Also, it costs money to send messages and connectivity options like cellular are expensive. Imagine having to pay your phone bill not just for one device, but for thousands. Yikes.
LPWAN technology thus plays a crucial role in enabling the Internet of Things. These networks make it possible to have many thousands of sensors collecting and sending data at lower cost, over longer range, and with better battery life than other connectivity options. Some applications include:
A parking garage — sensors detect when spots are open, sending a simple Yes or No message only when that value changes.
A school building — battery-powered locks can be remotely activated or deactivated, helping with general security and crisis situations.
A city — waste containers throughout a city can send alerts when they’re close to being full, allowing for more efficient garbage collection.
And many more…
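As a concrete illustration of the parking-garage pattern above, where a sensor transmits only when a value changes, here is a small, purely hypothetical Python sketch; send_over_lpwan is a stand-in for whatever radio driver a real deployment would use:

```python
last_state = None  # remembered occupancy of this parking spot

def send_over_lpwan(message: bytes) -> None:
    # Stand-in for a real radio driver; here we just log the transmission.
    print("TX:", message)

def on_sensor_sample(spot_occupied: bool) -> None:
    """Called on every sensor sample; transmits only on a state change."""
    global last_state
    if spot_occupied != last_state:  # the radio stays silent otherwise
        send_over_lpwan(b"\x01" if spot_occupied else b"\x00")
        last_state = spot_occupied

for sample in [False, False, True, True, True, False]:
    on_sensor_sample(sample)  # transmits only 3 times: False, True, False
```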
It’s important to note that LPWAN is a general term, and there are many different competing standards and technologies under that umbrella. The competing LPWAN standards and technologies include but are not limited to: LoRa, SIGFOX, Ingenu, Weightless, and SymphonyLink.
There are varying pros and cons for each of these LPWAN options, but in the interest of keeping this post high-level and non-technical, I’ll refrain from digging much deeper in this post.
So now you can answer, “what is LPWAN?”! If you want to learn more, you can find a high-level explanation of the LPWAN options and their pros/cons here. And if you want a deep dive into LPWAN, check out our white papers. | <urn:uuid:8286f908-46d5-4eca-9f5f-5ef5a0acb7f5> | CC-MAIN-2024-38 | https://www.iotforall.com/what-is-lpwan | 2024-09-09T06:07:35Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651072.23/warc/CC-MAIN-20240909040201-20240909070201-00841.warc.gz | en | 0.942435 | 1,035 | 3.3125 | 3 |
Keeping kids offline long ago stopped being a possible preventive measure against the increasing number of security risks online.
As kids across the world ease into summer vacation mode and bury their noses in their devices, we propose a deep dive into ways you can teach your kids about their privacy and the importance of personal data to help them overcome online security threats.
Kids’ privacy, security and digital well-being are essential in an era where staying offline is no longer an option. Due to the prevalence of social media and parents’ oversharing of photos, it’s believed that about 80% of children in the UK have an online presence by the age of two, according to a report.
As young children become digital citizens and are familiarized with the intricacies of owning a digital profile, concerns about them suffering harm online increase dramatically.
We already know that children share a great deal of information on social media and other platforms, bypassing age requirements (minimum 13 or older). They use their real names and post photos wearing school uniforms, leaving their profiles accessible to anyone with an internet connection.
What most kids, especially teenagers, don’t realize is that oversharing or careless exposure of information can impact their future. Their digital profiles serve as an extension of themselves and can set back their college applications, careers and financial stability.
Cybercrooks often target personal data of kids to commit identity crimes that go undetected for months, if not years. Using a child’s Social Security number to open credit card accounts will bring a low credit score and debt that could derail their plans in adulthood.
Additionally, poor judgment and bad online behavior can impact their lives offline, with more and more educational institutions and employers looking at profiles of potential candidates before making a decision.
Ensuring your kids’ safety in the real and digital world is a full-time job. Here’s a handy guide that will help you and your child take control over their privacy, personal data and online reputation:
- Use strict privacy settings on all apps and websites your kids use – by controlling who can see what they post or send friend requests, you limit the chances of malicious individuals targeting them
- Depending on their age, consider supervising them whenever they download applications or sign up on new platforms. Check whether the app asks permission to access your child’s contacts and photos. If the app’s functionality does not need this information, avoid installing it
- Talk about good password hygiene – make sure your child uses unique passwords for all online accounts and enables two-factor authentication to protect against unauthorized access from cybercriminals and other digital miscreants
- Make sure your child does not overshare information on digital platforms, including their real name, the name of their school, home address, phone numbers, credit card information, Social Security numbers or any sensitive media files
- Teach kids about phishing and other social engineering tactics, making sure they never click on links or attachments in emails, texts or instant messages, especially if they are unsolicited or come from strangers
- Enforce the stranger danger rule with your kids – digital threats targeting children are not limited to phishing and malware. Online predators often pose as kids to trick young internet users into revealing sensitive information or meeting them in the real world
- Promote good digital behavior to your child and teach them to never respond to mean comments, threats or other bad conduct online
If this information is helpful to you, read our blog for more interesting and useful content, tips, and guidelines on similar topics. Contact the team of COMPUTER 2000 Bulgaria now if you have a specific question. Our specialists will assist you with your query.
Content curated by the team of COMPUTER 2000 on the basis of marketing materials provided by our partners/vendors.
Let’s walk through the journey of digital transformation together. | <urn:uuid:3c48e83c-2cfd-46e9-a094-53dbd91cfac1> | CC-MAIN-2024-38 | https://computer2000.bg/protecting-kids-data-online-is-key-to-ensuring-digital-safety/ | 2024-09-10T11:29:18Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651241.17/warc/CC-MAIN-20240910093422-20240910123422-00741.warc.gz | en | 0.926816 | 796 | 3.0625 | 3 |
What happens if North Korea launches a first strike at the US?
The US monitors North Korea using various means, including satellite imaging, intercepted communications, and spies within the country.
Private firms with government funding are also working on satellite-based radar that could watch for launches even through cloud cover, which can block conventional satellite imaging.
If North Korea were to launch a missile, the US could spot it and work out where it’s heading, but intercepting it would be much harder.
America has three lines of defensive rockets that could take out an ICBM, in increasing order of distance from North Korea, according to ABC News:
in South Korea, on US Navy ships in East Asia, and in Alaska and California.
They’re not a guaranteed defense, however.
Could the Terminal High Altitude Area Defense system (THAAD) that the U.S. is installing in South Korea protect the country from an attack by the North?
The problem with relying on missile defenses is that they are not good enough to replace efforts to solve the problem directly through diplomacy.
You might think a missile defense system is an insurance policy, but it’s hard to have a lot of confidence in that technology.
A study done a year ago of the U.S. Ground-Based Midcourse Defense system based in California found that even under controlled test circumstances it has a 50 percent failure rate.
That’s a reflection of several things:
One is that it’s a very hard technical problem to hit a bullet with a bullet.
Also, the program behind the system has not been run very well.
It was exempted by former Pres. George W. Bush from the standard fly-before-you-buy oversight and testing rules, and so it’s been able to go forward without jumping through the hoops that Congress finds—again and again—are important to make these things work.
And these intercept tests are incredibly expensive, about $2 million each.
The Pentagon’s main tester has said the missile defense system has no demonstrated operational capability, which we agree with.
And yet people are saying that they could shoot down North Korean missiles with a 90 percent chance of success.
I worry about political leaders who don’t understand the technical issues, thinking that they have capability that they don’t actually have.
If Pres. Trump thinks that we could launch an attack on North Korea and then defend ourselves using THAAD or the Aegis Ballistic Missile Defense system, then it could lead to bad decisions.
What other options does the U.S. have to defend itself from North Korea’s arsenal?
Most of the defense systems are intended to work as a missile passes out of the atmosphere, during the midcourse phase, because once you’re above the atmosphere things move on a predictable trajectory.
The problem is that outside the atmosphere it’s also easier to deploy decoys that can confuse a missile defense system—things have the same trajectory above the atmosphere regardless of their mass.
One thing the U.S. doesn’t appear to be working on now—which is a bit of a mystery to me—is so-called boost-phase missile defense, which attempts to hit the missile while it’s still burning early in flight.
A missile burns for its initial three to five minutes of flight.
During that time it’s a large target that’s moving relatively slowly and is easy to see.
The problem is that because boost time is short you need to be relatively close to the launch site to be able to do it.
But North Korea is a small country surrounded by water—it was made for boost phase missile defense positioned on ships.
If you’re really worried about North Korea and want to develop a missile defense system, that’s probably the way you’d want to go.
If a nuclear-tipped missile were hurtling toward the United States, would we be able to stop it?
Maybe, if we were very lucky.
Right now, a constellation of sensors and 36 interceptor missiles make up the ground-based midcourse defense system, or GMD.
It’s intended to act as insurance against a small-scale nuclear attack from North Korea, or possibly Iran, according to the Department of Defense.
(Neither country has missiles capable of reaching the US, although US officials say North Korea is getting closer.)
It’s not meant to ward off an unlikely attack from the much larger and more sophisticated arsenals of Russia or China — nor would it be able to.
Still, it’s the only defense we have against an intercontinental ballistic missile or ICBM once it’s in the air.
On May 30th, 2017, the US tested these defenses against an ICBM-like target for the first time.
To stop it, a ground-based interceptor missile fired from Vandenberg Air Force Base collided with the incoming warhead and smashed it to smithereens.
The test appears to have been a success — but that doesn’t necessarily mean the GMD could stop an enemy weapon under real-world conditions.
In fact, the Government Accountability Office — a nonpartisan government agency also known as the congressional watchdog — reported in 2016 that the GMD “has not demonstrated through flight testing that it can defend the U.S. homeland against the current missile defense threat.”
HOW IS OUR MISSILE DEFENSE SYSTEM SUPPOSED TO WORK?
For a second, let’s imagine a frightening future where North Korea actually does have working ICBMs — and decides to launch one. Satellites with infrared sensors and radar systems deployed in Japan and on US Navy ships would spot the missile launch, and alert control centers in the US. Sensors, including a sea-based, high-resolution radar, would track the hostile missile as it flies.
When the missile leaves the atmosphere, it enters longest phase of its flight called the midcourse.
At this point, the missile breaks up into the warhead, debris, decoys intended to confuse our sensors, and the last stage of the burned-out rocket booster.
On the other side of the Pacific, people in control centers in Alaska and Colorado would work quickly to find the warhead and figure out where to intercept it.
Then, they give the order to fire interceptor missiles from Vandenberg Air Force Base in California, or Fort Greely, Alaska.
There are 36 interceptors stashed in silos at these two sites, each carrying a “kill vehicle” on a three-stage rocket booster.
(Tom Karako, a missile defense expert with the Center for Strategic and International Studies, describes the kill vehicle as a “funny looking telescope with a jetpack attached to it.”)
As the interceptor leaves the atmosphere and enters space, the 120-pound kill vehicle and the rocket separate.
Using infrared sensors to find the incoming warhead, the kill vehicle moves into the warhead’s path by firing its own little thrusters.
When the two objects collide, the kill vehicle should, theoretically, obliterate the warhead without causing a nuclear detonation. | <urn:uuid:b35b0cb3-0923-403b-8027-24bc1c4cc375> | CC-MAIN-2024-38 | https://debuglies.com/2017/09/05/report-could-the-us-really-stop-nuclear-missile-attack/ | 2024-09-16T15:59:15Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651697.45/warc/CC-MAIN-20240916144317-20240916174317-00241.warc.gz | en | 0.956334 | 1,514 | 2.796875 | 3 |
A new platform enables high-quality 3-D video communication on mobile devices such as smartphones and tablets using existing standard wireless networks.
“To our knowledge, this system is the first of its kind that can deliver dense and accurate 3-D video content in real time across standard wireless networks to remote mobile devices such as smartphones and tablets,” said Song Zhang, an associate professor in Purdue University’s School of Mechanical Engineering.
The platform, called Holostream, drastically reduces the data size of 3-D video without substantially sacrificing data quality, allowing transmission within the bandwidths provided by existing wireless networks, he said.
It improves the quality and expands the capabilities of popular applications already harnessing real-time 3-D data delivery, such as teleconferencing and “telepresence,” which uses virtual reality and other interactive technologies, allowing people to feel or appear as if they were present in a remote location.
“This technology also could enable emerging applications that may require high-resolution, high-accuracy 3-D video data delivery, such as remote robotic surgery and telemedicine,” said Zhang, director of Purdue’s XYZT Lab.
Findings are detailed in a research paper to be presented during the Electronic Imaging 2018 conference, Jan. 28-Feb. 2 in Burlingame, Calif.
The paper was authored by doctoral student Tyler Bell; Jan P. Allebach, Purdue’s Hewlett-Packard Distinguished Professor of Electrical and Computer Engineering; and Zhang.
Existing 3-D video communication technologies have limited applications, in part because they require specialized and often expensive hardware, complex system setup, and highly demanding computational resources to operate in real time, or without delay.
Before 3-D video can be transmitted it must be compressed.
However, it has been difficult to compress 3-D video for transmission in real time using conventional methods.
The new platform solves the problem by first converting 3-D video to 2-D format.
“Standard 2-D image and video compression techniques are quite mature and enable today’s modern 2-D video communications over standard wireless networks,” Zhang said.
“If 3-D geometry can be efficiently and precisely converted into standard 2-D images, existing 2-D video communication platforms can be immediately leveraged for low bandwidth 3-D video communications.”
The 3-D objects are represented by a mesh of intersecting lines that form triangles.
On top of this geometry is a “texture” of features that make the objects look realistic.
“This paper presents a novel method for the efficient and precise encoding of 3-D video data and color texture into a regular 2-D video format,” Zhang said.
“We developed a novel 3-D video compression method that can drastically reduce 3-D video data size without substantially sacrificing data quality.”
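The paper's actual encoding scheme is more sophisticated, but the core idea, storing per-pixel depth inside the channels of an ordinary 2-D image so that standard video codecs can compress it, can be sketched in a few lines of Python. Everything below (the byte split across two channels, the depth range) is a simplified assumption for illustration:

```python
import numpy as np

DEPTH_MIN, DEPTH_MAX = 500.0, 1500.0  # assumed working range in millimeters

def depth_to_image(depth_mm: np.ndarray) -> np.ndarray:
    """Encode a float depth map into two 8-bit channels of a regular image."""
    norm = (depth_mm - DEPTH_MIN) / (DEPTH_MAX - DEPTH_MIN)  # -> [0, 1]
    q = np.clip(norm, 0, 1) * 65535                          # 16-bit depth
    hi, lo = np.divmod(q.astype(np.uint16), 256)             # split into bytes
    return np.stack([hi, lo], axis=-1).astype(np.uint8)      # 2-channel image

def image_to_depth(img: np.ndarray) -> np.ndarray:
    """Invert the encoding to recover approximate depth."""
    q = img[..., 0].astype(np.float32) * 256 + img[..., 1]
    return q / 65535 * (DEPTH_MAX - DEPTH_MIN) + DEPTH_MIN

depth = np.random.uniform(600, 1400, size=(4, 4))
roundtrip = image_to_depth(depth_to_image(depth))
print(np.abs(depth - roundtrip).max())  # tiny quantization error, under 0.02 mm
```

A naive byte split like this would be fragile under lossy video compression, which is one reason published methods such as Holostream use more carefully designed geometry-to-2-D encodings.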
The framework was tested using standard medium-bandwidth networks to simultaneously deliver high-quality 3-D videos to multiple mobile devices.
Holostream is made possible through a new pipeline for 3-D video recording, compression, transmission, decompression and visualization.
The team developed both the hardware and software for the pipeline including a 3-D video capture system.
A 3-D camera captures the images, using an LED light to project structured patterns of stripes onto the object being scanned.
These stripes allow the system to determine the depth and shape of the object.
The platform could enable applications where real-time delivery of high-resolution, high accuracy of 3-D video data is especially critical, such as collaborative design and online “facial behavior analysis,” which could reveal a person’s mental state and medical conditions such as depression and post-traumatic stress disorder. | <urn:uuid:62750248-6845-4ef9-b147-549167bd30c6> | CC-MAIN-2024-38 | https://debuglies.com/2018/01/10/holostream-allows-high-quality-wireless-3-d-video-communications/ | 2024-09-16T15:06:00Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651697.45/warc/CC-MAIN-20240916144317-20240916174317-00241.warc.gz | en | 0.936271 | 795 | 2.890625 | 3 |
Creator: Alberta Machine Intelligence Institute & University of Alberta
Category: Software > Computer Software > Educational Software
Topic: Algorithms, Computer Science
Tag: dynamic, experience, learning, model, planning
Availability: In stock
Price: USD 100.00
In this course, you will learn about several algorithms that can learn near optimal policies based on trial and error interaction with the environment—learning from the agent's own experience. Learning from actual experience is striking because it requires no prior knowledge of the environment's dynamics, yet can still attain optimal behavior. We will cover intuitively simple but powerful Monte Carlo methods, and temporal difference learning methods including Q-learning.
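As a taste of what the course covers, here is a minimal sketch of the Q-learning update on a toy problem. This is a generic illustration written for this description, not course material; the environment, step size, discount, and exploration rate are arbitrary assumptions:

```python
import random

def step(state, action):
    """Toy chain environment: states 0..4; action 0 moves left, 1 moves right.
    Reaching state 4 yields reward 1 and ends the episode."""
    next_state = max(0, state - 1) if action == 0 else min(4, state + 1)
    reward = 1.0 if next_state == 4 else 0.0
    return next_state, reward, next_state == 4

Q = {(s, a): 0.0 for s in range(5) for a in (0, 1)}
alpha, gamma, epsilon = 0.1, 0.9, 0.3  # step size, discount, exploration rate

for _ in range(500):  # learn purely from trial-and-error episodes
    state, done = 0, False
    while not done:
        if random.random() < epsilon:
            action = random.choice((0, 1))                     # explore
        else:
            action = max((0, 1), key=lambda a: Q[(state, a)])  # exploit
        nxt, reward, done = step(state, action)
        # Q-learning bootstraps off the best action in the next state.
        best_next = 0.0 if done else max(Q[(nxt, 0)], Q[(nxt, 1)])
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

# The learned greedy policy should move right from every non-terminal state.
print([max((0, 1), key=lambda a: Q[(s, a)]) for s in range(4)])  # [1, 1, 1, 1]
```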
We will wrap up this course by investigating how we can get the best of both worlds: algorithms that combine model-based planning (similar to dynamic programming) with temporal difference updates to radically accelerate learning. By the end of this course you will be able to:
- Understand Temporal-Difference learning and Monte Carlo as two strategies for estimating value functions from sampled experience
- Understand the importance of exploration when using sampled experience rather than dynamic programming sweeps within a model
- Understand the connections between Monte Carlo, Dynamic Programming, and TD
- Implement and apply the TD algorithm for estimating value functions
- Implement and apply Expected Sarsa and Q-learning (two TD methods for control)
- Understand the difference between on-policy and off-policy control
- Understand planning with simulated experience (as opposed to classic planning strategies)
- Implement a model-based approach to RL, called Dyna, which uses simulated experience
- Conduct an empirical study to see the improvements in sample efficiency when using Dyna
| <urn:uuid:cd9bdacb-3185-4e2b-9d45-df252d46afb9> | CC-MAIN-2024-38 | https://datafloq.com/course/sample-based-learning-methods/ | 2024-09-20T06:22:31Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652138.47/warc/CC-MAIN-20240920054402-20240920084402-00841.warc.gz | en | 0.88934 | 353 | 2.71875 | 3
When it comes to the education of our children, the reality is that no tool is too unconventional if it helps them in a caring and constructive way. Whether you are a parent, a teacher, or merely a concerned citizen, you know that school can be very hard for children, and giving them as many tools as possible to ease that experience is very important. As you can imagine, one of the best ways to help children learn is by reaching out to them in ways that actually reach them. This is why many educators are starting to use video games as a teaching tool: they know that most children play video games, and games are a great way to connect with younger generations who struggle to learn with simple pen and paper. So let's take a look at what video games can teach children and how we can use them to enhance the learning experience.
You see, kids can learn a lot from video games, and not only what the best Fortnite hacks are or how to secure a Victory Royale. A lot of people forget that children are genuinely curious and will seek out information about things they are passionate about. What's great about using video games in a school context is that it allows you as an educator to funnel that attention toward something you want them to learn. So where do you even start, and are there tools out there to simplify the process? Thankfully, there are companies, even AAA video game companies, making tools specifically to help children learn.
One of the best examples is Ubisoft, best known for its Assassin's Creed franchise and the Tom Clancy properties. One of the lesser-known things about this company is that it works hand in hand with Canadian universities to develop teaching tools built on its game engine. The great thing about these learning tools is that they use the Assassin's Creed universe to teach history, and they are so precise and well made that they can be used at the university level or to teach children how people used to live. Tools like these are essential not only for deepening the way we understand history but also for sharing it with as many people as possible: the knowledge gathered by these researchers and Ubisoft's 3D modelers is so detailed that it is being used to help rebuild the Notre-Dame Cathedral.
On their own time, one of the best things children can learn from video games is logic and problem-solving. Especially at a young age, children need to be entertained but also stimulated in ways that challenge their brains to develop critical thinking. Research suggests that video games stimulate several areas of the brain that govern problem-solving and logic, so it seems only fitting to use the tools children already use every day to help them build skills that will serve them their whole lives, much the way a jigsaw puzzle would. If you are in charge of buying games for a kid, you might therefore want to look at puzzle games and similar titles that keep a child challenged while playing.
Last but not least, one of the most important things kids can learn from video games is how to work with others and develop a sense of community. As we live in a society where isolation and alienation have become more and more prevalent, being able to connect with others over shared hobbies is very important, especially at a young age. Having a circle of online friends who share your interests, and being able to connect on a human level this way, is a great way to help your child not only learn how to work with people but also develop empathy. While many people argue that empathy is innate to human beings, the reality is that, like everything else, it is like a muscle that needs to be worked, starting as soon as possible. In a world with so many different experiences and possibilities, it is important to make sure that your child is open-minded and empathetic toward circumstances different from their own.
Emerging technologies have the potential to change how industries operate, and cloud computing is one that already has. However, the uptake of new technologies has traditionally been slow for pharmaceutical companies due to strict regulation and compliance requirements. The COVID-19 pandemic was a wake-up call for the pharma industry to rethink its operations. Realizing the true potential of digital technologies, pharma companies are increasingly harnessing advanced technologies to reduce costs and streamline their workloads. Today, cloud computing in the pharmaceutical industry is not just a storage solution; it has evolved to address the significant privacy, security, and compliance challenges the industry faces. Here is how cloud computing is driving the pharmaceutical industry toward innovation and efficiency.
Read Intone insights: Digital transformation in the pharmaceutical industry
Cloud computing speeds up the drug discovery process
The research and development of a single drug can span many years. Cloud-based high-performance computing gives researchers access to virtually unlimited storage and computing resources that help organize resources and guide initial surveys. Data, including lab results, imagery, and statistical analysis data, can all be delivered at speed with the help of cloud-based infrastructure, saving valuable time during the early R&D stages. Combined with the power of Artificial Intelligence (AI) and Machine Learning (ML), this opens up a wave of opportunities for pharmaceutical companies to speed up commercial results and save costs. These productivity savings, coupled with the operational savings, can all be put back into R&D efforts. In this way, cloud computing in the pharmaceutical industry helps speed up the drug discovery process.
Cloud computing enables seamless collaboration
The recent pandemic has made us understand the importance of the globalization of healthcare. Pharma companies often collaborate with a range of partners, including biotech and research firms located anywhere in the world, and cloud computing makes it possible to integrate and standardize these information flows into a truly efficient, streamlined collaboration platform. The technology also gives pharma companies greater control over their scalability. Moreover, when pharmaceutical companies collaborate to discover and develop new drugs, the cloud lets them share a common platform while keeping their independent data safe and secure. Cloud computing thus gives the pharmaceutical industry a quick, efficient, and secure way to deliver the value of its work.
Cloud computing ensures secure data sharing during clinical trials
Clinical research and clinical trials are transforming from a paper-driven model into one that is almost wholly electronic. Wearables and smartphone technology now enable researchers to collect data directly from clinical trial participants. Yet despite this advanced data-collection model and high-speed computing, clinical trials still take a long time, because they generate large amounts of data that often go underutilized due to obstacles that prevent data sharing, including data misrepresentation and risks to patient privacy. The solution is to bring clinical trial management into the cloud. Having one centralized “console” in the cloud from which to query, receive, and archive feedback vastly speeds up the clinical trial process, and it helps researchers aggregate clinical data and analyze it at a granular level. Cloud computing for clinical trials thus ensures professional-grade security and discretion for valuable collaborative input.
Cloud computing supports pharma marketing operations
Every player in the pharmaceutical industry wants to enhance and personalize patient engagement while lowering costs and reducing risk as they tap into new markets. This is exactly why pharma companies have been investing in cloud computing in recent years. Cloud computing has advanced by leaps and bounds and offers a huge opportunity for pharma companies to boost their marketing activities. It allows marketers in the pharmaceutical industry to work efficiently wherever they are and to directly access large volumes of data instantly. Implementing cloud computing also maximizes cost efficiency, improves communication among marketers, and optimizes digital media spend, which can save millions of dollars. This is how cloud computing in the pharmaceutical industry supports marketing operations.
Constant, persistent innovation is the key driver of today’s multibillion-dollar pharmaceutical industry. With digitalization accelerated by the coronavirus pandemic, the market for cloud computing in the pharmaceutical industry is expected to grow rapidly as regulatory issues are ironed out and security fears settle. The future will bring a further explosion of data, and it is only wise for the pharmaceutical industry to adopt digital technologies such as cloud computing, artificial intelligence, big data analytics, and IoT to withstand the demands of tomorrow. For those ready to seize the opportunity and meet the challenge, we at Intone are creating new and innovative strategies that open infinite possibilities to reinvent the patient experience and redefine the future of pharmaceuticals and life sciences. We offer a full range of services that set the gold standard in the pharmaceutical and life sciences industry by creating innovative, intelligent, intuitive, and integrated solutions, including patient services and CRM, R&D, supply chain, cloud migration, commercial services, financial transformation, GRC, and technology design, build, and integration. Let us transform your pharmaceutical company and reach new heights in science.
Tactile learning is both a learning style and an approach to learning and development. In both cases, it’s all about hands-on experience — specifically through touch. Also known as kinesthetic learning, tactile learning involves immersing oneself in training content to essentially ‘learn by doing.’ Tactile learning materials are developed with interactivity in mind, usually focused on engaging one’s sense of touch.
Tactile learners tend to perform best when they’re able to directly explore, experiment with, and experience training content rather than memorizing it. They may display excellent manual dexterity and strong spatial awareness along with a tendency to move or fidget when they have to process information. Beyond that, tactile learners usually fall into one of three groups:
Tactile learning is a form of hands-on training that places a greater emphasis on physical interaction and movement. It may or may not take a multisensory approach to learning, incorporating tactile, aural, and visual elements. Tactile learning activities may include:
Tactile learning is beneficial for a lot of the same reasons as hands-on training.
First, it promotes improved learner engagement and retention, especially if you incorporate multisensory experiences. People tend to learn a lot more effectively when they can directly interact with training content, especially if that content taps into more than one sense.
Tactile learning also improves both comprehension and understanding. Rather than having to sit through a painfully boring slide deck or read through a sprawling knowledge base, participants can learn how your software works by actually using it. It improves the sales journey and onboarding process for customers and prospects and also helps streamline employee training.
Tactile learning can also be tweaked to accommodate a diverse range of learning styles. Instead of focusing solely on one type of learner, you can make sure everyone can engage in a way that makes sense to them. Through adaptive learning, this can even be achieved at scale.
Lastly, from a customer education perspective, tactile learning makes it easier to narrow the focus of your training content. Customers can focus exclusively on the content that interests them while you stand by waiting to provide them with feedback and guidance. It’s a huge step up from scripted demos and sales pitches.
Although it sees some applications in post-secondary institutions, tactile learning is most frequently deployed in early childhood, elementary, and secondary education. That doesn’t mean it’s not relevant to software training, though. Interactive simulations, particularly those delivered via mobile device, are innately tactile experiences.
Moreover, as technology such as virtual reality and augmented reality continues to proliferate, businesses could have the opportunity to incorporate even deeper interactivity into both their employee and customer training. Rather than just simulating software environments, they could simulate entire offices or locations.
Imagine if, instead of staring down at a phone screen or laptop during a cybersecurity simulation, you were actually put directly in the middle of an office experiencing a major cyberattack. It would take learning by doing to a completely different level. More importantly, it would help your people be better prepared than ever for when they actually have to put their knowledge to the test.
USB drives offer so much convenience. A little storage device the size of your finger, you can carry one around without even noticing it, and with every passing year the amount of data they can hold grows and grows. These small storage devices are so easy and convenient to use that they are found everywhere in the business world, from desk drawers to branded swag drives on keychains. And since they are so easy to pop in and out of a USB port, if you are like many people, you probably do not even bother to eject them before pulling them out. Is there really any problem with not ejecting your USB drive properly? Unfortunately, the answer is a definite “Yes.”
From losing data to ruining the drive, failing to properly eject your USB drive can lead to real issues. Read on to discover the way your USB drive works and why it is so important to go through the ejection process on your computer.
Removing a USB Drive Without Ejecting—What You Need to Know
How USB Drives and Computers Communicate
Using a USB drive is such a seemingly simple task. But when you look more closely at what goes on with your drive and your computer when they interact, you will discover that the way they work together involves a lot more than just plugging in and unplugging.
When you plug a USB drive into your computer or laptop, the first thing that happens is the computer delivers power through the USB port to the USB drive. The drive does not have its own power source, so it requires power from the computer to operate. After the computer has supplied power, the computer and the drive must communicate with one another.
Proper communication between a computer and a drive requires having the right drivers installed on your computer. Fortunately, today’s drives come equipped with drivers that your computer can download to allow it to communicate with the drive—which is why modern USB drives are considered “plug-and-play.”
When the computer and the drive have established communication, the computer does what it needs to do to figure out what is on the drive. There are multiple steps to just this process, including reading the directory structure and the Master Boot Record or Partition Boot Record (the process can vary by drive).
Every one of the things described above happens before you are able to see your USB drive contents on your computer—all within a matter of seconds. There are numerous other things that go on behind the scenes as you use the USB drive as well. While it may seem like the changes you make to your drive happen instantly, in reality, there are multi-stage processes occurring that may take longer than you realize.
Alterations to Your Drive Happen in Batches
As your computer is reading your drive, it is changing the information in the metadata on the files, such as changing the time and date that the file was last modified. Then, when you make changes to files, such as adding or deleting a file, the changes you make will first occur in your computer’s cache. Eventually, your computer will make the actual alterations to the information on your drive. Again, these things happen quickly, but it is important to understand that they do not happen instantly, which is one of the reasons why pulling the drive out can cause problems.
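A small example makes the batching visible. In Python, for instance, flush() only empties the program’s own buffer, while a sync call asks the operating system to push its cached pages out to the device itself; safely ejecting effectively guarantees this for every pending write. The mount path below is an assumption.

import os

with open("/media/usb/report.txt", "w") as f:  # hypothetical mount path
    f.write("quarterly numbers\n")
    f.flush()              # empty Python's userspace buffer
    os.fsync(f.fileno())   # ask the OS to write its cache to the device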
Other Programs May Be Using Your Drive
You see a very small portion of what actually happens with your computer at any given moment. While you may not be interacting with your drive right now, other programs on your computer could be doing so. For example, your antivirus and anti-malware programs could be busy scanning your drive while you are doing other things. Removing the drive while such programs are doing things on your drive can cause the files to be corrupted.
What Happens When You Eject the Drive?
Your computer and your drive have to go through a process to say goodbye, just as they had a process to say hello. By pressing the eject button in your system, you tell the computer to start this process and finalize everything so that the drive can be removed safely. The computer makes sure that all of its interactions with the drive are complete before it says you can safely remove the drive, such as waiting until the antivirus is done scanning it.
Always Eject the Drive to Avoid Damaging Files or the Drive
Failing to properly eject your USB drive can damage files or corrupt the entire drive. That is why you always want to go through the proper ejection process; skipping it could cost you the data on the drive, or the ability to use the drive at all.
November 6, 2019 Alex Woodie
Businesses in Northern California learned about a new threat to business continuity in October: the Public Safety Power Shutoff, or PSPS. With weather conditions ripe for the rapid spread of wildfire, Pacific Gas and Electric de-energized thousands of miles of power lines, sending thousands of homes and businesses into disaster recovery mode.
In late September, as weather conditions turned dry, hot, and windy, PG&E warned that it would have to cut power across large swaths of its Northern California service territory, potentially shutting off power to 2.4 million people. In the wake of power line-sparked wildfires that destroyed tens of …
Earning trust through principled privacy operations and transparency.
The EU General Data Protection Regulation (GDPR) restricts the transfer of personal information outside of the European Economic Area except where adequate safeguards are in place to protect it. As a global, multinational company, NetApp recognizes the need to provide adequate levels of data protection when personal information is transferred across borders and has put in place a number of measures to meet the requirements of the GDPR.
Read more about NetApp’s response to the recent EU decision on the Privacy Shield here.
Modern global enterprises expect information to be available regardless of where they are, where their workforce is, and where their customers are. Everything from human resources to product development and transportation is data driven, and the ability to confidently transfer data between geographies is imperative for building and maintaining a global business. When the data being transferred is personal information, however, safeguards must be in place to ensure that the privacy of the data subject—the person whose data is being transferred—is sufficiently protected.
Over 100 countries have data protection laws. While many of these laws share common principles, they can and do vary in their requirements for cross-border data transfers. For example, under the GDPR, personal information is not permitted to be transferred outside of the EU unless certain conditions are met. Other laws, such as restrictions on the transfer of personal information collected by government agencies or related to an individual’s health or finances, may impose additional conditions or restrictions. Whatever your geolocation requirements are, NetApp has you covered.
The primary reason that people are concerned with data location is that it determines which government has the right to make legal decisions and judgments regarding access to the data—what lawyers refer to as “jurisdiction.” International legal rules regarding jurisdiction are based on an underlying recognition of a nation’s sovereignty and often involve complex rules of interpretation when dealing with international transactions. Questions of jurisdiction become particularly concerning when dealing with individual rights of data privacy, as different jurisdictions recognize and enforce individuals’ rights in their personal data in different ways.
For example, in Europe, the GDPR restricts moving personal information outside of the European Economic Area except under certain circumstances. These circumstances include an adequacy decision by the European Commission that the receiving country has implemented adequate legal protections for personal data. The GDPR anticipated that countries outside the EU may not be willing or able to change their laws to meet Europe’s privacy requirements, so it provides other options for cross-border transfers, under which parties can rely on the private law of contracts to ensure that personal information is adequately protected. For entities operating in countries without an adequacy decision, the GDPR permits cross-border transfers when the entity transferring the data is subject to Binding Corporate Rules or when the contracts governing the treatment of such data include Standard Contractual Clauses.
NetApp is a global company operating throughout the world and has long recognized the need for responsible cross-border data transfers. With headquarters in California, we are not eligible to rely on an adequacy decision by the European Commission. Instead, we place our commitments to protect personal information in our Binding Corporate Rules (BCRs). In fact, NetApp was one of the first companies to have our BCRs approved by our supervisory authority in the Netherlands. We have updated our BCRs to reflect the requirements of GDPR and we are currently awaiting their approval. Additionally, we provide standard contractual clauses as part of our Customer Data Processing Addendum as further assurance for how data is transferred as part of the processing activities. Each of these clauses is backed by administrative, technical, and operational safeguards that are regularly assessed for compliance.
Some types of personal information, such as information collected by a government on its citizens, may have additional restrictions regarding movement across borders. As a global leader in storage across platforms, NetApp offers many solutions that can meet even the most stringent requirements for data localization. Customers can choose between the industry’s broadest portfolio of all-flash, hybrid-flash, and object storage systems for a variety of on-premises storage solutions.
Customers who need specific storage locations are not limited to on-premises solutions, though. NetApp’s hybrid multi-cloud offerings also allow customers to choose among the top public cloud providers for a data storage location best suited to their business needs. NetApp has solutions available on Microsoft Azure, Amazon Web Services, and Google Cloud Platform, each of which offers options for data location choices.
Related resources:
- NetApp’s approach to global data privacy laws and the movement of data across national borders
- How we collect, use, process, store, transfer, and disclose personal information
- The terms and conditions that apply when we and our affiliates act as a processor and process customer personal data
- How the EU determines if a non-EU country has an adequate level of data protection
Encapsulating Security Payload (ESP) is a member of the Internet Protocol Security (IPsec) set of protocols that encrypt and authenticate the packets of data sent between computers using a Virtual Private Network (VPN). The focus and layer at which ESP operates make it possible for VPNs to function securely.
ESP is an Internet-layer security protocol: it secures traffic at the IP layer, whereas protocols such as Transport Layer Security (TLS) and Secure Shell (SSH) operate at the application layer.
Authentication Header (AH) is another IPsec member protocol. ESP and AH can operate between hosts and between networks, and each can run in two modes: the less secure Transport Mode, which encrypts only the packet’s payload and is used between two workstations running a VPN client, and the more secure Tunnel Mode, which encrypts the whole packet, including header and source information, and is used between networks.
“Security for a VPN involves IPsec, and with IPsec’s protocols of AH and ESP, the connection between a user and a network is secure. Going further, ESP can run in its more secure Tunnel Mode, offering the most privacy.”
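As a rough illustration of what ESP adds to a packet, the sketch below packs the on-the-wire layout described in RFC 4303: a 4-byte SPI, a 4-byte sequence number, the encrypted payload (which carries ESP’s padding and next-header trailer inside it), and an integrity check value (ICV). It is a layout sketch only, not a working IPsec implementation.

import struct

def esp_encapsulate(spi: int, seq: int, ciphertext: bytes, icv: bytes) -> bytes:
    header = struct.pack("!II", spi, seq)  # network byte order: SPI, sequence
    return header + ciphertext + icv

packet = esp_encapsulate(0x1000, 1, b"\x00" * 32, b"\x00" * 12)
print(len(packet))  # 8-byte header + 32-byte payload + 12-byte ICV = 52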
Guided Access is one of the most important accessibility features offered by Apple. It allows users to restrict the functionality of an iPhone, iPad, or iPod Touch to a single application, and to require a password to exit that application.
In this article, you’ll learn how Guided Access can be useful, how to activate it and deactivate it, and several important tips to keep in mind.
What Makes Guided Access Useful?
As of this writing, the most current Apple operating systems (iOS 13 and iPadOS 13) feature Guided Access. Guided Access is worth mastering because of its ability to easily restrict phone functionality.
After all, there are many circumstances when someone would only want to have access to a single application on a phone.
Uses for iPad or iPhone Guided Access might include:
- Giving a phone to a child and ensuring the child does not access adult content.
- Using the Guided Access feature on iPhones or iPod Touches as a study aid, as Guided Access can ensure a distraction-free environment.
- Using Guided Access on iPad to create a kiosk outside of business settings (such as creating a virtual guestbook for a baby shower).
- Legitimizing the use of Apple devices as calculators during academic exams and other evaluations.
Additionally, Guided Access allows the device owner to deactivate touch functionality on certain areas of the screen while Guided Access is active.
For children, this might be useful as a way of blocking specific interface elements of certain apps (for example, areas on screen that children could use to alter in-app settings).
If an individual with tremors or other physical challenges uses a device, this feature can be used to keep unintentional touches from disrupting device functionality.
The “no-touch zone” is visually indicated to the user with a grey color; this can be also used as a visual aid to indicate to users what parts of a given app are static and which are interactive.
Activating Guided Access
Activating Guided Access is quick and easy. Guided Access must first be activated in the Settings menu, where you can also choose a password for Guided Access functionality. (Please note that the default password for Guided Access is the same as the device password).
For instructions, please visit this link. Once this is done, Guided Access can be engaged at any time by triple-clicking a device’s home (or side, depending on device) button while using an app.
If you have other functionality (such as color inversion) associated with the triple-click action, you will need to manually select Guided Access from the menu that appears when you triple-click.
When you first enter an app and activate Guided Access, you will be asked if you wish to designate no-touch zones; you can do so by drawing one or more shapes of your choice with your finger. Alternatively, you can simply deactivate touch controls altogether.
Additionally, you will have the option to activate or deactivate features like the device’s physical buttons, as well as the virtual keyboard. You can also input a time limit, after which the device will return to its lock screen.
While in Guided Access, users cannot activate Siri, download new applications, or follow any links that would require a different application to open. This way, parents can have the security of knowing that children will not visit the App Store and make purchases during their Guided Access session.
Deactivating Guided Access
Guided Access can be turned off in several ways, depending on whether or not the device owner remembers their Guided Access password.
In order to turn off Guided Access (provided you remember the password), simply triple-click the same button used to activate Guided Access mode; enter the password, and your device will now work as normal.
This triple-click method will work even if you have chosen to deactivate the functionality of the button used for the triple-click.
If you have chosen for Guided Access to be timed, the device will display the device lock screen when time is up. From there, simply enter your standard device password, and the device will function normally.
Unfortunately, forgetting Guided Access passwords is common, leading users to ask how to get out of Guided Access on iPhone, iPad, and iPod Touch devices.
In this scenario, leaving Guided Access is slightly more complicated, but still entirely possible.
Even if all external buttons have been deactivated, Apple still allows every device to complete a “soft reboot” using a specific sequence of button presses.
Once the “soft reboot” is complete, the device will display the standard lock screen; use the device password to enter the device and resume normal functionality.
Some Important Considerations
The “soft reboot” method mentioned above can be used to bypass Guided Access if the user knows the device password. Therefore, the device owner must keep both the device and Guided Access passwords secret from the Guided Access user, as knowing either renders Guided Access moot.
If children are using devices with Guided Access, parents may be concerned that children will not be able to contact emergency services if Guided Access on iPhone will not turn off.
This is not the case, however. Once a device undergoes a “soft reboot,” the user can access emergency phone services from the device’s lock screen.
As a result, it may be worth teaching the Guided Access user about the “soft reboot” option, so if necessary, the user can call for help.
One emergency feature enabled by default on iPhones is that rapidly pressing the side button five times will quickly call emergency services.
Please note that while in Guided Access mode, this five-press shortcut does not work, regardless of which buttons are activated or deactivated.
Alternatives to Guided Access
Given the workarounds inherent to Guided Access, device users may wonder if other options are available.
Indeed, other options may be more suitable for devices that require prolonged regulation rather than a short period of restricted use.
For instance, Guided Access is infeasible for a child’s own phone or tablet if the child plans to use the device for multiple functions, or if the child knows the device password.
Apple offers Parental Controls (also known as Restrictions), an app designed to provide extensive control over a child’s device.
The app allows parents to allow or hide apps, set time limits on non-emergency usage, and more.
In addition to Parental Controls, there are many other ways to child-proof a phone or tablet. In particular, Android devices feature app pinning, a loose equivalent to Guided Access.
Finally, please note that Guided Access is only intended for use outside of business contexts. In order to keep business devices in single-application mode, mobile device management software works with Apple and Android devices in ways that Guided Access does not.
This enables greater control over device functionality. For example, SureMDM by 42Gears enables true single-application functionality on business-owned devices, such that the “soft reboot” workaround in Guided Access is not applicable for managed devices.
iPad, iPod Touch, and iPhone Guided Access is by no means a perfect way to restrict users to a single application, but it is still a very useful tool.
As long as one keeps the “soft reboot” workaround in mind, and considers implementing Parental Controls in place of Guided Access on devices for children, Guided Access is worth using in a range of situations.
What is network security?
Network security entails protecting a business’s data and resources from unauthorized access and cyber threats. It relies on various technologies and practices to keep information safe and ensure smooth network operations. Effective network security safeguards against data breaches and helps maintain operational efficiency and trust.
Typically, network security involves implementing firewalls, encryption, and access controls. It also requires monitoring a system for vulnerabilities, such as common SNMP security vulnerabilities.
Essential terms and concepts in network security
Understanding and implementing network security means familiarizing yourself with key terms and concepts. The definitions below represent foundational elements and tools that protect networks from threats and ensure secure operations.
- Firewall: A security system that monitors and controls incoming and outgoing network traffic based on predetermined security rules.
- Intrusion Detection System (IDS): A device or software that monitors network traffic for suspicious activity and potential threats.
- Intrusion Prevention System (IPS): Similar to IDS, but with the added capability to block or prevent detected threats.
- Virtual Private Network (VPN): A service that encrypts your internet connection to secure data transmission over less secure networks like public Wi-Fi.
- Secure Sockets Layer (SSL) / Transport Layer Security (TLS): Protocols that encrypt data sent over the internet, ensuring secure communications.
- Endpoint Security: Protection measures for individual devices connected to the network such as computers and mobile devices.
- Antivirus and Anti-malware: Software designed to detect, prevent, and remove malicious software.
- Encryption: The process of converting data into a coded format to prevent unauthorized access (a short illustration follows this list).
- Multi-Factor Authentication (MFA): A security method that requires more than one form of verification to access a system.
- Zero Trust Architecture: A security model that assumes no implicit trust and requires verification for every access request, regardless of its origin.
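To make one of these terms concrete, here is a toy symmetric-encryption example using Fernet from the third-party Python cryptography package; key handling is deliberately simplified here, and real deployments need proper key management.

from cryptography.fernet import Fernet  # pip install cryptography

# Symmetric encryption: the same key both encrypts and decrypts.
key = Fernet.generate_key()
cipher = Fernet(key)
token = cipher.encrypt(b"card ending 4242")  # unreadable without the key
print(cipher.decrypt(token))                 # b'card ending 4242'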
The importance of network security
Network security is vital for protecting sensitive information and maintaining a system’s integrity. It defends against unauthorized access and cyber threats such as malware, ransomware, and phishing attacks—all of which can lead to significant financial and operational damage.
Implementing effective network security prevents data breaches that could compromise customer and business data. It ensures compliance with industry standards and regulations, and it helps prevent disruptions and downtime, keeping systems reliable and efficient. Network security also protects an organization from the legal and regulatory penalties associated with data breaches, helping maintain its reputation and operational credibility.
By shielding an organization from potential threats and fostering trust with clients and partners, robust network security measures support long-term business success and stability.
Common network security threats
Recognizing and understanding common network security threats is crucial for developing effective defense strategies. Here are some of the most significant threats to be aware of:
- Malware and Ransomware: Malware is software specifically designed to disrupt, damage, or gain unauthorized access to computer systems. Ransomware is a type of malware that encrypts the victim’s data and demands payment for the decryption key. Both can cause severe damage by corrupting files, stealing sensitive information, or rendering systems unusable.
- Phishing Attacks: Phishing involves deceptive attempts to obtain sensitive information such as usernames, passwords, or financial details. This is typically done through fraudulent emails, text messages, or fake websites that trick individuals into divulging their credentials or downloading malicious attachments.
- Denial of Service (DoS) Attacks: DoS attacks aim to make a network or service unavailable to intended users by overwhelming it with excessive traffic. These attacks can degrade performance or cause a complete shutdown; either way, they disrupt business operations and harm productivity and revenue.
- Man-in-the-Middle (MitM) Attacks: In a MitM attack, an attacker intercepts and potentially alters the communication between two parties without their knowledge. This type of attack can involve capturing sensitive information, such as login credentials, or injecting malicious content into the communication stream. As a result, the security of the entire network is compromised.
- Insider Threats: Insider threats come from individuals within an organization who can access the network and its systems. These threats can be intentional (employees who deliberately steal or sabotage data), or unintentional (employees who inadvertently expose security vulnerabilities through careless actions or lack of awareness).
Understanding these threats allows organizations to prepare and implement strategies that protect their networks and data from harm.
How Atera enhances network security
Atera provides a robust set of tools designed to fortify network security and simplify IT management. The platform integrates advanced security features that proactively protect an organization’s network from threats and vulnerabilities. From real-time monitoring to automated patch management, Atera’s solutions address the complexities of modern cybersecurity challenges.
Here’s a closer look at how Atera enhances network security:
- Remote Monitoring and Management (RMM): RMM capabilities allow users to continuously monitor a network and its systems for unusual activity or potential threats. This enables the identification and resolution of issues before they escalate, ensuring real-time protection and minimizing risks.
- Patch Management: Keeping software and systems up-to-date is a crucial defense against known vulnerabilities. Patch management tools automatically deploy updates and patches across a network, reducing the risk of security breaches caused by outdated software.
- Integration with Security Tools: Atera seamlessly integrates with various third-party security tools and solutions. Users can leverage additional security measures and enhance their overall defense strategy while operating from a single, unified platform.
- Automated Alerts and Reporting: Atera sends automated alerts for suspicious activities or potential security threats. The platform also provides detailed reports and analytics. These insights help users make informed decisions that strengthen network defenses.
- Centralized Management: Atera’s centralized management system simplifies the administration of network security tasks. Consolidating various security functions into one platform allows users to efficiently oversee and manage security measures, streamline workflows, and ensure comprehensive coverage.
By leveraging Atera’s advanced features, users can enhance network security, protect critical systems, and maintain a secure and reliable IT environment.
Summing it all up
Network security is essential for protecting sensitive data, maintaining operational efficiency, and ensuring IT system’s reliability. Understanding key concepts, recognizing common threats, and implementing robust security measures helps you safeguard your network against potential risks.
That said, it’s essential to choose a network security tool that defends against cyber threats, supports business continuity, and builds trust with clients and partners. Atera’s advanced tools and features, including remote monitoring, patch management, and seamless integrations, provide comprehensive support for managing and enhancing network security. To explore how Atera can secure your network and streamline your IT management, try a free 30-day trial of the platform and see firsthand its effect on network security and IT operations.
The Science of Cooling Innovation
Get Our FREE Guide:
Super-Cooling Powers Supercomputing for Port d’Informacio Cientifica
Supercomputers can advance science and help mankind. But they can be hard on the environment, produce intense heat, and take up a lot of space. See how Port d’Informacio Cientifica (PIC) overcame these challenges and others through immersion cooling, laying the foundation for doing more great things while cutting energy usage.
Case Study Highlights:
- How this organization reversed rising energy consumption
- Overcoming the challenges of limited space
- Adding capacity while reducing power requirements by 30%
- Attaining an mPUE of 1.16
Discover how this organization’s success can be yours, too. Fill out the form at right to get your copy today!
In defining a constant with the Const statement, you specified a numeric value that is too small for the specified or default data type:
- The value is too small for the data type specified by the value's suffix character.
- If no suffix character is specified, the value is too small for a Double.
Const X = .1E-300!   ' Illegal because the value is too small for
                     ' the data type Single
Const X = .1E-300#   ' Legal
Change the suffix character to match the magnitude of the value, or specify a larger value.
If you’re looking for something to celebrate this Earth Day, consider this: despite steady increases in Internet-connected devices and data consumption, green computing initiatives across the technology sector have managed to prevent a corresponding rise in energy consumption. But that doesn’t mean companies should stop doing their part to reduce their CO2 footprint. On the contrary, as new technologies continue to enable more connectivity than ever, it’s increasingly important for businesses to offset and drastically reduce their resource use through green computing initiatives.
This post examines green computing and sustainable information and communication technologies (ICT) by discussing the rise of green computing and measures that reduce resource consumption, and by explaining how Azion takes progressive steps to minimize energy consumption through its architecture and products such as edge computing, serverless computing, and other advanced technologies.
What is Green Computing?
Techopedia defines green computing as measures that improve IT through “environmentally sustainable production practices, energy-efficient computers and improved disposal and recycling procedures.” The article breaks this wide array of practices down into four categories:
- Green use: minimizing the electricity consumption of computers and IT equipment
- Green disposal: recycling, repurposing, or appropriately disposing of unwanted equipment
- Green design: Improving the design of devices to make them more energy efficient
- Green manufacturing: minimizing manufacturing waste to reduce its environmental impact
Green computing is one way companies can meet their Environmental, Social and Corporate Governance (ESG) targets. Investopedia defines ESG as “a set of standards for a company’s operations that socially conscious investors use to screen potential investments.” The three areas of interest include environmental impact, commitment to social values, and ethical governance. As Investopedia notes, “Environmental criteria may include a company’s energy use, waste, pollution, natural resource conservation, and treatment of animals.”
Another related term, sustainable ICT, looks at ways that IT itself can be used to advance environmental initiatives. An example of this is how edge computing and 5G are being used to create smart meters that measure the use of resources like water and electricity in order to minimize usage and maximize resource efficiency.
What Are the Benefits of Green Computing?
As we move toward a more global and connected world, the benefits of protecting the planet and pursuing environmental sustainability cannot be overstated. Improving connectivity is an amazing tool for change, but the more connected we become, the more resources we produce and consume. In fact, a study from Nature magazine predicted that with current usage trends, information and communications technology could increase from producing 2% of all global emissions in 2018 to producing anywhere from 8% and 21% by 2030.
In addition to the obvious environmental benefits, green computing has a number of advantages for businesses themselves, particularly as environmentally responsible business practices are rewarded by consumers, investors, and governments alike. A recent Thoughtworks article noted that 50% of consumer-goods growth in the last five years has come from green products, that 70% of employees say they are more likely to work at a company with an environmentally friendly agenda, and that companies with sustainability initiatives show superior long-term stock performance compared to those without them. Furthermore, the same measures that improve resource efficiency, such as reducing large files and caching, also improve application performance. The article’s list of benefits of green computing includes:
- Meeting sustainability goals
- Reducing infrastructure costs
- Optimizing application performance
- Speeding development cycles with efficient code
- Satisfying eco-conscious investors
- Attracting and retaining eco-conscious talent
How Can Digital Businesses Go Green?
While the decision to go green may be an obvious one for many companies, the path toward sustainability may be less clear. Ultimately, businesses and providers have a shared responsibility to improve sustainability by reducing the sources of usage, which can be broken down into four areas:
- Servers
- Power and cooling
- Networking
- Storage drives
Fortunately, as technology becomes more sophisticated, the efficiency of each of these areas has improved considerably.
Servers
A 2020 article in Wired cited that from 2010 to 2018, “The typical computer server uses roughly one-fourth as much energy, and it takes roughly one-ninth as much energy to store a terabyte of data.” Moving servers out of on-premises data centers and into the cloud and edge has enabled virtualized and elastic resource use that improves server efficiency and, as a result, has reduced the power and cooling needed to keep those servers running.
Power and Cooling
In addition, power and cooling methods themselves have become more efficient. Hyperscale cloud data centers increasingly use renewable energy for power and innovative methods of power and cooling, such as recycled waste heat and submerging data centers underwater. In addition, as explained in an article by STL partners, “Arguably, an edge data centre may require less energy for cooling, relative to its output and size. This is known as “free” cooling and particularly relevant in cooler climates. A few racks of servers (edge “data centre”) would have a higher surface area per server than if the same size rack was being processed in a hyperscale data centre.”
Networking
Similarly, edge computing reduces the resources needed for networking, as it decreases overall traffic. By moving processing closer to where data is generated or needed, the time and energy needed to serve each request is reduced and bandwidth is improved. This makes networking more green because, as explained by LFE, “High bandwidth consumption is linked to high energy usage and high carbon emissions since it uses the network more heavily and demands greater power.”
Storage
Finally, storage devices today are considerably more energy efficient than their predecessors. Older data centers and CDNs often use an older type of storage device called hard disk drives (HDD) that read and write data using a mechanical process where data is stored magnetically on spinning disks, whereas newer facilities (such as Azion’s edge locations) use solid-state drives (SSDs) which store data digitally. This makes SSDs not only faster and more durable than HDDs, but also more resource-efficient. Intel observed in their report that, “SSDs commonly use less power and result in longer battery life because data access is much faster and the device is idle more often. With their spinning disks, HDDs require more power when they start up than SSDs.”
How Does Azion Reduce Energy Consumption?
Azion is built on the foundations of serverless and edge computing models, which make energy consumption more efficient. Not only does Azion’s Edge Application reduce bandwidth through measures such as image optimization, efficient caching, and load balancing; our serverless model also eliminates wasted resources by scaling automatically to meet demand. By running workloads when and where they are needed, companies do not need to overprovision resources to prepare for usage spikes; as a result, both the environment and the company benefit from less energy waste.
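As a back-of-the-envelope illustration of that point, the sketch below compares statically provisioning for the peak against capacity that follows demand; all of the numbers are invented.

# Requests arriving per interval (invented numbers).
demand = [2, 3, 10, 4, 2, 1]

static_slots = max(demand) * len(demand)  # always provisioned for the spike
elastic_slots = sum(demand)               # capacity scales with each tick

print(f"capacity actually used: {elastic_slots} request-slots")
print(f"static provisioning:    {static_slots} request-slots "
      f"({static_slots - elastic_slots} idle)")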
As Internet connectivity and usage increase, so must energy efficiency in the IT sector. Device manufacturers, service providers, digital businesses, and end users all must play a part in offsetting and reducing the environmental toll of increasingly data-hungry technology. By adopting and advancing renewable energy, innovative heating and cooling, energy-efficient hardware and software, and other green computing measures, companies that produce, consume, and process data can ensure that next-generation technologies reduce, rather than increase, energy consumption. The more energy we use, the more important it is to remember that with great power comes great responsibility.
What is password spraying? How to prevent password spraying attacks
What is a password spraying attack?
Password spraying is a type of brute force attack in which a malicious actor tries the same password on multiple accounts before moving on to another one. Password spraying attacks are often effective because many users choose simple, easy-to-guess passwords such as “password” or “123456.”
In many organizations, users are locked out after a certain number of failed login attempts. Because password spraying attacks involve trying one password against multiple accounts, they avoid the account lockouts that typically occur when brute forcing a single account with numerous passwords.
A particular feature of password spraying – as the word ‘spraying’ implies – is that it can target thousands or even millions of different users at once, rather than just one account. The process is often automated and can take place over time to evade detection.
Password spraying attacks often take place where the application or admin within a particular organization sets a default password for new users. Single sign-on and cloud-based platforms can also prove particularly vulnerable.
While password spraying might seem simplistic compared to other types of cyber attacks, even sophisticated cybercrime groups use it. For example, in 2022, the US Cybersecurity & Infrastructure Security Agency (CISA) issued an alert about state-sponsored cyber actors, listing various tactics they use to gain access to targeted networks – and password spraying was included.
How does a password spraying attack work?
Password spraying attacks typically involve these stages:
Step 1: Cybercriminals buy a list of usernames or create their own list
To initiate a password spraying attack, cybercriminals often start by buying lists of usernames – lists which have been stolen from various organizations. It’s estimated that there are over 15 billion credentials for sale on the dark web.
Alternatively, cybercriminals may create their own list by following the formats that corporate email addresses typically follow – for example, firstname.lastname@company.com – and using a list of employees obtained from LinkedIn or other public information sources.
Cybercriminals sometimes target specific groups of employees—finance, administrators, or the C-suite – since targeted approaches can yield better results. They often target companies or departments using single sign-on (SSO) or federated authentication protocols – that is, the ability to log in to Facebook with your Google credentials, for example – or that have not implemented multi-factor authentication.
Step 2: Cybercriminals obtain a list of common passwords
Password spraying attacks incorporate lists of common or default passwords. It’s relatively straightforward to find out what the most common passwords are – various reports or studies publish them each year, and Wikipedia even has a page which lists the most common 10,000 passwords. Cybercriminals may also do their own research to guess passwords – for example, by using the name of sports teams or prominent landmarks local to a targeted organization.
Step 3: Cybercriminals try out different username/password combinations
Once the cybercriminal has a list of usernames and passwords, the aim is to try them until finding a combination that works. Often, the process is automated with password spraying tools. Cybercriminals use one password for numerous usernames, and then repeat the process with the next password on the list, to avoid falling foul of lockout policies or IP address blockers which restrict login attempts.
Impact of password spraying attacks
Once an attacker accesses an account via a password spraying attack, they will be hoping that it contains information valuable enough to steal, or that it carries permissions they can use to weaken the organization’s security measures and reach even more sensitive data.
Password spraying attacks, if successful, can cause significant damage to organizations. For example, an attacker using apparently legitimate credentials can access financial accounts and make fraudulent purchases. If undetected, this can become a financial burden on the affected business, and recovery from a cyberattack can take months or more.
As well as harming an organization’s finances, password spraying can significantly slow down or disrupt daily operations. Malicious companywide emails can reduce productivity, and an attacker who takes over a business account could steal private information, cancel purchases, or change the delivery dates for services.
And then there is reputational damage – if a business is breached in this way, customers are less likely to trust that their data is safe with that company. They may take their business elsewhere, causing additional harm.
Password spraying example
"I was asked to change my password when my bank was targeted by a password spraying attack. Malicious actors were able to try millions of username and password combinations against the bank's customers - and unfortunately, I was one of them."
Password spraying vs brute force
A password spraying attack attempts to access a large volume of accounts with a few commonly used passwords. By contrast, brute force attacks attempt to gain unauthorized access to a single account by guessing the password – often using large lists of potential passwords.
In other words, brute force attacks try many passwords against each username, while password spraying tries one password at a time against many usernames. They are two different ways of performing authentication attacks.
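To make the contrast concrete, here is a hedged, purely illustrative Python sketch that runs both patterns against an in-memory mock rather than any real service; all account names and passwords are invented.

```python
# Purely illustrative: both patterns run against a local mock dictionary,
# not a real authentication service. All data here is invented.
MOCK_ACCOUNTS = {"alice": "Winter2024!", "bob": "hunter2", "carol": "Spring2024!"}

def brute_force(username, candidate_passwords):
    """Brute force: MANY passwords against ONE username.
    Repeated failures on one account quickly trip lockout policies."""
    for password in candidate_passwords:
        if MOCK_ACCOUNTS.get(username) == password:
            return password
    return None

def password_spray(usernames, common_password):
    """Spraying: ONE password against MANY usernames.
    Each account sees at most one failure per round, so simple
    per-account lockout thresholds rarely trigger."""
    return [u for u in usernames if MOCK_ACCOUNTS.get(u) == common_password]

print(brute_force("bob", ["123456", "password", "hunter2"]))  # 'hunter2'
print(password_spray(list(MOCK_ACCOUNTS), "Winter2024!"))     # ['alice']
```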
Signs of a password spraying attack
Password spraying attacks typically cause frequent, failed authentication attempts across multiple accounts. Organizations can detect password spraying activity by reviewing authentication logs for system and application login failures of valid accounts.
Overall, the main signs of a password spraying attack are:
- A high volume of login activity within a short period.
- A spike in failed login attempts by active users.
- Logins from non-existent or inactive accounts.
How to defend against password spraying attacks
Organizations can protect themselves from password spraying attacks by following these precautions:
Implement a strong password policy
By enforcing the use of strong passwords – long, complex, and unique to each account – IT teams can minimize the risk of password spraying attacks.
Set up login detection
IT teams should also implement detection for login attempts to multiple accounts that occur from a single host within a short time period – as this is a clear indicator of password spraying attempts.
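A minimal detection sketch, assuming a simple log schema: it groups failed logins by source host and flags any host that fails against many distinct accounts within a short window. The field names and thresholds are assumptions to adapt to your own SIEM or log format.

```python
from collections import defaultdict
from datetime import timedelta

WINDOW = timedelta(minutes=10)
DISTINCT_ACCOUNTS = 15  # assumption: tune to your environment's baseline

def flag_spraying_hosts(events):
    """events: iterable of (timestamp, source_ip, username, success).
    Returns source IPs that failed against many distinct accounts in WINDOW."""
    failures = defaultdict(list)  # ip -> [(timestamp, username), ...]
    for ts, ip, user, success in sorted(events):
        if not success:
            failures[ip].append((ts, user))
    suspicious = set()
    for ip, attempts in failures.items():
        for i, (start, _) in enumerate(attempts):
            users_in_window = {u for ts, u in attempts[i:] if ts - start <= WINDOW}
            if len(users_in_window) >= DISTINCT_ACCOUNTS:
                suspicious.add(ip)
                break
    return suspicious
```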
Ensure strong lockout policies
Setting a suitable threshold for the lockout policy at domain level defends against password spraying. The threshold needs to strike a balance between being low enough to prevent attackers from making multiple authentication attempts within the lockout period, but not so low that legitimate users are locked out of their accounts for simple errors. There should also be a clear process for unlocking and resetting verified account users.
Adopt a zero trust approach
A cornerstone of the zero trust approach is providing access to only what is required at any given time to complete the task at hand. Implementing zero trust within an organization is a key contribution towards network security.
Use a non-standard username convention
Avoid obvious username formats like john.doe or jdoe – the most common naming conventions – for anything other than email. Using separate, non-standard logins for single sign-on accounts is one way to evade attackers.
Require biometric authentication
To prevent attackers from exploiting the potential weaknesses of alphanumeric passwords, some organizations require a biometric login. Without the person present, the attacker can't log in.
Look out for patterns
Make sure any security measures in place can quickly identify suspicious login patterns, such as a large volume of accounts attempting to log in simultaneously.
Using a password manager can help
Passwords are intended to protect sensitive information from bad actors. However, the average user today has so many passwords that it can be difficult to keep track of them all – particularly as each set of credentials is supposed to be unique.
To try to keep track, some users make the mistake of using obvious or easy-to-guess passwords, and often use the same password across multiple accounts. These are precisely the type of passwords that are vulnerable to password spraying attacks.
Attacker capabilities and tools have evolved considerably in recent years. Computers are much faster today at guessing passwords. Attackers use automation to attack password databases or online accounts. They have mastered specific techniques and strategies that yield more success.
For individual users, a password manager, such as Kaspersky Password Manager, can help. Password managers combine complexity and length to generate hard-to-crack passwords. They also remove the burden of remembering different login details, and many will flag when the same password is reused across services. They are a practical way for individuals to generate, manage, and store unique credentials.
Today, businesses are looking for ways to work smarter and greener. Despite this, many still use tons of paper, which costs a lot of money and hurts the environment. There's a better way to do things: go paperless with the help of document scanning and document management systems (DMS). Going digital reduces clutter and waste, makes businesses run smoother, and saves them money. In this article, we'll dive into eight shocking facts about how much paper businesses waste and how switching to digital can make a big difference.
8 Shocking Facts About Paper Waste in Businesses and Solutions
Fact 1: Businesses waste an enormous amount of paper every year
Statistics reveal that an average office worker in the U.S. uses around 10,000 sheets of paper annually, contributing significantly to paper waste. The solution lies in document scanning and DMS, which convert these paper documents into digital formats, substantially reducing physical waste and conserving resources.
Fact 2: Paper waste is costly for businesses
The financial toll of paper use—and waste—on businesses encompasses not just the purchase price but also associated costs for storage, printing, and disposal. Transitioning to a DMS allows companies to avoid these expenses, as digital documents negate the need for physical storage space or printing supplies.
Fact 3: Paper clutter leads to inefficiency
Research indicates that the average office worker spends at least two hours a day searching for needed documents, which impacts productivity. Digital documents, organized and easily accessible through a DMS, can streamline workflows and eliminate physical clutter.
Fact 4: Paper production and disposal have profound environmental impacts
Producing and disposing of paper significantly contributes to carbon emissions and deforestation. Adopting digital solutions allows businesses to dramatically reduce their carbon footprint and the environmental impact of traditional paper practices.
Fact 5: Excessive paper usage lowers workplace productivity
Handling and processing excessive paper can decrease workplace productivity by up to 21%. Digital files, easily organized, searched for, and shared through a DMS, enhance efficiency and business operations.
Fact 6: A large portion of landfill waste comes from paper
Approximately 85 million tons of paper waste is created each year. By moving to digital archiving, companies eliminate the need for paper disposal and promote environmental sustainability by reducing waste production.
Fact 7: A significant amount of paper in businesses is not recycled
Less than 70% of the paper used in businesses undergoes recycling. Implementing a paperless system with DMS ensures sustainable document management without relying on physical recycling methods.
Fact 8: Paper documents pose security risks
Mishandling paper documents can lead to security and compliance risks. DMS offers secure storage, access controls, and audit trails, enhancing document protection and meeting compliance requirements.
These statistics and findings stress the critical need for businesses to reconsider their reliance on paper. They highlight the efficiency, cost benefits, and environmental advantages of shifting to digital document management.
Benefits of Going Paperless
Transitioning to a paperless office offers many benefits for businesses, impacting everything from operational efficiency to environmental sustainability. Here are the key advantages:
Increased Efficiency and Productivity
By adopting digital document management systems, businesses can streamline their operations. Employees spend less time searching for documents and more time on productive tasks, significantly boosting overall efficiency.
Cost Savings
Going paperless reduces the need for physical storage, printing, and paper purchasing. The savings extend beyond tangible resources to include reduced document retrieval and management expenses.
Environmental Sustainability
A paperless office greatly diminishes the need for paper, directly contributing to fewer trees being cut down for paper production. It also reduces the carbon footprint of paper manufacturing and waste, supporting broader environmental sustainability efforts.
Enhanced Security and Compliance
Digital documents are more secure and easier to manage in compliance with privacy laws and regulations. Document management systems offer advanced security features such as access controls, encryption, and audit trails, ensuring that sensitive information is protected against unauthorized access.
How Your Business Can Reduce Paper Waste
Businesses can take several steps to reduce paper usage effectively:
- Adopt document scanning: Converting existing paper documents into digital formats eliminates physical clutter and facilitates easier document management.
- Implement a DMS: A document management system organizes, stores, and secures digital documents, making them easily accessible and reducing the need for paper-based files.
- Foster a digital-first culture: Encouraging digital communication and document sharing within the organization can significantly reduce the reliance on paper.
How Can Scan to Zero Help Organizations Go Paperless?
The "Scan to Zero" approach is an innovative method for businesses looking to transition towards a paperless environment efficiently and sustainably. It's designed to tackle the challenge of existing paper documents that clutter offices and complicate document management processes. Here’s how it can transform your organization:
Phased Approach to Digitization
"Scan to Zero" takes a phased approach to systematically digitizing existing paper documents. This method doesn't overwhelm the business processes but integrates smoothly, ensuring ongoing operations are not disrupted. Businesses can prioritize documents based on their immediate relevance or compliance requirements, scanning and converting them into digital formats in a structured manner.
Reducing Paper Dependency
As more documents are digitized, the organization's dependency on paper gradually decreases. This shift reduces the physical space required for storing documents and lowers the costs associated with printing, photocopying, and paper supplies. Over time, the organization's workflow becomes increasingly digital, paving the way for a fully paperless office.
Enhancing Document Accessibility and Security
Digitized documents are easier to store, search, and retrieve, significantly enhancing operational efficiency. Moreover, document management systems (DMS) offer robust security features, such as access controls and encryption, to protect sensitive information. This digitization process also supports compliance with data protection regulations by providing detailed audit trails for document access and modifications.
Supporting Environmental Sustainability
The move towards digitization inherently supports environmental sustainability goals. By reducing paper use, businesses cut deforestation and the carbon footprint of paper production and waste. This benefits the organization's sustainability credentials and aligns with the growing consumer expectation for businesses to operate responsibly.
The shift towards a paperless office marks a critical step for businesses aiming to boost efficiency, cut costs, and support environmental sustainability. With shocking statistics about paper waste driving the urgency for change, digital solutions like document scanning and document management systems (DMS) offer a clear path forward. These technologies streamline operations and enhance data security and compliance, showcasing the multifaceted benefits of going digital.
Is your business ready to reduce paper waste and embrace a more sustainable, efficient future? Contact MES Hybrid Document Systems today for more information on how our document scanning and management solutions can transform your operations. | <urn:uuid:56874cf1-37d3-47f8-88c8-b5da6d037819> | CC-MAIN-2024-38 | https://blog.mesltd.ca/go-paperless-8-shocking-facts-about-paper-waste-in-businesses | 2024-09-18T01:06:05Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651835.68/warc/CC-MAIN-20240918000844-20240918030844-00241.warc.gz | en | 0.918422 | 1,365 | 2.859375 | 3 |
Historically, access control was partially accomplished through keys and locks. When a door is locked, only someone with a key can enter through the door, depending on how the lock is configured. Mechanical locks and keys do not allow restriction of the key holder to specific times or dates. Mechanical locks and keys do not provide records of the key used on any specific door, and the keys can be easily copied or transferred to an unauthorized person. When a mechanical key is lost, or the key holder is no longer authorized to use the protected area, the locks must be re-keyed.
With the advent of technology, electronic access control (EAC) uses computers to solve the limitations of mechanical locks and keys. A wide range of credentials can be used to replace mechanical keys. The electronic access control system grants access based on the credential presented. The door is unlocked for a predetermined time when access is granted, and the transaction is recorded. When access is refused, the door remains locked, and the attempted access is recorded. The system will also monitor the door and alarm if the door is forced open or held open too long after being unlocked.
Depending on the modes and usability, access control can be segmented into four (4) different categories:
- Discretionary Access Control (DAC)
- Mandatory Access Control (MAC)
- Role-Based Access Control (RBAC)
- Physical and Virtual Access
Physical Access Control
Physical access control, as the name suggests, regulates access through physical entry points such as doors and turnstiles after identity authentication and authorization. Access to offices, buildings, rooms, and physical IT assets is restricted via physical access control.
Virtual / Logical Access Control
Virtual access control, or logical access control, is a critical aspect of modern security systems. It governs access or connection to computer networks, system files, and data. This form of access control is becoming increasingly important as organizations digitize and move more of their operations online. Organizations like OLOID are at the forefront of this shift, leading the unification of cyber and physical identities to enhance security and provide a better user experience. Their solutions transform every door, every turnstile, and every access point into a secure, smart, and digitally accessible point. This ensures the security of your organization’s data, automates business processes, and elevates the employee experience. For more information on how OLOID revolutionizes virtual access control, visit Cyber-Physical Access.
Why is Access Control important?
Most organizations set up on-prem access control systems with multi-factor authentication capabilities to secure their workplace. However, despite many measures, we frequently see reports of high-profile data leaks and security breaches. For instance, CSO Online provides a comprehensive list of some of the biggest data breaches in the 21st century, affecting millions of users. These instances highlight the importance of robust access control systems in protecting sensitive data.
On the face of it, access control is all about who gets access, when they get access, how they get access, and under what conditions. However, it's not an easy task to accomplish. A robust and modern physical access control system (PACS) must help secure the premises, provide audit trails, and help you identify potential loopholes in the physical access space.
Modern access control solutions, like the M-Tag by a leading security solutions provider, help augment your existing PACS by converging the physical and cyber identities. These solutions leverage cloud-based security, data encryption, and enriching data intelligence to provide a secure and seamless access experience.
Regarding access control systems, what was once aspirational has become a must-have today. And therefore, it’s important that your PACS represent agile technologies and are mobile, scalable, and risk-averse.
Aspects of Access Control Systems
Your access control systems are foundational to your organizational security. To any PACS, there are three key aspects.
Identification
To create an effective PACS, you must first identify the individual. This is where a badge reader, facial recognition, or biometric authentication panel can be installed. Facial recognition or a biometric system can record the name of the individual accessing a door or an endpoint. Most organizations use one or more of these identification systems to control access in the workplace.
Authentication
Once the system recognizes the individual trying to access it, their authenticity is assessed. Various methods can be used for authentication, such as facial recognition, biometrics, QR codes, badges, mobile access, single sign-on (SSO), and Security Assertion Markup Language (SAML). Most organizations use more than one of these modes, or a combination of them, to ensure better security.
Using more than one authentication method to gain access qualifies as multi-factor or multimodal authentication.
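For example, a second factor can be verified in just a few lines. The sketch below uses the open-source pyotp library to check a time-based one-time password (TOTP); the secret here is generated on the fly purely for illustration, whereas a real deployment would provision and store it securely per user.

```python
import pyotp  # pip install pyotp

# Illustration only: in production the secret is provisioned once per user
# and kept in a secure store, never generated inline like this.
user_secret = pyotp.random_base32()
totp = pyotp.TOTP(user_secret)

print("Code currently shown in the user's authenticator app:", totp.now())

def second_factor_ok(submitted_code: str) -> bool:
    # valid_window=1 tolerates one 30-second step of clock drift
    return totp.verify(submitted_code, valid_window=1)

print(second_factor_ok(totp.now()))  # True
```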
Authorization
Once the user's identity is authenticated by the badge reader, they are authorized to go through the access point. This is an authorization system in which individuals in the same building can access different rooms and spaces. In a logical space, this takes the form of a role-based permissions system.
These three aspects of the PACS work together to provide a robust security infrastructure. They work as designed and in a way that no unauthorized person can access the physical spaces under their purview.
Bypassing a well-built security system is not easy. Identification is required both for accounting (i.e., recording user behavior) and as the basis for authentication – whether for general access to a common room or, in more specific cases, to a server room.
Authentication prevents an unidentifiable, unauthenticated person from entering the restricted areas.
A PACS is an essential component of providing physical access to an establishment. Advancements and innovations have led us to build systems with cutting-edge technologies to set up intelligent identification, authentication, and authorization systems.
What are the three (3) types of access control?
There are three main types of access control systems: Discretionary Access Control (DAC), Mandatory Access Control (MAC), and Role-Based Access Control (RBAC). DAC is the least restrictive and allows owners to control access to their own data. MAC is the most restrictive and does not allow owners to control access. RBAC assigns access based on the role of the user within the organization.
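As a hedged sketch of the RBAC model described above, the snippet below maps roles to permissions and grants access only through role membership; the role and permission names are invented for illustration.

```python
# Minimal RBAC sketch: permissions attach to roles, never directly to users.
ROLE_PERMISSIONS = {
    "accounting": {"ledger:read", "ledger:write"},
    "marketing": {"campaigns:read", "campaigns:write"},
    "auditor": {"ledger:read", "campaigns:read"},
}
USER_ROLES = {"dana": {"accounting"}, "evan": {"auditor"}}

def is_authorized(user: str, permission: str) -> bool:
    """A user is authorized if any of their roles carries the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

assert is_authorized("dana", "ledger:write")
assert not is_authorized("evan", "ledger:write")  # auditors may only read
```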
What are examples of access controls?
Examples of access controls include physical controls like locks and biometric scanners, and logical controls like passwords, network access control systems, and data encryption. More advanced systems may use multi-factor authentication, which requires users to present multiple credentials for access.
What are the five (5) areas of access control?
The five areas of access control are identification, authentication, authorization, access approval, and accountability. Identification involves recognizing an entity in the system. Authentication verifies the identity. Authorization involves granting or denying rights to access resources. Access approval involves the process of approving access to a system. Accountability involves tracking the actions of a user. | <urn:uuid:c660df81-46c0-4671-ac6c-c93dd76b83f1> | CC-MAIN-2024-38 | https://www.oloid.ai/blog/what-is-access-control-what-are-the-different-aspects-of-access-control-systems/ | 2024-09-18T01:25:58Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651835.68/warc/CC-MAIN-20240918000844-20240918030844-00241.warc.gz | en | 0.914101 | 1,437 | 2.90625 | 3 |
Understanding what IoT Devices are and what they do can help you make informed decisions for your organization. Here is what you need to know about RFID and RFID asset tracking.
As a leading provider of global RFID asset and inventory tracking software, we understand the importance of staying informed about the latest IoT tracking devices and technologies. In today's world, smart devices have become a ubiquitous part of our daily lives, ranging from smartphones and home appliances to sensors and beyond. Thanks to advances in information technology and electromechanical components, these devices are now capable of processing and computing data quickly and efficiently. One of these technologies is Radio Frequency Identification (RFID).
What is RFID and how does it work?
Radio Frequency Identification (RFID) technology automatically identifies and tracks objects in a wide range of industries. RFID tags contain microchips that store information about the object they are attached to. These can then be read remotely by a scanning device using radio waves and electromagnetic fields. The RFID tag has a built-in antenna that communicates to a scanning device that reads the data remotely. The data is then transferred from the scanning device to the enterprise application software that houses the data. Each RFID tag has its own unique identifying number.
RFID also records the movement of assets and personnel. You've probably encountered RFID tags on vehicle windshields at toll scans, or even in newer biometric passports. RFID is becoming increasingly popular as a replacement for barcodes in scanning and tracking.
The components of RFID
Using radio waves and electromagnetic fields to send data, RFID tracking systems consist of three main components:
Component #1 – RFID tags: There are two types of RFID tags: passive and active. Passive tags are popular in retail settings since they do not require a power supply. Active tags have their own power source and can collect more detailed information about the object they are attached to. RFID tags can be attached to objects or embedded in devices like cameras and GPS sensors, allowing you to identify and locate them easily.
Component #2 – the RFID reader: An RFID reader is a device that scans RFID tags and collects information about the asset or inventory item attached to the tag. These readers can be hand-held or wired, and work with USB and Bluetooth.
Component #3 – the RFID applications or software: RFID inventory or asset tracking software controls and monitors RFID tags associated with your assets. This can be a mobile application or a standard software package. Most of the time the two work in conjunction. For example, Apptricity’s RFID asset tracking software allows users to assign tags via our mobile application or through the software. The software receives the information from the reader’s scans, relaying that data to the user for up to real-time asset and inventory tracking!
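To show how the three components meet in software, here is a hedged sketch of ingesting reader scan events and maintaining each tag's last-known location; the event fields and in-memory store are invented stand-ins for a real tracking system's schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ScanEvent:
    tag_id: str       # the tag's unique identifying number
    reader_id: str    # which reader saw the tag
    seen_at: datetime

# In-memory stand-in for the tracking software's database
asset_locations = {}

def ingest(event: ScanEvent) -> None:
    """Keep the most recent reader that saw each tag as its last-known location."""
    last = asset_locations.get(event.tag_id)
    if last is None or event.seen_at > last[1]:
        asset_locations[event.tag_id] = (event.reader_id, event.seen_at)

ingest(ScanEvent("TAG-0042", "dock-door-3", datetime.now(timezone.utc)))
print(asset_locations["TAG-0042"])  # ('dock-door-3', <timestamp>)
```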
What are the benefits of RFID technology?
RFID tracking technology offers numerous benefits, including increased efficiency and accuracy in tracking, real-time visibility in managing inventory and assets, reduced labor costs, improved supply chain management, and enhanced security.
Increased efficiency and accuracy in tracking
Since this technology uses radio waves to identify and track assets, it provides a much faster and more accurate tracking process than traditional methods. With RFID, enterprises can receive real-time scan data the moment that readers scan assets, inventory, and even personnel. If implementing with a proven infrastructure, such as Apptricity’s enterprise-level tested system, your team can track all of these in real-time. This eliminates the need for manual data entry, reducing errors and improving overall efficiency.
Real-time visibility in managing inventory and assets
With RFID tags, enterprises can track location, movement, lifecycle, custodianship history, and more data in real-time. This greatly improves asset and inventory visibility. In addition, Apptricity’s technology has the ability to pair with sensor data such as light sensitivity, pressure, temperature, etc. This enables enterprises to make informed decisions about inventory management and asset utilization. By acting upon the data, enterprises reduce the risk of stockouts and increasing productivity.
Reduced labor costs
RFID tracking technology eliminates the need for manual data entry, reducing labor costs associated with tracking inventory and assets. This is especially true when considering cycle counting. Employees spend lots of time locating lost or misplaced assets. During cycle counts, this time increases as employees manually search and scan every asset and inventory item. With an automated RFID tracking solution, the process becomes streamlined. Employees get to focus on more important tasks, such as optimizing inventory, rather than spending time manually inputting data.
Improved supply chain management
In addition, this technology provides enterprises with end-to-end visibility in their supply chain, from raw materials to finished goods. With RFID, businesses can easily track the movement of their goods, including where they are in the supply chain, their shipment status, and their expected time of arrival. Apptricity’s RFID inventory tracking software can scan items even within vehicles. This provides inventory managers with real-time updates on how much inventory is onboard vehicles. The software also sends alerts the moment inventory exits a vehicle or falls below user-defined thresholds. When these thresholds break, managers use the software to automate purchase orders from preferred vendors. This level of visibility enables enterprises to optimize their entire supply chain processes, from purchase to retirement or sale. The whole process becomes easily visible and manageable with Apptricity.
Enhanced security
RFID tracking technology can help enhance security by allowing businesses to monitor the movement of assets and personnel in real-time. Simply put, RFID tags can track which employees have access to certain areas, ensuring only authorized personnel can enter. They can also restrict personnel from checking out tools or equipment they are not cleared to operate, preventing injuries and ensuring employee safety.
This technology can also help identify potential theft or loss of assets. Missing items become easier to track and recover. Users can also establish geofences to restrict assets from leaving specific locations. The same users can receive alerts as these assets get closer to their geofences. The same goes for personnel, which is especially important for large enterprises where safety is a chief concern.
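A minimal geofence check, assuming a circular zone and GPS-style coordinates; the center, radius, and alert action are placeholders rather than any vendor's actual API.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

FENCE_CENTER = (32.7767, -96.7970)  # hypothetical warehouse location
FENCE_RADIUS_KM = 0.5

def check_geofence(asset_id, lat, lon):
    if haversine_km(*FENCE_CENTER, lat, lon) > FENCE_RADIUS_KM:
        # A real deployment would raise an alert in the tracking software
        print(f"ALERT: {asset_id} has left its geofence")

check_geofence("FORKLIFT-7", 32.7900, -96.8100)  # prints an alert
```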
How do I get started with RFID tracking?
Getting started with RFID tracking involves selecting the right tags, readers, and software for your specific needs. It also requires implementation of the RFID tracking hardware to pair easily with the tracking software, to give users an easy asset and inventory management experience. The best place to begin RFID tracking is right here! The US Army, US Air Force, Verizon, Komatsu, Brinks, and other Fortune companies trust Apptricity to help them automate their supply chain management. You can start by watching a demo to see the software in action!
Where can I find RFID readers?
RFID readers can be found through a variety of sources, including online retailers and specialized RFID hardware providers. Apptricity offers a range of RFID readers and related hardware. Apptricity’s patented IoT edge devices are the fastest tracking edge solutions on the market. Named the “I-Connect Controller,” this series reads RFID and BLE tags and transmits that data to the cloud via secure WiFi, LTE, or ethernet connection. These track assets and inventory, both in-transit and static environments and report to the Apptricity asset and/or inventory tracking software in real-time.
What is a RFID inventory tracking system?
A RFID inventory tracking system is a software solution that allows you to track and manage inventory using RFID tags and readers. It typically includes features such as real-time tracking, automated data capture, and analytics tools for reporting and analysis. Apptricity’s RFID inventory tracking system is synonymous with asset tracking and supply chain management. Where other companies may focus on the overall record of your assets, inventory, and equipment, Apptricity is your boots on the ground. With our RFID inventory tracking system, you can track items in up to real-time along with important data such as lifecycle, ownership history, geofence capability, mapping, details for each item, and so much more.
What is the difference between RFID inventory tracking versus RFID asset tracking?
RFID inventory tracking focuses on tracking inventory items – items that will eventually be sold to a consumer. RFID asset tracking covers a broader range of assets, including equipment, tools, and vehicles. In other words, these assets are not intended for sale, so data such as lifecycle, upcoming warranties, and maintenance schedules becomes the priority. Apptricity's tracking solutions are customizable to the needs of your enterprise. You can track these data points for any asset or inventory item in your supply chain, whether in transit, in a warehouse, or in another enclosed location.
What is RFID employee tracking?
RFID employee tracking refers to the use of RFID technology to monitor the location and movement of employees within a facility or worksite. This improves security and safety while also increasing productivity and efficiency. In situations where clearance is important, Apptricity's tracking solutions offer the best RFID employee tracking solution. By enabling geofences within the software, you can receive alerts anytime an employee is not where they should be. In addition, if check-in/out is a priority for your team, users can configure permissions to restrict personnel from checking out items they are not authorized to take. This keeps your team safe, reduces shrinkage, and enhances security.
What is RFID asset tracking software?
As stated earlier, RFID asset tracking, in Apptricity terms, is synonymous with RFID inventory tracking. The main differentiation is the data associated with the items you are tracking. For example, you may track lifecycle data for your assets. On the other hand, you may want to use RFID to track how fast inventory items leave certain facilities. Both are achievable with Apptricity’s RFID tracking software. We give your enterprise real-time, 360° visibility for all assets and inventory, so you can focus on other important tasks.
Where can I get reliable RFID asset tracking software?
To get started with RFID asset tracking software, you will need to select a solution that meets your specific needs and requirements, and work with a provider to implement and configure the software for your organization. Whether you want to track assets and inventory within one warehouse or one thousand, Apptricity scales with your business by offering real-time RFID tracking. If you eventually need to upgrade to Bluetooth tracking or Satellite GPS or Satellite LTE, we do that too! Get everything you need in one easy-to-use tracking solution.
Why is Apptricity the best RFID asset tracking software?
In Apptricity tracking terms, “asset,” “inventory,” and “equipment” are interchangeable. With this in mind, Apptricity is considered the best RFID asset tracking software because of its accuracy, reliability, and ease of use. Trusted by the US Army, US Air Force, Verizon, Brinks, Komatsu, and more Fortune companies, Apptricity leverages a combination of RFID tracking technology, Bluetooth® Low Energy (BLE) beacons, patented edge devices, and proprietary software to provide real-time tracking of assets and inventory. Apptricity’s system boasts up to real-time location tracking, which is critical for enterprises needing to track assets, inventory, or their fleets. Additionally, the system easily integrates with other enterprise systems, such as enterprise resource planning (ERP) to provide a comprehensive solution for RFID asset tracking. Overall, Apptricity’s RFID Asset Tracking software solution is a top choice for enterprises looking to improve their tracking capabilities.
Contact us today to learn more about how RFID technology can benefit your organization. | <urn:uuid:ba6d9c3b-c2b9-4e5b-ab59-87e2c8954f69> | CC-MAIN-2024-38 | https://www.apptricity.com/iot-devices-what-you-need-to-know-about-rfid/ | 2024-09-19T07:30:11Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651995.50/warc/CC-MAIN-20240919061514-20240919091514-00141.warc.gz | en | 0.920824 | 2,481 | 2.796875 | 3 |
Many of us would concede that buildings housing data centers are generally pretty ordinary places. They’re often drab and bunker-like with few or no windows, and located in office parks or in rural areas. You usually don’t see signs out front announcing what they are, and, if you’re not in information technology, you might be hard pressed to guess what goes on inside.
If you’re observant, you might notice cooling towers for air conditioning and signs of heavy electrical usage as clues to their purpose. For most people, though, data centers go by unnoticed and out of mind. Data center managers like it that way, because the data stored in and passing through these data centers is the life’s blood of business, research, finance, and our modern, digital-based lives.
That’s why the exceptions to low-key and meh data centers are noteworthy. These unusual centers stand out for their design, their location, what the building was previously used for, or perhaps how they approach energy usage or cooling.
Let’s take a look at a handful of data centers that certainly are outside of the norm.
The Underwater Data Center
Microsoft’s rationale for putting a data center underwater makes sense. Most people live near water, they say, and their submersible data center is quick to deploy, and can take advantage of hydrokinetic energy for power and natural cooling.
Project Natick has produced an experimental, shipping-container-size prototype designed to process data workloads on the seafloor near Scotland’s Orkney Islands. It’s part of a years-long research effort to investigate manufacturing and operating environmentally sustainable, prepackaged datacenter units that can be ordered to size, rapidly deployed, and left to operate independently on the seafloor for years.
The Supercomputing Center in a Former Catholic Church
One might be forgiven for mistaking Torre Girona for any normal church, but this deconsecrated 20th century church currently houses the Barcelona Supercomputing Center, home of the MareNostrum (Latin for Our sea, the Roman name for the Mediterranean Sea) supercomputer. As part of the Polytechnic University of Catalonia, this supercomputer is used for a range of research projects, from climate change to cancer research, biomedicine, weather forecasting, and fusion energy simulations.
The Under-a-Mountain Bond Supervillain Data Center
Most data centers don’t have the extreme protection or history of the The Bahnhof Data Center, which is located inside the ultra-secure former nuclear bunker Pionen, in Stockholm, Sweden. It is buried 100 feet below ground inside the White Mountains and secured behind 15.7 in. thick metal doors. It prides itself on its self-described Bond villain ambiance.
We previously wrote about this extraordinary data center in our post, The Challenges of Opening a Data Center — Part 1.
The Data Center That Can Survive a Category 5 Hurricane
Sometimes the location of the center comes first and the facility is hardened to withstand anticipated threats, such as Equinix's NAP of the Americas data center in Miami, one of the largest single-building data centers on the planet (six stories and 750,000 square feet), which is built 32 feet above sea level and designed to withstand Category 5 hurricane winds.
The MI1 facility provides access for the Caribbean, South and Central America and to more than 148 countries worldwide, and is the primary network exchange between Latin America and the U.S., according to Equinix. Any outage in this data center could potentially cripple businesses passing information between these locations.
The center was put to the test in 2017 when Hurricane Irma, a Category 5 hurricane in the Caribbean, made landfall in Florida as a Category 4 hurricane. The storm caused extensive damage in Miami-Dade County, but the Equinix center survived.
The Data Center Cooled by Glacier Water
Located on Norway's west coast, the Lefdal Mine Datacenter is built 150 meters into a mountain in what was formerly an underground mine for excavating olivine, also known as the gemstone peridot, a green, high-density mineral used in steel production. The data center is powered exclusively by renewable energy produced locally, while being cooled by water from the second largest fjord in Norway, which is 565 meters deep and fed by the water from four glaciers. As it's in a mine, the data center is located below sea level, eliminating the need for expensive high-capacity pumps to lift the fjord's water to the cooling system's heat exchangers, contributing to the center's power efficiency.
The World’s Largest Data Center
The Tahoe Reno 1 data center in The Citadel Campus in Northern Nevada, with 7.2 million square feet of data center space, is the world’s largest data center. It’s not only big, it’s powered by 100% renewable energy with up to 650 megawatts of power.
An Out of This World Data Center
If the cloud isn’t far enough above us to satisfy your data needs, Cloud Constellation Corporation plans to put your data into orbit. A constellation of eight low earth orbit satellites (LEO), called SpaceBelt, will offer up to five petabytes of space-based secure data storage and services and will use laser communication links between the satellites to transmit data between different locations on Earth.
CCC isn’t the only player talking about space-based data centers, but it is the only one so far with $100 million in funding to make their plan a reality.
A Cloud Storage Company’s Modest Beginnings
OK, so our current data centers are not that unusual (with the possible exception of our now iconic Storage Pod design), but there was a time when Backblaze was just getting started and was figuring out how to make data storage work while keeping costs as low as possible for our customers. It’s a long way from these modest beginnings to almost one exabyte (one billion gigabytes) of customer data stored today.
The photo below is not exactly a data center, but it is the first data storage structure used by Backblaze to develop its storage infrastructure before going live with customer data. It was on the patio behind the Palo Alto apartment that Backblaze used for its first office.
The photos below (front and back) are of the very first data center cabinet that Backblaze filled with customer data. This was in 2009 in San Francisco, and just before we moved to a data center in Oakland where there was room to grow. Note the storage pod at the top of the cabinet. Yes, it’s made out of wood. (You have to start somewhere.)
Do You Know of Other Unusual Data Centers?
Do you know of another data center that should be on this list? Please tell us in the comments. | <urn:uuid:35c7c8a6-ad92-4674-b0d0-585310b51d90> | CC-MAIN-2024-38 | https://www.backblaze.com/blog/these-arent-your-ordinary-data-centers/ | 2024-09-19T06:32:46Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651995.50/warc/CC-MAIN-20240919061514-20240919091514-00141.warc.gz | en | 0.946172 | 1,457 | 3.171875 | 3 |
Identity and access management (IAM) refers to the processes, practices, and technologies related to managing identity information, including authentication, authorization, and accounting. IAM helps ensure the access rights of individuals are applied as needed based on their business roles or relationships in an organization.
While it may not seem like a necessary part of daily tasks, IAM is an important element of any secure IT environment, especially one that is growing increasingly digitized and mobile-friendly. Every organization must ensure that IAM is part of their cybersecurity risk management strategy.
Why Is Identity and Access Management Important?
IAM is a central part of the security and compliance of an organization. One can say that without IAM, an organization has no security and a useless audit trail. IAM is about knowing who somebody is so you know whether to let them into the bank, to work at the teller station, or into the vault. Meanwhile, your audit trail and reporting would show that many things happened, but not who did them, so you couldn’t prove compliance with privacy and data protection regulations or investigate incidents effectively.
IAM is part of a broader identity and access management domain consisting of several sub-domains – among them authentication, authorization, single sign-on, provisioning, and reporting – which are covered in the sections below.
What are the basic components of identity and access management (IAM)?
IAM systems mainly perform three basic tasks: identifying, authenticating, and authorizing. This means that only the intended persons are allowed access to specific hardware, software, applications, and IT resources – as well as specific data and content – to perform tasks. An IAM framework includes components such as the following (a short sketch of how they compose appears after the list):
- User: An identity that has associated credentials and permissions
- Group: A collection of users that specify permissions for multiple users to give administrators an economy of scale
- Policy: Permissions and controls to access a system, resource, data, or content, within a business context
- Role: A set of policies, typically corresponding to the minimum set of privileges needed to perform a particular business function, such as accounting or marketing, that can be applied to users or groups
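Here is the sketch referenced above: a hedged, deliberately simplified model of how users, groups, policies, and roles can compose into an access decision. Every name in it is invented.

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    name: str
    allowed_actions: set  # e.g., {"ledger:read", "ledger:write"}

@dataclass
class Role:
    name: str
    policies: list

@dataclass
class Group:
    name: str
    roles: list

@dataclass
class User:
    name: str
    roles: list = field(default_factory=list)
    groups: list = field(default_factory=list)

def can(user: User, action: str) -> bool:
    """Allowed if any policy on the user's roles, direct or via groups, permits it."""
    roles = list(user.roles) + [r for g in user.groups for r in g.roles]
    return any(action in p.allowed_actions for r in roles for p in r.policies)

accounting = Role("accounting", [Policy("ledger", {"ledger:read", "ledger:write"})])
finance = Group("finance", [accounting])
fay = User("fay", groups=[finance])
assert can(fay, "ledger:write") and not can(fay, "hr:read")
```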
Benefits of Identity and Access Management
There are various cybersecurity benefits associated with IAM that include:
Consistent security policies
By providing a common platform for access and identity management, IAM allows you to apply the same security principles across all systems, applications, data, and content used in an organization. IAM frameworks enable organizations to implement and enforce user authentication, privileges, and validation policies.
Identification and mitigation of security risks
IAM systems help identify and mitigate security risks by flagging violations of set rules. IAM systems also make it possible to resolve unauthorized access privileges without having to search through multiple systems.
Improved user experience
IAM simplifies sign-in, sign-up, and user management processes for all users and user groups in a system. It makes it easy to set and manage users' system access privileges, enhancing user satisfaction.
Increased productivity
Since IAM automates and centralizes the identity and access management processes, it helps create automated workflows that increase productivity by reducing manual tasks, such as onboarding new personnel or handling role changes. It also helps to reduce errors that may occur during manual processes.
Compliance with regulations
Virtually all compliance regulations require authorization or access controls for enforcing policies, as well as an audit trail and reporting for proving compliance in audits. To prove proper access control enforcement and assign accountability, the activities tracked in the audit trail must always be tied to authenticated users.
Implementation of zero trust
Organizations have responded to the exponential rise in advanced persistent threat attacks with a new model for security architecture called zero trust. IAM plays a central role because the key tenets of zero trust include granular authorization for all resources, such as assets, services, workflows, etc., and authentication of users and systems before access to each resource is permitted.
What Are Some of the Common IAM Standards?
A good IAM system should have sound standards that ensure accuracy in meeting compliance requirements. Some of the commonly used protocols and standards in IAM systems include:
The OAuth 2.0 authorization protocol supports third-party risk management (TPRM) – namely, how organizations permit vendors and other third parties in their supply chain to access protected systems and sensitive content through access tokens. It is also essential for employees using mobile devices and remote systems outside the physical walls of the enterprise, and for single sign-on (SSO).
User-Managed Access (UMA)
UMA is an Oauth-based access management protocol standard that helps regulate access to protected systems by third parties.
Security Assertion Markup Language 2.0 (SAML 2.0)
SAML is an open standard that allows identity providers (IdP) to authenticate users and pass their authorization assertions to service providers. With SAML, users can use one set of credentials to log into many different web applications. It is often used to implement single sign-on (SSO).
Next Generation Access Control (NGAC)
NGAC enables systematic and policy-consistent approaches to access control that grant or deny users administrative capabilities.
How Do IAM Systems Function?
| Task | Function |
|------|----------|
| Authenticating users | IAM systems authenticate users by confirming that they are who they say they are. They traditionally use credentials such as user IDs and passwords, but today support multiple factors, including face or fingerprint biometrics, SMS texting of a one-time password (OTP) to a proven device, or a code from a managed authenticator app. |
| Authorizing users | Access management ensures a user is granted the exact level and type of access to a tool they are supposed to have. |
| Single sign-on | IAM solutions with single sign-on (SSO) allow users to authenticate their identity through one portal instead of many different resources. After authentication, the IAM system becomes the source of identity truth for the other resources available to the user, so the user does not need to remember several passwords. |
| Managing user identities | IAM systems can serve as the sole directory for creating, modifying, and deleting users, and can be integrated and synchronized with one or more other directories. IAM can also create new identities for users needing specialized access to an organization's tools. Many organizations centralize user role assignments in IAM attributes to streamline the assignment of policies across applications throughout the enterprise. |
| Provisioning and de-provisioning users | Provisioning a user entails specifying which tools and access levels (editor, viewer, administrator) to grant them. IAM tools allow IT departments to provision users by role, department, or other groupings as needed, via policies based on role-based access control (RBAC). |
| Reporting | IAM tools generate reports on actions taken in a system (such as login time, resources accessed, and the type of authentication granted) to help ensure compliance and assess security risks. |
Best Practices for Implementing Identity and Access Management
The purpose of identity and access management (IAM) is to ensure that only authorized people have access to corporate applications and information assets. IAM systems are used to provide single sign-on (SSO) capabilities, meaning users only need one set of credentials to access all their applications. These best practices ensure your company’s web and local applications stay secure as you further adopt IAM tools and technologies.
Defining what information assets need protection and who should have access to them
When it comes to IAM, there are best practices that organizations should follow to ensure the security of their data. One of the most important steps is defining what information assets need protection and who should have access to them.
Where data, information, and content are treated as information assets in the true business and accounting sense, it is important to implement different levels of access, authentication, and authorization for each asset. This seems like a simple task, but it is often overlooked or not given enough attention. Organizations need to take inventory of their information assets and determine which ones contain sensitive content that needs to be protected. They also need to identify who needs access to this data and for what purpose.
Determining identity verification methods
There are many best practices for implementing IAM, but one of the key aspects is determining identity verification methods. To properly secure an organization’s data, it is essential first to verify that users are who they say they are.
There are various ways to do this, from simple username and password combinations to more complex multi-factor authentication schemes. The important thing is to choose a method (or combination of methods) that will effectively verify identities without being too burdensome for users.
Developing policies and procedures for managing user accounts
When it comes to identity and access management, best practices always involve developing policies and procedures for managing user accounts. This is because user account management is the foundation of an effective security strategy. Having clear policies and procedures in place ensures that only authorized users have access to your systems and data.
Additionally, regular account reviews can help you identify potential issues—such as access no longer required due to a partner’s change of status—before they become serious problems.
Automating processes whenever possible
In identity and access management, one best practice is to automate processes whenever possible. This can help ensure that users have the correct permissions, and that private data is properly protected.
Automation can also help reduce errors and improve efficiency. There are a few ways to automate processes, such as using scripts or tools that provide prebuilt functionality. Automatic suspension of unused accounts after a standard period of time reduces risk and is required by many security and privacy regulations. When selecting a solution, it is important to consider factors such as ease of use and compatibility with existing systems.
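As a hedged sketch of that automatic-suspension point, the snippet below flags accounts with no successful login inside a defined dormancy window; the 90-day limit and the directory data are assumptions to align with your own policy.

```python
from datetime import datetime, timedelta, timezone

DORMANCY_LIMIT = timedelta(days=90)  # assumption: set per your policy or regulation

# Hypothetical directory export: username -> last successful login
last_logins = {
    "gita": datetime.now(timezone.utc) - timedelta(days=12),
    "hal": datetime.now(timezone.utc) - timedelta(days=140),
}

def accounts_to_suspend(now=None):
    now = now or datetime.now(timezone.utc)
    return [user for user, seen in last_logins.items() if now - seen > DORMANCY_LIMIT]

for user in accounts_to_suspend():
    print(f"Suspending dormant account: {user}")  # real systems call the IdP's API here
```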
Monitoring user activity to detect suspicious behavior
One of the best practices for identity and access management is to monitor user activity for suspicious behavior.
This can be done by tracking log-in times, failed log-in attempts, and unusual log-in activity patterns. By doing so, you can quickly identify any potential threats and take steps to mitigate them. A “fail2ban” policy automates the shutdown of login attempts from a given IP address after a burst of failed logins, on the assumption a bot is attempting to break in by brute force (trial and error with commonly used credentials).
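A minimal sketch of that fail2ban-style policy, assuming an in-process counter: it bans a source IP after a burst of failed logins inside a sliding window. The thresholds and ban action are placeholders; the real fail2ban tool works at the firewall level.

```python
import time
from collections import defaultdict, deque

MAX_FAILURES = 5      # failures tolerated...
WINDOW_SECONDS = 300  # ...within this window before banning
_failures = defaultdict(deque)  # ip -> timestamps of recent failures
banned = set()

def record_failed_login(ip, now=None):
    now = now or time.time()
    q = _failures[ip]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()  # discard failures that fell out of the window
    if len(q) >= MAX_FAILURES:
        banned.add(ip)  # a real deployment would add a firewall rule instead

for _ in range(5):
    record_failed_login("203.0.113.9")
print(banned)  # {'203.0.113.9'}
```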
Reviewing and updating IAM configurations on a regular basis
Organizations should review and update their configurations on a regular basis. Doing so helps ensure that only authorized users have access to sensitive data and systems, protecting the organization from potential breaches.
Regular reviews of IAM configurations also help identify potential weaknesses in an organization’s security posture, allowing them to be addressed before they can be exploited. Organizations can keep their data and assets safe from unauthorized access by taking a proactive approach to IAM.
Finding the Right IAM for Managing Sensitive Content Communications
Using an IAM solution is a requisite for managing log-in access to sensitive content communication tools. Whatever IAM solution is employed, the result needs to be a consistent SSO log-in experience across all devices, the desktop client, as well as any plugins. Key capabilities that you need to look for in an IAM solution include:
- It never sends a password unless it is encrypted
- Requires only a single login per session, with the credentials established at login passed between resources so that no additional logins are needed
- Mutual authentication where a client proves its identity to a server and a server proves its identity to the client
- Institution of password management best practices such as minimum password length and character combinations, password expiration, limiting users from reusing previous passwords, etc.
The Kiteworks Private Content Network seamlessly integrates with the IAM components in virtually any organization’s security stack, such as Okta, Azure Active Directory, LDAP, SMS, and Ping Identity, supporting SAML 2.0, OAuth, Radius, and a variety of authenticator apps. This enables Kiteworks to control access to the multiple sensitive content communication channels in the Kiteworks platform—for both internal and third-party users.
Kiteworks applies IAM to the resources that ultimately need to be secured and tracked—the sensitive content of the enterprise—rather than just the systems, applications, and features that contain the content at rest. After all, content moves between applications, and between an enterprise and its customers, its employees, and most problematic, its third parties: supply chain partners, regulators, outsourcers, attorneys, accountants, and many other types. Kiteworks helps ensure security and compliance by using identity, authentication, and authorization to dynamically apply the right policy to the right user with the right content in the right context, and track the user’s actions with that content for compliance and accountability, even as it moves between organizations. This is key for organizations seeking to manage their security risk and comply with privacy and security regulations.
Organizations can see the Kiteworks Private Content Network in action by scheduling a custom demo of it today. | <urn:uuid:c902aa37-6bd5-40fe-a021-b3e7229939d5> | CC-MAIN-2024-38 | https://www.kiteworks.com/risk-compliance-glossary/identity-access-management/ | 2024-09-19T08:37:57Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651995.50/warc/CC-MAIN-20240919061514-20240919091514-00141.warc.gz | en | 0.929389 | 2,689 | 2.984375 | 3 |
AI: the challenges and opportunities for cyber security
Artificial Intelligence (AI) is a branch of computer science that enables machines to imitate human intelligence by ‘thinking’ and learning from experience. Although the true implications of AI are still speculation, it is certain that AI will play a pivotal role in the future of technology.
According to Boston Consulting Group (BCG), 70% of executives expect AI to play a significant role at their companies in the next five years. Meanwhile, Senseon figures reveal that 88% of SMEs have a dedicated AI security budget, with more than half (53%) believing greater expenditure would help them better deal with their cyber security workload. In this article, we explore what AI means for cyber security departments.
Why is AI useful for cyber security?
There is no doubt AI has huge potential for cyber security. In fact, according to Senseon, 82% of security SMEs believe AI is crucial to the industry’s future. Security professionals are likely to soon have toolsets at their disposal that can understand and react to security threats far more efficiently than most practitioners could. So, as we stand on the brink of a potential golden age in our field, how can AI be a good thing for security?
AI-based SIEM tools
We are already seeing real-world applications of AI in security, notably new SIEM-supplementary tooling such as Darktrace. AI allows you to automate the detection of threats and combat them, even without the involvement of humans. In theory, this keeps your data more secure by removing the margin for human error entirely. Since AI is totally machine-learning driven, it assures you complete error-free cyber security services, or so vendors would have us believe. As BCG’s research has suggested, companies have also started to allocate more resources than ever before towards boosting AI-driven technologies. In the long run, this should save an organisation money in areas such as staff and training, as well as lead to a more effective cyber defence team.
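As a hedged illustration of the kind of machine learning behind such tooling, the sketch below trains an Isolation Forest on invented login features and scores a new event; real products are far more sophisticated, and every feature and number here is made up.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Invented features per login: [hour of day, MB transferred, prior failed attempts]
normal_logins = np.column_stack([
    rng.normal(10, 2, 500),   # activity clusters around mid-morning
    rng.normal(50, 15, 500),  # modest data transfer
    rng.poisson(0.2, 500),    # failures are rare
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_logins)

suspicious = np.array([[3, 900, 6]])  # 3 a.m., huge transfer, many prior failures
print(model.predict(suspicious))      # [-1] means flagged as anomalous
```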
Come out, come out, wherever you are: insider threats
Cyber criminals often leave ‘backdoors’ into systems they have illegally accessed, to allow for straightforward re-entry next time they want to create mayhem for their victims. While the entry points have shown to be relatively easy to hide from humans, AI constantly scans situations and behaviours to spot these rogue operators, then shuts them out.
And breathe: space for security personnel to think
Senior security professionals are, on the whole, an intelligent bunch. Most have big plans on how they can develop, evolve and improve their function’s security profile. When they have the time to focus on these improvement programmes, their firms often prosper. However, many senior staff find their time is consumed by stamping out fires and working on the proverbial front line.
The application of AI to cyber operations promises to upset the status quo, according to the World Economic Forum. AI can offer more efficient and effective tools for defending against attacks that occur at machine speeds. This is where the cavalry that is AI tooling can come to the rescue. With AI fighting many of the battles that have typically required senior attention, alongside human support, senior staff can return to what they do best – planning how to protect their employer’s vital resources.
Why is AI a risk to cyber security?
We know AI is already playing a crucial part in enhancing breach detection and relieving some of the cyber security workload. Capgemini research shows 61% of enterprises say they cannot detect breach attempts without the use of AI technologies. But what happens if cyber criminals are also using this technology against us?
Due to AI’s speed of learning, it will be able to identify vulnerabilities and exploit them far quicker than its human counterparts. This has traditionally been a very laborious and time-consuming process. Examples of exploiting vulnerabilities could include AI creating specific email content that would result in a significantly higher click rate for phishing campaigns or using machine learning to mimic behavioural patterns and mask its own activity during a breach. AI could also be capable of predicting a computer’s response before it attacks in order to avoid triggering the target system’s defences. This level of sophistication in attack may leave firms wide open if they do not have sufficient detection and defence capabilities.
Furthermore, firms using AI may be lured into a false sense of security, as AI is usually designed to work autonomously and is empowered to make its own decisions. Corruption within the AI’s defences can often go undetected for some time, leaving firms unaware they have been breached.
Finally, one of the biggest risks is the lack of understanding and costs of integrating AI into a security function, which creates significant barriers to adoption. Security leaders may be leaving themselves wide open if they fail to understand the implications of AI, as well as the repercussions for cyber security when the technology becomes widely available to cyber criminals.
Ethics and AI
While AI and machine learning represent a big opportunity within security, as we’ve discussed, there remain numerous challenges. In addition to the technical aspects of getting these technologies up and running, there is a considerable amount of concern regarding ethics and privacy with AI programs and products.
From our conversations with those in the industry, many professionals seem to suggest there just aren’t enough people within organisations who have an understanding of technology and are focused on ethics. Technology and AI move far faster than emerging skillsets or an organisation’s cultural growth.
As such, tech teams are currently left to themselves to try and make ethical decisions. Not only is that really not their job (and having people mark their own homework is always problematic), but they also lack either privacy, sociological or psychological training and knowledge that would be required for such a complex task. Data protection is possibly the closest most organisations will have to an ‘ethics’ department that also has technological understanding. While privacy is catching up, your privacy team will require a large amount of upskilling to also be able to make the correct ethical calls on AI.
That said, what happens with AI and machine learning responses can be a privacy issue. We’ve seen this to some extent with voice assistants, such as Siri, Cortana and Google Home Hub. All these systems require the use of data to improve the services they provide. However, there have been significant privacy concerns after reports emerged that these systems pick up conversations even when not active, with Siri being the most recent to fall foul of this. Furthermore, employees and contractors for these organisations have been tasked with listening to customer conservations.
This is an ethical and a privacy issue, so both ethics and privacy professionals are required to advise organisations. Warnings and recommendations should cover the risk of falling foul of GDPR and various other privacy regulations, as well as the more intangible ethical concerns that can affect an organisation’s bottom line in either stock value or fines.
The human element
To summarise, AI is a powerful tool that can be used by organisations and cyber criminals alike. It is a technology that can’t be ignored, as it will play a pivotal role in the future of cyber security. AI has the power to greatly streamline threat detection and defence capabilities, reducing costs and freeing up resources for allocation on other parts of the function.
Nevertheless, AI could greatly enhance the speed and exploitation of vulnerabilities in the hands of cyber criminals. Lack of understanding and the cost of adoption could mean cyber criminals have the upper hand once the technology becomes more widely available. Organisations should also be aware of how AI uses personal data because this raises several ethical issues worth considering.
As AI develops and becomes an integral part of cyber security, it will still require security professionals who have their fingers on the pulse with new and emerging technologies to effectively integrate, govern and manage these systems. Falling behind on this trend could mean game over. Ultimately, technology isn’t always the answer, and it will never be the full solution; people are always the strongest defence.
If you would like to discuss your cyber security recruitment needs in relation to emerging technologies such as AI, please get in contact with me on 0207 936 2601 or via email at firstname.lastname@example.org.
Our 2019 Market Report combine our review of the prevailing conditions in the security & resilience recruitment market with the results of our latest employer and candidate surveys.
Image credit: Hitesh Choudhary via Unsplash | <urn:uuid:0a0cf2d1-91d9-4fd2-869e-6c838016a462> | CC-MAIN-2024-38 | https://www.barclaysimpson.com/ai-the-challenges-and-opportunities-for-cyber-security/ | 2024-09-19T11:45:06Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652028.28/warc/CC-MAIN-20240919093719-20240919123719-00241.warc.gz | en | 0.961958 | 1,732 | 2.765625 | 3 |
One of the commonly used online technology phrases today, is cloud enabled file sharing. This term is also frequently referred to in conjunction with the Egnyte technology company. For those who may not be familiar with this company, it provides a wide range of cloud computing and online infrastructure related services. This includes commonly used technology functions such as online file sharing, online file storage, and data system back up.
To understand how cloud enabled file sharing works, it is important to first understand the concept of cloud computing. This is a type of technology that uses software and hardware resources to transfer data over the internet. There are various types of cloud computing services, which include data security and desktop virtualization. In addition, cloud computing is also a very important part of data recovery functions.
In terms of cloud enabled file sharing, this is a cloud related feature that is used by a wide variety of both large and small companies. Some of the benefits of using this feature include providing employees the ability to share important files and documents between different departments and divisions, and also with outside contractors. Some of the types of businesses that frequently use cloud enable file sharing include hospitals, banks, insurance companies, architectural firms and interior design agencies.
Looking towards the future, cloud related functions such as online file sharing are expected to continue increasing in the medical field and financial sector. This will be primarily due to the growing advancements made in cloud computing and digital technology.
Article by Scott Huotari, President CCSI, Google | LinkedIn | <urn:uuid:0b17f5ba-18a5-4eab-a46e-348b34ac0136> | CC-MAIN-2024-38 | https://www.ccsipro.com/blog/cloud-enabled-file-sharing-functions-and-features/ | 2024-09-20T16:19:04Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701419169.94/warc/CC-MAIN-20240920154713-20240920184713-00141.warc.gz | en | 0.955827 | 303 | 2.953125 | 3 |
“You should be very worried,” warned Elshan Kashefi, Chief Scientist at the UK’s National Quantum Computing Centre. Assuming the role of the mythical Greek priestess Cassandra, Kashefi warned the Sifted Summit about the severe threat quantum computers pose – before they’re even functioning.
Today, malicious actors are stealing confidential, encrypted data en-masse to decrypt it later, with a functioning quantum device. These harvest now, decrypt later (HNDL) attacks mean that sensitive information with a ‘shelf-life’ – or information that will continue to be sensitive and require being secure between five to 10 years – is at high risk of being stolen.
Given the amount and type of information they hold, financial institutions are high on the list of nefarious actors looking to carry out these attacks. Easily mistaken for a Nolan plotline, quantum computers pose the most significant cyber threat to banking and financial infrastructures before they even exist.
What, then, should banks be doing about it?
Why Banks Are Acutely Vulnerable
Banks are acutely vulnerable because their large, sprawling infrastructures make them uniquely challenging to protect. The success of modern banks is intrinsically bound to their interconnectivity, which determines their ability to facilitate the movement of money, trade and business. However, interconnectivity also increases exposure to risks and makes it harder to insulate against threats.
If a chain is only as strong as its weakest link, the banking system (read, world) is only as secure as the weakest bank. Hackers can exploit encryption ‘backdoors’ and weak points to access sensitive information into the broader network, transforming a bank's interconnectivity from strength to crippling weakness.
Alarmingly, The Federal Reserve Bank of New York published research demonstrating how an attack on a vulnerable mid-size bank, defined as less than $10bn in assets, could bring down the entire US banking system. For reference, a bank smaller than 5% of Goldman Sachs could inadvertently wring irrevocable financial damages and loss of trust in banks.
Secure and reliable cryptography is key to the stability of our financial system and society – a cryptographically relevant quantum threat can undermine this and eclipse the damages of the 2008 global financial crisis and the Great Depression.
The risk models above still underestimate the quantum impact. If a bank falls victim to a ransomware attack today, as bad as it may sound, there is an absolute remedial action. If the bank is willing to pay, it can restore its systems, and this ransomware incident will not permanently impact all the other peer banks, suppliers and customers.
The relatedness risk is relatively low and non-contagious. A quantum attack will be much more systemic, affecting the entire industry(ies), and remedial actions will take much longer – if even possible – unless you start the migration much sooner. A better risk profile is COVID lockdowns several years ago when there was neither a cure nor a way of knowing who had been infected. The only difference here is that if you have not completed the quantum migration to make yourself self-sufficient, there is no cure, as no one can trust your information anymore.
What’s Being Done So Far?
For now, at least, solutions exist that preempt the threat. Post-quantum cryptography can be leveraged to secure against quantum computers, similar to how public key encryption protects against classical computers. Over the past 12 months, Google Chrome, Signal, and Apple have announced post-quantum integrations within their platforms to secure their users.
Leading the charge in the financial sector, The Bank for International Settlements (BIS) announced Project Leap, a quantum-resistant secure communication channel between the Banque de France and Deutsche Bank, and outlined six more projects earlier this year.
Encouragingly, we’ve also seen the quantum threat come into focus for legislators in the US, with the Quantum Computing Cybersecurity Preparedness Act passing with bilateral Congressional support.
However, with the risks being so total, encouraging isn’t enough. Besides, the projects are insufficient, exposing last-mile vulnerabilities and little scope for interoperability. For instance, Project Leap secures communications from perimeter to perimeter rather than between the end users, presenting a last-mile vulnerability and an obvious target for hackers.
How Banks Should Build Quantum-Secure Infrastructure
Hopefully, I have impressed the importance of a quantum-proof banking infrastructure, so what should banks do?
While we have seen some regulatory top-down movement, banks cannot afford to wait. Financial quantum security regulations have not been mentioned in the US or the UK's national budgets. I recommend that financial institutions create their own end-to-end secure infrastructures.
This starts with evaluating your IT systems and infrastructures to identify vulnerabilities and prioritizing securing sensitive long-life information vulnerable to HNDL attacks first. Work out from there, securing communications – perhaps with an end-to-end quantum-safe messenger – to your entire infrastructure.
To avoid the issues associated with interoperability, look to the Internet Engineering Task Force (IETF), which recently ratified a VPN protocol that enables multiple post-quantum and classical encryption algorithms to be incorporated into VPNs, ensuring no disruption to the functioning of existing IT systems.
Due to the nature of interconnectivity in the banking sector, there will always be counterparties which will fall victim and not be able to remedy the problem. At a minimum, you must regularly archive the most critical information in your own ring-fenced, out-of-band secure data repository.
In case of a claim, you can then prove their accuracy, validity and completeness to the regulators and underwriters. Therefore, in the event of a breach, you will be best positioned to convince the regulators, make insurance claims and, importantly, onboard new customers from failing banks. It may sound extreme, but I believe it will be a winner-takes-all future.
Within the last 20 years, the advent of a quantum computer has gone from theoretical to absolute. Momentum is building in quantum computing research, and everyday time-to-quantum is reduced. The importance of staying ahead and acting decisively cannot be overemphasized. If we continue on the current track, even the most advanced quantum migrations will not be completed in time.
So, I agree with Kashefi: unless we act – you should all be very worried. | <urn:uuid:150ac4c2-dcd6-479e-a581-98399dd5cdde> | CC-MAIN-2024-38 | https://www.infosecurity-magazine.com/opinions/banks-quantum-security-seriously/ | 2024-09-20T18:11:40Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701419169.94/warc/CC-MAIN-20240920154713-20240920184713-00141.warc.gz | en | 0.932938 | 1,335 | 2.640625 | 3 |
What is the difference between a CSU/DSU and a modem?
Click on the arrows to vote for the correct answer
A. B. C. D.D
A CSU/DSU (Channel Service Unit/Data Service Unit) and a modem are both devices used to connect a router to a network. However, they have different functions and operate in different contexts.
A CSU/DSU is used to connect a router to a digital leased line. It provides the necessary digital signal conversion and conditioning to allow the router to communicate over the leased line. The CSU (Channel Service Unit) is responsible for converting the digital signal from the router into a form suitable for transmission over the leased line, while the DSU (Data Service Unit) is responsible for converting the digital signal from the leased line into a form suitable for the router. The CSU/DSU also provides line monitoring and diagnostics to ensure the leased line is working correctly.
On the other hand, a modem is used to connect a router to an analog telephone line. It converts digital signals from the router into analog signals that can be transmitted over the phone line. It also converts analog signals from the phone line back into digital signals that can be understood by the router. Modems also provide error correction and compression to improve the quality and speed of the connection.
To summarize, the main difference between a CSU/DSU and a modem is the type of connection they are used for. A CSU/DSU is used for digital leased lines, while a modem is used for analog telephone lines. Additionally, a CSU/DSU provides line monitoring and diagnostics, while a modem provides error correction and compression. | <urn:uuid:c603bb10-bd02-4fb9-9480-38c01a768ac2> | CC-MAIN-2024-38 | https://www.exam-answer.com/csu-dsu-vs-modem-2 | 2024-09-11T01:11:59Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651323.1/warc/CC-MAIN-20240910224659-20240911014659-00141.warc.gz | en | 0.936179 | 349 | 2.8125 | 3 |
Army-Funded Researchers Demo that Machine Learning Shows Potential to Enhance Quantum Information Transfer
(ScienceDaily) Army-funded researchers demonstrated a machine learning approach that corrects quantum information in systems composed of photons, improving the outlook for deploying quantum sensing and quantum communications technologies on the battlefield.
When photons are used as the carriers of quantum information to transmit data, that information is often distorted due to environment fluctuations destroying the fragile quantum states necessary to preserve it.
Researchers from Louisiana State University exploited a type of machine learning to correct for information distortion in quantum systems composed of photons. Published in Advanced Quantum Technologies, the team demonstrated that machine learning techniques using the self-learning and self-evolving features of artificial neural networks can help correct distorted information. This results outperformed traditional protocols that rely on conventional adaptive optics.
“We are still in the fairly early stages of understanding the potential for machine learning techniques to play a role in quantum information science,” said Dr. Sara Gamble, program manager at the Army Research Office, an element of U.S. Army Combat Capabilities Development Command, known as DEVCOM, Army Research Laboratory. “The team’s result is an exciting step forward in developing this understanding, and it has the potential to ultimately enhance the Army’s sensing and communication capabilities on the battlefield.” | <urn:uuid:98c67ed5-608c-42c2-a22e-fac6a513ebe1> | CC-MAIN-2024-38 | https://www.insidequantumtechnology.com/news-archive/army-funded-researchers-demo-that-machine-learning-shows-potential-to-enhance-quantum-information-transfer/ | 2024-09-11T00:17:07Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651323.1/warc/CC-MAIN-20240910224659-20240911014659-00141.warc.gz | en | 0.912892 | 270 | 2.546875 | 3 |
Three-quarters of developers think those who create artificial intelligence (AI) algorithms are ultimately responsible for AI’s impact on society, according to a global survey.
The use of artificial intelligence will not only transform how people live, but also what they do. This could lead to a period of instability as people are replaced by robots, mainly of the software variety, but also humanoids in the future.
There are also fears around singularity, where AI could self-improve and eventually surpass human intelligence, and of algorithms making decisions over fairness.
The Stack Overflow survey asked coders who they considered primarily responsible for considering the ramifications of AI.
“Developers are most likely to think that the creators and technologists behind the machine learning and AI algorithms are the ones who are ultimately most responsible for the societal issues surrounding artificial intelligence,” said the research report.
Almost half (47.8%) of respondents said the people creating the AI should take responsibility, while 27.9% believed a governmental or other regulatory body should be responsible.
Read more about AI’s impact on society
Of the remainder, 16.6% said prominent industry leaders should take responsibility and 7.7% said nobody should be primarily responsible.
The biggest danger associated with AI is algorithms making important decisions, according to 28.6%, while a further 28% feared AI would surpass human intelligence.
The development that coders are most excited about is the automation of jobs.
Data scientists and machine learning experts were the group of developers most concerned about the advancement of AI, according to the survey.
“We included a free response option on this question. There was not much serious worry about Skynet [fictional organisation in Terminator films], but many developers discussed systemic bias being built into algorithmic decision making and the danger of AI being used without the ability to inspect and reason about decision pathways,” said the Stack Overflow report.
In total, 72% of the 100,000 coders interviewed said their excitement about the possibilities outweighed their worries about the dangers.
A separate survey of 500 IT professionals carried out by Topcoder, which is a global community of 1.2 million IT professionals, found that 80% wanted government involvement in preparing workforces for AI.
Respondents said governments should help to prepare the next generation of IT workers for AI, with 55% suggesting governments should fund companies to re-skill current staff and 61% saying they should reshape public education in preparation for AI and its impact on jobs and society. | <urn:uuid:f2755499-a627-43c2-afbe-13c3870db903> | CC-MAIN-2024-38 | https://www.computerweekly.com/news/252436885/Coders-responsible-for-impact-of-artificial-intelligence-on-society | 2024-09-17T03:14:26Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651722.42/warc/CC-MAIN-20240917004428-20240917034428-00541.warc.gz | en | 0.964943 | 521 | 3.15625 | 3 |
nito - Fotolia
Human error is the main cause of data breaches, according to statistics obtained from the UK’s Information Commissioner’s Office.
Figures obtained by Egress Software Technologies via a Freedom of Information (FOI) request found that human error accounted for almost two-thirds (62%) of the incidents reported to the ICO – far outstripping other causes, such as insecure webpages and hacking, standing at 9% combined.
The most common type of breach occurred as a result of someone sending data to the wrong person. Data posted or faxed to the wrong recipient accounted for 17% of data breaches, according to ICO information.
A further 17% of breaches came from loss and theft of paperwork, while in 9% of cases, data was emailed to the wrong recipient.
The ICO also recorded several other types of data breach, including insecure disposal of hardware and paperwork, loss or theft of unencrypted devices, and failure to redact data.
“The fact that so many breaches are caused by methods of working that are known data breach pitfalls – such as faxing and posting sensitive information, or using plaintext email – should be a major concern for all organisations,” said Egress CEO Tony Pepper.
“Organisations need to begin gaining a holistic understanding of the information security measures they have in place,” he added.
Pepper recommended businesses examine the nature of the data produced and handled by their staff, and using a classification tool to mandate how it’s treated. Next, they need to make sure that, when required, the data is released in the correct manner.
According to Pepper, integration between classification policy and tools, such as email encryption and secure online collaboration, can ensure the correct protection and control is applied to the data when it is released from the business environment.
He said such measures are usually not available in more traditional ways of working, leaving staff open to the risk of accidentally sending data to the wrong recipient.
Read more about data loss prevention
- Expert Bill Hayes describes how data loss prevention (DLP) products can help identify and plug information leaks and improve enterprise security.
- Companies that fail to start planning to deal with the EU’s data protection requirements are in for a real shock, warns the International Association of Information Technology Asset Managers.
Public awareness of data loss is set to rise with changes to European data protection laws coming into force in 2018 through the General Data Protection Regulation (GDPR).
Speaking at the European Identity & Cloud Conference 2016 in March, privacy lawyer and KuppingerCole analyst Karsten Kinast, said: “The regulation requires organisations to notify the local data protection authority of a data breach within 72 hours of discovering it. This means organisations need to ensure they have the technologies and processes in place that will enable them to detect and respond to a data breach.” | <urn:uuid:53b341e5-0ecb-433d-9fa5-3740de8aa465> | CC-MAIN-2024-38 | https://www.computerweekly.com/news/450297535/Human-error-causes-more-data-loss-than-malicious-attacks | 2024-09-18T08:48:38Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651886.88/warc/CC-MAIN-20240918064858-20240918094858-00441.warc.gz | en | 0.941429 | 594 | 2.765625 | 3 |
Have you ever come across the term pi123 and wondered what it means? How does it relate to the world of mathematics, technology, or even everyday life? In this article, we will delve into the concept of pi123, exploring its significance and impact. Whether you’re a student, a professional, or simply someone with a curious mind, this guide will provide you with a clear understanding of pi123 and its relevance in various fields.
What is pi123?
Pi123 is not just a random sequence of characters; it has specific implications depending on the context in which it is used. In mathematics, pi (π) is a well-known irrational number representing the ratio of a circle’s circumference to its diameter. However, pi123 is not directly related to the mathematical constant pi. Instead, it is often associated with specific codes, algorithms, or identifiers used in different technological and scientific applications.
The Origins of pi123
Understanding the origins of pi123 requires a closer look at its application. In some instances, pi123 may be used as a unique identifier in software development, databases, or encryption algorithms. The combination of “pi” with the sequence “123” might be utilized for its simplicity and ease of recall, making it a convenient placeholder or reference code.
pi123 in Technology
In the realm of technology, pi123 could be found in various coding languages or systems. For example, developers might use pi123 as a sample string or test data in programming. Additionally, pi123 could be part of an encryption key or a reference number in a database. Understanding its role in these contexts is crucial for developers and IT professionals who encounter this term in their work.
The Significance of pi123 in Data Encryption
One of the most important applications of pi123 is in data encryption. In cybersecurity, pi123 may be part of a more complex encryption key, helping to secure sensitive information. Encryption is essential in protecting data from unauthorized access, and pi123 could play a role in the creation of secure algorithms. By understanding the significance of pi123 in this context, one can appreciate its importance in maintaining data privacy and security.
How pi123 is Used in Educational Tools
Educational tools and software might also incorporate pi123 as part of their instructional material. For instance, math-related apps or programs could use pi 123 in exercises or examples to teach students about number sequences, coding, or encryption. Its use in education underscores the versatility of pi 123 in various learning environments.
The Relevance of pi123 in Software Development
In software development, pi 123 might be employed as a test case or example within coding tutorials. Beginners in programming often encounter such sequences when learning the basics of coding, debugging, or software testing. Recognizing the role of pi 123 in these scenarios can help learners grasp fundamental programming concepts more effectively.
pi123 and Its Role in Algorithms
Whether in sorting algorithms, data processing, or machine learning models, pi 123 might serve as a reference point or a test variable. Understanding its application in algorithms highlights the importance of such sequences in the efficient functioning of technology.
Is pi123 Just a Placeholder?
Placeholders like pi 123 can simplify complex processes, making them more accessible and easier to understand. In fields such as programming, database management, and encryption, placeholders like pi 123 play a crucial role in testing and developing new technologies.
Common Misconceptions About pi123
There are several misconceptions about pi123, particularly among those unfamiliar with coding or mathematics. Some may mistakenly believe that pi 123 is related to the mathematical constant pi, while others might view it as a random sequence with no real significance. However, as we have explored, pi 123 has specific applications that make it valuable in various fields.
How to Work with pi123
If you’re a developer or student working with pi 123, it’s essential to understand its context and purpose. When used as a test string or placeholder, ensure that you recognize its function within the code or system. If pi 123 is part of an encryption key, it’s crucial to handle it with care to maintain data security. Understanding how to work with pi 123 effectively can enhance your coding, debugging, and data management skills.
The Future of pi123
As technology continues to evolve, the use of sequences like pi 123 may expand into new areas. From advanced encryption techniques to innovative educational tools, pi 123 could find new applications that we haven’t yet imagined
pi123 may seem like a simple sequence at first glance, but its significance in technology, education, and data security is undeniable. Whether you’re a developer, student, or curious reader, understanding the role of pi123 in various contexts can provide valuable insights into the world of coding, algorithms, and encryption. As we’ve explored in this guide, pi 123 is much more than just a random sequence—it is a versatile and important component in many technological processes.
1. Is pi 123 related to the mathematical constant pi (π)? No, pi 123 is not directly related to the mathematical constant pi. It is often used as a placeholder or test string in coding, encryption, and other technological applications.
2. Where might I encounter pi 123 in my work or studies? You might encounter pi 123 in coding tutorials, encryption algorithms, database management, or educational software. It serves various purposes depending on the context.
3. Why is pi 123 used as a placeholder? pi 123 is used as a placeholder because it is simple, easy to remember, and can be employed in various scenarios without causing confusion.
4. Can pi 123 be part of an encryption key? Yes, pi 123 could be part of an encryption key, particularly in scenarios where a simple, memorable sequence is needed as part of a larger, more complex key.
5. How can I better understand the significance of p i123? To better understand pi 123, consider its applications in coding, encryption, and algorithms. Exploring these areas will give you a deeper insight into its importance and usage | <urn:uuid:7c222bdb-ae0b-434d-afb7-5bb576e27bd4> | CC-MAIN-2024-38 | https://diversinet.com/pi123/ | 2024-09-11T03:00:59Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651343.80/warc/CC-MAIN-20240911020451-20240911050451-00305.warc.gz | en | 0.927333 | 1,245 | 3.546875 | 4 |
On September 26, after a nine-month journey through the Solar System, NASA’s Double Asteroid Redirection Test (DART) mission impacted an asteroid called Dimorphos.
NASA scored a bullseye, with DART – roughly the size of a vending machine – hitting Dimorphos within 10% of the 160-metre asteroid’s centre. The hit changed the orbit of Dimorphos around its bigger companion asteroid Didymos by more than 30 minutes, far exceeding the original goal.
This is the first time humans have deliberately changed the motion of a significant Solar System object. The test shows it’s plausible to protect Earth from asteroid impacts using similar future missions, if needed.
We speak with Steven Tingay, John Curtin Distinguished Professor (Radio Astronomy), Curtin University about this astonishing feat of science and engineering.
To read Professor Tingay’s article visit https://theconversation.com/nasas-ast…
Read more about the DART mission https://dart.jhuapl.edu/ | <urn:uuid:2e40e7f9-2986-4725-805f-073a04159416> | CC-MAIN-2024-38 | https://mysecuritymarketplace.com/av-media/dart-scores-30-minute-change-to-asteroid-orbit/ | 2024-09-11T04:02:01Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651343.80/warc/CC-MAIN-20240911020451-20240911050451-00305.warc.gz | en | 0.884358 | 224 | 3.6875 | 4 |
What Is Remote Wipe? All You Must Know
"The US loses $30 billion in cellphones every year."
People are constantly misplacing their devices, which exposes them to the risk of losing private information. It's difficult to imagine our phones, which contain so much personal information, being stolen or lost. Allow for the possibility that your phone's security has been compromised, allowing your data to be stolen: It's a terrifying thought, but it can happen. This is where remote wiping comes as your saviour.
An administrator can remotely delete data from any device using remote wipe security solutions. The primary purpose of remote wiping is to prevent data loss if a device is lost or stolen.
However, suppose the administrator does not have the device's physical access to wipe it. In that case, it also has the ability to delete data from a device given to a new owner or getting retired.
Remote Wipe: What Is It?
A mobile device management (MDM) solution may include remote wipe functionality. The MDM tool can be set up to enable an administrator to remotely delete specific files or folders from a device, erase the device's entire memory, or make the device unusable.
An administrator can remotely wipe a device by sending a command over mobile or Wi-Fi networks to the MDM application that has been installed on the device. The MDM solution starts the remote wipe procedure as soon as it receives this command. The specified files will be deleted if the wiping procedure is not stopped by a system reboot or another similar occurrence.
How Does Remote Wiping Work?
Each device has a different mechanism for a remote wipe. The majority of the time, the device has third-party MDM software installed. The equipment and the individual attempting to wipe away the data communication via this software.
Remote wipes can be configured to erase all of the data on the phone or just the data on a specific portion of the device. Let's say, for example, that a company permits staff to use their cell devices for business purposes. Employees might have a folder or application connected to sensitive information on their phones. The users' data on the device (photos, messages, and other data) may be preserved after a remote wipe if that employee is fired or leaves the company.
Another use for remote wiping is to disable a device permanently. In this instance, the wipe erases all programming from the device and its internal data.
Remote Wiping Is Seldom Simple Or Guaranteed!
Although remote wiping may appear to be fail-safe for businesses, it is essential to remember that it is not always simple to execute. A device must be turned on and connected to the internet for a remote wipe to occur. This makes it possible for the device to receive the command via MDM. Due to these conditions, there is typically only a small window when the wiping process can be done. It might be challenging to determine when the phone dies if someone turns it off purposefully.
Anyone attempting to wipe their device remotely must have the remote wiping feature activated. A setting in the Android Device Manager needs to be activated to perform a remote wipe on an Android device. The device's location tracker must also be activated. The wipe won't work if the service isn't turned on.
Why Is Remote Wiping Crucial?
The primary purpose of the remote wipe is to handle physical dangers to device security, such as the misuse or loss of a company's devices. If a user's device is compromised, an attacker might be able to read the information stored there if it isn't secured through encryption or if they can figure out the owner's password or PIN.
The COVID-19 pandemic made remote work the norm, and as a result, mobile and corporate technologies are commonly used for business purposes from outside the organization. Due to these two factors, corporate devices are now more likely to be lost or stolen than in the past when they were primarily kept in the office and had access to business information and systems. For an organization to manage the physical security risks associated with remote work, the remote wipe feature really helps.
Remote Wiping: Limitations
The following are a few remote wipe solutions' limitations:
Devices Must Be Online:
A device cannot be erased if it is off, in aeroplane mode, or otherwise disconnected from the network.
Remote Wipe Can Be Interrupted:
Some data may not be satisfactorily removed from the device if a thief reboots it while data deletion is in progress.
Data Can Be Retrieved:
An attacker might occasionally succeed in getting data from the device, though. For instance, old and solid-state drives might make it possible to recover deleted data.
Protects Only Against Loss/Theft That Is Already Known About:
Remote wipe solutions depend on an administrator sending a command to the device to wipe it. This indicates that a device won't be deleted unless the administrator is informed about it. A hacker might be capable of extracting information from a device before it is erased if an employed person is unaware that it has been stolen or needs to wait to report it.
Some Facts You Need To Know About Remote Wiping
Power and Network Connection Is A Must
The phone or tablet must be turned on, linked to the network, and capable of receiving the protocol because the command "wipe" is transmitted wirelessly to the device. It might be simple to remotely wipe a device if it is lost in an airport. On the other hand, it's simple to turn off the device, shield it, or remove the SIM card if someone wants to prevent the device from being erased.
Since the window for remote wiping can be extremely brief, it's crucial to notify your IT department as soon as possible if a device goes missing. Data breaches can happen within seconds of a device being stolen.
Remote Wiping Is Not A Monolithic
There are countless options for remote erasure on modern mobile phones and mobile management systems. The device can occasionally be reset to the factory default state using remote wiping. Others may have a more subtle remote wipe. For instance, some configurations have "enterprise wipe," which only removes the software and data installed by the business and leaves user data unaffected. Since your company is more concerned with those assets, phones with a container setup may only have the work profile wiped.
An enterprise wipe can be used when someone leaves the organization without inadequately reactivating their smartphone in a Bring Your Own Device (BYOD) environment. In that case, it makes more sense to delete the enterprise data since they might still be storing confidential info.
Instead of immediately erasing all stored data, another strategy is to lock lost devices. A customized notification, such as a mobile number to call if the device is found, can be pushed to the device's lock screen using an MDM solution.
Employees Must Be Notified.
No matter which MDM/EMM tool your business uses, remote wiping is typically included, so employee smartphones likely have some erasure capability. Employees may naturally anticipate that corporate-owned devices can be erased at any time.
However, IT administrators may still be able to remotely wipe devices under BYOD policies if staff members are mandated to configure an EMM/MDM or malware detection tool on their smartphone or tablet. Staff members must agree to this in an organization's BYOD policy before using their own devices to access work systems.
Organizations without an MDM/EMM system in place may handle remote wiping on a manual basis by restricting access to an app and deleting any associated data. A screen that informs users that this is a part of the T&C frequently appears when they sign up for those services.
There is generally no other way to allow complete access to your company's systems while maintaining data security. Professionals may object to the idea that IT has the power to wipe data from their devices remotely, but it is a necessary measure. Remote wiping is a crucial safeguard for priceless informational assets from the perspective of the business.
Benefits for Small Businesses of Remote Wipe Solutions
Any device owner can benefit from a remote wipe feature, but businesses can take advantage of it. Intellectual property, customer information, and passwords to personal accounts are just a few examples of the information every organization needs to keep private.
Make sure your company is utilizing the many advantages of remote wipes:
- Safeguards your private data. If the device is misplaced or stolen, a remote wipe guarantee that all critical data can be deleted.
- Enables the wiping of any device: With the right software, you can wipe data virtually from any digital equipment, protecting your data whether it's stored on a laptop, smartphone, or tablet.
- Secures data from ex-employees: If an employee is fired or leaves, you can quickly delete business data from their devices by issuing a remote wipe command.
- Enables you to react quickly: If a device belonging to your company has been stolen or lost, remote wipe applications give you the freedom to take swift and effective action.
- Keeping information out of misuse: You can keep your information away from people who would misuse it, like rival companies or ex-employees.
Best Practices for Secure Mobile File Sharing and Remote Wipe
How should enterprise IT organizations manage remote wipe processes to maximize mobile security in light of these risks and difficulties? Here are some suggestions:
- Deploy a mobile device container solution that is secure.
- Secure mobile containers separate business data from personal data and allow IT administrators to delete business information without impacting personal data. Other security benefits of secure mobile containers include enhanced protection from malware infections.
- Inform employees that their data will never be erased by the organization's secure mobile file-sharing solution. This removes any incentive for employees to postpone reporting stolen or lost devices.
- Educate employees on the significance of mobile security, explicitly protecting the company.
- Business data is regularly far more precious than employees' $500 referenced in polls. For instance, customer data exposed in regulated industries could result in fines of several hundred thousand dollars or more. Both these lost data, such as product proposals, could result in a loss of competitive advantage for the company. Employees should understand that a $200 mobile phone could contain valuable data.
- Monitor how business services are used on mobile devices. Suppose a machine known to contain business information has not obtained any enterprise servers in months. In that case, it is likely that the device is no longer in the owner's possession and that the data on it is vulnerable. When an unusually quiet device is discovered, the IT department may wish to contact the manufacturer.
Use in the Business
Effective mobile device management requires the ability to remotely wipe lost or stolen company devices. To keep up with your business's demands without worrying about what will happen to your business data if a device goes missing, we are committed to helping you safeguard your digital assets on demand. Know where every asset is and have the authority to protect it from a possible data leak immediately. You are aware of the requirements of your company.
Use in Small Businesses
The pressure on small businesses to guarantee the security of sensitive information on employees' laptop computers, cell phones, tablet devices, and other gadgets is increasing daily. How can you effectively serve your customers while ensuring data security, protecting customer information, and minimizing risk? Any digital asset can be tracked with a robust MDM solution; if it disappears, it can be locked or erased. You can prevent all the expensive consequences of a data breach and keep your customer's trust if you have the choice to delete confidential data when a device is breached.
Use in Personal Data Wiping
Ever misplaced your laptop or phone? Do you have quick access to your email and other accounts and sensitive personal data stored on your devices? You can feel secure about the confidentiality of your information and devices with any reliable MDM tool.
Is your cellphone vulnerable to SIM Swap? Get a FREE scan now!
Please ensure your number is in the correct format.
Valid for US numbers only!
SIM Swap Protection
Get our SAFE plan for guaranteed SIM swap protection. | <urn:uuid:9f9ecb34-0221-4eeb-b20d-730afcede56f> | CC-MAIN-2024-38 | https://www.efani.com/blog/remote-wipe | 2024-09-12T10:16:52Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651440.11/warc/CC-MAIN-20240912074814-20240912104814-00205.warc.gz | en | 0.946132 | 2,476 | 2.578125 | 3 |
Data Replication is the process of copying specified, file-level content from one computer (the source computer) to another (the destination computer). This is achieved through an initial transfer of the specified data, after which the replicated copy is kept updated in nearly real time with any changes made to the data on the source computer. This replicated copy on the destination computer provides ongoing, nearly-real-time disaster recovery protection for the source computer, unlike most data protection solutions, which require significant time to perform a complete data protection operation. In addition, data replication provides a basis for additional data protection activities, such as Recovery Points (snapshots) and backups of Recovery Points.
The content for replication can be defined at the directory or volume level on a source computer and replicated to a destination computer. Once the initial transfer is complete, a driver on the source computer performs the following:
continuously monitors changes to the files contained in the defined directories or volumes
logs all new files, and changes to existing files
automatically transfers the log to the destination computer, thus replicating all new files and changes to existing files, from the source computer to the destination computer in nearly real time. See Replication Logs for specific information about frequency and timing of data replication.
A persistent connection is used as a data transfer mechanism, optionally compressing and encrypting data across the network, and through this facility, the destination computer is kept in sync with the defined content on the source computer. If the connection is interrupted at any point, the log continues to be maintained on the source computer, and once the connection is restored, CDR will automatically re-sync with the destination computer, bringing the replica up-to-date. Note that re-syncing is time and disk space intensive, and thus to be avoided if possible. For some additional discussion of this subject, see Interruptions and Restarts. If multiple Replication Pairs are active, CDR uses multiple threads to perform these operations on all Replication Pairs in parallel. CDR operations on a T1 link are fully certified. The success of CDR operations on a slower link is not guaranteed.
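CDR's filter driver and log format are internal to the product, but the general pattern described above, detecting new and changed files, appending change records to a log, and shipping the log for replay on the destination, can be illustrated with a small standard-library sketch. All names, paths, and the polling loop below are illustrative assumptions, not CDR internals (a poll stands in for the kernel-mode driver):

```python
import json
import os
import time

SOURCE = "/data/source"            # hypothetical replicated directory
JOURNAL = "/var/tmp/repl_journal"  # hypothetical change log shipped to the destination

def snapshot(root):
    """Map each file under root to its last-modified time."""
    state = {}
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            state[path] = os.path.getmtime(path)
    return state

def watch(poll_seconds=5):
    """Append a journal entry for every new or modified file, in arrival order."""
    before = snapshot(SOURCE)
    while True:
        time.sleep(poll_seconds)
        after = snapshot(SOURCE)
        changed = [p for p, mtime in after.items() if before.get(p) != mtime]
        if changed:
            with open(JOURNAL, "a") as log:
                for path in changed:
                    log.write(json.dumps({"path": path, "mtime": after[path]}) + "\n")
        before = after
```

Replaying such a journal in order on the destination is what keeps the replica consistent even across connection interruptions, since unsent entries simply accumulate until the link returns.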
Data replication supports a variety of configurations, from a simple one-to-one pairing of a single source and destination to Fan-In arrangements in which multiple source volumes replicate to the same destination volume.
The following options can be used to perform data transfer from source to destination:
Full Resync should be necessary only when no data is present on the destination; it copies all the files from the source to the destination computer. When you start data replication at the Replication Set or Replication Pair level, you can specify Full Resync, causing the Replication Pair to begin at the Baseline Scan phase.
Smart Re-Sync is the default behavior of CDR when activities are interrupted and cannot be seamlessly restarted at the same point. In this case, all new and modified data is transferred from the source to the destination.
If replication is interrupted and there is a chance that the data on the destination has been manually deleted or modified, the destination path is considered inconsistent, and Optimized Sync is recommended to rebuild it based on the current data in the source path, taking into account the data already present on the destination.
Optimized Sync transfers new and modified files from the source computer to the destination computer, along with any data missing on the destination. If previous sync attempts had failures, those failures are retried when Optimized Sync runs.
Optimized Sync can be used in the following scenarios:

If, after an interruption in replication, the filtering options are modified, such as removing a filter that was previously applied, so that pre-existing files become eligible for transfer to the destination

If some data was partially modified on the destination

If a previous sync had failures.
See Start Data Replication Activity for step-by-step instructions.
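The exact metadata CDR compares during Optimized Sync is not spelled out here, so the sketch below uses file size and modification time as stand-ins. It illustrates the core idea: walk the source and queue only files that are new, changed, or missing on the destination, leaving already-consistent files alone.

```python
import os

def files_to_transfer(source_root, dest_root):
    """Relative paths of source files that are new, modified, or absent on the destination."""
    pending = []
    for dirpath, _, names in os.walk(source_root):
        for name in names:
            src = os.path.join(dirpath, name)
            rel = os.path.relpath(src, source_root)
            dst = os.path.join(dest_root, rel)
            if not os.path.exists(dst):
                pending.append(rel)  # missing on the destination
            elif (os.path.getsize(src) != os.path.getsize(dst)
                  or os.path.getmtime(src) > os.path.getmtime(dst)):
                pending.append(rel)  # changed since the last successful sync
    return pending
```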
For new installations, Optimized Sync is enabled by default. You must enable Optimized Sync manually on upgraded clients by selecting the Include files that do not match with destination copy option. For step-by-step instructions, see Add a Replication Pair.
To change the state of one or more Replication Pairs at once from the Replication Set level, see Change the state of Replication Pair for step-by-step instructions.
Replication Prediction tracks the size of the data that is added or modified while a pair is active and monitoring; for Windows file systems, monitoring is performed at the volume or folder level, and for UNIX, at the file system level. This information is used to estimate the data throughput required per hour, day, and so on, and thus whether the bandwidth of the current connection will be sufficient for the predicted data replication activity. For instance, to see how much data will be replicated for an Exchange Server during each workday or over a whole week, you can start monitoring all folders used by the Exchange Server (stores, logs, etc.). After 24 hours or a week, check the size of the data modified and use that figure to estimate bandwidth requirements.
Replication Prediction reports the following for each monitored folder, volume, or file system:
the monitoring interval -- start and end time
the size of the data changed, in bytes and MB
For step-by-step instructions, see Perform Replication Prediction.
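The figures Replication Prediction reports translate directly into a bandwidth estimate. As a hypothetical worked example, suppose monitoring shows 40 GB of changed data over an 8-hour workday:

```python
def required_throughput_mbps(changed_bytes, interval_hours):
    """Average throughput, in megabits per second, needed to keep pace with the change rate."""
    return changed_bytes * 8 / (interval_hours * 3600) / 1e6

changed = 40 * 1024**3  # 40 GB of changes reported by Replication Prediction
print(f"{required_throughput_mbps(changed, 8):.1f} Mbps")  # about 11.9 Mbps on average
```

For comparison, a T1 link carries roughly 1.5 Mbps, so a change rate like this would need a substantially faster connection to keep the replica current.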
CDR maintains logs on the source computer, recording all file write activity (new files and changes to existing files) involving the directories and volumes specified in the source paths of all the Replication Pair(s) on that computer. These replication logs are transferred to the destination computer and replayed, ensuring that the destination remains a real-time replica of the source. For more information, see Replication Logs.
The source path or directory and the CDR cache directory must be created on different file systems.
Throttling enables you to monitor and control the data replication activities. It also allows you to configure the rate of data transfer over the network, based on the throttling parameters. The various throttling options (including throttling amount and rules) can be configured. For more information, see Throttling.
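The throttling parameters themselves are configured as described in the linked topic; the mechanism they govern can be pictured as a rate limiter sitting in front of the replication connection. A minimal token-bucket sketch follows (the 512 KB/s cap is an arbitrary example, not a CDR default):

```python
import time

class Throttle:
    """Token bucket: cap transfer at rate_bps bytes/second (chunks must be <= rate_bps)."""
    def __init__(self, rate_bps):
        self.rate = rate_bps
        self.allowance = rate_bps
        self.last = time.monotonic()

    def wait_for(self, nbytes):
        """Block until nbytes may be sent without exceeding the configured rate."""
        while True:
            now = time.monotonic()
            self.allowance = min(self.rate, self.allowance + (now - self.last) * self.rate)
            self.last = now
            if self.allowance >= nbytes:
                self.allowance -= nbytes
                return
            time.sleep((nbytes - self.allowance) / self.rate)

throttle = Throttle(rate_bps=512 * 1024)  # example: cap replication traffic at 512 KB/s
# call throttle.wait_for(len(chunk)) before each send over the replication connection
```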
Files that are in the destination directory, but not the source directory, are orphan files. You can choose to ignore, log, or delete such files that are identified in the destination path; these settings are configured in the Orphan Files tab of the Replication Set Properties.
To configure Orphan File settings, see Configure Orphan File Processing for step-by-step instructions.
To view Orphan Files, see View Orphan Files for step-by-step instructions.
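The ignore, log, and delete choices offered in the Orphan Files tab map naturally onto a small routine like the sketch below. The directory-walk comparison is an assumption about how orphans could be detected, used here only to make the policy concrete; it is not a description of CDR internals.

```python
import os

def find_orphans(source_root, dest_root):
    """Relative paths present under the destination but absent from the source."""
    source = set()
    for dirpath, _, names in os.walk(source_root):
        for name in names:
            source.add(os.path.relpath(os.path.join(dirpath, name), source_root))
    orphans = []
    for dirpath, _, names in os.walk(dest_root):
        for name in names:
            rel = os.path.relpath(os.path.join(dirpath, name), dest_root)
            if rel not in source:
                orphans.append(rel)
    return orphans

def handle_orphans(orphans, dest_root, policy="log"):
    """Apply one of the three policies offered in the Replication Set Properties."""
    for rel in orphans:
        if policy == "log":
            print("orphan:", rel)
        elif policy == "delete":
            os.remove(os.path.join(dest_root, rel))
        # policy == "ignore": leave the file in place
```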
Things to Consider
A file that is created on the source and then deleted before it has been replicated will still be created on the destination and then deleted. This is because both the creation and deletion of the file are captured in the log file, which is replayed on the destination computer. These are not treated as Orphan Files.
A renamed file will be replicated to the destination as a new file. The previous copy with the old name will remain on the destination and be treated according to your Orphan Files settings.
If you change the orphan file settings for an existing Replication Set, the change will only affect Replication Pairs that are created after the change, or Replication Pairs that are aborted and restarted. Currently active Replication Pairs will not be affected by the change until they are aborted and restarted.
It is strongly recommended that you do not replicate to the root of the destination filer or the filer volume. If for any reason you need to replicate to the root of the volume, ensure that Orphan File Processing is turned off in the Replication Set Properties.
Data Replication Monitor
Replication is a continuous activity, and details of ongoing data replication activity are shown in the Data Replication Monitor in the CommCell Console.
For more detailed information about Job Phases and Job States, see Monitoring Data Replication.
All other job-based activity, such as Recovery Point creation, is reflected in the Job Controller. See Controlling Jobsin Job Management for comprehensive information.
Out of Band Sync
In cases where large amounts of data must be transferred from the Source computer to the Destination computer during Baselining, but the connection between the source and the destination is constrained, such as a slow WAN connection, you may not want to begin replication using the Baselining Phases. You may prefer, for instance, to back up the source and restore it to the destination to effect the initial transfer of data.
To perform the initial transfer of data without using baseline, see Out Of Band Sync from the Replication Set for step-by-step instructions. After the transfer of data, start the Replication Pair with Start, so that only the data that is new or modified since the backup will need to be replicated.
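Conceptually, seeding out of band amounts to placing a full copy of the data on the destination by any faster means (restored backup media, a shipped disk), noting when the seed was taken, and then letting replication carry forward only the changes made after that point. A hypothetical local-restore sketch, where all paths are assumptions:

```python
import shutil
import time

# Backup media restored and mounted locally on the destination computer.
# dirs_exist_ok requires Python 3.8 or later.
shutil.copytree("/mnt/restored_backup", "/replica/dest", dirs_exist_ok=True)

seed_time = time.time()  # record the seed point; only changes made on the source
                         # after this moment still need to cross the WAN
```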
Replicate the Destination Data Back to the Source Computer (Windows Only)
It is recommended that you keep the following in mind when replicating data back to the source computer:
If data has been damaged on the source computer, perform a Copyback from the Live Copy on the destination, without Overwrite existing data... selected. See Copy Back File System Data from a Recovery Point or the Live Copy.
In a case of failure of the source computer, the Replication Pair(s) can be aborted, and the data on the destination computer can be used as the primary data set. Once the problem is solved on the original source computer, the Replication Pair(s) can be created in reverse, replicating the new and modified data back to the source computer, using Smart Re-Sync.
To limit the replication to only the data newly created or modified on the replica while it was being used as the production data set, you must save the current USN (Update Sequence Number) on the destination volume(s) before actually using them as the production data set. This ensures that when you later start the Replication Pair to replicate data back to the source computer, CDR can use Smart Re-Sync, beginning from the saved USN.
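In other words, the saved USN acts as a change-journal checkpoint: the later resync replays only entries recorded after it. A simplified Python sketch of the checkpoint idea (reading the actual NTFS change journal requires Windows APIs and is omitted here):

def changes_since(journal, saved_usn):
    """Yield (usn, path) entries recorded after the saved checkpoint."""
    for usn, path in journal:  # journal entries are ordered by USN
        if usn > saved_usn:
            yield usn, path

journal = [(101, "a.txt"), (102, "b.txt"), (103, "a.txt")]
to_replicate = list(changes_since(journal, saved_usn=101))  # b.txt, a.txt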
To Replicate the Destination Data Back to the Source Computer, see Replicate the Destination Data Back to the Source Computer for step-by-step instructions.
It is recommended that you keep the following in mind when performing data replication:
Ensure that the destination volume has sufficient space for all the data that will be replicated to it. If you are replicating data from multiple source volumes to the same destination volume (Fan-In), ensure that the destination volume is sufficiently large for the data which will be replicated from all the source volumes. If you are creating Recovery Points, you must also account for the space requirements of the snapshots that will be created on the Destination; see Recovery Points - Snapshot space requirements.
Individual failed files or folders will not necessarily fail the replication job; such individual failures may simply be logged while the data replication job continues. Check the logs periodically for such failures. See View the Log Files of an Active Job. In some cases, failures during replication may have an underlying cause which would in turn cause CDR to switch to SmartSync, or abort replication altogether.
In a case of failure of the source computer, the data on the destination computer can be used temporarily as the primary data set. Once the problem is solved on the original source computer, the new and modified data can be replicated from the destination computer back to the source computer. For more information, see Replicate the Destination Data Back to the Source Computer.
If a SAN volume that is a source for any Replication Pair(s) is disconnected and re-connected again, you must abort and restart at least one of the Replication Pairs on the source computer.
On both the source and destination computers, it is recommended that you Configure Throttling for CDR Replication Activities.
It is recommended that you also configure alerts. For more information, see Alerts and Notifications.
When you replicate data that was encrypted on the source computer, it will not be accessible on the destination computer. To access the data, you must use Copyback to recover the data to the source computer, where you will be able to access it with the proper permissions. On the source computer, if you remove the encryption from the data after it has been replicated, the data will not be replicated again, so it will remain encrypted on the destination. Encrypted files are replicated in the Baseline and the SmartSync phases.
The virtual memory paging file (pagefile.sys) must be configured on a local, fixed disk.
When replicating application data, see Change Account for Accessing Application Servers.
When using VSS on a source computer, it is recommended that you also see Configuring Space Check for ContinuousDataReplicator Agents and Configuring Alerts for Low Disk Space, which provide warning when the source computer is running out of disk space; running out of disk space will ultimately cause replication activity to be System Aborted.
If Windows compression is set at the root level of a drive letter, the compressed files will be replicated to the destination as uncompressed files.
Sparse file attributes are not transferred during the Baselining and SmartSync phases; the files assume the attributes of regular files on the destination. During the Replicating phase, sparse files do retain their attributes on the destination.
To replicate files with non-ASCII character names, perform the procedure detailed in Configuring the Locale for Non-ASCII Characters.
Cross Platform Replication
Cross-platform data replication between Unix variants is now supported. For example, you can replicate data from an AIX source computer to a Solaris destination computer, from Solaris to Linux, and so on. However, ACLs and Extended Attributes will be lost.
Additional Setting for Data Replication
Use the following Additional Setting to modify the default behavior of data replication:
Additional Setting: nDoNotReplicateACLs
Category: Access Control Files
Description: For Windows, the nDoNotReplicateACLs additional setting can be used to disable the replication of the security stream of files. This stream includes user and group access control list (ACL) settings for file access. If this additional setting is not present, ACLs will be replicated.
In a sophisticated cyber-espionage campaign, Russian hackers succeeded in infiltrating Microsoft’s email systems. This breach was not the result of a frontal assault on Microsoft’s defenses; instead, it showcased the guile of experienced hackers. By exploiting vulnerabilities in widely used software protocols, the attackers were able to surreptitiously insert malicious code into the company’s routine operations.
Establishing a Foothold
Once they had established a foothold, the hackers leveraged this position to gain deeper access. They effectively camouflaged their presence, making it appear as if the activities were legitimate operations conducted by the company’s own employees. Through this method, they managed to exfiltrate sensitive material, including crucial passwords that potentially compromised not just Microsoft but a range of U.S. government agencies.
The Aftermath and Response
The discovery of such a breach set off alarm bells within the cybersecurity community. The Cybersecurity and Infrastructure Security Agency (CISA) issued a directive, initially kept out of the public eye, requiring all affected agencies to change their authentication details immediately. The directive also urged these bodies to investigate the full reach of their data exposure. An intense security overhaul followed within these agencies as they raced to bolster their digital defenses and mitigate any damage done.
The Persistent Threat of Cyber Espionage
This incident is a stark reminder of the persistent threats in our interconnected digital world. It underscores the importance of robust cybersecurity measures and constant vigilance, as nation-state actors continue to target invaluable digital assets for their strategic gains. | <urn:uuid:0f193aa0-795d-4ae4-abf4-798d67d804a6> | CC-MAIN-2024-38 | https://governmentcurated.com/government/how-did-russian-hackers-breach-microsoft-emails/ | 2024-09-13T14:56:43Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651523.40/warc/CC-MAIN-20240913133933-20240913163933-00105.warc.gz | en | 0.945673 | 317 | 2.59375 | 3 |
This is a guest post for the Computer Weekly Developer network written by Owen Richards, co-founder and head developer of Big Lemon.
Big Lemon is a software development agency based in South Wales. The company solves problems through digital solutions, designing and building software, apps and bots.
Richards writes as follows…
Climate change is rightly gathering an increasing focus in today’s news cycle.
From eating less meat to cutting out plastic, people are looking to the things they can do to reduce their carbon impact.
One demographic that has historically been less efficient at managing its impact is students. The average student uses roughly £500 worth of energy and water a year, which is more than the average homeowner. When you consider they are only likely to be living in student accommodation for eight months of that year, you realise that's a lot.
We know that students disproportionately use a lot of energy, but what isn’t really known is why. Some accommodation providers think it’s because contracts are often ‘all-in’ so the costs are hidden. Others think it’s because this is the first time students are out on their own and are still learning how to manage living by themselves.
We just don’t have the data.
So what do you do when you have a problem that needs to both gather information, but also offer a solution? You call in the developers…
Developing a solution
The Student Energy Project is an initiative run by the energy management company, Amber Energy. The project aims to collect habitual data from students, to learn where they are using the most energy, and then provide feedback and incentives (such as free pizza, students love pizza, right?) to reduce that energy consumption.
The project works by collecting data from accommodation meters, combining it with historical data such as weather, temperatures and expected usage, and then adding habitual data from the students, such as how often they boil the kettle or run water. In the future it will go on to gamify the feedback, encouraging students to compete with each other on how efficient they can be, through league tables and rewards.
We joined the project to help build and develop the platform.
Initially, it was just a web-based project with a poor interface, and considering the amount of data it was going to need to manage, it lacked the future-proofing required to handle the backend processes.
First, we mapped out how the users would utilise the tool and quickly decided that whilst the web app was ok, it needed to be more efficient and the project needed its own native app.
So we went about building the API and the backend to transfer all the data across. We also built the native app alongside the new web app, to make cross-over functionality easier in the future.
We realised that we were going to need to build something that could handle live data, so we decided to use Meteor as the framework for the backend, which uses the Distributed Data Protocol (DDP) – something that was quite new to us at the time. I’d recommend anyone looking to build real-time web applications to check out Meteor. The documentation is really accessible and the fact that it works out of the box with React was a massive plus for us, as well as having their own Meteor-dedicated hosting, Galaxy.
There were several challenges in the development process, but one of the biggest was the sheer number of data points.
For example, some meters in student accommodation are per room, others are per site. Some don't have gas, water and electric meters; some only have water and gas, and so on. So mapping out the different ways all the data connects was tricky, and that needed to be reflected in the UI. There's no point asking a user for information they don't have, and given the variance, we had to make sure we were asking the right users the right things.
How can this make a difference going forward?
The applications are now live with over 700 students signed up in the first couple of weeks. As the project goes forward and we gather more data, we are going to be able to build a clearer picture of student energy usage.
The app itself is also going to evolve as we gather more data. For example, when students are regularly submitting habitual data and we have a better understanding of what behaviour impacts consumption the most, we can offer the suggestions or incentives that are going to have the biggest impact.
Essentially the app will become more intelligent; it’s going to be able to learn that someone showers once every day or they always charge their phone overnight, and be able to give stats and feedback that are more personal and appropriate to each user.
There’s also scope to expand the development to other sectors, including in the workplace. Whilst we can’t control government policy or conglomerate’s resources, as developers, we can look at how our platforms, apps and tools shape behaviour. Tech for good is a growing ethos, and it will be interesting to see what the industry can develop to try and solve global problems.
To take a look at the app yourself, you can go to – www.studentenergyproject.com | <urn:uuid:ed8a33c6-8d65-4a79-8fad-88f65428dd2f> | CC-MAIN-2024-38 | https://www.computerweekly.com/blog/CW-Developer-Network/Big-Lemon-How-developers-are-helping-change-student-energy-consumption | 2024-09-14T21:51:00Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651580.74/warc/CC-MAIN-20240914193334-20240914223334-00005.warc.gz | en | 0.971588 | 1,082 | 2.59375 | 3 |
How Web 3.0 differs from 2.0? This question is asked frequently when we come across Web 3.0 topics. In this article, we will explain to you what is web 3.0 and how it differs from web 2.0.
Web 2.0 and Web 3.0 are successive versions of the Web. Web 1.0 emerged in the early 1990s; today's web, the one we are familiar with, is Web 2.0. However, today's web is relatively static and has limitations when it comes to individual needs. When we use the internet, we rely on the WWW, the World Wide Web, to retrieve information.
Tim Berners-Lee, a computer scientist, developed the World Wide Web at CERN, the European research organization. In October 1990, he introduced the three main technologies, as well as the first web browser, named WorldWideWeb.app. The three technologies that became the foundation of the Web are as follows:
- HTML: HTML is the markup language used to create web pages. The full form of HTML is Hypertext Markup Language.
- URI or URL: A URI or URL is the unique address given to each resource available on the internet. The full form of URI or URL is Uniform Resource Identifier or Locator.
- HTTP: HTTP, or Hypertext Transfer Protocol, is the protocol that allows resources to be retrieved over the internet.
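These three pieces work together every time a page is fetched: the URL names the resource, HTTP retrieves it, and HTML describes how to render it. A short Python example (the URL is just a placeholder):

from urllib.request import urlopen

# The URL identifies the resource; HTTP transfers it; the body is HTML.
with urlopen("https://example.com/") as response:
    html = response.read().decode("utf-8")
print(html[:80])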
The web pages in Web 1.0 were static, and most users stuck to features such as email and real-time news. Applications on Web 1.0 were far less interactive.
Web 2.0 was a highly successful upgrade to Web 1.0 that completely changed the way we use the internet. Web 2.0 web pages have entirely replaced the web pages of Web 1.0: they are interactive and socially interconnected, and their content is user-generated. With Web 2.0, user-generated resources become available to millions of users on the internet instantly.
Much of Web 2.0's growth has come from the majority of users being active on social media and accessing the internet on mobile devices. Android and iPhone handsets have likewise increased internet usage. Today, a handful of applications dominate the market for online activity and utility; for instance, we use platforms such as Facebook, LinkedIn, TikTok, Twitter and WhatsApp for continuous online interaction.
Millions of people earn a living from online platforms by delivering goods and services, delivering food items, and driving. Web 2.0 has become a great success for some industries, whereas others, such as retail, entertainment and media, have struggled to adapt to the web model.
Features of Web 2.0
- Users can retrieve and classify the data, and they are free to sort it.
- The content is dynamic, which means it is responsive to the user's input.
- Users can comment online to the site owner about the content, and the site owner can exchange information with the user online.
Web 2.0 is full of interactive tools and platforms where users can share their opinions. Users can share their experiences through the following tools:
- Social Networking
- Voting on the web content.
- Sharing a blog.
- Social Media.
- Tagging other users on web content.
What is Web 3.0?
Web 3.0 is a decentralized web. It is based on machine learning and AI, making it more advanced than previous versions. Web 3.0 aims to support more natural, human-like interaction with the web, and it is designed to be more private for users; for instance, sites cannot easily track the activity of the users who visit them.
One of the main attractions of Web 3.0 is that assets can be digitized. With the help of blockchain technology, we can convert assets into tokens, or digital representations. We have already seen examples in cryptocurrencies and NFTs (Non-Fungible Tokens).
- Semantic Web: The semantic web is the next evolution of web technologies. Instead of searching and analysing words as keywords and numbers, the semantic web understands the meaning of words, and hence improves the web's ability to create, share and connect content.
- Artificial Intelligence: With the help of AI, computers can understand content at a near-human level, which provides faster, more relevant results. Achieving this requires combining the semantic web with natural language processing.
- 3D Graphics: Web 3.0 makes heavy use of three-dimensional graphic design. For instance, computer games, eCommerce websites, and online museum guides with 3D graphics are all common on Web 3.0.
- Connectivity: With the help of semantic metadata, information and content are more connected, so users are able to fetch all available information easily.
- Ubiquity: Web 2.0 is already ubiquitous; in other words, content and services on the internet are available everywhere, and users can access them from any device, anywhere. With the growth of IoT devices, Web 3.0 will take ubiquity to a new level.
- Blockchain: Security is stronger with blockchain technology. Users' data is protected and encrypted, which prevents companies from using that information for their own purposes. (A toy sketch of the underlying hash chain follows this list.)
- Decentralized: Decentralized data networks let users log into websites securely without being tracked.
- Edge Computing: Edge computing processes data and apps at the network edge, using devices like mobile phones, laptops, appliances, and so on.
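To make the blockchain point above concrete: each block stores a hash of its predecessor, so altering any earlier record invalidates every later one. The toy Python sketch below shows only the hash chain; real blockchains add consensus, signatures, and much more:

import hashlib, json

def add_block(chain, data):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"data": data, "prev": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(block)

chain = []
add_block(chain, "alice pays bob 5")
add_block(chain, "bob pays carol 2")
# Tampering with chain[0]["data"] breaks every later "prev" link.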
How Web 3.0 differs from 2.0
Web 3.0 is the third generation of the Web. A recent Gartner article notes that Web3 won't overtake Web 2.0 in the enterprise before the end of the decade. Compared with Web 3.0, the current Web 2.0 is relatively static and cannot adapt to individual needs; Web 3.0 is dynamic in that respect, letting every individual interact with the internet on their own terms. Web 3.0 is also more secure than Web 2.0: data can be stored in a decentralized system, which reduces the risk of data leaks.
Tech Note 0035
Linux Performance Tuning
Optimizing network throughput for Linux systems
Most Linux distributions include default configurations which are not optimized for high performance. For network speeds above a few hundred megabits per second, these legacy settings can severely impair throughput. Check the settings below whenever installing MTP/IP software on a system where such speeds are expected. This advice is specific to Linux systems. Advice for improving performance on all platforms can be found in Tech Note 0023.
MTP/IP software uses the UDP/IP packet format to provide network and operating system compatibility. The UDP/IP buffer size determines how much data the operating system will store while handling other I/O operations. Many Linux distributions limit UDP buffer sizes to just 128 kilobytes: enough for only 1 millisecond of data on a gigabit network. Such small buffers can lead to high packet loss and limited network throughput.
Check the current UDP/IP buffer limit by typing the following commands:
sysctl net.core.wmem_max
sysctl net.core.rmem_max
If the values are less than 2097152 bytes, you should add the following lines to the /etc/sysctl.conf file:
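net.core.wmem_max=2097152
net.core.rmem_max=2097152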
Changes to /etc/sysctl.conf do not take effect until reboot. To update the values immediately, type the following commands:
sudo sysctl -w net.core.wmem_max=2097152
sudo sysctl -w net.core.rmem_max=2097152
Increasing these limits will not affect most applications. Only applications which specifically request larger buffers are affected. MTP/IP will request the largest buffer size available, up to 2 megabytes. For more information about UDP buffer tuning, see Tech Note 0024.
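For example, an application opts into larger buffers with the SO_SNDBUF and SO_RCVBUF socket options, and the kernel caps the request at the limits above. A brief Python illustration:

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Request 2 MB buffers; the kernel grants at most wmem_max/rmem_max.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 2 * 1024 * 1024)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 2 * 1024 * 1024)
# Note: Linux reports back double the granted value for bookkeeping.
print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))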
File Write Cache
Linux delays writing data to storage until 10% of RAM is filled and will freeze all storage access for flushing when 20% of RAM is filled. Systems with large amounts of RAM may experience extremely inconsistent file performance: very fast until the cache fills up, then crippling freezes while it empties. This is especially problematic when network speeds are much faster than storage speeds as gigabytes of data can become cached requiring many seconds or even minutes to flush. During flushes, network performance may be impaired or suspended, resulting in overall poor throughput and dropped connections. To ensure consistent high performance, these caches should be reduced so that storage can flush them quickly.
Storage caching is controlled by the sysctl variables vm.dirty_background_bytes and vm.dirty_bytes. For networks with a high bandwidth delay product, these should be set to two and four times the bandwidth delay product, respectively. The following values are good for most high-speed networks. Avoid going much lower unless RAM is extremely limited.
sudo sysctl -w vm.dirty_background_bytes=125000000
sudo sysctl -w vm.dirty_bytes=250000000
To ensure that these values persist across restarts, add the following lines to /etc/sysctl.conf:
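vm.dirty_background_bytes=125000000
vm.dirty_bytes=250000000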
Changing these values may reduce performance for local applications that infrequently write amounts of data larger than vm.dirty_background_bytes but smaller than 10% of RAM; however, it will greatly reduce the chances of data loss in case of a system crash. See Tech Note 0023 for advice on improving the performance of the storage hardware.
NFS Mount Options
Performance for the NFS network attached storage protocol varies greatly depending on the version and configuration. In some cases, there may be trade-offs between performance and reliability. When using NFS with a NAS device, consult the device vendor for performance advice. Following are general guidelines for both NFS devices and servers:
Update to the most recent operating system and NFS versions available. Legacy NFS implementations may contain serious performance and file integrity bugs. Verify that all systems are running an adequate number of biod and nfsd processes.
Use NFS over UDP when possible.
Use mount options along the lines of the following, adjusting the sizes to the largest values your system supports:
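udp,rsize=32768,wsize=32768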
Older systems may limit rsize and wsize to 8K. Use the largest value available, up to 32K. For some NFS servers, adding the no_wdelay option may further improve performance, so test with and without it.
Some NFS servers and devices have limited support or poor performance for file locks. The ExpeDat and SyncDat server, servedat, uses such locks by default when running on Linux. If you are receiving otherwise unexplained Permission Denied errors when uploading to NFS, try setting the NoEXCL option.
Tuning NFS performance can be a complex endeavor, especially when dealing with legacy or highly customized systems, or specialty hardware. See Tech Note 0029 for general guidance on tuning network attached storage.
CPU Scaling

Modern CPUs are able to adjust their performance to conserve energy, and most Linux distributions enable that energy-saving mode by default. This can impair network throughput, especially for multigigabit networks. To ensure maximum performance, scaling_governor must be set to performance for every CPU core. This must be done on the bare-metal operating system, as virtual machines do not have direct access to these hardware settings.
While some distributions include utilities for easily managing CPU performance, the instructions below use the most common technique of adjusting each core separately via /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor. To view the current settings, type:

cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

To change to performance mode, you must type a separate command for each CPU core. For example:

echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor
...
To ensure that these changes persist across reboots, add the echo commands to /etc/rc.local, creating that file if necessary. Verify the settings after reboot as the requirements for different Linux distributions may vary.
Setting scaling_governor to performance may increase energy use and heat output of the host hardware. Adjusting scaling_governor is not usually necessary for network speeds of 1 gigabit per second or slower.
Tech Note History
Jan 28, 2020: Clarified recommended vm.dirty values
Nov 15, 2019: Updated NFS options
Feb 23, 2018: First post
Technology tandems aren't very common, but occasionally you do find two techs that can't live without each other. The relationship between operating systems and CPUs is entirely co-dependent: each needs the other.
Another example of that is the simultaneous growth of edge computing, sometimes called fog computing by people trying to be clever, and the Internet of Things (IoT). Due to the design of IoT, it needs edge computing to maximize its potential, and both techs are in their very early stages.
Both edge computing and fog computing are strongly on the rise for the same exact reason: an IoT data deluge. A report by Hitachi Vantara estimates that connected cars, with their continuous monitoring of all systems, will generate 25GB of data every hour.
“IoT is the next generation of endpoints,” said Lazarus Vekiarides, CTO of cloud storage provider ClearSky Data. “If you think about the sheer volume of things out there that could be generating data the number is enormous, much larger than the number of humans or cell phones.”
Mind you, a car at least has the power (via the engine) to handle some compute functions. A wearable device or remote sensor won't, because computation means power consumption, and if the device is operating on a battery, that means a shorter battery life.
So for the sake of the IoT devices, computing needs to be moved off the device and onto the server. While there are some truly massive data centers around the U.S. and a growing number around the world, they would be overwhelmed with data coming in from cars all around the country.
Edge computing, therefore, serves a second purpose: offloading work from central data centers. Car data generated in Los Angeles can be processed in edge computing centers in Los Angeles rather than being sent to a Utah or Iowa data center. The reason is latency: even with the fast data centers and private backbones many are building, latency is still an issue.
“When you need to process data from millions of devices you do it in the cloud. The problem is the cloud is usually far away so that presents a fair amount of latency, and the volume of data doesn’t lend itself well to going up to the cloud,” said Vekiarides.
The design of the Internet is the inverse of IoT. The use case today is downloading something from an origin, like watching a movie on Netflix. Your home broadband connection is likely 10mbits to 20mbits down but 1-2mbits up. You click on a link on YouTube, which sends a few bits to a server, and a multigigabyte video is sent down to you.
The structure of IoT works in reverse. The endpoints are sending up massive amounts of data rather than receiving it. So the very design of the Internet is not in IoT’s favor.
Then there’s the third issue: storage. A wearable device like Fitbit is not going to store much. Even a few flash memory chips will add up to take up a lot of space and consume power. Plus, Vekiarides notes, storage has a much higher failure rate than CPUs and memory, and he would know given his business.
For something like IoT, where you collect huge volumes of data, you need two things: a local analysis component and central storage. So again, this is where edge computing fits the bill for IoT.
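In practice, that local analysis component typically reduces raw readings to compact summaries before anything crosses the WAN. A Python sketch of the pattern (names and fields are illustrative, not from any particular product):

import statistics

def summarize(readings):
    """Collapse a window of raw sensor samples into a small record to upload."""
    return {
        "count": len(readings),
        "mean": statistics.fmean(readings),
        "min": min(readings),
        "max": max(readings),
    }

window = [71.2, 71.4, 98.6, 71.3]  # raw samples stay at the edge
payload = summarize(window)        # only this summary goes to the cloud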
What Is Edge Computing Anyway?
Edge computing is defined by Wikipedia as “a method of optimizing cloud computing systems by performing data processing at the edge of the network, near the source of the data.” So it is about creating a sort of landing platform that’s nearby to the endpoints where all the data can be staged for some amount of computation and the latency problem can be addressed.
This means many more locations than most cloud providers have now, and for private firms it means more remote servers outside of their data center(s). Even a large scale data center provider like Equinix has only a dozen locations around the U.S. and 44 worldwide.
In many ways an edge site looks a lot like a standard data center and uses all of the same hardware; the only real difference is its higher rate of distribution. Major data center providers like Equinix, CoreSite, Digital Realty and others are setting up dedicated hardware in their co-location centers specifically for edge computing use.
For business users, Amazon is about to one-up the colo providers with Greengrass, which puts an edge computing center in your remote office or location. You can deploy it on something as small as a Raspberry Pi or as large as an x86 tower server. Amazon developed it with a number of partners, including Intel, Qualcomm, and Samsung, and it uses Amazon's Lambda serverless service to deliver local machine-to-machine communication. So a remote office or factory floor can deploy its own private edge computing server for local devices.
No Standard Design
IoT and edge computing have something in common: they are both works in progress. IoT was defined fairly early, but it took a while to catch on. Early attempts, like wearables, didn't go over well because they weren't very good and people didn't like wearing them. With further development, embedded devices have gotten better: each new generation of cars becomes more computerized, smartphones gain new sensors, and some wearables are actually useful.
However, IoT and edge computing remain a work in progress. Vekiarides notes that it’s difficult to generalize use cases or core apps. “Every single one of these IoT apps is a snowflake, with its own unique requirements, there is no standard for IoT or edge computing yet because GM cars are unique to Ford, and factories have their own sensors,” he said.
Platforms are starting to emerge that are similar to the messaging middleware that first emerged in the 1990s, like CORBA and SOA. They are little more than standards for passing messages back and forth between devices and where the processing is done, but it’s a start.
And like every emerging standard there are lots of contenders for the crown. Gartner’s Hype Cycle for IoT Standards and Protocols lists 30 would-be standards, half of them expected to deliver some kind of business benefit. They cover IoT security, device management and sensor input.
Obviously a great deal of buildout is needed. Even with a dozen major cities covered, Equinix has a way to go, as do its competitors. For smartcars to work in major metropolitan areas, significant edge computing will be needed, whether it’s from the car makers themselves, Equinix, Amazon, or emerging micro data centers like Vapor IO and Schneider Electric.
They offer ruggedized data centers about the size of a car, placed at a cell tower with a wired connection to the tower, to take in 4G data sent from local devices and process it there or forward it on to a data center. The research firm MarketsandMarkets believes the micro data center sector could be worth $32 billion over the next two years.
And more compute is needed in the data centers as well. Vekiarides said he’s heard anecdotally of some cloud customers not being able to get the compute resources they need. The problem will only worsen when millions of cars are sending data in to the data centers.
And of course, the network itself needs to get faster. Some of the major cloud providers like Google and Amazon have built their own networks for maximum throughput but for the masses, you are competing with Netflix and YouTube, which combined consume more than 40 percent of Internet bandwidth, and Netflix during peak hours is as much as 70 percent of Internet traffic.
That can be solved in part with the advent of 5G, which will be up to 10 times faster than 4G. That will take a lot of the load off the edge networks. It’s scheduled for testing starting this year, and rollout will take several years because these networks are not cheap to build.
So overall, edge computing and IoT are still in their infancy but growing together. | <urn:uuid:79a9f129-b2f9-48b6-a0a9-c60a7d218aae> | CC-MAIN-2024-38 | https://www.datamation.com/cloud/why-edge-computing-and-iot-go-hand-in-hand/ | 2024-09-16T00:13:18Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651668.26/warc/CC-MAIN-20240915220324-20240916010324-00805.warc.gz | en | 0.955025 | 1,708 | 3.078125 | 3 |
Remote Code Execution (RCE) is an attack that allows hackers to remotely execute malicious code on a computer. The impact of an RCE vulnerability can range from malware execution to an attacker gaining full control over a compromised machine. RCE attacks can achieve a variety of hacking goals including:
- Initial Access: RCE attacks commonly begin as a vulnerability in a public-facing application that grants the ability to run commands on the underlying machine. Attackers can use this to gain an initial foothold on a device to accomplish these attacks listed next.
- Escalation of Privileges: Internal vulnerabilities within a server often remain unpatched. This may not put the system at risk from external hackers on its own, but once an adversary gets an interactive login via an RCE exploit, they can try to escalate privileges from within the server or system, as explained in this Linux hacking article.
- Information Disclosure: RCE attacks can be used to install data-stealing malware or to directly execute commands that extract and exfiltrate data from the vulnerable device. This can range from simple unencrypted data exfiltration to sophisticated memory-scraping malware looking for passwords in memory.
- Denial of Service: An RCE vulnerability allows an attacker to run code on the system hosting the vulnerable application. This could allow them to disrupt the operations of this or other applications on the system.
- Cryptomining: Cryptomining or cryptojacking malware uses the computational resources of a compromised device to mine cryptocurrency. RCE vulnerabilities are commonly exploited to deploy and execute cryptomining malware on vulnerable devices.
- Ransomware: Ransomware is malware designed to deny a user access to their files until they pay a ransom to regain access. RCE vulnerabilities can also be used to deploy and execute ransomware on a vulnerable device.
While these are some of the most common impacts of RCE vulnerabilities, an RCE vulnerability can provide an attacker with full access to and control over a compromised device, making them one of the most dangerous and critical types of vulnerabilities.
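As a deliberately simplified illustration, evaluating untrusted input is a classic RCE-prone pattern; parsing it as data instead removes the code-execution path. This toy Python example is illustrative only:

import ast

user_input = '__import__("os").system("echo pwned")'

# Vulnerable: eval() would execute arbitrary attacker-supplied code.
# result = eval(user_input)

# Safer: ast.literal_eval accepts only literals and raises on code.
try:
    result = ast.literal_eval(user_input)
except (ValueError, SyntaxError):
    result = None  # reject anything that is not plain data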
What does this mean for an SMB or MSP?
Additional Cybersecurity Recommendations
Additionally, the recommendations below will help you and your business stay secure against the various threats you may face on a day-to-day basis. All of the suggestions listed below can be gained by hiring CyberHoot's vCISO services.
- Govern employees with policies and procedures. You need a password policy, an acceptable use policy, an information handling policy, and a written information security program (WISP) at a minimum.
- Train employees on how to spot and avoid phishing attacks. Adopt a Learning Management system like CyberHoot to teach employees the skills they need to be more confident, productive, and secure.
- Test employees with phishing attacks so they can practice. CyberHoot's phish testing allows businesses to test employees with believable phishing attacks and place those who fail into remedial phish training.
- Deploy critical cybersecurity technology including two-factor authentication on all critical accounts. Enable email SPAM filtering, validate backups, deploy DNS protection, antivirus, and anti-malware on all your endpoints.
- In the modern Work-from-Home era, make sure you’re managing personal devices connecting to your network by validating their security (patching, antivirus, DNS protections, etc) or prohibiting their use entirely.
- If you haven’t had a risk assessment by a 3rd party in the last 2 years, you should have one now. Establishing a risk management framework in your organization is critical to addressing your most egregious risks with your finite time and money.
- Buy Cyber-Insurance to protect you in a catastrophic failure situation. Cyber-Insurance is no different than Car, Fire, Flood, or Life insurance. It’s there when you need it most.
All of these recommendations are built into CyberHoot the product or CyberHoot’s vCISO Services. With CyberHoot you can govern, train, assess, and test your employees. Visit CyberHoot.com and sign up for our services today. At the very least continue to learn by enrolling in our monthly Cybersecurity newsletters to stay on top of current cybersecurity updates.
To learn more about Remote Code Execution (RCE), watch this short 3-minute video:
CyberHoot does have some other resources available for your use. Below are links to all of our resources, feel free to check them out whenever you like:
- Cybrary (Cyber Library)
- Press Releases
- Instructional Videos (HowTo) – very helpful for our SuperUsers!
Note: If you’d like to subscribe to our newsletter, visit any link above (besides infographics) and enter your email address on the right-hand side of the page, and click ‘Send Me Newsletters’. | <urn:uuid:52272122-99f2-4312-8aca-a81e9f2b43bf> | CC-MAIN-2024-38 | https://cyberhoot.com/cybrary/remote-code-execution-rce/ | 2024-09-18T11:24:54Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651895.12/warc/CC-MAIN-20240918100941-20240918130941-00605.warc.gz | en | 0.911311 | 1,025 | 2.71875 | 3 |
In this video, researchers describe the Jetstream project at Indiana University. Jetstream is a user-friendly cloud environment designed to give researchers access to interactive computing and data analysis resources on demand, whenever and wherever they want to analyze their data. It will provide a library of virtual machines designed for discipline-specific scientific analysis. Software creators and researchers will also be able to create their own customized virtual machines, or their own private computing systems, within Jetstream.
Jetstream will feature a web-based user interface based on the popular Atmosphere cloud computing environment developed by the iPlant Collaborative and extended to support science and engineering research generally. The operational software environment will be based on OpenStack.
The computing environment will consist of two homogenous clusters at Indiana University and TACC with a test environment at the University of Arizona. The system will provide over 0.5 PetaFLOPS of computational capacity and 2 petabytes of block and object storage. The individual nodes will contain two Intel “Haswell” processors, 128 GB of RAM, 2 terabytes of local storage, and 10 gigabit Ethernet networking. The system will leverage 40 gigabit Ethernet for network aggregation and each of the production clusters will connect to Internet2 at 100 Gbps. The physically distributed system will allow it to be highly available and resilient even if there were to be some sort of event that made one of the two sites inoperable.
Funded by the US National Science Foundation and set to launch on January 20, 2016, Jetstream results from a partnership between Indiana University’s Pervasive Technology Institute, the University of Texas at Austin’s Texas Advanced Computing Center (TACC), the Computation Institute at the University of Chicago, the iPlant Collaborative at the University of Arizona, and the University of Texas, San Antonio.
Jetstream is scheduled to enter production sometime in February, 2016. | <urn:uuid:a71a193b-2c2c-490c-be6b-163ce0e51022> | CC-MAIN-2024-38 | https://insidehpc.com/2016/01/jetstream-cloud/ | 2024-09-18T11:05:06Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651895.12/warc/CC-MAIN-20240918100941-20240918130941-00605.warc.gz | en | 0.913363 | 385 | 2.75 | 3 |
According to the Defense Advanced Research Projects Agency (DARPA), it takes security pros an average of 312 days to discover software vulnerabilities and the viruses, malware, and other attacks that exploit them. In hacker time, that's a virtual eternity in which bad actors can wreak havoc within infected systems and steal information, all without being noticed.
DARPA would like to see that 312 days reduced to a matter of weeks, days—even seconds. How? By putting automated machines to the task.
That was the idea behind the DARPA Cyber Grand Challenge (CGC), a first-of-its-kind cyber capture the flag (CTF) competition in which the competitors were machines, not humans. Spanning two years, the project began in 2014 with 100 teams attempting to program computers to play the game. The challenge culminated in August 2016 in a final showdown among the top seven finalist teams. Individual, fully autonomous “cyber reasoning systems” competed, each composed of 2560 CPU cores and 16TB of RAM, along with automated fuzzing, symbolic execution, analysis, and management software.
DARPA’s objective? To improve the state of cyber security “…by developing automated, scalable systems able to find and fix software vulnerabilities at machine speed,” DARPA director Arati Prabhakar was quoted as saying in a DARPA press release1. The current process for finding exploitable vulnerabilities and bugs in software is not automated. Security professionals spend thousands of hours searching millions of lines of code to find and patch software flaws. “Our goal is to break past the reactive patch cycle we’re living in today,” Prabhakar added.
For two decades, security pros have been honing their bug-hunting skills in CTF competitions at network security and hacking conferences. Teams of human players face off in a head-to-head race to discover, diagnose, and fix software flaws in real time. Each player controls a server (host) running an identical copy of unexplored code. During the game, players are given small, original programs called “challenge binaries” that contain vulnerabilities and flags to be protected or captured. Players must protect their digital flags by patching their own server software, keeping it healthy and functional, and scanning for and attacking opponents’ vulnerabilities to capture their flags. Players earn points for defending their server code and keeping their flags safe, keeping their software available and functioning normally, and capturing opponents’ flags; they lose points for damaging their own software and for losing flags.
Cyber CTF is a game of strategy and tactics that requires analytic skills, speed, and perseverance. When players find a vulnerability, they must decide what to do: Patch immediately? Don’t patch and watch the network? Scan opponents first? Tell no one? Build an obfuscated defense? In the real world today, it’s human players, not machines, who wrestle with and make these decisions.
The implications of automated machines someday being able to handle these decisions are enormous. The possibility of dramatically reducing the time it takes to find and fix software vulnerabilities is critical in a world where every conceivable object is being connected to the Internet. That includes everyday conveniences such as household appliances and cars as well as critical systems such as power grids, traffic lights, water supplies, and air traffic control systems.
In the mid-2000s, it was estimated that the world was running a trillion lines of code. The Internet of Things means a vast increase in the number and types of devices becoming “connected.” New code for these devices is being written every day—quite often by engineers who have little experience writing secure code to operate on the hostile Internet. That means vulnerabilities and attacks have the potential to expand at an alarming rate as more and more connected devices are produced. We are already seeing the results of this with DDoS attacks launched by IoT devices reaching above 600 Gbps at their target and much larger at their source.
So, what were the results of the CGC?
The DARPA CGC finals demonstrated that automated binary vulnerability analysis is technically possible now; and as with other computational problems, it will only get faster and more capable with time. Each of the competitors blended automated fuzzing, with its library of well-researched attack patterns, and symbolic execution, with its ability to "solve for the crash," to find and fix vulnerabilities in previously unknown challenge binaries. In several cases, competitors found and exploited vulnerabilities within minutes! All of the competitors delivered many variations of each attack as well. In some cases, they found variations of attacks for existing vulnerabilities that continued to work even after the published fixes for those vulnerabilities were applied! On the other hand, a large number of known vulnerabilities in the challenge binaries were not found by any of the competitors, so none of these cyber reasoning systems yet provides a "general solution" to the problem of finding and exploiting vulnerabilities.
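Fuzzing, one half of that blend, is easy to sketch: mutate known-good inputs at random and watch for crashes. The toy Python version below conveys the idea; the competition's cyber reasoning systems added coverage feedback and far more:

import random

def mutate(data: bytes, n_flips: int = 4) -> bytes:
    buf = bytearray(data)
    for _ in range(n_flips):
        i = random.randrange(len(buf))
        buf[i] ^= 1 << random.randrange(8)  # flip one random bit
    return bytes(buf)

def fuzz(target, seed: bytes, iterations: int = 10000):
    for _ in range(iterations):
        sample = mutate(seed)
        try:
            target(sample)
        except Exception as exc:  # a crash is a lead worth triaging
            yield sample, exc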
Interestingly, while each team used a similar high-level approach to finding, fixing, and exploiting vulnerabilities, their cyber reasoning systems had different strengths and weaknesses as shown in the competition results. Some were better at attacking, some better at fixing issues, some found the most vulnerabilities, and some simply played the game better.
These results imply that there is much fertile ground available for research and for tuning or extending the various parts of these complex systems. And that network defenders and product manufacturers are fated for interesting times ahead.
The future possibility of this automation is fun to think about, and will be even more exciting to witness when it does happen. But, the world has a lot of catching up to do before this will be a reality. Automation is every tech shop’s Holy Grail, more often than not stalled by “tech debt” that causes the application to break when things are automated. In a world where most applications are a conglomeration of old and new code bases, sometimes multiple languages, customized third-party plugins, out of date OSs, and lots of internal network dependencies, we might need to start fresh to make this a reality. | <urn:uuid:0d7c3c49-d2f6-4abe-85a3-4764f2ad7e03> | CC-MAIN-2024-38 | https://www.f5.com/labs/articles/threat-intelligence/darpa-proves-automated-systems-can-detect-patch-software-flaws-at-machine-speed-22613 | 2024-09-20T23:01:15Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701425385.95/warc/CC-MAIN-20240920222945-20240921012945-00405.warc.gz | en | 0.964606 | 1,253 | 2.734375 | 3 |
‘Spectre’ of a full-scale ‘Meltdown’ rears its head
There’s a lot of talk right now about a major design flaw in a huge proportion of the microprocessors used by the world’s computers. In essence, two methods of accessing sensitive information on your devices have been revealed – and they are serious.
They have been named 'Meltdown' and 'Spectre', and between them they affect nearly every computing device made in the last 20 years. They can affect your desktop machine, your laptop, your tablet, your mobile, cloud computing platforms, backup hard drives, etc. In fact, anything that uses many forms of modern processing chip, particularly most of those made by Intel since 1995! Some other manufacturers, including AMD and the British firm Arm, are affected too, although Intel has the biggest problem.
That matters to most of us because Intel chips, as you’re probably aware from all of the big-budget advertising, are everywhere.
The risk is in the architecture of the chips, which can allow an attacker to get access to parts of memory that should have been protected but turn out not to be. The scariest part is that in many cases they can do this simply by running some code in a web browser on a page you've visited. The memory they can get access to is where the machine stores critical information such as passwords, other login credentials and the keystrokes you've made.
You don’t need to be a computing expert to know that this is a very, very bad thing. That’s why this potentially dry and techie story matters to everyone.
UPDATE 5/1/18: Apple has this morning confirmed that all of its MacBooks, iMacs, iPhones and iPads are also affected. Apple Watches are not. Meltdown will take time to deal with and there are concerns that fixing it is going to slow down all of our computers (another big deal). Spectre is a more sinister issue; it’s harder to exploit, but also harder to fix. In fact it might need a fundamental redesign of the processors and hardware replacement to completely deal with, so it could be a problem for years to come!
What Should You Do to Avoid a Meltdown?
Actually, there is no complete answer yet – and may not be for some time!
At the moment, there are only limited things you can do to protect yourself.
Producers of the leading browsers (Chrome, Firefox, Internet Explorer, Edge) are rushing out patches to add a level of defence, and you should ensure you keep yours up to date. Don't forget to check not only the favourite you use every day, but all of the browsers installed on your system. There will be system updates to address the worst of the immediate problem with Meltdown on Intel chips, but that picture is far from clear yet.
Microsoft has rushed out an emergency patch for some Windows systems, but if you’re running a third-party anti-virus solution (and most organisations and individuals are), then you might not see that yet because it doesn’t work well with all of them at this point. It’s pretty likely that many systems will need firmware updates as well as an update to the Windows software. That’s very much a system level update to the hardware on the device or machine itself and can be quite disruptive if you have lots of technology to take care of.
If your systems are maintained and updated by Bespoke Computing, we’re monitoring developments and will take care of any updates as and when they arrive – or at least once we’re sure they’re safe to apply to your hardware.
The best reassurance we have right now is that these exploits are not thought to have been used "in the wild" by attackers. But now that they are known, you can bet there are plenty of people exploring the possibilities, and proof-of-concept attacks have been demonstrated by researchers. In short, there's not a lot you can do immediately except keep on top of system updates and browser updates.
If you’re concerned about getting your software and hardware suitably updated when the fixes do emerge but are not sure how, please do give us a call to talk through how we could help you. | <urn:uuid:76949306-b20a-4e97-a9f1-4d2fc97e907f> | CC-MAIN-2024-38 | https://bespokecomputing.com/spectre-full-scale-meltdown-rears-head/ | 2024-09-07T13:30:23Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650883.10/warc/CC-MAIN-20240907131200-20240907161200-00705.warc.gz | en | 0.960315 | 915 | 2.53125 | 3 |
HFC is a telecommunications industry term for a broadband network which combines optical fiber and coaxial cable. It has been commonly employed globally by cable TV operators since the early 1990s.
A typical HFC network, described below, runs from the operator's headend through hubsites and fiber optic nodes to subscribers' homes.
The fiber optic network extends from the cable operator's headend out to a neighborhood's hubsite, and finally to a fiber optic node which serves anywhere from 25 to 2000 homes. A headend will usually have satellite dishes for reception of distant video signals, as well as IP aggregation routers. Some headends also house telephony equipment for providing telecommunications services to the community. The headend receives the video signals, adds the public, educational and/or governmental channels, and encodes, modulates and upconverts the result onto RF carriers, which are combined into a single electrical signal and inserted into a broadband optical transmitter. This optical transmitter converts the electrical signal to a downstream optically modulated signal that is sent to the nodes. Fiber optic cables connect the headend to optical nodes in a point-to-point or star topology, or in some cases, in a protected ring topology.
A fiber optic node has a broadband optical receiver which converts the downstream optically modulated signal coming from the headend into an electrical signal going to the homes. Today, the downstream signal is a radio-frequency modulated signal that typically begins at 50MHz and extends to an upper end of between 550MHz and 1000MHz. The fiber optic node also contains a reverse/return-path transmitter that sends communication from the homes back to the headend; this return signal is a modulated RF signal occupying the range from 5 to 42MHz.
The optical portion of the network provides a large amount of flexibility. If there are not many fiber optic cables to the node, wavelength division multiplexing can be utilized to combine multiple optical signals onto the same fiber. Optical filters are used to combine and split optical wavelengths onto the single fiber. For example, the downstream signal could be on a wavelength at 1550nm and the return signal could be on a wavelength at 1310nm. | <urn:uuid:25396134-2877-46d7-997a-7bcf09ff1252> | CC-MAIN-2024-38 | https://www.multicominc.com/solutions/technologies/hfc/ | 2024-09-08T18:50:08Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651017.43/warc/CC-MAIN-20240908181815-20240908211815-00605.warc.gz | en | 0.927493 | 428 | 2.921875 | 3 |
Originally, drones, also known as unmanned aerial vehicles (UAVs), were seen as high-tech gadgets for military applications. However, it did not take long for the corporate world to recognize and explore the huge potential of drones for commercial applications. As a result, investments in drones, especially commercial drones, are soaring.
A drone or UAV is basically an aircraft without a human pilot and is a component of the unmanned aircraft system (UAS). An unmanned aircraft system basically includes a UAV and a ground-based controller, along with a system of communications between the UAV and the controller.
Originally used for applications that are considered “dull, dirty, or dangerous” for humans, today, drones have found a wide range of applications in sectors like agriculture, public safety, construction, insurance, delivery and logistics, and energy & power.
Commercial drones differ somewhat from conventional drones, as they are designed specifically for commercial applications. For example, commercial drones are lighter, better equipped, have longer flight times, and can carry loads over longer distances than conventional drones.
Commercial drones also come with HD cameras and so, they can record terrains and capture high-quality photos. Such drones can help cultivators monitor their crops and evaluate crop health, so that timely interventions in the form of correct application of fertilizers and pesticides can be carried out. This, in turn, can ensure better crop management and improved yields.
Drones can also be used by farmers to collect soil data, while insurance companies can use them to inspect and evaluate damaged assets for a better and faster settlement of insurance claims. On the other hand, retail giants like Amazon are exploring commercial drones for delivering small packages over short distances.
In the construction sector, drones can facilitate the real-time inspection of all types of construction activities to improve safety and performance. Drones are also being increasingly used for maintaining public safety, especially in rescue operations, disaster response, and law enforcement purposes. In the energy and power sector, UAVs can play a huge role in facilitating real-time and high quality inspection of power lines, transmission towers, oil and gas pipelines, nuclear installations, and others.
Although a wide range of commercial UAVs are available in the market, the quadcopters have been the most popular so far. Some of the top models of commercial drones are DJI T600 Inspire, DJI Matrice 100, Hubsan H301S Spy Hawk, 3DR Solo Drone Quadcopter, and DJI Phantom 3 Professional. So, DJI is a dominant player in the commercial drones market. Apart from DJI drones, Hubsan, Parrot, Syma, Solo, and UDI are some of the bestselling drones in the market.
The market for drones is set to experience huge growth considering the emergence of newer and more dramatic uses of UAVs. However, the operation of drones, especially for commercial purposes does face some challenges. One major challenge is to get regulatory approval and the required infrastructure.
Presently, a significant part of investment in drones is directed towards untested applications and whether these applications will get government approval and the required infrastructure is not certain. Such uncertainties are definitely not good for the market.
Further, challenges related to safety, as well as conflicts with general aviation users like private, commercial, and also military aircraft, can become a major concern for the drone industry. However, the rate at which the technology is developing and also its growing applications across industry verticals are expected to help the drone industry to overcome these hurdles soon. | <urn:uuid:d28b551e-84b5-4ca2-9715-0af4cfe5c320> | CC-MAIN-2024-38 | https://www.alltheresearch.com/blog/commercial-drones-their-prospects-and-challenges | 2024-09-11T07:46:23Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651344.44/warc/CC-MAIN-20240911052223-20240911082223-00405.warc.gz | en | 0.952406 | 731 | 3.265625 | 3 |
Harnessing the power of wind to generate energy is a key component in the global effort to move towards more sustainable and environmentally friendly power sources. However, the efficiency of wind power plants is often compromised by the complexities of aerodynamic interactions between the turbines, particularly the wake effect. Recognizing the potential for optimization in wind farm design, researchers have turned to artificial intelligence (AI) to tackle this challenge.
AI-Driven Optimization of Wind Plant Layouts
The Wake Effect: Understanding and Mitigating Impact
The wake effect occurs when wind turbines extract energy from the wind, causing a reduction in wind speed for other turbines situated downstream. This not only decreases the performance of affected turbines but can also lead to increased mechanical wear due to the turbulence generated. Traditionally, the complex task of minimizing the wake effect involves maintaining considerable distances between each turbine, leading to inefficient land use and higher installation costs.
AI has emerged as a game-changer for addressing the wake effect in wind plant design. The Wind Plant Graph Neural Network (WPGNN), an AI-based surrogate model, has been developed and trained on vast amounts of data, encompassing over 250,000 different wind plant layouts. By processing this extensive dataset, WPGNN can predict and optimize turbine placement with high accuracy. It enables the model to propose arrangements that strategically angle turbines, employing wake steering techniques to reduce the impact of wakes on downstream turbines without the need for excessive spacing.
Accelerating Renewable Energy Deployment
Through the application of WPGNN, substantial land savings could be realized. Imagine future wind plants operating with up to 60% less land, resulting in a cascade of benefits such as lower operational costs, reduced environmental impacts, and the possibility to site wind plants in a wider variety of locations. Furthermore, higher energy output translates directly into increased revenue, which could significantly lower the cost of wind energy, making it an even more competitive alternative to fossil fuels.
The AI-driven model has shown encouraging results across varying regions in the United States, adapting to the unique conditions of each site. Whether it’s the rolling plains, coastal areas, or mountainous terrains, WPGNN is adept at configuring the optimal layout for maximizing energy production while considering regional wind patterns and topographical influences. The AI essentially learns regional ‘dialects’ of wind behavior, ensuring that each wind plant is speaking the language of efficiency fluently.
Extending AI Applications Beyond Wind Power
AI in Nuclear Energy: Tackling Plasma Instabilities
The scope of AI in the energy sector extends well beyond wind turbines. The same principles of machine learning and predictive modeling are finding their way into the nuclear fusion arena. Managing plasma—a hot, charged state of matter in fusion reactors—is fraught with challenges, not least of which are instabilities that can terminate the fusion reaction or damage the reactor components.
Researchers are now exploring how AI can be used to predict and control these instabilities. By training AI models with vast amounts of experimental data, machines can forecast the onset of disruptive events and adjust the magnetic fields that confine the plasma, potentially stabilizing it before instabilities become problematic. Although still in the research phase, such AI applications could greatly accelerate the development of fusion energy, providing a nearly limitless and clean source of power for the future.
The Future of AI in Renewable Energy Systems
Wind energy is a vital piece of the sustainability puzzle, yet the efficiency of wind farms can suffer due to the intricate aerodynamics involved, notably the wake effect where turbines interfere with each other’s airflow. To improve wind farm performance, researchers are leveraging artificial intelligence (AI). AI can analyze and optimize the layout of turbines to mitigate the wake effect, leading to better airflow and more power generation. This optimization also includes predictive maintenance, which reduces downtime and increases overall efficiency. The interplay between AI and wind energy not only supports the green energy transition but also enhances the economic viability of wind power plants. By integrating advanced AI algorithms into the planning and operational phases of wind farms, the renewable energy sector can unlock new levels of efficiency, driving down costs and propelling the world towards a cleaner energy future. | <urn:uuid:59e76627-dfed-4b3e-9d77-1824a029fee1> | CC-MAIN-2024-38 | https://energycurated.com/renewable-energy/how-is-ai-improving-efficiency-in-wind-energy-plants/ | 2024-09-12T11:13:07Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651457.35/warc/CC-MAIN-20240912110742-20240912140742-00305.warc.gz | en | 0.92057 | 845 | 3.796875 | 4 |
Natural language processing (NLP) refers to a set of techniques that enable computers and people to interact. Most of the activities humans perform are done via language, whether communicated directly or delineated using natural language. Human language, developed over millennia, has become a nuanced form of communication that carries a wealth of information that […]
Natural language processing (NLP) refers to a set of techniques that enable computers and people to interact. Most of the activities humans perform are done via language, whether communicated directly or delineated using natural language. Human language, developed over millennia, has become a nuanced form of communication that carries a wealth of information that typically surpasses the words alone. As technology progressively makes the platforms and methods through which we communicate more accessible, the need to understand the languages we use to communicate becomes greater. By amalgamating the power of artificial intelligence (AI), computer science and computational linguistics, NLP helps machines ‘read’ text by mimicking the human ability to comprehend language.
The aspects that make a language natural is exactly what makes NLP difficult; the rules that dictate the representation of information in natural languages evolved without predetermination. These rules can be abstract and high level, like how sarcasm is used to denote meaning; or low level, like the use of the character ‘s’ to convey plurality of nouns. NLP involves identifying and making use of these rules with code to translate unstructured language data into information with a schema. There are still very challenging issues to solve in natural language. However, deep learning methods are accomplishing state of the art results in some specific language problems.
Early computational outlooks of language research focused on automating the analysis of the linguistic structure of language and creating basic technologies like machine translation, speech synthesis, and speech recognition. Today’s researchers hone and utilize such tools in real-world apps, creating speech-to-speech translation engines and spoken dialogue systems, identifying emotion and sentiment toward services and products, and mining social media information about finance or health.
While NLP may not be as mainstream as Machine Learning or Big Data, we utilize natural language apps or benefit from them on a daily basis. A 2017 report by Tractica on NLP estimated that the total NLP hardware, software and service market opportunity could reach $22.3 billion by 2025. The report also predicts that NLP based software solutions that leverage AI will record market growth from $136 million in 2016 to $5.4 billion by 2025. It’s quite clear that NLP is here to stay, and it's likely to have a larger impact on how humans interact with machines. Here are some examples of how NLP can change the way we collaborate in the enterprise.
Data classification simply refers to the process of organizing data by relevant categories so that it can be used and secured more efficiently. The classification process not only simplifies the retrieval of data, but also plays a crucial role in compliance, risk management, and data security. Data classification entails tagging data in order to make it trackable and searchable. It also curbs multiple duplications of data, which decreases backup and storage costs. Deep learning, which is used in natural language processing, is well suited for automated classification because it can learn the complex underlying structure of sentences and the semantic proximity of different words.
NLP classification algorithms can’t work ‘out -the-box’; they have to be trained to make specific predictions for texts. The algorithms are give a set of categorized/tagged text based on which the generate machine learning models, the models will then be able to automatically classify untagged text. Utilizing NLP to automate content classification makes the entire collaborative process efficient and fast.
The average enterprise generates massive amounts of data on a daily basis. In this digital age, information overload is a real phenomenon. Our access to information and knowledge has exceeded our capacity to make sense of it. When NLP is applied during the data ingestion, searchable indexes are automatically added to the document’s composition. In keyword based search, text and documents are searched based on the words found in the query. The returned results are typically based on the number of matches of the query words with documents.
In semantic search, the syntactic structure of the natural language, the frequency of the words and other linguistic elements are considered. An NLP algorithm can understand the specific requirements of the search query by identifying events, brands, people, places or phrases; understands how negative or positive the text is; and automatically curates a collection of results by topic. For easier discovery, personalized content recommendations can be generated related to the same topic.
Today’s workforce interacts with vast amounts of text all the time; constantly scrolling through files, and sharing documents. If they can extract intelligence from text, they will become more efficient and productive. Despite the fact that natural language processing is not a new science, the technology is rapidly advancing thanks to a growing interest in human-to-machine communications, powerful computing, enhanced algorithms, and the availability of big data. Utilizing Natural Language Processing (NLP) to create an interactive and smooth interface between machine and humans will continue being a top priority for increasingly cognitive applications. | <urn:uuid:4fda1dcd-86e7-4e13-b3ad-91bd271d824f> | CC-MAIN-2024-38 | https://www.filecloud.com/blog/2018/10/how-natural-language-processing-nlp-can-augment-collaboration/ | 2024-09-13T18:00:25Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651535.66/warc/CC-MAIN-20240913165920-20240913195920-00205.warc.gz | en | 0.922235 | 1,069 | 3.53125 | 4 |
This cybersecurity alphabet is a fun and informative book that teaches kids how to recognize fraudsters’ tricks and learn the importance of staying safe online.
You’re looking at the Kaspersky Cybersecurity alphabet. Have you ever come across the term “cybersecurity” before? Cybersecurity helps us to use modern technologies — either a smartphone or a computer — safely and explore the online world without worrying about possible threats.
The digital world is huge. Nowadays, you can do a lot of things online: travel without leaving your home, or study foreign languages by talking to native speakers, for instance. And, of course, you can play games, not only with your classmates, but also with friends; even with those who are far away!
But along with the endless opportunities, there are some dangers on the internet, just like in real life. So you should always be alert. Careless actions online and the negligence of the cyber hygiene rules could lead to severe consequences: you can infect your tablet or smartphone with malware, exposing important information to cybercriminals or people could steal your awards and the progress you make in your favorite online game.
In this book, you’ll get to know new technologies, learn the main cyber hygiene rules, find out how to avoid online threats, and recognize fraudsters’ tricks. To make sure that your online journey is exciting and free from bad experiences, please study this book from A to Z. | <urn:uuid:26ae08c5-23c6-410f-a769-c27366126e81> | CC-MAIN-2024-38 | https://www.kaspersky.com/blog/cybersecurity-alphabet/?reseller=gl_cyberalphabet_pr_ona_pr__all_b2c_blo_lnk__kas_____&utm_source=CyberAlphabet&utm_medium=sm-project&utm_campaign=gl_CyberAlphabet_ma0250&utm_content=link&utm_term=gl_CyberAlphabet_organic_250y6s04n9gukyn | 2024-09-13T19:20:04Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651535.66/warc/CC-MAIN-20240913165920-20240913195920-00205.warc.gz | en | 0.926895 | 298 | 3.5 | 4 |
Creator: University of Michigan
Category: Software > Computer Software > Educational Software
Tag: events, health, opportunity, policies, problem
Availability: In stock
Price: USD 49.00
Racial health disparities – differences in health outcomes based on race – are rampant in the U.S., and many incorrectly assume these are due to differences in behavior or genetics. To understand these differences, and ultimately identify solutions to eliminate these disparities, we need to dig deeper and look at the root causes. We need to examine how our socio-political institutions have racial inequities embedded within their policies and practices. We need to re-examine history to learn how and why race was created and how it was used to advance the interests of whites. We need to examine how state violence is selectively used to reinforce racial inequities. Learners in this course will be guided through these examinations in order to gain a deeper understanding of why health disparities exist in the U.S.
Interested in what the future will bring? Download our 2024 Technology Trends eBook for free.
and what will be necessary to eliminate these disparities. Answering questions pertaining to course materials will give learners the opportunity to self-reflect in an effort to deepen their thinking about health inequities. Additionally, course assignments will give learners the opportunity to practice advocacy skills through the creation of writing products intended to convince decision-makers to change their perspective. To fix the problem we need to accurately diagnose it, and this course will help learners diagnose the root causes of the problem. By the end of this course, learners will be able to: – Describe the impact of structural racism on individuals. – Identify policies and events that shaped current racial health inequities. – Discuss how historical events contributed to current racial health inequities. – Describe how inequities in institutions like schools, businesses, and policing contribute to current racial health inequities. – Apply public writing strategies to work against racial inequities in health. | <urn:uuid:68a06ca7-68f5-42e1-b76f-229d4494f7bf> | CC-MAIN-2024-38 | https://datafloq.com/course/structural-racism-causes-of-health-inequities-in-the-u-s/ | 2024-09-18T15:00:03Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651899.75/warc/CC-MAIN-20240918133146-20240918163146-00705.warc.gz | en | 0.946945 | 397 | 3.296875 | 3 |
Put a Stop to Spyware
Learn how to recognize and get rid of this modern-day scourge
February 21, 2005
At this moment, computer systems in your organization are probably communicating with companies whose names neither you nor your systems' users have ever heard of, whose countries of origin make them immune to US criminal and civil laws, and whose identities are often purposely obscured. In many cases, these companies have administrative access to systems on your internal network and might be regularly installing software on and making configuration changes to those systems, causing a significant increase in your Web traffic and raising the likelihood that your organization will be the victim of corporate identity theft. If this news takes you by surprise, you aren't alone. The method these nefarious intruders are using—spyware—is one of the most misunderstood risks in the IT industry.
Know the Enemy
Originally, the term spyware referred to a category of surveillance software that law enforcement agencies and others use to monitor a computer user's activity. More recently, the term has taken on a broader meaning that includes any software that monitors or controls a computer without the clear and direct consent of the user. (That definition is now the more common one and the one I use here.) Spyware consists of three main categories: adware, snoopware, and malware.
Adware. Adware is used to deliver advertisements to users or to collect information for use by advertisers. This type of spyware is probably the most common and typically has three objectives: to monitor user activity, to keep the adware software installed and updated, and to display advertisements to the user. Familiar adware includes BargainBuddy, Coolsavings, DashBar, and n-CASE.
Once installed, adware runs either as a standalone process launched at startup or as a DLL attached to an existing process. Adware programs can monitor just about any user activity or configuration information. Figure 1 shows a typical adware operation. The program uses one or more URLs to communicate with Web servers owned by the adware publisher. These publishers often use multiple redundant Web servers to confound content filters. To get its communications through firewalls, adware uses HTTP, often encrypting the data to mask the details of its operations. As a result, adware traffic is usually indistinguishable from general Web traffic within an organization. Adware uses a GUID—often a hardware-specific token (e.g., the affected system's MAC address)—to let the adware publisher maintain a running historical profile of specific activity on the affected system.
The communication between a system running adware and the adware server is initiated either by a specific user activity, such as browsing to a Web site, or on a timed basis. A typical exchange of information involves the adware providing its server with information about recent user activity and the adware server then providing a targeted advertisement based on this activity. For example, a user in your company is planning to attend a conference and goes to a travel Web site to look for a ticket. If the user's computer is running adware, an ad for a different travel site, belonging to the adware publisher's sponsor, might pop up.
Snoopware. Snoopware is used to surreptitiously monitor the activity of a computer user. This software has two objectives: to monitor user activity and to ensure that the monitored user remains unaware of the monitoring. Snoopware is most commonly associated with identity theft and corporate spying. Common snoopware products include Catch Cheat Spy, SpectorSoft EBlaster and Spector, and WinWhatWhere Investigator.
Snoopware integrates with a system in numerous ways, including installing keyloggers, browser plugins, or standalone monitoring processes and even by replacing system software. The information that the snoopware monitors varies from product to product but typically includes screen shots, keystrokes, application activity, Web surfing, Instant Messaging (IM) communications, and email messages.
Snoopware either stores captured information in a database on the local machine or sends the information to a centralized server. Products that locally store the information use encryption and hidden folders to avoid discovery, then use email to regularly deliver the collected information to the monitoring party. The monitoring entity also often has local access to the collected information by way of special hotkeys that the snoopware installs on the monitored system. Snoopware that stores collected information on a remote centralized server sends the information in real time via HTTP Secure (HTTPS). Figure 2 shows a typical distributed snoopware operation, which lets the monitoring entity view the collected information from a remote computer by using a Web browser.
Malware. Malware (short for malicious software) is designed to disrupt the normal operation of a system. Whereas the term malware has traditionally been applied to viruses, worms, and Trojan horses, new types of malware include browser hijackers, parasites, and dialers. Browser hijackers can change a browser's default home page or redirect all Web requests to remote sites. Parasites can alter existing tracking links so that the malware publisher can get referral credits for online purchases. Dialers take over modems connected to the affected system and make remote phone calls (e.g., to pay-per-call pornographic lines). Well-known malware includes CoolWebSearch, MarketScore, New.Net, Mail Wiper Spy Wiper, and Virtual Bouncer. Figure 3 shows a typical malware operation.
Watch Your Back
So how does spyware get on your systems? Such programs are typically installed through the following means:
Free utility software—Numerous free utilities are written specifically as delivery mechanisms for spyware. These programs are one of the most common sources of spyware and include software to block popups, manage calendars, synchronize clocks, find bargains on the Internet, give real-time weather updates, and view online greeting cards.
Bundled software—Sometimes a software company that wants to generate additional revenue from its software will partner with a spyware company.
Licensed software—Snoopware is often installed through standard licensed software.
Drive-by download—Spyware that exploits low browser or application security settings can affect a system when the user visits a Web site, views a popup advertisement, or reads an HTML-enabled email message.
Silent download—Once installed, some forms of spyware will install new spyware. Because spyware typically has escalated privileges on the affected system, new spyware installations or upgrading of the existing spyware is common.
Spyware distributed by free, bundled, or licensed software typically comes with an End User License Agreement (EULA) that the user must accept before installation. These EULAs often provide detailed information about what rights the user is granting the spyware publisher and what activities the publisher might monitor. (They also complicate legal actions against spyware companies, as the sidebar "Is Spyware Legal" explains.) A typical EULA, such as the one that comes with DashBar, is 12 pages and grants the publisher the ability to "occasionally install and/or update software components," among other rights. Drive-by and silent downloads almost never present EULAs and therefore represent a greater risk to organizations because their publishers make no commitment about the rights and limitations of the software.
Understand the Risks
Would you let end users randomly establish VPNs to remote organizations without your knowledge and approval? If your answer is "No!" but your organization doesn't have policies or infrastructure in place to prevent spyware, you might be surprised by the real risks to which you're open. Table 1 lists these risks and their relative likelihood (which might vary from business to business). Of these risks, the two most misunderstood are reduced security posture and increased bandwidth usage. If you need a reason to get approval for preventative measures, the following information might come in handy.
Reduced security posture. Each time a system on your network becomes infected with spyware, the overall security of your organization is compromised. Spyware often runs with administrative-level privileges to systems on which it is installed, giving it the ability to communicate on the network and download and install software. The only limitations of these escalated privileges are those imposed by the spyware publisher. In addition, many types of spyware directly alter the security settings of the affected system to better enable the spyware's operation or to prevent its removal. Some spyware adds sites to Microsoft Internet Explorer (IE's) trusted zone, alters Web browser security settings, adds entries to a HOSTS file, or even disables antispyware and antivirus software. Even after you remove spyware, general configuration changes made to the system often remain, leaving the computer vulnerable to other spyware programs.
Increased bandwidth usage. All types of spyware use your bandwidth to communicate with remote systems. In lab tests, I found that each spyware product adds an average of two times the standard network traffic (e.g., for a system infected with 10 spyware products, 30KB of inbound/outbound traffic for a Google search averages 600KB of traffic). In one test, a system running only WeatherBug generated 133KB of traffic just by opening a Web browser to the default Google home page. Only 1.7KB of this traffic resulted from communication with the Google Web server; the rest was the result of communications between the system and two Web servers registered under different organizations (but both in fact representing the same spyware publisher).
By now you're asking, "How do you get rid of this stuff?" Unfortunately, no one product or technology can eliminate the risk of spyware within your organization. However, you can control spyware by establishing a defense-in-depth strategy that involves a combination of use policies, user education, and technology.
The typical foundation of such a strategy is often an acceptable use policy that defines what users can and can't do with their systems and—most importantly—establishes penalties for not adhering to the policies. Typical policies cover Web browsing, downloading, and installing software. User education is often the next layer in your defensive strategy. Spyware can be confusing to IT administrators; it's often incomprehensible to end users. Still, given a proper education, many users can be taught the risks of visiting questionable Web sites, accepting ActiveX controls, or installing software from unknown or questionable organizations. Of course, no defense is complete without the help of the proper technology. Several categories of software can be used to fight spyware (see "Learning Path," page 62, for suggestions about where to find more information about some of these types of products):
Content filters—Content filters at your network perimeter can prevent users from visiting sites that might represent a spyware risk and can prevent spyware from communicating with its publisher.
Antivirus software—Network- or desktop-based antivirus software can give you an early warning of certain malware, particularly Trojan horses and dialers.
Antispyware software—Antispyware software identifies, cleans, and prevents spyware from being installed on a system. Unfortunately, because of the speed with which new spyware is introduced and the relative immaturity of antispyware programs, no one product provides a comprehensive solution. As a result, many IT departments use two or more products in tandem to increase breadth of coverage.
Desktop firewalls—Host-based firewalls have traditionally been deployed only to mobile users but are becoming more common on desktops. Firewalls that regulate outbound connections—not including Windows XP Service Pack 2's (SP2) Windows Firewall—can reduce the risk of spyware by providing notification. Although knowing about spyware doesn't prevent a system from becoming infected, it can help you keep the spyware from performing its intended function.
Patch-management programs—Spyware often exploits security vulnerabilities in browsers to install itself on systems. Keep systems updated with critical system and browser security patches, by using either Windows Update or centralized patch-management solutions.
Browser security–management tools—Tools that help you centralize the definition and management of browser security, such as the Internet Explorer Administration Kit (IEAK), let you lock down the security of your organization's Web browsers and prevent drive-by downloads.
A Real and Present Danger
Spyware in all its forms—adware, snoopware, and malware—represents a real and present danger to businesses, in the form of increased security and legal risks. Understanding what spyware is, how it gets on your systems, and how it can negatively affect your business is an essential part of developing a strategy to protect your organization.
About the Author
You May Also Like | <urn:uuid:9d075633-faab-4786-ad06-1a2f79583394> | CC-MAIN-2024-38 | https://www.itprotoday.com/it-security/put-a-stop-to-spyware | 2024-09-10T05:01:58Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651196.36/warc/CC-MAIN-20240910025651-20240910055651-00605.warc.gz | en | 0.923457 | 2,597 | 2.609375 | 3 |
IoT (Internet of Things) can be found in our homes, cars, hospitals, manufacturing plants, schools, and workplaces. IoT are sensors and devices connected to the Internet to obtain and send data, receive instructions, and communicate with other IoT devices.
How does IoT Work?
Imagine the thermostat in your home. Your traditional thermostat may have been pre-set years ago to activate your heating system when the room temperature drops below a certain threshold. This means that regardless of if anyone is home or if the air conditioner gets inadvertently activated, your heating system will go on if that temperature threshold is crossed. On the other hand, an IoT thermostat will gather additional, even more, pertinent information to determine when best to activate the heat. It can take into consideration the current outside temperature, the time of year, weather forecast, whether there is anyone in the room, and how long it typically takes to heat the room. This additional context not only maintains the comfort of your home but conserves the environment well. Utility companies recognize this, and some are issuing rebates to customers who are using IoT thermostats.
Benefits of IoT and Impact on the Future Workplace
IoT in Smart Buildings combines numerous devices to improve the workplace environment, efficiency, and experience. A Smart Building may employ IoT devices to control HVAC, lighting, shades, elevators, physical security and access, conference room facilities, and workplace settings, among others. Just like in your home, IoT in Smart Buildings will use real-time and historical variables to create a more efficient workplace.
IoT can also be part of a corporation’s return to workplace strategy during and after the COVID-19 pandemic. Such as this IoT device example: keeping track of safe space utilization can be accomplished with a few technologies – people counting, proximity monitoring, facial recognition, employee directories, and access control. Many of these technologies can be found in your existing technologies, such as WiFi, security cameras, Active Directory, and security access control. They may require only incremental investment to deliver these types of insights.
Industry Benefits & IoT Security
Today there are nearly 14 billion IoT devices that can be found in all corners of the world. The security challenges presented by IoT not only come from the number of devices, but its’ pervasiveness in our home and workplace, the absence and differences in manufacturing standards, and true to its name, it’s need to communicate, download, and upload data to the Internet. To secure IoT, we need to augment traditional methods with advancements in Machine Learning, Artificial Intelligence, and behavioral analysis to identify IoT devices and detect and report unwanted behavior.
New Era Technology can help answer any of your IoT questions or fulfill any of your IoT needs. With our capabilities as a full-service IT provider, we can build, design, and manage the IoT environments that rely on the data network, cloud, wireless, audiovisual, physical security, life safety, unified communications, and cybersecurity.
To learn more about IoT and how New Era can assist you, please email email@example.com. | <urn:uuid:f84b305c-8db9-4a6e-9f02-fd47fc449a77> | CC-MAIN-2024-38 | https://www.neweratech.com/us/blog/iot-benefits-what-it-means-for-your-organization/ | 2024-09-12T15:36:00Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651460.54/warc/CC-MAIN-20240912142729-20240912172729-00405.warc.gz | en | 0.9313 | 643 | 3.078125 | 3 |
I recently read an article about the emerging IT talent shortage in Canada, and it got me thinking about what “talent” means in relation to career stages. The article says that three types of candidate are currently having difficulty finding IT jobs, despite the shortage: juniors, immigrants, and seasoned professionals. What can we say about each of their talents and the impact of talent on job prospects?
These days, the word “talent” often refers to a person or a group of people – we speak in terms of the talent pool and talent management. However, a talent, according to Wikipedia, is “a group of aptitudes useful for some activity, talents may refer to aptitudes themselves.” According to Wiktionary, talent means “a marked natural ability or skill” among other things.
Strangely enough, companies typically talk about specific talent requirements while employees focus on knowledge, past experience, and performance. We often say that a person has natural talent (for drawing or painting, for example) but also believe they can further develop their talent through training and experience. In the IT world, is computer programming a natural talent or a developed skill?
In my opinion, talent should include at least the following:
- Expertise – the ability to do research, to understand new things, and to learn how to do things;
- Experience – the ability to choose solutions or paths that work best based on prior history;
- Maturity – the ability to sort out priorities, make decisions, and recover from set-backs;
- Innovation – the ability to think outside the box and imagine new approaches or directions;
- Communications – the ability to speak, write and present to others; and
- Collaboration – the ability to negotiate, build consensus, and arrive at group decisions.
As I often do, I turned to Google to see what is being said about career stages. To my surprise, it seems that there is no general agreement on career stages – anything from three to nine major stages were being proposed. It can be career stages are also related to life stages, which reminded me of a good book I read a long time ago called Passages (by Gail Sheehy).
Just for fun, let’s try to map career stages onto the Gartner technology hype cycle.
A “technology trigger” can be replaced by a career trigger such as a first job (a junior employee) or the completion of a university education. It’s hard to pull the career trigger if you cannot get your first job, which is a dilemma for the junior job hunter. Talent at this stage would be based on natural aptitude or formal education (or both).
The career rapid ramp-up phase includes discovering interests and “hidden” talents, and this results in practical, real-life experience. People at this stage may believe there are no limits to what is achievable – eventually there comes a peak of inflated career expectations. People at this point will say: “I can be the best in the world” or “I can be the leader in my field.” For IT careers, a correlation to technology hype cycles is also likely – mobility, cloud computing, big data and social networking are fertile areas for acquiring talent these days.
Talent versus career management has been discussed in an article in The Toronto Star. The perspective is that defining talent requirements are a company’s responsibility while managing a career to acquire in-demand talents is a personal responsibility. The key is to keep the two in synch throughout the career stages. This requires continuous improvement through training and skills development.
For some people, inflated career expectations may actually be realized; for others “the train will fall completely off the tracks.” This happens, sometimes quite publicly, with careers in sports and entertainment.
For many people, their career lies somewhere between these two extremes. They, at some point, will face the trough of career disillusionment. This can be a period of disappointment or disillusionment when your career seems to be stalled, possibly due to a mismatch of talents or unwise career moves. At this stage, most people become more aware of their talent shortcomings, re-define their aspirations, and assess their future plans. This can be a good time for course corrections, or even a radical change such as moving to a new country (and/or company). It may also be a period of deep reflection and re-evaluation of goals.
The fourth stage, the slope of career enlightenment, is a consolidation and maturation period during which your talents become fully developed, are in more demand, and your successes are more significant. The perception is that you are moving in the right direction. This will often be a time of change from doing to leading. It may also be a period of innovation and invention, based on your broader understanding of what is needed and what it takes to exploit new opportunities.
The final stage is the plateau of productivity. By now your skills, expertise and experience are all well-established. Although it could be seen as the “beginning of the end,” it is also a time of higher productivity, increased influence, greater recognition and more rewards. The “seasoned professional” may need to re-fresh specific areas of expertise to keep up with the technology changes, but this is balanced by well-developed skills and the wide-ranging set of experiences (the idea that nothing is ever really new). This stage can include a transition from previous roles into teaching, coaching and mentoring.
These ideas can be summarized in a chart:
Career Trigger | Inflated Career Expectations | Trough of Career Disillusionment | Slope of Career Enlightenment | Plateau of Career Productivity | |
Expertise | Academic – you know some things | You think you know everything | You don’t know everything | You pull it all together | You’ve seen it all before |
Experience | Low | Growing | Includes minor set-backs | Broadening | Visionary |
Maturity | Low | Low | Increasing | Mature | Wise old man |
Innovation | Few ideas | Lots of ideas | Self-doubts | Confident | Risk-taking |
Communication | Unrefined | Ego-driven | Misunderstood | Highly capable | Respected |
Collaboration | Skeptical | Bull in a china shop | Crossed connections | Leader and facilitator | Networked and respected |
These are a few of my thoughts based on my own experience. I’m sure your views may be very different. Please provide your comments and feedback, both positive and negative! | <urn:uuid:c5bb03f8-dbd1-4519-8cfa-5455f9ebeba6> | CC-MAIN-2024-38 | https://www.itworldcanada.com/blog/the-evolution-of-talent-skills-expertise-experience/373592 | 2024-09-17T15:25:15Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651800.83/warc/CC-MAIN-20240917140525-20240917170525-00005.warc.gz | en | 0.966867 | 1,381 | 2.78125 | 3 |
It’s not often that an OpenSSH vulnerability is discovered, so when researchers at the cybersecurity firm Qualys revealed a flaw in the widely used secure communications protocol, it set the security community buzzing.
The vulnerability in the OpenSSH networking tool affects nearly 14 million vulnerable instances, according to Qualys, and experts are scrambling to patch the bug before it is exploited. Dubbed regreSSHion, the vulnerability is severe and can be used to gain full access to affected systems and to bypass firewalls. The bug takes advantage of a timing issue that was fixed nearly a decade ago but was re-introduced in 2020, a phenomenon known as “regression” that inspired the bug’s name.
But experts are cautioning that the bug — CVE-2024-6387 — is difficult to exploit even under the best conditions, and most modern systems have defenses against this type of attack.
Omkhar Arasaratnam, general manager of the Open Source Security Foundation, said the researchers had to use specific laboratory conditions to ensure a successful intrusion.
“Qualys came up with situations through which they were able to take a thing that may take weeks to a thing that could take hours, but it still relied upon an intentionally fragile environment for it to execute,” Arasaratnam said, noting that finding a bug in a program thought by many to be “rock solid” is impressive work.
OpenSSH noted that it took them eight hours of continuous connection before they were able to replicate a successful attack.
Jake Williams, former National Security Agency hacker, faculty at IANS Research and the vice president of research and development at Hunter Strategy, said in an email that the severity of the bug should not be overstated, cautioning that the “Internet is NOT on fire.”
“This disclosure also provides another opportunity to talk about the importance of zero trust. Most organizations don’t need SSH open to the whole Internet,” Sullivan said.
Qualys is not releasing a proof of concept for the vulnerability and so far no successful exploits have been released in the wild, giving defenders time to mitigate the bug.
Still, the discovery of a vulnerability in a ubiquitous piece of open-source software raises concerns that it will linger unpatched on significant numbers of systems. Vulnerable versions of the software Log4j are still prevalent in the wild and exploited by state-backed hackers, even though the Log4Shell exploit was revealed years ago.
RegreSSHion only appears to impact Linux systems that are 32 bit, which are typically older computer systems that — in this case — lack a modern security technique that appears to block the bug, dramatically decreasing the number of affected systems.
Arasaratnam noted that the bug would be avoided by using memory-safe languages, the transition to which is a key priority of the Biden administration to better secure the open-source ecosystem on which the world’s digital systems rely.
A string of high-profile vulnerabilities affecting open-source software and malicious efforts to manipulate the maintenance of open-source tools has led to concerns about the security of open-source software. Both financially motivated criminals and state-backed hackers have been targeting open-source code and developers in an effort to infect their victims further down the supply chain ecosystem.
This story was updated July 3, 2024 to the correct the spelling of Omkhar Arasaratnam’s name. It was updated again July 10, 2024, to add to Jake Williams’ title. | <urn:uuid:ae92a23d-e9d4-49f3-9c88-3b962f6cd559> | CC-MAIN-2024-38 | https://develop.cyberscoop.com/openssh-vulnerability-linux-regresshion/ | 2024-09-18T17:37:39Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651931.60/warc/CC-MAIN-20240918165253-20240918195253-00805.warc.gz | en | 0.962924 | 731 | 2.6875 | 3 |
Learn how to create a captivating endless road animation in PowerPoint by mixing shape animations and perception tricks. This guide is perfect for persons using PowerPoint 2016 and beyond. Here’s how to begin.
PowerPoint animations offer a powerful tool for creating dynamic presentations that can captivate your audience. Beyond simple transitions, knowing how to utilize animations like the endless road effect can bring your slides to life. The key to success is combining the right shapes, animations, and speed adjustments. The endless road animation example teaches us the importance of perspective and motion to create illusions that are both realistic and engaging. As with any skill, practice is essential. Experiment with different animations, settings, and concepts to discover what works best for your presentation goals. Remember, the more you play around with PowerPoint's features, the better you'll get at harnessing its full potential to make your presentations stand out. With creativity and experimentation, you can transform your slides into immersive experiences for your audience.
Create this Endless Road Animation in PowerPoint and learn the underlying concept. Creating an endless road animation in PowerPoint requires a combination of shapes, animations, and possibly some advanced tricks to make the road seem like it's moving continuously towards the viewer. Here's a step-by-step guide to help you create this effect and understand the underlying concept.
This tutorial is designed for PowerPoint 2016 and newer versions. To start, open PowerPoint and select a blank slide. Use the 'Insert' tab, click on 'Shapes', and select a rectangle to draw a large rectangle as the road base.
Add Perspective by using the trapezoid shape to create the illusion of a road receding into the distance. Adjust the trapezoid's top width to be narrower than the bottom width. Next, color the road with a solid fill, typically gray, and you can add a line color for the road's edges if desired.
To add road lines, use the line shape for the dashed lines in the middle of the road. You can then duplicate this line several times depending on your road's length. Select all lines, go to the 'Format' tab, align center, and then distribute vertically to ensure even spacing.
To animate the road, group the road and lines together, then apply a motion path by selecting 'Add Animation' under the 'Animations' tab. Choose 'Lines' for the motion path to go straight down. Adjust the path by dragging the end point up towards the top of the slide to make the road move towards the viewer.
To make the road seem endless, speed up the animation and possibly repeat it. Set the animation to start 'With Previous' and choose a duration for a smooth motion. Using the 'Repeat' option can make it loop endlessly, creating an illusion of an endless road.
Fine-tuning for realism involves adjusting the speed of the animation and adding a static background that complements the moving road. Consider adding side elements that move at different speeds to create a parallax effect, enhancing the sense of perspective.
The underlying concept of this animation is based on perspective and motion. By creating a road that narrows in the distance and applying a motion path animation, you simulate perspective and continuous motion towards the viewer. Adjusting speed and aligning the motion path with the road's direction are key to making the animation convincing.
Experiment with these steps to refine your endless road animation in PowerPoint. The software is quite versatile, and by tweaking the elements and animation settings, you can create a highly realistic effect.
PowerPoint is not just about slideshows; it's a powerful tool for creating animations that can enhance any presentation. The capability to create an endless road animation demonstrates just how versatile PowerPoint can be. By understanding the principles of perspective and motion, presenters can craft animations that captivate audiences and add a dynamic element to their presentation.
Animations in PowerPoint can range from simple transitions between slides to complex animations like the endless road, which simulate continuous motion. These effects can make presentations more engaging and help convey concepts in a visual and intuitive manner. It’s vital, however, to balance the use of animations to ensure they enhance rather than detract from the message.
Creating realistic animations in PowerPoint requires a mix of creativity, technical skills, and a good understanding of animation principles. The tutorial on creating an endless road animation is a great example of how combining shapes, motion paths, and timing can produce an effect that’s both impressive and visually pleasing.
Furthermore, leveraging such animations can distinguish your presentations, making them memorable for your audience. It's an excellent way to communicate complex ideas visually, making them easier to understand.
Incorporating elements like background and perspective can add depth to your animations, creating a more immersive experience. PowerPoint's versatility means users can experiment extensively to get just the right effect, adjusting settings like speed and motion to achieve realism in their animations.
Ultimately, the key to effective PowerPoint animations lies in understanding the tools available and applying them creatively. Whether it's for educational purposes, business presentations, or any other scenario, animations can significantly enhance the viewer's experience, making your presentations stand out.
With tutorials like the endless road animation guide, users can learn to harness the power of PowerPoint to create engaging, dynamic content. It’s an invitation to explore the full range of possibilities PowerPoint offers for enhancing presentations and communicating ideas in an impactful way.
Create this Endless Road Animation in PowerPoint and learn the underlying concept. Creating an endless road animation in PowerPoint requires a combination of shapes, animations, and possibly some advanced tricks to make the road seem like it's moving continuously towards the viewer. Here's a step-by-step guide to help you create this effect and understand the underlying concept. This tutorial is designed for PowerPoint 2016 and newer versions.
Adjust Speed: Experiment with the animation speed. A faster animation creates a sense of speed, but too fast might not look realistic. Background: Consider adding a static background that complements the moving road, like a sky or distant mountains. Perspective: To enhance the perspective, you might add side elements that also move but at different speeds, creating a parallax effect.
The underlying concept of this animation is based on the principle of perspective and motion. By creating a road that narrows in the distance, you simulate perspective. The motion path animation makes the road move towards the viewer, creating the illusion of endless motion. The key to making the animation convincing is adjusting the speed and ensuring the motion path is perfectly aligned with the road's direction.
Experiment with these steps to refine your endless road animation. PowerPoint is quite versatile, and by tweaking the elements and animation settings, you can create a highly realistic effect.
PowerPoint is an incredibly powerful tool for creating animations, with endless road animation being just an example of what's possible. By mastering the use of shapes, animation paths, and timing, users can create engaging presentations that capture their audience's attention. This tutorial demonstrates not only how to create a specific effect but also teaches the important principles of perspective and motion in animation. With practice and creativity, users can apply these techniques to a wide range of projects, making their presentations more dynamic and compelling. PowerPoint continues to be a versatile tool for both business and educational environments, encouraging users to explore its full potential.
To animate a car or any object in PowerPoint, first, select the object you wish to animate. Navigate to the Animations tab and click on Add Animation. Then, scroll down to Motion Paths and select an appropriate path. For a more tailored animation, choose the Custom path option, which allows you to dictate the exact trajectory of the object's movement.
Animating a dotted line in PowerPoint involves selecting the line, proceeding to the "Animations" tab, and then choosing your desired animation effect. You can further customize the animation by fine-tuning the timing and duration settings to meet your specific needs.
PowerPoint Endless Road Animation Tutorial, Endless Road Animation Concept, Create Endless Road PowerPoint, PowerPoint Animation Techniques, Learn PowerPoint Animation, Road Animation Tutorial PowerPoint, Endless Animation Design PowerPoint, Easy PowerPoint Animation Tricks, Mastering PowerPoint Endless Road, PowerPoint Road Animation Guide | <urn:uuid:1e1629d1-a1f9-476a-9a4f-a440d84b54db> | CC-MAIN-2024-38 | https://www.hubsite365.com/en-ww/pro-office-365?id=d5c7c393-2fcf-ee11-9078-000d3a443bf8 | 2024-09-18T17:42:28Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651931.60/warc/CC-MAIN-20240918165253-20240918195253-00805.warc.gz | en | 0.896529 | 1,666 | 3.296875 | 3 |
Q. AI is not all that new. By the broadest definition, there has been some form of it operating since 1951, so why is it dominating headlines now?
Current references to AI largely mean generative AI (Gen AI) or Large Language Models (LLMs). These are significantly different from previous forms of AI because of how they interact with us. Gen AI and LLMs are designed to learn patterns and generate content autonomously, and they exhibit a remarkable capacity for creative tasks, unlike earlier forms that were often rule-based or task-specific. Their ability to produce human-like content has catapulted AI to the forefront of public discourse, shaping discussions around its potential and its impact on society.
Q. What’s going to happen? Should we be scared?
This is not our first technological revolution. I have observed that humans typically seem to move through four stages of behaviour as our relationship with a given technology matures.
Stage 1: ‘Disruption and Hype’
I would say this is where we currently sit with regard to AI. Typically, what we see at this stage is a great deal of enthusiasm, anticipation and fear surrounding the technology. We do not fully understand it; however, the promise of what it can deliver is extremely seductive. We see a surge of interest and media coverage, and a mix of fascination and apprehension as people grapple with the unknown, contemplating the potential impact on jobs, privacy and societal structures.
Stage 2: ‘Fear and Backlash’
The second stage is unavoidable because humans have a cognitive bias toward negativity and are wired to place more emphasis on negative information than we are positive information. However, in the case of Gen AI/LLMs, in the end, we will find this technology is too powerful not to use. LLMs can help us to interrogate data quickly and efficiently and there will always be a force pushing us toward progress.
Stage 3: ‘Adaptation and Normalisation’
This is where the technology is no longer seen as disruptive or novel and its use will become a normal and ordinary part of everyday life. Much of the reasonable concern about AI is its potential to obscure where human inputs end and where AI begins. Today, users have very little understanding of how these LLMs are trained and what safeguards have been coded into the models. If you ask GPT-4 today, it will tell you it has a layer of protection which reduced the likelihood of generating “harmful” responses. GPT-4’s answer is quite vague and is unlikely to withstand the least amount of scrutiny. However, by the time we reach this stage we will have acceptable answers to the questions raised.
Stage 4: ‘Transformation and Unforeseen Consequences.’
Here we see the profound impact the technology has made on society, culture, and individual lives as well as the effects—both positive and negative—that were not initially anticipated. If we take the internet as a recent example, today it plays an irreplaceable role in everything we do and in all areas of life. However, one issue this has created is that we are dealing with the previously unforeseen consequences of how internet exposure impacts the social development of children.
The advancements we have seen in AI in recent years are unprecedented. AI is both an amplifier and an accelerator. It allows us to do everything we currently do, except faster and with much more volume. If you consider the Nelson Data, Information, Knowledge, Wisdom (DIKW) Model, which shows data at the bottom of the hierarchy and wisdom at the top, it can help you to visualise what AI does. AI can collect data and turn it into information. The responses LLMs provide can simulate knowledge by learning from the data used to train it. Wisdom seems like a significant barrier for AI though. I think we can train it to consider multiple perspectives, however, we will also need to protect AI against bias. These protections are likely to limit how quickly AI evolve.
So, what does the advancement of AI mean for cybersecurity?
In cybersecurity, it has long been said that attackers have the advantage over defenders as defenders need to protect against every move made by the adversary whilst the adversary only needs to find one successful exploit.
I expect AI will significantly widen this gap in the short term, with cybercriminals leveraging the technology to develop more sophisticated attacks. For example, attackers now have the ability to create increasingly convincing fake audio, video and images, which will be used for more sophisticated, large-scale, phishing campaigns.
On the flip side, AI is expected to offer huge accelerations in the capability of threat detection, facilitating the analysis of vast quantities of datasets and behavioural activities in real time to detect potential cyber-attacks with unprecedented speed and accuracy. Similarly, it will also enable threat intelligence to be automatically collated, enabling organisations to stay informed about emerging threats and vulnerabilities more readily.
AI will also lead to the development of more advanced authentication methods, reducing the risk of criminals gaining unauthorised access to digital systems and applications, safeguarding critical data and sensitive information against potential security breaches.
Perhaps best regarded as a double-edged sword, AI is set to provide advantages and challenges to cybersecurity practices. This uncertainty demands we take a cautious and proactive approach if we are to stay ahead of emerging threats. | <urn:uuid:1400aee1-d5b8-40f7-8887-4f31149e19fc> | CC-MAIN-2024-38 | https://res.armor.com/resources/blog/ai-revolution-insights-with-cyber-security-expert-miguel-clarke/ | 2024-09-20T00:23:59Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652073.91/warc/CC-MAIN-20240919230146-20240920020146-00705.warc.gz | en | 0.962243 | 1,095 | 2.921875 | 3 |
The learning never stops in emergency management. Michel Milot, manager of emergency telecommunications with Industry Canada Emergency Telecommunications (ICET), outlines problems encountered, and lessons learned, in both the Ontario blackout of 2003 and the ice storm of 1998:
Stormy weather: The main problem during the ice storm … was the telephone poles that went down during the first one to two days. But the telephone companies put up lines quickly. There were a lot of difficulties because they had to re-establish communication, but it only took a few days to connect everybody. During the ice storm, the ICET facilitated the supply of equipment to support provincial emergency workers. ICET also helped in maintaining backup power supply for the continuity of telecommunications networks by co-ordinating available resources with federal and provincial bodies and authorities in the telecom industry. Means of communication were assured for National Defence, police departments and provincial officials.
Lessons learned: The storm and the blackout truly tested the capabilities of emergency services. Notable lessons learned relate mostly to priority access to telephone service by emergency personnel and access to power supplies. During the first hours of the blackout, wireless service networks were overloaded due to a high volume of usage. Telecom companies experienced problems in receiving fuel used to generate the local area network facilities and telecommunication switches, including 9/11 systems. The blackout stressed the importance of strong partnership and communication with the telecom industry, whereas the storm emphasized the importance of priority access to transit routes for telecom personnel to transport equipment and perform maintenance.
Lights out: The following are findings from a report on how the Internet held up during the blackout. The study was conducted by Renesys, a U.S.-based Internet connectivity monitoring firm, and published in U.S. News & World Report Nov. 25, 2003:
• Thousands of significant networks (such as those run by corporations and government) and millions of individual Internet users were offline for hours or days, even though the largest Internet backbones were apparently unaffected by the massive power outage.
• The geographic area affected by the blackout included more than 9,700 globally advertised customer networks belonging to more than 3,500 businesses and other private and public organizations.
• Of those networks, more than 2,000 suffered severe outages for longer than four hours and more than 1,400 were down for longer than 12 hours.
• Renesys’s conclusion: Not enough organizations have backup power supplies: “The scale and duration of the outages we measured during the blackout strongly suggest that without additional investment in higher-quality interconnection and power at its edges, the Internet will be in no shape to supersede the telephone network as the nation’s primary communications infrastructure.” 061721 | <urn:uuid:776e49c2-8cc7-4f22-a029-a1e645f0000a> | CC-MAIN-2024-38 | https://www.itworldcanada.com/article/lessons-learned/4705 | 2024-09-20T01:48:58Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652073.91/warc/CC-MAIN-20240919230146-20240920020146-00705.warc.gz | en | 0.963859 | 552 | 2.578125 | 3 |
Cryptography is foundational to establishing and maintaining digital trust online–especially now, with more devices connected to the internet than ever.
If you’re responsible for an organization’s certificate management, you know a great strategy starts with discovering all issued and managed certificates in your environment.
Let’s dive in and learn about the many types of certificates.
Why It’s Important to Understand Certificates
Digital certificates are the primary vehicle by which people and machines are identified and authenticated, making management more complicated as your organization grows.
Organizations use SSL/TLS certificates to secure the transmission of confidential information, but certificates can also be used to sign code and secure Internet of Things (IoT) devices. Managing and protecting certificates at scale can be a challenge, but insecure certificates are a major cybersecurity risk.
In order to maintain the safety and security of your organization’s website, software, and digital infrastructure, anyone involved with PKI should know the types of certificates, how they work, and what they protect.
9 Types of Certificates
Certificates vary depending on what they’re used to protect. From websites to IoT devices, it’s important to understand the different types of certificates as part of your management strategy.
SSL/TLS certificates are used to establish secure connections between web servers and browsers. They encrypt data transmission on websites, preventing eavesdropping by hackers. These certificates are visually represented by padlocks in browser bars.
- Single Domain SSL: Secures a single domain name. Ideal for small websites or landing pages, difficult to scale.
- Wildcard SSL: Protects a domain and an unlimited number of its subdomains by employing the wildcard character (*) in the domain name field.
- Multi-domain SAN (Subject Alternative Name) SSL: Secures multiple unrelated domains within a single certificate. Suitable for companies managing various independent domains or offering services to other businesses.
Within SSL certificates, there are multiple validation levels. These validation levels are used to verify the identity of the certificate owner.
- Domain Validation: Verifies domain ownership, but not company identity. This validation is best used for basic website security or for personal blogs.
- Organization Validation: Confirms domain ownership and basic organizational information, making it suitable for businesses seeking to establish trust with their customers and prospects.
- Extended Validation: Most rigorous validation, verifies domain ownership, company existence, and legitimacy. This validation is important for businesses dealing with highly sensitive customer information, like financial institutions and healthcare providers.
Code signing certificates
Code signing certificates are used to verify the authenticity and integrity of software code, ensuring users download untampered code from trusted sources.
Users who download software directly from a company’s website often encounter a code signing certificate. When the download starts, your browser or operating system might check the certificate to be sure the software hasn’t been modified.
Email signing and encryption certificates
Email certificates allow users to digitally sign and encrypt emails, verifying the sender’s identity and confirming the email hasn’t been altered. Only the intended recipient can decrypt and read the email content.
For example, a lawyer might use an email signing certificate to send a signed copy of a legal document to a client, protecting it from tampering.
Client authentication certificates
Client authentication certificates verify the identity of users trying to access a network or server, adding an extra layer of security beyond usernames and passwords.
Businesses might require employees to use client authentication certificates when using a VPN client to remotely connect to the company’s internal network.
Document signing certificates
Document signing certificates are used to digitally sign electronic documents, verifying the document’s origin.
Digitally signing a contract using a document signing certificate means both parties involved can confirm the document’s authenticity and integrity.
Verified mark certificates
These specialized certificates display a trusted mark within the browser bar alongside the SSL certificate. Users can verify the website’s ownership and brand identity, promoting trust.
For example, a bank might use a VMC and display their official logo next to the secure connection padlock in the browser bar, so bank customers know they’re in the right place for protected transactions.
These lightweight certificates are used to secure communication between IoT devices. IoT devices can be easier for hackers to compromise if they aren’t up to date, so a constraining certificate protects bad actors from using an IoT device to disrupt your work.
A smart thermostat might use an IoT certificate to securely communicate with a cloud server for temperature adjustments.
DevOps and other ephemeral certificates
Ephemeral certificates are designed to expire quickly, streamlining management in DevOps environments with frequent deployments and infrastructure changes.
During a software deployment in a DevOps environment, an ephemeral certificate might be used to temporarily secure communication between a newly created server or test environment and other components of the software in development.
Certificate Authority (CA) certificates
Last but not least, CA certificates aren’t directly used for user or device authentication, but they are used to verify the identity of the Certificate Authority itself. CAs issue all of the types of certificates listed above, so their trustworthiness is crucial to security.
The root of trust in the Public Key Infrastructure (PKI) system is the authenticity of CAs and your browser or operating system’s ability to automatically trust them. After the CA has been verified, it can be used to verify the other certificates you encounter.
Managing the Certificate Lifecycle
Managing the different types of certificates doesn’t end with issuance. Every certificate has an expiration date, which means the work of maintaining your digital certificate is ongoing.
For example, it’s generally considered best practice to revoke and reissue SSL/TLS certificates annually. Most major browsers will deny connections to servers with certificates more than 398 days old, whether they’re expired or not.
Effective certificate lifecycle management starts with a comprehensive audit of every certificate in use. It’s crucial to keep an exact inventory and monitor all certificates, keys, and the root of trust (RoT) to identify any potential threats and make quick adjustments accordingly.
You can maintain the health of this security by updating certificates, keys, and the RoT as needed and revoking any certificates and keys when the relevant devices are no longer in use.
Tips for the Future
Strong and secure certificate lifecycle management starts with good organization. Now that you’ve learned about the different types of certificates, here are a few more suggestions for the future:
1. Avoid management by spreadsheet. When you begin to manage 100 or more certificates, it’s time to call in backup. Use a certificate manager to discover and track the lifecycle of all your certificates so that you gain visibility across all certificates–even shadow IT–and won’t be caught out of date.
2. Standardize practices and policies. No matter how many certificates you have to manage, determine your standard practices and document them thoroughly. This saves time during training and provides transparency across your organization, so everyone knows the security practices to follow.
3. Prevent outages by automating certificate lifecycle management. Automating the lifecycle gives you confidence that critical systems won’t go offline if you miss a calendar reminder or a note in a spreadsheet. Automated tasks take care of the easily repeatable actions in certificate management and prevent errors, so you have peace of mind.
With good planning and the help of automated tasks, your ability to maintain crucial digital certificates will be a matter of routine, freeing you up for more important tasks.
Establishing trust with digital certificates is essential, and Keyfactor is here to help! Check out our resources to learn how the Keyfactor platform handles all types of certificates and delivers high-quality certificate management. You can read the latest industry news here and request a demo here. | <urn:uuid:c674822b-b413-45dd-875d-006c75c5215c> | CC-MAIN-2024-38 | https://www.keyfactor.com/blog/a-field-guide-to-pki-encryption-9-types-of-certificates-explained/ | 2024-09-19T23:43:54Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652073.91/warc/CC-MAIN-20240919230146-20240920020146-00705.warc.gz | en | 0.904016 | 1,641 | 3.1875 | 3 |
The ability to encrypt and use cyphers to hide messages from unauthorized readers goes back at least as far as Roman times.
John Breeden II is an award-winning journalist and reviewer with over 20 years of experience covering technology and government. He is currently the CEO of the Tech Writers Bureau, a group that creates technological thought leadership content for organizations of all sizes. Twitter: @LabGuys
Following the horrible terrorist attacks in Paris, there have been renewed calls to impose restrictive government control over encryption by some analysts studying the attack. Their logic seems to be that such a large group of terrorists could not have planned the attacks over time without using some form of encryption to shield their activities from authorities, even though no actual evidence of encrypted communications has yet been found.
I’m actually somewhat surprised encryption remains a controversial topic thousands of years after its creation. The ability to encrypt and use cyphers to hide messages from unauthorized readers goes back at least as far as Roman times, when notes were written on leather strips wound around a pole of a certain diameter. To reassemble the message once the leather was unwound, the exact size pole needed to be employed on the other end.
Over the years, hundreds of manual forms of encryption and an almost unlimited number of codes have been created to protect information. In fact, of the few ancient technologies still being used today, probably only encryption still carries such controversy.
The last time government control was seriously considered for encryption was the massive fubar that was Skipjack and its accompanying Clipper chip back in 1996. I talked with chief security architect for NetIQ, the security portfolio of Micro Focus, Michael Angelo, who was part of the industry and government working group that studied the feasibility of Skipjack.
The idea behind Skipjack was that all devices that needed to use encryption would use the Skipjack algorithm, developed by the National Security Agency in secret. It would live on a special Clipper chip that would then be embedded in devices. Each Clipper chip had a built-in backdoor the government could use to unlock and read or listen-in on anything created with it.
Almost needless to say, not many people in the general public or any U.S.-based businesses wanted to standardize on something with a built-in backdoor. Exporting devices with Clipper chips to foreign countries was also a complete nonstarter. Governments in Europe and Asia didn’t want NSA to be able to read their mail or tap their phones. As such, Skipjack and Clipper were proposed in 1993 and dead by 1996.
But it was not a total loss, because at least it got the government thinking about encryption.
“About three years of meetings ensued, but suffice it to say that we in the industry were able to convince various U.S. government agencies that controls on strong encryption would not be a deterrent to criminals and terrorists, and that U.S. industry needed encryption to compete in a global market,” Angelo told me. “In the end, encryption was reclassified from a military-only technology to a dual-use technology for both military and civilian use.”
As a dual-use technology, control was moved to the Commerce Department. They are now studied by Commerce's Information Systems Technical Advisory Committee. Angelo remains on the ISTAC committee today.
As to the terrorists operating in Paris, Angelo thinks it is unlikely they used encryption, or would have needed to do so. It’s even possible that using encryption would have done more to call out their activities to authorities.
“For direct calling one another, the GSM system uses encryption as part of its basic technology,” Angelo said. “But the cellular communications are decrypted at the base stations and law enforcement can access the contents of the calls there. Hence, encryption on standard GSM phones would not have hindered law enforcement.”
For email, the use of strong encryption might have protected the contents of a message, but Angelo says it would not have hidden the fact that the two parties were talking, and probably would not have masked their locations.
Also, an encrypted email might have raised a flag in a system, especially if one party was already being watched. France recently loosened its own government restrictions on encryption so as long as cryptography is only used for authentication and integrity purposes, it can be freely employed by anyone in that country. Thus, using strong encryption to protect the contents of a message might have been illegal to begin with in France, and made the sleeper cell easier to spot.
Whether we discover the terrorists in Paris used encryption, those horrible events have stirred the pot in the age-old encryption debate. Angelo believes the many good uses of encryption far outweigh the potential bad, and I have a tendency to believe him.
Plus, as Skipjack showed, there may be no technical or practical way for a single government to control encryption without seriously harming its economy or pushing too far into its citizens’ rights to privacy.
“We need encryption to protect ourselves and our businesses, and to protect things like personal information from being stolen,” Angelo said. “Before we make any quick decisions about a technology that is so fundamental to our survival, we need to think long and hard about just what we want to accomplish and focus on the advantages of it, not just the bad side.” | <urn:uuid:4348d858-3b73-4fee-bb61-45e73d142be0> | CC-MAIN-2024-38 | https://www.nextgov.com/emerging-tech/2015/11/decrypting-encryption-debate/124008/?oref=ng-next-story | 2024-09-19T23:03:24Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652073.91/warc/CC-MAIN-20240919230146-20240920020146-00705.warc.gz | en | 0.974138 | 1,098 | 2.609375 | 3 |
In recent years, cybersecurity has become a critical concern for individuals, organizations, and governments around the world. The rise in cyber threats, coupled with the increasing dependence on technology, has highlighted the importance of securing digital assets from unauthorized access, theft, and exploitation. While technology has undoubtedly played a significant role in strengthening cybersecurity, human factors remain a critical area that requires attention. In this article, we'll explore the role of human factors in cybersecurity and discuss some cybersecurity dissertation topics that you can consider for your research.
What are Human Factors in Cybersecurity?
Human factors in cybersecurity refer to the behaviors, actions, and decisions made by individuals that impact the security of digital systems and data. Common human factors that can compromise cybersecurity include lack of awareness, poor training, negligence, and errors. For instance, an employee may inadvertently download malware by clicking on a malicious link, or a user may create a weak password that can be easily cracked by an attacker. Human factors can also arise from external sources, such as social engineering, phishing, and insider threats.
The Human Factor Vulnerabilities in Cybersecurity
Human factor vulnerabilities in cybersecurity are often considered the weakest link in the security chain. Cyber attackers often exploit human vulnerabilities to gain unauthorized access to systems and data. According to a 2020 report by Verizon, 85% of data breaches involve human error. Human factors can be particularly problematic in organizations where employees are not adequately trained on cybersecurity best practices, or where security policies are not strictly enforced. Examples of cybersecurity incidents caused by human factors include data breaches resulting from stolen passwords, insider threats, or accidental data exposure.
Addressing Human Factors in Cybersecurity
Addressing human factors in cybersecurity is a critical step in improving overall security posture. Organizations can adopt various measures to mitigate human factor risks, including employee training and awareness programs, implementing security policies, and enforcing compliance. The role of organizational culture is also critical in promoting a security-conscious workforce. Organizations can foster a security culture by promoting security awareness and rewarding good security practices.
In conclusion, human factors in cybersecurity remain a critical area that requires attention. Addressing human factor risks can significantly improve overall security posture and reduce the likelihood of successful cyber attacks. If you're considering research in cybersecurity, exploring the role of human factors can lead to valuable insights and recommendations for organizations seeking to improve their cybersecurity posture. The cybersecurity dissertation topics listed above provide a starting point for conducting research on this important topic. | <urn:uuid:3a59c3f2-cc7b-4fe3-9899-00c336f7d574> | CC-MAIN-2024-38 | https://em360tech.com/tech-article/exploring-role-human-factors-cybersecurity | 2024-09-09T02:44:03Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651053.52/warc/CC-MAIN-20240909004517-20240909034517-00805.warc.gz | en | 0.93776 | 490 | 2.75 | 3 |
More data centers are transitioning to renewable energy sources. As this happens, Scope 3 becomes a data center’s largest contributor to greenhouse gas (GHG) emissions. As this category of emissions is also the least reported and understood, quantifying Scope 3 emissions is one of the most important issues the industry is facing.
In this paper you will be able to:
- Gain a deeper understanding into Scope 3 GHG emissions in data centers
- Utilize assessment tools and calculators to estimate a data center's carbon footprint, which can be useful for organizations looking to measure and reduce their environmental impact
- Identify best practices for reducing emissions and learn more about insights into the embodied carbon of capital goods, which represent the majority of a data center’s Scope 3 emissions
- Explore the future of data centers, discussing the role of IT and facility infrastructure in the embodied carbon profile over time | <urn:uuid:ac89dc20-bad4-4977-aeee-6bf11e463ac8> | CC-MAIN-2024-38 | https://www.datacenterdynamics.com/en/whitepapers/quantifying-ghg-emissions/ | 2024-09-09T02:57:32Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651053.52/warc/CC-MAIN-20240909004517-20240909034517-00805.warc.gz | en | 0.936665 | 182 | 2.984375 | 3 |
Browser extensions are convenient little utilities that may adjust browsing experience and make it more comfortable for you personally. However, such a convenient shell – an applet to the legitimate program – could not have been ignored by malicious actors. In this post, I will uncover about malicious browser extensions, their nature and potential harm.
Can extensions be malicious?
Yes, extensions can be malicious, but the harm they can cause is quite specific. In terms of severity, a browser extension is not on par with full-fledged malware. Since extensions cannot go beyond the environment of a browser, they cannot infect the system, modify or delete system files, or directly manipulate the operating system (except for cases with vulnerabilities). However, some extensions can collect personal data, such as browsing history, passwords, and other confidential information, and transmit it to third parties without your consent. This makes them close to spyware and infostealers.
Depending on the type of extension, they can act differently and thus have distinct malicious potential: For example, some can open pop-up ads, redirect users to phishing sites or inject ads into websites where they are initially not present. Some extensions may contain malicious code that can initiate the download of other malicious programs. They can also change your browser settings without your knowledge, alter your homepage or search engine.
It is worth noting that a malicious browser extension these days is a rare find, unless you source them from official websites. Browser extensions are usually distributed through extension stores – platforms that have moderation and requirements, although they are not always effective for stopping malicious stuff. Should their system detect malicious activity or get a well-backed feedback on malignant behavior, the extension’s listing will cease to exist.
Main ways for dodgy extensions to spread are far away from the common routes of the Internet. Usually, they appear from a redirection made by a shady website that trades its traffic to random traffic brokers online. Upon redirection, the user will see an offer to install a “recommended extension” – to enhance security or to display the content. Sure enough, neither of these really happen after the installation.
A browser hijacker is perhaps the most common type of malicious extension. Once installed, this extension changes your homepage and search engine. Even if the user navigates to google.com and performs a search, the extension redirects the query to its search engine. It also adds a special token to each search query, which modifies the search results. In the end, instead of relevant results, the user receives sponsored links that may not even match the query.
The primary risk of such extensions lies in the collection of personal information. The redirection that happens in the process throws the user through a selection of data broker sites, and each of them gathers whatever data they want. Aforementioned alteration of search results can casually throw the user to a phishing page. In some cases, this can result in the download of malicious software.
Adware extensions, as the name suggests, add advertisements to all the websites a user visits. Typically, these extensions disguise themselves as something useful or basic, such as extensions for finding discounts and promo codes. Notably, similar functionality is already present in Microsoft Edge. In practice, these extensions are useless; instead, they bombard the user with ads. Considering that adware does not do anything beyond the actions I’ve just mentioned, malicious browser extensions may be just an adware specimen.
Typical result of activity of adware browser extensions is hard to ignore. The browser starts to run slowly; clicking on any element on a page opens multiple tabs with ads, some of which may be malicious. Certain sites can automatically initiate the download of malicious software. Overall, the extension can seriously degrade the user experience and pose a threat to privacy.
Fake Cryptocurrency Wallet Extension
Fake cryptocurrency wallet extensions pose as legitimate crypto wallets, but their goal is to steal users’ credentials and funds. As I mentioned earlier, moderation in app stores is far from perfect, and sometimes malicious actors manage to place harmful extensions in official extension stores. These extensions may be disguised as popular wallets but have no actual affiliation with them.
When a user enters their credentials, such as private keys, mnemonic phrases, or passwords, the extension transmits this information to the malicious actors. This info allows the attackers to access the user’s real cryptocurrency wallets. Once they have access to the account, the attackers can transfer the funds to their accounts, leading to a complete loss of cryptocurrency for the user.
How to Stay Safe?
Malicious browser extensions are a type of threat you should not underestimate the dangers of. I have a few recommendations that can help you minimize the risks associated with malicious extensions. Firstly, try to avoid installing unnecessary extensions. I would recommend avoiding extensions from unverified sources altogether.
While most of us tend to click “next” to speed up the installation process when installing an extension from a store, I suggest paying attention to the developer and reading the reviews. Keep an eye on your installed extensions and promptly remove any that are unnecessary. Pay special attention when installing extensions related to cryptocurrency wallets. And finally, consider using decent anti-malware software that will notify you about the malicious activity that comes from such an extension. | <urn:uuid:117761c2-0dff-4892-96f6-9360e4d0cd7e> | CC-MAIN-2024-38 | https://gridinsoft.com/blogs/browser-extensions-are-they-safe/ | 2024-09-10T08:32:06Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651224.59/warc/CC-MAIN-20240910061537-20240910091537-00705.warc.gz | en | 0.923085 | 1,075 | 2.609375 | 3 |
Cybercrimes are on the rise, affecting more of the population each year. Resembling an online battle, government agencies like the FBI are working to combat these crimes, finding new ways to take on the savvy online criminals who are destroying people’s credit, as well as their lives.
Some states, however, appear to be more susceptible to cybercrimes than others, although all are affected to some degree. While you can most likely guess a few of those states topping the list, others might surprise you. Knowing if your state sits in the top ten of the states that are mostly affected by cybercrimes can prompt you to take notice and take action now to protect yourself and especially those who own small businesses.
What are Cybercrimes?
A cybercrime is a computer and network-oriented action which damages another, usually financially. These crimes affect both individuals and companies.
Common Types of Cybercrimes
Six of the most common types of cybercrimes include:
- Phishing: Scams in which a hacker or cybercriminal attempts to lure personal or sensitive information from an individual or company computer user
- Malware: Cybercrime involving malicious software creating viruses, worms, and spyware on a computer
- Identify theft: Accessing personally identifiable information, such as social security numbers, and using them fraudulently (both for credit and debit fraud)
- Online harassment
- Invasion of privacy
Top Ten States Affected by Cybercrimes
According to the 2019 FBI Internet Crime Report, ten states are at higher risk of cybercrimes. Their report compiles statistics, including:
- Number of cybercrime victims (based on the number of complaints)
- Total monetary losses in the state
- Number of cybercriminals
- Total earnings by the identified cybercriminals
This list of top ten states mostly affected by cybercrimes is ranked according to total monetary losses resulting from cybercrimes alone, not on the number of victims reporting these crimes.
Total Losses: $573,624,151
Residents of California report more cybercrimes than in any other state. This is likely due to the large population and number of businesses. The total number of cybercriminals identified in the state in 2019 was 17,517. There were 50,1323 victims, with losses of an average of $11,442 each.
Total Losses: $293,445,963
Florida is known for its high number of senior citizens residing in the state, and this may partially explain why it ranks so high on this list. The total number of identified cybercriminals is 11,047, with 27,178 victims suffering losses of around $10,797 each
Total Losses: $264,663,456
A surprise at #3 in Ohio, a state with only average internet access and an overall lower median household income. Yet, with just under 10,000 victims, losses soar to $28,000 on average for each one. Reportedly 2,508 cybercriminals milked almost $15,000,000 out of victims here in 2019.
Total Losses: $221,535,479
As the second-largest state in the country, Texas’s resident population is huge, so you have more opportunities for cybercrimes based on the sheer number of people. The total number of identified cybercriminals in the state comes in at just over 10,000, with earnings of over $126 million.
Total Losses: $198,765,769
While New York is full of companies and a dense population, its total losses from cybercrimes, just under $200 million, put it in fifth place.
Total Losses: $107,152,415
Home to several major companies, including industrial complexes and large state universities, Illinois has one of the country’s best broadband access levels. Because of this, it has become a target of cybercriminals. Total losses to cybercrime in the state in 2019 equaled $107,152,415, with over 10,000 victims involved.
Total Losses: $106,474,464
With such close proximity to New York, the state of New Jersey holds its own when it comes to the highest median income of its residents. Company headquarters and various businesses are found in the state, leading to high temptation for cybercriminals. Total losses to cybercrime in the state in 2019 topped out at $106,474,464.
Total Losses: $94,281,611
Thanks to its high-speed technology, top universities, and company headquarters, Pennsylvania finds itself in the top ten states affected by cybercrime. Victims here lost an average of $8,639 each in 2019.
Total Losses: $92,467,791
Located near Washington, DC, Virginia is densely populated in areas with government workers and contractors. It makes sense, then, that cybercriminals are attracted to this state. Just under 5,000 cybercriminals earned almost $25 million in 2019 alone here.
Total Losses: $84,173,754
With one of the highest median household incomes and full internet coverage across the state, Massachusetts attracts a wide variety of cybercriminals.
So, in what state will you least likely experience a cybercrime? According to the FBI report, Vermont is your best bet.
Ways to Prevent Cybercrimes from Happening to You
Whichever state you live in, there are steps you can take to protect yourself from being a victim of cybercrime.
- Use anti-virus software: allows for regular scanning, detecting, and removing Cybercrime threats from your computer.
- Update your operating system and other software: Keeping everything updated provides the latest security patches for protection.
- Create stronger passwords and change them regularly. Consider keeping track of passwords with a password manager program.
- Avoid opening spam emails and any attachments.
- Decline providing personal information or clicking on suspicious links asking you to confirm or update personal information, including passwords.
- Beware of non-legitimate websites and odd-looking URLs.
State laws differ when it comes to cybercrimes, so you may want to gain knowledge of your rights within your particular state. For example, the states of Florida and New Jersey have in-depth laws with detailed factors and classifications for felonies and misdemeanors. When cybercrimes cross state lines, the FBI can be involved as well.
Cybercrime led to $3.5 billion in overall damages throughout the states in 2019. Even with the passing of more legislation and detailed state laws, the number of crimes is expected to continue to rise. The best thing all state residents can do to help is to be aware of cybercriminals’ motives and do what they can to protect themselves and their businesses.
Related Articles About Identity Theft: | <urn:uuid:8baef88d-3c59-480c-80cb-dcc2c2101ba3> | CC-MAIN-2024-38 | https://staging.homesecurityheroes.com/states-affected-cybercrimes/ | 2024-09-11T14:02:48Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651387.63/warc/CC-MAIN-20240911120037-20240911150037-00605.warc.gz | en | 0.944772 | 1,414 | 2.921875 | 3 |
The complexity of seeking a cure for cancer has vexed researchers for decades. While they’ve made remarkable progress, they are still waging a battle uphill as cancer remains one of the leading causes of death worldwide.
Yet scientists may soon have a critical new ally at their sides — intelligent machines — that can attack that complexity in a different way.
Consider an example from the world of gaming: Last year, Google’s artificial intelligence platform, AlphaGo, deployed techniques in deep learning to beat South Korea Grand Master Lee Sedol in the immensely complex game of Go, which has more moves than there are stars in the universe.
Those same techniques of machine learning and AI can be brought to bear in the massive scientific puzzle of cancer.
One thing is certain — we won’t have a shot at conquering cancer with these new methods if we don’t have more data to work with. Many data sets, including medical records, genetic tests and mammograms, for example, are locked up and out of reach of our best scientific minds and our best learning algorithms.
The good news is that big data’s role in cancer research is now at center stage, and a number of large-scale, government-led sequencing initiatives are moving forward. Those include the U.S. Department of Veteran Affairs’ Million Veteran Program; the 100,000 Genomes Project in the U.K.; and the NIH’s The Cancer Genome Atlas, which holds data from more than 11,000 patients and is open to researchers everywhere to analyze via the cloud. According to a recent study, as many as 2 billion human genomes could be sequenced by 2025.
There are other trends driving demand for fresh data, including genetic testing. In 2007, sequencing one person’s genome cost $10 million. Today you can get this done for less than $1,000. In other words, for every person sequenced 10 years ago, we can now do 10,000. The implications are big: Discovering that you have a mutation linked to higher risk of certain types of cancer can sometimes be a life-saving bit of information. And as costs approach mass affordability, research efforts approach massive potential scale.
A central challenge for researchers (and society) is that current data sets lack both volume and ethnic diversity. In addition, researchers often face restrictive legal terms and reluctant sharing partnerships. Even when organizations share genomic data sets, the agreements are typically between individual institutions for individual data sets. While there are larger clearinghouses and databases operating today that have done great work, we need more work on standardized terms and platforms to accelerate access.
The potential benefits of these new technologies go beyond identifying risk and screening. Advances in machine learning can help accelerate cancer drug development and therapy selections, enabling doctors to match patients with clinical trials, and improving their abilities to provide custom treatment plans for cancer patients (Herceptin, one of the earliest examples, remains one of the best).
We believe three things need to happen to make data more available for use for cancer research and AI programs. First, patients should be able to contribute data easily. This includes medical records, radiology images and genetic testing. Laboratory companies and medical centers should adopt a common consent form to make it easy and legal for data sharing to occur. Second, more funding is needed for researchers working at the intersection of AI, data science and cancer. Just as the Chan Zuckerberg Foundation is funding new tool development for medicine, new AI techniques need to be funded for medical applications. Third, new data sets should be generated, focused on people of all ethnicities. We need to make sure that advances in cancer research are accessible to all.
Contact us Today!
Chat with an expert about your business’s technology needs. | <urn:uuid:7118593c-75c1-4da9-b0fa-df375efd5dfc> | CC-MAIN-2024-38 | https://www.managedsolution.com/tag/datascience/ | 2024-09-16T12:43:59Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651697.32/warc/CC-MAIN-20240916112213-20240916142213-00205.warc.gz | en | 0.940161 | 769 | 3.25 | 3 |
Definition of Microsoft Internet Explorer in Network Encyclopedia.
What is Microsoft Internet Explorer?
Internet Explorer is Microsoft’s web browser and integrated suite of client-side Internet software, which integrates tightly into the Microsoft Windows operating system and was included with Windows until version 7 as an essential operating system component. In July 2019 a new web browser eas released: Microsoft Edge.
Microsoft Internet Explorer includes such components as:
- Internet Explorer Web browser
- Microsoft Outlook Express
- Microsoft NetMeeting
- Microsoft FrontPage Express
- Microsoft Chat
- Web Publishing Wizard
- Connection Wizard
Internet Explorer’s Web browser includes
- Support for the latest Internet protocols, including Hypertext Markup Language (HTML) 4; Microsoft Visual Basic, Scripting Edition (VBScript); Jscript; ActiveX; Java; and Dynamic HTML
- Split-screen Search, History, Channel, and Favorites Explorer bars that can be toggled on and off
- Security zones for dividing intranets and the Internet into safe and unsafe regions with their own security settings
- Authenticode 2 code-signing technology that enables users to check the digital certificate of downloaded code before installing it on their system
- Offline browsing, which enables users to access Web content in their History or Subscribed Content folders when they are not connected to the Internet
- Scheduled, unattended dial-up for obtaining Web content from subscribed sites to view offline later
- Autocompletion of Uniform Resource Locators (URLs) typed into the address bar using Microsoft IntelliSense technology
- Dynamic HTML behaviors that allow Dynamic HTML functionality to be extended through hosted components
- Development enhancements that allow users to set the properties of items based on the value of an expression
- Enhancements to tables that allow users to create fixed-layout tables and collapsible borders
- Accessibility enhancements
Internet Explorer was once the top web browser
Internet Explorer was once the most widely used web browser, attaining a peak of about 95% usage share by 2003. This came after Microsoft used bundling to win the first browser war against Netscape, which was the dominant browser in the 1990s. Its usage share has since declined with the launch of Firefox (2004) and Google Chrome (2008), and with the growing popularity of operating systems such as Android and iOS that do not support Internet Explorer. Estimates for Internet Explorer’s market share are about 2.28% across all platforms or by StatCounter’s numbers ranked 7th, while on desktop, the only platform on which it has ever had a significant share (e.g., excluding mobile and Xbox) it is ranked 4th at 5% after macOS’s Safari. It manages to reach the second rank after Chrome when its statistics are combined with its successor, Edge (others place IE 3rd with 7.44% after Firefox). Microsoft spent over US$100 million per year on Internet Explorer in the late 1990s, with over 1,000 people involved in the project by 1999. | <urn:uuid:56fb4407-ceca-4477-b3ef-550a937ef09f> | CC-MAIN-2024-38 | https://networkencyclopedia.com/microsoft-internet-explorer/ | 2024-09-19T02:10:55Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651944.55/warc/CC-MAIN-20240918233405-20240919023405-00005.warc.gz | en | 0.887722 | 615 | 3.109375 | 3 |
Why Is the Power Out? Weather Impacts on the Electric Utility Grid, Anticipated Property Claims
In February 2021, the catastrophic winter weather events that occurred in southern parts of the U.S., especially in Texas, wreaked havoc on the electric utility grid. Millions of people lost power and some areas were on mandated rolling blackouts. Many people suffered in the cold, without power, wondering how this could be occurring in 2021 in the United States. Right now, there are a lot more questions than answers, and we may not fully understand all the issues contributing to the outages for weeks or months.
In the meantime, we have developed this short blog post to help you understand how we get our power under normal circumstances, the impacts of weather events on the main components of our electric utility grid, and how to navigate the property damage claims that will result from this week's catastrophic events.
Types of Power Generation Plants
There are two types of power generation plants: renewable and non-renewable. Renewable power includes solar farms, wind farms, hydropower/dams, and geothermal. Non-renewable power generation includes oil, coal, natural gas, and nuclear. These plants produce the electric power (load) that businesses, governments, and homes need to run. The United States has approximately 10,500 utility generation stations spread across the country, which pales in comparison to the size or land area taken up by transmission and distribution equipment. These generation plants are somewhat redundant due to the fact that if one plant goes down, a utility can rely on other facilities to cover the demand.
So, as end-users, we typically will not have a power outage or notice if one generation plant goes down. Problems arise when you have the failure/shutdown of several plants or an increased load or demand. For example, in the summer of 2020, the State of California had to trigger rolling blackouts. The cause of the blackouts was not the result of any generation plant or equipment failures, but rather, was largely a result of an extreme heatwave across the state that put increased demand on the generation plants that they simply could not supply.
Why the Texas Rolling Blackouts Failed
This week in Texas, the rolling blackouts were caused by the extreme weather which has impacted the utility grid two-fold. First, due to the extremely low temperatures, businesses and homes are having to produce more heat, which increases the power demand on the grid. This alone, however, would not cause outages as the grid should have enough power to maintain this type of demand. The other issue is that the cold weather caused generation plants to shut down either as a result of equipment failure or to protect the equipment. Wind turbines are subject to ice building up on the blades or the components becoming too cold and are designed to shut down to prevent any damage to the equipment. Similarly, thermal plants, such as coal and natural gas, require water lines to properly cool equipment to prevent overheating. However, given the extremely low temperatures, the necessary water can freeze and damage the components, so these plants also have to be shut down to prevent damage.
You may be asking yourself, what happens to energy equipment in cold climates? Similar energy farms and plants in the north routinely experience below-zero temperatures and undergo a "winterization" process every year to prevent issues like we've seen this week in Texas. Solar farms and plants in the south do not typically go through the winterizing process because these types of weather events are extremely rare and not anticipated. So, this week, Texas and the surrounding states had the double-whammy of both an increased demand for power and a decreased supply due to generation plants having to be taken offline.
Although the Electric Reliability Council of Texas (ERCOT) had established certain reserve margins for the region to allow for increased demand and shortfall in supply, the combinations created by this storm exceeded the planning criteria. Since electric power must be delivered instantaneously from the available generation, the rolling blackouts were instituted to prevent instability and total collapse (blackout) of the state's power system.
Electrical Power Grid Systems
The electric power transmission system is an interconnected grid of power lines that transport the electricity generated by power plants hundreds of miles to cities or neighborhoods. Transmission lines vary in voltages, from 69kV (kilovolts) to 765kV. Due to their importance, the power lines are supported by much more robust structures/poles than the distribution lines that you would see in a residential neighborhood. Because of this, they are much better able to withstand weather events, such as high winds and ice. Electrical transmission systems are also redundant, meaning that if one transmission line goes down, other transmission lines can continue to be used to supply power, and no outages are experienced by customers. Weather events that can cause failures of transmission systems and result in power outages include hurricanes and tornadoes.
In 2017, for example, Hurricane Maria devastated Puerto Rico's transmission and distribution system by knocking out 80% of its electrical grid. This was the largest blackout in U.S. history and the second-largest in the world. It took nearly a year for power to be fully restored in Puerto Rico.
Electrical Power Distribution System
The electrical distribution system is a grid of power lines in a city or region that distributes power from transmission lines to end-users. Distribution power lines vary in voltages, from 120 volts to 35kV. Most of the power outages experienced in the United States are a result of equipment failures in electrical distribution systems. Distribution equipment is less robust and monitored and serviced less frequently than electrical generation and transmission systems because it affects far fewer customers. Therefore, weather events are more likely to have a greater effect on this equipment, leading to power outages. Equipment failures to distribution equipment frequently occur following hurricanes, tornadoes, earthquakes, ice storms, windstorms, and lightning events.
Expected Power Outage Claims From Texas and Southern States
Given the weather events this past week and their impact on the utility grid in Texas and the surrounding states, Envista has seen an influx of water line and sprinkler pipe breaks and expects many more to occur as temperatures rise in the coming days. As insurers and legal professionals begin to tackle the myriad of claims and disputes that will arise from this disaster, it is important to keep the following in mind:
- Determine the cause of the pipe ruptures. Were the failures weather-related or was there another issue? Forensic mechanical engineers can assist with evaluating damage and determining the cause of loss.
- Identify if there is any recovery against the electrical utility for not meeting necessary demand or against the pipe installers for not properly insulating or installing MEP systems. Electrical, mechanical, and fire protection engineers can assist with investigating these failures to determine liable parties.
- Structures and equipment are also greatly impacted during cold weather events. Whether it's ice dams on roofs not acclimated or designed for cold climates, or business-critical equipment that was not powered on for days due to the outages, all structures and systems must be properly evaluated by trained engineering and equipment professionals to determine the extent of damage and repair/replacement scope.
Our experts are ready to help. | <urn:uuid:605e4169-cb27-4238-81ce-92bae66f047a> | CC-MAIN-2024-38 | https://www.lwgconsulting.com/knowledge-center/insights/articles/why-is-the-power-out-weather-impacts-on-the-electric-utility-grid-anticipated-property-claims/ | 2024-09-19T01:33:28Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651944.55/warc/CC-MAIN-20240918233405-20240919023405-00005.warc.gz | en | 0.95578 | 1,480 | 3.40625 | 3 |
Artificial Intelligence (AI) is a transformative technology that enables machines to perform tasks typically requiring human intelligence. These tasks range from recognizing speech and images to making decisions and translating languages. AI systems are built on algorithms, which are sets of rules or instructions that allow computers to learn from data. Machine learning, a subset of AI, involves training these algorithms on large datasets to recognize patterns and make predictions.
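To make "learning from data" concrete, here is a minimal, self-contained sketch (unrelated to any Google product) in which gradient descent discovers the pattern hidden in noisy measurements; the data, learning rate, and iteration count are illustrative choices.

```python
# Minimal "learning from data" sketch: fit y = w*x + b to noisy points
# by repeatedly nudging w and b against the gradient of the error.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 2.0 + rng.normal(0, 1, size=100)  # hidden pattern: w=3, b=2

w, b = 0.0, 0.0
lr = 0.01  # learning rate (illustrative)
for _ in range(1000):
    pred = w * x + b
    err = pred - y
    w -= lr * (2 * err * x).mean()  # gradient of mean squared error w.r.t. w
    b -= lr * (2 * err).mean()      # gradient w.r.t. b

print(f"learned w={w:.2f}, b={b:.2f}")  # should land near w=3, b=2
```

The same loop structure (predict, measure the error, adjust) underlies the far larger models discussed below.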
At the core of AI is the concept of neural networks, inspired by the human brain's structure. These networks consist of layers of nodes (neurons) where data is processed and transmitted. When data is input into the network, it passes through these layers, with each layer extracting increasingly complex features. Through a process called backpropagation, the network computes how much each weight contributed to the output error and adjusts those weights to reduce it, improving its performance over time.
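The sketch below shows those mechanics end to end on the classic XOR problem: a two-layer network, a forward pass through its layers, and backpropagation pushing the error signal backward to update the weights. The layer sizes, learning rate, and iteration count are arbitrary illustrative choices.

```python
# Tiny two-layer neural network trained with backpropagation on XOR.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(0, 1, (2, 4))  # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(0, 1, (4, 1))  # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # forward pass: each layer extracts features from the previous one
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # backward pass: propagate the error signal layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # adjust parameters to reduce the error
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2).ravel())  # typically approaches [0, 1, 1, 0]
```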
Google's AI Innovations: A Game-Changer
Google, a leader in AI research and development, has recently unveiled a series of updates that promise to redefine the landscape of AI technology. These updates include the introduction of Gemini 1.5 Flash, enhancements to Gemini 1.5 Pro, and significant progress on Project Astra, which envisions the future of AI assistants. Let’s dive into these innovations and explore their potential impact.
Gemini 1.5 Flash: Speed and Efficiency
Gemini 1.5 Flash is Google's latest AI model designed to offer unparalleled speed and efficiency. This model is built on a new architecture that optimizes data processing and reduces latency. The key features of Gemini 1.5 Flash include:
The new design allows for faster processing of tasks, making it ideal for applications requiring real-time responses, such as voice assistants and interactive customer service bots.
By improving the efficiency of data processing, Gemini 1.5 Flash reduces energy consumption, which is crucial for sustainability and cost-effectiveness in data centers.
This model can be easily scaled to handle increasing amounts of data and more complex tasks, ensuring that it can grow with the needs of businesses and developers.
The introduction of Gemini 1.5 Flash is set to disrupt the AI market by providing a solution that balances speed and efficiency, making advanced AI capabilities more accessible and practical for a wide range of applications.
Enhancements to Gemini 1.5 Pro: Power and Precision
Building on the foundation of Gemini 1.5 Flash, Google has also enhanced its Gemini 1.5 Pro model. The improvements focus on increasing the model's power and precision, making it suitable for more demanding applications. Key enhancements include:
Advanced Natural Language Processing (NLP)
Gemini 1.5 Pro now offers improved NLP capabilities, allowing it to understand and generate human language with greater accuracy. This is crucial for applications such as automated content creation, advanced search engines, and sophisticated chatbots.
Enhanced Image and Video Recognition
The model's ability to process and understand visual data has been significantly improved, making it ideal for applications in healthcare (e.g., medical imaging), security (e.g., facial recognition), and media (e.g., automated video editing).
Robust Training Algorithms
With enhanced training algorithms, Gemini 1.5 Pro can learn from more diverse and complex datasets, improving its generalization capabilities and making it more adaptable to different use cases.
These enhancements position Gemini 1.5 Pro as a powerful tool for businesses looking to leverage AI for more complex and high-stakes applications, driving innovation across various industries.
Project Astra: The Future of AI Assistants
Project Astra represents Google's vision for the future of AI assistants, aiming to create more intuitive, intelligent, and human-like interactions. This initiative focuses on three main areas:
AI assistants under Project Astra are designed to understand context better, making interactions more natural and relevant. This involves not just processing the words spoken by users but also understanding the context in which they are said, leading to more accurate and useful responses.
By leveraging user data (with privacy considerations), AI assistants can offer more personalized experiences. This means remembering user preferences, past interactions, and even predicting future needs to provide proactive assistance.
One of the most ambitious aspects of Project Astra is to imbue AI assistants with emotional intelligence, allowing them to recognize and respond to the emotional states of users. This could lead to more empathetic and supportive interactions, especially in areas like mental health support and customer service.
The advancements under Project Astra are poised to revolutionize the way we interact with technology, making AI assistants not just tools but integral parts of our daily lives that can understand and respond to our needs in a human-like manner.
Impact on the AI Market
Google's latest AI innovations are set to disrupt the AI market in several significant ways:
With the introduction of Gemini 1.5 Flash and enhancements to Gemini 1.5 Pro, Google has solidified its leadership in AI technology by offering state-of-the-art models that deliver unmatched speed, efficiency, and precision. These advancements position Google well ahead of its competitors, making it the go-to choice for businesses and developers seeking cutting-edge AI solutions.
The Gemini 1.5 series not only accelerates computational tasks but also optimizes resource usage, ensuring high performance and cost-effectiveness. This strategic move attracts a wide array of industries aiming to leverage advanced AI for innovative applications, thus reinforcing Google's dominance in the AI market.
By improving the efficiency and scalability of its AI models, Google makes advanced AI capabilities more accessible to a broader range of users. This democratization of AI technology could lead to a surge in AI adoption across various sectors, from small startups to large enterprises.
The powerful capabilities of Gemini 1.5 Pro and the groundbreaking vision of Project Astra are likely to spur innovation across multiple industries. For instance, improved NLP and image recognition can enhance healthcare diagnostics, while advanced AI assistants can transform customer service and personal productivity.
Ethical AI Development
As Google pushes the boundaries of what AI can do, it also emphasizes the importance of ethical AI development. Ensuring that AI technologies are used responsibly and that user data is protected will be crucial in gaining public trust and widespread acceptance.
These advancements will inevitably influence market dynamics, prompting other tech companies to accelerate their AI research and development efforts. This competitive pressure could lead to rapid advancements in AI technology, benefiting consumers with more innovative and capable AI-driven products and services.
Google's recent updates to its AI offerings, including the introduction of Gemini 1.5 Flash, enhancements to Gemini 1.5 Pro, and progress on Project Astra, signify a major leap forward in AI technology. These innovations promise to enhance the speed, efficiency, and capability of AI systems, making them more accessible and practical for a wide range of applications. Moreover, the visionary Project Astra aims to redefine our interactions with AI assistants, making them more intuitive, personalized, and emotionally intelligent.
As Google continues to push the boundaries of AI, the market is set for significant disruption. The tech giant's advancements in artificial intelligence are poised to equip businesses and developers with more powerful and sophisticated tools, fostering a wave of innovation across various sectors. Industries such as healthcare, finance, and logistics are likely to experience profound transformations, as AI-driven solutions enhance efficiency, accuracy, and decision-making processes.
The ability to analyze vast amounts of data, automate complex tasks, and develop predictive models will enable companies to optimize operations, reduce costs, and create new, innovative products and services. Consequently, the competitive landscape will evolve, with early adopters of advanced AI technologies gaining a substantial advantage.
However, as these advancements unfold, ethical considerations and responsible AI development will be paramount to ensuring that the benefits are widespread and equitable. Issues such as data privacy, algorithmic bias, and the societal impact of automation must be addressed to prevent potential harm and inequality. Google's commitment to ethical AI practices will be crucial in setting standards for the industry, promoting transparency, and fostering public trust.
By prioritizing responsible AI development, Google and other leaders in the field can help shape a future where technological progress aligns with societal values, ensuring that the transformative potential of AI serves the greater good.
The future of AI looks incredibly promising, with Google at the forefront of this technological revolution. Google's commitment to advancing AI has already yielded significant breakthroughs, such as the development of sophisticated language models and cutting-edge machine learning algorithms. As these technologies continue to evolve, they are poised to transform various sectors by enhancing efficiency, accuracy, and scalability.
For instance, In business operations, AI can significantly streamline processes by automating routine tasks and enhancing efficiency. Through machine learning algorithms and natural language processing, AI can handle customer inquiries, process orders, and manage inventory with minimal human intervention, reducing the time and effort required for these activities.
This automation not only frees up employees to focus on more strategic tasks but also ensures greater accuracy and consistency in operations. For instance, AI-driven chatbots can provide instant customer support, while predictive analytics can forecast demand and optimize supply chain logistics, ensuring that businesses are always prepared to meet market needs.
Furthermore, AI optimizes resource allocation by analyzing vast amounts of data to identify patterns and trends that might not be evident to human analysts. By leveraging these insights, businesses can make informed decisions about where to allocate their resources most effectively, whether it’s in marketing, product development, or workforce management.
AI systems can recommend adjustments in real-time, helping companies to adapt swiftly to changing market conditions and consumer preferences. This proactive approach not only maximizes the use of available resources but also drives productivity and growth by ensuring that every aspect of the business is aligned with its strategic goals. Ultimately, AI provides actionable insights that empower businesses to operate more intelligently and competitively in an ever-evolving marketplace.
Moreover, the potential of Google's AI innovations extends to critical areas like healthcare, where improved diagnostic tools and personalized treatment plans can lead to better patient outcomes. Additionally, the advancements in AI will enable more natural and intuitive human-computer interactions, making technology more accessible and user-friendly.
As AI becomes more integrated into daily life, its impact will be profound and far-reaching, reshaping industries and society at large. Google's leadership in AI not only promises a future where technology augments human capabilities but also one where it drives meaningful progress across multiple dimensions of human endeavor. | <urn:uuid:756dd3e0-2f5c-4219-9f86-44077befec9f> | CC-MAIN-2024-38 | https://www.datacenters.com/news/google-unveils-next-era-of-ai-advancements | 2024-09-20T03:44:42Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652130.6/warc/CC-MAIN-20240920022257-20240920052257-00805.warc.gz | en | 0.92631 | 2,127 | 3.4375 | 3 |
The study of human vocal pitch, a key component in communication, has ventured into a new dimension with the recent focus on its genetic underpinnings. With special attention to how these genetic factors may manifest in the median voice pitch of speakers from different linguistic backgrounds, this article weighs the implications of a unique genetic locus found to have a consistent influence across cultures. This exploration sheds light on a commonality that bridges the gap between speakers of tonal and non-tonal languages, positioning the ABCC9 gene as the pivotal element in the symphony of human vocal pitch modulation.
Unveiling the Genetics of Vocal Pitch
The Concept of Median Voice Pitch
Median voice pitch distinguishes an individual’s speech, contributing greatly to their unique vocal signature. While it has always played an integral role in human verbal interaction, the exploration of its genetic basis is relatively nascent. The identification of a genetic locus that significantly impacts median voice pitch serves as a cornerstone in understanding this aspect of human communication. This genetic marker, deeply embedded in our biological makeup, can potentially offer insights into the evolutionary pathways that shaped our ability to convey not just words, but emotions and intentions through speech.
Cross-Cultural Genetic Study
Breaking new ground, researchers embarked on a genetic investigation that transcended cultural and linguistic boundaries by including speakers of Mandarin Chinese and Icelandic. The inclusion of these diverse groups was pivotal in the discovery that certain genetic influences on vocal pitch are universally shared, rather than confined to particular linguistic systems. This broad approach has facilitated a more comprehensive understanding of how the genetics of vocal pitch operate on a global scale, imbuing the findings with significance for a multitude of linguistic and cultural contexts.
The ABCC9 Gene Locus and Vocal Pitch
Genome-Wide Association Study (GWAS) Insights
The adoption of the GWAS methodology paved the way for uncovering the elusive genetic factors tied to vocal pitch. The comparative analysis between Chinese and Icelandic women yielded a striking focal point—the ABCC9 gene locus. Its variant, rs11046212-T, showed a significant correlation with median voice pitch across the study’s cross-cultural cohorts. Such a transcendent discovery points to the existence of a universal genetic thread that weaves through the fabric of diverse human populations, influencing a trait as central to communication as voice pitch.
The Transcultural Impact of rs11046212-T
The ABCC9 gene’s variant, rs11046212-T, emerges as a beacon of genetic consistency, shedding light on the universal nature of voice pitch. The revelation that this variant exerts a considerable influence over median pitch in speakers of vastly different languages points to a shared genetic canvas upon which the diverse patterns of human language are painted. This significant find undermines previous assumptions of language-specific determinants for voice traits, suggesting that human vocal pitch modulation may be largely underpinned by common genetic denominators, cutting across cultural and linguistic divides.
The Mechanics of Voice Pitch Across Cultures
Understanding Genetic Effects
It’s now clear that the genetic effects on median voice pitch are not context-dependent; the trait persists across various speaking scenarios, whether in casual conversation or structured reading. This discovery underscores the pervasiveness of genetic influence on voice pitch and refutes any presumption that it is a fleeting trait affected by immediate environmental factors. Instead, the genetic underpinnings of pitch appear to be formidable, contributing to a voice characteristic that remains stable across different languages and cultures.
Mood and Genetic Influences
One intriguing aspect explored was the potential link between mood disorders and genetic impacts on vocal pitch. Major Depressive Disorder (MDD), known to affect various biological traits, was hypothesized to potentially affect the pitch of one’s voice. However, the genetic effects on pitch remained consistent and robust, irrespective of mood, which dispelled the notion that mood disorders might skew the genetic governance of vocal pitch. The absence of significant genetic effect heterogeneity between individuals with MDD and those without supports this conclusion, offering a clearer view of how genetics, rather than mood fluctuations, are key directors in the symphony of human speech.
Methodology and Implications
The Role of Heritability and Polygenic Scores
The study’s methodology showcased an astute application of heritability estimates and polygenic scores, grounded in the data from Icelandic cohorts, to predict voice pitch variations in other populations. This approach not only demonstrates the study’s methodological solidity but also amplifies the relevance of the findings beyond the initial dataset. The predictive power of the genetic scoring highlighted the ABCC9 gene locus’s significant role in vocal pitch, ensuring that the study’s implications would resonate across genetic and linguistic landscapes.
Dissecting the Findings
The research endeavor excavated four genome-wide significant hits through a precise meta-analysis combining Chinese and Icelandic data, illustrating the robustness of the genetic link to voice pitch. Two of these were newly uncovered loci, potentially signposts to future exploration. Fine-mapping analyses targeted the most probable causal variants within the ABCC9 locus, consolidating the stance that this gene locus is not merely a point of curiosity but a critical juncture in the genetics of voice pitch across the human species.
Future Perspectives in Vocal Pitch Genetics
Beyond the Median F0 Measure
The research, while groundbreaking, presented limitations. Focusing on women and using median F0 as the sole measure of pitch, the study offered a slice rather than the full spectrum of voice components influenced by genetics. Future endeavors must expand this scope, transcending gender boundaries and incorporating a broader range of vocal traits to truly capture the genetic choreography of human speech.
The Biological Underpinnings of Verbal Communication
Research into the human voice pitch, an essential element in how we communicate, has recently turned towards genetics to understand its variations. Scientists are particularly interested in how these genetic components affect the average vocal pitch among different language speakers. A breakthrough in this field has pinpointed a particular genetic site, known as the ABCC9 gene, which appears to exert a consistent impact on voice pitch regardless of cultural or linguistic background.
This finding is significant as it suggests a shared genetic influence among speakers of both tonal languages, where pitch determines meaning, and non-tonal languages. The ABCC9 gene seems to be a central player in regulating vocal pitch across humans. By highlighting a genetic commonality, this line of study not only advances our grasp of the biological determinants of voice pitch but also illuminates a point of convergence within the diversity of human language and communication. This genetic perspective on voice pitch thus underscores an unexpected unity across the worldwide tapestry of languages, bringing a genetic lens to the study of one of our most distinct and expressive traits. | <urn:uuid:e13f962c-0f9a-4c4c-9fbb-db0a8b75b56c> | CC-MAIN-2024-38 | https://biopharmacurated.com/research-and-development/how-does-the-abcc9-gene-affect-human-voice-pitch/ | 2024-09-08T02:57:53Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650958.30/warc/CC-MAIN-20240908020844-20240908050844-00105.warc.gz | en | 0.913084 | 1,382 | 2.609375 | 3 |
Akamai announced plans to eventually use 50 percent renewable energy by 2020 last week. As a content delivery network (CDN) Akamai is one of the Internet’s giants, handling something like 30 percent of Internet traffic. Given that, the question is: why such a low renewable target… and why so late?
The big players have mostly committed to renewables a while back. Apple uses 93 percent renewable energy (and 100 percent in its data centers), Google has a goal of going 100 percent renewable, and has so far achieved 37 percent. Facebook has a goal of being 50 percent renewable by 2018, and its new data centers use 100 percent renewable power. Microsoft has a 100 percent goal, and its US operations are already all-renewable. Amazon has set a goal of 100 percent renewable energy, but without a date.
Offsetting fossil fuels
All this is, of course, achieved using power-purchase agreements, where the giant buys the output of a renewable energy project which balances the energy it actually uses in its data centers.
Akamai is a very different player. As a content delivery network it is a distributed system, which essentially provides a cache on local servers wherever content is wanted. In today’s terminology, it’s an “edge” network. This means that it uses power in a distributed way - 93 percent of its emssions are from its network.
It also doesn’t face consumers the same way as Google and Facebook. The brand is unknown out side the tech industry, so it’s not faced a major Greenpeace campaign, and can’t get the same public relations benefit from a green announcement.
Despite this, Akamai pays electricity bills, and has done a lot of work choosing low-energy equipment. A graph of its energy use shows its equipment handles 20 times more data now than seven years ago, for only a fraction more energy.
But Akamai has a lot less control over where that kit gets placed and how it is powered, than the other players do, with their centralized facilities. Google and Facebook can choose where to build their data centers, putting them where the power and economics is most favourable.
And they can do so, in large part, because CDNs like Akamai are there, making sure the content gets to the users.
Akamai has to use remotely managed servers, and place them in colocation facilities owned by third parties. As Akamai’s director of sustainability, Nicola Peill-Moelter, puts it: ”We don’t own any facilities and already pay the landlords for our electricity. Solar panels on roofs and wind-farm power purchase agreements for our individual facilities are not options for us.”
What Akamai will be doing is in fact a kind of power-purchase agreement, offsetting the power used by its network (or its landlords) by supporting renewable projects. But Akamai has to do this in a more complex way, because it wants to replenish the grid in the countries where it is operating.
Renewable energy projects are harder to find and fund in India and Australia, than they are in California, but that‘s what it plans to do.
So, it has taken Akamai longer to get to the position of making a commitment, it is planning on a smaller renewable quotient than the webscale cloud giants, and it will take longer to get there.
But the effort is massively significant for renewable power in edge networks. And it could also drive the development of green energy, where it is needed most. | <urn:uuid:e5ba553b-eb5f-42f7-afef-c34d3d36fcf0> | CC-MAIN-2024-38 | https://direct.datacenterdynamics.com/en/opinions/its-harder-to-green-the-edges/ | 2024-09-08T02:45:34Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650958.30/warc/CC-MAIN-20240908020844-20240908050844-00105.warc.gz | en | 0.958246 | 748 | 2.546875 | 3 |
Searching an entire document for a specific text string can be time-consuming, especially if the document is very long. If you have an idea of where the text string occurs in the document, you can use text search options to limit your search. For example, if you know that the phrase you are looking for occurs only within a specific set of columns, you can limit the search to those columns. Depending on how the document was imported, you may also be able to search predefined blocks of text using column indexes.
To limit a search using internal text search settings:
- Click Settings.
The Settings drop-down list is displayed:
Select from the following:
Select to search for alphanumeric text.
Select to search for numeric values. This option also allows the use of the following operators to limit the search: =, >, <, >=, and =<. You can use and, or, and to as operators to search for a range of values. For example, type 2010 and 2011 to find documents containing both 2010 and 2011.
If you are searching for an exact number that is part of an alphanumeric text string, then the number will not be found. For example, if you search for 001 and the actual text is ABC001, then the value will not be found.
Select to search for numeric values that use formatting characters. For example, to search for all Social Security Numbers greater than 800-00-0000, type > 800-00-0000 in the Search String field. You can use this option with following operators to limit your search: =, >, <, >=, and =<. The and, or, and to operators can be used to search for a range of values. For example, type 800-00-000 to 900-00-0000 to find documents containing values within this range.
Note:When you search for formatted numbers greater or less than the entered search string, formatted numbers followed by periods are not included in the search results. For example, if the formatted number is the last word in a sentence, then it will be omitted as a result.
Select to include wild card characters in your text string search criteria.
Select to return only matches that have the same capitalization as the text string search criteria.
Whole Word Match
Select to return matches for an exact word.
Use Column Searching- Select to search within specified columns.
Column Indexing- Select the column index for the block of text that you want to search. This setting is only available for documents with column indexes.
After selecting either option, select Columns... to specify columns in the Column Configuration window:
In the From field, type or select the character position of the column to start the search in (the left most column to be searched). The column of characters at the far left of the document is 1, the next column to the right is 2, and so on.
In the To field, type or select the character position of the column to end the search in (the right most column to be searched). The number in the To field must be greater than or equal to the number in the From field.
- Press the Enter key.
- The search is executed with your selected options. | <urn:uuid:7418438e-ab9a-4bca-a6fa-6b901d6e34af> | CC-MAIN-2024-38 | https://support.hyland.com/r/OnBase/Unity-Client/Foundation-23.1/Unity-Client/Usage/Working-with-Documents/Text-Tab/Internal-Text-Search/Limiting-Searches-Using-Internal-Text-Search-Settings | 2024-09-09T08:25:13Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651092.31/warc/CC-MAIN-20240909071529-20240909101529-00005.warc.gz | en | 0.836558 | 670 | 2.875 | 3 |
Siri, Cortana: New Driving Hazards In The Digital Age
If you work for an IT leader who expects you to be always-on, even during your drive times, you'll want to see what researchers have discovered about how much hands-free systems diminish your attention span behind the wheel.
10 iPhone Apps Only An IT Pro Could Love
10 iPhone Apps Only An IT Pro Could Love (Click image for larger view and slideshow.)
If you work for an IT leader who expects you to be always-on, even during your drive times, you'll want to pass along this news. Even using hands-free systems behind the wheel causes considerable distraction when driving.
In a pair of studies conducted for The AAA Foundation for Traffic Safety, University of Utah researchers found that it takes a considerable amount of time for a driver to return to full attention after using a hands-free system.
In some cases, a driver doing 25 mph could travel more than 1,000 feet before regaining full attention after using a hands-free system.
"Drivers should use caution while using voice-activated systems, even at seemingly safe moments when there is a lull in traffic or the car is stopped at an intersection," said Marshall Doney, AAA's president and CEO, in a statement. "The reality is that mental distractions persist and can affect driver attention even after the light turns green."
[ Of course, it might not matter much longer. Read Tesla Updates Autopilot Software, Moves Closer to Self-Driving Cars. ]
One of the studies showed that it is highly distracting to use hands-free voice commands to dial phone numbers, call contacts, change music, and send texts with Microsoft Cortana, Apple Siri, and Google Now smartphone personal assistants. The other study examined voice-dialing, voice-contact calling, and music selection using the in-vehicle "infotainment" systems in 10 vehicles of the 2015 model year.
Researchers rated the smartphone assistants and in-vehicle systems on a scale of 1 to 5, with 1 being mild distraction and 5 being maximum distraction. All three of the smartphone assistants were rated between 3 (high distraction) and 4 (very high distraction). Among the in-vehicle systems, three were rated moderate distraction, six were considered high distraction, and one (the 2015 Mazda 6 system) was rated as very high distraction.
The studies were conducted with participants driving the one of the ten car models at 25 mph or less around a 2.7-mile route in Salt Lake City's Avenues neighborhood as the testers used voice commands to dial numbers, call contacts, and tune the radio using in-car systems. Participants were also studied under the same conditions while using one of the three smartphone voice assistants to dial numbers, call contacts, choose music, and text.
The researchers looked at how the systems rated "high distraction" and "moderate distraction" affected drivers. They found that a driver traveling only 25 mph continues to be distracted for up to 27 seconds after disconnecting from the most highly distracting phone and car voice-command systems, and up to 15 seconds after disconnecting from the moderately distracting systems.
A 27-second distraction while on the road means a driver traveling 25 mph would cover the length of three football fields -- more than 1,000 feet -- before regaining full attention. Distracted-driving statistics are sobering. In 2013, 3,154 people died and 424,000 others were injured in motor vehicle crashes on US roads involving driver distraction, according to the US Department of Transportation.
(Image: AAA Foundation for Traffic Safety)
The study of in-vehicle information systems included 257 participants, and the smartphone personal assistant study had 65 participants -- all with no at-fault accidents during the past five years. Participants ranged in age from 21 to 70.
Their reaction times were then compared to known distractions that are rated on a scale of 1 to 5. According to a statement about the research:
A category 1 mental distraction is about the same as listening to the radio or an audio book. A category 2 distraction is about the same as talking on the phone, while category 3 is equivalent to sending voice-activated texts on a perfect, error-free system. Category 4 is similar to updating social media while driving, while category 5 corresponds to a highly challenging, scientific test designed to overload a driver's attention.
The graphic below shows how researchers rated the various smartphone and in-car systems:
(Image: AAA Foundation for Traffic Safety)
Stayer said in a statement: "If you are going to use these systems, use them to support the primary task of driving -- like for navigation or to change the radio or temperature -- and keep the interaction short."
Do you feel pressured to work even while driving? How do your experiences with hands-free systems compare with what the researchers found? Do you regularly use Siri, Cortana, or Google Now to interact with your smartphone while driving? Tell us about it in the comments section below.
About the Author
You May Also Like
Maximizing cloud potential: Building and operating an effective Cloud Center of Excellence (CCoE)
September 10, 2024Radical Automation of ITSM
September 19, 2024Unleash the power of the browser to secure any device in minutes
September 24, 2024 | <urn:uuid:4f011821-1a33-43f9-a919-7bce3e539c09> | CC-MAIN-2024-38 | https://www.informationweek.com/it-leadership/siri-cortana-new-driving-hazards-in-the-digital-age | 2024-09-10T11:52:56Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651241.17/warc/CC-MAIN-20240910093422-20240910123422-00805.warc.gz | en | 0.956622 | 1,087 | 2.625 | 3 |
Connects decision-makers and solutions creators to what's next in quantum computing
Chinese Researchers Claim Quantum Encryption Hack
Method not considered a risk as it may not scale up or offer advantage over classical computing
January 11, 2023
Chinese researchers claim to have broken a standard RSA algorithm – a public-key cryptosystem that is widely used for secure data transmission – using a quantum computer.
The researchers at the Beijing Academy of Quantum Information Sciences made the claim late last month on the arXiv repository, an open repository for scientific papers, noting that their quantum computing method was successful in a small-scale demonstration.
However, many experts remain skeptical that this method could be scaled up to outperform conventional computers in the task. They believe it will be many years before quantum computers can outperform conventional computers at decrypting cryptographic keys, the strings of characters used in an encryption algorithm to protect data.
Shor's Algorithm vs. Schnorr’s Algorithm
One proposed method for breaking RSA is Shor's algorithm, which uses quantum computing to find the prime factors of an integer. It could break RSA encryption ten times faster than classical computing methods.
But applying Shor's method would require a quantum computer substantially more powerful than current prototypes. Researchers estimate breaking RSA encryption with this method would require one million or more qubits.
Shijie Wei, who led the Beijing Academy of Quantum Information Sciences team, instead used Schnorr's algorithm, another method for factoring integer numbers used in digital signatures. Although Schnorr's algorithm was created to function on a classical computer, Wei's team used the quantum approximate optimization algorithm (QAOA) to perform a portion of the process on a quantum computer.
The researchers claim that their algorithm can crack strong RSA keys using only 372 qubits. Critics contend that running the QAOA algorithm on such a small machine would require each of the 372 qubits to work without errors 99.9999% of the time.
It's also uncertain whether using the QAOA speeds up factoring huge numbers compared to using Schnorr's algorithm on a classic computer. The research paper admitted this, reading: "It should be pointed out that the quantum speedup of the algorithm is unclear."
Experts believe that while this Schnorr-based method may not pose an immediate threat to online security, it is only a matter of time before Shor's algorithm executed on a quantum computer could do so.
About the Author
You May Also Like | <urn:uuid:ee60880a-24f4-4462-be43-64bd580f4768> | CC-MAIN-2024-38 | https://www.iotworldtoday.com/security/chinese-researchers-claim-quantum-encryption-hack | 2024-09-10T10:13:56Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651241.17/warc/CC-MAIN-20240910093422-20240910123422-00805.warc.gz | en | 0.914069 | 506 | 3.46875 | 3 |
In previous posts we have loosely discussed the act of comparing strings. Until now this has been a nebulous term relying primarily on intuition to understand but if we want automated systems such as PolyAnalyst to perform what we call comparison it needs to be well defined and based on calculation. This concept is typically referred to as a string metric and is a method of putting into numbers how similar two strings are.
Despite the name, many of these measurements are not in fact metrics by definition because they do not all obey the triangle inequality; however, they remain very useful in the measurement of strings. There are many string metrics of varying degree of complexity and usefulness.
A traditional exemplar of string metrics is the Levenshtein distance.
What is the Levenshtein Distance?
Informally, the Levenshtein distance is the number of edits it takes to turn one string into the other while defining an edit as inserting characters, deleting characters, or substituting characters.
For example, let us compare the words “Kitten” with “Sitting.”
Kitten | Sitting |
Sitten | Sitting |
Sittin | Sitting |
Sitting | Sitting |
It took three operations to turn “Kitten” into “Sitting” so we can say the Levenshtein distance between the words is 3.
Other distance metrics include Damerau-Levenshtein that also takes into account transpositions of characters and Jaro-Winkler which considers matching characters and transpositions between strings but adds more complexity in both the definition and calculation.
Basic string metrics do not account for any semantic information about strings, however. They deal solely with characters and measure simply how close in terms of characters are strings alike. This is why, without normalization, string metrics are virtually useless. However, if we normalize our data into small atomic attributes, then we have already isolated the semantic components and so then relying on simple character distances actually provides some useful information to us.
String metrics are not the complete solution but they are an important piece in a system which can achieve fuzzy matching – piece that allows us to quantify how similar entity components are which allows us to automate this process and remove some degree of arbitrariness in an already subjective task. | <urn:uuid:b2595204-428e-43fe-b01b-1ce2507defbc> | CC-MAIN-2024-38 | https://www.megaputer.com/fuzzy-matching-comparing-records-with-string-distance-measures/ | 2024-09-11T16:40:40Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651390.33/warc/CC-MAIN-20240911152031-20240911182031-00705.warc.gz | en | 0.936211 | 473 | 3.078125 | 3 |
“This is very embarrassing.” So began a post by the developers of UnrealIRCd server after finding that their software was infected with a Trojan. Another example of why enterprises should consider the safe haven of Linux? Just the opposite: The Trojan infected only the Linux version of the server software, but its Windows counterpart was clean.
Although Linux malware is relatively rare compared to attacks on Windows, it exists, and it’s steadily increasing. In fact, as far back as 2005, the amount of known Linux malware had already doubled over the course of a year to 863 programs. As Linux’s popularity grows among consumers and enterprises, so does its attractiveness to hackers.
In the process, the strategy of security by obscurity becomes less viable. So far, Linux servers appear to be targeted more frequently than Linux PCs partly because there’s a larger installed base. The risks aren’t limited to servers and desktops, either. One recent example is Backdoor.Linux.Foncy.a, which attacks smartphones running the Linux-based Android operating system. Kapersky Lab calls Backdoor.Linux.Foncy.a “the most striking example of a malicious program used by cybercriminals to remotely control an infected device by sending a variety of commands.”
In a sense, Linux malware today is like mobile malware circa 2002: Many businesses, consumers and analysts scoffed at warnings simply because attacks were so few and far between. But as the attacks mount, so does the need for a strategy that’s more robust than simply betting that the odds are in your favor.
Developing a Security Strategy
The good news is that many successful strategies from the Windows world are applicable to Linux.
1. Think twice about downloading free software and content even when it, the source or both appear innocuous. Ignoring that advice has facilitated hacks such as screensavers that use Ubuntu PCs for distributed denial-of-service attacks. Backdoor.Linux.Foncy.a passed itself off as the “Madden NFL 12” game.
2. Run a Windows antivirus program. Because Linux PCs are still a minority, there’s a good chance that a file is headed for a Windows machine. Windows antivirus software minimizes the chances that the Linux PC or server will facilitate malware’s spread.
3. Borrow from Ronald Reagan: Trust, but verify. For example, many Linux users trust Ubuntu’s Personal Package Archives. The potential catch is that although there’s a code of conduct, there’s no guarantee that a secretly malicious signatory won’t leverage that trust. Verification could include using only entities that have proven themselves to be trustworthy, or inspecting the files in a package for anything suspicious before installation.
There’s also a growing selection of books and Web tutorials for developing an enterprise Linux security strategy. For example, CyberCiti.biz advises: “Most Linux distro began enabling IPv6 protocol by default. Crackers can send bad traffic via IPv6 as most admins are not monitoring it. Unless network configuration requires it, disable IPv6 or configure Linux IPv6 firewall.”
4. Explore vendors offering Linux security services and products. There’s a good reason why they’re worth paying attention to: They wouldn’t have those lines of business if there weren’t enough threats already out there.
5. Don’t let managers and other supervisors blindly sign off on the wireless portion of expense reports. This advice is as low-tech as it gets, but it’s also highly effective — not just for Android malware, but types that target all other mobile OSs, too. Although a lot of malware is designed to harvest credit card numbers and other personal information, Backdoor.Linux.Foncy.a is an example of the types that send messages to premium-rate text message and other data services. By simply questioning why an expense report has an unusually high wireless bill that month, you could catch an infected smartphone before it has several months or more to incur unnecessary charges. In the case of Backdoor.Linux.Foncy.a, only about 2,000 Android phones were infected, but that was enough for the hackers — later arrested — to run up an estimated 100,000 Euros in unauthorized charges. | <urn:uuid:8d8087db-5e28-4ae3-87cb-c8662514608d> | CC-MAIN-2024-38 | https://intelligenceinsoftware.com/WhyLinuxNeedsMalwareProtection/ | 2024-09-12T22:05:26Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651498.46/warc/CC-MAIN-20240912210501-20240913000501-00605.warc.gz | en | 0.933417 | 899 | 2.546875 | 3 |
A network switch acts like the central hub of a Local Area Network (LAN). It’s a hardware device that connects multiple computers, printers, servers, and other network devices together within a building or a campus. The switch serves as a controller, allowing networked devices to talk to each other efficiently.
The Role of a Network Switch in a Local Area Network
In a LAN and WLAN, the network switch determines the path of data packets traveling across the network. Unlike a simple hub which broadcasts data to all connected devices, a switch is smarter. It can identify which device is the intended recipient of the data and sends it directly to that device, minimizing traffic and maximizing speed.
How Data Packets are Handled by a Network Switch
Here’s a quick breakdown of how data packets are managed:
- Receiving: The switch accepts incoming data packets.
- Processing: It reads the packet’s header to identify the destination address.
- Forwarding: The switch then transmits the packet directly to the specific device.
Different Types of Switches and Their Functions
Switches come in various types, each serving different needs:
- Unmanaged Switches: Simple, plug-and-play devices with fixed configurations, perfect for basic tasks.
- Managed Switches: Offer more control, with features like VLANs and QoS, ideal for complex networks.
- Smart Switches: A middle ground between unmanaged and managed, offering some management features without the complexity.
Switch vs. Router: Similarities and Differences
While both switches and routers are fundamental network devices, they serve distinct purposes. A router connects multiple networks together, like your business network to the internet. It routes data between these networks and can perform tasks like assigning IP addresses. On the other hand, a switch operates within a single network, directing data to its correct destination within that network.
Understanding these differences is vital in ensuring that you choose the right device for your network’s specific needs. Whether it’s managing the flow of data within your office or connecting your local network to the wider world, getting the right balance of switches and routers can significantly impact your network’s efficiency and reliability.
How Does a Network Switch Work?
A network switch receives, processes, and directs data to the correct destination. When a data packet arrives at a switch, the switch determines which device on the network should receive it and sends it to that device only. This targeted approach prevents unnecessary data traffic on other parts of the network, ensuring efficiency and speed.
Understanding MAC Addresses and Switch Functionality
Each device on your network has a unique identifier known as a Media Access Control (MAC) address. Think of it as a postal address for your network devices. When a switch receives data, it examines the packet’s header to identify the MAC address of the destination device. It then uses this information to direct the data to the right device. Over time, switches learn the MAC addresses of devices on the network, which further streamlines data delivery.
Exploring the Network Layer Interaction with Switches
Network switches mainly operate at the data link layer (Layer 2) of the OSI model, handling physical addressing and network topology. However, some advanced switches, known as Layer 3 switches, also have routing functionalities typically found in routers. These switches can perform some operations at the network layer (Layer 3), like routing and IP addressing, offering more flexibility in handling data across different networks.
Switch Port Processes and Data Transmission
Each port on a network switch can be thought of as a separate channel for data. When a device is connected to a port, the switch manages the flow of data to and from that specific port. This includes not only directing incoming data to the correct device but also managing outgoing data from connected devices, ensuring smooth and orderly traffic flow.
Network Segments and How Switches Manage Them
Network segmentation involves dividing a larger network into smaller, more manageable segments. Switches play a crucial role in this process by controlling which data packets can move between different segments. This helps in not only organizing and streamlining network traffic but also enhancing security by limiting data access to certain parts of the network.
Managed vs. Unmanaged Switches: Which Type Do You Need?
Choosing between managed and unmanaged switches is a decision that can significantly impact your business network’s efficiency and scalability. Let’s delve into the details to help you make an informed choice.
Comparing Managed and Unmanaged Switches
- Managed Switches: These are the high-flyers of the switch world. They offer advanced features like network management, monitoring, VLANs (Virtual Local Area Networks), and QoS (Quality of Service) settings. They’re ideal for networks that require precise control and high levels of traffic management.
- Unmanaged Switches: These are the workhorses, simple and straightforward. They’re plug-and-play devices with no configuration needed, making them perfect for smaller networks or where minimal management is sufficient.
When to Opt for a Managed Switch
If your business has complex network needs or anticipates growth, a managed switch is your go-to. Here’s why:
- Scalability: As your business grows, so do your network needs. Managed switches adapt to changing demands.
- Security: They offer better security controls to protect sensitive data.
- Customization: Tailor your network to your specific needs with VLANs and QoS settings.
- Remote Management: They allow you to monitor and adjust your network remotely, a big plus for troubleshooting.
Benefits of an Unmanaged Switch in Simple Networks
For smaller businesses or those with straightforward networking needs, unmanaged switches shine by:
- Simplicity: They’re easy to set up and use, with no configuration needed.
- Cost-Effectiveness: Generally, they are more affordable than managed switches.
- Reliability: With fewer features, there’s less that can go wrong, making them highly reliable for basic tasks.
Smart Switches and Their Role in Network Management
Smart switches are the middle ground between managed and unmanaged switches. They offer some management features, like limited VLAN support, without the complexity and cost of fully managed switches. This makes them suitable for businesses that need more control than unmanaged switches offer but don’t require the full suite of features of managed switches.
What Is a Layer 3 Switch and Do You Need One?
A Layer 3 switch, also known as a multilayer switch, combines the capabilities of both a switch and a router. It operates at the network layer, handling routing of IP packets, in addition to performing the switching functions. Consider a Layer 3 switch if:
- Inter-VLAN Routing: You need to route traffic between different VLANs.
- Advanced Networking: Your network requires sophisticated management and routing capabilities.
- Performance: Layer 3 switches typically offer better performance for routing tasks than traditional routers.
The choice between managed, unmanaged, smart, or Layer 3 switches hinges on your specific business needs. Understanding these needs and forecasting future growth will guide you in selecting the right switch, ensuring a robust, scalable, and efficient network for your business.
The Great Debate: Network Switch vs. Router
Understanding the roles of network switches and routers in your business network is crucial. Each plays a unique part in keeping your network running smoothly. Let’s explore their differences and how they fit into your network infrastructure.
Core Differences Between Switch and Router Operations
- Network Switches: A switch connects various devices within a single network (like computers in an office) and directs data to its correct local destination using MAC addresses.
- Routers: Routers connect multiple networks (like your office network to the internet) and route data between them using IP addresses.
Device That Connects: When to Use a Switch, When to Use a Router
- Use a Switch: When you want to connect multiple devices within a single network. For example, linking all the computers in your office to share resources.
- Use a Router: When you need to connect your network to the internet or another network. Routers are essential for tasks like assigning IP addresses and managing the traffic between different networks.
Can Switches and Routers Work Together in a Network?
Absolutely! In fact, most business networks will have both switches and routers working in tandem. The router acts as the gateway to the internet, while switches keep your internal traffic organized and efficient. This combination ensures that data flows smoothly within your network and to/from the broader internet.
The Impact of Switches on Wireless Access and Control
Switches are typically associated with wired networks, but they play a significant role in wireless networks too. Wireless routers or access points connect to switches, extending the network’s reach without sacrificing control or security. Switches can manage the traffic coming from these wireless devices, ensuring optimal performance.
Understanding Switch Ports in Context of Routers
Routers usually have a limited number of ports, as their primary job is to route traffic between networks, not within them. Switches, on the other hand, offer multiple ports for internal network connections. This makes switches an indispensable tool for expanding your network, allowing more devices to connect and communicate efficiently.
While switches and routers may seem similar at first glance, they serve distinct and complementary roles in your network. A well-balanced use of both is key to a smooth, efficient, and scalable network infrastructure, vital for any thriving business.
Setting Up Your Network: How to Integrate a Network Switch
Integrating a network switch into your business network is a crucial step towards enhancing connectivity and performance. Let’s walk through the setup process to ensure you’re getting the most out of your network.
Initial Setup of a Network Switch
- Choose the Right Location: Position your switch in a central, easily accessible area. Ensure it’s in a cool, dry place to prevent overheating.
- Connect to Power Source: Plug your switch into a reliable power source. Consider using an Uninterruptible Power Supply (UPS) to protect against power outages.
- Update Firmware: Before adding devices, ensure the switch’s firmware is up-to-date for optimal performance and security.
Connecting Devices to Your Switch: A Step-by-Step Guide
- Gather Necessary Cables: You’ll need Ethernet cables to connect devices to the switch.
- Connect Devices: Plug one end of an Ethernet cable into the device and the other end into the switch. Repeat for each device.
- Check Connections: Verify that each connected device is recognized by the switch. This is often indicated by LED lights on the switch.
Configuring Network Settings for Optimal Switch Performance
- Access Switch Interface: For managed switches, access the management interface using the instructions provided by the manufacturer.
- Configure Settings: Adjust settings like VLAN configurations, QoS priorities, and security measures as needed.
- Save Configurations: Always save your settings after making changes to ensure they are applied.
Network Within a Network: Segmenting with Switches
- Define VLANs: Use VLANs to segment your network into smaller, more manageable groups. This enhances performance and security.
- Assign Ports to VLANs: Allocate switch ports to different VLANs based on your network segmentation strategy.
- Configure Inter-VLAN Routing: If necessary, set up routing protocols to allow communication between different VLANs.
Maintaining Your Network: Tips for Switch Management
- Regular Updates: Keep your switch’s firmware and software updated to protect against vulnerabilities.
- Monitor Performance: Regularly check the performance and health of your switch. Look for signs of overloading or unusual activity.
- Backup Configurations: Regularly back up your switch configurations. In case of failure, this will allow for quick restoration.
By carefully setting up and maintaining your network switch, you ensure that your business network is robust, secure, and ready to handle daily operations efficiently. Remember, a well-configured network is a cornerstone of a successful, tech-savvy business. | <urn:uuid:2b1173ac-bbe5-485c-8089-48e369326b68> | CC-MAIN-2024-38 | https://purple.ai/blogs/network-switch/ | 2024-09-12T22:48:24Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651498.46/warc/CC-MAIN-20240912210501-20240913000501-00605.warc.gz | en | 0.917513 | 2,538 | 3.59375 | 4 |
A Certified Ethical Hacker (CEH) or "White Hat Hacker" is a skilled professional who finds and fixes computer and network security vulnerabilities using the same methods and tools as a malicious hacker. The term "ethical hacker" refers to someone with advanced technical skills who can probe an organization's defenses, breach security, and penetrate networks just like a malicious hacker, but with explicit permission. Any company with a significant online presence needs ethical hacking skills to protect itself from cybercriminals.
Here is a list of essential factors to consider before you start preparing for the CEH v11 certification exam:
1. The Rising Demand for CEH v11: If you have working experience in the IT industry, you are probably aware of the rising volume of cybercrime and how quickly the cybersecurity market is evolving. When you are ready to begin a career in cybersecurity and want to prepare for the CEH certification exam, the first step is to understand the demand for the CEH v11 certification and the role it will play in your professional development.
2. Career Growth in the Field of CEH: The second and most crucial factor to consider before preparing for your certification exam is understanding the potential for career growth in the field. The growing number of hacking incidents has driven a significant increase in ethical hacking job openings: well-known organizations, financial institutions, and government agencies now employ ethical hackers to safeguard their sensitive data and systems.
3. Key Responsibilities as a CEH Expert: An ethical hacker may be in charge of various roles and responsibilities in a company. Before you start your exam preparation, understand what the job typically involves: scanning systems and networks for vulnerabilities, performing authorized penetration tests, documenting and reporting security flaws, and recommending remediation measures to the organization.
4. Technical Skills Needed to Build a CEH Career: When choosing a career, first understand what skills are needed to enter that field. Ethical hacking is the practice of compromising computer systems to assess security while acting in good faith by informing the vulnerable party. Ethical hacking is a necessary skill for many job roles that protect an organization’s online assets. Let’s look at the skills needed to become an ethical hacker.
A. Computer Skills
An ethical hacker must first be an expert computer user: comfortable with operating systems, file systems, command-line tools, and system configuration. These fundamentals underpin every other skill on this list.
B. Computer Networking Skills
Networking skills are one of the essential requirements for becoming an ethical hacker. A computer network is the interconnection of multiple devices, commonly referred to as hosts, linked via various paths to send and receive data or media. Before prepping for the CEH certification exam, you must have a solid grasp of these networking fundamentals.
C. Linux Skills
Linux is an open-source operating system, and under the General Public License (GPL), its source code can be modified and distributed by anyone. An ethical hacker should learn Linux because it is widely regarded as more secure than most other operating systems. This isn’t to say that Linux is without flaws; Linux malware exists, but the platform is less susceptible to it and is far less dependent on anti-virus software.
D. Programming Skills
Programming skills are another essential skill for becoming an ethical hacker. So, what exactly does the term “programming” mean in the computer world? It is defined as “the act of writing code that a computational device recognizes to perform specific commands.” There are many programming languages that ethical hackers use, such as C, PHP, Python, SQL, Java, etc.
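To make that concrete, here is a toy sketch of the kind of small tool ethical hackers routinely script, in this case a Python check of a few common TCP ports on a host you are authorized to test. The hostname is a placeholder:

# Toy port check; only run against systems you have permission to test.
import socket

target = "scanme.example.com"   # placeholder target
for port in (22, 80, 443):      # SSH, HTTP, HTTPS
    try:
        with socket.create_connection((target, port), timeout=2):
            print(f"port {port}: open")
    except OSError:
        print(f"port {port}: closed or filtered")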
E. Basic Hardware Knowledge
Computer hardware comprises the physical components of a system, such as the central processing unit (CPU), monitor, mouse, keyboard, computer data storage, graphics card, sound card, speakers, and motherboard. Software protections mean little to an attacker with physical access, since they can tamper with the hardware directly. And someone who doesn’t understand hardware cannot reason about how the motherboard works, how USB devices transfer data, how CMOS and BIOS work together, and so on. To become an ethical hacker, one must therefore also have a basic understanding of hardware.
F. Reverse Engineering
Reverse engineering is the process of recovering a product’s design, security frameworks, and features from an analysis of its code. Its purpose is to shorten the time it takes to service a system by making the system easier to understand and by producing the documentation it lacks. In software security, reverse engineering is widely used to verify that a system or network is free of significant security vulnerabilities and threats, which contributes to a system’s endurance against cyberattacks.
G. Cryptographic Skills
Cryptography converts plaintext into ciphertext, a non-readable form that is unintelligible to attackers while it is being transmitted. An ethical hacker must ensure that information exchanged between members of the organization remains confidential.
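As a minimal illustration, the sketch below encrypts and decrypts a message with the Python cryptography package’s Fernet interface (assuming the package is installed). It demonstrates symmetric encryption in general, not any particular deployment:

# Minimal symmetric-encryption sketch using the "cryptography" package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # the shared secret; must be kept private
f = Fernet(key)
token = f.encrypt(b"quarterly payroll figures")  # unreadable ciphertext
print(token)
print(f.decrypt(token))            # only a key holder recovers the plaintext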
H. Database Skills
Database creation and management are centered on the Database Management System (DBMS). To help the organization build and secure a solid DBMS, an ethical hacker must understand DBMS concepts, the major database engines, and data schemas.
I. Problem-Solving Skills
Problem-solving abilities help identify the source of a problem and produce a workable solution. Aside from the technical skills listed above, an ethical hacker must also be a deep thinker and a resourceful problem solver who can diagnose issues and adapt quickly.
5. How to Make a Successful Career in CEH: When you are ready to begin your career as an ethical hacker, the question that arises is where to start and how to build a successful, meaningful career in ethical hacking. Plenty of websites, videos, and blogs outline career paths toward the CEH exam and will be helpful in your preparation.
Great Tips for Passing the Certified Ethical Hacker (CEH) Exam
CEH v11 with InfosecTrain
InfosecTrain is a premium training provider for those looking to advance their careers in information technology and cyber security. Our instructors are highly knowledgeable in a variety of fields. We’re a world-class training company with a worldwide reputation for training excellence. To begin your preparations, enroll in InfosecTrain’s CEH certification training course.
Courtesy of Security Journal of the Americas, Jan 2023
Nowadays, there is a lot of new jargon being thrown around: artificial intelligence (AI), computer vision, machine learning, neural networks and now deep learning. Everything from your spellchecker to the stock market is using one or more of these new technologies, so we should expect security providers to use them too. Understanding the differences between each term can help us distinguish between what is real and what is just fancy marketing.
Let’s start with AI; it’s essentially an umbrella term for any machine that an engineer has endowed with some ability to “think”. A basic form of AI is the lowly thermostat from the last century, which has exactly two intelligent thoughts:
1. It is not warm enough in here, I’m going to turn on the heat
2. It is warm enough in here, I’m going to turn off the heat
There have been great advances in AI since the creation of the thermostat. Let’s look at some of these newer terms.
One of the newer forms of AI – and one of the most relevant ones to a security professional – is computer vision. This gives a computer the ability to “see”. Does that mean a computer really has eyes and sees the way we do? No. The computer has simply been trained to recognize the data patterns in photos and videos and classify them: people, cars, animals, etc.
Less than 10 years ago almost all computer vision was artisanal, meaning it was more art than science. If you wanted a computer to look at a camera and detect cars, you had to explain to a computer what a car looked like first, which is easier said than done.
As you can imagine, this kind of hand-crafted description did a poor job of detecting cars. Then came machine learning.
With machine learning, the engineer can give a computer 10,000 pictures of cars and 50,000 pictures of objects other than cars (boats, people, buildings, mailboxes, etc.) and tell it to sort it out on its own. Suddenly, the computer can do a better job of detecting cars than it ever could have with an engineer trying to describe them.
How does machine learning work?
There are a lot of things involved, including vector calculus, tensors and more, but we will take on what is probably the most important and most relatable of them all: neural networks, as they are the most relevant to deep learning. The bottom line is that you can give the computer inputs, like a picture of a car and it comes up with an output, like “that’s a car”.
Scientists were inspired by the biological neural networks that constitute animal brains. Artificial neural networks loosely mimic the same approach as the brain does: our brain consists of billions of neurons, connected by synapses. Neurons generate signals that are transmitted to other neurons, changed and transmitted further. In between the input – you are shown a car – and the output – you say “that’s a car” – there are many layers in the neural network in your brain.
We train our neural networks by passing a lot of information to them, then the neural network understands how to classify objects and can recognize objects of the same class even if they were not present in the training dataset.
Once you’ve seen enough cars, you know what cars look like. You can tell the next car you see is a car, even if you haven’t seen that make and model before.
Today’s artificial neural networks are not nearly as complicated as a human brain. However, machine learning has proven to work better than trying to explain things to a computer.
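For a feel of how this works in code, here is a minimal sketch that trains a small neural network on synthetic feature vectors standing in for “car” versus “not car” examples. The data and network size are invented for illustration; real computer-vision models are vastly larger:

# Tiny neural-network classifier on synthetic data (scikit-learn assumed installed).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
cars = rng.normal(loc=1.0, scale=0.5, size=(200, 8))       # pretend "car" features
not_cars = rng.normal(loc=-1.0, scale=0.5, size=(200, 8))  # pretend "not car" features
X = np.vstack([cars, not_cars])
y = np.array([1] * 200 + [0] * 200)

net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=1000)  # two hidden layers
net.fit(X, y)
sample = rng.normal(loc=1.0, scale=0.5, size=(1, 8))
print(net.predict(sample))          # very likely [1], i.e. "car"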
The value for a security professional is obvious: if a surveillance system transmits images to an AI platform with machine learning, these neural networks can now recognize a specific car, person or anomaly on their property, all without having to hire a team of guards to stare at empty parking lots or office hallways, waiting for an event to occur. Unlike a team of guards, the system never blinks, never gets tired or distracted.
Deep learning is when you have enough layers in your neural network that you can call it “deep”. At some point, making networks deeper doesn’t help, but still costs more in terms of computation, memory, and training time.
Here’s an analogy: is an 8’ pool in your backyard a “deep” pool? Your realtor is going to say so, as compared to most pools. The fact that it’s a little deeper has some value to buyers. Does that mean if you put in an 80’ deep pool that you will see an even greater return? Highly unlikely.
There is no hard and fast definition of “deep” in a neural network, even just three layers might count. So, anyone who says authoritatively that they are selling “deep learning” is probably from the company’s marketing department.
AI still has a lot of growing up to do. By way of example, consider a fight in the parking lot. Today, an AI platform can be trained to detect a “fight” with enough video footage, but it might confuse it with dancing, kids roughhousing, etc. Likewise, it takes another layer of sophistication for the computer to recognize that the combatants are two small children and that it is therefore a parent’s job to resolve.
That said, there’s a lot we can do with the AI we have right now: we can set alerts for bad actors so you know when they enter your property, license plate detection so you can identify the culprits damaging your property, and alerts for anomalies such as a mob gathering before things get out of control. All in real time. We also have something better than handing the reins entirely over to a machine: remote guarding.
With remote guarding, otherwise known as live monitoring, we can smoothly transition from AI to human intelligence: a guard can receive an alert of someone at the front door after hours and distinguish between someone fumbling for their keys and using a crowbar to open the door.
Using IP speakers, the guard can then talk through the speaker to the person with the crowbar, let them know they are on video surveillance and that police/security has been called.
Assuming they attempt to run, a human can track them from camera to camera, capture what they look like, what make and model of car they’re climbing into, capture the license plate and add their face to the bad actor list so an alert will go out if they return. That’s why remote guarding has up to a 93% deterrence rate and up to a 95% apprehension rate. It can also save up to 40% on a company’s guard costs, as well as reduce liability and losses.
Deep learning or not, AI that supports remote guarding is a moon landing ahead of the largely forensic measure surveillance was for the previous 30 years. It moves video surveillance from “let’s see what caused all that damage last night” to preventing the damage from occurring in the first place. That’s not jargon.
In the fast-paced world of technology, artificial intelligence (AI) is making significant strides. However, this advancement comes with a hefty price tag in terms of energy consumption. Big Tech companies like Google, Microsoft, Meta, and Apple are now consuming more electricity than entire nations.
The @WSJ breaks down a complex topic. An AI-driven economy provides tremendous opportunity but if we don’t invest in the natural gas infrastructure to support it, consumers may pay the price. https://t.co/HyWAQSceKH
— Williams (@WilliamsUpdates) August 12, 2024
Data centers are gobbling up growing amounts of electricity & water in CA, in part due to artificial intelligence, even as the state struggles to meet its clean energy goals and supply enough water. Important story by my @latimes colleague @MelodyPetersen: https://t.co/6gW3Ds733s
— Sammy Roth (@Sammy_Roth) August 12, 2024
Google and Microsoft each consumed 24 terawatt-hours (TWh) of electricity in 2023, surpassing the energy use of countries with populations in the millions. Google’s data centers alone accounted for 10% of global data center electricity consumption, with a 17% increase from the previous year. Microsoft’s electricity use doubled from 11 TWh in 2020 to 24 TWh by 2023.
The push towards AI is a significant factor in this increase. Training AI models requires advanced cooling systems and increased power to handle complex computations.
Is AI worth the price if it’s hiking our power bills? 💡 With rising costs & hotter summers, who's footing the bill?
— Center for Progressive Reform (@CPRBlog) August 12, 2024
As Google and Microsoft have been at the forefront of the generative AI race, their power consumption reflects these technological advancements.
Meta and Apple have shown comparatively lower electricity use, but Meta has been rapidly increasing its consumption as it scales up its data operations. This trend highlights the growing importance and complexity of AI technologies, requiring vast resources to support their development and implementation. As technology companies continue to expand their AI capabilities, their energy needs will likely keep escalating.
The data raises questions about the long-term sustainability of such growth and the environmental impact associated with it. Balancing technological advancement with sustainable energy consumption is a challenge that needs to be addressed. In California, the frenzy of data center construction could delay the state’s transition away from fossil fuels and raise electric bills for everyone else.
The data centers’ insatiable appetite for electricity also increases the risk of blackouts.
AI advancements driving energy concerns
California is on the verge of not having enough power, ranking 49th out of the 50 states in grid resilience.
“California is working itself into a precarious position,” said Thomas Popik, president of the Foundation for Resilient Societies. The state has already extended the lives of PG&E’s Diablo Canyon nuclear plant and some natural gas-fueled plants to avoid blackouts on sweltering days. Data centers have long been significant power consumers, but the specialized chips required for generative AI use far more electricity and water than those for typical internet searches.
A ChatGPT-powered search consumes 10 times the power of a Google search without AI, according to the International Energy Agency. Santa Clara has become a top market for data centers, partly because its electric rates are 40% lower than those charged by PG&E. However, the lower rates come with a higher cost to the climate.
Silicon Valley Power, Santa Clara’s utility, emits more greenhouse gas than the average California electric utility. Santa Clara is spending heavily on transmission lines and other infrastructure to accommodate this increasing power use. Increases in electric rates have also been steep, jumping significantly over the past two years due to new infrastructure costs and a spike in natural gas prices.
Former chair of the state’s public utilities commission Loretta Lynch noted that big commercial customers like data centers pay lower rates for electricity across the state, which means when new infrastructure is built, residential customers bear more of the cost. “Why aren’t data centers paying their fair share for infrastructure? That’s my question,” she asked.
Despite the strain on the grid, PG&E remains optimistic about the increased demand. “I think we will definitely be one of the big ancillary winners of the demand growth for data centers,” said Patricia Poppe, PG&E’s chief executive, during a conference call with analysts. She sees potential for profits as the company continues to attract tech giants to build data centers in the region.
Understanding Cellular Modem and Router Specifications
Considerations for Optimal Fixed and Mobile Connectivity
In today's digitally connected world, the role of cellular modems and routers is more critical than ever. These devices enable seamless internet access across various environments, from mobile applications like vehicles and remote offices to fixed locations such as homes and businesses. Understanding the specifications of these devices is essential for optimizing both performance and security.
The Evolution of Cellular Modems and Routers
Modems: Translating Signals
The term "modem" comes from "modulator" and "demodulator." Early modems allowed computers to communicate over analog telephone lines, translating digital signals into a format suitable for these networks. Similarly, cellular modems convert wireless signals from cellular towers into digital data that devices can process.
Modern cellular modems, especially those supporting 5G technology, have evolved significantly. They can handle multiple radio channels simultaneously and use several antennas to achieve high-speed data transmission, supporting gigabits per second of throughput. This advancement has greatly enhanced mobile internet reliability and speed.
Routers: The Gateway to Connectivity
Routers play a pivotal role in networking by managing data traffic between devices and networks. In both fixed and mobile applications, routers facilitate the distribution of internet connectivity from cellular modems to a network of devices. They include essential features such as security protocols, data routing, and management interfaces that ensure efficient and secure data flow.
Key Specifications of Cellular Modems and Routers
Generations and Performance Categories
The performance of cellular modems is influenced by their generation and performance category. The 3GPP (Third Generation Partnership Project) establishes standards for cellular technologies, including 3G, 4G/LTE, and 5G. Each new release introduces enhanced capabilities:
- LTE (3GPP Release 8): Supports Category 3 modems with peak theoretical speeds of 100 Mbps down and 50 Mbps up.
- LTE-Advanced (Release 10): Introduced carrier aggregation, allowing for higher data throughput by combining multiple frequency bands.
- 5G (Release 15 and beyond): Offers groundbreaking speeds and capacities, with routers like the AirLink XR80 supporting up to 4.14 Gbps throughput.
Understanding these categories helps gauge the potential performance improvements in real-world applications, such as comparing Category 6 and Category 18 modems.
Theoretical Performance vs. Real-World Performance
While specification sheets often highlight the peak theoretical performance of modems, actual performance can differ due to network congestion, signal strength, and interference. For example, while a modem might support theoretical speeds of 1.2 Gbps, the real-world speeds will vary based on environmental factors.
However, understanding the relative improvements between different categories can help set realistic expectations. A Category 6 modem may consistently outperform a Category 4 modem under similar conditions, highlighting the importance of selecting the right modem based on use-case needs.
Frequency Bands and Carrier Compatibility
Frequency bands are the channels through which cellular networks communicate with devices. A modem's ability to access multiple bands improves coverage and performance, especially when switching network providers or traveling across different regions.
Modern devices should support newer bands like Band 71 (T-Mobile) and Band 14 (AT&T FirstNet) to ensure comprehensive coverage. Ensuring compatibility with the necessary bands is crucial when selecting a modem for your specific carrier and location.
Carrier aggregation technology allows modems to combine multiple frequency bands to enhance data throughput and reliability. This feature is essential for maximizing performance in areas with weak signals or high congestion:
- LTE-Advanced modems: Support carrier aggregation, allowing the use of multiple channels for data transmission.
- 5G modems: Utilize advanced carrier aggregation to leverage multiple 5G bands for improved performance.
For instance, a modem supporting 5x carrier aggregation can combine up to five channels, significantly boosting performance in high-traffic areas. For example, the Cradlepoint W1855 wideband adapter supports up to 5 aggregated bands.
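A back-of-the-envelope sketch in Python shows the basic idea: the theoretical aggregate is roughly the sum of the per-carrier rates. The individual rates below are invented for illustration, not drawn from any specification sheet:

# Illustrative carrier-aggregation arithmetic; all rates are made up.
component_carriers_mbps = [400, 300, 250, 200, 150]   # five aggregated channels
print(f"theoretical aggregate: {sum(component_carriers_mbps)} Mbps")  # 1300 Mbps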
MIMO Technology and Antennas
Multiple Input Multiple Output (MIMO)
MIMO technology improves data speed and signal quality by utilizing multiple antennas to send and receive data simultaneously. Devices with configurations like 2x2 or 4x4 MIMO offer superior performance:
- 4x4 MIMO: Employs four antennas to enhance speed and signal quality, especially critical for 5G devices using high-frequency mmWave bands.
This technology is vital for achieving optimal performance in environments with weak signals or high interference.
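The benefit of extra spatial streams can be sketched with an idealized Shannon capacity model in which each MIMO stream behaves like an independent channel. Real radios fall well short of this ideal, and the bandwidth and signal-to-noise figures below are assumptions:

# Idealized MIMO scaling sketch; real-world throughput is considerably lower.
import math

bandwidth_hz = 100e6            # assumed 100 MHz channel
snr_linear = 10 ** (20 / 10)    # assumed 20 dB signal-to-noise ratio

for streams in (1, 2, 4):       # SISO, 2x2 MIMO, 4x4 MIMO
    capacity_bps = streams * bandwidth_hz * math.log2(1 + snr_linear)
    print(f"{streams} stream(s): ~{capacity_bps / 1e6:.0f} Mbps")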
External Antenna Ports
External antennas can significantly enhance signal reception in areas with poor coverage. While not all devices feature external antenna ports, those that do offer valuable opportunities to improve connectivity:
- Advanced Mobile Routers and Hotspots: Often include external antenna ports, enabling users to connect antennas and boost connection stability and speed.
Router Specifications for Connectivity
While modems are integral for accessing cellular networks, routers facilitate data distribution and management within a network. Key router specifications include:
Wi-Fi standards dictate the speed and efficiency of wireless connectivity within a network:
- 802.11ac (Wi-Fi 5): Supports up to 3.5 Gbps, suitable for most applications.
- 802.11ax (Wi-Fi 6): Offers up to 9.6 Gbps, improved performance in dense environments, and better energy efficiency.
- 802.11be (Wi-Fi 7): The next generation of Wi-Fi technology, offering up to 46 Gbps with Extremely High Throughput (EHT).
Selecting a router with the appropriate Wi-Fi standard ensures compatibility with your device ecosystem and anticipated data needs.
Bandwidth and Channel Width
Routers specify bandwidth capacity and channel width, impacting data transmission rates and network stability:
- Dual-Band Routers: Operate on 2.4 GHz and 5 GHz bands, providing flexibility and reduced interference.
- Tri-Band Routers: Include an additional 5 GHz band for higher data throughput and device capacity. Note that tri-band designs are not usually found in cellular routers; it is more common to see dual Wi-Fi radios, which offer two sets of dual-band Wi-Fi connectivity.
Security is paramount in both fixed and mobile applications, requiring routers to implement robust measures:
- Encryption: WPA3 offers improved protection against unauthorized access.
- VPN Support: Enables secure remote access to network resources.
- Firewall Capabilities: Provide additional layers of security to prevent intrusions.
Quality of Service (QoS)
QoS settings allow users to prioritize network traffic, ensuring that critical applications receive the necessary bandwidth for optimal performance:
- Application Prioritization: Ensures seamless streaming, gaming, and video conferencing experiences.
- Device Prioritization: Allocates bandwidth based on device importance.
Carrier-Specific Devices and Cross-Carrier Compatibility
Many devices are optimized for specific carriers and may support additional bands used by other carriers. However, these bands might not be enabled, limiting performance. Higher-end cellular routers and flagship smartphones generally offer cross-carrier compatibility, allowing for easier network switching.
5G Standalone Mode and Carrier Aggregation
As 5G technology matures, "Standalone" (SA) mode is gaining prominence. Unlike "Non-Standalone" (NSA) mode, which relies on 4G infrastructure, SA mode operates entirely on 5G networks:
- SA Mode: Offers lower latency and improved performance without the need for simultaneous 4G connections.
- Carrier Aggregation in SA Mode: Fully utilizes multiple 5G bands for enhanced performance, particularly beneficial in regions with diverse 5G deployments.
Future-Proofing Your Modem and Router Choice
Choosing the right cellular modem and router is crucial as mobile internet becomes more integral to daily life. Understanding key specifications such as generation, frequency bands, carrier aggregation, and MIMO capabilities will help ensure the selected device meets current and future needs.
Staying informed about technological advancements ensures you can leverage new capabilities as they emerge, whether you're a digital nomad, business traveler, or someone valuing reliable internet access.
About MCA and Our CNS Team
MCA is one of the largest and most trusted integrators in the United States, offering world-class voice, data, and security solutions that enhance the quality, safety, and productivity of customers, operations, and lives. More than 65,000 customers trust MCA to provide carefully researched solutions for a safe, secure, and more efficient workplace.
Our Cellular Networking Solutions (CNS) team (formerly known as USAT) is made up of certified experts in designing and deploying fixed and mobile wireless data connectivity solutions for public and private enterprises nationwide - complete with implementation, training, proof of concept (POC), system auditing, and on-site RF surveying services with optional engineering maintenance contracts.
Our extensive Cradlepoint catalog of world-class routers, gateways, and software designed for remote monitoring and management in even the harshest environments allows us to deliver a full suite of reliable technologies capped with a service-first approach.
The GSA, which leases approximately 1,400 data centers located in large federal facilities to many government agencies, is under mandate to cut energy consumption in those facilities by 30 percent. In October 2011, the GSA’s STAR database for Automated Data Processing (ADP) space found that nearly half of data center energy is used for non-IT loads, such as cooling and power conditioning. The U.S. Environmental Protection Agency (EPA), in a 2007 Energy Star program report, predicted that GSA can expect data center energy use to grow at an annual rate of 15 percent, which represents a doubling of energy consumption every five years.
The U.S. Department of Energy (DOE) and the EPA recently sponsored studies that concluded that energy use can be reduced by 25 percent through the implementation of best practices and commercially available technologies. Clearly, data centers are a prime target for such energy-saving measures. Unfortunately, these centers are typically ‘buried’ in the large facilities where they’re housed, giving GSA limited visibility into the specifics of energy consumption. Access to such in-depth data would be invaluable in driving data center optimization. The GSA study, “Wireless Sensor Network for Improving the Energy Efficiency of Data Centers,” conducted by the Lawrence Berkeley National Laboratory in March 2012, concluded that by achieving greater energy efficiency, the GSA stands to gain critical flexibility in future data center expansion planning through the reduction or elimination of additional power and cooling demands.
WIRELESS SENSOR TECHNOLOGY
Before GSA could proceed toward meeting its mandate to lower its energy and the associated greenhouse gas emissions, an energy consumption baseline had to be established. The GSA study’s authors, Rod Mahdavi and William Tschudi of Lawrence Berkeley National Laboratory (LBNL), reported that wireless sensor networking technology was chosen over wired sensors because of the high cost and complexity associated with hardwiring sensors in existing facilities. Plus, wireless systems are easily expandable and can be redeployed as needed for IT equipment refreshes, along with allowing them to meet future data center growth requirements.
The GSA, DOE, LBNL, and technology provider, SynapSense collaborated to evaluate the full potential that wireless sensor networking technology offers data center operators. To this end, GSA turned to its Green Proving Ground program (GPG) to conduct the study. Much as its name suggests, the GPG tests the efficacy of groundbreaking sustainable building technologies. The GPG program was created to draw upon emerging technologies in the private sector and drive improved environmental performance in federal buildings, using the GSA’s extensive real estate portfolio as a proving ground to help promote the implementation of exciting new breakthroughs.
The GPG study was aimed at validating how well the SynapSense Data Center Infrastructure Management Platform would provide data center operators with detailed, real-time measurements of environmental factors and power consumption. In this way, a performance baseline could be established, allowing operators to uncover areas of less-than-optimal performance, which in turn would present opportunities to reduce overall energy consumption in a way that is both cost-effective and facility-friendly.
The U.S. Department of Agriculture’s (USDA) National Information Technology Center (NITC) Data Center in St. Louis, was selected as a demonstration facility for the SynapSense wireless monitoring solution because of NITC’s determination to gain greater visibility into the operational efficiencies of its data center. The NITC team was intent on making improvements in both the energy efficiency and the overall resiliency of their facility. The team had learned about the SynapSense Data Center Optimization Platform during a presentation by Dale Sartor of Lawrence Berkeley National Labs at a 7x24 Exchange conference. This prompted the subsequent study, which focused on identifying ways to improve the efficiency of data center cooling.
Once installed, the network of wireless sensors measured floor-to-ceiling conditions, including humidity and temperature at multiple elevations, raised-floor differential pressure, and UPS and CRAC cooling system power, which were used to calculate energy usage. Eighty humidity sensors measured conditions at the top of the cold-air inlet side of each rack. Sixteen pressure sensors monitored the sub-floor-to-room differential pressure in the cold aisles. Sensors were also installed on the computer room air conditioner (CRAC) units to monitor supply and return air temperatures, along with relative humidity.
SynapSense’s comprehensive, integrated data management software analyzed the collected data, allowing data center operators to easily gauge facility performance in real time. The data were then fed into the DOE’s assessment tool to calculate power usage effectiveness (PUE) from accurate power measurements.
According to Mahdavi and Tschudi’s report, it was now possible to analyze the effectiveness of recirculation and bypass air mixing, under floor air pressure, as well as cooling system efficiency. Operators had the data they needed to determine if these critical systems properly adhered to thermal-operational ranges published by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) for IT equipment. It was also possible to set up alerts in the event ranges were exceeded.
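In code, such an alert boils down to a simple range check. The sketch below uses the commonly cited ASHRAE recommended inlet-temperature range; the rack names and readings are invented for illustration:

# Toy threshold alert over rack inlet temperatures (readings invented).
ASHRAE_RECOMMENDED_F = (64.4, 80.6)   # commonly cited recommended inlet range

readings_f = {"rack-07": 62.1, "rack-12": 75.3, "rack-21": 83.9}
low, high = ASHRAE_RECOMMENDED_F
for rack, temp in readings_f.items():
    if not (low <= temp <= high):
        print(f"ALERT: {rack} inlet at {temp}F is outside {low}-{high}F")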
The data collected from the wireless sensors made it possible to analyze temperature, relative humidity, and dew point levels down to the individual racks. This analysis enabled operators to identify overcooling, overheating, and air mixing conditions in specific racks or aisles over time. Aggregation of real-time data from hundreds of sensors was then used to produce thermal imaging maps, which provided a visual snapshot of the environmental conditions in the data center. This made it far easier to rapidly pinpoint areas of concern.
The comprehensive data and resulting analysis made possible by the 600 SynapSense wireless sensors enabled the evaluation team to understand operating conditions and identify problem areas. Armed with this vital information, they could now explore ways to achieve desired efficiencies and energy savings.
After gaining authorization from USDA management, the team began implementing the data center changes called for by the in-depth analysis. One of the key findings was overcooling of the data center. Further investigation revealed that the high volume of air and low supply air temperatures the CRAC units delivered to the racks were at the heart of the overcooling problem. The wireless system revealed that the vast majority of the racks were being operated below the ASHRAE-recommended temperature range.
Mahdavi and Tschudi’s report notes that maintaining temperature within the industry standard range is vital in order to minimize the possibility of thermal shock damage to equipment should a cooling outage occur that results in a >9.0°F elevation in temperature per hour. Running systems above or below the ASHRAE range for extended periods also increases the danger of equipment malfunction or failure. Beyond the risk to equipment, this practice also represents an inefficient use of cooling energy.
In dealing with the overcooling problem, the evaluation team tested several approaches and settled on a solution that shut down three CRAC units, then sealed them off so cold air couldn’t seep back into the data center through the pressurized, raised floor. Return air temperature set points were elevated and return air relative humidity set points lowered to the recommended settings. Once these steps were complete, the team was able to shut down the dehumidification and reheat modes on the CRACs, while demonstrating with real-time metrics that the facility met ASHRAE-recommended dew point range.
While overcooling was the predominant problem, the wireless sensor data also revealed some hot spots. Hot discharge air from the IT equipment was re-circulating back through the racks. This problem was traced back to a lack of blanking panels, which was easy to remedy.
Data gathered from wireless sensors that measure inlet air temperature were used to rebalance the cooling system. This was accomplished by examining the number and location of perforated floor tiles in the cold aisles and rebalancing the quantity and location of perforated and solid tiles where needed to match the IT cooling requirements of each rack. Solid tiles replaced 45 perforated floor tiles, which helped solve this problem.
Once these procedures were completed, the evaluation team spent the next two hours capturing and analyzing a new set of data to determine the impact of the changes made. Additional adjustments resulted in further fine-tuning of the systems. The entire implementation phase of this project was completed in less than a day. (See LiveImaging thermal, pressure, and humidity maps that detail the results of the optimization.)
COOLING SYSTEMS OPTIMIZED
The first set of results assessing the use of wireless sensor technology in data centers showed significant energy savings. Following the installation of SynapSense wireless sensor networking technology, subsequent data analysis and the resulting cooling system modifications, the NITC data center’s facilities staff was able to reduce cooling loads and achieve significant energy savings.
Mahdavi and Tschudi’s report concludes that the NITC data center was able to reduce cooling loads by 48 percent, thereby decreasing overall data center energy use by 17 percent. This represented an annual savings of 657 megawatt-hours (MWh). The report also notes a corresponding reduction in PUE, from 1.83 to 1.51.
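The arithmetic behind those figures is easy to sketch. Only the two PUE values come from the report; the IT load below is an assumed round number for illustration:

# PUE = total facility energy / IT equipment energy.
it_load_mwh = 2000.0                   # assumed annual IT energy use
before_total = it_load_mwh * 1.83      # facility energy at the old PUE
after_total = it_load_mwh * 1.51       # facility energy at the improved PUE
print(f"annual savings: {before_total - after_total:.0f} MWh")  # 640 MWh at this load

At an IT load in that range, the simple model lands close to the 657 MWh of annual savings the report documents.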
The GPG study has confirmed the cost effectiveness of this innovative wireless technology. Despite the fact that this NITC facility already had some of the lowest energy costs of all of GSA’s facilities, calculations show that it will take less than three and a half years to offset costs of vendor provided hardware, software, and labor. The GSA report further concludes that applying this technology across the agency’s portfolio of tenant data centers would yield a potential savings of $61 million annually.
The substantial reduction in cooling load and CO2 emissions achieved as a result of GSA’s NITC data center study clearly demonstrates the efficacy of wireless sensor networking technology.
"By most standards, this data center is an efficient facility. The fact that a wireless sensor network helped it significantly reduce its energy profile speaks volumes for the technology," said Ron Jones, facility manager, Office of the Chief Information Officer, USDA.
The GSA study validates that wireless sensor networking technology can be instrumental in helping to achieve tremendous cost and energy savings, even at data center facilities which are well-designed and well-managed. The findings reported by Mahdavi and Tschudi suggest that widespread adoption of this technology offers significant energy saving and environmental benefits that the broader data center industry simply can’t ignore.
The Command Line Interface, or CLI, is an application that is operated through an ASCII terminal. The user has greater configuration flexibility by entering the commands directly. The CLI is a basic command-line interpreter with command-line completion, inline syntax help, and prior command recall. The CLI can be accessed from a console terminal connected to a console port or through an SSH/Telnet session. A switch can be configured and maintained by entering commands into the CLI.
The CLI is the choice of most engineers for configuring and maintaining switches. It allows for copying and pasting parts, or all, of a configuration to re-use on other systems. The CLI also provides a basis for scripting and automation. The following is an example taken from Chapter 4 of commands entered at a command line:
OS10(conf)#interface mgmt 1/1/1
OS10(conf-if-ma-1/1/1)#no ip address dhcp
OS10(conf-if-ma-1/1/1)#ip address 192.168.1.10/24
Terminal emulators are available for Windows or Linux that run on a PC. Terminal emulators provide a method for entering CLI commands. A terminal emulator example is provided in Appendix A.
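As a hint of the scripting and automation the CLI enables, the hedged sketch below replays the Chapter 4 commands over SSH with the Python Netmiko library. The device address and credentials are placeholders, and support for the dell_os10 platform string is assumed in your Netmiko version:

# Hypothetical automation sketch for the OS10 commands shown above.
from netmiko import ConnectHandler

switch = ConnectHandler(
    device_type="dell_os10",        # assumed platform string; verify for your version
    host="198.51.100.20",           # placeholder address of the switch
    username="admin",
    password="example-password",
)
print(switch.send_config_set([
    "interface mgmt 1/1/1",
    "no ip address dhcp",
    "ip address 192.168.1.10/24",   # the same static address as the example above
]))
switch.disconnect()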
Security researchers from Trend Micro obtained samples and performed a detailed analysis of the new Umbreon Linux rootkit that targets x86 and ARM computers.
Umbreon Is Efficient Against a Variety of Devices
The Umbreon rootkit is a fairly new malware that has been developed for Linux systems built on the x86 and ARM architectures. This makes embedded devices vulnerable as well; many Internet of Things (IoT) appliances are affected. Development of the rootkit began in early 2015, but the detailed analysis has only now been published.
The malware is installed manually on victim devices by the attackers. It gives the criminals full remote control of the infected hosts.
Umbreon is classified as a user-mode rootkit, a persistent threat whose stealth features make it hard for security software to detect. The main purpose of the malware is to hide from users, forensic and system tools, and system administrators. Such sophisticated rootkits can open backdoors and communicate with remote C&C servers; the data transfer may include instructions for executing commands or uploads of sensitive information.
Umbreon can hook functions from main libraries that are used by applications to run important operations such as writing or reading files. The rootkit can potentially spy on the victim machine without obstruction. The extracted samples were demonstrated to run on x86, x86-64 and ARM architectures.
The actual code of the malware is written in C making it very portable and easy to port to other systems and architectures. Upon infection, the rootkit creates a valid Linux user that the criminals can utilize when using the backdoor. Access is made using the standard authentication methods such as PAM modules and the SSH protocol.
The crafted user has a special group ID (GID) that Umbreon checks when the attacker performs a login. Administrators cannot see the new user entry in /etc/passwd because the relevant libc function is hooked by the rootkit.
The backdoor component itself is named Espeon, and it can capture all TCP traffic that reaches the main Ethernet interface of the victim system. When it receives a packet crafted by the criminal, it connects back to the source IP. Three specific values in the packet have been identified: the sequence number, the acknowledgment number, and the IP identification field.
Umbreon disguises itself from system administrators and tools by manipulating environment variables, hooking up to libcap functions and imitation of the glibc library.
How To Remove the Umbreon Linux Rootkit
It is possible to attempt removal of Umbreon as it is a user-mode malware. Boot the system using a Live CD and follow these instructions:
- Mount the partition where the /usr directory is located; write privileges are required.
- Backup all the files before making any changes.
- Remove the file /etc/ld.so.<random>.
- Remove the directory /usr/lib/libc.so.<random>.
- Restore the attributes of the files /usr/share/libc.so.<random>.*.so and remove them as well.
- Patch the loader library to use /etc/ld.so.preload again.
Note that the file names vary as Umbreon generates them randomly. If you want to get acquainted with the detailed analysis, check Trend Micro’s blog post.
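As a small triage aid, the sketch below lists whatever /etc/ld.so.preload currently loads, since unexpected entries there are a classic sign of a user-mode rootkit. Bear in mind that an active rootkit can hook the very calls this script relies on, so run it from a trusted, preferably statically linked environment where possible:

# Heuristic check only; a live rootkit may hide entries from this script.
import os

preload = "/etc/ld.so.preload"
if os.path.exists(preload):
    with open(preload) as fh:
        entries = [line.strip() for line in fh if line.strip()]
    for entry in entries:
        print(f"preloaded library: {entry}  <- verify this is expected")
else:
    print("no /etc/ld.so.preload present")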
The COVID-19 pandemic changed the way we work. As remote working is becoming the new normal, there has been an increase in the number of cyber attacks and load on IT teams. In this article, we will be discussing the importance of machine learning in cyber security and how it helps to detect threats and predict suspicious behavior in different security events.
According to the Swiss National Cyber Security Center, the number of cyber attacks increased by 300% during the pandemic. Furthermore, IT Governance identified 5 billion breached records in 2021, and on the day of writing, at least 10 million records had already been compromised.
At the moment, there are millions of cyber attacks happening worldwide.
Cybersecurity is an infinite game between threat actors and security professionals. As we improve our detection of one cyber threat, a new attack vector emerges. With so many different types of cyber attacks, maintaining accurate and precise security analysis is challenging for most businesses.
Learn more about IT Security, what impact it has on your business, and how to protect your business against malicious events – What is IT Security? – Definition and measures!
The amount of data that is being generated is large and complex. According to the Data Never Sleeps 4.0 report from 2016, over 18 TBs of data is being generated every minute. Today, that number is even higher.
These data are generated by countless IP-based devices and software systems; collectively, we refer to them as big data. We, as human beings, cannot analyze this amount of data by ourselves, and we are unable to easily predict potential security threats from it. Building models by hand is labor-intensive. That wouldn’t work.
So, we need some help. That’s where machine learning (ML) comes into place.
Before we start discussing the importance of machine learning algorithms, let’s start with the basics.
The Basics of Machine Learning Capabilities
For those of you who are new to the topic, machine learning is not a new trend. The concept dates back to the 1940s, but it took time to develop. In the early 1950s, Arthur Samuel, an American scientist, developed the first program that used machine learning: a checkers-playing game.
The program used machine learning to learn to play better than its own author, which created a genuine wow effect at the time.
In 1968, Arthur C. Clarke, a British scientist, predicted our life today. He stated that we will eventually work with machines and software that could match human capabilities through artificial intelligence (AI).
He was right.
Today, machine learning (ML) is used in different industries to gain business intelligence. You can see it in self-driving cars, speech and image recognition, ads recommendation, virtual assistants, video surveillance, and many more.
For example, Netflix uses artificial intelligence and machine learning to provide their users with an appropriate movie or series suggestions. We have all experienced this, haven’t we? Google uses it for Google translation, traffic alerts using Google Maps, etc. Facebook uses it for facial recognition systems and identifying humans.
AWS provides a solution called Amazon SageMaker to build, train and deploy ML models for any business case.
The list is huge.
What is Machine Learning?
Machine learning (ML) is a type of artificial intelligence (AI). Furthermore, deep learning is a subset of machine learning and uses algorithms to analyze complex data. It draws conclusions based on the data similar to how a human would do it.
It can’t work alone. It requires data. It can only analyze and predict behavior based on the data it analyzes. Applying that mechanism to cybersecurity systems would mean analyzing data from security incidents, learning from it, and then applying the solution to a new attack to prevent it.
When it comes to using machine learning in cybersecurity, there is no specific security algorithm to do so. Machine learning is just a toolset that can be applied to almost any industry. The only different thing is the data that is being analyzed.
The raw data needs to be converted to a vector space model and then used by machine learning to analyze it and prevent security incidents.
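As a minimal sketch of what that conversion looks like, the snippet below turns a few invented log lines into TF-IDF vectors with scikit-learn:

# Converting raw security events into a vector space model (scikit-learn assumed).
from sklearn.feature_extraction.text import TfidfVectorizer

events = [
    "user admin login success from 10.0.0.5",
    "user admin login failure from 203.0.113.7",
    "user guest login failure from 203.0.113.7",
]
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(events)   # each event becomes a numeric vector
print(X.shape)                          # (3, number of distinct terms)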
Many security prevention solutions use machine learning. The goal is to fight against advanced threats that are occurring every minute. You can read more on how we can help you stay protected Malware protection with Hornetsecurity Advanced Threat Protection.
For example, Google uses machine learning to analyze and prevent security threats against Android endpoints.
Microsoft Defender Advanced Threat Protection (ATP) uses machine learning to analyze trillions of data every day and finds 5 billion new threats every month. You can read more here: Microsoft Defender uses ML.NET to stop malware.
Some enterprise companies use AI and machine learning to protect their infrastructure from potential incidents that could happen from BYOD (Bring Your Own Device) and CYOD (Choose Your Own Device).
Types of Machine Learning
Machine learning uses three types of learning: supervised, unsupervised, and semi-supervised learning.
Supervised learning uses data samples and labeling to predict potential malware behavior. For example, machine learning would analyze network traffic and mark it as malicious based on the learning from the existing datasets.
That way, ML can learn how traffic went from normal to malicious. In other words, it would build a pattern to predict malicious network traffic.
With unsupervised learning, no labeling is used. ML works from the raw data samples alone and learns patterns of behavior on its own. For example, machine learning would analyze network traffic over a period of time and learn for itself which traffic is normal and which is malicious.
There is also semi-supervised learning, where only some of the data are labeled. In other words, semi-supervised learning blends supervised and unsupervised learning.
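The contrast between the first two approaches can be sketched on synthetic “network flow” features such as packets per second and mean packet size. Everything below, from the numbers to the model choices, is invented for illustration:

# Supervised vs. unsupervised learning on synthetic traffic features.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(1)
normal = rng.normal([100, 500], [20, 80], size=(500, 2))   # benign-looking flows
attack = rng.normal([900, 60], [50, 10], size=(50, 2))     # flood-like flows

# Supervised: learn from labeled examples of both classes.
X = np.vstack([normal, attack])
y = np.array([0] * 500 + [1] * 50)
clf = RandomForestClassifier(random_state=1).fit(X, y)

# Unsupervised: model "normal" only and flag outliers (predicted as -1).
detector = IsolationForest(contamination=0.05, random_state=1).fit(normal)

probe = [[950, 55]]                      # a suspicious new flow
print(clf.predict(probe))                # likely [1]: attack
print(detector.predict(probe))           # likely [-1]: anomaly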
Machine Learning Use Cases in Cybersecurity
There are many use cases where machine learning helps in preventing cybersecurity incidents. As time goes on, the number of use cases is growing.
One of the use cases is detecting and preventing DDoS attacks. ML algorithm can be trained to analyze a large amount of traffic between different endpoints and predict different DDoS attacks (applications, protocols, and volumetric attacks) and botnets.
In 2021, there were more than 9 million DDoS attacks worldwide. A DDoS attack has one goal: to push the system into slow response or no response at all, in other words, downtime. ML can detect and stop it.
The second use case is to fight against malware. This includes trojans, spyware, ransomware, backdoors, adware, and others. ML algorithm can be trained to help antiviruses in fighting unknown cyber threats. According to Statista research, in 2021, 5.4 billion malware attacks were detected.
Phishing attacks are one of the most common attacks used to steal confidential data and get into corporate or government institutions. It is shared via scam emails. For example, Google (Gmail) uses machine learning to analyze data in real-time and identify and prevent malicious behavior of more than 100 million phishing emails.
We published an article to help you understand and prevent phishing in detail. You can read it here: Phishing – The danger of malicious phishing emails.
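At its core, an ML phishing filter is a text classifier. The toy sketch below trains one on a handful of invented messages; production systems learn from millions of labeled emails and far richer features:

# Toy phishing classifier; training data is invented and far too small for real use.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "your account is suspended click here to verify your password",
    "urgent invoice attached confirm payment details now",
    "team meeting moved to 3pm see agenda attached",
    "quarterly report draft ready for your review",
]
labels = [1, 1, 0, 0]   # 1 = phishing, 0 = legitimate

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)
print(model.predict(["verify your password to avoid suspension"]))  # likely [1]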
The third use case is about protecting against application attacks. Applications are used by end users and are prone to different layer 7 attacks. According to Cloudflare, they handle 32 million HTTP requests per second. Web Application Firewall (WAF), in combination with machine learning, can be trained to detect anomalies in HTTP/S, SQL, and XSS attacks.
Microsoft, AWS, Google Cloud, FortiGate, and many other vendors offer WAF as part of their portfolio.
The fundamental security principles teach us to implement multi-factor authentication. This includes something we know (e.g. password), something we have (e.g. USB token), and something we are (e.g. fingerprint, facial detection). AI and ML combined with deep learning play a vital role in biometric applications.
ML helps to perform matching tasks to quickly find the relevant data.
Security Operation Centers (SOC) take care of monitoring, detecting, and responding to different cyber security threats. One of the challenges the SOC Team had was dealing with a large amount of data. Thanks to machine learning, SOC Teams can more efficiently automate and analyze incidents and be more proactive.
The list of use cases is bigger. And it wouldn’t work without having machine learning as part of cybersecurity.
Cybersecurity is an infinite game. As you read this article, millions of different security threats are occurring worldwide. At the same time, new critical threats are being developed for which adequate protection does not yet exist.
Millions of data are being generated every minute. We, as human beings, can’t do all the analysis, maintenance, and prevention. We need help.
Thanks to machine learning and its toolset we can automate things. Machine learning can help us perform deep analysis, predict behavior and uncover threats. It does this by analyzing the dataset that is being generated by different devices and software.
ML is learning from data. It can help us analyze and predict malicious activities such as malware, phishing, application attacks, authentication attacks, and much more. Many companies develop their system with machine learning in place.
Our mission is to keep your system and data safe. We at Hornetsecurity want to ensure that your data is untouched and complies with security principles (confidentiality, integrity, availability).
Throughout 2022, we analyzed over 25,000,000,000 emails and found that 40.5% of emails were unwanted. We created a report that gives you an in-depth analysis of the Microsoft 365 threat landscape. You can download it here Cyber Security Report.
And the last thing for today. If you’d like to take a deeper dive into the Microsoft 365 threat landscape and learn the key strategies to building cyber security resilience, watch our free on-demand webinar.
September 18, 2018
The digital twin, a concept and IoT-enabling technology that is becoming more pervasive in supply chain and asset-intensive industries, holds much promise for perhaps an unlikely candidate – the smart building.
It’s one of the key findings in a comprehensive report that looks at how to achieve data interoperability between building systems and the IoT recently released by the Georgia Tech Center for the Development and Application of Internet of Things Technologies.
The built environment is becoming more of a computing platform, according to Dennis Shelden, associate professor at Georgia Tech’s School of Architecture, as well as director of the university’s Digital Building Lab. Still, building systems lack connectivity. One way to overcome this is by leveraging Building Information Modeling (BIM), software used in 3D architectural models, to act as the digital twin for distributed IoT systems. Widely in use today in design and construction, these 3D plans often aren’t connected in any way to the life of the building upon opening, Shelden said.
“Once the architect stopped drawing, they were out of the picture, and never have been able to understand (the building) as we designed it,” said Shelden, who is both an architect and technologist. “We know we can now connect to the live building, and enrich knowledge and simulation of what the building is because we have all the sensor data.”
According to the report, connecting the building automation and control (IoT) data protocols with the BIM data schemas can “provide a critical layer of spatial semantics to these IoT systems, such as device geo-positioning and metadata tagging, and enrich Smart Building/Smart City efforts while harmonizing these data sources with various data protocols.”
This holds tremendous potential not only for connected buildings, but also for the architecture profession as a whole.
“There’s a whole next generation possibility for architects to have this longer relationship through continuous improvement through IoT,” Shelden said.
The paper, “Foundational Research in Integrated Building Internet of Things Data Standards,” was produced by Georgia Tech College of Design researchers under the supervision of Shelden and Dr. Pardis Pishdad-Bozorgi of the School of Building Construction. It aims to develop a strategy and preliminary framework for building level IoT semantic models and open data strategies, and offers initial use cases.
In addition to looking at enabling technologies, current standards and protocols, the paper offers a vision for what networked buildings could look like in smart cities. Different spheres such as commercial, office, government, health care, residential and campuses coalesce around use cases, and exchange basic “packages” of data standards collected by all buildings – such as space occupancy rate, people counting, electricity and water usage and more.
“The data flow generated in each building is collected by the data hub of each community, and then connected to the city-level IoT network,” the report’s authors wrote.
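A hypothetical sketch of one such per-building “package” might look like the following; the field names are illustrative, not taken from any published schema:

# Hypothetical data "package" a building might report to its community hub.
from dataclasses import dataclass

@dataclass
class BuildingPackage:
    building_id: str
    occupancy_rate: float     # fraction of spaces currently in use
    people_count: int
    electricity_kwh: float    # consumption over the reporting interval
    water_liters: float

sample = BuildingPackage("midtown-office-04", 0.62, 418, 1250.0, 9800.0)
print(sample)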
For those interested in learning more about the research and its application, Georgia Tech’s Digital Building Lab will hold a public symposium Oct. 25-26. “Data Experience and Environment,” will feature conversations with leaders in the high-tech building space and explore the technology itself, as well discuss local startup and incubator resources.