TACC, Lamont Observatory Host One of the Largest Earth Sciences Data Collections in the Country

March 7, 2018 — The Texas Advanced Computing Center (TACC) at The University of Texas at Austin is partnering with the Lamont-Doherty Earth Observatory (LDEO) at Columbia University to host one of the largest Earth sciences data collections of its type in the country.

The data relates to the Ross Ice Shelf, a massive slab of floating ice about the same size as France. Over the past four years, researchers at LDEO have been flying over the frozen waters of the polar regions and collecting field data for the ROSETTA-Ice project, which studies the Ross Ice Shelf. The shelf is constantly fed by a flow of ice from glaciers draining from both the East and West Antarctic ice sheets. The field data includes crucial information on the shelf and the underlying tectonics of the Antarctic region.

Ice shelves, like icebergs, lie mainly below the waterline, so the majority of a shelf is not visible without scientific instruments. Studying how the ice, ocean, and underlying seabed interact will inform scientists of potential changes in the ice shelf under projected climate change.

“The Ross Ice Shelf is of interest because it’s floating, allowing ocean water to move freely about beneath it, and we have seen that other regions like this can become unstable and break up, releasing ice from glaciers dammed up behind them into the ocean,” said Nick Frearson, a lead engineer on the ROSETTA-Ice project, whose team designed the Icepod, the data collection system and sensor suite, and the radar technology that probes the ice shelf. “Warming ocean water, a significant couple of degrees warmer than the surrounding water, is getting underneath the shelf, and that can mean the difference between freezing and melting at the base of the ice,” he said.
“The shelf acts like a large cork impeding the flow of incident glaciers and ice streams, and could have far-reaching effects if it changes significantly and releases more ice to flow from the land into the sea, raising sea levels globally in the process.”

Frearson says that the data being collected, hundreds of terabytes in total, is unique. “We take data from a suite of instruments, all sampled synchronously, and bring them together to form a much bigger picture than if we just analyzed data from one instrument,” he said.

Until now, the sea floor under the shelf had only been mapped to a resolution of 50 km, using a combination of satellite gravity data and a land survey undertaken in the 1970s. That resolution is coarse enough to hide whole mountain and valley systems, and was not detailed enough for oceanographers to accurately model ocean currents flowing under the shelf. With state-of-the-art radar; gravimeters, which measure gravity anomalies; a magnetometer, which measures Earth’s magnetic anomalies; LIDAR, which remotely senses the surface with laser pulses; and high-resolution photogrammetry to map surface structures, ROSETTA has been able to map the interior and ocean floor of the shelf at much higher resolution.

“In the process, we have collected many hundreds of terabytes of data and needed a state-of-the-art solution to manage it. That’s where TACC comes in,” Frearson said.

To collect, process, analyze, and store the data, Frearson and colleagues at LDEO have been using National Science Foundation-funded Extreme Science and Engineering Discovery Environment (XSEDE) allocations on resources such as Stampede2 and Ranch. Researchers also relied heavily on TACC’s Corral, even though it is not an XSEDE resource. XSEDE is a single virtual system that scientists use to interactively share computing resources, data, and expertise.
Stampede2 is used for data processing; Corral for data storage; and Ranch tape storage for long-term archiving of the data.

“The speed of XSEDE and TACC resources is superior to our existing high-performance computers at Lamont,” said Lingling Dong, a computer software and data engineer. “I have data from 2015 that took 50 hours to process using Lamont resources; it needed only three hours of processing time using XSEDE resources, plus a total of two hours for data transfer across the network.”

When it comes to storage, LDEO had been storing the data locally. “But because of the size of it now, we couldn’t even back it all up,” Frearson said. “We had the original raw data that we brought back from the field as a backup, but that was it. It’s a lot more data than we had previously been used to handling.”

Corral is the storage system of choice for the Polar Geophysics Group at LDEO and leads the way in the preservation and sharing of data for researchers. Corral enables data-centric science throughout the U.S. This storage and data management resource is designed and optimized to support large-scale collections and a collaborative research environment.

“It’s admirable that LDEO has planned for how much data they are going to generate and want to make sure that it’s available over a period of several years,” said Chris Jordan, manager of TACC’s Data Management and Collections group. “From the start, they wanted a way both to store hundreds of terabytes of data and to make it widely available on the web as it’s uploaded.”

In addition, LDEO is using some of the academic referencing mechanisms that make it easier to search for and locate data sets, similar to the way academic literature is indexed. “They chose to use DOI indexing on all of their data as they upload it so that people can find it more easily,” Jordan said.
“The National Science Foundation promotes the citation of data sets in addition to the citation of publications.”

“We aren’t the only science group having to cope with very large volumes of data, and we hope that the partnership we have forged with TACC shows that it is possible to manage and disseminate this level of data in a cost-effective, user-friendly, and easily accessible manner,” Frearson said. “This data will help people in the science community who are interested in the cryosphere, the polar regions in general, and the changes that are going on there. We hope that people across the globe, as well as institutions in the U.S., will benefit from this data set.”

Source: Faith Singer-Villalobos, TACC
NetworkTigers discusses the future of cybersecurity. Will it get worse before it gets better?

Cybersecurity is one of the most important investments any company or individual can make in today's interconnected world. Ensuring that your home or business is appropriately protected can save more than money: it can also protect your privacy in an era where data privacy is becoming ever more precious and important to preserve. However, today's cybersecurity landscape is an ever-evolving field of threats and dangers. Will cybersecurity get worse before it gets better? Experts seem to believe that the risks will only continue to evolve. The question is, how can we best ensure we are prepared to meet them when they do?

Recent cybersecurity trends

With the rise of remote work and the multiple supply chain shocks that overtook the international datasphere during the global pandemic, cybersecurity was paramount in 2020. The COVID-19 era saw multiple new threats arise around the way companies conducted remote work. Zoom bombers, new kinds of ransomware, and a rise in phishing attacks on employees working from home all contributed to increased cybersecurity risks in the past year. However, a new report shows an alarming trend in how often businesses were hacked throughout 2021. According to Check Point Research, businesses weathered up to 50% more cyberattacks per week in 2021 than in 2020. The firm counted only detected and blocked threats in its assessment, meaning that the actual number may be even higher. With numbers like these, it's no wonder that businesses seem to have begun to accept cyberattacks as the new normal.

Cyber threats on the rise

The data shows that cybersecurity seems to be getting worse before it gets better. Some of the latest cyber threats include:

- Social engineering: Even as technology becomes more difficult to breach, human error remains the main cybersecurity weakness.
Social engineering is an attack method that targets human interactions. Socially engineered attacks often rely on people's urge to be helpful or fear of punishment if they make a mistake. Many data security experts advocate increased employee IT training and open communication across company strata to address socially engineered threats. Employees are less likely to fall for a phony phishing attempt impersonating a CEO or senior officer if they feel able to contact management and report attempts to target them.

- Sophisticated phishing: Phishing is particularly dangerous for employees working from home. As more business communication occurs via email or messaging, phishing attempts have become more prevalent and more successful. In a recent development, certain industries, often medical and insurance, report facing a one-two punch of phishing attacks. Employees in these fields have received emails from realistic-looking addresses posing as clients or vendors of the company. The only request in these emails is that the targeted employee give them a call. Scammers use these sophisticated phishing approaches to build rapport and trust before asking the targets to share information or send money. Because multiple forms of communication are used, these phishing attempts appear more trustworthy. A San Francisco report shows a 10% increase in email-to-phone phishing attacks, with businesses in the medical and insurance fields up to 60% more likely to become targets.

- Credential compromise: The reuse of passwords is a common threat to even the best cybersecurity networks. Even with constant verification or multi-factor authentication, reused or commonly used passwords can undermine otherwise effective barriers that keep hackers at bay. A Google study shows that up to 65% of people reuse passwords across multiple accounts. Much of this reuse is across streaming sites, but many people report mingling personal and professional passwords.
Some recycling is understandable when the average person has to remember around 90 different passwords. However, doing so can substantially jeopardize the safety and security of your company's network.

Cybersecurity reports and assessments

Will cybersecurity get worse before it gets better? Even the most optimistic data privacy experts say things will get more complicated before anyone can be sure, and most cybersecurity professionals say the outlook is bleak. Cybersecurity has gone from a small-scale concern to a full-blown national and international security risk. Destructive hacks such as SolarWinds show that the extent of data privacy breaches is still not fully understood. Ongoing hacks threaten the safety of every Internet user, with new risks being discovered daily. By updating and investing in cybersecurity to the fullest extent of your ability, you can hope to surf the wave of new worries in the cybersecurity sector. However, no one can fully outrun the onslaught of cyber risks. It's going to get a lot worse before it gets better, but with the right focus and upgrades, we can weather the storm.
Viruses and worms are malicious programs that self-replicate on computers or via computer networks without the user being aware; each subsequent copy of such a program is also able to self-replicate. Malicious programs that spread via networks or infect remote machines only when commanded to do so by their "owner" (e.g. backdoors), and programs that create multiple copies which are unable to self-replicate, are not part of the Viruses and Worms subclass.

The main characteristic used to determine whether a program is classified as a separate behaviour within the Viruses and Worms subclass is how the program propagates, i.e. how it spreads copies of itself via local or network resources. Most known worms spread as files sent as email attachments, via a link to a web or FTP resource, via a link sent in an ICQ or IRC message, via P2P file-sharing networks, etc. Some worms spread as network packets; these directly penetrate the computer's memory, where the worm code is then activated.

Worms use the following techniques to penetrate remote computers and launch copies of themselves: social engineering (for example, an email message suggesting the user open an attached file), exploiting network configuration errors (such as copying to a fully accessible disk), and exploiting loopholes in operating system and application security.

Viruses can be further divided according to the method they use to infect a computer. Any program within this subclass can also have additional Trojan functions, and many worms use more than one method to spread copies of themselves via networks.
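The classification rule described above, self-replication first and then propagation method, can be sketched as a small decision procedure. This is a hypothetical illustration of the logic only; the labels and propagation names are invented for the example and are not Kaspersky's actual taxonomy or API.

```python
# Illustrative sketch: a sample belongs in the Viruses and Worms subclass only
# if it self-replicates autonomously; the behaviour within the subclass then
# follows from how copies propagate. Categories here are made up for clarity.

def classify(self_replicates: bool, spreads_only_on_command: bool,
             propagation: str) -> str:
    """Return a coarse behaviour label for a malicious program."""
    if not self_replicates or spreads_only_on_command:
        # e.g. backdoors, or programs whose copies cannot replicate further
        return "not Viruses-and-Worms"
    if propagation in {"email", "instant-messaging", "p2p", "network-packet"}:
        return "Worm"        # spreads via local or network resources
    if propagation == "file-infection":
        return "Virus"       # infects host files on the local machine
    return "Unclassified"

print(classify(True, False, "email"))           # Worm
print(classify(False, True, "network-packet"))  # not Viruses-and-Worms
```

A real classifier would of course inspect the sample's behaviour dynamically; the point here is only that propagation method, not payload, drives the subclass distinction.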
User authentication verification is an essential aspect of network security. Every large organization, small or medium-sized business, and government agency must treat it as highly important. A solid user authentication system protects your business and network infrastructure (databases, records, software and hardware, computer systems, etc.) from threats that could result in huge losses. Thus, it is essential to put the necessary authentication systems in place to prevent unauthorized network access.

Single-Factor Authentication (SFA) has been one of the most widely used authentication methods. However, technological advancement, which has brought both positive and negative effects, has made it vulnerable. Hence, we review SFA's loopholes while looking at other alternatives you can consider for your business.

Single-Factor Authentication (SFA)

Single-Factor Authentication is the conventional security method for regulating and securing access to a network or system. The method identifies and ensures that the party trying to gain access is indeed permitted to do so by requesting one category of credentials for verification. Password-based authentication is the most common Single-Factor Authentication method: it demands that users enter the right username and password before granting them access. The method relies heavily on the diligence of the user or network administrator to create a strong password and ensure it remains secure and unknown to unauthorized persons.

Single-Factor Authentication is viewed as a vulnerable authentication method by most experts, with CISA (the Cybersecurity & Infrastructure Security Agency) adding it to its list of bad practices. SFA is vulnerable to many password-compromising techniques, such as phishing, social engineering, network sniffing, and keylogging. This makes businesses that employ it as their major network security method susceptible to network compromise and other security threats.
More information on the risks involved in Single-Factor Authentication (SFA) will be shared in this article, but first, we will look at the method's alternatives.

Two-Factor Authentication (2FA)

This authentication method is sometimes referred to as two-step verification or dual-factor authentication. It requires a user or system administrator to provide two distinct verification factors, enabling the system to carry out a proper identification or verification process. It is a much-improved method compared to Single-Factor Authentication (SFA), and it is fast replacing SFA in the cybersecurity world.

Two-Factor Authentication (2FA) better protects your business network, data, and other essential resources with restricted access. While SFA requires only a username and password (a single factor), 2FA goes further by requesting a second, different factor, which could be a code (security token), or a fingerprint or facial scan (biometric factors). The method adds a security layer to the verification process, making it harder for unauthorized personnel to access the system or network. An ordinary password compromise therefore doesn't leave the system vulnerable, as the attacker would still have to get past a second factor for proper identification and verification.

Multi-Factor Authentication (MFA)

Two-Factor Authentication (2FA) is a form of Multi-Factor Authentication (MFA). This authentication method requires users to provide more than one factor before gaining access to a system or network. Multi-factor authentication is a generic term for authentication methods beyond Single-Factor Authentication (SFA); it includes 2FA, 3FA, 4FA, and even 5FA. MFA methods provide a higher level of security as the number of factors grows: a system protected by 3FA is more secure than one protected by 2FA, 4FA provides more security than 3FA, and so on.
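In many 2FA deployments, the "security token" factor is a time-based one-time password (TOTP). As a rough illustration of why a stolen password alone is not enough, here is a minimal TOTP generator following RFC 6238, using only the Python standard library. The Base32 secret in the example is the RFC's published test key, not a real credential; real deployments provision a per-user secret, usually via a QR code.

```python
# Minimal TOTP sketch (RFC 6238): server and authenticator app derive the same
# short-lived code from a shared secret and the current time, independently.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Return the TOTP code for a Base32 secret at Unix time `at` (now if None)."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time() if at is None else at) // step
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test key ("12345678901234567890" in Base32), at T=59 seconds:
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59))  # 287082 (RFC test vector)
```

Login succeeds only if the password and the current code both match, so a phished password is useless to the attacker once the 30-second window passes.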
Additional authentication factors in MFA systems usually include fingerprints, voice recognition, facial scans, PINs, security tokens, and other means of verifying your identity. Most businesses now rely on MFA, especially for sensitive information and data with high market value. The method provides layered security, which renders most password-compromise methods, such as phishing, social engineering, and malware fraud, far less effective. While Multi-Factor Authentication isn't a lasting solution to these attacks, it helps mitigate them and makes it harder for unauthorized users to access the network.

What Are the Risks of Single-Factor Authentication?

Businesses that still use Single-Factor Authentication expose themselves to risks that can cause loss, compromise, or inability to access valuable data. Whether you use Single-Factor Authentication for your financial accounts, company network, database, or computer systems, you are exposed to the following risks:

Ease of Attack

Single-Factor Authentication is a basic form of network security, making unauthorized access and data breaches easier for attackers. The average cost of a data breach has increased by 2.6% in recent years, moving from $4.24 million in 2021 to $4.35 million in 2022, an indication that businesses keep losing to data breaches. Using the SFA method for your network security makes your system susceptible to these breaches, putting your business on the verge of losing millions.

SFA requires a single factor (a password or PIN code), which can easily be compromised through phishing and other methods. According to IBM's Cost of a Data Breach Report 2022, 19% of breaches were due to compromised or stolen credentials, while phishing was responsible for 16% of breaches. Using SFA thus puts your business at risk because of how easy it is for attackers to bypass this security process.
Permanent Loss of Access

Using Single-Factor Authentication (SFA) can result in permanent loss of access to a system or network if you forget or misplace the required factor and have no means of retrieving it. An example is the cryptocurrency wallet Trust Wallet, which requires a unique set of words called a "phrase" to access your wallet. Failure to provide that phrase means permanent loss of access to your portfolio and whatever digital assets are in it. Multi-Factor Authentication (MFA) provides alternative means of gaining access to a system or network when one factor is unavailable. This is another advantage of MFA over SFA.

Contact Us for Help with Multi-Factor Authentication

Businesses need to move past Single-Factor Authentication and adopt Multi-Factor Authentication to prevent the ease of attacks and data breaches that could cost them millions, and to avoid permanent loss of access to vital databases. You shouldn't compromise on integrating MFA into your business to improve your network security. Contact Dynamix Solutions to implement a user authentication system that is both convenient and secure. Call us toll-free at 1 (855) 405-1087.
Distributed antenna systems (DAS) are a widely utilized and massively beneficial technology, but not everyone knows what they actually are. A distributed antenna system is a network of spatially separated antenna nodes connected to a common source, which in turn provides wireless service to an area. The difference between a distributed antenna system and a single-antenna system is that a DAS can offer the same amount of coverage while benefiting from more efficient power usage and better reliability. Applications range from enhancing poor cellular signals to the critically important "first responder" public safety networks.

As a WiFi Distribution System

In some instances, especially for larger commercial bodies and facilities with specific needs, distributed antenna systems are used to create a WiFi network. This contrasts with a traditional modem-and-router setup, but on a large enough scale, using a DAS still makes sense because of its reliability and low maintenance requirements. Distributed antenna systems can also be deployed in areas where traditional WiFi systems wouldn't work, such as underground zones like subway systems where a typical WiFi setup may not be practical.

Ensuring Public Safety

There are plenty of buildings where radio or cell reception isn't just a luxury but a necessity for day-to-day operations. In a commercial facility, hospital, or school, communication is critical, especially in emergency scenarios where public safety is at stake. Distributed antenna systems work for these kinds of industries because they can be customized to fit specific needs. If you've dealt with poor coverage in the past, you already know how frustrating it is to have your wireless network fail when you need it, which is why distributed antenna systems are coveted as an effective solution.
Get in Touch with FiberPlus

FiberPlus has been providing data communication solutions for over 25 years in the Mid-Atlantic region for a number of different markets. What began as a cable installation company for Local Area Networks has grown into a leading provider of innovative technology solutions improving the way our customers communicate and keeping them secure. Our solutions now include:

- Structured Cabling (Fiberoptic, Copper and Coax for inside and outside plant networks)
- Electronic Security Systems (Access Control & CCTV Solutions)
- Wireless Access Point installations
- Public Safety DAS – Emergency Call Stations
- Audio/Video Services (Intercoms and Display Monitors)
- Support Services
- Specialty Systems
- Design/Build Services

FiberPlus promises the communities we serve that we will continue to expand and evolve as new technology is introduced within the telecommunications industry.

Have any questions? Interested in one of our services? Call FiberPlus today at 800-394-3301, email us at email@example.com, or visit our contact page. Our offices are located in the Washington, DC metro area; Richmond, VA; and Columbus, OH. In Pennsylvania, please call Pennsylvania Networks, Inc. at 814-259-3999.
By Matt Jones, Tessella

Most articles about artificial intelligence start with big claims: 'AI will change the world'; 'Here are some exciting examples of AI'. These are legitimate starting points, but fewer words are spent understanding what contemporary AI really is, and who can benefit from it. An uninformed observer could be forgiven for thinking AI is a new technology that you can buy as part of a platform, simply plug into your business, and become the next digital company of the future.

Most technology is quite complicated. And AI is 'quite complicated' multiplied. If a business wants to really take advantage of AI, it needs to stop worrying about the latest cool AI gadget or platform and start thinking about what an AI toolset can do for the specific problems facing the enterprise.

So, what do we mean by AI?

Firstly, we need to understand what AI is and what it is not. AI is a composite of several techniques and toolkits, including machine learning, deep learning, neural networks, and natural language generation and processing. These tools ingest carefully selected training data to make sense of the task, the information sought, and the world in which they operate. What we are not talking about is 'strong AI' or Artificial General Intelligence (AGI), which can, in theory, learn without training but is still a considerable time from being a reality. That is fine for future gazing, but if you're looking at what AI can do for you now, put AGI on the back burner.

Building and training AI

If you really want to benefit from AI, you need to develop an AI suited to your problem, and then train it correctly. One option is to buy an AI black box which will suck in all your data and spot potentially useful patterns within it.
However, many problems worthy of the black-box price tag are too complex to automate, and the correlations that pop out the other end are not magically wrapped up in business insight or context; they usually require a lot more work to understand what they really mean, if they're relevant at all.

Another option is to build the AI yourself. This allows you to bake in an understanding of your data and context, rather than using someone else's approximation. It ensures you understand what's happening, which helps you identify possible flaws or biases once it's up and running. And you get to keep control of all of your data. There are times when a black box is simpler, and we are not advocating building your own AI in every situation, but doing so certainly offers more control and greater precision in the answers generated. The good news for people wanting to take this approach is that the leading tech companies (Google, Microsoft, Facebook, etc.) have made their AI tools freely available to anyone. These, individually or combined, can be used to build bespoke, transformational AI or machine learning platforms, as sophisticated as any black box currently on the market, for the price of a data scientist's salary or consultancy fee.

Customer insights vs operational intelligence

A key part of building and training your AI is understanding what you want it to do. Most discussion of AI in the media refers to consumer products which aim to guess your behaviour. Much of this is a progression of the data analytics approach used by marketing-led companies like Amazon: mine huge data sets to spot how consumers reacted to promotions, pricing, and so on, and use that to predict future behaviour. Finding that a promotion adds 1% to sales is great; it's not important why that 1% bought or who they were. AI takes this further by using more sophisticated data sets, bringing in new data sources (images, voice, etc.) and an ability to learn as it goes.
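As a toy version of the correlation-mining approach just described, the snippet below computes the sales lift associated with a promotion while saying nothing about why it occurred or who responded; the numbers are invented for illustration.

```python
# Correlation without causation: compare average weekly sales with and
# without a promotion. The data is fabricated for the example.
weeks = [
    {"promo": True,  "sales": 103}, {"promo": False, "sales": 99},
    {"promo": True,  "sales": 102}, {"promo": False, "sales": 100},
    {"promo": True,  "sales": 101}, {"promo": False, "sales": 98},
]

promo = [w["sales"] for w in weeks if w["promo"]]
no_promo = [w["sales"] for w in weeks if not w["promo"]]

# Lift = (avg sales with promo / avg sales without promo) - 1
lift = (sum(promo) / len(promo)) / (sum(no_promo) / len(no_promo)) - 1
print(f"promotion lift: {lift:.1%}")  # the "what", with no hint of the "why"
```

This is exactly the kind of answer a marketing use case can act on directly, and, as the next section argues, exactly the kind that is not enough when a single high-risk event has to be predicted and explained.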
But essentially it is still looking for correlations. Operational and business challenges, on the other hand, tend to use large volumes of data to predict a small number of high-risk events, e.g. under what circumstances a jet engine will fail, a drug will exhibit adverse effects, or an oil drilling platform will suffer subsurface corrosion that affects the integrity and safety of operations. They cannot afford failed experiments. They need to know whether one event will lead to another so they can act, and on those decisions serious money, and sometimes lives, depend.

Doing this needs someone to identify the data needed to train the AI, precisely manage the training, evaluate the outputs, and design scientific experiments which can isolate the issue being studied and establish whether one outcome is the direct result of another action. For example, does a change in readings from a jet engine mean it needs a quick clean, or that you should take the plane out of service? If you're not 100% sure, you need to play it safe, which can be very costly. This needs data skills, the industry expertise to understand what the data means, and an understanding of the scientific method to test patterns, eliminate biases, and prove the link between cause and effect. For all the talk of AI as the latest technology, it is as much a scientific issue as a technological one.

Using AI effectively

AI has become a very broad term. The curse of the hype around it is that lots of people are trying to position themselves as experts, shouting about its business-changing potential but rarely explaining how. Platforms with baked-in AI have something to offer. Many innovative companies are developing AI-based products designed for non-expert use, and many of these technologies will be excellent and solve important problems. But buying an off-the-shelf product doesn't immediately solve all your business problems or put you ahead of the competition.
If you see AI as an opportunity for your company to make disruptive leaps forward, you need to look at how it can work for you. AI and machine learning are a series of tools and toolkits. Like any disruptive force, you can't just buy new tools and expect them to solve your problems. You need people who understand these tools and know how to wield them, and when and where they are most appropriate. That is how to become one of the next disruptors.

Matt Jones is Lead Analytics Strategist at Tessella, Altran's World Class Center for Analytics.
February 18, 2020 by Siobhan Climer

From a malicious ransomware attack to a straightforward power outage, data backup methods are more important than ever for protecting your data and getting your data center and staff back to work. Backup methodology has remained relatively unchanged, yet too often businesses are unable to recover from a disaster event because of corrupted data files. An untested data backup method is almost as dangerous as no backup method at all. Implementing one of these data backup methods is a foundational component of delivering technology services to your organization. There are at least a dozen different data backup methods, but here are the four most common.

The Four Primary Data Backup Methods

Full Backup

Just like it sounds, a full backup is a complete backup of every file and folder in the system. Full backups take more space and require more time; however, they are the most comprehensive, and restoring lost data from a full backup is faster than with other methodologies. If you use only a full data backup method, you create a new full backup at every backup window.

Pros: Full backups are the most comprehensive and robust form of backup methodology.

Cons: Most businesses do not need this level of backup, as most files do not change significantly from one backup to the next; full backups therefore often consume large amounts of unnecessary storage space, using up resources and costing money.

Incremental Backup

An incremental backup starts with an initial full backup of the system. In all following backups, only the files that have changed since the last backup are backed up. Organizations often combine full backups with incremental backups to strategically store files during different resource utilization windows. The risk with this combined method is that the current state of the live file system will naturally differ from the backup, posing a risk should a disaster occur.

Pros: Incremental backups use far smaller storage volumes.
Cons: Incremental backups require significant computing overhead to compare the live system against the previous backup. In addition, the restore process takes longer.

Differential Backup

Like an incremental backup, a differential backup creates an initial full backup of the system, and subsequent backups store only changed files. The difference between incremental and differential data backup methods is that the full backup becomes a regular touchpoint. For example, if a full backup is run on a Sunday and a file is changed on a Monday, that specific file will be included in every differential backup until the next full backup.

Pros: Differential backups simplify recovery because only the last full backup and the last differential backup are needed (as opposed to working through every incremental backup since the full backup).

Cons: Differential backups require more storage space and network bandwidth.

Mirror Backup

A mirror backup stores an exact copy of the source data. It is a straight copy of the system at a given time and is therefore much faster to create than other data backup methods.

Pros: Reduces storage overhead by making removal of obsolete files from the backups simple.

Cons: Increases risk, as a mistakenly deleted file would also be deleted from the backup if the mistake is not identified before the next scheduled backup.

Finding A Data Backup Method That Works For You

Backing up your organization’s data is a vital step in ensuring that your business can continue should a disaster occur. However, backup and disaster recovery strategy isn’t as simple as a full backup once a week. A strategic backup method and disaster recovery program aligns the benefits of different data backup methods with your business goals, RTOs, and RPOs. It also ensures that your backup methodology is regularly tested and validated.
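The selection logic behind the four methods above can be sketched in a few lines of Python. This is an illustrative sketch, not any vendor's tooling: it assumes files are tracked by modification timestamp and shows which files each method would copy on a given run.

```python
def files_to_copy(mtimes, last_full, last_backup, method):
    """Which files would a backup run copy?

    mtimes      -- dict mapping file name -> last-modified timestamp
    last_full   -- time of the most recent full backup
    last_backup -- time of the most recent backup of any kind
    """
    if method in ("full", "mirror"):
        return set(mtimes)  # everything, every run
    if method == "incremental":
        # only files changed since the last backup of any kind
        return {f for f, t in mtimes.items() if t > last_backup}
    if method == "differential":
        # every file changed since the last FULL backup
        return {f for f, t in mtimes.items() if t > last_full}
    raise ValueError(f"unknown method: {method}")

# Full backup ran at t=0; the most recent backup (of any kind) ran at t=5.
mtimes = {"a.txt": 1, "b.txt": 6, "c.txt": 9}
print(files_to_copy(mtimes, 0, 5, "incremental"))   # only b.txt and c.txt
print(files_to_copy(mtimes, 0, 5, "differential"))  # a.txt too: it changed since the full
```

Note how the differential set keeps growing until the next full backup runs, which is exactly why differential backups need more storage but restore from just two archives (the last full plus the last differential), while an incremental restore needs the full plus every incremental since.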
Learn more about modern data backup methods in our whitepaper, Moving Away from Tape: Strategies for Advancing to a more Modern Backup Solution, and schedule a talk with our backup and disaster recovery experts today. Like what you read? Contact us today to discuss your data backup methods. Mindsight, a Chicago IT services provider, is an extension of your team. Our culture is built on transparency and trust, and our team is made up of extraordinary people – the kinds of people you would hire. We have one of the largest expert-level engineering teams delivering the full spectrum of IT services and solutions, from cloud to infrastructure, collaboration to contact center. Our customers rely on our thought leadership, responsiveness, and dedication to solving their toughest technology challenges. About The Author Siobhan Climer, Science and Technology Writer for Mindsight, writes about technology trends in education, healthcare, and business. She writes extensively about cybersecurity, disaster recovery, cloud services, backups, data storage, network infrastructure, and the contact center. When she’s not writing tech, she’s reading and writing fantasy, gardening, and exploring the world with her twin daughters. Find her on twitter @techtalksio.
Multilingualism keeps you fit!

Studies show that older people with foreign language skills are more mentally active than others. Degradation processes that appear to be age-related occur later on average in people who have enjoyed higher education and pursued lively intellectual activity throughout life. However, research also shows that among people with comparable levels of education, those who speak two or more foreign languages have a clear advantage. Learning a foreign language is therefore jogging for the brain and counteracts aging processes. Studies have also shown that even at an advanced age, it is still possible to achieve good learning results.

Our brain naturally undergoes aging and degradation processes over time. That is why many older people ask themselves whether it makes sense to learn a foreign language at an advanced age. From the age of 70, the brain’s aging mainly affects short-term and working memory. However, long-term memory, which is responsible for language comprehension, remains well preserved into old age. This explains why my grandfather can still pronounce Italian verbs flawlessly but regularly forgets where he left his hearing aid. Learning processes may take a little longer than for younger people. Still, older people have the great advantage of being able to draw on a rich repertoire of knowledge and learning techniques and to build mnemonic bridges that make learning much easier. So what does it matter if you sit a few minutes longer over a foreign-language text – in old age, you usually have more time for leisure activities and hobbies anyway.

To make it easier for you to learn a foreign language or brush up on a language you have already learned, we have put together a few valuable tips for you:

1. Take your time

As older people often need a little more time to study, it is essential not to stress yourself.
So don’t put yourself under unnecessary pressure. Instead, plan a little more time; it is better to study for only a few minutes a day. Repetition becomes increasingly important in old age.

2. Start with the topics you can use

Do you like to go on holiday? Then work with learning materials that cover this topic. There are exciting learning materials for every field of interest. Even a single foreign-language text can keep you busy for days or longer and lead you to success (the Birkenbihl method for learning foreign languages is particularly recommended for this purpose).

3. Train your pronunciation

Older people often find it more difficult than younger people to learn the pronunciation of a language perfectly. Multimedia language courses offer a variety of ways to train pronunciation. Native speakers read texts aloud, which helps you to recognize and internalize the melody and rhythm of the language. Repeating the sentences is an excellent way to check your pronunciation. However, it is crucial that you first listen actively and passively to the native speakers often; otherwise, you will quickly acquire an accent.

4. Learning together is more fun

Learning a foreign language on your own is tedious for many. An alternative is to learn together with your partner or with a friend. Watch films together, read books together, and help and correct each other.

5. Use your experience

Older people’s experience, learning, and knowledge are usually more extensive than young people’s. Draw on your experience and think about what has helped you most in learning and what tactics have not helped you much. Build mnemonic bridges and use associations to remember things.

So even in old age, you can succeed at language learning. In addition to passing the time and having fun, you will train your brain!
Series: COBOL Programming

COBOL Programming – Basics
The COBOL Programming Basics course introduces the COBOL language and its basic structure. It describes the syntax and use of program logic statements in the procedure division of a COBOL program. It examines the standard loop and conditional statements, and the available arithmetic operations. It also describes the use of basic screen and printing instructions.

Data and Datafile Definitions in COBOL
The COBOL Data and Datafile Definitions course explains how the COBOL programming language describes and defines data. It also shows how COBOL data definitions can be used to manipulate the way data is used. It explores display and computational formats, and the use of redefines to reference data in different ways.

COBOL Programming – Manipulating Data
The COBOL File Handling course describes how COBOL can be used to define and process several of the common file types used in system processing. It details how sequential and direct files can be defined in the environment division of the program, and the instructions and processes used to access data sequentially and directly through an index.

COBOL Programming – Advanced
The COBOL Programming – Advanced course examines the use of tables in a COBOL program, and the methodologies used for file sorting. It details the use of subprograms and the linkage section. It also shows how parameters are passed to a program.

COBOL – IBM Enterprise COBOL 6.3 for z/OS
The COBOL – IBM Enterprise COBOL 6.3 for z/OS course is designed for learners with a basic understanding of generic COBOL who need to extend its use to the z/OS environment. It describes how COBOL programs are made available through compile and bind processes and discusses coding and options specific to the z/OS environment. The use of IBM’s Language Environment is presented, and a number of coding techniques used to improve the performance of COBOL running on z/OS are also shown.
Accessing IMS Databases from COBOL The Accessing IMS Databases from COBOL course details the structure and use of an IMS/DB database. It gives examples of the DL/I data access language and shows how to use DL/I in COBOL programs to read and update IMS data. The concept of backup and recovery, particularly in the context of batch programming runs, is also explained.
September 17, 2015 Accenture Finds More Than Half of 12-Year-Old Girls in the UK and Ireland Believe STEM Subjects are Too Difficult to Learn LONDON; Sept. 17, 2015 – New research from Accenture (NYSE: ACN) reports that more than half (60 percent) of 12-year-old girls in the United Kingdom and Ireland believe that science, technology, engineering and mathematics (STEM) subjects are too difficult to learn. The survey of more than 4,000 girls, young women, parents and teachers, demonstrates clearly that there is a perception that STEM subjects and careers are better suited to male personalities, hobbies and brains. Half (51 percent) of the teachers and 43 percent of the parents surveyed believe this perception helps explain the low uptake of STEM subjects by girls. Nearly half (47 percent) of the young girls surveyed said they believe such subjects are a better match for boys. The research also suggests that parents and teachers must do more to encourage girls in the early stages of development to embrace STEM subjects if government and business initiatives to increase the number of women in STEM careers are to succeed. Although girls ranked parents and teachers as their biggest influencers when making a decision about subject choice, more than half (51 percent) of parents say they feel ill-informed on the benefits of STEM subjects specifically, and only one in seven (14 percent) say they understand the different career opportunities that exist for their daughters. “It’s worrying that girls’ interest in STEM subjects tails off so early in their time at secondary school. With such a small percentage of parents understanding what these subjects can offer their daughters, it is not surprising that girls become disconnected from STEM,” said Emma McGuigan, managing director for Accenture Technology in the UK & Ireland. 
“Our research suggests that while getting girls enthused about subjects like technology or engineering must start at home, encouragement needs to continue in early education, such as nursery and primary school, so that girls don’t conclude at a young age that math and science are too difficult.” Additionally, while emerging sectors like technology are starting to bridge the gender gap — with groups and initiatives like TechFuture Girls, Stemettes, The Science Museum, techUK and Girls in Tech encouraging women to embrace the digital era – more than three-quarters (77 percent) of girls still believe that the science and technology sector lacks high-profile female role models. “It’s important that girls understand that these subjects are as much for them as they are for boys,” said the Tech Partnership’s CEO, Karen Price. “While a lot of fantastic work has been done to encourage women and girls to embrace STEM, females still only comprise a small percentage of the workforce in related industries. If STEM businesses work together to support teachers and parents to get young girls excited about these subjects from a much younger age, we will be much closer to the goal of making the balance of men versus women in these careers more equal.” Tom O’Leary, director of learning at the Science Museum, said: “At the Science Museum Group, we recognize the importance and scale of the challenge to ensure that young people, especially girls, see that a STEM career is for them. Our own Enterprising Science research project reflects findings similar to Accenture’s, and as such, we have put programmes in place to help more young people find science engaging outside of the classroom. Museums and science centres are in pivotal positions to help build science capital by developing connections between teachers, young people and their families. 
We support efforts by secondary schools to integrate engaging museum experiences and approaches into their teaching, and to help them tap into their students’ home-based knowledge and experiences to make science more meaningful and relevant to young people.” Commissioned by Accenture and conducted by Loudhouse, a specialist research division of the Octopus Group, the online research covered a total of 1,571 girls of secondary school age (11-18) and 2,509 young women (19-23) across the United Kingdom and the Republic of Ireland. Samples of 535 parents and 112 teachers were also taken to determine the influencing factors for girls in their academic subject choices. The survey was conducted in April 2015. Accenture is a global management consulting, technology services and outsourcing company, with more than 336,000 people serving clients in more than 120 countries. Combining unparalleled experience, comprehensive capabilities across all industries and business functions, and extensive research on the world’s most successful companies, Accenture collaborates with clients to help them become high-performance businesses and governments. Through its Skills to Succeed corporate citizenship initiative, Accenture is equipping more than 3 million people around the world with the skills to get a job or build a business. The company generated net revenues of US$30.0 billion for the fiscal year ended Aug. 31, 2014. Its home page is www.accenture.com. # # #
Predefined variable types include clipboard, date time, string, and system settings and parameters.

Clipboard

Use the actions in the Clipboard package to perform operations on the clipboard variable. See Clipboard package.

| Variable | Description |
| --- | --- |
| Clipboard | Returns the contents of the clipboard. |

Datetime

Use the actions in the Datetime package to perform operations on the date time variables. See Datetime package.

| Variable | Description |
| --- | --- |
| Date | Returns the date including hours, minutes, and seconds. Note: Hours can be in 24-hour or AM/PM format depending on the machine configuration. |
| Day | Returns the day in DD format. |
| Hour | Returns the hours in HH format. |
| Machine | Returns the device name as a string. |
| Millisecond | Returns the milliseconds with a value between 0 and 999. |
| Minute | Returns the minutes in MM format. |
| Month | Returns the month in MM format. |
| Second | Returns the seconds in SS format. |
| Year | Returns the year in YYYY format. |

String

Use the following variables to change how a string is displayed.

| Variable | Description |
| --- | --- |
| Enter | Starts a new line without returning to the beginning of the line, based on the operating system of the device. For example, the variable always adds a new line in Linux CentOS. In Microsoft Windows, the variable adds a page break in the Microsoft Word application and a new line in the Notepad application. |
| Newline | Starts a new line and moves the cursor to the beginning of the next line regardless of the application and operating system of the device. |
| Separator | Demarcates a separation between values. |
| Tab | Creates a large space. |
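For orientation, the date and time components listed above correspond to familiar formatting codes. The sketch below uses Python's standard datetime module, not Automation Anywhere syntax, to show what the DD, MM, HH, and YYYY style values in the table look like for a fixed example timestamp:

```python
from datetime import datetime

# A fixed example timestamp: 6 Oct 2022, 15:04:05.123
now = datetime(2022, 10, 6, 15, 4, 5, 123000)

components = {
    "Day":         now.strftime("%d"),       # DD
    "Hour":        now.strftime("%H"),       # HH (24-hour here)
    "Millisecond": now.microsecond // 1000,  # value between 0 and 999
    "Minute":      now.strftime("%M"),       # MM
    "Month":       now.strftime("%m"),       # MM
    "Second":      now.strftime("%S"),       # SS
    "Year":        now.strftime("%Y"),       # YYYY
}
print(components)
```

Whether hours appear in 24-hour or AM/PM format depends on the machine configuration, as the Date row notes; the sketch pins it to 24-hour for reproducibility.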
Menu path: Setup > Network > Network Drives > Windows Drive

In this area, you can integrate network drives shared by Windows as well as those from Linux/Unix servers via the SMB protocol (Samba). You can find a sample configuration at the end of this page.

To manage the drive list, proceed as follows:
- Click the add button to create a new entry.
- Click the remove button to remove the selected entry.
- Click the edit button to edit the selected entry.
- Click the copy button to copy the selected entry.

Clicking the add button will bring up the Add dialogue, where you can define the following settings:
- Enabled: Defines whether the configuration entry will be applied. ☑ The network drive will be integrated.
- Local Mount Point: The local directory under which the server directory is to be visible (default:
- Server: The IP address, Fully-Qualified Domain Name (FQDN) or NetBIOS name of the server.
- Share Name: Path name as exported by the Windows or Unix Samba host.
- User name: User name for your user account on the Windows or Unix Samba host.
- Password: Password for your user account on the Windows or Unix Samba host.
- User writable: ☑ The user can not only read but also write directory contents. Otherwise, only the local root user is able to do this. ☐ The user can only read directory contents. (default)

Sample configuration entry

The following picture shows a sample configuration entry. / (the Linux/Unix-style forward slash) can be used as a path separator. Note that if you enter, for example, \smbmount as a mount point, a directory called \smbmount will be created, because \ is a legal character in Linux directory names. For Share Name, however, / (Linux/Unix-style forward slash) or \ (Windows-style backward slash) can be used as a path separator.
The attacks, some lasting as long as two days, were similar to the DDoS attacks on domain name system (DNS) services supplier Dyn on 21 October 2016. The Dyn attack was enabled by an IoT botnet using the Mirai malware code, prompting fears of more widespread attacks using insecure IoT devices. The affected Russian banks claim that online services were not disrupted, but some described the initial DDoS attacks as massive, followed up by even more powerful attacks. Security firm Kaspersky Lab said more than half of the botnet devices were situated in the US, India, Taiwan and Israel, while the attack came from 30 countries. Each wave of attack lasted for at least one hour, with the longest lasting 12 hours; however, the attacks peaked at only 660,000 requests per second. “Such attacks are complex, and almost cannot be repelled by standard means used by internet providers,” Kaspersky Lab said in a statement. David Kennerley, director of threat research at Webroot, said the attacks on the Russian banks really drive home the security issues of IoT devices. “While attacks like these are complicated, there’s still an element of basic security that could have reduced success: password management,” he said. According to Kennerley, consumers and business users need to understand the importance of changing device passwords from the manufacturer’s default. “If the default password had been changed, many of the devices that make up these botnets could not have been hijacked in the first place. “Default passwords are inherently easy for malware to guess and as the number of connected devices continues to rise, consumers need to change them to more complex ones, otherwise we’ll be seeing a lot more of these attacks in the future,” he said. Read more about DDoS attacks - Security researchers discover more powerful botnets exploiting internet of things (IoT) devices to carry out massive distributed denial of service (DDoS) attacks.
- DDoS attacks have become a commodity, and are available openly on professional services online marketplaces for as little as $5 an hour. - There is a real concern that many companies are being affected by DDoS attacks commissioned by competitors, according to Kaspersky Lab. The attacks confirmed the trend of hijacking IoT devices to bombard targeted organisations with internet connection requests with the aim of overwhelming them and making them inaccessible to users. According to a source in Russia’s Central Bank, the botnet behind the attack included IoT devices, reports Global Research. Security experts have used the Dyn attack to highlight the fact that a wide range of internet-connected devices are vulnerable to hijacking by attackers due to weak security mechanisms. Vulnerable devices include surveillance cameras such as those used in the Dyn attack, as well as routers, digital video recorders (DVRs), smart TVs and even microwave ovens. DDoS used to distract security teams After the release of the Mirai malware code on an underground forum in early October, security experts warned of terabit-class IoT botnet-based DDoS attacks that could knock almost any business offline or disable chunks of the internet. Surprisingly, the attacks on the Russian banks were relatively weak. Other IoT botnet attacks have been among the strongest DDoS attacks seen. But in 2015, communications and analysis firm Neustar warned that smaller DDoS attacks can be more dangerous than a powerful attack that knocks a company offline. Smaller attacks, it said, are increasingly being used to distract IT and security teams to enable attackers to steal data or install malware on systems for use in future cyber attacks. Security blogger Brian Krebs believes Mirai was used to hit his news site with a DDoS attack of 620 gigabits per second (Gbps) in size on 20 September 2016. 
A week later, French hosting firm OVH was hit by an attack that peaked at more than one terabit or 1,000 gigabits per second. The OVH attack set a new record and is believed to have been enabled by using the combined bandwidth of a botnet of 150,000 IoT devices, according to The Hacker News. The power of the Mirai botnet far exceeds earlier IoT botnets discovered in June 2016 to launch DDoS attacks in Brazil and the US of around 400 Gbps. IoT security ‘far from where it should be’ Industry players need to address the security of IoT devices urgently before it is too late, according to Lorie Wigle, general manager, IoT security at Intel. “The recent [IoT botnet] attack on Dyn should be a wake-up call,” she said at Intel Security’s Focus 2016 customer and partner event in Las Vegas. It is good that the attack has happened now, said Wigle, because it shows that the current state of IoT security is far from where it should be. The technology industry has a window of opportunity to ensure IoT is adopted with maximum security and minimum risk, but that window is small and closing rapidly, she warned. The issue is that IoT device manufacturers are failing to implement robust security controls from the outset, said McEvatt, senior cyber threat intelligence manager in UK and Ireland at Fujitsu. “Anyone can use online services such as Shodan to look for vulnerable IoT devices, making organisations an easy target for low-level cyber criminals. The worrying reality is that security is often an afterthought and security fundamentals are still not being followed, such as changing default passwords,” he said. According to McEvatt, to help shift this mindset and make securing internet-connected devices easier for businesses, the Online Trust Alliance (OTA) has produced a framework in IoT security, offering guidance on how to secure embedded devices. 
“This introduction of a kitemark standard for IoT devices is a progressive step towards ensuring safe practice is followed and that security of such devices against these types of hacks is at a premium. This is especially important for the financial sector, which handles lots of sensitive data,” he said. Read more about IoT security - Growth of the internet of things will be slowed or stunted if the industry fails to be proactive about data security, according to IoT Security Foundation. - The influx of internet of things devices will inevitably bring security headaches. Don’t miss out on the opportunities of IoT, but learn how to avoid IoT security issues. - The Five key information security risks associated with the internet of things that businesses can and should address.
For researchers looking into the different types of AI (Artificial Intelligence), it helps to know that AI is divided into various types. There are two main categorizations, based on the capabilities and on the functionality of AI. The diagram below gives a full explanation of the types of AI.

Artificial Intelligence (AI) Type-1: Based on Capabilities

1. Narrow AI or Weak AI:
- Weak AI, also called Narrow AI, is a type of AI that can perform a single dedicated task with intelligence. Narrow AI is the most common form of AI currently available.
- One downside is that Narrow AI cannot perform beyond its field or limitations, because it is trained for one specific job only. This is why it is known as weak AI. Furthermore, Narrow AI can fail in unforeseeable ways if taken beyond its limitations.
- A good example of weak AI is Apple’s Siri. It operates within a limited, pre-defined range of functionality.
- Another example of Narrow AI is IBM’s Watson supercomputer, which uses an expert-system approach combined with machine learning and natural language processing.
- Other examples of Narrow AI include computer chess programs and purchasing suggestions on e-commerce sites, as well as self-driving cars, speech recognition, and image recognition.

2. General AI:
- As the name implies, general AI is a type of intelligence that can perform any intellectual task as efficiently as a human being.
- The idea behind general AI is to make a system that could be faster and smarter, and think like a human on its own.
- As of now, no existing system qualifies as general AI, because no system can yet perform every task as well as a human being.
- Nevertheless, researchers around the globe are currently focusing on building machines with general AI.
- Since systems with general AI are still under serious research, it will take much time and effort to develop them.

3. Super AI or Strong AI:
- Super AI is a level of system intelligence at which machines could exceed human intelligence and perform any task better than a human, with cognitive properties of their own. It is the expected outcome of general AI maturing into strong AI.
- The major characteristics of strong AI include the ability to think, to reason, and to solve puzzles, as well as to make judgments, plan, learn, and communicate on its own.
- Super AI is still a theoretical concept of Artificial Intelligence; developing such systems in reality remains a world-changing task.

Artificial Intelligence (AI) Type-2: Based on Functionality

1. Reactive Machines AI
- Reactive machines are the most basic type of Artificial Intelligence.
- These AI systems do not store experiences or past memories for future actions.
- Reactive machines focus only on the current situation and react to it with the best possible action.
- One important example of a reactive machine is IBM’s Deep Blue system.
- Google’s AlphaGo is also a good example of a reactive machine.

2. Limited Memory AI
- Just as the name implies, limited memory machines can store some data, and hence past experiences, for a short period of time.
- The downside is that these machines can use stored data for a limited time period only, after which old data is discarded to make room for new data.
- Autonomous vehicles, known as self-driving cars, are one of the best examples of limited memory systems. These cars can store the recent speed of nearby vehicles, the distance to other cars, the speed limit, and other information needed to navigate the road.

3. Theory of Mind AI
- Machines with theory of mind AI are expected to be able to understand human emotions, read people’s feelings and beliefs, and interact socially like human beings.
- However, this type of AI machine has not yet been developed. The good news is that researchers are working tirelessly to develop and improve such machines.

4. Self-Awareness AI
- Self-awareness AI is the future of Artificial Intelligence. It is expected that these machines will be super-intelligent, with their own consciousness, sentiments, and self-awareness.
- As can be seen, such machines would be smarter than the human mind.
- Self-awareness AI does not currently exist; it is still a theoretical concept.
Webcasting, which originated in the mid-Nineties, is an increasingly common application of online technologies that benefits businesses and consumers alike. But what is a webcast? Indeed, it's become so ubiquitous that you may not recognize some of the content you consume online as a webcast. However, understanding the definition of a webcast, webcasting's benefits, and how webcasts work is vital to leveraging this application to make communication across your organization cheaper, quicker, and more accessible than ever before!

What is a Webcast?

A webcast is an online broadcast of your meeting, presentation, or other event. It is usually broadcast live using streaming media technologies and recorded simultaneously. The recording may be replayed at a later date for new audiences or shared with initial audience members for reference purposes. Webcasting is often used by television and radio stations that simultaneously broadcast (simulcast) their live programming online, and in the entertainment industry the simulcast of live performances is increasingly common. Traditional businesses, especially those with multisite teams, satellite workers, and remote workers, also use webcasting for meetings, conferences, training, and other operational functions.

Webcasts are often mistaken for webinars, which is why understanding the definition of a webcast is important. Webinars are online organizational gatherings in which interaction is critical. They are often held using videoconferencing software, with the presenter or presenters making ample use of interactive tools, like Breakout Sessions, Whiteboards, and Polls. Thus, webinars are designed for groups ranging from a few dozen to a few hundred. Webinars may, in fact, be broadcast online (webcast) live so that more people can watch the event. By contrast, webcasts are online gatherings primarily designed with larger audiences, in the thousands or tens of thousands, in mind.
Also, webcasts are typically structured as one-to-many or few-to-many broadcasts. Think of a television news program where anchors speak to you without two-way interaction. Webcasts can be used to broadcast standard training courses to a large workforce, display corporate board meetings to the public, or showcase a new product to consumers online, among other applications. Webcasts allow you to increase the audience size for your event. When you broadcast online, people no longer need to be in the same room with you to watch your meeting, learn from your seminar, or witness your presentation. Webcasts allow you to share your event with co-workers, clients, and other key stakeholders across the country and across the globe. Because a webcast is broadcast online, anyone with an Internet-connected device can see your event. Viewers can use smartphones, tablets, laptops, or desktops to connect wherever they are, eliminating the logistical hurdles and costs of on-site attendance. You no longer need to absorb the costs of staff traveling to a regional office for a sales presentation or a conference guest speaker's lodging expenses. Finally, because webcasts are recorded, people who missed your event can view it at a later date. You can give the recording to a colleague who was out sick on the day of the broadcast, and you can repurpose webcast recordings for internal training materials or marketing pieces.

How to Make a Webcast

You'll need a few items to get started with your webcast. Of course, you'll need content, which might include one or several presenters, recorded audio or video, or other material. You should start to nail down your presentation well before you worry about the technical side, especially since webcasts don't take long to set up with today's tools. As you firm up your content, you'll need to determine the device from which you'll be streaming.
If your webcast consists of your CEO addressing the workforce, then a desktop or laptop may be sufficient. However, if you wish to capture multiple speakers and activities, you'll likely need multiple cameras and mics, as well as an encoder. An encoder is a software or hardware application that converts video from its existing format to one suitable for streaming. You'll also need an easy-to-use live-streaming service, like BlueJeans, preferably one with built-in encoding software. When comparing options, look for services that support multiple on-camera presenters and video layouts, integration with social media platforms like Facebook and Twitter, large audience sizes, and secure access control. Need to address a broad audience soon? Try BlueJeans Events for free to quickly and easily stream your webcast anywhere across the globe.
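To make the encoder step above concrete, here is a sketch of the kind of command a software encoder runs to turn a recorded file into a live stream. It assumes ffmpeg as the encoder; the ingest URL and stream key are placeholders, and a real streaming service documents its own ingest settings.

```python
# Sketch: assemble an ffmpeg command that encodes a file for live streaming.
# The ingest URL and stream key below are placeholders, not real endpoints.
def build_encoder_command(source, ingest_url, video_bitrate="3000k"):
    return [
        "ffmpeg",
        "-re",                  # read input at its native frame rate (live pacing)
        "-i", source,           # the recorded presentation to broadcast
        "-c:v", "libx264",      # H.264 video, widely accepted by ingest servers
        "-preset", "veryfast",  # favor encoding speed over compression
        "-b:v", video_bitrate,  # target video bitrate
        "-c:a", "aac",          # AAC audio
        "-f", "flv",            # container commonly used for RTMP ingest
        ingest_url,
    ]

cmd = build_encoder_command("ceo_address.mp4",
                            "rtmp://ingest.example.com/live/STREAM_KEY")
print(" ".join(cmd))
```

In practice you would hand this list to a process runner, but the command itself is the point: the encoder's job is simply transcoding plus delivery to the service's ingest endpoint.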
In this PGP encrypted hard drive recovery case study, the client had used full-drive encryption to secure the data on their laptop. With Symantec PGP whole disk encryption, the entirety of their hard drive was password-protected. PGP encryption, also known as "Pretty Good Privacy" encryption, was invented by Phil Zimmermann in 1991. Technology companies such as Symantec offer software that uses this strong encryption method to protect users' data. PGP encryption helps protect the data on your hard drive from unwanted access. But it doesn't protect your hard drive from physical or logical damage. When this client's laptop failed to boot up one day, the client removed the hard drive. They found that the drive grew very hot when they tried to power it on, and they could not get it to detect on another machine. The client quickly contacted our recovery client advisers here at Gillware Data Recovery and sent the hard drive to our data recovery lab.

PGP Encrypted Hard Drive Recovery Case Study: Laptop Not Booting
Drive Model: Hitachi HTS725050A7E630
Drive Capacity: 500 GB
Operating System: Windows
Situation: Laptop became very hot and wouldn't boot
Type of Data Recovered: User Word and Excel documents
Binary Read: 67.2%
Gillware Data Recovery Case Rating: 9

Firmware and Parts Compatibility Issues

When our data recovery engineers inspected the client's hard drive in our cleanroom, they found that the drive's read/write heads had crashed. There was some moderate damage to the drive's platters as well. The drive needed its read/write heads replaced. Even when two hard drives share the same model number, they are both still special snowflakes. Each hard drive has to be calibrated separately in the factory for its unique tolerances and minor defects. The calibration makes sure the drive's internal components work properly despite those unique differences. The calibration data is stored in a ROM chip on the drive's control board.
A hard drive will never truly behave optimally with another drive's read/write heads inside it, simply because the drive's calibrations do not line up with the unimaginably tiny variations between the two sets of heads. This can make finding suitable donor parts frustrating, and this hard drive was particularly uncooperative with our engineers. Normally, when a hard drive powers on, its read/write heads find the firmware, read it, and store the data in the drive's RAM before continuing normal operations. The drive's new read/write heads wouldn't do this properly. They could read the firmware, but our engineers had to load it into the drive's RAM manually. Due to adaptive drift, it took multiple sets of donor heads to read this hard drive. As a repaired hard drive continues to operate, its operating conditions change; when the conditions shift too far, the replacement parts become incompatible and must themselves be replaced. Eventually, after multiple donors had been used and the drive's condition had continued to degrade, we had gotten all we could get: 67% of the drive's binary.

Symantec PGP Decryption

Symantec PGP whole drive encryption encrypts the entire hard drive (hence the name). Well, almost all of it. The only part of the drive that remains unencrypted is a small portion at the beginning of Sector 0 that tells anything talking to the drive how it's encrypted. There's no way to decrypt the drive on the fly, unfortunately, which puts our engineers in a bind when the drive is damaged. There isn't any way to target used areas of the disk, because there is no way to discern encrypted data from encrypted zeroes. When a drive is damaged to the point where a full (or near-full) disk image isn't possible, the situation is very worrying for our engineers. There's no way of knowing how much we've actually gotten.
If the tiny part of the disk that contains the encryption metadata can't be recovered, then we can't decrypt the recovered data, even with the correct password. And so our logical engineer Cody took the encrypted disk image out of our cleanroom, used the client's password to decrypt the disk, crossed his fingers, held his breath, and waited. As a byproduct of its design, Symantec PGP whole disk encryption takes a very long time to undo, and our engineers are, unfortunately, at its mercy. Cody began the decryption process on a Friday morning. By the end of that day, about five percent of the disk had been decrypted. It wasn't until the next Tuesday that the process finished.

PGP Encrypted Hard Drive Recovery – Conclusion

Cody reviewed the results of this data recovery as soon as the operation finished. The results were very good. Imaging the drive had been a shot in the dark due to the encryption, but our engineers had gotten 99.8% of the drive's file definitions. Of the 99.8% of the files we knew about on the disk, the vast majority had been completely recovered. All of the client's critical data was there. We rated this PGP encrypted hard drive recovery case a 9 on our ten-point data recovery case rating scale.
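The point about being unable to tell encrypted data from encrypted zeroes can be illustrated with a byte-entropy check, a standard trick in data forensics. This is a general sketch, not Gillware's tooling: well-encrypted sectors look like uniform random noise (close to the maximum of 8 bits per byte), so every region of a PGP-encrypted image scores roughly the same and there is nothing for an imaging tool to target.

```python
import math
import os

def shannon_entropy(block: bytes) -> float:
    """Entropy in bits per byte; ciphertext sits near the 8.0 maximum."""
    counts = [0] * 256
    for b in block:
        counts[b] += 1
    n = len(block)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

plaintext = b"Quarterly figures for the Northwest region... " * 100
ciphertext_stand_in = os.urandom(4096)  # stands in for an encrypted sector

print(f"plaintext:  {shannon_entropy(plaintext):.2f} bits/byte")
print(f"ciphertext: {shannon_entropy(ciphertext_stand_in):.2f} bits/byte")
```

Plaintext scores well under the maximum because natural language reuses a small alphabet; encrypted sectors, whether they held documents or empty space, are statistically indistinguishable from one another.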
In the first installment of this two-part series, I discussed how people use virtual reality (VR) for collaboration and how it differs from existing traditional technologies. In this final segment, I'll address why augmented reality (AR) is expected to be even more transformative than VR. But first, I'd like to preface with a quote from Apple CEO Tim Cook: "I regard [AR] as a big idea, like the smartphone. The smartphone is for everyone, we don't have to think the iPhone is about a certain demographic, or country or vertical market: it's for everyone. I think AR is that big, it's huge. I get excited because of the things that could be done that could improve a lot of lives."

What is Augmented Reality?

With virtual reality (VR), users wear headsets that replace the physical world with a 3D computer-generated virtual world. With AR, by contrast, users maintain a view of the physical world, with computer-generated data and graphics inserted into what they're seeing. This perspective is typically accomplished with a smartphone's camera view, an AR headset, or AR glasses. As Mr. Cook notes above, AR is indeed a big idea, so much so that it's hard for us to imagine how it will change our world. Because AR has the potential to alter what we see, the changes are profound on many levels. For example, people walking down the street could all see different billboards. Family members could see different art on the walls and watch separate video streams while sitting in the same room. Holograms in AR take us straight into science fiction, allowing for the closest thing to an in-person meeting we'll ever get. But what about the enterprise? How is AR changing the way we collaborate? To answer these questions, we'll examine how people are using AR today with an eye on how this will change in the future.

AR – As We Know It Today

Today, the most widely used device for AR is the smartphone, and users access AR in two different ways.
If you've ever played Pokémon GO or donned a funny hat filter on Snapchat, you've experienced the smartphone version of augmented reality. The display shows whatever the camera is pointed at, augmented with information or graphics from the AR app. One of the most popular consumer applications of AR today involves home renovations. My family and I recently replaced the flooring in our home, and thanks to AR, we were able to see how different floor colors would look with our walls and furniture. Furniture companies are using AR to give prospective buyers the opportunity to see what distinct pieces would look like in their rooms. Additionally, fireworks maker TNT has an app that allows you to virtually preview different fireworks before buying, either in the store, on your desk, or in your backyard. All of these experiences are available using nothing more than a modern smartphone. AR is also accessible via an AR headset, such as Microsoft's HoloLens 2. The headset has transparent lenses that supplement your real-world view with data or graphics in a more realistic and hands-free experience. The HoloLens 2 also provides a feature Microsoft calls mixed reality, enabling interaction with virtual 3D models appearing in the real-world view. Even at this early stage, we've already seen a game-changer for AR in remote service work. This area is exploding as companies recognize the enormous value in sending field workers to jobs equipped with AR devices to collaborate with remote specialists. The field technician broadcasts video of what they're looking at, while remote specialists guide them through the work, annotating with diagrams and 3D animations of how they should get the job done inside their AR display. Reduced truck rolls and improved time to resolution will quickly pay for a $3,500 headset (the cost of a HoloLens).
Microsoft boasts a 40% reduction in travel time for Mercedes-Benz, where local auto dealers are using AR to get remote assistance from engineers across the world. There are many concrete benefits to this type of operational improvement, including increased customer and employee satisfaction, reduced expenses, and a reduced carbon footprint. One of the reasons I find AR intriguing is that it can check all of these boxes with a single technology. To go beyond the hype, I interviewed Nathan Pettyjohn, Commercial AR/VR Lead at Lenovo, to learn what the smart technology provider sees regarding real-world demand for AR. "We've reached a tipping point… about a year ago, our senior sales leaders started telling us that we are getting asked in every meeting what we can provide in terms of AR and VR," Pettyjohn said. He also stated that the leadership at Lenovo considers this to be a significant part of its future. Lenovo has its own AR headset and partners with RealWear, a maker of head-mounted tablet computers for connected industrial workers.

The Presence of AR in Three to Five Years

It's no secret that AR will see explosive growth in the next three to five years. Last year, Facebook announced ambitious plans to develop augmented reality glasses, and last week released its first foray into wearable tech through a collaboration with Ray-Ban. While not true AR glasses, this is the beginning of where Facebook is heading. Google has been experimenting with AR glasses for a decade. Consumers and industry watchers also expect Apple to make similar announcements, with true AR glasses rumored for release in the three-to-five-year time frame. Industry insiders believe the first generation of consumer AR glasses will compare to the first generation of smartwatches. The glasses will have limited functionality and act as an extension of your smartphone. They will play notifications, provide visual cues for navigation, identify landmarks, and so on.
But this will change quickly, especially as AR converges with AI, cloud, and 5G.

What AR Will Look Like in Five Years and Beyond

At a high level, the long-term future of AR is easy to summarize: it changes everything. What started as a smartwatch substitute will grow into a smartphone replacement. With mature AR glasses, the need for a smartphone goes away. From a collaboration perspective, the convergence of AR and virtual holograms will allow us to have more personal and intimate interactions with remote participants. As we ponder supporting a vast percentage of the remote workforce, this will become increasingly important. This is one area where AR and VR will converge, with remote users entering the digital twin of a meeting room in VR and seeing life-like avatars of the in-person attendees. In-person attendees will also see virtual holograms of remote attendees in their AR glasses. With enhanced quality and spatial audio, it might be hard to tell who's physically present in the room and who's remote. Tim Cook said that AR is a "big idea" like the smartphone. I disagree. I think it's a much bigger idea than the smartphone and will be even more disruptive. AR can change what we see. In effect, AR can alter our reality to the degree that we can't distinguish between what's real and what isn't. AR devices can also potentially analyze everything we see. The privacy issues around AR are as mind-boggling as the technology. The winner of this "bigger than the smartphone" race could end up being the one users trust the most. This post is written on behalf of BCStrategies, an industry resource for enterprises, vendors, system integrators, and anyone interested in the growing business communications arena. A supplier of objective information on business communications, BCStrategies is supported by an alliance of leading communication industry advisors, analysts, and consultants who have worked in the various segments of the dynamic business communications market.
NFC Technology - Benefits And Use Cases
by Smitesh Singh, on Mar 24, 2022 5:24:56 PM

Mobile devices these days have turned into quick payment devices, allowing users to ditch their cash and cards entirely. The technology making this possible is NFC, short for near-field communication. Like Bluetooth, it requires two devices in close proximity to one another in order to function: an NFC reader and a mobile app on the smartphone. So how exactly does NFC technology work? NFC is a subset of RFID, or radio-frequency identification. It employs a frequency of 13.56 MHz, which works for close-range interaction between devices; the devices typically need to be within two inches of each other. When a contactless action is triggered, the devices exchange encrypted data until the intended result is achieved. NFC is usually much more secure than credit and debit cards, and it is a fast, secure, and innovative solution for the mobile-driven era.

NFC use cases for mobile apps

- Monitoring employee attendance: A number of enterprises use identity cards or access cards to approve employee entrance and mark attendance at workplaces and institutions. The hassle of carrying and producing these cards can be done away with by using mobile devices in conjunction with NFC technology. NFC can significantly enhance your security and authorization processes to optimize the management of employee attendance. Because NFC always shares data in encrypted form, it can be used effectively by organizations that need a high level of security.
- Management of assets: Another critical use case for NFC technology is the tracking and management of assets.
With the help of NFC technology, employees can:
- Scan RFID tags on stock items through mobile apps for identification
- Track the return process of an item
- Use NFC tags in storage rooms to check whether they were validated.

- Verification of medications: An effective way to check medications and their authenticity is to put NFC tags on them and use mobile apps to scan them. Patients, nurses, and practitioners can scan these tags via the app to ensure that the medication has not been opened and that it's 100% authentic. Additionally, an NFC tag can contain data on recommended dosages, expiry dates, and side effects, along with other important details.
- Collecting information: Much like a QR code, an NFC tag placed on an item can be scanned for its respective details. These details can range from prices and event information to data on medications inside drug stores.

The benefits of NFC apps

There is no denying that NFC technology on mobile devices is fast as well as secure. But what business benefits does it offer? Here we list some of the benefits NFC technology offers to enterprises:
- Security: The data exchanged between devices is constantly changing, unlike the static data stored on a magnetic stripe card. Moreover, the encrypted nature of NFC data makes it hard for hackers to steal and decipher.
- Speed: Another benefit of NFC technology is that it is much quicker than EMV. EMV chip transactions can take some time given the nature of those cards, but NFC transactions are completed within a couple of seconds.
- Power consumption: NFC is a bit slower than Bluetooth, but it holds a considerable advantage with its lower power consumption. It might be unsuitable for some use cases, but it remains perfect for mobile devices and other devices without much operating power.
NFC is a transformational technology that can be implemented across numerous industries, ranging from warehousing and logistics to healthcare, finance, and beyond. With its growing popularity, it will become ever more embedded in our lives, with important operations completed within seconds through our mobile devices. To get started, get in touch with a mobile app development company.
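The medication-verification idea above ultimately comes down to reading NDEF (NFC Data Exchange Format) records off a tag. As a rough illustration of what an app decodes once the radio layer hands it the tag's bytes, here is a minimal parser for a single short NDEF text record; the record contents are made up for the example, and a production app would use the platform's NFC APIs rather than parse raw bytes itself.

```python
def parse_ndef_text_record(data: bytes) -> dict:
    """Parse a single short-record NDEF text record (TNF 0x01, type 'T')."""
    header = data[0]
    sr = bool(header & 0x10)   # SR flag: short record (1-byte payload length)
    tnf = header & 0x07        # Type Name Format; 0x01 = NFC Forum well-known
    if not sr or tnf != 0x01:
        raise ValueError("only short well-known records handled in this sketch")
    type_len = data[1]
    payload_len = data[2]
    offset = 3
    rec_type = data[offset:offset + type_len]
    offset += type_len
    payload = data[offset:offset + payload_len]
    if rec_type != b"T":
        raise ValueError("not a text record")
    status = payload[0]
    lang_len = status & 0x3F           # low bits: language-code length
    lang = payload[1:1 + lang_len].decode("ascii")
    text = payload[1 + lang_len:].decode("utf-8")
    return {"lang": lang, "text": text}

# Record bytes for an English text record "Rx: 2 tablets daily"
msg = b"\xd1\x01\x16T\x02en" + "Rx: 2 tablets daily".encode()
print(parse_ndef_text_record(msg))
```

The header byte 0xD1 packs the message-begin, message-end, and short-record flags with the well-known TNF; the payload is a status byte, a language code, and the text itself.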
The Routing Information Protocol, or RIP, as it is more commonly called, is one of the most enduring of all routing protocols. RIP has four basic components: routing update process, RIP routing metrics, routing stability, and routing timers. Devices that support RIP send routing-update messages at regular intervals and when the network topology changes. These RIP packets include information about the networks that the devices can reach, as well as the number of routers or gateways that a packet must travel through to reach the destination address. RIP generates more traffic than OSPF, but is easier to configure. RIP is a distance-vector routing protocol that uses hop count as the metric for path selection. When RIP is enabled on an interface, the interface exchanges RIP broadcasts with neighboring devices to dynamically learn about and advertise routes. The Firepower Threat Defense device supports both RIP Version 1 and RIP Version 2. RIP Version 1 does not send the subnet mask with the routing update. RIP Version 2 sends the subnet mask with the routing update and supports variable-length subnet masks. Additionally, RIP Version 2 supports neighbor authentication when routing updates are exchanged. This authentication ensures that the Firepower Threat Defense device receives reliable routing information from a trusted source. RIP has advantages over static routes because the initial configuration is simple, and you do not need to update the configuration when the topology changes. The disadvantage to RIP is that there is more network and processing overhead than in static routing.

Routing Update Process

RIP sends routing-update messages at regular intervals and when the network topology changes. When a router receives a routing update that includes changes to an entry, it updates its routing table to reflect the new route. The metric value for the path is increased by 1, and the sender is indicated as the next hop.
RIP routers maintain only the best route (the route with the lowest metric value) to a destination. After updating its routing table, the router immediately begins transmitting routing updates to inform other network routers of the change. These updates are sent independently of the regularly scheduled updates that RIP routers send.

RIP Routing Metric

RIP uses a single routing metric (hop count) to measure the distance between the source and a destination network. Each hop in a path from source to destination is assigned a hop count value, which is typically 1. When a router receives a routing update that contains a new or changed destination network entry, the router adds 1 to the metric value indicated in the update and enters the network in the routing table. The IP address of the sender is used as the next hop.

RIP Stability Features

RIP prevents routing loops from continuing indefinitely by implementing a limit on the number of hops allowed in a path from the source to a destination. The maximum number of hops in a path is 15. If a router receives a routing update that contains a new or changed entry, and if increasing the metric value by 1 causes the metric to be infinity (that is, 16), the network destination is considered unreachable. The downside of this stability feature is that it limits the maximum diameter of a RIP network to less than 16 hops. RIP includes a number of other stability features that are common to many routing protocols. These features are designed to provide stability despite potentially rapid changes in network topology. For example, RIP implements the split horizon and hold-down mechanisms to prevent incorrect routing information from being propagated.

RIP uses numerous timers to regulate its performance. Following are the timer stages for RIP:

Update—The routing-update timer is the interval between periodic routing updates. This is how often the device sends routing updates.
Generally, it is set to 30 seconds, with a small random amount of time added whenever the timer is reset. This is done to help prevent congestion, which could result from all routers simultaneously attempting to update their neighbors.

Invalid—Each routing table entry has a route-timeout timer associated with it. This is the number of seconds since the device received the last valid update. When the route-timeout timer expires, the route is marked invalid but is retained in the table until the route-flush timer expires. Once this timer expires, the route goes into holddown. The default is 180 seconds (3 minutes).

Holddown—The holddown period is the number of seconds the system waits before accepting any new updates for a route that is in holddown (that is, a route that has been marked invalid). The default is 180 seconds (3 minutes).

Flush—The route-flush timer is the number of seconds since the system received the last valid update until the route is discarded and removed from the routing table. The default is 240 seconds (4 minutes).

As an example, when the interface on an adjacent router goes down, the system no longer receives routing updates from the adjacent router. At this time, the Invalid and Flush timers start increasing. For the first 180 seconds, nothing happens. After 180 seconds, the Invalid timer expires, marking the route invalid, and the Holddown timer starts, holding the route for another 60 seconds. If there is still no update regarding the interface status on the adjacent router (that is, it is still down), then the route enters the Flush state, at which point the system has waited 240 seconds in total from the last update (180 seconds for the Invalid timer and 60 seconds for the Holddown timer), and the system flushes the route. Even if the adjacent router's interface comes up immediately, the system does not accept a routing update until the Holddown timer completes its remaining 120 seconds.
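The update process and hop-count metric described above amount to a simple distance-vector rule, which can be sketched in a few lines. This is an illustrative model, not Firepower configuration: each hop adds 1 to the metric, metric 16 means unreachable, and any changed entry would trigger an immediate update to neighbors.

```python
INFINITY = 16  # in RIP, a metric of 16 means the destination is unreachable

def process_update(routing_table, sender, advertised):
    """Apply one received RIP update (dest -> metric) to a routing table.

    routing_table maps dest -> (metric, next_hop). A sketch of the
    distance-vector rule, not a full RIP implementation.
    """
    changed = []
    for dest, metric in advertised.items():
        new_metric = min(metric + 1, INFINITY)  # each hop adds 1 to the metric
        current = routing_table.get(dest)
        # Accept if the route is new, better, or re-advertised by its next hop
        if current is None or new_metric < current[0] or current[1] == sender:
            if current != (new_metric, sender):
                routing_table[dest] = (new_metric, sender)
                changed.append(dest)
    return changed  # triggered updates would advertise these immediately

table = {"10.0.0.0/8": (2, "192.168.1.1")}
process_update(table, "192.168.1.2", {"10.0.0.0/8": 0, "172.16.0.0/16": 3})
print(table)
```

Here the update from 192.168.1.2 wins the route to 10.0.0.0/8 (metric 1 beats the existing 2) and installs a new route to 172.16.0.0/16 at metric 4, with the sender recorded as next hop in both cases.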
You know that box that you switch off and back on again when your WiFi starts playing up at home… well, that's not an AP, that's a router. An AP, otherwise known as an access point, connects to that box, allowing you to extend the range of your WiFi to a particular location. Let's imagine you live in a massive house and your router is located in the east wing. Extravagant, I know, but necessary for the purpose of this example. Anyway, here in the east wing the router provides a strong WiFi connection. In the west wing, however, the connection is poor and somewhat non-existent. So, what do we do? We install an AP in the west wing and connect it to the router in the east wing via an ethernet cable. Hey presto, we now also have a strong connection in the west wing. Although used in the home, APs are more commonly used by businesses that occupy large public spaces, e.g. hotels, offices, cafes, restaurants, train stations, airports… you get the idea. More often than not, businesses will use multiple APs to ensure a strong and consistent connection across all areas for their customers. The use of one or more APs to provide WiFi in a single location is known as a hotspot. Physically, APs are nothing special. They're a small, fairly insignificant piece of hardware, not too dissimilar from your typical router. What they do, however, is considerably more interesting. Their ability to act as a central transmitter and receiver of wireless signals allows us, our friends, families, and colleagues to connect to the internet instantly and wirelessly almost anywhere. Remember dial-up… We've come a long way since then. So, why should businesses invest in APs and what are the benefits? Offering widespread wireless internet access for customers opens doors. A restaurant or café may benefit from customers staying longer and potentially purchasing more items. A hotel may benefit from increased bookings; we all know free WiFi is among the main deciding factors when choosing a place to stay.
It doesn't stop there. This is where APs start to get interesting. Our burning desire to search for and connect to public WiFi networks everywhere we go presents an opportunity for businesses to gain valuable insight and data around customers and how they interact within their physical spaces. Think about the last time you visited an airport. Did you connect to their WiFi network? Did you enter your details and agree to their terms in order to do so? I guessed as much. Imagine how many others did this as well. Airports are big places. That's lots of people and lots of data. Every time a customer connects to WiFi through an AP, the AP starts to collect data on the device that has connected to it. How? Let me tell you. APs collect data by recording the unique identification code that every connecting device presents, known as a MAC address. Using the MAC address, the device (and therefore the customer) can be tracked. This allows businesses to see what device the customer is using and how they are interacting within the space. Footfall, passers-by, dwell times, bounce rates and return visits can all be tracked through the MAC address, and by monitoring the connection to the internet businesses can also capture the websites that the MAC address visits, giving insight into customer interests. What's more, businesses can add a social media login page to not only provide a frustration-free way for customers to access the WiFi, but to collect contact information and begin to build detailed customer profiles. With the right cloud software enabled over the existing WiFi network, businesses can access this wealth of rich WiFi analytics with ease and use it to develop targeted, real-time marketing campaigns to improve ROI. For example, a restaurant may use the data to send a 2-4-1 cocktails offer via email or SMS to a customer passing by, or a clothing store may use the data to send a 10% off voucher to a customer following a recent purchase to encourage a repeat visit.
This highly targeted marketing not only has significant benefits for businesses, but it also works towards creating an outstanding customer experience: the ultimate aim of any successful marketing strategy.
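The footfall and dwell-time metrics described above fall out of very simple arithmetic once an AP logs MAC addresses with association timestamps. The sketch below computes visits and average dwell time per device from a made-up log; the log format and MAC values are assumptions for illustration, and in practice MAC randomization on modern phones complicates this kind of tracking.

```python
from datetime import datetime

# Hypothetical AP association log: (MAC address, connect time, disconnect time)
log = [
    ("aa:bb:cc:dd:ee:01", "2024-05-01 09:00", "2024-05-01 09:45"),
    ("aa:bb:cc:dd:ee:01", "2024-05-02 12:10", "2024-05-02 12:40"),
    ("aa:bb:cc:dd:ee:02", "2024-05-01 10:00", "2024-05-01 10:05"),
]

def dwell_and_visits(entries):
    """Per-MAC visit count and average dwell time in minutes."""
    stats = {}
    for mac, start, end in entries:
        t0 = datetime.strptime(start, "%Y-%m-%d %H:%M")
        t1 = datetime.strptime(end, "%Y-%m-%d %H:%M")
        minutes = (t1 - t0).total_seconds() / 60
        visits, total = stats.get(mac, (0, 0.0))
        stats[mac] = (visits + 1, total + minutes)
    return {mac: (v, total / v) for mac, (v, total) in stats.items()}

stats = dwell_and_visits(log)
print(stats)
```

Device ...:01 is a return visitor (two visits averaging 37.5 minutes), while ...:02 bounced after five minutes; layered over many devices, that is exactly the footfall, dwell-time, and return-visit reporting described above.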
Part one in our latest blog series explores the difference between Artificial Intelligence and Machine Learning, how each can deliver value, and some practical considerations around implementation. There has been a dramatic increase in awareness around Artificial Intelligence and Machine Learning – which appears to be the technology buzzword of the moment. While there are some good opportunities for Machine Learning and Artificial Intelligence, there are a number of challenges, particularly around data security and ethics. In this new blog series, we aim to explore the differences between Machine Learning and Artificial Intelligence, demystify some of the common myths associated with these approaches and outline some of the critical points that organisations need to consider before jumping in headfirst – all within an information security context.

- Isn't Machine Learning the same as Artificial Intelligence?

No, they are different, although the terms are commonly confused and often used interchangeably. To quote Forbes: Artificial Intelligence (AI) refers to devices that are designed to act intelligently. AI is often classified into one of two fundamental groups – applied or general. Applied AI is far more common and refers to systems designed to intelligently trade stocks and shares, or manoeuvre an autonomous vehicle, for example. Generalised AI refers to systems or devices which can in theory handle any task, and these are much less common. This is the area that has led to the development of Machine Learning (ML), which is often referred to as a subset of AI. Machine Learning's key differentiator is that the device learns how to do a task rather than being programmed to complete it, which requires training. A common example is a ML system used to detect brain tumours in MRI scans. It was shown thousands of images of brains with and without tumours, and throughout was told whether a tumour was present.
After the learning phase, the system could easily identify whether a brain had a tumour. This example shows that ML is very good at complex image tasks so long as there is a relatively simple answer. - What is the track record of Machine Learning and Artificial Intelligence in delivering value? It’s too early to tell. Although Machine Learning has been transformative in some fields, effective and accurate Machine Learning is challenging because there is often simply not enough data available with clear outcomes; as a result, many Machine Learning programmes fail to deliver their expected value. Today there is much hype around AI and ML, and business executives are generally receptive. On the other hand, some people’s expectations of what Machine Learning can do in practice far exceed what is possible or even reasonable. Today ML and AI are being employed in a handful of industries to support low-complexity, high-volume processes, but ultimately a human often has to be involved to validate any decision. In the healthcare example given previously, ML is used to detect tumours, but a human radiographer still validates the results before surgery. If you speak to your digital assistant at home, ML is used to convert your speech to text, after which a basic AI engine picks out key words to respond to you. It is significantly more challenging to apply ML in a text-based environment due to complexities of language and syntax, so involving a human becomes even more critical to ensure accuracy of decisions. - Is Machine Learning easy to implement? Not really. There are two parts to any ML project: first, you may need a very powerful computer with very specialised software; second, you need a team experienced in finding the correct data for the ML engine to learn from. This ongoing problem contributes to a backlog of Machine Learning inside the enterprise.
In fact, there is at least a ten-year backlog of Machine Learning projects locked inside large companies. Data scientists often need a combination of practical application skills as well as in-depth knowledge of science, technology and mathematics. Recruiting them will require you to pay big bucks, as these employees are in high demand and know their worth. Given the recent emergence of ML and AI, service providers have staffing shortages, and those who do have skilled data specialists to support a deployment will charge a high premium. - Does Machine Learning require organisations to have large infrastructures? Yes. Machine Learning can require vast amounts of data processing capability. This is additive to the existing systems processing data across the environment, which will already be drawing on processing resources. Legacy systems often can’t handle the workload and buckle under the pressure. Companies often rent servers from data-centre suppliers. While people might be happy to send a text from their home assistant, companies are still reluctant to send vast amounts of confidential data outside of their organisation. Boldon James is actively researching AI and ML techniques to identify areas where Machine Learning can help in data classification. For more information please contact us. Later in the series, we will talk about AI and ML for security and data classification…
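The learn-from-examples idea in the brain-tumour illustration — a model trained on labelled data rather than programmed with rules — can be sketched with a toy classifier. This is a minimal, hypothetical example: a nearest-centroid model over two invented numeric features, not a real medical system or any particular library.

```python
# Toy supervised learning: the "model" is learned from labelled examples,
# not programmed with explicit rules. Features and labels are synthetic.
training = [
    ((1.0, 1.2), "benign"), ((0.8, 1.0), "benign"), ((1.1, 0.9), "benign"),
    ((3.0, 3.2), "tumour"), ((2.8, 3.1), "tumour"), ((3.2, 2.9), "tumour"),
]

def fit_centroids(data):
    """Learning phase: average the feature vectors seen for each label."""
    sums, counts = {}, {}
    for (x, y), label in data:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {label: (sx / counts[label], sy / counts[label])
            for label, (sx, sy) in sums.items()}

def predict(centroids, point):
    """Inference phase: pick the label whose centroid is nearest."""
    px, py = point
    return min(centroids,
               key=lambda l: (centroids[l][0] - px) ** 2 + (centroids[l][1] - py) ** 2)

model = fit_centroids(training)
print(predict(model, (0.9, 1.1)))  # benign
print(predict(model, (3.1, 3.0)))  # tumour
```

The "relatively simple answer" point from the article shows up here too: the model only ever outputs one of the labels it was trained on.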
Oct 2, 2019 October marks the 15th year since the Department of Homeland Security created National Cyber Security Awareness Month (NCSAM). Each year, security experts and organizations work hard to address cyber security issues and trends in types of attacks. Despite this, cyber crimes are on the rise, particularly attacks against small and medium-sized businesses. The news may publicize hackers and data breaches at large companies, but 58% of cyber-attack victims are small businesses, according to the Verizon 2018 Data Breach Investigations Report. The report also states that a quarter of these attacks are caused by people inside organizations, though it does not clarify whether those attacks are intentional. Cyber-attacks on small and medium-sized companies can cost more than 2 million dollars in damaged IT assets and operational downtime. Small businesses are often targeted because they tend to lack the security resources of larger companies, even though the criminals might not be able to extort them for as much money. As IT professionals work to address the common gaps in cyber security, criminals work to develop new and more creative ways to access your private information. File-less attacks, for example, are reportedly 10x more successful and much harder to trace than methods such as ransomware, and they are increasing in number in response to stronger defenses against older tactics. Criminals using file-less techniques do not need to install their own software or files; they hijack software that is often already present on Windows machines, such as Adobe Flash. The attack begins when an unsuspecting employee clicks a suspicious link, often sent by email, from which the hackers take control and turn the computer against itself. This year alone, several major companies experienced data breaches that cost over 300 million dollars. These data breaches could have been prevented through employee education and stricter cyber security methods.
There is no single way to prevent cyber attacks; instead, experts recommend layering several locks and measures. For example, moving your information to the cloud might seem like the safe way to secure your network, but if best cyber security practices are not implemented alongside the technology, cyber criminals can still be a threat. Promptly updating and patching applications and operating systems, changing your passwords often, and continually training your staff are all just as valuable to the security of your data as your monitoring tools and backups. Small and medium-sized businesses have an advantage when dealing with some cyber security issues, awareness being one. A smaller company can better inform team members of problems to look out for, and prevention methods are easier to enforce. A small company can also benefit from outsourcing security and backup data solutions to IT specialists, for much less than a large company would pay or than building a security team in-house would cost. Concerned about your organization's security? Schedule a network assessment today OR contact one of our certified network security engineers with your questions.
A team of researchers with members from Oxford University in the U.K., the National Institutes of Health in the U.S. and Leiden University Medical Center in the Netherlands has developed a new approach to battling malaria—boosting an immune response in the liver. In their paper published in the journal Science Translational Medicine, the group describes their approach and how well it worked in mice. Malaria is a complicated disease caused by a parasite rather than a bacterium or a virus, carried by mosquitoes—and it has a two-stage infection process in its host. Most approaches to fighting the disease involve treating a patient with an agent that kills the parasite after it has caused an infection—and sadly, the parasite has begun to develop resistance to most of them. Because of that, scientists around the world have been working toward a new vaccine, one that, in addition to being effective, would not be so easily overcome. In this new approach, the researchers go after the parasite before it has a chance to reproduce. When a person is bitten by an infected mosquito, sporozoites enter the bloodstream and make their way to the liver. Once there, they reproduce asexually, producing multiple merozoites, which go on to infect red blood cells. The researchers wondered if it might be possible to get the liver to mount more of a response when it detects the presence of sporozoites, killing them before they can reproduce. To test their idea, the group developed a vaccine that works by activating tissue-resident memory CD8+ T cells in the liver—it makes its way there via a virus-based delivery mechanism. Once there, it remains resident for up to six months. If sporozoites are detected, the T cells activate, killing them. The researchers report that the approach has worked very well in mice and has thus far been shown to be safe for human use.
Next up will be clinical trials to determine if the new vaccine is an effective preventative measure for people living in malaria-endemic regions. The researchers also note that if their method works as hoped, it might also be used to fight other types of infections that get their start in organs such as the liver. More information: Anita Gola et al., "Prime and target immunization protects against liver-stage malaria in mice," Science Translational Medicine (2018). DOI: 10.1126/scitranslmed.aap9128
Phishing is a form of cyberattack in which a message recipient is tricked into clicking a link through to a fake webpage with the aim of persuading them to enter personal information. Although phishing is mostly carried out over email, the activity has now spread to social media, messaging services and apps. The goal of the scammer is to trick the target into doing what the scammer wants in order to infiltrate an aspect of the target's personal or work life. That might be handing over passwords to make it easier to hack a company, or altering bank details so that payments go to fraudsters instead of the correct account. "Last year I fell victim to a really clever password phishing attack, in which a hacker used a fake Google Authenticator page to steal my password and 2-factor code."
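One common tell in phishing messages is a link whose visible text shows one domain while the underlying URL points somewhere else. That mismatch can be checked mechanically; a minimal Python sketch follows, with invented domains, and a deliberately naive two-label domain comparison that ignores suffixes like co.uk.

```python
from urllib.parse import urlparse

def base_domain(host: str) -> str:
    """Naive registrable domain: last two labels (ignores suffixes like co.uk)."""
    return ".".join(host.split(".")[-2:])

def looks_like_phishing(display_text: str, href: str) -> bool:
    """Flag links whose visible text names one domain while the URL points to another."""
    shown = urlparse(display_text if "://" in display_text
                     else "http://" + display_text).hostname
    actual = urlparse(href).hostname
    if not shown or "." not in shown or not actual:
        return False  # the visible text isn't a domain; nothing to compare
    return base_domain(shown) != base_domain(actual)

print(looks_like_phishing("www.example-bank.com", "https://login.example-bank.com/"))  # False
print(looks_like_phishing("www.example-bank.com", "https://evil.example.net/login"))   # True
```

A real mail filter does far more than this, but the core signal — the shown destination disagreeing with the actual one — is exactly the trick described above.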
Machine Learning Era, No Chance for Error Education Cybersecurity Weekly is a curated weekly news overview for those who are concerned about the Education industry and Education data breaches. It provides brief summaries and links to articles and news across a spectrum of EdTech. This issue looks at the machine learning era: among students, depression is associated with a mirthless monster that runs into the classroom, grabs a student's hand and drags them away to watch sad TV series — but scientists have found a way to deal with it. Meanwhile, Pearson, one of the world's largest publishers, became a target for cybercriminals, with a data breach affecting nearly 13,000 school and university accounts. Tsunami of cybercrimes destroying education sector EdScoop on August 2, 2019 Security experts keep warning that cybercrime in the education sphere will not stop, and here is one more proof. Pearson, one of the world's largest publishers, became a target for cybercriminals. The data breach affected nearly 13,000 school and university accounts. According to Scott Overland, Pearson's director of media relations, "the exposed data was isolated to first name, last name, and in some instances may include the date of birth and/or email address." The upward trend of ransomware attacks took over a school district in northeastern Oklahoma. Admins of Broken Arrow Public Schools are now working with cybersecurity experts to eliminate the consequences of the network interruption. Notably, the number of ransomware attacks since 2013 has already reached 200 incidents. Finally, higher-ed institutions are still vulnerable to cyber threats. As stated by The Telegraph, leading British universities including Oxford and Cambridge have not turned on DMARC authentication for their domains, which helps to protect users against phishing emails.
Students’ depression to be measured: machine learning opportunities The Tech Edvocate on August 2, 2019 Among students, depression is associated with a mirthless monster, running into the classroom, grabbing a student's hand and dragging them away to watch sad TV series. However, scientists have found a way to deal with it. Machine learning, as an AI application, provides systems with the ability to automatically learn and improve from experience. MIT researchers have invented a neural-network model that is able to analyze raw text and audio data to reveal speech patterns that may signal depression. Moreover, this model could be developed into a smartphone application, so that the user's data would be monitored for emotional and mental decline and, in case of distress, the system would inform someone appropriate. Weather forecast by ML: thunderstorms become more predictable Science|Business on July 30, 2019 Researchers at the Finnish Meteorological Institute and Aalto University have joined together to apply machine learning tools to weather forecasting. The result of this work is especially beneficial for electricity companies: based on ML algorithms, they are now able to predict the severity of storms and which of them may cause blackouts. Storms are made up of many elements that can indicate how damaging they can be: surface area, wind speed, temperature and pressure, to name a few. "By grouping 16 different features of each storm, we were able to train the computer to recognize when storms will be damaging." — Roope Tervo, a software architect at the Finnish Meteorological Institute Chinese AI experiment can reshape the way the world learns MIT Technology Review on August 2, 2019 If in past centuries the Chinese were one step ahead, inventing such common things as paper, the magnetic compass, printing, silk and gunpowder, today this list can be expanded with a grand experiment in AI education. Last year, China invested over $1 billion in AI education.
While millions of students now use some form of AI to learn, both tech giants and startups have jumped in. Indeed, experts have emphasized three things that led to the AI boom: - AI ventures, aiming to improve learning effectiveness and other aspects of the education process, are provided with tax breaks and other incentives. This also makes them attractive to VCs. - Chinese academic competition and the “gaokao”, a college entrance exam, motivate students to take charge of their learning and make parents pay for every opportunity that may help their children get ahead. - The final reason is economic – Chinese entrepreneurs are interested in refining their algorithms and have large masses of data with which to do it, despite the ethical questions around data privacy.
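Returning to the storm-forecasting item above: Tervo's feature-grouping idea can be sketched with a toy "damaging storm" classifier. Everything here is invented for illustration — three synthetic features instead of sixteen, a made-up labelling rule, and a one-split decision stump rather than the Institute's actual models.

```python
import random

random.seed(0)

def make_storm():
    """One synthetic storm record: (surface_area_km2, wind_m_s, pressure_drop_hpa)."""
    area = random.uniform(1, 100)
    wind = random.uniform(5, 40)
    drop = random.uniform(0, 30)
    damaging = 1 if wind > 25 else 0  # invented ground-truth rule
    return (area, wind, drop), damaging

data = [make_storm() for _ in range(400)]

def train_stump(data):
    """Learn the single feature/threshold split that best separates the labels."""
    best = None
    for f in range(len(data[0][0])):
        for t in sorted({x[f] for x, _ in data}):
            acc = sum((x[f] > t) == bool(y) for x, y in data) / len(data)
            if best is None or acc > best[2]:
                best = (f, t, acc)
    return best  # (feature_index, threshold, training_accuracy)

feature, threshold, accuracy = train_stump(data)
print(f"learned rule: feature {feature} > {threshold:.1f} (accuracy {accuracy:.0%})")
```

The stump recovers the hidden wind-speed rule from the labelled examples alone, which is the point of the quote: the model is trained on examples of damaging and harmless storms rather than hand-coded.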
The economy of a state can be greatly affected by the use of Bitcoin. Bitcoin is a type of digital currency used for online transactions. The use of Bitcoin can help to improve a state's economy by increasing the level of trade, and it can also help to increase the level of tourism. It has been established that there is a direct correlation between the use of Bitcoin and an increase in trade, and it is also evident that some states in the U.S. are currently experiencing economic growth attributed to the use of Bitcoin. For example, both New Hampshire and California are currently at their peak when it comes to trade, because they have shown great potential in terms of accepting Bitcoin payments for transactions, especially on online platforms. It has also been established that a good number of travellers from all over the world opt to pay using their credit cards. Therefore, if a state is able to make Bitcoin a legal means of payment, it can increase the level of trade within its borders and at the same time improve its economy by inviting more tourists from all over the world. This will be very beneficial to both the business sector and the state's economy. There are a number of benefits that a state can get from using Bitcoin. First, it can help to increase the level of trade within the state: some states, such as New Hampshire and California, have experienced an increase in trade since they started accepting payments in Bitcoin, and this has helped their economies significantly. Bitcoin Promoted Tourism Also, the use of Bitcoin can help to improve tourism within a state. There are foreign travellers who prefer to pay using credit cards when they visit other countries, for various reasons such as comfort and security.
The possibility of using bitcoins in different states may be very beneficial, since it will give travellers the opportunity to pay using this type of currency. This will make them feel comfortable when they are visiting other states. It can also help to reduce crime within a state by improving the level of security, especially in terms of online transactions. Some criminals use Bitcoin to carry out their illegal activities, since it is very difficult to track their activities when they are using this kind of currency. On the other hand, it is much easier to trace criminals who use credit cards for their illegal activities, because their credit card numbers can be linked to the personal information that identifies them. The above has established that most states within the U.S. are experiencing an increase in trade since they have shown great potential in terms of accepting Bitcoin payments for transactions, especially on online platforms, and that there is a direct correlation between the use of Bitcoin and an increase in trade. It has also been established that some states within the U.S. are currently experiencing economic growth due to the use of Bitcoin. There are a number of benefits that a state can get from using Bitcoin, such as increased trade, improved tourism and reduced crime, all of which help to grow its economy. The Rise in Bitcoin Trading The rise in bitcoin trading leads to an increase in the number of people interested in investing in cryptocurrency. This, in turn, causes the price of bitcoin to rise even further. An increasingly popular form of investment, Bitcoin has a number of interesting features that make it stand out from traditional currency.
Bitcoin is not controlled by banks or other financial institutions and transactions are made between individuals without the need for a central regulatory body. Bitcoin transfers can be carried out instantaneously and anonymously, making them highly desirable to those who wish to remain incognito. Bitcoin has seen steady growth in value since its conception in 2009, making it attractive to investors. This growth is set to continue into the future which makes bitcoin an interesting prospect for those looking for alternative methods of investment. However, the number of people trading in bitcoin is rising exponentially and this can affect the price of the currency. As more and more people become aware of the potential returns that bitcoin can give, the value of the cryptocurrency increases as demand continues to rise. This cycle has also been witnessed in many other investments such as housing and various commodities; however, it is yet to be seen whether this will work for cryptocurrencies like bitcoin.
A set of specifications for data transfer between devices on a local network. The standards describe packet-switching technology, protocols, and the physical implementation of the connection. Ethernet is the most common LAN technology. It transfers data as frames — blocks of information consisting of a header and a payload. The header specifies the MAC addresses of the sender and receiver, as well as other operational information. Each frame also carries a checksum, so that frames corrupted in transit can be detected and discarded. In the simplest networks, frames are transmitted in broadcast mode, that is, to all nodes on the network. To reduce the amount of junk traffic and deliver frames only to the recipient specified in the header, special devices are used — network switches within the LAN and, between networks, routers.
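The frame layout described in this entry — destination MAC, source MAC, then type and payload — can be parsed with a few lines of Python. This is a minimal sketch over a hand-crafted example frame (real captures also carry a trailing 4-byte checksum, the FCS, which network cards normally strip).

```python
import struct

def parse_ethernet_frame(frame: bytes):
    """Parse the 14-byte Ethernet II header from a raw frame."""
    if len(frame) < 14:
        raise ValueError("frame too short for an Ethernet header")
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    fmt = lambda mac: ":".join(f"{b:02x}" for b in mac)
    return {"dst": fmt(dst), "src": fmt(src),
            "ethertype": ethertype, "payload": frame[14:]}

# Hand-crafted example: broadcast destination, made-up source, IPv4 EtherType
frame = bytes.fromhex("ffffffffffff" "00163e112233" "0800") + b"payload-bytes"
info = parse_ethernet_frame(frame)
print(info["dst"])             # ff:ff:ff:ff:ff:ff
print(hex(info["ethertype"]))  # 0x800
```

The all-ones destination address is the broadcast address mentioned above: a frame sent to it is delivered to every node on the segment.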
You may implicitly trust the Wi-Fi network at your local coffee shop or in your office, but there are times when sending data over Wi-Fi makes it more vulnerable to attack. This is especially true when using unsecured public Wi-Fi networks like the ones found in hotels, airports and coffee shops. However, even your office network could have vulnerabilities to fix. Follow these tips to make sure your data stays safe no matter what wireless network you are using. 1. Use secure sites In the earlier days of the web, most sites were not secure and used the protocol HTTP to send and receive information. Now the internet standard is HTTPS, a protocol that ensures any information sent to or from a site is encrypted and thus largely shielded from hackers. Just check the beginning of a URL before you visit to make sure it starts with HTTPS and not HTTP, whether you're on public Wi-Fi or your office network. These days, about 84% of websites have switched over to HTTPS because it is an easy change that protects web users. 2. Use a VPN A virtual private network, or VPN, is a great tool for keeping company information safe on Wi-Fi. It makes it appear as if you are browsing under a different IP address and can be used to mask your physical location from hackers. It also encrypts your data – creating what is frequently referred to as an encrypted tunnel – and even masks your online activity from the internet service provider of the network you are using. VPNs are generally very affordable, and they are an easy way to make your employees, clients, vendors and anyone you do business with feel secure about working with you over Wi-Fi. Using a VPN is always a good idea, especially when connecting over public Wi-Fi. 3. Use strong passwords on office networks No more using “password123” or the name of your company as your Wi-Fi password. It's best to use a longer, more complex or random password, since more (and more diverse) characters make it harder for hackers to guess or brute-force.
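A random password of that kind is best produced by a generator rather than invented by hand. A minimal sketch using Python's standard `secrets` module — the length of 24 is an arbitrary choice for illustration:

```python
import secrets
import string

def generate_password(length: int = 24) -> str:
    """Build a password from cryptographically secure random character choices."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a different 24-character password every run
```

`secrets` exists precisely for this: unlike the `random` module, its output is suitable for security-sensitive uses.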
Some experts recommend using somewhere between 16 and 30 characters. You should also make sure your Wi-Fi network uses the updated WPA3 protocol instead of WPA2. With WPA2, hackers can literally go through the whole dictionary trying to find your Wi-Fi password without detection; WPA3 puts a halt to that. In order to use WPA3, your Wi-Fi router and other access points must be officially certified, so an equipment upgrade may be in order. Even with WPA3, you should still use a strong password. 4. Follow all security recommendations No matter what type of Wi-Fi network you are on, the greatest risk is not necessarily eavesdropping or hacking. These days, more and more hackers use social engineering attacks like phishing and spearphishing to gain direct access to your data. Even if you do everything right in terms of Wi-Fi security, your company could still experience a data breach. Educate employees on all types of best security practices, like how to spot phishing messages and how to handle strange attachments. Your company data will remain safer regardless of how employees connect online. Contact KME Systems for more information about how to secure your company's information over any type of wireless network.
What Is Semi-Structured Data? Semi-structured data represents a hybrid of the structured and unstructured data types. This group could include Excel spreadsheets that contain important financial information from which the data itself is hard to extract. These data objects may have structure within them, but they lack the external structure needed for standard data management processes. Like unstructured data, these objects contain important insights that can be hard to extract and apply without an intelligent data governance strategy. Semi-structured data refers to any information that uses a self-describing schema, such as XML or JSON. These types of data have an open-ended schema that enables application data flexibility. Sometimes, this type of data is combined with structured data to record additional properties for specific types of records within a structured data store. The open-ended schema means that semi-structured data does not rely on the application that created it to define the embedded structure. For example, an Oracle database would be considered a structured data type: the rules governing the database are defined and enforced by the application that creates the file — in this case, the database. With semi-structured datasets, the definitions and constraints are embedded within the file, regardless of the application that created it. For example, XML files and cascading style sheets for web pages are both forms of semi-structured data. They can be created by almost any kind of application — such as Notepad, a website builder or an Office app like Word — so there is no way for the application to apply structure or rules to these data types. Semi-structured data is challenging for organizations to manage because it does not necessarily have the same level of organization and predictability as structured data. It does not reside in fixed fields or records.
At the same time, it has more rigidity than unstructured data, because it contains elements that can separate the data into various hierarchies (think of comma-delimited or tab-delimited files). Unlike structured data — which represents data as a flat table — semi-structured data can contain n-level hierarchies of nested information. This means that it can be easy to apply standard data management processes to semi-structured data, and easy to extract insights from it. The real issue is making sure your business has the tools and technology necessary to load the data into structured or unstructured data models, which can be managed via data governance.
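The self-describing, nested nature of such data is easiest to see with a small example. A minimal Python sketch follows, using an invented JSON record: the schema travels with the data, the hierarchy nests to several levels, and a few lines flatten it into the table-like rows a structured store expects.

```python
import json

# A hypothetical semi-structured record: field names (the schema) are
# embedded in the data itself, and nesting goes several levels deep.
record = json.loads("""
{
  "customer": "Acme Ltd",
  "orders": [
    {"id": 1, "lines": [{"sku": "A-100", "qty": 2}, {"sku": "B-200", "qty": 1}]},
    {"id": 2, "lines": [{"sku": "A-100", "qty": 5}]}
  ]
}
""")

def flatten(record):
    """Load the nested hierarchy into flat, table-like rows."""
    return [
        {"customer": record["customer"], "order_id": o["id"],
         "sku": l["sku"], "qty": l["qty"]}
        for o in record["orders"] for l in o["lines"]
    ]

rows = flatten(record)
print(len(rows))  # 3 flat rows from a 3-level hierarchy
```

No external application defined this structure; the keys inside the document are the schema, which is exactly what "self-describing" means above.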
Here is why High Voltage Cables are trending in the industry. This article covers high voltage cables: their types, components, advantages and disadvantages in application. The demand for electricity is increasing with the growth in population, so its proper supply must be guaranteed. Supplying electricity over long distances requires high voltage cables. The properties of these cables make them effective and efficient for transmission and minimize power loss over long distances. The inclination towards renewable energy sources such as solar and wind involves the construction of new power grids, and the replacement of old power grids with new transmission lines will increase the demand for high voltage cables. What are High Voltage cables? Before looking at high voltage cables, we need to understand what high voltage is. A voltage above a certain threshold is called high voltage; voltage exceeding 1,000 volts is generally considered high and is dangerous to living beings. High voltage is used for electrical transmission over long distances in order to reduce current losses. High voltage cables are used to transmit electric power at high voltage. As these cables are fully insulated, they protect living beings from the electric shock that high voltage causes, which may be fatal. A high voltage cable consists of an inner conductor that is insulated from the outside. The conductor carries high voltage, so more resistant cables that can bear high voltage are required. Components of High Voltage Cables High voltage cables primarily consist of a conductor and insulation. Apart from these, many other components are present, such as the conductor shield, non-metallic insulation screen, metallic insulation screen, laying up, inner sheath and outer sheath. Modern high voltage cables are simpler and consist of fewer parts.
These include conductors, a conductor shield, an insulation shield, a metallic shield, joints, and jackets. The functions of these components are as follows: - Circular copper or aluminium conductors carry the continuous load current and must withstand short-circuit current. - The conductor shield is made up of semiconducting material. It reduces the chance of electrical discharge at the interface of conductor and insulation. - The insulation parts consist of polymers that protect the conductor and ensure better transmission of high voltage with minimal loss. - The cable jackets are made up of polymeric compounds that keep moisture out and prevent the cable from corroding. They also minimize chemical ingress, which increases the lifespan of the cables. Types of cables Cables are used to transmit electrical energy, safely supplying electricity from the power source to different loads, and different cables are designed accordingly. - Rubber cables are made up of oil products or rubber obtained from tropical trees. They are very liable to damage, as rubber absorbs moisture. - Rubber cables are modified to form vulcanized rubber cables, in which rubber is mixed with mineral compounds like zinc, lead, or sulfur. Vulcanized rubber cables are more durable and have higher mechanical strength than plain rubber cables. - Polyvinyl chloride cables have good dielectric strength. They are tough and durable because polyvinyl chloride is used for the insulation. Advanced cables such as polychloroprene cables, XLPE cables, MICC cables, and PILSWA cables are widely used. Polychloroprene cables are resistant to heat and more durable. XLPE stands for cross-linked polyethylene; these cables tolerate higher temperatures and have good insulating properties. Mineral insulated copper-clad (MICC) cables are made by placing copper rods in a copper tube and filling the gap in between with magnesium oxide powder.
PILSWA cables are mostly preferred for underground cable systems, as the lead sheath covering protects the cables from corrosion and chemical penetration and extends their lifespan.

Specifications of high voltage cables
- The features of high voltage cables make them especially effective when working with specific switching and distribution panels.
- The supply of high voltage through these cables can be regulated from control rooms and, at the same time, easily monitored remotely.
- Substations lower the voltage carried by high voltage cables in order to supply electricity to local areas.
These are a few specifications of high voltage cables.

Advantages and disadvantages
- High voltage cables have many advantages. They can transmit electricity over long distances, and because they are highly insulated, leakage losses are almost negligible. High voltage cables also minimize harm to living beings.
- Stability problems are almost nonexistent, and the cables have a higher carrying capacity. Voltage can be regulated at power centers and substations.
- The high cost of high voltage cables is a major restraint on their application. Installation costs are also high, which makes them challenging to use without substantial funding.
These are a few disadvantages of high voltage cables, but in terms of performance they have no shortcomings.

The bigger picture: the high voltage cable market has been growing rapidly as electricity demand rises with the growing population. The establishment of small and large scale industries and companies requires electricity, adding further demand. With rising disposable income, people depend more on the internet, and the use of mobile phones, laptops, and desktops further increases the need for electricity. All of these factors benefit the high voltage cable market.
Due to the shift towards renewable sources of energy, old grid networks are being replaced. The establishment of new transmission lines is an important factor driving the growth of the high voltage cable market in the coming years. Growing trade between countries is increasing the construction of overhead, submarine, and underground transmission lines, which is expected to facilitate better interconnection between them. This is predicted to increase demand for high voltage cables and, in turn, drive the growth of the market.
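The claim that high voltage minimizes transmission loss follows from P = VI and P_loss = I²R: for a fixed power delivered over a fixed line resistance, raising the voltage lowers the current, and resistive loss falls with the square of the current. A minimal sketch (all figures are illustrative, not from any real line):

```python
def line_loss_kw(power_kw: float, line_voltage_v: float, line_resistance_ohm: float) -> float:
    """Resistive loss I^2 * R for a simple single-conductor line model."""
    current_a = power_kw * 1000.0 / line_voltage_v        # I = P / V
    return current_a ** 2 * line_resistance_ohm / 1000.0  # watts back to kW

# Same 10 MW load over the same 5-ohm line, at two transmission voltages:
loss_at_11kv = line_loss_kw(10_000, 11_000, 5.0)    # roughly 4,132 kW lost
loss_at_132kv = line_loss_kw(10_000, 132_000, 5.0)  # roughly 29 kW lost
```

Stepping the voltage up 12-fold cuts the loss by a factor of 144, which is the whole economic case for high voltage transmission.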
Twitter has taken the world by storm. For many people around the world, it is an essential source of information on world events, as well as a way to express themselves online. Yet as the impact and significance of Twitter has grown, so has the necessity to exclude some accounts from participating in the social network. As a result, allegations of censorship have emerged. So how does Twitter decide which accounts to ban, and which to approve? As anyone who has regularly used the site will testify, the company's practices can sometimes seem a little murky, but there are some things we know for sure about how censorship on Twitter works.

1. Shadowbanning accounts

Critics have been making allegations for years, but in 2018 Twitter made an official announcement confirming that it would be limiting the visibility of "suspect accounts." Officially, the company stated that the proliferation of trolls had sparked this "shadowbanning" strategy, often associated with the censorship measures applied by repressive regimes. This was quite a change from how things were previously, when Twitter would only ban individual accounts due to the content of their tweets. The new strategy involves analyzing patterns of tweeting. For instance, if a user regularly tweeted at strangers, it might be a sign of harassment or trolling. By determining which accounts qualified as suspect, Twitter could relegate their content to obscure regions of people's feeds, effectively neutralising their activity. All of that sounds unobjectionable. Other than the trolls themselves, no one really thinks armies of Russian bots should be allowed to rampage on Twitter, making the user experience miserable and subverting public opinion. But the move has led to criticism of Twitter censoring conservatives. In fact, this criticism came from the White House itself, with Donald Trump citing "Many complaints" (ironically, in a tweet).
However, the social network responded to allegations of Twitter censoring conservatives, explaining that "[its] behavioural ranking doesn't make judgments based on political views or the substance of tweets." Instead, any aggrieved users had their own behaviour to blame for slipping down feeds and search results.

2. Censoring hashtags

While conservative activists in the USA have vocally accused Twitter of silencing their voices, others have pointed to the hashtags used by the social network. For these critics, the spread of hashtags (marked by a # symbol) across Twitter is far from neutral. In theory, hashtags are supposed to spread based on their popularity and the speed at which they "trend." But there's more to it than that, and this is where confusion can arise. It's not just overall popularity that makes certain topics trend: they also have to be popular with "new audiences," as well as be new themselves. This is in line with Twitter's strategy of keeping the content on its platform "fresh" and growing the user base; a static user base is no good as a business model compared with attracting a constant stream of new users. Yet it's not hard to see how this can create a situation that looks suspicious to regular users. It seems reasonable to assume that a topic or event getting thousands of tweets would trend. But if the users making those tweets are the usual suspects, Twitter's algorithm may hinder the popularity of the hashtag. This may look like Twitter censoring hashtags, but it is actually just a result of the way the company handles information. That hasn't prevented many people from accusing the network of operating a Twitter censorship policy based on hashtags. For example, the conservative site Breitbart accused Twitter of censoring hashtags critical of Hillary Clinton during the 2016 election. You can understand why people are angry about the company's hashtag policy, but it's not technically censorship.
Unless we see the construction of algorithms as a form of censorship. However, Twitter would contest this, arguing that their algorithms are entirely neutral - which all the evidence suggests is the case.

3. Banning accounts

While it's true that Twitter occasionally outright removes accounts, the matter of whether this is censorship is more difficult to gauge. To be banned, users need to meet certain criteria, and Twitter is fairly open about what these comprise:
- Spamming - sending bulk posts to certain targets, or flooding Twitter's servers with hundreds of marketing messages every day.
- Hacking - anyone who seeks access to Twitter's systems is automatically out.
- Fake accounts - while users can impersonate and parody others, copying bios and avatars is frowned upon.
- Sending or linking to malware - no comment required.
- Revealing private information about other users (doxxing).
- Abuse and hateful content - this is often the most controversial aspect of Twitter's banning policy. What constitutes a threat or insult can be subjective. But as Twitter explains, most speech is protected. Promoting violence against other users, racial abuse, or targeted harassment are all prohibited, as is encouraging others to commit suicide.
Usually, these rules are triggered by user complaints, which aren't always reviewed in favor of the complainant. Even when they are, there's an appeals process that can result in rulings being overturned. Users temporarily removed from the platform may have to supply a phone number to access their accounts - which is surely a massive privacy issue. But it's not what we usually mean by censorship. Still, this hasn't prevented many people from accusing Twitter of removing followers unjustly. For example, the hashtag #TwitterLockOut spread early in 2018 when conservatives lost huge numbers of followers overnight. However, it looks like this "purge" was due to Twitter flushing out colonies of Russian bots, or bots made for sale to Twitter users.
Still, the allegations won't go away.

4. Censoring tweets - does it happen?

But what if Twitter could either remove or suppress individual tweets at will? Wouldn't this give the company absolute power to determine which subjects gain exposure, and which wither away? We've already discussed the practice of shadowbanning, but when it comes to the censorship of individual tweets, there are more allegations than there is evidence. With one caveat. Twitter has been known to filter tweets in accordance with the wishes of governments. For example, the social network agreed to block anti-government tweets in Egypt, Saudi Arabia, Bahrain, and other countries. In Thailand, for instance, Twitter admitted that it could remove tweets deemed critical of the monarchy, making them invisible to Thai users. So yes, the platform has the capacity to censor individual tweets. Does it do so in most countries? Probably not. But it could.

How to avoid falling victim to censorship on Twitter

To avoid getting censored by Twitter, learn the platform's rules. Sometimes your actions may not be malicious, yet still fall under one of Twitter's punishable offenses, so it's good practice to know what you're up against. It may also be handy to use a VPN (Virtual Private Network) to conceal your location and identity. Twitter uses location information to determine whether users are genuine, but it's not a precise strategy. So if you move around a lot, a VPN could help you stay on the right side of the platform's algorithms.
The World Runs On Standards

Do you remember all those pictures of the Ever Given stuck in the Suez Canal? As painful as that situation was for the global supply chain, it was still impressive to see that massive vessel filled to the brim with shipping containers, not one out of place. It served as a reminder of a very important aspect of our lives that we often take for granted: standardization. The Ever Given held over 18,000 shipping containers, stacked neatly and securely on the cargo ship's decks. Intermodal shipping containers, as they are called, represent a standard for transporting goods that allows conveyance on ships, trains, and trucks across the world. Can you imagine the challenge facing global trade if shipping containers had different dimensions in each continent, country, state, or region? Freight operators would have to put together a jigsaw puzzle of containers at each shipping port. Of course, standards are everywhere. And aren't you glad that railroad tracks are the same width across every state? What a nightmare it would be if we had to switch equipment at each state's borders. It sometimes doesn't matter what the standard itself is. For example, we measure and calculate in feet or meters, pounds or grams, liters or gallons, or hogsheads and hectares. What matters is that the standard exists so that it can be widely understood and applied by many to create a universal familiarity.

Standards Exist For A Reason

Although we celebrate uniqueness, we can all appreciate standards - whether it is shipping containers or units of measure - because they lead to greater efficiencies. For example, costs of production and delivery go down as standards allow for single implementations. Standardization also maximizes a customer's experience due to coherent communication, repeatability, compatibility, and consistent quality. Safety is another important aspect of standardization.
Office buildings and houses are made safer through standardized construction techniques and enforcement of building codes. Cars are safer with standardized seat belts, airbags, roll cages, and so on; and driving them is made safer through standardization of lane markers, signs, and regulations ranging from speed limits to which side you should be driving on. The FDA imposes food safety standards. Electricity and plumbing in our homes are safe because there are national standards for materials, construction, etc. It's just better to have everyone on the same page.

Standards Pile-Up: Ways to Exchange Data With Other Parties

Here at Coviant Software, we help customers exchange data securely with other parties. Whether it's patient medical information sent from a hospital to an insurance provider, a financial services firm sending and receiving market data, a small business sending ACH payments to their suppliers via their bank... or myriad other examples. Computer networks have been used to transfer data between entities since they were invented. The first protocol to popularize file transfer was FTP, which has been standardized by RFC 959 since 1985. It became a standard for transferring files between systems, and is still widely used today. But it is not the only standard. The advent of the World Wide Web in the '90s saw the use of HTTP to support file transfers. The late '90s saw SFTP evolve from a proprietary protocol to an Internet standard. Various industries have also gone on to standardize their own file transfer protocols, like Odette FTP in the automotive industry and PeSIT in the French financial services industry. Now, there are also standard cloud-based services like AWS S3, Google Drive, Azure Files, Dropbox, and Sharefile to consider when transferring files between parties. Today's file transfer process has become so stacked with standards that the original function of a standard is starting to lose its purpose.
Or, more bluntly put in a statement once made by a colleague of mine a number of years ago: "What's so great about standards? There are so many to choose from!"

File Transfer Security And Efficiency

When it comes to your company automating file transfers, the surfeit of standards can be daunting. You must balance many competing demands: security, speed, responsiveness, and cost-effectiveness, to name a few. Which standard is the best choice for your file transfers? Here at Coviant Software, we see a lot of file transfers, and work with many customers to balance all of those competing demands. Our recommendation is to pick a secure, universal, and easy to implement protocol that both you and your trading partners can readily implement. The best choice today is SFTP. Choose that if you can (here is a recorded webinar discussing the reasons in more depth). Our Diplomat MFT solution handles SFTP in a very easy to use fashion, with simple configuration of trading partners and even OpenPGP operations on files. Furthermore, we realize that you are not always in control of your trading partners' choices, so Diplomat MFT also supports numerous file transfer protocols ranging from FTP to cloud storage systems. Don't let the number of standards intimidate you. Choose a good one, like SFTP, and try to stick with it so that you are managing operations efficiently. But if you are required, by contract or convention, to use a different standard, then make sure you are doing so with proper tooling to allow for consistency, correctness, and security. If you need any help, please reach out to us and we would be happy to help.
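The advice above (default to SFTP, but accommodate partners who dictate something else) can be expressed as a simple preference ranking. This is an illustrative sketch, not Diplomat MFT's actual logic; the ranking order here is an assumption:

```python
# Protocols ranked from most to least preferred; the ordering is illustrative,
# reflecting the recommendation to default to SFTP when both sides support it.
PREFERENCE = ["sftp", "https", "ftps", "ftp"]

def choose_protocol(partner_supports: list[str]) -> str:
    """Pick the most preferred protocol that the trading partner also supports."""
    supported = {p.lower() for p in partner_supports}
    for proto in PREFERENCE:
        if proto in supported:
            return proto
    raise ValueError(f"no mutually supported protocol in {partner_supports}")

# A partner stuck on plain FTP still connects; a partner offering SFTP gets it.
assert choose_protocol(["FTP", "SFTP"]) == "sftp"
assert choose_protocol(["ftp"]) == "ftp"
```

Codifying the preference in one place means every new trading partner is onboarded consistently instead of protocol choice being renegotiated ad hoc.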
You hear about hacks all the time, with the news covering major websites that have had data leaks containing your email and password. Computers get infected and capture your login details for bank accounts and credit cards. In the worst cases, identity theft occurs, because it is an easy crime to commit with a high reward. Today, the passwords you trust to keep attackers out of your accounts are no longer enough to guarantee your online accounts are safe. Cyber attackers now use methods such as phishing, pharming, and keylogging to steal your password, in addition to having the resources to test billions of password combinations. If you're like the majority of people around the globe, you use the same password for several websites. This means that anybody who has figured out your password to one account will also have access to everything else you've logged into with that password. In a time when it is extremely easy to look up what a person named their first pet or high school mascot, security questions aren't much help either. In the computer world, your second line of defence (after your username and password combination) is called "2-factor authentication." Sometimes referred to as multiple-step or multi-factor verification, 2-factor authentication is a way to double-check a person's identity. This can be enabled every time a person logs in or just under certain circumstances. For example, signing in from a new device or different country might trigger 2-factor authentication. Many of the services you may already use, such as Facebook, Gmail, online banking, and more, have 2-factor authentication options. If your bank has ever sent you a special code through text or email to enter before logging in, you've already used a type of 2-factor authentication. Codes can also come from a smartphone app or a physical electronic dongle. 2-factor authentication is absolutely crucial for online banking, email, and online shopping such as Amazon or PayPal.
It's also a must-have for cloud storage accounts (like Dropbox or OneDrive), password managers, communications apps, and productivity apps. This is especially true if you frequently use the same passwords for different websites and apps. Some may consider 2-factor authentication unnecessary for social networks, but these are actually very important to keep safe. For ease, a lot of websites and apps allow you to sign up through your Facebook or Twitter account. You need to keep these networks safe so that somebody with your password can't suddenly get into every account you have linked. The point of using 2-factor authentication is to make attackers' lives harder and prevent them from getting into your accounts. If they have captured your login username and password, they still need a second device to get in, especially when the computer or phone they are using has never logged into your account before. This makes it significantly more difficult for anybody to breach your account. Plus, if you receive a notification with a special code to enter for logging in, and you weren't trying to log into that account, you have a clear signal that somebody else was trying to get in. That means it's time to change that password and be grateful you had 2-factor authentication. It's unfortunate that there is currently an abundance of skilled hackers ready to take advantage of the unprepared. Luckily, you can still stop them, even if they have your login information at hand. 2-factor authentication is one of the easiest methods to keep your accounts safe.
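The short-lived codes generated by authenticator apps typically follow the TOTP algorithm (RFC 6238): a shared secret and the current 30-second time window are fed through an HMAC, and a few digits are extracted. A minimal sketch using only the Python standard library:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at_time=None, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically truncated."""
    counter = int(at_time if at_time is not None else time.time()) // step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T=59s -> "94287082"
assert totp(b"12345678901234567890", at_time=59, digits=8) == "94287082"
```

Because both sides derive the code from a shared secret plus the clock, a stolen password alone is useless without the device holding that secret, which is exactly the second factor the article describes.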
Backup Does Not Equal Archive – Here's Why Knowing the Difference Matters

Today is World Backup Day, a reminder to everyone to protect their valuable data by creating a second copy and keeping it somewhere safe. However, when it comes to backup, it is important to distinguish between backup and archive, as these core IT processes are not the same and are too often misunderstood. The 2019 Active Archive Alliance survey indicates 57% of respondents still use their backup system for storing archive data. Using backup copies to store archival data and repeatedly backing up unchanging archives lengthens the backup window and wastes costly HDD space. Backup and archive are entirely different processes and have different objectives. The backup process creates copies of data for recovery purposes, which may be used to restore the original copy after a data loss or data corruption event. Backups are cycled and updated frequently to account for and protect the latest versions of important data assets. Archiving frees up disk capacity and moves unchanging and less frequently used data to a new location, and refers to data specifically selected for long-term retention. Archival data is typically unchanging and is rarely overwritten. Both backup and archive data are accessible through the use of active archives, which enable efficient access to data throughout its life and are compatible with flash, disk, tape, or cloud (public or private) block or object storage systems. Active archives help move data to the appropriate storage tiers to minimize cost while maintaining ease of user accessibility. Understanding the difference between backup and archive can help you to reduce costs significantly and boost resource utilization. For more information on this topic, check out the 2019 State of Active Archive report.
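The archiving policy described above (move data that no longer changes out of the backup cycle) often reduces, at its simplest, to a modification-time filter. This is a toy sketch, not any vendor's product logic; the one-year threshold is an arbitrary example:

```python
import os
import time

def archive_candidates(directory: str, min_age_days: float = 365) -> list[str]:
    """Return files not modified within min_age_days: candidates to move off
    primary (repeatedly backed-up) storage into a long-term archive tier."""
    cutoff = time.time() - min_age_days * 86400
    candidates = []
    for root, _dirs, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            if os.path.getmtime(path) < cutoff:  # untouched for over a year
                candidates.append(path)
    return sorted(candidates)
```

Everything this filter selects would otherwise be re-copied by every full backup despite never changing, which is precisely the wasted backup-window time and HDD space the article describes.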
In all the hype regarding new technologies and the search for greater efficiencies, it's sometimes easy to forget that the push for smarter, more connected city environments should really have one purpose at its heart: a better quality of life for the citizens who live there. Smart cities, with their vast networks of real-time sensors, can do much to improve the quality of life for residents, not least in the areas of safety and security. One area of particular interest that is emerging, however, is the ability to connect these best-of-breed, open data sources to achieve sustainable and environmental benefits.

A sustainably smart city

As an example, at last year's Smart City Expo World Congress, the city of Atlanta won the mobility award for its continued work on the "North Avenue Smart Corridor". Almost a hundred network video cameras have been installed along 2.3 miles of road, and the system is able to derive real-time information regarding the volume and velocity of traffic through visual analysis alone. This data is then used to optimise traffic lights along the way, keeping traffic moving. Ten years ago, Atlanta was notable as one of the most congested cities in the US, with the second highest levels of traffic-related air pollution. The North Avenue Smart Corridor is just one of many initiatives that the local government has put into place to tackle these issues, and one of the most promising. Improved traffic flow along the corridor has reduced commuter times by as much as 25 per cent, and the implications for air quality are also significant. Network video, in this kind of instance, is one of many technologies that contributes to such positive congestion and environmental results. Swedish researchers recently calculated that as many as 4 per cent of all premature deaths could be linked to air pollution from traffic. Reducing congestion makes a big difference to the quality of air by our roadsides.
Smart buildings that protect us

Indoors, too, smart technology is driving sustainable business practices. Many offices today, for example, use motion sensor detection systems to turn lights on and off when there are people around, reducing the amount of energy wasted illuminating unused spaces. A group of students from Finland, however, demonstrated that by drawing on multiple sources of data – including video analytics – building managers can gain much more detailed insights into patterns of movement around office spaces. These can, in turn, be used to predict the need for climate control and lighting more accurately. This highlights how small, incremental changes such as adding another data source can result in great cost savings and reduced energy consumption. Potential environmental benefits also include traditional health and safety concerns. In South Africa, for example, one company has deployed thermal imaging cameras with internal GPS sensors to identify shack fires and alert emergency response units, helping to overcome the challenge that many settlements don't have formal road names that can be used to direct fire fighters to a blaze. Elsewhere in the world, similar cameras are being deployed to help identify chemical leaks or spillages, combining information with other on-the-ground sensors to quickly map out the scale of an emergency and contain its spread faster. The future for applications such as these is exciting.

The pressing need for sustainable business practice

We are just at the beginning of understanding what all the environmental benefits for citizens in smart cities will be, but it's clear that there can be huge potential in combining multiple, real-time data sources to optimise health. What's also clear is that we need to ensure Proof-of-Concepts (PoC) are developed to learn and evolve, with a view to creating deployable solutions that add value.
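The motion-sensor lighting pattern mentioned above reduces, at its simplest, to a timeout on the most recent detection; fusing several sensor feeds just means merging their event streams. A toy sketch (the 10-minute timeout is an arbitrary assumption, not a standard):

```python
def lights_should_be_on(motion_event_times: list[float], now: float,
                        timeout_s: float = 600) -> bool:
    """True if any sensor (all feeds merged into one timestamp list) fired
    within the last `timeout_s` seconds; otherwise the space is treated as empty."""
    return any(now - t <= timeout_s for t in motion_event_times if t <= now)

# Motion 5 minutes ago keeps the lights on; 20+ minutes of stillness turns them off.
assert lights_should_be_on([1000.0], now=1300.0) is True
assert lights_should_be_on([1000.0], now=2300.0) is False
```

Adding a data source, as the Finnish students did, simply means more event streams feeding the same decision, giving a more accurate picture of occupancy than a single sensor.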
More than half of the world's population currently live in urban environments, and approximately three million people move into cities every week. Around the world, city infrastructure investments are struggling to keep up with the demands of an increased and ageing population, a fact reflected in increased congestion, power shortages and overwhelmed social care, health and housing services. Applying intelligent technology to these complex problems must be part of the solution for the future. What's important to remember, however, is the maxim that we must, as vendors and technology providers, "first do no harm". There are too many examples in recent history of good ideas which are undermined by poor execution and therefore never realise their potential. It has become clear there is no silver bullet when it comes to solutions for smart cities; one vendor alone will not be the answer. If we are to take the steps from conceptual solutions to those that are deployed and make a positive impact on a city, then this must be underpinned by openness, both in regard to technology and also mindset. Partners and vendors should collaborate and take a best-of-breed approach to develop solutions that deliver a smarter and safer environment for a city's citizens, businesses and visitors. Furthermore, the billions of Internet of Things devices that are expected to be deployed in our cities all consume resources during manufacture. The onus is on us to ensure that this consumption is kept to an absolute minimum, with more investment in recycled and recyclable materials urgently needed. We must think ahead, too, and consider what happens at the end of the lifecycle of our products. We can, and should, be making sure that as many components and materials as possible are recoverable rather than destined to add to the mountains of waste around the world, themselves a significant environmental hazard to local communities.
There should be no tolerance for devices which may leak dangerous chemicals after their disposal. Our supply chains need to be examined and held accountable for dirty factories, or for mining practices which are to the detriment of those who work there. And just as importantly, smart technologies which deliver efficiency and sustainability, but that are not deployed with cybersecurity in mind, cannot fully claim to be improving quality of life, no matter what the intention of their use. But get all of these things right and there is huge promise in the positive impact of smart city technology. All we need now are new ideas for ways that we can put it to use. Daren Lang, Regional Manager, Business Development, Axis Communications
To put it simply, hard drive platter damage is bad news. The hard drive platters store your data, so permanent damage to the platters means permanent data loss. No current data recovery technology can restore the lost material (although there are technologies for reading around the damaged areas — we'll explain those in another post). The most common cause of platter damage is a failure of the heads. The heads read and write data, operating close to the platters, but not on the platters; they're never supposed to touch. If they touch, they can scrape (ouch) the magnetized material. If you keep trying to run the hard drive, you're left with something like this: While that's pretty, it's not great for the computer user, since the hard drive is essentially ruined. This is why we advise our clients to turn their hard drives off at the first signs of failure. By the time this hard drive got to our laboratory, it didn't have much of a chance. If the disk head has enough displacement or debris from head damage lodges in a very specific spot, it can cause the head to contact the platter only there. With millions of rotations (if it still rotates) it will erode a circular trench much like the Colorado River erodes the Grand Canyon. The platter in the following drive was amazingly cut almost completely through. The above drive is another example of severe scoring with a wide path of destruction scraped off. But these pale in comparison with the next few stunning photos. This hard drive had glass or plastic platters, and because it was operated in a failed state for many hours, it lost all of its data. The platters also became completely transparent. We placed a slip of paper under the platters to show the extent of the damage.

How Do You Recover Data from Damaged Hard Drive Platters?

The bottom line is that if the magnetic stuff is gone, so is the data. There's no way around that, unfortunately (if there is, we'll be the first to know).
However, there’s good news: most hard drives aren’t as severely damaged as the drives on this page. Most barely have any damage, and the heads don’t always make contact with data storage areas. We can also use some advanced techniques to read “around” the damaged sections to try to get a partial recovery. This is often an option when the platter damage is minor, but it becomes less of an option with more severe damage. Datarecovery.com operates real laboratories, and we can provide you with a free estimate at any of our locations. We will open the drive, assess the degree of damage, and provide you with a quote. Unrecoverable cases are rare, but we never charge for our services if we don’t recover your data. Call us at 1.800.237.4200 if you’d like to set up a case. The big takeaway is to turn off your drive at the first sign of damage. We mean that! Don’t let your drive take on severe rotational damage. If you have a fairly minor issue, it may be covered under our $250* hard drive data recovery case option, and if the damage is more severe, you won’t gain anything by attempting to run the drive. Don’t take unnecessary risks when important files are on the line. *Pricing is subject to change. Please contact us for current pricing and an evaluation of your drive.
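The "read around the damaged sections" approach can be illustrated as chunked imaging that records unreadable regions and moves on instead of retrying them (retries on a scored platter risk further damage). This is a simplified sketch; `read_block` is a hypothetical device interface, and real recovery tools are far more sophisticated:

```python
def image_with_skips(read_block, total_blocks: int):
    """Copy a failing device block by block into a partial image; unreadable
    blocks are logged and skipped rather than hammered with retries.
    `read_block(i)` returns bytes, or raises IOError for a damaged region."""
    image, bad_blocks = {}, []
    for i in range(total_blocks):
        try:
            image[i] = read_block(i)
        except IOError:
            bad_blocks.append(i)  # leave a hole in the image; move on immediately
    return image, bad_blocks

# Simulated device with one damaged block (block 2):
def fake_read(i):
    if i == 2:
        raise IOError("unreadable sector")
    return b"x" * 512

image, bad = image_with_skips(fake_read, 5)
assert bad == [2] and sorted(image) == [0, 1, 3, 4]
```

The partial image then lets filesystem-level tools recover every file that happens not to overlap the holes, which is why minor platter damage often still yields a useful partial recovery.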
When choosing the right surveillance camera, one of the specifications to look at is what kind of zoom the camera has. The camera's zoom matters because it determines how much detail you will be able to record. If you can't see clearly what is happening, your video footage will be useless.

Types of Camera Zoom

The first type is digital zoom. Digital zoom works by adjusting the image inside the camera, much like clicking on a photo to enlarge it on your phone. This is not really zooming in at all: it crops the image and enlarges it. A pro of digital zoom is that you can apply it after footage has already been recorded, and it doesn't affect the viewing angle. However, digital zoom usually leaves images and video blurry and pixelated if you zoom in too far.

The other type is optical zoom, which zooms by physically adjusting the lens, either manually or with a motor. Optical zoom tends to produce higher-quality images, but you cannot apply it to footage that has already been recorded, and the viewing angle is reduced.

Which Camera Zoom is Best?

Optical zoom is usually the better option. With digital zoom, the image is cropped and enlarged, which also enlarges the image's pixels; as a result, the footage can appear blurry and pixelated. With optical zoom, the lenses magnify the image without affecting its resolution. That said, digital zoom results have improved as the technology has advanced, so it can be the better choice for those who want to zoom in on footage after it has been recorded. The type of zoom you choose really depends on when you want to apply the zoom and how much detail you need to see.
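The difference between the two zoom types is easy to show in code. Digital zoom is just a center crop followed by upscaling, which is why the pixels get bigger but no detail is added. Below is a minimal sketch using nearest-neighbor enlargement on a grayscale frame stored as a list of rows; real cameras use better interpolation, but the principle is the same.

```python
def digital_zoom(image, factor):
    """Simulate digital zoom: crop the center 1/factor of the frame,
    then enlarge it back to the original size by repeating pixels."""
    h, w = len(image), len(image[0])
    ch, cw = h // factor, w // factor          # cropped size
    top, left = (h - ch) // 2, (w - cw) // 2   # center the crop
    crop = [row[left:left + cw] for row in image[top:top + ch]]
    # Nearest-neighbor upscale: each cropped pixel becomes a factor x factor block
    return [[crop[y // factor][x // factor] for x in range(cw * factor)]
            for y in range(ch * factor)]

frame = [[x + 10 * y for x in range(8)] for y in range(8)]  # 8x8 test frame
zoomed = digital_zoom(frame, 2)  # 2x zoom: 4x4 center crop blown up to 8x8
```

Notice that `zoomed` has the same dimensions as `frame`, but every value appears in a 2x2 block: the "pixelation" the article describes. Optical zoom, by contrast, would deliver a genuinely new 8x8 frame captured through the magnifying lens.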
The term "phishing" can be traced back to 1996, when it was used to describe a group of attackers who imitated AOL employees on AOL Messenger, asking people to verify their accounts or billing information. Many unsuspecting users fell prey to this scam purely because of its novelty. Though we would like to believe we would never be fooled by such an attack these days, phishing remains as popular as ever. Internet users may have become more discerning, but attackers have also become more skilled at luring in victims. Read on to find out how these phish have become more sophisticated, and how your organization and its employees can outsmart them.

Advanced Phishing Strategies

In some ways, the core tenets of phishing have remained the same. The motive is still to get malware past the perimeter or to harvest credentials, and this is still most frequently accomplished through malicious links or attachments. What has changed is presentation. Though there are still emails with obviously fake addresses and riddled with spelling errors, an increasing number are nearly impossible to tell from the real thing. Many lead to credential-harvesting websites that look almost identical to the sites they imitate.

More recently, threat actors have been mounting conversation-hijacking attacks, using previously compromised email accounts to reply to ongoing email threads. They slide in with an email containing malicious links or attachments right in the midst of a conversation, easily catching the other members of the thread off guard.

How Phish Get to Your Inbox

The backend of phishing has also evolved, with increasing advancements in evading filters. One simple method is using images of text so the message isn't machine-readable and tagged as junk mail.
Another is obfuscating URLs by adding a few extra characters, spoofing URLs and email addresses to fool both the filter and the recipient into opening an email or proceeding as normal on a fake website. Mostly, attackers have simply become more shrewd, constantly trying new tactics in the knowledge that as soon as one obfuscation or evasion technique is exposed, they'll need to move on to another.

Who is Phishing and Who is Getting Phished

Another change is in who is targeted. While there are still massive campaigns aimed at whoever will click a link, other phishing attacks are far more precise. Spear-phishing, for instance, targets specific individuals or organizations using sites they are familiar with, or imitates known individuals in order to lure them in. Whaling is even more precise, aimed at high-level executives. In both cases, extensive research is conducted so threat actors know what may entice these organizations or individuals to open a message. From there, an email is crafted to personalize the content and convey the right tone for the business or individual. For example, a whaling attack against a C-level employee may require a certain level of urgency to ensure that it's opened, typically involving financial, legal, or, ironically, security matters.

Finally, an increasing number of people have the ability to phish. Before, threat actors were only those who understood the mechanics of phishing. Now, phishing kits can be purchased readily on the dark web, giving nearly anyone with the desire to phish the tools needed to do so. This has pushed the volume of attacks even further upward. With constant attacks being launched, it's no wonder so many people have been fooled.

How Can Organizations Avoid Getting Phished

Advancements are being made to strengthen filters and prevent phish from ever arriving in your inbox, and browser security is also evolving to detect malicious websites the moment you land on them.
For the foreseeable future, however, phishing will continue to be an ongoing challenge for organizations. Strategically manage this threat by following these three steps:

- Deploy anti-phishing pen tests. You don't want to wait until after you've been hit to find out that your employees are particularly susceptible to phishing. Social engineering testing imitates phishing campaigns in order to safely determine whether your employees are vulnerable, and which types of phish are most likely to fool them. Using social engineering pen testing services or tools lets you find your weaknesses by safely launching an attack just like those currently being used by actual threat actors. Such campaigns can be the difference between a company that suffers a huge breach and one that remains secure.
- Educate employees and follow best practices. No matter the outcome of your pen test, it is always worthwhile to educate your employees. Teach them ways to identify phish, from lack of personalization to odd URLs. Urge caution when opening links or attachments, particularly those that arrive unprompted or from unusual sources. Follow best practices, like going directly to a website instead of using a link when possible. Encourage employees to keep an eye on OpenPhish and PhishTank to familiarize themselves with the most common phish currently circulating.
- Retest on a regular basis. Anti-phishing penetration tests can and should be run frequently. The best way to ensure your education efforts are effective is to test again. Additionally, new phish are constantly being introduced, so you'll want to stay up to date on the latest tactics. Regular testing keeps employees accountable and vigilant, and ensures that new employees aren't a security weakness that goes unaddressed for too long.
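One of the "odd URL" checks mentioned above can be automated: a spoofed domain often differs from the legitimate one by only a character or two, which an edit-distance comparison will catch. A hedged sketch follows; the brand list is illustrative, and real mail filters combine this signal with many others.

```python
def edit_distance(a, b):
    """Levenshtein distance: minimum single-character edits to turn a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

# Illustrative list; a real filter would use a much larger, curated set.
KNOWN_DOMAINS = ["paypal.com", "microsoft.com", "amazon.com"]

def looks_spoofed(domain, max_edits=2):
    """Flag domains suspiciously close to, but not exactly, a known brand."""
    return any(0 < edit_distance(domain, known) <= max_edits
               for known in KNOWN_DOMAINS)

looks_spoofed("paypa1.com")   # one character swapped: flagged
looks_spoofed("paypal.com")   # the real domain: not flagged
```

Homoglyph attacks (substituting visually identical Unicode characters) need additional normalization steps beyond this, which is one reason attackers keep rotating techniques.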
How Robotic Process Automation Is Different From Test Automation
September 21, 2021 - Maira Asaad

When RPA and test automation began finding their way into the industry, they began transforming the nature of work itself. There are plenty of apparent reasons to call Robotic Process Automation (RPA) and test automation two similar processes: fast releases, cost and time efficiency, correctness, QA, reduced human intervention, and, of course, automation. But RPA is a step-ahead approach that assists test automation in many ways. Where test automation performs quality assurance on a product through practices like functional testing, responsive testing, and performance testing, RPA delivers services to other departments (HR, ERP, BPO, data entry, automatic invoicing, and information transfer, among other things) and is therefore more business-centric.

The two don't use the same tools to perform automation. The difference is big, but when both processes work together, they can create marvel-worthy business modules and reputations. First, though, we need to see the difference between the two.

Robotic Process Automation (RPA)

RPA is a software-based technique used in processes like accounting, finance, management, data entry, front- and back-office operations, transactions, invoicing, evaluations, and calculations. Any application or development model used by a particular enterprise can be handled effectively by the RPA framework. To conduct RPA, software robots or virtual assistants are created to perform the activities above. These virtual bots are created with tools like:

- Blue Prism
- UI Automation tool
- Automation Anywhere
- OpenSpan Tool
- .NET 4.0
- Ace (IDE for V+)
- Robot Emulator

The most compatible systems for RPA include:

- Mainframe Terminals

Does RPA require any specific knowledge?
Robotic Process Automation does not require any deep programming knowledge; it is autonomous and input-driven.

What are its benefits?

The technology is highly productive for the industrial sector, which often demands extensive work and long hours. Repetitive test cycles and logical tasks can be handled by RPA, which reduces human effort by taking over mundane tasks while improving correctness and accuracy.

How is test automation different from RPA?

Test automation differs from RPA in many respects. In short, software testing executed with the help of a testing tool is called test automation, or automated testing. Listed below are some of the most commonly used test automation tools:

- Selenium
- HP UFT/QTP
- IBM RFT
- HP LoadRunner
- HP ALM/QC
- Tosca Testsuite

Does test automation require any specific knowledge?

Unlike RPA, which runs entirely on the input being provided, test automation is a specialized field that requires intensive testing knowledge and a perspective that keeps up with ongoing software testing trends.

What are its benefits?

Businesses require test automation to make their software or applications error-free, since customer satisfaction is the basic goal of software testing. It confirms whether a product performs according to expectations or needs improvement. QA is the key element of test automation. Banking, financial, government, health, IT, and many other sectors require software testing services, as testing safeguards their development and their reputation among users and audiences.

Is RPA going to affect jobs?

This is a central, and valid, concern. If we talk about jobs, then yes, people are going to face a tremendously competitive environment in the future with RPA.
But we can think of it in terms of RPA facilitating job transformation rather than replacement: consider all the jobs being created as RPA takes over more tedious tasks. Testers performing test automation needn't worry, though, because they'll continue to hold their exclusive place as QA testers. RPA and test automation are both expected to rise as powerful technologies that enhance the practice of quality assurance. There are competitive advantages associated with both, and they offer industries the chance to improve their scalability models with less investment.
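To make the contrast concrete, here is test automation at its smallest: scripted checks that assert a piece of software behaves as specified, run automatically on every build. The discount function is a made-up example, and plain Python assertions stand in for frameworks like the ones listed above.

```python
def apply_discount(price, percent):
    """Example function under test: reduce a price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    # Functional checks an automated suite would run on every build
    assert apply_discount(100.0, 25) == 75.0
    assert apply_discount(19.99, 0) == 19.99
    try:
        apply_discount(10.0, 150)
    except ValueError:
        pass  # invalid input is rejected, as specified
    else:
        raise AssertionError("expected a ValueError for 150%")

test_apply_discount()  # a CI pipeline would run checks like this automatically
```

An RPA bot, by contrast, would not be verifying this function at all; it would be the software clicking through an invoicing screen to apply the discount for a back-office team.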
There's been considerable debate over the existence of Gliese 581g ever since the discovery of the "Goldilocks" planet was first reported nearly two years ago, but new research claims to provide additional evidence that the potentially habitable "super-Earth" really is out there.

The prospect of the extrasolar planet, which is said to orbit the red dwarf star Gliese 581, 22 light-years from Earth, is an exciting one. It's thought to lie in the star's "habitable zone," where liquid water could exist on the planet's surface. That, in turn, could mean that it might be able to support life.

"Our findings offer a very compelling case for a potentially habitable planet," said Steven Vogt, professor of astronomy and astrophysics at UC Santa Cruz, at the time of the discovery in late September 2010. "The fact that we were able to detect this planet so quickly and so nearby tells us that planets like this must be really common."

'Liquid Water Is a Distinct Possibility'

Two weeks later, however, the controversy began when a European team declared that they could find no evidence of the planet in their own examination of the same planetary system. Instead of six planets orbiting Gliese 581, the European researchers saw evidence of only the four that had previously been found, they said.

In this latest development, a new, expanded analysis led once again by Vogt generated results that the American researchers say do indeed support the existence of Gliese 581g. "This signal has a False Alarm Probability of < 4 percent and is consistent with a planet ... orbiting squarely in the star's Habitable Zone at 0.13 AU, where liquid water on planetary surfaces is a distinct possibility," concludes the new paper, which was published online earlier this month.

The question now is, which of these two accounts gets the story right?
'These Are All Experienced Observers'

"It is extremely difficult for an outsider to assess which of the two teams is correct," Mario Livio, a senior astrophysicist with the Space Telescope Science Institute, told TechNewsWorld. "These are all experienced observers," Livio added. "The controversy, more than anything else, simply demonstrates that the analysis of the data is extremely difficult."

Indeed, "the question is, is it there because the analysis says it is, or because they ran enough options that they finally got the solution they were looking for?" wondered Paul Czysz, professor emeritus of aerospace engineering with St. Louis University.

'Wiggles in the Light'

"When you're looking that far away, and inferring all kinds of orbits, and all you basically see are wiggles in the light coming from the sun," it's difficult to know what's actually being observed, Czysz told TechNewsWorld. Essentially, the researchers assumed that "this little spike is a planet going in front of the sun, and then threw that into an analysis program to see what they get," he added.

Putting it more precisely, "the data set is the observed Doppler shifts of the star light that result from a component of the star's motion due to the gravitational pulls of multiple objects in orbit around the star," Scott Austin, associate professor of astronomy and director of the astronomical facilities at the University of Central Arkansas, told TechNewsWorld.

'There May Be Multiple Solutions Found'

"The goal is to create a computer model of this multi-body gravitational system that can recreate the observed wiggles in Doppler-shift versus time data," Austin explained. However, "the more physics and physical parameters one includes in the model, the longer it takes to find a match to the data; there may also be multiple solutions found," he pointed out. Complicating the situation further, the data itself has a level of uncertainty "due to the signal-to-noise ratio of the spectra," Austin added.
"How often the system was observed and over what length of time can also come into play." In any case, the bottom line is that "some groups are making assumptions and simplifications to their model in order to speed up the solution search," he said. The question then becomes, are those assumptions and simplifications valid, and do they produce a gravitationally stable system? "The more complete modeling seems to suggest that the planet is there, but then the issue is proving that it is an actual detection versus unintentionally fitting the noise in the data," Austin concluded.

'It's Just Really Hard'

If the Vogt team's assumptions are correct, "it's possible there is a planet in this preferred zone," Czysz conceded. "The difficulty is, there really aren't very many planetary systems that have a good habitable zone. No matter what they say, it's just really hard to find a planet that's in this habitable zone that also has water, clouds, and is small enough that it doesn't have a lot of gravity," he pointed out.

Given the vast size of our galaxy, the odds are clearly against us. "There's about 100 million to a billion stars in our galaxy," Czysz said. Potentially habitable planets are surely out there, "but it's like you're in the Atlantic Ocean and [looking] for fish within just a six-foot circle of your boat."

'We're Still Babies Crawling on the Floor'

We also simply don't have the capability yet to go out and find such planets, Czysz added. Crossing our galaxy so as to avoid its central supermassive black hole, for example, would take some 50,000 years, he pointed out. "We need to figure out how Jean-Luc did it," Czysz said. "We know there is a way to do it — the difficulty is that we'd have no way of understanding how one implements that.
We're still babies crawling on the floor."

'More Planets Are Being Discovered'

As for Gliese 581g, "this probably isn't going to be sorted out until it is possible to get sufficiently high signal-to-noise data," Austin suggested. At the same time, "given that many more extrasolar planets are being discovered now, the expectation is that quite a number of them will be in the habitable zones of their host stars," Livio pointed out. "This will make the question of whether planet Gliese 581g exists or not less critical."

Of course, if and when we do find such planets in habitable zones, there won't be any rush to begin packing our bags. "There is a tendency for the public to come away with the impression that more is being claimed than what is actually being claimed," Austin warned. "'Planet in the habitable zone' does not equal 'a planet that is habitable by life found on the Earth.'"
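The model fitting Austin describes, recreating the observed wiggles in Doppler-shift versus time data, amounts to fitting periodic velocity curves to noisy measurements. The sketch below is a heavily simplified, single-planet version with a known period, fitted by linear least squares on a sine/cosine basis; real multi-planet analyses fit many more parameters at once, which is exactly where the ambiguity over Gliese 581g arises.

```python
import math

def fit_rv_amplitude(times, velocities, period):
    """Least-squares fit of v(t) = A*sin(wt) + B*cos(wt) for a known period.
    Returns the semi-amplitude sqrt(A^2 + B^2), i.e. the size of the
    "wiggle" a planet of that period induces in the star's radial velocity."""
    w = 2 * math.pi / period
    s = [math.sin(w * t) for t in times]
    c = [math.cos(w * t) for t in times]
    # Normal equations for the two basis functions
    Sss = sum(x * x for x in s)
    Scc = sum(x * x for x in c)
    Ssc = sum(a * b for a, b in zip(s, c))
    bs = sum(a * v for a, v in zip(s, velocities))
    bc = sum(a * v for a, v in zip(c, velocities))
    det = Sss * Scc - Ssc * Ssc
    A = (bs * Scc - bc * Ssc) / det
    B = (bc * Sss - bs * Ssc) / det
    return math.hypot(A, B)

# Synthetic star: a planet producing a 1.3 m/s wiggle with a 36-day period
times = [float(d) for d in range(0, 200, 5)]
true_amp, period = 1.3, 36.0
rv = [true_amp * math.sin(2 * math.pi / period * t) for t in times]
amp = fit_rv_amplitude(times, rv, period)  # recovers ~1.3 on clean data
```

With realistic noise on the order of the signal itself, several different parameter sets can fit almost equally well, which is why two experienced teams can disagree about the same data.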
Analog DAS is inefficient and very costly. This is the inefficiency wagon again, described in the iDAS vs. oDAS section. A high-power BTS pumps out RF signals that need to be attenuated and conditioned. The signals then travel through coaxial cables or jumpers, which cause further loss and affect the overall link budget. The DAS head-end converts the signal into optical form, which also contributes loss and adds unnecessary noise from the lasers in the optical transceivers. The signals are then converted back into RF form and re-amplified at the remote node. More coaxial cables are needed between the remote nodes and the indoor antennas, and passive components such as combiners, splitters and couplers contribute further power losses.

Heat dissipation from high-power RF equipment at the BTS is also a concern. Carriers must pay high utility costs to cool their equipment and maintain a constant temperature. In addition, DAS equipment requires backup power. The panic would be bad enough if the power went off in a stadium full of spectators, but an even worse scenario would be dealing with anxiety-ridden users robbed of their cellphone service or unable to update their Facebook status.

Analog DAS does have its advantages, however. It's a truly neutral system because the components used are broadband: there are components that go as low as 20 MHz and as high as 3 GHz. Such bandwidth is enough for most cellular systems, as spectrum allocations tend to fall within 100 MHz to 2.8 GHz. Analog DAS doesn't distinguish between carrier spectrums or bands. As long as RF signals fall within the DAS's specified bandwidth and don't overlap, the DAS performs its job of transporting them between the BTS and the indoor antennas.

But digital DAS is the future. The concept of digital DAS is to perform digital-to-analog (D/A) and analog-to-digital (A/D) conversions at the remote nodes rather than at the BTS. It's a much more efficient system. First of all, carriers won't need a high-powered BTS.
Instead, they'll deploy equipment such as a Base Band Unit (BBU) that connects directly to a carrier's fiber backhaul. Remember, backhaul is the fiber optic network that connects a carrier's central tower to its remote hubs. A BBU performs the same function as a traditional BTS but without its bulky RF processing modules, because everything is digital: there's no need to perform analog-to-digital conversions at the BBU location. A BBU essentially functions like a network switch or router that directs traffic; in this instance, the traffic happens to be cellular.

BBUs are connected to low-power remote radio nodes such as MROs (Metro Radio Outdoor) or MCOs (Metro Cell Outdoor). Such radio heads are manufactured by companies like Alcatel-Lucent. This is where you'll start hearing terms like Small Cells, used to describe digitally fed low-power radio nodes deployed by carriers. They're a convenient way for carriers to provide coverage without investing in expensive DAS equipment, filling the gap in areas where coverage is spotty or lacking.

Carriers provide wireless subscribers living in remote areas with insufficient or no cellular coverage with devices called femto-cells. These femto-cells connect to a consumer's home internet modem via an Ethernet cable and provide service for up to 5 simultaneous users. Such equipment has been at carriers' disposal for some time now. Think of Small Cells as femto-cells on steroids: Small Cell nodes such as MROs or MCOs are capable of supporting 50 or more users at a time. These nodes are connected via fiber optic cable to a BBU and have built-in MIMO capability for optimal signal performance and data throughput. Small Cell nodes are also smaller than traditional analog DAS RRH (Remote Radio Head) units.

We earlier mentioned CPRI. It is another term you will sooner or later hear. CPRI stands for Common Public Radio Interface.
It's a standard developed by the likes of Nokia Siemens, Alcatel-Lucent, Ericsson and Huawei. Basically, it's a protocol allowing communication between digitized radio units like a BBU and remote nodes such as MROs and MCOs. The name CPRI is deceptive because it implies an open-source-like platform, but there is nothing open about CPRI. This is in fact the very reason digital DAS is not making further inroads into the US wireless market. Every manufacturer has its own proprietary version of the CPRI protocol, so DAS vendors can't build a system that interfaces with every BBU on the market, defeating the very purpose of a neutral DAS concept. Carriers like Verizon buy BBUs from multiple vendors, and each has its own version of the CPRI protocol. There is no single DAS system out there that plugs into all vendor equipment at the same time.

Chipsets manufactured by companies like Qualcomm are becoming less and less expensive. Chipsets allow digital-to-analog and analog-to-digital conversions one frequency or band at a time. Everything from smartphones and WiFi routers to Small Cell nodes like MROs and MCOs has one. However, chipset manufacturers haven't yet made available a low-cost, commercially viable product that handles multiple frequencies. This creates a problem for Small Cell deployments: every carrier will have its own version of Small Cell nodes, because wireless carriers occupy and operate under different spectrum and frequency bands. Venues and city municipalities then have to deal with multiple small cell nodes hanging from light poles or placed inside buildings. Such scenes are not aesthetically pleasing, and some DAS integrators have to deal with strict ordinances banning installations that include multiple nodes visible to the public.

Density is another term being thrown around the DAS world right now.
Basically, it means maximizing network capacity, ultimately benefiting the customer experience, by employing various distribution methods including macrocells, oDAS, iDAS, Small Cells and WiFi. All these different distribution schemes concentrated in a single area create a hub known as a heterogeneous network, or HetNet. Fancy terms aside, the reason behind this effort is simple: wireless carriers want high data throughput and reliable coverage because their customers demand it. For whatever reason, we modern creatures can't function without GPS, find restaurants without reading Yelp reviews, exercise without listening to music, or stop to smell the flowers without checking our email and updating our Facebook status first. Oh, and don't forget uploading photos of our food to Instagram before we eat.
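The loss chain described for analog DAS can be tallied as a simple link budget: start with the BTS output power in dBm and subtract each loss in dB along the path, adding back any amplifier gain. The sketch below is a back-of-the-envelope illustration; the component values are assumptions for illustration, not figures from any particular system.

```python
def link_budget(tx_power_dbm, losses_db, gains_db=()):
    """Received power = transmit power minus all losses plus all gains (in dB)."""
    return tx_power_dbm - sum(losses_db) + sum(gains_db)

# Illustrative analog-DAS downlink path (values assumed, not measured)
losses = {
    "attenuator at BTS output": 30.0,
    "coax cables and jumpers": 3.0,
    "electrical-to-optical conversion": 2.0,
    "fiber run to remote node": 1.0,
    "splitters, couplers, combiners": 6.0,
}
received = link_budget(
    tx_power_dbm=43.0,          # roughly a 20 W BTS carrier
    losses_db=losses.values(),
    gains_db=[30.0],            # re-amplification at the remote node
)
# 'received' is the power handed to the indoor antennas, in dBm
```

Working in dB turns the chain of multiplicative losses into simple addition, which is why RF engineers budget every connector and splitter this way: each avoidable conversion or cable run in an analog DAS is a line item subtracted from the budget.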
The "internet of things" has been steadily growing to the point where it is now an expansive and vulnerable digital landscape. With so many devices connected to the web, hackers, thieves and criminals can now find ways to achieve network access via unexpected means, or cause disruption in ways that were not possible before internet access became ubiquitous.

What is the Internet of Things?

The Internet of Things, often abbreviated as IoT, is a term used to describe the interconnection of hardware devices that communicate with one another wirelessly to exchange data. Smartphones, tablets, laptop computers and office devices make up a large portion of the IoT. However, most of today's electronics need to do more than simply the job they were built for. From video game consoles to home audio systems, and even electric toothbrushes and bathroom scales, integration into daily life and already existing gadgets is an initiative among manufacturers eager to put the word "smart" in front of anything they produce. Creating a companion app for a product not only gives the impression of technological superiority over the competition, but also allows manufacturers to take advantage of the insights that can be gleaned from user data collected any time the device is used.

The internet of things and data privacy

A refrigerator seems an unlikely cybersecurity liability, but as the IoT has come to encompass everything from water bottles to dog collars, threat actors have been hard at work probing for weaknesses and ways to enter systems through backdoors that would have seemed absurd as recently as a decade ago. Because ease of connection is prioritized over security in the vast majority of these devices, they provide threat actors with potentially low-hanging fruit.
Hacking into an electronic dog door may seem like a comical endeavor, but if the owner uses the same password to access their pet's accessory as they do their personal email, the implications can become serious. As the world's workforce spent more time at home due to pandemic restrictions, IoT usage increased in frequency, as did cyberattacks designed to compromise connected devices. In the first six months of 2021, computer security provider Kaspersky recorded 1.51 billion breaches of IoT devices. The same company noted that 639 million had occurred over the entirety of the previous year, marking a skyrocketing uptick in IoT security incidents.

5 internet of things security nightmares

1. Elekta cancer software

In the spring of 2021, Swedish cancer software provider Elekta was the victim of a ransomware attack that required the company's IT department to take its cloud-based software offline. As a result, US patients across 170 health networks had radiation therapy delayed, because the machines used in the process require a constant line of communication with Elekta's software. While the delay of life-saving procedures is worrisome enough, other medical devices could possibly be hacked to prevent them from working, or even to present incorrect data that may affect treatment. With the vast amounts of personal data moving through hospital networks, and given how indispensable their services are, healthcare systems are frequently targeted by hackers who know that the seriousness of disrupted service gives them an advantage when negotiating for a quick ransom. While employing the Internet of Things greatly eases the exchange of information, connecting critical devices via WiFi requires a superior degree of security.

2. Ring home security cameras

Security cameras, small, affordable and able to capture high-resolution video, are one of the most popular Internet of Things devices outside of people's phones and computers.
Ring, an Amazon company, suffered some bad press after a number of families experienced hackers taking remote control of their security cameras. In some instances, the hackers were actually speaking to the homeowners through the cameras' connected speakers. Understandably, a company that sells the idea of security found itself having to explain how this could happen. While the intrusions were largely the result of poor password habits rather than negligence on Amazon's part, the incidents are a shocking reminder that privacy is a commodity in a world of data and always-on digital connections.

3. Amazon Echo smart speakers

It should come as no surprise that a device that is connected to the web and always listening provides ample opportunities for data theft or breaches of privacy. Recently, an exploit was found in which a hacker could cause an Amazon Echo smart speaker to verbally give itself a command and then comply. This could allow someone to remotely control someone else's smart home features, like cameras and door locks. It could also allow someone to make unauthorized purchases or phone calls. Smart speakers can also be used to eavesdrop if users download malware hidden in seemingly harmless apps. The Google Play store, for example, is notorious for harboring apps created by shady developers that pose as simple utility programs but actually inject malicious code into devices, code that can then be used to steal everything from audio recordings to banking data.

4. St. Jude cardiac devices

In 2017, it was discovered that implantable cardiac devices such as pacemakers and defibrillators manufactured by St. Jude could have their transmitters hacked. Using the transmitter, an assailant could deplete a device's battery, disrupt its rhythm or use it to shock a victim.
While the exploit was never used outside of the research that established its existence, it highlighted a frightening way in which criminals could exploit advanced, implanted medical devices to inflict physical harm.

5. Smart TVs

From stealing the passwords you use to access your favorite streaming apps to using your TV's camera and microphone to snoop on people in their own homes, smart TVs are treasure troves for even unsophisticated hackers. While no major hacks via smart TVs have been reported, exploits have been found within them that could allow someone to take remote control of the set or dip into some of the data the device collects. As with Ring's security cameras, much of the security surrounding smart TV usage is left up to the user's discretion, which puts many people in a challenging position given that most simply want to watch their favorite shows, not worry about the amount of data their television is soaking up.

How to improve internet of things security

The fact remains that today's complex devices and onboard computers provide criminals with ever more opportunities to steal data. Putting obstacles between hackers and your network is the most effective way to stay protected. Some users create a separate network exclusively for their IoT devices. This network does not exchange information or connect to any devices that harbor or transmit sensitive information. This allows IoT devices to work as needed, but restricts their ability to access critical data or integral computer systems. Using a VPN can also shore up your network security, as it provides an additional layer of protection between your Internet of Things and any prying eyes. For the average consumer, strong password hygiene can go a long way. Do not use the same password across multiple devices, and use only passwords that are difficult to guess.
Make sure that the default username and password that come installed on any smart device are changed. Leave no stone unturned, as one compromised device in your home or office can be used to access another, causing a domino effect of security breaches that may eventually lead to complete network access.

- 10 IoT Security Incidents That Make You Feel Less Secure, 10 Jan 2020, CISOMAG
- What is IoT Security? by Sharon Shea and Ivy Wigmore, March 2022, TechTarget
- IoT Security Breaches: 4 Real-World Examples by Jo Vanwell, 28 Jan 2021, Conosco
- IoT Cyberattacks Escalate in 2021, According to Kaspersky by Callum Cyrus, 17 Sep 2021, IoT World Today
- 5 biggest IoT security failures of 2018 by James Sanders, 17 Dec 2018, TechRepublic
- The 5 Worst Examples of IoT Hacking and Vulnerabilities in Recorded History by Terry Dunlap, 20 June 2020, IoT For All
- 5 Infamous IoT Hacks and Vulnerabilities by Harold Kilpatrick, IoT Solutions World Congress
- Elekta Cancer Software Hit By Healthcare Ransomware Attack, 29 April 2021, Compliancy Group
- Samsung and Roku Smart TVs Vulnerable to Hacking, 7 Feb 2018, Consumer Reports
- Attackers can force Amazon Echos to hack themselves with self-issued commands by Dan Goodin, 6 March 2022, Ars Technica
- Three ways to improve IoT security by Naveen Joshi, 16 Aug 2019, Allerin
What is Frame Relay in telecommunications?

May 30, 2018

Frame Relay is a wide area network technology used to specify the physical and data link layers of digital telecommunications channels using packet switching. Frame Relay was originally designed for transport across ISDN (Integrated Services Digital Network) infrastructure, but nowadays it can also be used with many other network interfaces. It is usually implemented by network providers as a voice and data encapsulation technique between LANs (Local Area Networks) and WANs (Wide Area Networks). Configuring user equipment within a Frame Relay network is very simple, which is one of the main reasons it is so popular in telecommunication networks around the world.

Frame Relay works by putting data in variable-size units called "frames" and leaves any necessary error correction (such as retransmission of data) up to the endpoints, which speeds up overall data transmission. In most cases, the network provider will offer a PVC (Permanent Virtual Circuit) to the customer, which gives them a continuous, dedicated connection without the need to pay for full-time leased lines. This also allows the service provider to charge the user based on the route each frame travels to reach its destination. By being able to prioritize some frames and make others less important, customers are able to choose a level of service quality, depending on their specific requirements.

Frame Relay is capable of running on fractional E1 or full E-carrier systems (T1 or full T-carrier in the Americas). It provides a mid-range service between basic rate ISDN (128 kbit/s) and ATM (Asynchronous Transfer Mode, 155 to 622 Mbit/s). It is a fast packet switching technology which operates over links that have a very low chance of transmission error (practically lossless, like PDH). When a Frame Relay switch detects an error in a frame, it simply drops that frame.
The end points will then detect the dropped frame and retransmit.
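The drop-and-retransmit division of labor described above can be sketched in a few lines of Python. This is a toy model, not the real protocol: the frame layout is simplified and CRC-32 stands in for Frame Relay's 16-bit FCS.

```python
import zlib

def make_frame(dlci: int, payload: bytes) -> bytes:
    """Build a toy frame: 2-byte DLCI header + payload + 4-byte checksum."""
    body = dlci.to_bytes(2, "big") + payload
    return body + zlib.crc32(body).to_bytes(4, "big")

def switch_forward(frame: bytes):
    """A Frame Relay switch does no error correction: if the checksum fails,
    the frame is silently dropped (None); otherwise it is forwarded."""
    body, fcs = frame[:-4], frame[-4:]
    if zlib.crc32(body) != int.from_bytes(fcs, "big"):
        return None  # drop corrupted frame; the endpoints must notice and resend
    return frame

def endpoint_send(dlci: int, payload: bytes, link, max_retries: int = 3) -> bool:
    """Endpoint logic: retransmit until the frame survives the link."""
    for _ in range(max_retries):
        if switch_forward(link(make_frame(dlci, payload))) is not None:
            return True  # delivered
    return False

# A link that corrupts the first transmission, then behaves:
state = {"first": True}
def flaky_link(frame: bytes) -> bytes:
    if state["first"]:
        state["first"] = False
        return frame[:-1] + bytes([frame[-1] ^ 0xFF])  # flip bits in the checksum
    return frame

print(endpoint_send(16, b"hello", flaky_link))  # → True (after one retransmission)
```

The point of the sketch is that `switch_forward` never repairs anything; all recovery logic lives in `endpoint_send`, which is what lets real Frame Relay switches stay fast.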
In today's world, remote working is essential because of the convenience it brings to any kind of business. Along with that convenience, businesses want security, online storage and access, in-house bookkeeping and many other capabilities that cloud computing has to offer.

What is Cloud Computing?

Cloud computing is a model for using the Internet to store and process data or programs instead of relying on the local storage of a single computer. It provides access to computing resources and allows those resources to be shared efficiently among multiple users.

Cloud Computing Services

Cloud computing services can be private, public or hybrid. Private services are secure, controlled and convenient, and are generally delivered to internal users by the business itself. Public cloud services are sold and delivered on demand over the Internet by a third-party provider. Hybrid services are an automated combination of private and public cloud services.

Why Cloud Computing?

- Cloud computing makes businesses cost efficient, as they can pay as they go and save money on licensing fees.
- There is less need to buy expensive software.
- Businesses can serve clients faster, as cloud software helps them develop and launch products and services.
- There is no need to create new infrastructure.
- Disaster recovery becomes simpler: companies using cloud-based services do not need complex disaster recovery plans of their own.
- Data remains safe and accessible even if something goes wrong with a local machine.
- Companies can increase or decrease computing capacity as demand changes, improving elasticity.
- Access is available anytime, anywhere, making business far more convenient. Businesses can take their workforce global, as the Internet allows access to the cloud from anywhere in the world.
| Infrastructure as a Service (IaaS) | Platform as a Service (PaaS) | Software as a Service (SaaS) |
| --- | --- | --- |
| Flexible self-service cloud computing model used to store, access, monitor and network | Provides computing platforms and is used for application development | Used to deliver applications managed by a third-party vendor, with interface access on the client's side |
| Pay as per consumption | Customized applications with simple, quick and economical development, testing and deployment | Web delivery: applications need not be installed on individual computers |
| Users gain infrastructure and the ability to install the required platform | Reduces coding and automates business processes | The vendor manages everything, resulting in smooth maintenance and support |
| Provider example: IBM SoftLayer | Provider example: Google App Engine | Provider example: Salesforce |

According to CloudTech, the SaaS model is projected to grow by $15 billion in 2018, attaining a CAGR of 8.14%, and global spending on IaaS this year is anticipated to increase 32.8% over last year.

Hexanika has developed two services, DRaaS™ (Data Readiness as a Service) and RaaS™ (Reporting as a Service), that scale our products through our go-to-market partnerships and address customer demand for complete services around our platform and products. These solutions use the SaaS model, which employs cost-effective technology and human resources to deliver results and significantly improves our competitive advantage.

- Enhanced Data Quality
- Faster Time to Market
- Time Saving
- Risk Avoidance
- Cost Avoidance
- Cost Reduction
- Process Automation
- Customized Solution
"Environmental protection", "high efficiency" and "energy savings" are topics of high concern for large corporations as well as individuals, since these aspects are part of everyday life and may generate significant cost savings in both cases. This is also valid for UPS units, ranging from large ones protecting vast data centers to small ones of just a few kilowatts that safeguard a network or a single cabinet in more modest settings.

The ECO mode (also known as energy saving mode or high efficiency mode, depending on the UPS manufacturer) is currently much discussed within the industry. The debate primarily focuses on on-line UPS and on large UPS. Nevertheless, energy savings and efficiency are extremely important in small and micro power UPS as well (typically from 500 VA to 10 kVA), and for the same reasons as in large power systems: savings on energy costs and a lower environmental footprint.

Choosing a UPS properly means considering the criticality of the application that needs to be protected, as well as evaluating the energy used by the UPS to protect the load against disturbances and interruptions. Here I would like to highlight the "inherent ECO mode" that can be found in line-interactive UPS products (VI, or Voltage Independent, according to EN 62040-3).

In this type of line-interactive UPS (VI), the power stream flows from the input through several protection devices (overcurrent, overvoltage, etc.) and mainly through an Automatic Voltage Regulation (AVR) transformer. The AVR is in charge of providing output voltage regulation, in order to minimize voltage variations in the AC supply and ensure a regulated voltage within the load tolerances.
Because of the high efficiency of the AVR (typically around 98% or 99%) and of the protection devices through which energy flows, as well as the smaller quantity of electronic components used in this UPS topology, a high-performance line-interactive UPS can provide an efficiency level higher than 96% at full load. A good example of this is the Liebert PSI UPS, which uses line-interactive technology and therefore an AVR, and which can reach the efficiency levels mentioned above.

As said, this operation mode is inherent to the line-interactive UPS topology, and its high efficiency is maintained across a wide range of load conditions and AC mains variation. While ECO mode in an on-line UPS operates within a smaller input voltage range, the line-interactive topology is able to operate in high efficiency mode during most input voltage changes, while still providing some output regulation.

When comparing a line-interactive UPS with a double conversion on-line UPS, there are many aspects to take into consideration, such as stepwise or pure sine wave inverter output, transfer time, size, etc. However, one of the main differences is exactly that line-interactive UPS feature "inherent high efficiency" because of the VI technology and the use of the AVR mentioned earlier. The energy savings associated with it are highly appreciated even for single-phase UPS, meaning UPS ranging from 0 to 10 kVA, because:

1. Saving just a few watts daily in continuous UPS operation, 365 days a year, amounts to a significant total yearly saving.

2. In applications such as campuses or big corporations where many of these small UPS devices are used simultaneously, the few watts saved daily per device multiply the daily and yearly total saving and reduce the total campus or corporate expenditure.

To provide an example, assume a load of 2.5 kW being protected by a UPS.
Such a load may correspond to a cabinet with several servers for enterprise applications, or to a wiring closet distribution panel. The UPS can work in line-interactive mode (assuming 97% efficiency) or operate in double conversion mode at 90% efficiency, as a rough estimate. The difference in power losses, and thus the energy saving, will be around 200 W. Assuming an electricity cost of 0.138 €/kWh, a quick calculation gives a yearly saving of roughly 240 €, and over five years the total saving approaches 1,200 €.

So it should be clear by now that in addition to traditional ECO mode (typically used in double conversion on-line UPS in general and large UPS in particular), there is an inherent ECO mode in single-phase UPS, specifically line-interactive ones. This grants significant savings to customers, as line-interactive technology is inherently highly efficient and the UPS using it typically run all day, every day, so in the long term even a small daily saving adds up to a considerable total.

And what about ECO modes in on-line UPS in this small UPS range? Is there any difference or advantage? That is an interesting story too.
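The arithmetic behind that worked example can be reproduced in a few lines; the load, efficiencies and tariff are the article's own assumptions, and continuous year-round operation is assumed.

```python
LOAD_W = 2500                 # protected load, watts
EFF_LINE_INTERACTIVE = 0.97   # assumed line-interactive efficiency
EFF_DOUBLE_CONVERSION = 0.90  # assumed double conversion efficiency
TARIFF_EUR_PER_KWH = 0.138
HOURS_PER_YEAR = 24 * 365

def losses_w(load_w: float, efficiency: float) -> float:
    """Input power minus output power for a UPS at the given efficiency."""
    return load_w / efficiency - load_w

delta_w = losses_w(LOAD_W, EFF_DOUBLE_CONVERSION) - losses_w(LOAD_W, EFF_LINE_INTERACTIVE)
yearly_eur = delta_w / 1000 * HOURS_PER_YEAR * TARIFF_EUR_PER_KWH

print(f"loss difference: {delta_w:.0f} W")       # ≈ 200 W
print(f"yearly saving:   {yearly_eur:.0f} EUR")  # ≈ 242 EUR, so ≈ 1,200 EUR over five years
```

With these exact inputs the saving comes out just over 240 € per year; small changes in tariff or duty cycle move the figure accordingly.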
The Next-Generation Incident Command System is being used by emergency management agencies in California for planning, response and recovery for risky and hazardous events – many of them wildfires. Over the last four decades the number, severity and size of wildfires has increased. And each year, on average, nearly half of all the money spent fighting wildfires in the West goes to California fires, which often threaten homes and infrastructure, according to Climate Central. In that environment, a free Web-based situational awareness tool is gaining momentum. The Next-Generation Incident Command System (NICS) is being used by at least 270 emergency management agencies in California and a few other areas for planning, response and recovery for risky and hazardous events – many of them wildfires. NICS, formerly the Lincoln Distributed Disaster Response System, was first used in California in 2010; last year it was used in managing 102 incidents in the state, the majority of them wildfires, according to a recent article in Wildfire Today. The Web-based tool, developed by MIT Lincoln Laboratory and the California Department of Forestry and Fire Protection (CAL FIRE), can also be accessed by law enforcement officials, emergency services, technical specialists, public utilities and other departments and agencies involved in incident response. It has been described as “technology for the tired, dirty, hungry – dirt simple to learn, dirt simple to use.” NICS provides collaboration and communication capabilities across all echelons of responders, according to the NICS website. Among the benefits listed are its ability to enhance the quality and accessibility of sensor data and to integrate location data for resources, vehicles and personnel. During an incident, NICS provides an information backbone that manages and distributes data, including real-time vehicle location feeds, weather, critical infrastructure, and terrain information. 
Firefighters can use NICS to quickly do real-time fire mapping on the scene, including an incident perimeter, staging areas, evacuation zones, road blocks, division breaks and the symbology commonly used on incident maps. The maps are immediately available over the Internet to anyone with access to the incident. NICS offers graphical tools, including geo-referenced virtual whiteboards, for dynamic interagency collaboration. "This technology has been great for our effectiveness during emergency planning and during incidents," said Greg Alex, fire captain and pre-fire engineer for the San Luis Obispo County Fire Department, in a comment in the Wildfire Today article. "It has provided information that was critical to our first responders and command staff. This information has saved time and money and made us extremely efficient at providing live intelligence to all our cooperating agencies during incidents for three years." "NICS has also given us the ability to load pre-planning information specific to our response area," continued Alex. "The ability to load critical GIS data in the field during the events has enabled us to quickly brief cooperating agencies, ground units and regional planning staff," improving firefighter and public safety. The free, open-standards-based NICS can be used on computers, tablets and hand-held devices, and is compatible with Windows, iOS, Linux and Android, and with the Web browsers Chrome, Firefox, Safari and later versions of Internet Explorer. The app will not replace all methods of communication: firefighters working in remote locations may not have Internet access. However, higher-level managers in offices will still be able to create and share information. While NICS is available for free to any emergency response organization, it is currently used primarily by state and local agencies in California. No federal wildland fire agencies are using the system, according to Wildfire Today.
The Department of Homeland Security's Science and Technology Directorate has funded the project so far, but its participation will end in October of this year. Meanwhile, the program is seeking funding for the next five years. NICS is described as being 20 percent complete, according to retired California fire chief Bob Toups, NICS California liaison, and Dr. Jack Thorpe, acting director of NICS' user group. They are seeking plug-and-play apps to expand NICS' capabilities, similar to those for smartphones and tablets, from the emergency response community, or indeed anyone.
The term "social engineering" refers to a wide range of malevolent activities carried out through human interaction. It employs psychological tricks to persuade users to make security mistakes or divulge critical information.

Social engineering attacks are carried out in a series of steps. To carry out an attack, a perpetrator first examines the intended victim to obtain background information, such as possible avenues of entry and weak security mechanisms. The attacker then works to acquire the victim's trust and provide stimuli for further actions that violate security protocols, such as exposing sensitive data or granting access to crucial resources.

How Do Social Engineering Attacks Happen?

Social engineering relies on human error rather than software or operating system flaws. Legitimate users' errors are less predictable, making them more difficult to detect and prevent than malware-based intrusions, which makes social engineering particularly harmful. Some of the most common types of social engineering attacks are listed below.

Types of Social Engineering Attacks

Baiting: As its name implies, baiting attacks use a false promise to pique a victim's greed or curiosity. They lure users into a trap that steals their personal information or infects their systems with malware.

Scareware: The victims of scareware are assaulted with false alerts and bogus threats. Users are duped into believing their system is infected with malware, prompting them to install software that has no purpose (other than to benefit the offender) or is itself malware. Deception software, rogue scanner software and fraudware are all terms used to describe scareware.

Pretexting: An attacker gathers information by telling a series of carefully constructed lies. A perpetrator may start the scam by professing to need sensitive information from a victim in order to complete an essential task.
Phishing: Phishing scams, which are email and text message campaigns aimed at instilling a sense of urgency, curiosity or fear in victims, are one of the most common social engineering attack types. The message pressures people into disclosing personal information, visiting fraudulent websites or opening malware-infected attachments.

Spear phishing: This is a more focused variation of the phishing scam, in which the perpetrator targets specific people or businesses. Attackers personalize their messages based on the traits, job titles and contacts of their victims in order to make the attack less obvious.

To carry out their schemes and lure victims into their traps, social engineers exploit human emotions such as curiosity and fear. As a result, be cautious if you receive an alarming email, are enticed by a website's offer, or come across stray digital media lying around. Being vigilant can help you avoid most social engineering attacks that take place online. Below are some ways to prevent social engineering:

- Do not open emails or attachments from unknown senders.
- Use multifactor authentication.
- Be cautious of seductive and alluring offers.
- Make sure your antivirus and antimalware software is up to date.
As part of our internal security research, we discovered numerous products in production environments installed with insecure permissions. One of these products was Node.js, and we decided to investigate further. Node.js is vulnerable to local privilege escalation attacks (LPEs) under certain conditions on Windows platforms. More specifically, improper configuration of permissions in the installation directory allows an attacker to perform two different escalation attacks: %PATH% and DLL hijacking.

To demonstrate this flaw, we first downloaded the latest version of Node.js from https://nodejs.org/en/download/. During our research, we tested using Node.js version 14.17.0. We followed the standard installation steps, except for the installation directory, which we changed to C:\tools using the installer GUI, as shown below. We also selected the option to "automatically install the necessary tools".

After installation we then reviewed the permissions on the installation folder. In the screenshot below, note the improper permissions, BUILTIN\Users Allow *, on C:\tools, which are inherited from the drive root. This gives any local user the ability to create arbitrary files in the installation directory. This unprotected directory has also been added to the system %PATH% variable, allowing an attacker to drop malicious executables in that directory and have them executed by other users in certain circumstances. (Note that you may have to start a new PowerShell instance to see the updated %PATH% variable.)

To fully demonstrate the implications of this vulnerability, first create a new unprivileged user. Then, as this user, drop a malicious exe into the C:\tools directory and rename it to npm.exe. For testing purposes, you can simply do cp node.exe npm.exe. Note that the same could be done for other commands installed alongside Node.js. Windows will search for a program with the .exe extension first, meaning that the malicious npm.exe will take precedence over the real npm launcher.

Now, as the privileged user, try running npm.
This should drop you into the Node.js shell, demonstrating how an attacker could run a malicious executable. A writable %PATH% directory would also allow an attacker to hijack the execution of any commands that come later in the path. From the default Node.js installation, this would include chocolatey, a software management tool for Windows. However, such a vulnerability would affect programs installed in the future as well.

Aside from the %PATH% vulnerability, the insecure permissions could also allow an attacker to perform a DLL hijacking attack against node.exe. Using Process Monitor, we can confirm that node attempts to load a number of DLLs from the unprotected folder. Adding malicious versions of these DLLs would allow for arbitrary code execution as the user running such a service.

| Date | Event |
| --- | --- |
| May 27, 2021 | Reported to https://hackerone.com/nodejs |
| June 5, 2021 | Vulnerability triaged |
| June 11, 2021 | Node.js provided a proposed fix for review |
| June 12, 2021 | DeepSurface shared feedback on the proposed fix |
| July 1, 2021 | Node.js released versions v12.22.2, v14.17.2, and v16.4.1 containing the fix |
| July 6, 2021 | DeepSurface Security advisory released |

Node.js users on Windows should upgrade to version v12.22.2, v14.17.2, or v16.4.1, according to the major version currently in use. In addition, review the permissions of the installation directory to ensure only privileged users are able to add or modify files. This is particularly important if Node.js is installed in a non-default location (e.g. outside of C:\Program Files\). For more information, see our previous post discussing these very common permissions issues.
Google and Oxford Internet Institute launch guide to AI in Arabic

Copyright by dailynewsegypt.com

Google launched, on Wednesday, 'The A to Z of AI' guide in Arabic in collaboration with the Oxford Internet Institute. The Arabic-language guide aims to make information on Artificial Intelligence (AI) more universally accessible. It is a series of simple, bite-sized explainers to help everyone understand what AI is, how it works, and how it is changing the world.

Search interest in AI-related queries on Google Search has grown in the last few months, due to the increasing number of jobs requiring skills in the field across the Middle East and North Africa (MENA) region. While there is plenty of information out there on AI, it is not always easy to distinguish fact from fiction, or to find simple explanations.

AI is powering practical tools that exist all around us

You have probably interacted with AI without even realising it. Artificial intelligence is the name given to any computer system taught to mimic intelligent human behaviours. These technologies show up in everyday life, whether by helping organise photos on our smartphones or planning a commute to work. They use computer programming to do tasks that have historically required considerable human intellect and work, helping make our lives more efficient. We have all seen major progress over the past decade, sparked by faster computers and the introduction of techniques such as machine learning.

Read more: www.dailynewsegypt.com
Data at rest

What is data at rest?

Data at rest refers to all data that is stored passively in databases, file servers, endpoints, removable storage devices and offline backups. Data at rest is inactive and often considered less of a target (by admins) than other data classifications, so it is often secured with inadequate controls. However, sensitive data at rest, such as PII, ePHI and credit card information stored in insecure locations, is a major source of cybersecurity weakness and a magnet for malicious entities looking to steal data.

Classification of data states

Data at rest is one of the three classifications of data states. The other classifications are data in motion and data in use. Data in use is data that is currently being accessed, modified or processed by an organization and its stakeholders. Data in motion is data that leaves the periphery of the company to be used by external stakeholders or taken out by employees working remotely. These three data-state classifications are useful in planning and implementing data leak prevention (DLP) policies.

Data at rest vs. data in motion

Each data state warrants a different approach to security and control. The points of difference can be viewed in terms of data use, transmission, vulnerability to attacks, and security controls.
| Point of difference | Data at rest | Data in motion |
| --- | --- | --- |
| Usage | Static data that is not currently accessed, modified or processed by users | Data that is being shared |
| Transmission | Only on demand, or never | Continuously or frequently shared |
| Vulnerability to attacks | High for cloud storage backups; low for offline backups | Always high: unencrypted data passing through the internet, unsecured removable storage devices used to carry data |
| Effective security control | Offline backups with strong physical safety controls | Encryption of data passing through the internet; restricting USB devices and file copy activity with a data leak prevention solution |

Threats to data at rest

The various threats that data at rest can be exposed to include:

- Hackers trying to gain access to cloud backups of data
- Poor physical security controls for offline backups
- Data loss due to inadvertent physical storage damage
- Unauthorized access gained by users
- Careless employees exposing data stored in an organization's devices during remote work

How to secure and protect data at rest

Data at rest should be protected from two angles: protection against insider activity and protection from external entities. Unintentional mistakes by careless employees or calculated data theft attempts by insiders can damage the company massively if a data breach occurs. Controls that authorize use only for appropriate users will help limit risks. For data stored passively on endpoints, a DLP solution can help track and block exfiltration of data. Stringent user permissions management is essential to reduce insider incidents.

It is difficult to ensure that all security holes are plugged and that all risks are eliminated. However, organizations can safeguard against hackers by employing encryption as one of the security layers that shield enterprise data. As a last resort, secure offline backups are a must to lessen the impact of data theft.
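A DLP tool's first job, finding sensitive data sitting at rest in unexpected places, can be illustrated with a toy scanner that flags candidate credit card numbers using the Luhn checksum. Real products add context analysis, many more PII/ePHI patterns, and far better validation; the regex and sample text here are simplified illustrations.

```python
import re

# 13-16 digits, optionally separated by spaces or dashes.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(digits: str) -> bool:
    """Luhn checksum used by payment card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def scan_text(text: str):
    """Return substrings that look like valid card numbers at rest."""
    hits = []
    for m in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", m.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            hits.append(m.group())
    return hits

sample = "invoice ref 12345, card 4111 1111 1111 1111, phone 555-0100"
print(scan_text(sample))  # → ['4111 1111 1111 1111'] (the only span that passes Luhn)
```

The Luhn step matters: it filters out most random digit runs (invoice numbers, phone numbers) so the scan reports genuine candidates rather than every long number in a file share.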
Log4j is a widely used software library from the Apache Foundation that shows up in products across a vast array of industries. On December 10, a serious zero-day vulnerability in Log4j was reported that impacts a dizzying array of potential victims, from Minecraft servers to mobile phones to industrial control systems (ICS).

Log4j's job is to log things, a totally normal, and indeed necessary, task in any software system. Sensible developers the world over included Log4j in their software rather than reinventing the wheel with a bespoke logging system. Its very utility is the reason Log4j is so widespread. Unfortunately, this vulnerability allows attackers to write malicious "messages" into a log that can then be used to execute code loaded from compromised LDAP servers. In a nutshell, the Log4j vulnerability (called Log4Shell) is the result of overly provisioned features that were enabled by default, an insecure default configuration, and the implicit trust of messages. It has received the identifier CVE-2021-44228.

When a high-profile vulnerability like Log4Shell hits the front page, ICS operators want to know immediately if they are affected, because malicious actors are generally the fastest to respond to these events (and the exploits swiftly follow). Unfortunately, it can be difficult to know if you are harboring a vulnerable library like Log4j.

SBOMs (Software Bills of Materials) are the first and best tool for uncovering hidden vulnerabilities like Log4Shell. Vendors need to be fully transparent about the components that comprise their software, including all subcomponents. They can no longer be selective arbiters of advisory information. VEX documents are companion documents to SBOMs that allow a vendor to communicate whether a reported vulnerability is present in a particular product and whether it is actually exploitable. Perhaps a vendor's product uses Log4j, but a previous version that is unaffected by the vulnerability.
Or the software is configured in a way that makes the exploit impossible. What really matters is whether the product you are using can be exploited via the vulnerability. VEX gives you a definitive answer in a machine-readable form.

A lot of material has been published in the last 96 hours on Log4j. We've curated the articles and compiled a list of those we believe to be the most helpful. Note that the graphic misses a key point: if you don't know that the software you use contains Log4j, you won't know whether you should patch, block malicious traffic, or perhaps do nothing at all!

What's the Deal with the Log4Shell Security Nightmare? An excellent layman's overview of the situation.

CISA warns public, private sector of critical Log4j vulnerability. News editor Anna Riberio covers CISA's response to Log4j and includes commentary from industry leaders.

Apache Log4j Vulnerability Guidance page. CISA is updating this webpage as it develops further recommendations.
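As a rough illustration of the triage problem described above, the sketch below scans log lines for the classic `${jndi:` lookup string. This is a naive, assumption-laden detector: real exploit strings use many obfuscations (nested lookups, case tricks) that a simple pattern will miss, so it is no substitute for an SBOM-driven inventory of where Log4j actually lives.

```python
import re

# Naive pattern for the classic exploit string. Attackers routinely
# obfuscate it (e.g. nested ${...} lookups), so treat a miss as
# "unknown", not "safe".
JNDI_RE = re.compile(r"\$\{\s*jndi\s*:", re.IGNORECASE)

def flag_suspicious(lines):
    """Return the subset of log lines containing a JNDI lookup string."""
    return [line for line in lines if JNDI_RE.search(line)]
```

A scanner like this can help prioritize which logs to inspect first, but only an inventory of components (an SBOM) tells you whether the software writing those logs is vulnerable in the first place.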
A comprehensive introduction to understanding what an API is

If you are trying to become familiar with technology that relies on the interconnection between applications and services, you have probably heard the term API several times. If you still do not know what it is, we can introduce it by saying that API is the abbreviation of Application Programming Interface. Maybe this does not tell you much, but without APIs, communication between online services and users, for example, would not be possible or would be deficient. In this article, we try to clarify the basics of what an API is and how you can use it.

What is an API?

As we mentioned earlier, API comes from the concept of an Application Programming Interface. Basically, an API is a system of tools and resources within an operating system that allows developers to create and communicate software applications. In programming terms, an API is what is generally known as an abstraction layer. If that is not clear enough, we can explain it more simply: an API is software that acts as an intermediary between two applications. It is a messenger that helps applications communicate, or talk, to each other.

If we think of an illustrative example from everyday life, imagine what happens when you order a pizza over the phone. Before the pizza arrives at your place, you must tell the waiter in charge of taking phone orders what kind of pizza you want, right? Then the waiter asks the chef (the system) to prepare your pizza as you ordered. Once it is ready, it is delivered to your address. In this situation, the waiter who takes the order over the phone and communicates it to the cook is the API. Why? Because the waiter is responsible for taking your order to the system and giving you back what you requested, just like a real API does when it provides a communication channel for applications and services to talk to each other.
The previous example shows in a very basic way the function of an API; however, there are several cases in which multiple APIs are used at the same time, for example, when you book hotels or airline tickets through travel portals like Kayak, Kiwi or Expedia. All of them connect you to hotels and airlines through different APIs to satisfy your search request.

One more example to finish our explanation of the functionality of an API is what happens when you request an Uber through Google Maps. When Google Maps asks Uber for information such as travel cost, car availability and location, the Uber API receives this request, processes it on the Uber platform and returns the results to Google Maps. Major publications such as Forbes and TechCrunch considered 2017 the year of the API economy.

What is a REST API?

A REST API is the evolution of the basic API; REST stands for Representational State Transfer. A REST API behaves in a way very similar to a website: it is driven by client-to-server calls, with data returned over the HTTP protocol. Do you remember the example we used with the pizza order? It works just like that. A REST API therefore provides functions for using the services of a site or platform on the Internet that is not ours, for example Twitter or Facebook. We can take as an example the Twitter REST API and the clients that use it, such as Tweetbot, Metrotwit or Birdie.
When each of these clients uses the Twitter REST API, the methods and functions of the API are limited and cannot be modified; that is, new functions cannot be added. In this way, Twitter can be sure that its REST API will behave according to its intended default functioning. The same goes for many other online services.

The advantage of using APIs created by other developers is that you save a lot of time writing code for your own applications, since you can get many functions already encapsulated and successfully tested, with data typically exchanged in formats such as JSON. Thanks to this, you can design your applications more quickly and efficiently, regardless of the programming language you use. Now you know why APIs are so important for interconnecting applications.

Interesting facts about APIs

- From 2007 to date, the use of APIs has increased 13x.
- Google and Facebook receive around 5 billion API calls per day, while Twitter receives 13 billion.
- 60% of eBay transactions are made through its REST API.
- Google has more than 200 APIs. The most used Google API is Maps.
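To ground the discussion, here is a small Python sketch of the client side of a REST-style API call: building a query URL and parsing a JSON response body. The endpoint `api.example.com` and the response shape are made-up assumptions for illustration; a real client would substitute the documented URL and fields of the service it talks to.

```python
import json
from urllib import parse

# Hypothetical endpoint, stands in for any real REST service.
API_BASE = "https://api.example.com/v1/search"

def build_url(base, **params):
    # Serialize query parameters safely instead of concatenating strings.
    return base + "?" + parse.urlencode(sorted(params.items()))

def extract_names(body):
    # Parse a JSON response body and pull one field out of each result.
    data = json.loads(body)
    return [item["name"] for item in data.get("results", [])]
```

In practice the URL produced by `build_url` would be fetched with an HTTP client, and `extract_names` would run on the returned body; the point is that the client never touches the server's internals, only the request/response contract the API defines.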
A solar probe that the Johns Hopkins University Applied Physics Laboratory designed and produced for NASA took off Sunday aboard United Launch Alliance's Delta IV Heavy rocket from Cape Canaveral Air Force Station in Florida. The Parker Solar Probe will deploy its magnetometer boom and high-gain antenna in the mission's first week and kick off in September an instrument testing phase that will continue for approximately one month, NASA said Sunday. The spacecraft, named for physicist Eugene Parker, is expected to reach Venus in October to carry out a gravity assist maneuver and to begin returning scientific observations by December. NASA expects the solar probe, which is designed to help researchers improve space weather event predictions and explore the sun's corona, to perform 24 passes by the sun and six additional Venus flybys during its seven-year mission. The car-sized spacecraft is part of the space agency's Living with a Star program and carries four suites of instruments that will collect solar wind images and investigate energetic particles, plasma and magnetic fields. Those investigations will be led by the Naval Research Laboratory; the University of California, Berkeley; Princeton University; and the University of Michigan, Ann Arbor.
Defining The Fundamental Right of Privacy in Nicaragua Nicaragua’s Law on Personal Data Protection No. 787 of 21 March 2012 is a data privacy law that was passed in 2012. Prior to the passing of Nicaragua’s Law on Personal Data Protection No. 787 of 21 March 2012, data privacy regulations within Nicaragua were largely limited to laws pertaining to specific sectors of the nation’s economy, whether it be banking or healthcare. However, Nicaragua’s Constitution of 1987 does state that citizens of the country have the fundamental right to privacy. To this point, the Law on Personal Data Protection No. 787 of 21 March 2012 further defines this constitutional right by establishing the legal framework that data controllers and processors within Nicaragua must adhere to when processing personal data. How are data controllers and processors defined? Under Nicaragua’s Law on Personal Data Protection No. 787 of 21 March 2012, a data controller is defined as the “person responsible for the data files is any natural or legal person, whether public or private, that, in accordance with the Law, decides on the processing of personal data.” Conversely, the law does not provide a definition for the term data processor, as data controllers and their associated third parties are the only entities permitted to handle personal data under the law. Alternatively, the law defines personal data as “all the information relating to an individual or an entity which identifies them or makes them identifiable”, while sensitive personal data is defined as “information that reveals a person’s racial or ethnic origin, political affiliation, religion, philosophical or moral beliefs, union membership, as well as information related to health or sexual life, criminal or administrative offenses, or financial or credit history, or and any other information that could cause discrimination.” What are the requirements? 
Much like the EU's General Data Protection Regulation (GDPR), Nicaragua's Law on Personal Data Protection No. 787 of 21 March 2012 establishes various principles that data controllers within the country are required to follow when collecting, processing, using, disclosing, or transferring personal data. These principles include:

- Legality: Data controllers are responsible for collecting and processing personal data in accordance with the provisions set out in the Political Constitution of the Republic of Nicaragua, as well as other applicable laws.
- Legitimate purpose: Personal data may only be collected and processed for legitimate purposes, and said data must be adequate, proportionate, and necessary in relation to the purposes for which it is to be used.
- Transparency: Under the law, data controllers are required to provide data owners with certain details pertaining to the collection and processing of their data, among other things.
- Limitation of the conservation period: Under the law, data owners reserve the right "to request the social networks, browsers, and servers to delete and cancel the personal data that are in its files."
- Security measures: Data controllers are responsible for implementing and maintaining technical and organizational security measures for the purpose of protecting all personal data in their possession.
- Confidentiality: Data controllers are responsible for maintaining the confidentiality of all personal data they collect or process.

What are the rights of Nicaraguan citizens under the law?

Under the Law on Personal Data Protection No. 787 of 21 March 2012, Nicaraguan citizens are entitled to the following data protection and personal privacy rights:

- The right to be informed.
- The right to access.
- The right to erasure.
- The right to rectification.
- The right to object or opt out.

In terms of enforcement, Nicaragua's Law on Personal Data Protection No. 787 of 21 March 2012 did not establish a regulatory authority with the power to enforce the provisions set forth in the law. Nevertheless, the law establishes administrative sanctions for those adding or providing false data to a file, registry or database, violating confidentiality and security systems, or disclosing information registered in such databases. The administrative sanctions are:

- Issuing a warning notice;
- Suspending operations; and
- Closing the data files.

As data protection and personal privacy continue to become more pressing and relevant issues amidst our current digital climate, many countries around the world have taken legislative measures to ensure the data protection rights of their respective citizens. In the context of Latin America, Nicaragua joins the list of countries throughout the region that have passed data privacy laws in the last ten years, including Brazil's General Data Protection Law (LGPD) and Colombia's Statutory Law 1581 of 2012 (October 17), among others. To this end, Nicaraguan citizens can rest assured that their personal data is protected whenever they provide said data for a specific purpose.
What is Malware?

The word malware is short for malicious software. Malware is any software that was intentionally designed to damage, disrupt or gain unauthorized access to a device and inflict harm on data and people in different ways. It is a blanket term for viruses, Trojans, worms, and other malicious software that hackers use to destroy or extract data that they can leverage for financial or other gains.

Types of Malware

Some of the common types of malware are:

Virus: A virus is a piece of code that inserts itself into a program or a file and is executed when that program runs. Viruses are designed to damage the computer by corrupting data, formatting storage devices, or damaging the operating system.

Worm: A worm is a standalone program designed to target vulnerabilities in operating systems; it replicates itself to other computers on the same network without requiring a trigger from the user. Because they spread very fast, worms are often used to deliver payloads. Once in position, they can steal data, delete files, give the hacker access through a backdoor, or encrypt data for a ransomware attack.

Trojan: A Trojan horse is malware that disguises itself as a legitimate file or software. Once downloaded by unsuspecting users, the Trojan kicks into action and can install more malware, steal data, monitor users, conduct denial-of-service attacks, modify files, corrupt data and steal financial information. Trojans may hide in apps, games, and software patches. They may also be embedded in emails during phishing attacks.

Ransomware: Just as the name suggests, ransomware is malware that holds your files captive and demands that a ransom be paid for their release. Once it gets into your computer, it encrypts the target files and leaves a message demanding payment, normally in cryptocurrencies. However, there is no guarantee that once you pay, the decryption key will work correctly.
Spyware: Spyware is malicious software that collects information about the user's activities without their consent and sends it to the attacker. The information may include passwords, financial information, logins, and browsing habits. Other types of malware are rootkits, adware, keyloggers, bots, mobile malware, and fileless malware.

Signs of Malware in your Computer

As many small businesses conduct their activities through the internet, their information technology security becomes vulnerable, since many hackers use the internet to conduct their attacks. As part of a security awareness program, the employees of a small business should be able to identify signs of malware. Here are the common telltale signs of a malware attack:

1. Slow Computer

Are your files and apps taking longer than usual to load? Is your computer taking longer to start? If so, this is a potential sign of malware infiltrating your computer. However, a slow computer does not necessarily mean that it is infected. Before you conclude that it is malware, consider what other factors might have contributed to the slowdown. Common causes apart from malware include running out of RAM, running out of storage, a pending update, or the type of program you are running.

2. Running out of Storage

Is your computer unexpectedly running out of space even though the files you have should not fill your storage? This could be another sign of malware. Some malware replicates itself in your computer, consuming all the storage space.

3. Pop-up Messages

Some types of malware, such as adware, bombard their victims with pop-ups, often in the form of advertisements. Some pop-ups are common and do not imply that you have malware. However, if you keep getting pop-ups everywhere, you might need to scan your computer.

4. Increase in Internet Traffic

A suspicious increase in internet traffic can potentially be the result of a malware infection.
In this case, the malware could be uploading or downloading files or apps without your knowledge.

5. Frequent Crashes

Once in a while, it is normal for a computer to crash, especially if its usage is intense. However, frequent crashes should raise eyebrows. Sometimes the crash might display the Blue Screen of Death (BSOD). To check what caused your last crash, go to Control Panel > System and Security > Administrative Tools > Event Viewer and select Windows Logs. Entries marked with "error" are your recorded crashes.

6. Security Solution is Disabled

Did you know that some malware is sneaky enough to disable your security solution? If you notice that your security solution was disabled without your input, you should immediately enable it and perform a full scan.

7. Ransom Demands

Have you ever tried to access some files only to notice they are encrypted and have a strange extension, and then found a text file demanding payment so that your files can be released? If so, that is a ransomware attack. Some even threaten that if you try decrypting the files yourself, you will lose them.

How to Protect your Computer from Malware

With malware, this is one of the scenarios where prevention is better than cure. Some malware is extremely difficult to find, let alone remove. As a user, your focus should be on avoiding possible sources of malware. HacWare is an A.I.-driven security user training platform that helps your team continuously learn how to avoid malware attacks. Here are some ways to protect your computer:

1. Avoid opening suspicious email attachments.
2. Avoid clicking random links whose source you do not know.
3. Download apps only from the Microsoft Store if you are using Windows, or the App Store if you are using a Mac.
4. Only download files from trusted sources.
5. Implement the 3-2-1+ data storage strategy.
This strategy describes a method of keeping at least three total copies of your data: two copies should be local but on different devices or storage media, and at least one copy should be off-site. The "plus" in this strategy takes it one step further and argues that one copy should be both off-site and offline. The offline copy is critical: it is inaccessible from the network, so hackers, ransomware, and malicious insiders can't destroy or ransom the data. This offline, off-site copy means that companies can still maintain business continuity in the event of a catastrophic man-made or natural disaster. Learn more about how HacWare partner Perpetual Storage can help with your data storage strategy.

It is also important to set up a security awareness program at your company, because cybersecurity is a team sport and everyone needs to know how to prevent a malware outbreak. Here is a guide on how to set up a security awareness program in 4 steps.

How to Remove Malware from your PC

The first step in removing malware is installing an anti-malware solution. You need to be very careful with this, because you might end up installing malware in the process. Operating systems also come with an inbuilt malware solution; for example, Windows comes with Windows Defender. If your operating system has one, ensure that it is enabled.

Have you ever come across pop-ups claiming that your computer has been infected by malware and that you need to install their software to prevent damage? Most of those pop-ups will lead you directly to installing malware. They might appear to be scanning your computer, but in reality they are doing nothing, or worse, they might be installing malware.

Once you have your anti-malware solution in place, conduct regular full scans for malware. It is wise to set automatic scans that run without input from the user. Lastly, update your anti-malware solution whenever an update is released.
Malware is rapidly evolving, and your anti-malware solution must evolve with it. An outdated anti-malware solution might not recognize some malware, leaving your computer vulnerable.

Hackers are constantly coming up with new ways to unleash attacks on unsuspecting computer users. With the worldwide internet coverage we experience today, malware is transmitted more rapidly than ever before. Small businesses must therefore invest in information technology security. Having a robust information technology department is not enough to keep you safe; employees also need to undergo security awareness training. Likewise, installing an anti-malware solution alone is not enough: you also need to actively protect your computers by avoiding risky habits such as clicking random links.

HacWare's security awareness and training solution can continuously train your end users to identify internet scams and change their risky behavior. HacWare is an AI-powered security awareness and training platform that helps SMBs continuously train their employees about cybersecurity attacks. Learn more about HacWare at hacware.com. If you are an MSP or Managed Security Service Provider (MSSP), we would love to automate your security education services; click here to learn more about our partner program.
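One of the simplest mechanisms an anti-malware scanner uses is signature matching: comparing a file's cryptographic hash against a blocklist of known-bad hashes. The Python sketch below illustrates the idea; the blocklist contents here are invented, and real products layer heuristics and behavioral analysis on top, since changing a single byte of a file defeats a plain hash match.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Compute the hex SHA-256 digest of a file's contents."""
    return hashlib.sha256(data).hexdigest()

def is_known_malware(data: bytes, bad_hashes: set) -> bool:
    # Signature matching: flag the file if its digest is on the blocklist.
    return sha256_of(data) in bad_hashes
```

A scanner built this way only catches samples that have already been cataloged, which is exactly why the article stresses keeping the anti-malware solution (and its signature database) up to date.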
The Markov Chain, Probabilities, New ML Approaches

A Markov Chain (MC) refers to a mathematical concept that is used to describe transitions from one state to another in accordance with a specific set of probabilistic rules. In the context of artificial intelligence and machine learning applications, MCs are a form of Probabilistic Graphical Models (PGMs), a powerful framework that can be used to represent complex domains in conjunction with probability distributions. To this point, the probability of transitioning to a particular state within an MC will depend on both the current state of the model and the time that has elapsed since that state was realized.

To illustrate this point further, consider an individual who flips a two-sided coin 100 times. As a two-sided coin can only result in two different states, heads or tails, each flip of the coin will have the same probability of landing on one of these two states. That said, the state of the coin at any particular time will be the primary factor that influences the probability of observing the other state. Moreover, if the individual in question were to record every instance where the coin they are flipping landed on either heads or tails, these collective observations would constitute a Markov chain. Analyzing these observations, one would see that the probabilities of landing on heads or tails across the 100 flips are 50% and 50% respectively.

Probabilistic Graphical Models

A PGM represents one of the many ways in which software developers can describe a probability distribution over random variables in relation to a particular machine or deep learning problem. More specifically, PGMs utilize graphs to describe which variables within a particular probability distribution interact with one another: each node within the model represents a variable, while each edge represents a direct interaction between variables.
Through this configuration, these models can be created using fewer parameters than are required by many other models within the realm of machine learning and artificial intelligence. In turn, PGMs can make effective predictions using less data. On top of this, these smaller models also allow software developers to cut computational costs, as PGMs rely on fewer inference operations and samples to operate effectively. To this end, PGMs will generally comprise both a graphical representation of the model and a generative process that outlines the manner in which the random variables within the model are generated. Likewise, PGMs are typically divided into two different types: directed PGMs, otherwise known as Bayesian networks, and undirected PGMs, also known as Markov random fields.

Markov Chain Monte Carlo

Because inferring values exactly with probabilistic models is often unfeasible and impractical, software developers instead use approximation methods to estimate quantities involving the random variables within their models. For this reason, Markov Chain Monte Carlo (MCMC) sampling is one method that can be used to systematically draw random samples from high-dimensional probability distributions. This approach combines the Markov chain concept with the Monte Carlo technique, a method for randomly sampling a probability distribution in order to approximate a particular quantity. Through the application of these two methods, machine learning algorithms can be trained to hone in on the specific quantity being approximated with respect to a probability distribution, even when an expansive number of random variables is involved, effectively facilitating accurate and efficient predictions.
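The MCMC idea can be sketched in a few lines with a random-walk Metropolis sampler drawing from a standard normal target. The step size and the target density here are illustrative choices for the example, not part of any particular library; the essential point is that each proposed move depends only on the current state, so the samples form a Markov chain whose long-run distribution matches the target.

```python
import math
import random

def log_density(x):
    # Unnormalized log-density of a standard normal target distribution.
    return -0.5 * x * x

def metropolis(n, step=1.0, seed=42):
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n):
        proposal = x + rng.uniform(-step, step)
        delta = log_density(proposal) - log_density(x)
        # Always accept uphill moves; accept downhill moves with prob e^delta.
        if delta >= 0 or rng.random() < math.exp(delta):
            x = proposal
        samples.append(x)
    return samples
```

Averaging functions of these samples approximates expectations under the target distribution, which is exactly the kind of quantity that is intractable to compute exactly in high-dimensional models.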
Despite the complex mathematical concepts associated with predictive algorithms, the idea behind these models is relatively simple, enabling machines to grasp human tasks that involve making predictions. Through the application of the Markov chain concept, software engineers have been able to create applications that predict baseball scores, stock market performance, and future weather, among a host of other applications. In this way, consumers in our current digital age have been able to leverage new products and services in their everyday lives in new and intuitive ways, as predicting future occurrences such as the weather within a given region of the world has historically been a painstaking and arduous process requiring a high degree of understanding, training, and specialization.
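The weather-prediction applications mentioned above can be illustrated with a tiny two-state Markov chain, where the probability of tomorrow's state depends only on today's. The transition probabilities below are invented for the example.

```python
import random

# Rows are the current state; each entry gives the probability of
# transitioning to the listed next state. Probabilities in invented rows
# are for illustration and sum to 1.
TRANSITIONS = {
    "sunny": [("sunny", 0.8), ("rainy", 0.2)],
    "rainy": [("sunny", 0.4), ("rainy", 0.6)],
}

def step(state, rng):
    # Sample the next state from the current state's transition row.
    r, cumulative = rng.random(), 0.0
    for nxt, p in TRANSITIONS[state]:
        cumulative += p
        if r < cumulative:
            return nxt
    return TRANSITIONS[state][-1][0]  # guard against float rounding

def simulate(start, n, seed=7):
    rng = random.Random(seed)
    chain, state = [start], start
    for _ in range(n):
        state = step(state, rng)
        chain.append(state)
    return chain
```

For this matrix the stationary distribution works out to 2/3 sunny and 1/3 rainy, so a long simulated chain spends roughly two thirds of its days sunny regardless of the starting state.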
Ageing (British English) or aging (American English) is the process of becoming older. It represents the accumulation of changes in a person over time. In humans, ageing refers to a multidimensional process of physical, psychological, and social change. Reaction time, for example, may slow with age, while knowledge of world events and wisdom may expand. Ageing is an important part of all human societies, reflecting the biological changes that occur, but also reflecting cultural and societal conventions. Ageing is among the largest known risk factors for most human diseases. Roughly 100,000 people worldwide die each day of age-related causes.

Population ageing is the increase in the number and proportion of older people in society. Population ageing has three possible causes: migration, longer life expectancy (decreased death rate) and decreased birth rate. Ageing has a significant impact on society. Young people tend to have fewer legal privileges (if they are below the age of majority), and they are more likely to push for political and social change, to develop and adopt new technologies, and to need education. Older people have different requirements from society and government, and frequently have differing values as well, such as for property and pension rights.

Recent scientific successes in rejuvenation and in extending the lifespan of model animals (mice by 2.5 times, yeast and nematodes by 10 times), along with the discovery of a variety of species (including humans of advanced age) exhibiting negligible senescence, give hope that negligible senescence might be achieved for younger humans (effectively cancelling ageing), that ageing might be reversed, or at least significantly delayed. Regenerative medicine is a branch of medicine associated with the treatment of age-related diseases. Ageing is the major cause of mortality in the developed world.
An anonymiser, or anonymous proxy, masks a person’s activity on the Internet. Typically, this is done using a proxy server. The proxy server acts as a go-between – accessing the Internet on behalf of the client computer, while shielding the client’s personal information (e.g. disabling cookies, hiding the client IP address). Growing concerns about online privacy have led to an increased use of anonymisers (e.g. the Tor browser).
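Using Python's standard library, a client can be pointed at a proxy in a few lines; the sketch below assumes a hypothetical proxy listening at 127.0.0.1:8080. Note that a single proxy of this kind hides the client's IP address from the destination server but can itself observe the traffic, which is one reason systems like Tor route requests through multiple relays instead.

```python
import urllib.request

def make_proxied_opener(proxy_url: str) -> urllib.request.OpenerDirector:
    # Route both HTTP and HTTPS requests through the given proxy, so the
    # destination server sees the proxy's address rather than the client's.
    handler = urllib.request.ProxyHandler({"http": proxy_url, "https": proxy_url})
    return urllib.request.build_opener(handler)
```

With an opener in hand, `opener.open("http://example.com")` would send the request via the proxy rather than connecting directly.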
By Hillary Grigonis Organizations around the world have conducted studies on the potential dangers of flying unpiloted aircraft — but what about their life-saving capabilities? On March 14, DJI released a study based on news reports indicating that drones have saved at least 59 lives. With 38 of those rescues occurring between May 2016 and February 2017, that number averages almost one life per week. And drone use for life-saving purposes is only increasing. Search and rescue teams are quickly adopting imaging drones to act as eyes in the sky — and the Red Cross will even soon have its own drone-launching Land Rover. Meanwhile, DJI’s report has an unexpected statistic. One third of those rescues were not from rescue teams, but from volunteers operating their own consumer drones, suggesting that even hobbyist drones are making a positive impact.
Has your IT provider gotten around to developing a man-in-the-middle attack prevention plan yet? If not, they need to do so—now. Man-in-the-middle (MITM) attacks are another in a long list of schemes hackers use frequently, and companies are surprisingly vulnerable to them. It seems like every day cybercriminals come up with clever new ways to separate companies from their data. MITM attacks are just the latest in a long saga of nefarious innovations. With 43 percent of all cyberattacks targeting SMBs, and costing each around $200,000 on average, companies can’t afford not to stay abreast of the latest trends in cybersecurity, even if that’s a full-time job on its own. Read on to learn about the nature and prevention of MITM attacks to help keep your company safe. What Is a Man-in-the-Middle Attack? In a nutshell, an MITM attack is a type of eavesdropping attack in which a hacker sits on a network and intercepts traffic as it’s transmitted from point to point. A hacker may simply be listening to network traffic, or may engage in active eavesdropping, where communications are intercepted, manipulated, then handed off between two connections as if the parties were communicating directly. It’s a lot like a third party listening in on a telephone conversation without either of the two speakers knowing the third person is there. Any sensitive information passed during that conversation—such as login data, credit card numbers, or trade secrets—is intercepted and stored by the hacker. Any insecure network is susceptible to an MITM attack. This might include: - Public networks such as those at a coffee shop or a public library. - Public-facing company networks, i.e. the network a company uses to conduct its business, which is also viewable to anyone else within range. - Unsecured printer or office device networks that printers use to communicate with computers and vice versa.
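The eavesdropping risk on an insecure network can be made concrete with a toy sketch. The XOR "cipher" below is deliberately trivial and stands in for real encryption such as TLS; the credentials and key are made up. The point is only that a passive listener captures whatever bytes cross the wire, and plaintext traffic hands secrets over verbatim.

```python
# Toy illustration of passive eavesdropping. An attacker on the network path
# sees every byte in transit; if traffic is plaintext, secrets are exposed.
# The XOR "cipher" is NOT real encryption -- it stands in for TLS here.

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    """Symmetric toy cipher: XOR each byte with the repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret_login = b"user=alice&password=hunter2"
key = b"not-a-real-key"  # hypothetical shared secret

# Scenario 1: plaintext traffic -- the sniffer captures the secret verbatim.
captured_plain = secret_login

# Scenario 2: encrypted traffic -- the capture differs from the plaintext,
# and only a holder of the key can recover it.
captured_encrypted = xor_encrypt(secret_login, key)

assert b"password" in captured_plain
assert captured_encrypted != captured_plain
assert xor_encrypt(captured_encrypted, key) == secret_login  # key recovers it
```

This is why the article's advice centers on encryption (VPNs, TLS, encrypted device networks): an eavesdropper can always capture traffic on a shared medium, so the defense is to make the capture worthless.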
The Value of a VPN Occasionally, an employee may find it necessary to connect to a public network in order to perform business tasks. Whether the employee is working from the road or on a business trip, circumstances arise where public networks are unavoidable. One of the first lines of defense against eavesdropping attacks while outside the company network environment is the virtual private network (VPN). A VPN encrypts and obscures network traffic using a “tunnel” so that it can’t be intercepted and viewed by outside parties. Since it works from endpoint to endpoint, VPN traffic is difficult for unauthorized third parties to capture. Using a VPN is a best practice for any corporate computers that may be used outside of the office. Although accessing business information on a non-business device or network is never ideal, this kind of access is virtually impossible to avoid. Likewise, many remote workers enjoy working in diverse environments, but this flexibility comes with public networks of questionable security. Public networks, such as those at a coffee shop, are especially vulnerable to eavesdropping because they’re frequently open (lacking password protection) or lacking basic privacy mechanisms. In these circumstances, the use of a VPN is a crucial step for protecting sensitive information. Other MITM Attack Prevention Techniques A VPN is an excellent tool to prevent MITM attacks, but it’s just one of many techniques that a company can adopt. Since MITM attacks take advantage of visible, unencrypted, or under-secured networks, consider some of the following tactics. Segment the Network Network segmentation is a practice that splits large networks into smaller ones, typically limiting or preventing access and communication between one network and another. Many companies use network segmentation to split the part of the network which is publicly available from the network which employees use to communicate with one another (think “guest” networks).
Companies may offer guest networks to customers for convenience, but segmentation is critical in these scenarios. Non-company-owned devices—such as tablets or cellphones, even those owned by company employees—should never be allowed to connect to a company’s primary internal network. Use Encryption on All Networks MITM attacks can target more than just the networks through which computers connect to the internet. Deploy encryption on networks used by printers or other office devices to ensure that jobs can’t be intercepted and stolen when they’re sent to print. Practice a Zero-Trust Security Model Since MITM attacks may masquerade as legitimate connections, practice a zero-trust security model. This requires users to authenticate themselves each time they connect to the network, regardless of who or where they are. With zero trust, it’s harder for hackers to impersonate someone else because they must prove their identity before they can access the network at all. Consider Using Managed IT Services Managed IT services can help a company harden its network and develop man-in-the-middle attack prevention strategies. Consider enlisting the help of a managed IT service provider for a security assessment to discover where networks are susceptible to these kinds of attacks. Man-in-the-Middle Attack Prevention with CDS Man-in-the-middle attack prevention starts with securing a network so that outside users cannot access it without going through the proper authentication procedures. A security specialist like CDS can help companies develop a framework that includes not just state-of-the-art security but also employee training in security best practices to enhance existing security features. With a specialist on the company’s side, threats like MITM attacks can be prevented before they have a chance to occur.
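The zero-trust idea described above can be sketched as a server that verifies a cryptographic proof of identity on every single request and deliberately ignores where the request comes from. The key, user names, and "proof" format here are hypothetical demo values, not a production design.

```python
# Minimal sketch of zero-trust request handling: every request must prove its
# identity, and network location is never a substitute for credentials.
import hashlib
import hmac

SERVER_KEY = b"demo-key-not-for-production"  # hypothetical secret

def sign(user: str) -> str:
    """Issue an HMAC proof of identity for a user (a stand-in for a real
    credential such as a signed token or client certificate)."""
    return hmac.new(SERVER_KEY, user.encode(), hashlib.sha256).hexdigest()

def handle_request(user: str, proof: str, from_internal_network: bool) -> str:
    # Zero trust: the from_internal_network flag is deliberately ignored --
    # being "inside" the network earns no trust.
    if hmac.compare_digest(proof, sign(user)):
        return "granted"
    return "denied"

# A valid proof is accepted even from an external network...
assert handle_request("alice", sign("alice"), from_internal_network=False) == "granted"
# ...while an attacker already inside the network is still refused.
assert handle_request("mallory", "forged-proof", from_internal_network=True) == "denied"
```

The design choice worth noticing is that authorization is recomputed per request from the credential alone, which is exactly what makes a man-in-the-middle's network position useless on its own.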
CDS helps Illinois companies, governments, and organizations harden their security infrastructure to stay safe in the era of cybercrime. Contact us today for an assessment of your current cybersecurity strategy.
Some of the biggest barriers to cloud adoption are concerns about security, data loss/leakage, and the associated legal and regulatory concerns with storing and processing data off-premises. Several cloud data breach incidents in recent years indicate that these concerns are warranted, given constant insider and outsider threats. The challenge with Infrastructure as a Service (IaaS) offerings in the cloud is that customers lack a guarantee that the infrastructure is secure against threats and have to trust that their cloud providers will not inadvertently or purposefully access the data being processed on their infrastructure. Confidential Computing is an emerging initiative focused on removing the need to trust such providers, and one of the most promising approaches for achieving this is Secure Enclaves, or Trusted Execution Environments. Secure enclaves remove the need for trusting IaaS providers by enforcing a provably secure environment that is inaccessible by other applications, users, or processes colocated on the system. Secure enclaves address the fundamental problem of securing data-in-use, one of the states that digital data can be in. Data-in-use refers to any data that is being actively processed by a system, as distinct from data that is being stored for later use or moved across a network. Most current approaches to protecting data-in-use rely on access controls and policies, client-side encryption techniques, and data breach detection, but none can prevent the threat in its entirety. Other approaches for protecting data-in-use, such as homomorphic encryption, are still far too inefficient for many production workloads. To fully realize the benefits of the cloud, providing confidentiality and integrity guarantees for data at application runtime is vital. As data becomes increasingly valuable, so do the insights which can be derived from it.
Aggregating data from multiple data sources for joint analysis, such as rich data analytics and machine learning training, has also become an increasingly powerful technique, as witnessed by the growing interest in federated learning and collaborative analytics. Recent advances in secure enclave technologies finally enable such collaboration to occur securely and confidentially. Secure Enclaves Explained A secure enclave (also known as a Trusted Execution Environment) is a computing environment that isolates code and data from the operating system, using either hardware-based isolation or isolation of an entire virtual machine, with the hypervisor placed within the Trusted Computing Base (TCB). Even users with physical or root access to the machine and operating system should not be able to learn the contents of secure enclave memory or tamper with the execution of code inside the enclave. There are several implementations of secure enclaves available, both for consumer electronic devices and workstations and for data centers. Most notable among them are Intel Software Guard Extensions (SGX), AMD Secure Encrypted Virtualization (SEV), and AWS Nitro Enclaves. Not all secure enclave implementations are created equal: there is a tradeoff between security and convenience, and depending on the complexity of the application, using one secure enclave implementation over another may make more sense. Intel SGX provides integrity and confidentiality guarantees even if all privileged processes on the machine are malicious, using a remote attestation procedure that allows clients to verify that known application code is running within the enclave. There is good support for developing applications atop Intel SGX in the form of developer SDKs such as the OpenEnclave SDK and the Intel SGX SDK.
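The measurement-comparison step at the heart of remote attestation can be sketched in a few lines. This is only the conceptual core: real SGX attestation additionally involves hardware-rooted keys and signed quotes from the CPU, which this toy omits. The "enclave code" strings below are made-up placeholders.

```python
# Conceptual sketch of remote attestation: the client compares a cryptographic
# "measurement" (a hash) of the code the enclave reports it is running against
# the measurement of the code the client expects. Real attestation also binds
# this measurement to the hardware with signatures; this shows only the idea.
import hashlib

def measure(code: bytes) -> str:
    """A code measurement: a cryptographic digest of the enclave's contents."""
    return hashlib.sha256(code).hexdigest()

expected_code = b"def process(data): return aggregate(data)"  # hypothetical
expected_measurement = measure(expected_code)

# An honest enclave reports the measurement of the code it actually runs.
honest_report = measure(expected_code)
# A tampered enclave cannot report the expected measurement truthfully.
tampered_report = measure(b"def process(data): exfiltrate(data)")

assert honest_report == expected_measurement     # attestation succeeds
assert tampered_report != expected_measurement   # tampering is detected
```

Because any change to the code changes its digest, a client that knows the expected measurement can refuse to send secrets to an enclave running anything else.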
AMD SEV eliminates the need for developer SDKs for building customized enclave-compatible applications by encrypting the entire memory of a VM, thus securing it in the presence of a “benign but vulnerable” hypervisor. However, since the hypervisor controls the VM’s access to all hardware resources, physical memory, and I/O operations, there is a much larger attack surface, and extra precautions must be taken to guard against the resulting threats. AWS Nitro Enclaves are built upon the Nitro System, in which the hypervisor is very lightweight since much of the typical hypervisor functionality is modularized and offloaded to dedicated Nitro Cards. These enclaves run alongside an existing AWS EC2 instance but exist as a separate, isolated VM. A Nitro Enclave can only be accessed by an application running in the same EC2 instance, and at the time of writing, only one enclave per EC2 instance can be created. All of the major cloud providers now have secure enclave offerings: - Microsoft Azure offers machines with both AMD- and Intel-based processors - AWS offers Nitro Enclaves - Google Cloud Confidential VMs are powered by AMD EPYC™ processors - IBM offers secure enclaves in its IBM Z® Platform How is this different from traditional encryption? Most systems today have adopted standard techniques for protecting data-at-rest and data-in-motion using SSL/TLS and symmetric and asymmetric key encryption. When data needs to be used, its contents are decrypted in memory. This is where the problem arises. An application that operates on some data must be able to see that data; but if the data is sensitive and contains PII or other confidential information, then application users must trust that the application is not leaking data, that the platform on which the application is running is secure, and that malicious insiders or hackers will not gain access to their data. We call this the problem of protecting data-in-use.
- Data-at-rest: refers to stationary data that is stored in non-volatile memory such as databases, cloud storage, or hard drives - Data-in-motion: refers to data that is actively being transmitted from one location to another, typically over some network - Data-in-use: refers to data that is actively being processed by a system for the purposes of some application. Secrets such as keys and passwords, or any PII, are candidates for data-in-use which might need to be secured Other Methods of Securing Data-in-Use Current approaches to protecting data-in-use often rely on enforcing access controls, where access to systems or data is restricted to only those who need it; however, this approach is susceptible to human error or access-policy misconfigurations, which may go undetected. Additionally, managing privileged access does not protect against credential leakage: unauthorized actors could still gain access to sensitive data through leaked passwords and keys. Another approach is homomorphic encryption (HE), a technique that allows data to be encrypted in such a way that computations can be performed directly on the encrypted data. The concept has been around for over 40 years, and in that time several implementations and variations of the scheme, ranging from Partially Homomorphic Encryption (PHE) to Fully Homomorphic Encryption (FHE), have emerged. However, there are a couple of pitfalls to homomorphic encryption which have prevented its adoption in the industry: - Speed: All current FHE implementations are many orders of magnitude slower (currently anywhere from 1,000 to 1,000,000 times slower) than equivalent processing on plaintext data and are thus impractical for many production workloads. - Lack of integrity: Homomorphic encryption schemes are inherently malleable, a property that allows encrypted data to be transformed into another valid encryption by applying some function to it.
An attacker with access to the encrypted data cannot decrypt the underlying data (unless they possess a decryption key), but they could transform and replace the data without detection, hence violating data integrity. Homomorphic encryption would need to be combined with techniques such as zero-knowledge proofs to provide additional guarantees to mitigate this concern. Secure enclaves provide an alternative solution that is both efficient and less susceptible to human misconfiguration. See our previous blog post to learn more about the differences between using secure enclaves and homomorphic encryption. Does Your Company Need Secure Enclaves? Enterprises face a constant onslaught of internal and external threats. According to a recent survey by Pulse sponsored by Arm, using secure enclaves in the enterprise setting is attractive for implementing safeguards in the following scenarios: - Protect against insider threats – data in the cloud is accessible to the database administrators of the cloud applications or infrastructure via direct access to the database, application logs, and device memory - Prevent platform software (i.e. a platform hypervisor) from accessing data - Protect data from adjacent workloads in a multitenant/multiuser environment - Protect the integrity of crowdsourced ML models Secure enclave technologies can benefit you if you fall into any of the following categories: - You are a company that already stores confidential data in the cloud and wishes to analyze and transform that data directly in the cloud - You are a platform or service provider that wishes to improve the guarantees about your security posture and confirm to partners/customers that their data remains confidential even when passing through your services - You are a company that needs to comply with customer data and privacy regulations (e.g.
GDPR, CCPA) while still being able to extract value from customer data - You are one of several entities belonging to some group or consortium that wants to collaborate to perform joint computation without any party revealing its private data Ask Your CISO If Secure Enclaves Are Right For You! Despite the clear security advantages of secure enclaves, there are some key concerns that must be addressed before secure enclaves become the de facto compute standard for cloud workloads. The ecosystem around secure enclaves is still nascent and migrating existing applications and processes for use with secure enclaves may incur overhead. Software vendors considering the use of secure enclaves should ask themselves or their CISOs the following questions: - Do the workloads and applications that we want to secure require the use of specialized hardware beyond CPUs for performing computation? - Secure enclaves implementations are currently limited to CPUs; aside from a few academic works that have attempted to extend secure enclaves to GPUs and other PCI devices, enclave support on specialized hardware is not yet available. This makes it difficult to perform certain types of workloads, for example, training neural networks. - Do we have the engineering bandwidth to migrate existing applications to use a new SDK and (potentially) be rewritten in a different programming language? - Some secure enclave implementations, such as Intel SGX, require rewriting applications by partitioning the code into secure and insecure parts. Any code that lives outside the TCB is only able to interact with enclave code via a narrow interface. Sometimes, performing such partitioning can be very difficult. - Some enclave implementations currently only support applications written in specific languages. - Will application users tolerate slower performance in exchange for stronger security guarantees? - Memory limitations, post-verification, etc. 
add additional computational overhead, which can introduce latency for the client. - Do we need to worry about enclave memory limitations? - The size of the memory available for secure enclave computation affects what kinds of workloads can be performed and how efficient the computation will be compared to an unsecured version of the workload. - Memory limitations are not a concern for the AMD-based enclaves or for AWS Nitro Enclaves, but the Enclave Page Cache (EPC) in Intel processors has previously been limited to 256 MB, which restricted the amount of data that could be processed in memory at a time. Recently, however, Intel announced its 3rd Gen Xeon processors, which increase the EPC size from 256 MB in previous Xeon generations to anywhere between 8 GB and 1 TB depending on the processor SKU. - What level of security is needed? - Hardware-based enclaves such as SGX allow for partitioning an application into trusted and untrusted parts, which allows for minimizing the TCB to reduce the potential attack surface; however, this requires rewriting applications using SGX-compatible SDKs. In recent years, several side-channel vulnerabilities have also been found in hardware-based enclaves. - Secure enclaves which isolate an entire virtual machine instance drastically reduce the amount of application code that needs to be modified to make the app compatible with the enclave, but under the assumption of a malicious hypervisor there is a much larger attack surface, and a variety of attacks become possible. - Do we want to lock ourselves to a single cloud? - Using AWS Nitro Enclaves is only possible on AWS alongside EC2 instances, whereas Google Cloud Confidential VMs only support AMD processors. Choosing which secure enclave implementation to adopt may require locking yourself to a specific cloud. - There are several implementations of secure enclaves out there, each with different SDKs and varying threat models.
It is unclear if a single approach will prevail or if the variety in application requirements will allow multiple secure enclave implementations to coexist. Some clouds only support specific implementations, so unless a unifying secure enclave framework emerges or all clouds add support for multiple implementations, application developers may need to maintain a version of their application for each cloud, which seems like an unlikely solution. As enterprises migrate from on-premise to cloud environments, there is a need for additional confidentiality guarantees. Many industries and use cases can benefit from the adoption of secure enclaves: - Financial services - Edge computing - Collaborative computing The idea of collaborative/multiparty computation has existed long before secure enclaves, but until now the techniques for implementing such computation relied either on purely cryptographic approaches – which are still far too inefficient for anything very complex – or on delegating computation to a trusted third party, which is still less than ideal. With a secure enclave environment, parties can finally collaborate efficiently by contributing individual data to some larger computation without other parties or any third party ever learning any private data. At Opaque, we’re building a first-of-its-kind analytics platform that uses secure enclaves to provide a confidential and collaborative environment, making it easy for organizations to take their existing data and jointly perform secure analytics and machine learning on it with others, without anyone except the data owners ever seeing the data.
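The malleability pitfall raised earlier for homomorphic encryption can be made concrete with textbook RSA, which is multiplicatively homomorphic: Enc(a) · Enc(b) = Enc(a · b) mod n. The tiny parameters below (p = 61, q = 53) are the classic teaching values and are hopelessly insecure; the "amount" being protected is a made-up example.

```python
# Toy demonstration of malleability in a homomorphic scheme, using textbook
# RSA with tiny, insecure teaching parameters. Enc(m) = m^e mod n.
p, q = 61, 53
n = p * q              # 3233
e, d = 17, 2753        # public / private exponents for phi(n) = 3120

def enc(m: int) -> int:
    return pow(m, e, n)

def dec(c: int) -> int:
    return pow(c, d, n)

amount = 65            # hypothetical value, e.g. a payment amount
c = enc(amount)

# An attacker who sees only the ciphertext -- and needs only the PUBLIC key --
# doubles the underlying value by multiplying in an encryption of 2.
tampered = (c * enc(2)) % n

assert dec(c) == amount       # legitimate decryption
assert dec(tampered) == 130   # the attacker doubled the amount undetected
```

The tampered ciphertext decrypts perfectly well, so the recipient has no way to tell it was altered; this is the integrity gap that zero-knowledge proofs or enclave-based attestation would have to close.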
An old and trusted authentication mechanism that relies on passwords, but in a smarter way In computer systems, a token is an object or structure used to transfer data between applications. Tokens are primarily used by stateless applications as a vehicle for client-side storage of session data. For example, a shopping app may track things like shopping carts, authentication data, and other session-related data in a token that is stored by the client, instead of maintaining and tracking session state on the app itself, thereby allowing the app to be stateless. Authentication tokens are a kind of token used to transfer authentication-related data between a client and a server, or between applications. For example, federated identity solutions like SAML and OpenID Connect rely on authentication tokens for exchanging authentication-related information between parties. JSON Web Tokens (JWT) are another kind of token used for exchanging authentication data in more proprietary authentication protocols. Authentication tokens generally have three parts – a header, a payload, and a signature. The header typically contains metadata about the token, such as its type and the signing algorithm used. The payload includes the authentication claims associated with the user and session, such as a username. The signature is a digital signature that guarantees the integrity and authenticity of the claims in the payload. Authentication tokens can be digitally signed to protect their integrity and allow the receiver to verify the identity of their sender. Because data stored by the client is subject to tampering, special care needs to be taken to prevent manipulation of the data. An effective way to achieve this is with digital signatures: once signed, any changes to the data can be easily detected, and forging a properly implemented signature is considered impossible. Encrypting authentication tokens guarantees the confidentiality of their data.
Without encryption, anyone can read the data, some of which may be sensitive and can provide an attacker with useful information. Popular authentication token formats include SAML, which relies on XML tagging, and JWT, which is based on a JSON data object. But what makes authentication tokens especially appealing to developers is that they enable building stateless apps. This means that the server does not have to keep track of authenticated sessions; instead, this data is tracked client-side in an authentication token. Once authenticated, the client receives from the authentication server a signed and often encrypted authentication token, which it then appends to every request sent to applications it wants to interact with. The app verifies the integrity of the authentication token and parses its contents. If everything checks out, the request is processed by the app and a response provided. The next request sent by the client will again include the authentication token, and the process of verifying its integrity and parsing its contents will be repeated by the app. Identity federation makes extensive use of authentication tokens. Federated identity systems allow relying parties (i.e. applications) to use authentication services from a trusted identity provider (IdP) without the need for tight integration. In such systems, authentication and authorization data is exchanged by passing authentication tokens. Without going into specific details, the general setup for federated authentication and authorization (authN/authZ) schemes is as follows: - A user attempts to access some resource/application for which access is restricted. - Because the resource/application does not implement its own authN/authZ functionality, it redirects the user to an identity provider (IdP) for authentication.
- The user is authenticated by the IdP, which then creates a digitally signed authentication token attesting to the fact that the user was successfully authenticated and hands this token to the user. - The user is then redirected back to the resource/application he wants to access and hands over the authentication token he received from the IdP. - The resource/application reads the authentication token, verifies its signature, and checks its claims to make sure the user meets the access criteria. Common formats for authentication tokens include SAML, OpenID Connect, and JWT. For authorization claims, OAuth 2.0 is a commonly used standard and format. And while each token format has its own specific data structures, claim types, and digital signature conventions, they are all tokens with the same fundamental constructs – an identifier, a set of claims, and a digital signature from the entity making the claims. Common attacks on token-based authentication include stealing authentication tokens using malware and cross-site scripting attacks. Malware sitting on the client can read a valid authentication token and reuse it for as long as it has not expired. In summary, authentication tokens have grown in popularity and are a de facto standard for most modern applications. They help application developers build stateless applications that are easier to maintain and scale.
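The header.payload.signature structure and the tamper-detection property described above can be sketched with a simplified JWT-like token. This is a toy, not a compliant JWT implementation (a real JWT signs with base64url-encoded binary signatures and uses standard claim names); the signing key and claims are hypothetical demo values.

```python
# Sketch of a minimal signed token in the header.payload.signature shape the
# text describes. Simplified JWT-like toy; not a compliant JWT implementation.
import base64
import hashlib
import hmac
import json

KEY = b"demo-signing-key"  # hypothetical; real systems use a managed secret

def b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def unb64(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def issue(claims: dict) -> str:
    header = b64(json.dumps({"alg": "HS256", "typ": "TOY"}).encode())
    payload = b64(json.dumps(claims).encode())
    signature = hmac.new(KEY, f"{header}.{payload}".encode(), hashlib.sha256).hexdigest()
    return f"{header}.{payload}.{signature}"

def verify(token: str):
    """Return the claims if the signature checks out, else None."""
    header, payload, signature = token.split(".")
    expected = hmac.new(KEY, f"{header}.{payload}".encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return None  # integrity check failed: the token was tampered with
    return json.loads(unb64(payload))

token = issue({"sub": "alice", "role": "user"})
assert verify(token) == {"sub": "alice", "role": "user"}

# Tampering with the payload (e.g. privilege escalation) breaks the signature.
h, p, s = token.split(".")
forged = b64(json.dumps({"sub": "alice", "role": "admin"}).encode())
assert verify(f"{h}.{forged}.{s}") is None
```

Because the signature covers both header and payload, the server can accept the client-stored claims without keeping any session state of its own, which is exactly the stateless property the article highlights.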
The purpose of this Instructable is to set up your computer to dual-boot Windows 7 and Ubuntu Linux. Dual-booting is a technique which allows a single physical computer to run two or more operating systems (OSes). This is useful for experimenting with new OSes without putting all your eggs in one basket. Step 1: Getting Prepared 1. A PC running Windows 7, with at least 30 GB of free hard-drive space. 2. A CD/DVD burner and blank CD/DVD, or a USB drive with a capacity of 1 GB or more. 3. An external hard drive big enough to back up your Windows installation. 4. Administrative access to the PC. 5. Internet connectivity. Most PCs sold in the last three years should meet the criteria for number 1. To confirm what version of Windows you are running, go to My Computer and click on “System properties.” The listed version should be Windows 7. From that same screen, you should see how much free space you have left, underneath the blue bar for each disk. In the example image, there is “8.49 GB free of 55.7 GB”. NOTE: The 8.49 GB shown in the example is not enough space for most people. While you can install Ubuntu to a partition this size, you will typically want more space to store documents and applications. 20 GB is probably the minimum size you would want. Step 2: Select a Linux Distro Linux comes in many flavors. Some distributions are aimed at full configurability for advanced users (Arch Linux, Gentoo), others at minimal hardware requirements (Puppy Linux, DSL), while others aim at being as easy to use and accessible as possible (Mint, Ubuntu). These different flavors, or distributions, are commonly called “distros” by the Linux community. There are hundreds if not thousands of distros available; if you’re curious, go to http://distrowatch.com/ and compare. In this instruction set, however, we will use Ubuntu Linux. Ubuntu is one of the most widely used Linux distros and has a very helpful community and a good user interface for new users.
Step 3: Prepare Installation Media Go to the Ubuntu website, http://www.ubuntu.com/, and select download. As of this writing, the current version of Ubuntu is 12.04. Depending on your internet connection, the ISO download may take a while. The 32-bit version should be sufficient unless you are running specific applications that require a 64-bit variant of the OS. ISO files are disk images which have been stored for easy distribution and replication. After the ISO has downloaded, navigate to the folder it was downloaded to. Right-click and choose “Burn disk image.” This will launch a tool to burn the image to a CD or DVD. Once the disk is burned, remove it from the disk drive and label it for your future reference. Alternatively, if you want to use a USB drive, you’ll need the program “UNetbootin,” which is available here: http://unetbootin.sourceforge.net/ WARNING: Be sure to back up your USB data before running UNetbootin, as it will reformat the drive and destroy any data previously on it. Step 4: Backup Windows This is perhaps the most important step. If anything goes wrong in the next few steps, this will allow you to restore your computer to its current state. You will need an external hard drive or enough disk media to hold your files. If you have already backed up your system, you can either update your backup or proceed to the next step. If you have never backed up your computer, now is the perfect time to start. You can find a decent external hard drive for around $60; try http://www.newegg.com or http://www.tigerdirect.com, or your local computer store. Alternatively, you can use an online subscription-based backup service such as Mozy.com or Carbonite.com to avoid purchasing and setting up hardware. To use the Windows backup, open the Start menu, type “backup,” and select “Backup and Restore.” This opens the Backup and Restore center. Choose “Set up backup” and follow the wizard to create your first Windows backup.
Step 5: Partition the Hard Drive

The safest and simplest way to partition your drive is to use the Windows 7 "Disk Management" program to shrink your existing Windows partition. To access this program, click the Windows icon in the bottom left, type "Disk Management" (without quotes) into the text box that says "Search programs and files", and press Enter. In the window that pops up, you'll be able to see all of your partitions. In Windows, these are typically labeled with letters, such as "C:" or "D:". Check your partitions: there are two common setups. Either you will have one very large drive ("C:"), or you will have a smaller drive for your operating system ("C:") plus a larger one for data or programs ("D:"). On your larger partition, ensure that at least 30 GB of free space is available. Next, right-click on this drive and select "Shrink Volume." It will take a little while for Windows to analyze the available free space, so be patient. It then asks how many MB you would like to shrink your volume by. For a typical Linux install, 30 GB should be plenty if you're not planning to store large files such as movies. So, to shrink by 30 GB, convert this to MB (multiply 30 by 1024) and enter this value (30720) into the appropriate field. Then click "Shrink". You should now see a black bar which says "Unallocated" underneath (see picture 2). If you see this, you are ready to go and can close the Disk Management window.

Step 6: Boot from Removable Media

Note: this step will vary slightly depending on your computer's make and model. If your computer's documentation makes reference to an option to change the boot order, use that method to set the computer to boot from the disk drive or USB drive, depending on what media you used in step three. Reboot into the Linux media and proceed to step seven. If you cannot find such a reference, you will need to alter the boot order via BIOS. BIOS stands for Basic Input Output System.
It is a low-level environment the computer enters before it loads the operating system. From here, many variables relating to the system's hardware can be modified, so it is important not to make accidental changes, as they may have dramatic effects on the computer as a whole. Reboot the computer. As it powers up, watch for a screen which says "Settings" or something similar; you will need to press one of the function keys, often F2, F10, or F12 (the exact key varies by manufacturer, so check the boot screen or your manual). This will get you into the system's BIOS and allow you to change the boot priorities. If you miss BIOS, the system will continue booting as usual; if you end up in Windows, shut down and try this step again. Using the keyboard, navigate in BIOS to "boot options" and set the primary boot device to the CD drive or USB media, depending on what you burned the ISO to in the previous step. Save your changes, insert the media into the drive, and reboot.

Step 7: Install OS

Next, select "Try Ubuntu without Installing". From here you can explore and get used to Ubuntu, as well as confirm that your network connections are working. If for some reason they do not work (you cannot access the internet), please consult the official Ubuntu documentation, as these settings may vary based on your system. Otherwise, if you're ready to proceed, simply double-click the "Install Ubuntu" icon on the desktop to enter the install wizard. Choose your installation language, then on the next screen check the box that says "Install this third-party software" (this software is necessary to play MP3s). From that point, follow the defaults in the install wizard. Be sure that when it asks you where you want to install Ubuntu, you select the option to install side-by-side with your other OS, using the free space available. Continue installing until you receive a message that your installation is complete. It may take a while to get to this point, so please be patient.
When it's done, shut down your computer and remove the installation media. When you boot back up, you should have a choice between Ubuntu Linux and Windows!

Step 8: Change Boot Device (again)

If you changed the device boot order in step six, you will need to repeat that process here: set your BIOS boot order so that the hard drive is once again the primary boot device. Note that if you skip this step, your system will try to load an operating system from removable media before looking for Windows. If there is no disk in the drive, it should still proceed to boot into Windows.

Step 9: Reboot and Configure

Upon bootup, you will now see a choice between Windows and Ubuntu Linux. Go ahead and select Linux, and get yourself familiarized with this great OS. There is a ton of documentation available to help you get used to the differences between Windows and Linux. One simple source you might want to look at right away is: https://help.ubuntu.com/community/SwitchingToUbuntu/FromWindows

Between its graphical user interface (GUI), command-line interface (CLI), hundreds of free open-source applications, and a hugely active and supportive community, Linux has a lot to offer a would-be Windows convert.

Step 10: Perfecting Your Ubuntu Install (Optional)

Lastly, if you decide to make Linux your primary operating system for daily use, you can set your system to boot into Ubuntu rather than Windows by default. Alternatively, to stay with the familiar Windows OS, the community offers a how-to for that as well. For any error or topic not covered in this guide, an answer can most likely be found in the forums. Congratulations! You now have a PC dual-booting Windows 7 and Ubuntu Linux.
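For the default-OS change mentioned in Step 10, Ubuntu's boot menu is controlled by the `GRUB_DEFAULT` line in `/etc/default/grub`, applied with `sudo update-grub`. A sketch of the edit as a pure text transformation (the file path and the entry value you pick are assumptions about a stock install):

```python
import re

def set_grub_default(config_text, entry):
    """Return GRUB config text with GRUB_DEFAULT set to `entry`.

    `entry` may be a menu index (e.g. "0") or a keyword such as
    "saved". If no GRUB_DEFAULT line exists, one is appended.
    """
    pattern = re.compile(r"^GRUB_DEFAULT=.*$", flags=re.MULTILINE)
    replacement = f"GRUB_DEFAULT={entry}"
    if pattern.search(config_text):
        return pattern.sub(replacement, config_text)
    return config_text + "\n" + replacement + "\n"
```

In practice you would read `/etc/default/grub`, write back the transformed text (with root privileges), and then run `sudo update-grub` so the change takes effect.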
Many people need earphones, such as wired headsets, wireless headsets, and Bluetooth headsets, when they are using electronic products. Naturally, a question comes up: what are the differences between wireless headsets and Bluetooth headsets? This article explains the differences between the two in terms of wireless technology and Bluetooth transmission technology.

Many users misunderstand the term "wireless headset" and think that a Bluetooth headset is something different from a wireless headset. That is not the case: Bluetooth headsets simply adopt one kind of wireless technology, Bluetooth, and so belong to the category of wireless headsets. Setting the technology differences aside, the digital-products market really offers only two headset types: wired and wireless. The relationship between wireless headsets and Bluetooth headsets is like that between father and son, because Bluetooth headsets are included among wireless headsets but cannot represent all wireless headsets.

When it comes to wireless earphones, everyone is impressed by their compactness and portability, but many people don't know how they work. So let's talk about how a Bluetooth headset works. In fact, the use of Bluetooth headsets is constrained: both the sending and receiving devices have to be equipped with a dedicated codec chip. In general, the headset is the receiver. When you turn on the phone's Bluetooth and connect it to the headset, the codec chip in the phone turns the music into a specific digital signal. After the receiver inside the Bluetooth headset picks up that signal, it is converted back into an analog signal that our ears can understand. In the process of conversion, the Bluetooth headset also amplifies or attenuates the signal.
This function relies on the signal-amplification chip built into the Bluetooth headset to complete the conversion. When the digital signal is played back after being converted by these chips, our ears can receive the sound.

In fact, the working principle of a wireless headset differs only slightly from that of a Bluetooth headset; the difference lies in the transmission mode. Some wireless headphones require a specific receiver and transmit over infrared, while others use the 2.4 GHz wireless transmission mode. Although the signal transmission mode differs from that of Bluetooth headphones, the principle is almost the same: convert the sound into a signal and then restore it through the receiver. When it comes to performance, FM-type headphones are the worst of all because of their low transmission efficiency and poor sound quality. The second worst are headsets using Bluetooth or infrared transmission, which require high directivity. So far, the best-performing wireless headsets are 2.4 GHz headsets. A Bluetooth headset, whose working principle differs little from that of other wireless headsets, is only one kind of wireless headset, and it is not the best-performing one.
When it comes to classroom management, technology is typically thought of as a distraction to students. "Technology is a distraction when we need literacy, numeracy and critical thinking," Paul Thomas, an associate professor of education at Furman University, said to The New York Times. However, this might not be the case after all. Writing in the Getting Smart blog, Alexia McCormick argued that computers can actually help with attention spans in the classroom, so long as the right classroom management software is used.

McCormick said that such technology can make for a richer educational experience, encouraging a more hands-on approach to learning. Instead of trying to fight a classroom computer for students' attention, teachers should use classroom software that allows for an optimal mix of listening and experiential learning. For example, she said classroom software should allow teachers to look at students' screens and to lock them out of a computer if it becomes too distracting. Proper classroom software can also make it easier for students to share information with peers and with the teacher. And as opposed to having a teacher dictate knowledge to students, technology offers the opportunity for students to discover answers independently, she wrote. Plus, classroom computers can make it easier for teachers to guide students to the right resources, instead of leaving students to copy down whatever is written at the front of the classroom, McCormick said. "We believe that there should be middle ground between letting students have free range to research what they want and keeping them on task," she wrote.

Do you think a classroom computer serves as a distraction in a classroom, or can the internet be used as an effective teaching tool? What kinds of classroom management software do schools need to use to make sure kids do not get distracted by technology?
Originally an economic observation, Goodhart's Law describes something we see even more in today's tech-driven, track-everything world: measuring something inherently changes user behavior. Let's take a look at Goodhart's Law and what it means in your organization.

Data: how and why

When a company defines the data it collects, it defines what information is important to the company. Companies use data in several ways: they might collect data to improve services or to measure user behavior. Some data is required for a service to function; for example, a company needs to know a user's name if it is to address the user by name. Other data the company chooses to read into might not be strictly necessary for the company to function, but it continues to be collected because it is deemed "useful to provide better services to its users". Some questions a company can read into, given existing data, might be:

- What hours is the user asleep?
- How popular is the user among their friends?
- Which way does the user lean politically?

How a company reads into its user data is subject to scrutiny and ethical questions. This is where Goodhart's Law comes in.

What is Goodhart's Law?

"Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes."

Goodhart observed this in the domain of economics, which he detailed in "Problems of Monetary Management: The U.K. Experience". Within a couple of years, anthropologist Marilyn Strathern recognized that this law could apply in domains beyond economics. She summed up Goodhart's Law: "When a measure becomes a target, it ceases to be a good measure."

It is an admission: when data is collected, people will refine their behaviors to optimize for those data points. And suddenly, when people are adapting their behaviors to suit those items, those data points are not as valuable a measure as they were before.
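The collapse can be seen in a toy simulation: a proxy metric tracks true behavior well until people start diverting effort into inflating the proxy itself. The model and its numbers below are illustrative assumptions, not data:

```python
import random

def correlation(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def simulate(n_people=1000, gaming=0.0, seed=42):
    """Toy Goodhart model: a measured proxy vs. true engagement.

    `gaming` is the fraction of the measured signal that is
    manufactured (independent of the truth) once people learn
    the proxy is the target. Returns corr(truth, proxy).
    """
    rng = random.Random(seed)
    truth, proxy = [], []
    for _ in range(n_people):
        engagement = rng.random()          # true, unobserved quality
        noise = rng.gauss(0, 0.05)         # measurement noise
        measured = (1 - gaming) * engagement + gaming * rng.random() + noise
        truth.append(engagement)
        proxy.append(measured)
    return correlation(truth, proxy)
```

With `gaming=0.0` the proxy correlates strongly with the truth; with `gaming=0.9` the correlation collapses toward zero, which is Strathern's summary in miniature: once the measure became the target, it ceased to be a good measure.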
Where authenticity becomes ingenuine

Measurement can turn authentic human behavior into ingenuine human behavior. A teacher may look at his students to see which ones are the better students. He might assess that based on:

- How attentive they are in class
- Whether they are asking questions
- Their grades
- How they socialize with other kids in the class

When these items become the basis for the class rubric, the behaviors become ingenuine. No one likes the kid who knows these things about the teacher and plays into them out of self-interest. Some students game the system, trying really, really hard to be liked. They might reside at the top of the social structure in the eyes of the higher power. Can they be blamed, though? They operate within the rules given to them, rules available to the entire class. In one way, they have successfully listened and performed exceptionally well; any other member of the class had the same knowledge of the rules (we assume). But in another way, those students are somehow also perceived as ingenuine.

In person-to-person interactions, people tend to be good at identifying who is genuine and who is not. All the other students know this person's behavior is ingenuine; the teacher likely does, too. On social media feeds, however, identifying this behavior is much more difficult. When using technology, people no longer listen holistically with their eyes and ears. Instead, platforms create benchmarks and generalizations to grade their users' behaviors; these are often called key performance indicators (KPIs) or user metadata. And no matter how much information is captured through metadata, a person is still only making judgements about another person through distilled information. The benefit of meeting someone in person, not online, is what you can learn or assume about that person, both with and without explicit information.
When you and another person are physically present, you can pick up on tiny cues, mannerisms, and other things that online metadata likely doesn't capture. In person, you can make a holistic assessment based on how that person speaks, how they appear, how they engage with you, how they treat the café barista, and a whole host of tangibles and intangibles. Technologically, far fewer data points are available, interactions are impersonal, and bad actors have an easier time gaming the system. Whether in a classroom, a small company, a one-on-one meeting, or the measurement of a country's GDP, Goodhart's Law seems to hold in social settings: "When a measure becomes a target, it ceases to be a good measure."

Google Search Engine

Google's search engine can make or break a company's profits. Being in the top 1-5 search results versus the 6-10 search results can swing a company's profits by thousands of dollars or more. When a company has that much to lose, or gain, it will want to understand how to break into the top five results and hold its position. For such a company, it's worth hiring marketing gurus and SEO savants and paying thousands per week just to hold a spot at the top. (This means entire industries are created just to game algorithms.) But, at its core, it is Google's job to provide the best search results it can: to give people the information they search for. What we see, then, is a continuous struggle between two groups of people:

- Those with authority trying to provide the best services they can.
- Those who will exploit the system to get themselves on top.

The problem becomes a game for game theorists to analyze: one where private information exists, and the existence of private information doesn't always sit well with people. TikTok was recently pressured into exposing its content algorithm.
People wondered, "Why is it so addicting?" "How do they make it work?" Content creators (a fancy way of saying humans with a camera and a fun idea) get lucky with a viral video, and suddenly their social worth, their clout, jumps significantly. Many people want their Warholian 15 minutes of fame and will game the system to earn it. Now, the TikTok algorithm undergoes incredible scrutiny by creators who want to figure out how to get their names to the top of people's feeds. They ask questions like:

- Should I wear a certain color?
- What song should be in the background?
- Should I buy followers?
- What time should I make the post?
- How do I get people to share this post? Do I pay them?

TikTok's initial aim would be to let people naturally like and share a video in its marketplace of videos. Whatever video gets the most likes and shares should propagate around the internet; if that's what people like, then that's what people like. But we know that people are gaming their likes just so their videos can be seen more. Now, TikTok's initial aim is gone. Instead, it must ask, "What is this person's intent in pushing their content into the world?" When people game the algorithm, it forces platforms like Google, TikTok, Pinterest, Facebook, YouTube, Twitter, et al. to:

- Judge the creator's intent.
- Moderate what information is good and what information is bad.

Companies need to keep algorithms private

To combat people gaming their algorithms, companies need to keep those algorithms private. The masses prefer authentic behavior, but it isn't the recent phenomenon of social media that changed this. For decades, Hollywood has bred more and more people who try to game the show-business system. They try to have the right hair, the right personality, the right jawline to be accepted by Hollywood and to get their role in a play, show, or movie.
Social media has opened this game to the world, creating new kinds of Hollywood stars (influencers) and pushing the same identity and personality problems from Hollywood onto a global audience. In any environment of gamblers and actors, of those seeking to game a system, there will be winners and losers, and the two groups will need to learn how to play well with one another. But when a company keeps its algorithm private, people grow skeptical about how the company makes its decisions. They doubt the company's integrity. They question the company's agenda. When users do not know the parameters of the game, they are lost and less trusting of the service.

Goodhart's Law requires balance

Likely, the line drawn by Goodhart's Law is one of those paradoxes not to be solved but to be balanced. It is a delicate fabric that shifts and balances through the tension between two forces: the ones measuring and the ones behaving.
Corporate enterprises face one of the biggest security risks of recent times: the "Meltdown" and "Spectre" critical vulnerabilities in processor architecture affect almost every computer and mobile device on the planet. These vulnerabilities exploit speculative execution, an optimization technique in which the processor queues and executes instructions it expects to need. They give cybercriminals access to data passing through the CPU itself: a malicious application running on a device can peek into the memory of another application on the same device and read out its contents.

"Meltdown", designated CVE-2017-5754, can enable hackers to gain privileged access to parts of a computer's memory used by applications and the operating system (OS). Meltdown affects Intel processors. "Spectre", designated CVE-2017-5753 and CVE-2017-5715, can allow attackers to steal information leaked into kernel/cached files or data stored in the memory of running programs, such as credentials. Spectre affects processors from Intel, Advanced Micro Devices (AMD), and Advanced RISC Machine (ARM).

Microsoft is releasing updates for Windows to block malicious attempts to exploit the Meltdown vulnerability in Intel processors. At the same time, fixes to prevent user-mode programs from "peering inside" kernel-mode memory are being introduced by operating system vendors, hypervisor vendors, and cloud computing companies.

So what should enterprise users and admins learn from the "Meltdown" and "Spectre" vulnerabilities? There are three primary dimensions to the issue:

- Late or no patching. Patch timing for commercial devices is an industry-wide challenge. Though iOS and Google's native mobile devices are patched relatively swiftly, most Android devices remain unpatched, meaning that potentially any data on these devices is at risk; data present on any one of them could have been stolen.
- Technology black hole.
Vulnerabilities derived from speculative execution have existed for a decade or more. The complexity of the technology and the abundance of mobile apps and malware create uncertainty and a huge attack surface. Users and system admins never know what other unknown vulnerabilities may cause data compromise; not everything is known or told. Hackers and government agencies keep priceless knowledge of existing breaches to themselves, though surprises keep popping up.
- Defense vs. detection. A strong cyber-defense starts with the realization that everything is hackable and every organization will be compromised at some point. Organizations have maxed out their ability to lock down systems and networks, leaving mobile devices as the weakest entry point into their cyber environment. Vulnerabilities require complementary attack vectors to be exploited, but organizations fail to block these entry points, which is what gives vulnerabilities their big impact. Threat-detection systems are constantly late to respond and offer only after-the-fact resolution.

It is clear that traditional techniques for detecting attacks and protecting mobile devices are just not sufficient. What should organizations do? Organizations should take active steps toward creating an effective defense. This defense should:

- Deploy platforms that rely on dedicated mobile security hardware and software that leverage trusted environments;
- Differentiate between mobile worker types and the risk associated with their work. Employees doing sensitive work should use trusted, hardened mobile devices, while other employees can use commercial BYO devices;
- Block unauthorized access points to the organizational network via mobile devices, including rapid security patches across enterprise mobile devices.
To maintain this practice, security-minded organizations should utilize purpose-built mobile devices with a security-rich operating system that allows enhanced defenses;
- Create a fusion of multiple defense layers across the organizational wireless environment, providing in-depth protection against cyber-attacks, including communications within a persistent VPN and a locked-down private network;
- Dismantle all attack vectors (interception, injection, intelligence gathering, and forensics) and employ fused controls to eliminate the impact of careless use.

Cyber-criminals outsmart security defenders but operate within a known set of attack vectors, regardless of the vulnerability. Shutting down the attack vectors, while accepting some reduction in user experience, will guarantee safe enterprise mobility.
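On Linux, admins can check whether a given machine's kernel reports these CPU flaws as mitigated via the sysfs interface the kernel exposed after the disclosure. A sketch (Python; the directory is the standard kernel location, and the "Vulnerable" prefix check is a heuristic based on the kernel's reporting convention):

```python
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def cpu_vulnerability_report():
    """Read the kernel's per-vulnerability mitigation status.

    Returns a dict such as {"meltdown": "Mitigation: PTI", ...};
    empty if the kernel is too old to expose this interface.
    """
    if not VULN_DIR.is_dir():
        return {}
    return {
        entry.name: entry.read_text().strip()
        for entry in sorted(VULN_DIR.iterdir())
        if entry.is_file()
    }

def is_vulnerable(status):
    """Heuristic: the kernel prefixes unmitigated entries with 'Vulnerable'."""
    return status.startswith("Vulnerable")
```

Running `cpu_vulnerability_report()` across a device fleet gives a quick, if coarse, picture of which machines still need the patches discussed above.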
Chapter 1, “Let’s Talk About Network Security,” discusses the importance of identifying and securing the assets in your network. One of the most important assets is the network itself. If your network devices are compromised, then any data flowing through it will be compromised, too. This chapter discusses the security of the network infrastructure, including the three planes: management, control, and data. This chapter also discusses the importance of segmenting traffic within a network as well as methods for doing so. Finally, this chapter lays a foundation for traffic analysis and security integration with a discussion on NetFlow and its security benefits. The Three Planes Yes, this is still a book on security! We did not leave the world of security for the astral planes, and neither did we switch to the flying kind. The planes we discuss here are the three that exist on a network device: the management, control, and data planes. Any functions related to managing a device, such as configuring it, happen in the management plane. Access to this plane requires use of various protocols, such as SSH, Telnet, SNMP, and SFTP. This is the most important plane in terms of securing a device because any breach on this plane will allow access to all data flowing through the device and even the ability to reroute traffic. In the control plane, the device discovers its environment and builds the foundation to do its job. For example, a router uses routing protocols in the control plane to learn about the various routes. Routes allow a router to do its primary job—route packets. A switch uses protocols such as VTP and STP to learn about various paths, and that allows it to switch traffic. If the protocols in the control plane are not secured, a malicious actor may be able to inject rogue control packets and influence the path of the packets. 
For example, if your routing protocols are not secure, then it is possible to inject rogue routes, causing data to flow to a different device. This technique is often used in man-in-the-middle (MITM) attacks.

The data plane, also called the forwarding plane, is where the actual data flows. When a router receives a packet to route or a switch receives a frame to switch, it does so on the data plane. Information learned in the control plane facilitates the functions of the data plane. Typically, this is where most of the network security controls are focused. Packet filtering, protocol validation, segmentation, malicious traffic detection, distributed denial-of-service (DDoS) protection, and other security measures are utilized in this plane.

In addition to the three planes, the physical security of a device itself is important. After all, the security in the software does not matter if the hardware is compromised. While it is the responsibility of the vendor (Cisco in this case) to ensure that a device is not tampered with between manufacturing and your doorstep, it is your responsibility to ensure that the devices are kept in a secure location where unauthorized access can be prevented. The three planes and physical security can be visualized as the pyramid shown in Figure 2-1, where compromise on one layer will affect all the layers above it.

Figure 2-1 The Pyramid of Planes
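To make the plane distinction concrete, a security review often starts by sorting the protocols a device exposes into planes. A hypothetical sketch (Python; the protocol-to-plane table is an illustrative assumption drawn from the examples above, not a Cisco-defined mapping):

```python
# Illustrative mapping only: a protocol's plane follows from how it is
# used, and anything not listed is treated as ordinary forwarded traffic.
PLANE_BY_PROTOCOL = {
    "ssh": "management", "telnet": "management",
    "snmp": "management", "sftp": "management",
    "ospf": "control", "bgp": "control",
    "stp": "control", "vtp": "control",
}

def classify(protocol):
    """Map a protocol name to its plane, defaulting to the data plane."""
    return PLANE_BY_PROTOCOL.get(protocol.lower(), "data")

def audit(protocols):
    """Group observed protocols by plane, e.g. to review what a device exposes."""
    report = {"management": [], "control": [], "data": []}
    for p in protocols:
        report[classify(p)].append(p)
    return report
```

Grouping traffic this way highlights, for example, that Telnet in the management plane deserves far stricter controls than an equivalent flow on the data plane.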
What is OSINT?

Any intelligence collected legally from open, public sources is known as open source intelligence gathering, or OSINT. With so much information freely available on social media and other online sources, OSINT is often the most effective method for profiling people or groups, gathering evidence, or following up on reports of suspected attacks or fraud.

OSINT grew out of spycraft as it shifted away from clandestine methods of information gathering (think phone tapping and couriers ferrying secure communications) and toward scouring publicly available information like newspapers, files, and databases. With the advent of the internet, vast amounts of information became accessible to anyone, and OSINT became increasingly useful not just to sophisticated government and law enforcement agencies but also to financial crime analysts, fraud and brand-misuse investigators, and, in particular, cybersecurity analysts.

Cybersecurity teams frequently use OSINT for operational security (OPSEC) by understanding what information about their company is available in the public domain. OSINT is a great way to find out whether any private information has intentionally been leaked, especially on social media, or perhaps accidentally exposed on public sources without proper authorization or approval.

OSINT on the deep and dark web

OSINT is not limited to research on the surface web; it can also be conducted on the deep or dark web. OSINT can still be applied to sites requiring login or subscription, as long as analysts can gather the information legally, without violating any access rules. And that extends to the dark web. If you're using the dark web for OSINT, it's important to remember:
- Paying for hacked/stolen items doesn't qualify as OSINT and can create legal problems for the analyst and their organization
- Any website could introduce malicious code to your computer, but this is especially true on the dark web, where site owners often set booby traps to track potential adversaries
- There is some anonymity to using the dark web, but site owners can still learn a lot about your identity; you'll need to take special precautions to control your digital fingerprint

How is OSINT used in threat intelligence gathering?

In addition to being a valuable technique for OPSEC, OSINT can also be used to gather threat intelligence to proactively reduce cyber risks. OSINT is used to analyze, monitor, and track cyberthreats from targeted or indiscriminate attacks against an organization. If an issue is caught by a threat intelligence platform (TIP) or subscription service, the job of an OSINT analyst is to dig deeper and gather any available information across the surface, deep, and dark web to understand the urgency and scope of the potential problem.

For example, a TIP may identify that a company's email addresses and passwords have been found for sale on a dark web site. An analyst would want to look at the complete package to assess the risk of bad actors using this information for future phishing attacks or data breaches. Investigators may also gather valuable insights into how the email addresses were obtained and where the weaknesses in the enterprise security perimeter lie. Additional information about attackers' tactics and methods can be gleaned from various dark web forums. Having a thorough understanding of how the dark web works, and how to use it as a resource without exposing their organization to risks, is an essential skill for any OSINT analyst.
Homegrown solutions are no longer sufficient for OSINT research Using the local computer and network to collect open source content puts OSINT teams and investigators at risk. To minimize the risk, organizations use a variety of tools such as client-side virtualization, VPNs, segregated storage and advanced malware-scanning solutions. These are costly to deploy, and the complicated IT management requirements create security and attribution gaps. Tools like Silo for Research offer a fully isolated, anonymous and secure platform designed for the demands of OSINT teams. They protect analysts and their organizations during the information-gathering process and keep researchers compliant through collection, collaboration and production. A specialized solution like Silo for Research gives analysts an isolated browsing platform for accessing social media sites, forums and other web-based resources without web content ever touching their local environment. It also gives them control of their digital fingerprint to avoid tipping off subjects and adversaries during their investigation. OSINT automation: a valuable resource for time-strapped analysts Analysts are always under pressure. Especially when they are investigating a fast-moving incident or impending threat, they can't afford to waste any time – researchers need to process as many data sources as possible in the shortest amount of time. And this is where automation is most valuable. Automation helps you target more sources in less time, removing the human bandwidth limitation, increasing output and productivity, and saving valuable time to remediate issues faster. OSINT is a fast-growing, multi-faceted discipline, and an increasing number of organizations, even beyond financial corporations and federal and law enforcement agencies, are investing in tools that can help make their analysts' jobs easier and accelerate issue resolution times.
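As a rough illustration of what such automation can look like, the sketch below cycles the request "fingerprint" an automated collector presents to each source, rotating user agents and randomizing locale between requests. Everything here is a hypothetical placeholder — the class name, user-agent strings and locales are illustrative assumptions, not part of any particular product:

```python
import itertools
import random

# Placeholder user agents and locales for illustration only.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Firefox/115.0",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 13_0) Safari/605.1.15",
    "Mozilla/5.0 (X11; Linux x86_64) Chrome/117.0 Safari/537.36",
]
LOCALES = ["en-US", "en-GB", "de-DE", "ja-JP"]

class FingerprintRotator:
    """Cycle through user agents and randomize the locale per request."""
    def __init__(self, agents=USER_AGENTS, locales=LOCALES, seed=None):
        self._agents = itertools.cycle(agents)
        self._locales = locales
        self._rng = random.Random(seed)

    def next_headers(self):
        # Each call yields a slightly different request fingerprint.
        return {
            "User-Agent": next(self._agents),
            "Accept-Language": self._rng.choice(self._locales),
        }

rotator = FingerprintRotator(seed=42)
for url in ["https://example.org/a", "https://example.org/b"]:
    headers = rotator.next_headers()
    # A real collector would issue the HTTP request here, through a proxy
    # pool that also rotates source IP addresses; we only print the headers.
    print(url, headers["User-Agent"][:30])
```

A real collector would pair this header rotation with rotating proxies (so the source IP and apparent location change too) and throttled request timing, per the considerations discussed in this post.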
Among the many important considerations for OSINT automation tools are:
- How they manage footprint and attribution
- Whether they can rotate IP addresses and imitate various locations and time zones
- How effectively they can protect networks from accidental exposure to malware
- The ease of storing and sharing sensitive data
- Whether they can comply with industry and company audit requirements
The more sophisticated your adversary, the more time and effort is required to set up a successful OSINT strategy. With data constantly changing, the number of sites analysts need to investigate grows every single day. Automation – especially using the right tools and techniques – can help ensure that teams are gathering the most relevant data as quickly and efficiently as possible, while keeping investigations – and investigators – secure. For more information on OSINT and how to protect your investigators, see:
Is Digital Twin Promising A New Era for Healthcare? Over the next three years, 66% of healthcare executives plan to increase their investment in digital twins, according to a recent digital health technology report. Since the outbreak of the COVID-19 pandemic, digital twin technology has been playing a key role in aiding healthcare professionals. Digital twins are used to optimize the usage of ventilators for critical patients, support contactless temperature scanning, reduce person-to-person contact, trial drugs, and prevent the risk of disease transmission. In the post-COVID-19 era too, medical researchers can leverage digital twins to analyze existing data and study the impact on the human body. In this blog, we'll take you through what digital twin technology is and how it's creating an impact in the healthcare industry. Let's check it out! The Role of Digital Twin in Healthcare Digital twin technology allows you to replicate the physical world in a digital layout. A digital twin is a virtual model of a device, object, or process that is updated in real time as the physical counterpart changes. Researchers rely on digital twins to test new scenarios in real-life environments with improved safety and cost-effectiveness. In the past, the application of digital twins was limited to sectors such as industrial engineering and manufacturing. It was economically unviable to build digital twins in other fields like healthcare and education. The proliferation and affordability of innovative technologies such as IoT, AI, ML, AR, VR, and XR are accelerating the adoption of digital twins in healthcare. Read more: The Application and Impact of Information Technology in Healthcare The healthcare industry is constantly striving to enhance patient outcomes, reduce operating costs, and address unforeseen medical crises effectively.
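The core idea above — a virtual model that mirrors a physical device and is updated whenever new data arrives, so what-if scenarios can be tested safely — can be sketched minimally. The ventilator fields, methods and threshold here are illustrative assumptions, not any real product's API:

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class VentilatorTwin:
    """Toy digital twin: a virtual mirror of one physical ventilator."""
    device_id: str
    pressure_readings: list = field(default_factory=list)

    def ingest(self, pressure_cm_h2o: float) -> None:
        # Mirror each new sensor reading into the virtual model.
        self.pressure_readings.append(pressure_cm_h2o)

    def mean_pressure(self) -> float:
        return mean(self.pressure_readings)

    def simulate_setting_change(self, delta: float) -> float:
        # Run a what-if scenario on the twin, not the real device.
        return self.mean_pressure() + delta

twin = VentilatorTwin("vent-001")
for reading in (18.0, 19.5, 20.5):
    twin.ingest(reading)
print(twin.mean_pressure())             # current mirrored state
print(twin.simulate_setting_change(2))  # hypothetical adjustment, twin only
```

The point of the pattern is that experiments (simulate_setting_change here) run against the virtual copy, so the physical device — and the patient attached to it — is never put at risk during testing.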
The US-based Digital Twin Consortium observes that digital twin technology has the potential to improve patient turnaround, reduce patient wait times, optimize equipment utilization, cut staffing expenses, and minimize bed shortages. It helps personalize medicines based on real-time data and improve the operational efficiency and performance of healthcare organizations by identifying workflow bottlenecks and optimizing scheduling. Top 5 Applications of Digital Twin in Healthcare Digital twins allow the creation of handy virtual models and medical simulations based on the data gathered from wearable devices, patient records, drug and pharmaceutical companies, device manufacturers, and other healthcare departments. This helps streamline the overall clinical and caregiving processes. Listed here are the top five applications of medical digital twins: 1. Customize treatments and drug administration Digital twins allow physicians, hospitals, and clinics to deliver patient-centric care by leveraging precision medicine. Data stored in healthcare mobile apps, medical software, wearables, fitness trackers, and other medical devices can be captured into digital twins, which enables doctors and front-line health workers to address patients with persistent or critical conditions. For example, combining AI-powered anatomical analysis with the virtual model of a patient's heart helps understand the progression of heart diseases over time. It enables medical researchers to identify how the patient will respond to new drugs, treatments, or surgical interventions. Digital twin experiments are also conducted to analyze the progression of neurodegenerative ailments such as Alzheimer's and Parkinson's. 2. Advance surgical procedure planning Digital twin technology enables brain and heart surgeons to run virtual simulations of surgical procedures prior to executing complex surgeries.
Testing pre-operative and post-operative surgical procedures and outcomes on a digital replica of human body parts reduces the risk of harming human health. Advanced, patient-specific computational models of human organs help plan and augment complex surgical interventions with improved precision and care. Read more: How Virtual Reality Benefits Autistic Patients 3. Enhance caregivers' efficiency and experience Digital twins help caregivers gain a consolidated view of patient data scattered across various medical applications, physicians, and specialists. Technologies like Natural Language Processing (NLP) help interpret the data and summarize the medical history of each patient. Capturing patient-specific information onto your medical dashboard sheds better light on the context of each patient. This improves your clinical decision-making ability. A digital twin model of a hospital allows you to measure the impact of organizational changes. For instance, you can use the virtual model to test new operational strategies, care delivery programs, staffing rotation, appointment scheduling, hospital bed facilities, surgical schedules, and so on. This helps redesign your organization's workflow, improve coordination among various departments, and reduce the treatment window. Case Study: How Fingent's healthcare technology solution helped improve collaboration between doctors, patients, and caregivers 4. Test new medical devices and drugs Drug regulators such as the United States Food and Drug Administration (FDA) and the European Medicines Agency (EMA) propose using AI algorithms to determine the safety and effectiveness of pilot drugs. Digital twins can simulate the health traits of a larger number of patients, which helps analyze how a drug's usage will impact a wider population. Using several inclusion and exclusion paradigms, AI helps speed up drug trials by identifying the willingness and availability of patients.
Digital twins can also mitigate the harmful impact of experimental drugs and reduce the number of patients who need to undergo real-world testing. It takes more than $2 billion to manufacture and launch a new drug into the market. The trial phase alone is hugely expensive, and over 90% of treatments fail during this period. Capitalizing on technologies like machine learning and computational modeling helps expedite the early stages of drug design, development, and safety evaluation. Digital twins integrate the test data across various samples to give a holistic picture of the drug's effect on patients. 5. Improve supply chain flexibility The first wave of the COVID-19 pandemic weakened our supply chains due to the lockdowns and transportation bans across various countries. This resulted in shortages of essential healthcare supplies. Digital twins allow healthcare organizations to create robust contingency plans to address such unpredicted events, increase bed capacity, manage emergencies during shutdowns or shortages, offer remote patient care, and design and construct new medical facilities to reach more patients. Hospitals, labs, and healthcare establishments can remodel their supply chain relationships to create alternative plans, improve collaboration with suppliers, and team up with authorities to plan and negotiate. Read more: Why is it better to outsource custom healthcare software development Make The Most of Digital Twins with Fingent Healthcare application development experts at Fingent help you overcome the hurdles that defer digital twin adoption, such as data gathering, the quality of clinical trial datasets, and information security and privacy. We develop custom healthcare apps leveraging technologies such as VR, AI, ML, and IoT that enable you to virtually test innovations and deliver exceptional patient care. These solutions can be tailored to optimize both your clinical and operational functions.
For instance, we help you develop virtual simulators for ACLS (Advanced Cardiac Life Support System), accident trauma care standard operating procedure, an orthopedic or cardiac surgical procedure involving complex tools, and Neo-natal Resuscitation Simulator (GOLDEN MINUTE PROTOCOL). Read more: How Virtual Reality Improves the Standards of Medical Education and Training Besides VR, healthcare providers can benefit from various customizable solutions such as connected healthcare apps powered by IoT, integrated medical dashboard software, remote patient monitoring systems, and healthcare analytics applications. Improve your organization’s technology ecosystem with Fingent. Contact us to design digital twins and drive innovation.
Your computer is at risk of attacks by hackers using malware, or malicious software, that is intended to steal, damage or control computers and computer systems. Designed specifically for remote workers, this interactive module details networking essentials and best security practices to help keep remote personnel secure. Learners will gain deeper understanding of home networks and the devices and risks that come with them, and learn why use of public networks and devices should be avoided when possible. Suspicious hosts are known malicious or potentially unsafe IP addresses or hostnames. This module will show how safe browsing can protect Internet users from malicious attacks from suspicious hosts.
Google Chrome is today the most widely used web browser; nearly 39% of internet users have chosen it, according to data from StatCounter. The reason behind Google Chrome's success is its efficiency and simplicity of use, but what about security? How does Google Chrome manage and protect users' data? Security experts at Identity Finder published an interesting blog post highlighting a series of security flaws in Google Chrome that could allow an attacker to steal personal data archived in the history files. Experts at Identity Finder demonstrated different methods to access personal data from the History Provider Cache in Google Chrome using their Sensitive Data Manager application, even if that data was submitted through a secure website. The purpose of the post is to inform Google that its browser stores sensitive data unencrypted, with obvious risks. The flaws exploitable in Google Chrome are related to SQLite and protocol buffers, used by the popular browser as storage for users' personal data including names, emails, phone numbers, bank details, credit card and social security numbers. "Despite employees having entered this information on secure websites, Chrome saved copies of this data in the History Provider Cache. Other SQLite databases of interest include "Web Data" and "History." On Windows machines, these files are located at The attacker needs physical access to the user's machine, but the same results could be obtained if malicious code were specifically written to exploit well-known vulnerabilities in Google Chrome. "Chrome browser data is unprotected, and can be read by anyone with physical access to the hard drive, access to the file system, or simple malware," noted the researchers.
"There are dozens of well-known exploits to access payload data and locally stored files." Earlier this year, software designer Elliott Kember revealed how to expose users' passwords saved by Google Chrome simply by viewing Chrome's settings page at chrome://settings/passwords, where they appear in plain text. Identity Finder's security experts have provided the following infographic to make Google Chrome users aware of the risks of exposure for their data. Identity Finder has notified Google of the risk related to exploitation of the flaw, but has not yet received a response. While awaiting Google's reply, the experts provided the following suggestions: "Anytime you enter a credit card number or other PII into a form, be sure to "Clear saved Autofill form data", "Empty the cache", and "Clear browsing history" from the past hour and the information you typed will be erased. Alternatively, disabling Autofill or using Incognito mode will protect form data." (Security Affairs – Google Chrome, privacy)
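The researchers' core point — that this browser data sits in ordinary, unencrypted SQLite files readable by anyone with file access — can be demonstrated concretely. Rather than touching a real Chrome profile, the sketch below builds a tiny stand-in database with a simplified version of Chrome's urls table (the real schema has more columns) and reads it back with no decryption step at all:

```python
import os
import sqlite3
import tempfile

# Build a stand-in "History" file with a simplified urls table.
db_path = os.path.join(tempfile.mkdtemp(), "History")
conn = sqlite3.connect(db_path)
conn.execute("CREATE TABLE urls (id INTEGER PRIMARY KEY, url TEXT, title TEXT)")
conn.execute("INSERT INTO urls (url, title) VALUES (?, ?)",
             ("https://bank.example/login", "Online Banking - Sign In"))
conn.commit()
conn.close()

# Anyone with nothing more than file access reads the data back in the clear.
conn = sqlite3.connect(db_path)
rows = conn.execute("SELECT url, title FROM urls").fetchall()
conn.close()
for url, title in rows:
    print(url, "->", title)
```

The same plain SELECT works against a real profile's History or Web Data files, which is exactly why the lack of at-rest encryption matters: no exploit is needed once an attacker has file-system access.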
WHAT'S DIFFERENT ABOUT DIGITAL IN MEDICAL DEVICES? Traditionally, medical devices were restricted to giving a certain set of outputs for a corresponding set of inputs, which doctors would use as information to make decisions. The accuracy of those decisions was purely at the professional's discretion. But as healthcare becomes more specialized and complex, quality and attention to detail are struggling to keep up with patient requirements. The key to bridging this gap lies in medical technology (Medtech) and how AI, IoT and analytics can be used to enhance the scope of traditional devices used by both healthcare professionals and patients. With intelligence powering Medtech, decision making becomes more certain, preventive care is more achievable, and patient centricity is front and center. It is no longer just healthcare organizations who are responsible for improving patient treatment. Smart, connected devices armed with cognitive capabilities are enabling better patient outcomes, shifting the focus to technology providers and manufacturers. FORCES PUSHING FOR AI IN MEDTECH Artificial intelligence is currently at the heart of digital technology. Artificial intelligence in medicine has emerged as one of the most important tools for the progression of the healthcare industry. Several forces have come together to make AI a necessity in MedTech: Force 1: Healthcare professionals – Two great benefits of artificial intelligence are accuracy and speed. While medical devices were previously result oriented, AI in healthcare enables them to provide better and faster analysis of highly complex information. The ability to provide medical and healthcare professionals with near-perfect accuracy makes it much easier for them to make the right calls in tough situations, while also freeing up bandwidth for more important cases. An example of this is the advancement in medical imaging through AI.
CT scans can be a challenge at times for doctors to analyze accurately, but AI-powered algorithms can now help capture even small patterns of information indicating organ damage that would usually be difficult to spot for even the most experienced professionals. Surgery is also a field that will greatly benefit from intelligence. Recovery from surgery is a difficult task for both patients and surgeons, and monitoring vitals post-surgery is a constant process. Using intelligence on pre- and post-surgery data can help with tracking vital signs. Complemented by comparative patient analytics, necessary intervention by healthcare professionals can be taken at the right time. This provides both patients and the hospital better outcomes in terms of recovery time and cost borne. AI-powered medical devices are paving the path for a highly effective healthcare system. Force 2: Self-care – Fitness and health have become an increasingly important trend, leading to an increased interest in self-care and preventive measures. Patients want greater control over their treatment and conditioning, and this puts the emphasis on home care. Devices need redesigning to create a solution for this need. The enabler for this will be AI and other digital technologies. This type of care is possible through the integration of AI and cognitive capabilities with medical devices. Cloud connectivity, real-time analytics and smart sensors allow devices to perform several tasks, like remote monitoring or biometric indicator tracking, that greatly aid patient care. Because this information is also easily accessible to physicians, they can take preemptive action whenever required. With AI-enabled medical devices, patients and caregivers are more in control of situations concerning the patient's health. In addition to healthcare providers, AI-powered medical devices are witnessing a growing demand from the home-care/self-care segment as well.
Force 3: Medical device manufacturers – With the home-care/self-care market for AI-powered medical devices growing, and healthcare organizations seeking to transform rapidly, medical device manufacturers need to explore new avenues and consolidate existing ones to grow their business. This means creating new services and solutions using smart, intelligent technologies, which will help them address these markets while having a significant impact on patient care. With the rising number of technology companies looking to offer solutions over a wide range of healthcare requirements, competition is fierce, and differentiation is table stakes. Therefore, traditional OEMs must look at collaboration with new-age technology companies to integrate and implement AI in products to stay ahead of the competition, while addressing current market needs. Force 4: Insurance providers – Medical device manufacturers are also responsible for ensuring the proper functioning of their devices. Here is where insurance companies also play a part in covering liabilities. This can become a problem when certain devices, like surgical kits, are vulnerable to contamination with no way of confirmation. But if these devices are connected, it becomes easier to track and ensure that sterilization of the equipment has not been compromised. There is no more ambiguity when it comes to claim amounts for insurance companies. Hospitals can keep excellent track of inventory and place orders according to requirements. The medical device industry can save field-visit costs while ensuring a significant increase in device uptime. All these advantages are secured with the simple integration of AI in healthcare and other smart technologies that allow medical devices to do more for all the stakeholders in a care lifecycle. Technology integration in medical devices is not the most straightforward process.
The right infrastructure and technology model needs to be in place for the product to be a success. Developing expertise in-house is tedious and expensive because of the scarcity of skilled talent in specific areas of healthcare technology. Partnership with new-age technology companies provides the best course of action for OEMs to accelerate their product and platform engineering. It provides the dual advantage of speed and GTM experience. Ensuring the security of patient data while remaining compliant with regulations such as HIPAA is a huge concern. With many sensors and data flows, creating the right security measures is a big challenge for medical device manufacturers. While developing the product is the first step, implementation is a much tougher hill to climb. Here is where domain experience and the right testing environment will play a crucial role in ensuring the product is a success. Innovation through collaboration between all stakeholders involved in the development and application of medical devices will be pivotal to the advancement of patient care.
What Are Business Logic Vulnerabilities? Business logic vulnerabilities are design and implementation flaws in software applications. The logic involved serves a legitimate business function, but it can also be exploited by malicious attackers to create unexpected behavior. These flaws often result from an application's inability to identify and safely handle unexpected user actions. In most applications, business logic is implemented with defined rules, constraints, and user workflows. These rules and flows are defined either at the design stage or earlier, when defining business requirements. Developers build these constraints into applications, defining how the application will behave. However, a weak point is the implementation of appropriate access rights at each point of the user flow. Depending on how user inputs are handled and parameters passed to functions and APIs, business logic vulnerabilities can occur. Business logic flaws are often difficult to detect, and managing them as vulnerabilities can be challenging. Typically, identifying them requires cooperation between individuals who deeply understand the business and manual testing teams. Automated testing of business logic vulnerabilities is challenging, but a new generation of tools can achieve it using artificial intelligence (AI) and fuzz testing technology. An example of such a tool is Bright Security DAST. In this article: - Adding Business Logic Vulnerabilities to the Vulnerability Management Process - Business Logic Vulnerability Examples - 4 Critical Best Practices for Business Logic Vulnerability Management - Eliminating Business Logic Vulnerabilities with Bright Security Adding Business Logic Vulnerabilities to the Vulnerability Management Process The risks associated with flaws in business logic are context-specific and depend on the nature of the business. Organizations must perform threat modeling, leveraging knowledge of the business processes carried out by the application, to accurately identify threat agents.
Another aspect of vulnerability assessment is to identify processes related to revenue streams. These processes, if interrupted, could cause major damage to the organization. They could also be attractive for attackers to target because of their financial value. Business logic flaws are very common in large application projects involving large development teams. Developers who work on specific modules or components may not fully understand the work done by other developers, and might make incorrect assumptions. Without proper coordination and documentation, these assumptions can become vulnerabilities that can impact application security. Organizations need a process for regularly checking existing applications and new code for business logic vulnerabilities. This should be a part of the overall vulnerability management strategy. When the organization tests for and remediates known vulnerabilities in its applications, it must not neglect business logic vulnerabilities. Business Logic Vulnerability Examples Excessive Trust in Client-Side Controls A fundamentally flawed assumption is that the user only interacts with the application through the provided web interface. This is especially dangerous because it leads to the additional assumption that client-side validation prevents the user from providing malicious input. However, attackers can use proxy tools to tamper with data after it is sent from the browser and before it is passed to server-side logic. This effectively disables client-side controls. Accepting data at face value without performing proper integrity checks and server-side validation allows an attacker to do major damage with minimal effort. What they can achieve depends on the application’s capabilities and the value of the data it holds. Making Flawed Assumptions About User Behavior One of the most common root causes of business logic bugs is wrong assumptions about user behavior. 
Commonly, developers don’t consider potentially dangerous scenarios that violate these assumptions. For example, applications can appear secure because they implement a robust way to enforce business rules. However, some developers don’t realize that users and data within the application cannot be trusted indefinitely after passing these strict controls. By applying constraints only at the beginning of the interaction, and failing to verify them later, these applications can allow privilege escalation. In general, if business rules and security measures are not applied consistently across applications and throughout user interactions, they can create potentially dangerous vulnerabilities that attackers can exploit. Many logical flaws are related to the specific business domain or the subject matter of a specific application. An example is a discount feature in an eCommerce website. This is a significant attack surface, because it allows attackers to explore underlying logical flaws in the way discounts are applied. In general, any application function that makes it possible to adjust prices, make payments, or modify any sensitive data value based on user interaction, must be carefully considered. It is important to understand the algorithms the application uses to make these adjustments and in which circumstances they occur. A good way to test this is to manipulate these types of functions, attempting user inputs that will lead to unexpected results. 4 Critical Best Practices for Business Logic Vulnerability Management Identifying business logic vulnerabilities requires determining how an application should work and understanding how attackers might exploit the business logic. Penetration testers use this information to design and test threat scenarios. Human creativity allows attackers (and pentesters) to find workarounds. Identifying Logic Flaws Security analysts should assess the codebase to understand the business rules and logic of the application. 
They should identify the security controls in place, how they work, and any control gaps. Understanding the Software To protect the software, security, testing, and development teams must understand it fully. Organizations should compile lists of known vulnerabilities, licenses, and code components. Scanning the codebase can help identify vulnerabilities. Automating Security Processes Vulnerability management processes are often too complex and time-consuming for human security and dev teams to handle alone. Organizations should utilize automated testing tools like Interactive Application Security Testing (IAST), Static Application Security Testing (SAST), and Dynamic Application Security Testing (DAST). Security teams should use the insights from scans and analysis tools to prioritize high-risk vulnerabilities. It is often impractical to address all vulnerabilities quickly, so prioritization allows developers to fix the most pressing issues first. Detect Business Logic Vulnerabilities automatically – Sign up for a free Bright account
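The price- and discount-manipulation flaws described earlier can be made concrete with a toy checkout. The catalog, discount bound and function names below are hypothetical; the point of the sketch is simply that the safe version recomputes every sensitive value server-side instead of trusting client input:

```python
# Hypothetical server-side catalog: the authoritative source of prices.
CATALOG = {"sku-101": 49.99, "sku-202": 9.99}

def vulnerable_total(cart):
    # Classic logic flaw: trusts the price field sent by the client.
    return sum(item["price"] * item["qty"] for item in cart)

def safe_total(cart, discount_pct=0):
    # Looks every price up server-side and bounds the discount
    # (30% cap is an arbitrary illustrative business rule).
    if not 0 <= discount_pct <= 30:
        raise ValueError("discount outside allowed range")
    subtotal = sum(CATALOG[item["sku"]] * item["qty"] for item in cart)
    return round(subtotal * (1 - discount_pct / 100), 2)

# A proxy-tampered request where the client set its own price.
tampered = [{"sku": "sku-101", "qty": 1, "price": 0.01}]
print(vulnerable_total(tampered))  # attacker pays 0.01
print(safe_total(tampered))        # server recomputes: 49.99
```

A penetration tester probing business logic would send exactly this kind of tampered input — altered prices, out-of-range discounts, negative quantities — and watch whether the server recomputes and rejects them.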
Healthcare and the Role of Cable Advances in nanotechnology, internet of things, 3D printing, personalized medicine, genomics, and big data are creating a convergence that will allow lower-cost, more effective, and more convenient medical practices to become the norm over the next few years. These advances will change the medical landscape significantly, and create large opportunities for those who can integrate enabling and underlying technologies. Environmental and Demographic Challenges Now is a very interesting time in medical technology. Researchers at McGill University and the UCLA Fielding School of Public Health analyzed the efficacy of health care systems across the world and found the U.S. ranks 22nd out of 27 high income nations when it comes to increasing life expectancy (per dollar spent), meaning the US health care market is especially inefficient. In addition, in the next twenty years, due to an aging population, for the first time the US health care system will have fewer payers than payees – more people will be on Medicare and Medicaid than paying into the system. An aged population requires different (and frequently more expensive) services, which will add additional economic pressure on the system. For this reason, the next few years will see a shift towards demonstrable value in services, as well as shifts in the technological and entrepreneurial landscape, with a goal of providing more services at a lower cost (also called medical efficiency) – and thanks to technological innovation, this should come without compromising healthcare outcomes. Although the landscape may appear dismal, technological opportunities may save American society. Technological Factors at Work Several technological factors are at work right now that should help make this a reality: - Nanotechnology is becoming mature, especially as applied to in-the-field testing. 
Several technologies are currently being developed and tested, including a portable dengue fever test and HIV test. By 2020, most blood tests that previously required a trip to a regional lab may be available from anywhere, at a very low cost. - The Internet of Things (IoT) and 3D Printing are creating an innovation environment for devices where the cost of prototyping a new device has dropped over 75% in the last 3 years, with a similar drop in cost of end user healthcare devices. 3D printing has also allowed for the creation of customized health care, such as custom prosthetics. - Lower costs due to the above should make possible near-continuous testing of such things as blood pressure, blood glucose and hormone levels, leading to significantly better well-care outcomes for patients with diabetes, high blood pressure, and hormonal issues – the three most common chronic issues in the population. - In addition, low cost remote devices will allow better follow-up and post-procedure compliance on the part of patients. A recent study showed that average compliance after hospital stays is less than 50%, mostly because of inability to remember or follow post-care instructions. Several companies are developing software that is used both in-hospital and once the patient is home (for example, GetWell Networks), and will be integrating these systems with home care products that provide reporting and alerting on everything from outpatient activity levels to pharmaceutical consumption, allowing for far more comprehensive and effective follow-up care and significantly better outcomes. - Electronic Health Records (EHR) are rapidly getting standardized, and devices are beginning to interoperate more effectively with these systems. This allows big data analysis and patient monitoring automation at levels not previously seen. 
- Genomic testing is becoming available, allowing “personalized medicine” – testing against a user’s genetic information to determine whether a treatment is likely to work for an individual (rather than statistically across a broader swath of the general population). These technological factors will enable more efficient analysis of patient records, which in turn allows for “continuous analysis” of medical device, pharmaceutical, treatment, and procedural effectiveness across a broad population – a continuous clinical trial for existing and emerging treatments. This would allow innovative entrepreneurial, reimbursement, and treatment models on the part of the medical insurance industry – keying reimbursement rates and copayments to the efficacy of treatments in the general population. These potential reimbursement and treatment models can lower one of the key factors that increase the cost of health care: adapting the standard of care based on what’s new and more effective for only a minority of the population. These technical advances also allow for personalization of medicine, potentially providing incentives for pharmaceutical companies to develop tests that indicate the efficacy of a medicine for a particular patient. Big data models can also help in fraud detection, approval processes, and detection of cross-indicators that define populations at high risk of complication, all additional causes of inefficiency in the health care system. Where Cable Adds Value to the Healthcare Equation There are a number of opportunities for the Cable industry regarding these developments: - Network Services: Remote testing and monitoring requires a highly secure, private backbone for data transmission, as well as the ability to transmit large quantities of imaging data.
- Inter-Clinic Connectivity: As data interoperability standards mature for medical devices, they should enable independent remote “clinics” that can interconnect with any hospital – these could exist in caregiver facilities, offices, or neighborhoods. These clinics should be able to “dial up” to a larger care facility and interoperate securely for the duration of a care visit, without having to be a part of that facility’s network. Again, these clinics need secure, private, high-bandwidth services. - Data Centers: Big data and machine learning requirements of healthcare will require huge amounts of data and compute, an opportunity for large-scale datacenters. In addition, these services may require the ability to anonymize data for remote application consumption, and this will be a new class of cloud service. By Ken Fricklas
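The anonymization service mentioned above can be illustrated with a short sketch. This is a minimal illustration, not the author's design: the field names, the hard-coded key, and the HMAC-based tokenization are all assumptions for the example; a production system would keep the key in a key-management service and layer on stronger protections such as k-anonymity or differential privacy.

```python
import hashlib
import hmac

# Hypothetical secret for keyed pseudonymization; in practice this would
# come from a key-management service, never from source code.
SECRET_KEY = b"example-key-do-not-use-in-production"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed, irreversible token."""
    digest = hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

def anonymize_record(record: dict, identifier_fields=("name", "ssn", "address")) -> dict:
    """Return a copy of a patient record with direct identifiers tokenized.

    Clinical fields (e.g. blood pressure) are left intact so remote
    applications can still analyze them.
    """
    cleaned = dict(record)
    for field in identifier_fields:
        if field in cleaned:
            cleaned[field] = pseudonymize(cleaned[field])
    return cleaned

record = {"name": "Jane Doe", "ssn": "123-45-6789", "blood_pressure": "120/80"}
print(anonymize_record(record))
```

Because the tokenization is keyed and deterministic, the same patient maps to the same token across datasets, which preserves the ability to link records for the population-level analysis the article describes without exposing identities.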
Hewlett-Packard has introduced a new supercomputer server system cooled by warm water. HP describes its Apollo 8000 as “the world's first water-cooled supercomputer with dry-disconnect servers, delivering liquid cooling without the risk.” The technology is based on work HP did with the National Renewable Energy Lab (NREL) on the Peregrine supercomputer. Water cooling is 1,000 times more efficient than air. According to HP, compared to air-cooled systems the Apollo 8000 can provide up to 4 times more teraflops per sq ft, 40 percent more FLOPS/watt and save more than 3,800 tons of CO2 annually. Antonio Neri, HP’s head of servers and networking, said the supercomputer requires 28 percent less energy than air-cooled systems, according to a Wall Street Journal blog. The system uses sealed heat pipes to circulate water past the cores and is paired with an HVAC power distribution system and an iCDU (cooling) rack for high efficiency. Because of its cooling power, the system can pack more servers in a smaller space than traditional systems – up to 144 servers per rack. The massive amount of space and energy required to power supercomputers has been a limiting factor in growing processing power. In fact, utility and energy provider ComEd reports that cooling equipment used to remove heat in data centers accounts for nearly 45 percent of a data center’s energy costs. “The HPC world has hit a wall in regard to its goal of achieving Exascale systems by 2018,” said Peter ffoulkes, research director at 451 Research, in a Scientific Computing article. “To reach Exascale would require a machine 30 times faster. If such a machine could be built with today’s technology it would require an energy supply equivalent to a nuclear power station to feed it.
This is clearly not practical.” Jim Ganthier, HP Server’s vice president of global marketing, concurs. "The present course is unsustainable. If you continue the present course/speed over the next five years, you would need a gigawatt of power, the output of Hoover Dam, and 30 football fields to accommodate the space requirements,” he told CRN. Today supercomputers are primarily used by government research organizations, such as the Department of Energy laboratories, and corporations that require massive computing power for complex calculations. However, there is an ever-increasing need across all sectors for increased processing power as the growth of big data continues. Recently Lawrence Livermore National Lab announced it is making its Catalyst supercomputing cluster available to industry, universities and other collaborators to test big data technologies, architectures and applications. Other providers, including IBM, also sell water-cooled supercomputers, and the technology is spreading. According to The Register, some of the world's top 500 supercomputers may start using the technology by mid-2015. One of the secondary benefits to liquid cooling of high-performance computers is that the process also allows data centers to use the heat transferred to the water to heat office space and laboratories – improving waste-heat recovery and reducing water consumption in the data center. The Peregrine system at NREL’s Golden, Colo., Energy Systems Integration Facility uses water that’s 75 degrees Fahrenheit for cooling. “That temperature allows us to cool the data center effectively, without compromising the IT equipment, without any chillers,” said Steve Hammond, director of Computational Sciences, NREL. When the water is returned from Peregrine, it’s about 95 degrees, which creates a ready-made source of heat for the facility.
NREL expects to save $1 million per year in operations costs – $800,000 in server cooling costs and $200,000 in building heating costs – by using HP’s warm water cooling technology, according to a HP case study on the project. The National Security Agency is going one step further in data center energy conservation. It will be using wastewater to cool its servers at its data center in Fort Meade, Md. Up to five million gallons a day of treated wastewater, also known as graywater, will be used for cooling systems at the data center, due to open in 2016.
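The waste-heat recovery described above can be put into rough numbers with the basic heat equation Q = ṁ · c · ΔT. The 75°F supply and 95°F return temperatures come from the article; the 10 L/s flow rate is an assumed figure for illustration only, since the article does not give one.

```python
def recovered_heat_kw(flow_lps: float, t_in_f: float, t_out_f: float) -> float:
    """Heat carried off by the cooling loop, Q = m_dot * c_p * dT, in kW."""
    c_p = 4186.0                              # specific heat of water, J/(kg*K)
    density = 1.0                             # ~1 kg of water per liter
    d_t_k = (t_out_f - t_in_f) * 5.0 / 9.0    # Fahrenheit delta -> Kelvin delta
    m_dot = flow_lps * density                # mass flow, kg/s
    return m_dot * c_p * d_t_k / 1000.0       # W -> kW

# Assumed 10 L/s loop with the article's 75 F in / 95 F out temperatures.
print(round(recovered_heat_kw(10.0, 75.0, 95.0), 1))  # ~465.1 kW
```

At that assumed flow rate, the 20°F rise corresponds to roughly 465 kW of heat continuously available for the facility, which gives a sense of scale for the building-heating savings NREL reports.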
As students return to school, the month of September inspires learning and collaboration to strengthen opportunities for work and personal interests. Many of us enjoy the range of benefits that technology introduces to our lives, and artificial intelligence is embedded in many applications that we are starting to depend on for employment and home life. Unfortunately, we cannot ignore the ethical, privacy, and human rights issues that AI presents to individuals, locally and globally. In conjunction with the Morality & Knowledge in Artificial Intelligence (MKAI) Forum, PrivCom is pleased to share this guest post by Animesh Jain, Vemir Ambartsoumean & Richard Foster-Fletcher (Chair of MKAI). The authors of the article below explore the importance of academia's role in mutual trust-building exercises and cross-cultural development to promote collaboration and cooperation in order to establish AI governance that serves and protects humanity.
As technology creeps into more and more areas of consumers’ everyday lives, the risk of overexposure to gadgets, content, games and high-tech services rises. How much is too much? This first article in a three-part series on the potential dangers of substantial exposure to technology focuses on the risk to infants and children. From Baby Einstein tapes for infants to Reader Rabbit software for two-year-olds to Nintendo consoles given as early as fifth birthdays and beyond, technological advancements designed to stimulate the intellect and entertain the soul are overwhelming many 21st century kids. Technology access has been linked to improved reading skills, but some believe that too much technology can impose dangers on today’s youth — including vision impairment, technology addiction and sexual solicitation. To be sure, technology opens the doors to a world that includes much more than convenience, knowledge and entertainment. Pros and Cons “In the past, we only had to be concerned about too much TV exposure. Now we have video games, computers and cell phones. It is overwhelming for young children and creates patterns of behaviors similar to addiction patterns,” said Mali Mann, M.D., adjunct clinical assistant professor of psychiatry and behavioral science at Stanford University’s School of Medicine. “Their brains get used to too much auditory and visual stimulation — and in the absence of these stimulations, they do not know what to do with themselves,” she told TechNewsWorld. “They get anxious, restless, bored and aggressive.” Researchers are conducting numerous studies to measure how much children of all ages use technology and to evaluate its impact. The responses are mixed — and telling. Some reports have condemned the use of computers in schools. Others have endorsed Internet use in the classroom. Whether or not they are exposed to technology in the classroom, kids often have bedrooms that are media centers, according to a Knowledge Networks/SRI study.
It reveals that nearly two-thirds of children have a television in their room, while 17 percent have their own computer and 35 percent have a video game system. The use of this technology begins early. According to a Kaiser Family Foundation survey, 31 percent of children age three and under are already using computers. Sixteen percent use them several times a week, 21 percent can point and click with a mouse by themselves, and 11 percent can turn on the computer without assistance. What’s more, a third of children — many as young as 11 years old — use blogs and social networking sites at least two or three times a week. Yet two-thirds of parents don’t even know what a blog is, according to a report by NCH Children’s Charities and Tesco Telecoms. The report reveals an alarming gap in knowledge between parents and their children when it comes to technology, breeding concern that children may be at risk of exposure to sexual predation and other dangers. Incessant exposure to “all day TV,” violent video games, instant messaging, and the always accessible cell phone interferes with the development of the psychological traits known to be essential to positive outcomes for children, according to Leah Klungness, Ph.D., psychologist in private practice and co-author of The Complete Single Mother. Self-control is one of these essential psychological traits. “Research findings suggest that the ability to focus attention and delay gratification have both a hereditary and environmental component. Differences among children in their ability to focus attention and exercise control emerge before a child’s first birthday. No one is sure how much of the ability to exercise self control is hereditary or how much is learned,” Klungness told TechNewsWorld. In other words, through experience, children can be taught to exercise self-control. On the other hand, such innate abilities can be “unlearned” by experiences that reduce a youngster’s capacity to exercise self-control. 
Constant media exposure is an experience that will reduce self-control in children, Klungness argued, because media is all about immediacy. How Young Is Too Young? Some child experts are asking what age is too young to introduce children to the immediacy of technology. Launched in May, BabyFirstTV rekindled the debate over age-appropriate technology and media. While BabyFirstTV’s founders cite top child development experts’ endorsements of the interactive TV channel, which provides content for babies and toddlers, not all child development experts agree that babies should watch TV. What does that mean for the 68 percent of children under age two that the Kaiser Family Foundation reports watch on-screen media every day? “When used responsibly, television can be a powerful interactive medium that provides parents with unique opportunities to bond with their children. The key is the quality and interactive nature of the content, and this is what BabyFirstTV offers to parents,” said Dr. Edward McCabe, M.D., Ph.D, physician-in-chief, Mattel Children’s Hospital at UCLA and BabyFirstTV advisory board member. Differing Medical Opinions However, the American Academy of Pediatrics maintains that children under the age of two should not be exposed to television for two major reasons: First, the baby’s brain is still developing. Doctors do not understand what happens when that brain is exposed to too much television stimuli, explained AAP spokesperson Dr. Kenneth Ginsburg. Second, the AAP is concerned that television will get in the way of child-parent bonding rather than encourage it. “What we know absolutely is that the most important thing that happens in the first couple of years of a child’s life is they form a deep connection with their parents,” Ginsburg told TechNewsWorld. “We know that connection is forged through active play, active interaction and reading. 
We know that kids who are read to grow to learn to love books, and that it is a stepping stone towards a lot of positive things that can happen in the future.” Oh, the Irony! Ironically, medical experts said parents with the highest educational goals and aspirations for their children and the resources to make choices to reach these goals are the ones who typically flood their children with every type of media from “educational” videos to the latest in computer technology. In doing so, Klungness said these parents ignore common sense and practical experience and actually deprive their children of the very experiences that allow them to master the sort of self-control that leads to academic success. “What kinds of experiences develop increased self-control?” Klungness asked rhetorically. “Activities which require patience — such as waiting for seedlings to sprout or working on a craft project which requires wait time between steps — are the types of activities accessible to all children with appropriate parental supervision.” Klungness summarized a point on which most medical experts can agree: Parents should supervise their children’s use of technology. Premature exposure and overexposure to technology creates emotional numbness, confusion between fantasy and reality, and pent-up anxiety that leads to aggressive behavior in children, according to Stanford’s Mann. “Too much technology exposure can lead to inattentiveness in the classroom setting for school-aged children. They may get diagnosed incorrectly with Attention Deficit Disorder or Attention Deficit Hyperactivity Disorder, or even be erroneously labeled with bipolar disorder,” Mann stressed. “These kids do not show interest in healthy physical exercise, and [they] lose interest in sports.” Mindfulness trainer Maya Talisman Frost has four kids — ages 15, 16, 18 and 20 — all of whom use technology on a regular basis to stay connected. Her take on technology is simple: Pay attention. 
Frost told TechNewsWorld that overworking, overspending, overeating, and overusing technology, among other things, are a direct result of not paying attention. “The key to managing kids’ technology use is to establish clear ‘tech-free’ zones,” she explained. “This means recognizing times when the present moment is the priority and technology is given a secondary role. Kids need to learn that there are times when paying attention to those around you is of primary importance, no matter what type of urgent phone calls or instant messages might be coming their way.” Practice What You Preach Frost and Mann both stress the need for parents to practice what they preach. If mom and dad have a difficult time disconnecting from technology, then kids will not see the need to disconnect either, they said. Parents set the tone when it comes to limiting technology. “Parents use cell phones on the way to dropoff or pickup times. They are absorbed in their own virtual world and pay no attention to their children’s departure and reunion at the end of the day,” Mann asserted. “Children learn from their parents as if these are perfectly normal behaviors, and they emulate them, too.”
What does “zero-day” mean? The term “zero-day,” sometimes written as “0-day,” is used to describe a software security flaw that exists in a product that is already in circulation. To put it plainly, it means that the developers have “zero days” to fix the issue because it is already possible for bad actors to use it to their advantage. A zero-day attack refers to the use of a zero-day vulnerability to damage, steal, cripple or otherwise disrupt a system for unauthorized or criminal intent. What is the Log4Shell zero-day vulnerability? The Log4Shell vulnerability is a flaw discovered in Apache’s Log4J logging tool. When properly exploited, this flaw can allow an unauthorized user to take control of a server, install malware, run bots, set the stage for ransomware attacks, utilize resources to mine for cryptocurrency or snoop into the data present to steal valuable information. In summary, Log4Shell essentially gives hackers a blank check when it comes to how they may wish to exploit their targeted system. Why is Log4Shell so dangerous? While newly discovered, Log4Shell is already causing tremendous stress and concern among organizations all over the world, with some researchers confident that it may be the single largest, most critical vulnerability in modern computing. Apache itself ranked the exploit as a 10 out of 10 on its scale of severity, and the CEO of cybersecurity firm Tenable, Amit Yoran, has described Log4Shell as “the single biggest, most critical vulnerability of the last decade.” Log4Shell’s severity is twofold: Apache’s Log4J product is ubiquitous, present in potentially millions of apps across both private and government sectors. Because it is a foundational piece of software, there are almost no systems that aren’t at risk. IT administrators all over the world are scrambling to patch the vulnerability in a race against thousands of efforts by criminals to take advantage of it. Secondly, it is remarkably easy for hackers to use the flaw. 
Criminals can access a web server without the use of a password and with little of the savvy usually associated with breaches that allow this degree of control. How was the Log4Shell vulnerability found? On December 9th, a tweet was posted by Chen Zhaojun of the Alibaba Cloud Security team that described the vulnerability. Initially believed to be present only in Minecraft, Log4Shell was used to essentially create pranks within the game. However, it was soon discovered that the flaw was actually present in every single instance of the Log4j tool. How are organizations reacting to Log4Shell? It is believed that the Log4Shell exploit had been in circulation among hackers prior to its official discovery for at least a number of days. However, since becoming public, efforts to utilize the flaw have exploded, resulting in a nightmare scenario for security professionals who have found themselves working extensively to fortify their networks against hackers. Apache has released a patch for Log4j and, naturally, is urging all users to update their systems immediately. In some cases, administrators have opted to simply take their systems offline in pre-emptive efforts to avoid catastrophic intrusion into their data. The government of Quebec, Canada, deciding that the risk of leaving the vulnerability exposed outweighed the inconvenience of shutting down its websites while IT professionals fixed it, did just that. Their websites are being combed over by developers who are working to patch each instance of the flaw before allowing public access to the sites to resume. The US’s Cybersecurity and Infrastructure Security Agency (CISA) has released a statement regarding the exploit, its severity and the importance of installing Apache’s update: “This vulnerability, which is being widely exploited by a growing set of threat actors, presents an urgent challenge to network defenders given its broad use.
End users will be reliant on their vendors, and the vendor community must immediately identify, mitigate, and patch the wide array of products using this software. Vendors should also be communicating with their customers to ensure end users know that their product contains this vulnerability and should prioritize software updates.” CISA is urging administrators to take the following steps with regard to the Log4Shell exploit: - Enumerate any external facing devices that have log4j installed. - Make sure that your security operations center is actioning every single alert on the devices that fall into the category above. - Install a web application firewall (WAF) with rules that automatically update so that your SOC is able to concentrate on fewer alerts. What can we expect now? As with all software vulnerabilities, organizations and companies all over the world will have to patch their systems to ensure continued security. A great number of breaches are already taking place, with threat actors installing malware and digging into compromised systems. While some of these instances will be caught immediately, recent history has shown us that network vulnerabilities can have effects that last for months and even years after patches are released. Vulnerabilities that were discovered to be exploitable within Microsoft Exchange Server in January of 2021, for example, continue to plague organizations in spite of the company quickly providing updates that fixed the highly publicized flaw. In many cases, administrators simply never patch or update their systems due to ignorance or budget constraints. Because Log4Shell is both new and highly exploitable, we can expect to feel the effects of the flaw for months to come, as those that are slow to update find themselves targeted and those that did not update soon enough discover damage that had remained hidden or fall victim to newer iterations of the attack. 
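CISA's first step, enumerating devices with Log4j installed, can be approximated on a single host with a filename sweep. This is a hedged sketch, not an official tool: it only matches plainly named log4j-core jars, and it will miss copies bundled inside shaded or "fat" jars, which dedicated scanners inspect by reading jar contents. The 2.15.0 cutoff reflects the first release patched against CVE-2021-44228 (later advisories recommended 2.16 and beyond).

```python
import os
import re

# log4j-core 2.0 through 2.14.x is affected by CVE-2021-44228;
# 2.15.0 was the first patched release.
LOG4J_JAR = re.compile(r"log4j-core-2\.(\d+)\.(\d+)")

def scan_for_log4j(root: str) -> list:
    """Walk a directory tree and flag log4j-core jars older than 2.15."""
    findings = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            match = LOG4J_JAR.search(name)
            if match and int(match.group(1)) < 15:
                findings.append(os.path.join(dirpath, name))
    return findings
```

Run against an application directory, anything the function returns is a candidate for Apache's update; an empty result should not be read as proof of safety, for the reasons above.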
At this point in time, security experts do not even fully understand the entirety of the flaw’s implications and the ways in which it may be weaponized. Many expect that Log4Shell will likely remain headline cybersecurity news well into 2022 and beyond unless security teams bolster their defenses significantly. Some cybersecurity researchers have already noted that hackers are further weaponizing the flaw, with more than 60 Log4Shell mutations developed in 24 hours.
- The Log4Shell 0-day, four days on: What is it and how bad is it really? by Dan Goodin, 13 Dec 2021, Ars Technica
- What is Log4Shell, and why are we panicking about it? by Alex Scroxton, 13 Dec 2021, Computer Weekly
- Hackers start pushing malware in worldwide Log4Shell attacks by Lawrence Abrams, 12 Dec 2021, Bleeping Computer
- Recently uncovered software flaw ‘most critical vulnerability of the last decade’, Associated Press, 10 Dec 2021, The Guardian
- What’s the Deal with the Log4Shell Security Nightmare? by Nicholas Weaver, 10 Dec 2021, Lawfare
- Quebec government shutting down websites as a ‘preventive’ measure by Virginie Ann and Clara Descurninges, 12 Dec 2021, Montreal Gazette
- Statement from CISA Director Easterly on “Log4j” Vulnerability, 11 Dec 2021, CISA
- Zero-Day Exploits & Zero-Day Attacks, Kaspersky
- Experts: Log4j Bug Could Be Exploited for “Years” by Phil Muncaster, 14 Dec 2021, InfoSecurity
- Log4Shell Is Spawning Even Nastier Mutations by Lisa Vaas, 13 Dec 2021, ThreatPost
10 gigabit Ethernet (or 10 GB Ethernet, 10 GbE) is the most recent advancement of the networking technology. What is 10 gigabit Ethernet? “Ethernet” is the shorthand term for the manner in which one can physically connect computers and devices using an Ethernet switch and the appropriate cables. This relatively small, local area network is referred to as a LAN. LANs can be made of only a few pieces of hardware in a home office, or they can be large enough to encompass an entire retail store or college campus. Traditional Ethernet, with a speed of 1 gigabit, has been used in various networks for decades. This speaks volumes about its usefulness and ease of application. 10 gigabit Ethernet, often abbreviated as 10 GB or 10 GbE, is ten times faster than its predecessor. Why should your small business upgrade to 10 gigabit Ethernet? Making upgrades to any established network takes careful consideration, budgeting, and planning. Here is a list of six ways that an upgrade to 10 GB Ethernet can help your small business grow and compete for years to come: 1. 10 gigabit Ethernet is faster Small businesses and data centers have become increasingly virtualized, connected, and dense with computers and other hardware required to stay constantly connected. In many offices, congested networks of speedy, high-output devices have outpaced their network infrastructure. An upgrade to 10 gigabit Ethernet removes the network bottleneck and allows for faster communication between devices thanks to its added speed. 2. Make your small business more flexible 10 GB Ethernet grants small businesses the headroom required to expand operations without obstacles and without compromise. A network foundation built on 10 GB Ethernet is the smartest way to ensure smooth sailing with regard to network growth and density, as the new technology is designed for scalability. 3. 10 gigabit Ethernet takes up less space Even the most well-organized data centers can use a little breathing room.
10 GB Ethernet switches provide far more power in a physically compact package. Entire cabinets of traditional Ethernet switches can be exchanged for only a few 10 gigabit models. Fewer switches mean less cabling, and less clutter in general means that your data center can benefit from better airflow, less maintenance, and more efficient temperature management. 4. You can keep up with data requirements Data requirements roughly double every two years. The new standard of HD video, the increasing prevalence of 4K streaming content, and employees often using their own personal devices at the office have resulted in a huge impact on office data usage. Electronic door locks, security cameras, and card readers are also easily forgotten when it comes to considering how much data your network will generate. Businesses that migrate to 10 GB Ethernet networks are able to sidestep drag on their LANs and still manage more bandwidth and deeper connectivity. 5. You can probably use a lot of what you already have One of the biggest financial and logistical obstacles when it comes to major upgrades is that you can rarely replace one component without then needing to replace those associated with it. To take advantage of all that 10 GB Ethernet has to offer, you need Cat 6 or higher cables. Another option is a full upgrade to fiber cables. This is a significant obstacle in the way of updating, but many manufacturers have accommodated this by offering 10 gigabit Ethernet switches that support different speeds across different cable types. Purchasing a switch with this option allows businesses to build a new 10 GB network without necessitating a simultaneous, full-scale cable replacement. 6. 10 gigabit Ethernet has become more affordable The cost of an upgrade has to be weighed against the business gains. 10 GB Ethernet has become more affordable partly because of the availability of refurbished switches and hardware.
Many refurbished switches still have warranties in place, and there is also an advantage in using equipment that has already been tested in the real world. Not being an early adopter allows you to avoid many of the bugs and surprises associated with a brand new, untested product. Businesses everywhere strive to improve efficiency and maximize output. The advent and application of 10 GB Ethernet is an important step forward in achieving your small business goals. An upgrade to 10 GB Ethernet for your small business: - Allows for greater network speed and reliability - Future-proofs your small business for growing data demands - Ensures a less cluttered and cooler data center - Permits unhindered network growth - 10 Gigabit Ethernet Guide: is it time to upgrade to 10 GbE? - 10 gigabit Ethernet (10 GbE) - How to Upgrade Your Network to 10 Gb/s and Speed Up Your Workflow - 10 Gigabit Ethernet Switches: 6 Benefits You Might Not Have Considered - What Is Ethernet? - Gigabit Ethernet Versus 10 GbE: What’s Best For Small Businesses? - 10 Reasons Your Business Should Switch to 10-Gigabit Switching
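The speed argument in point 1 can be made concrete with a quick back-of-the-envelope calculation. The 90% usable-throughput figure below is an assumption to account for protocol overhead, not a number from the article, and real links vary with framing, TCP tuning, and disk speed.

```python
def transfer_time_seconds(size_gb: float, link_gbps: float,
                          efficiency: float = 0.9) -> float:
    """Time to move a payload over a link, assuming ~90% usable throughput."""
    bits = size_gb * 8e9                       # payload size in bits
    usable_bps = link_gbps * 1e9 * efficiency  # effective line rate
    return bits / usable_bps

# Moving a 500 GB backup: 1 GbE vs. 10 GbE
for speed in (1, 10):
    minutes = transfer_time_seconds(500, speed) / 60
    print(f"{speed} GbE: {minutes:.1f} minutes")
```

Under these assumptions, the same 500 GB backup drops from roughly 74 minutes on 1 GbE to about 7.4 minutes on 10 GbE, which is the kind of headroom the article's points on speed and growing data demands describe.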
White Paper: Concise Guide to E-discovery

E-discovery is the process of identifying, preserving, collecting, reviewing, analyzing and producing case-relevant information during civil or criminal legal actions. The matter presented during e-discovery is useful in pre-trial motions as well as in the trial itself. Information sought during e-discovery can include electronic documents, testimony and other information that may be considered necessary by the court.

E-discovery is an extension of discovery, "the compulsory disclosure of pertinent facts or documents to the opposing party in a civil action, usually before a trial begins." It is a well-established process pertaining to electronic content, including email, instant messages, word-processing files, spreadsheets, presentations, invoices, contracts, social networking content and all other electronic content created in an organization. Data for e-discovery can be stored on desktop computers, laptops, smartphones, backup tapes, servers and even employees' personal computers.

Preparing for e-discovery is not only useful for litigation purposes; it also helps in a number of other ways, including managing and avoiding employee misconduct and legal actions, complying with regulatory obligations, preserving intellectual property, knowledge and other data, helping with storage management, and avoiding embarrassing data loss or theft. Organizations can prepare for e-discovery appropriately if they understand its importance, gain insight into key issues and relevant regulations, and create a sound strategy with the help of technology so that data is managed, preserved and produced properly.

Why is preparing for e-discovery important?
Every organization is obligated by law to retain and dispose of its data carefully so that it can respond to e-discovery requests quickly with little scope for human error, understand which data can and cannot be accessed, and keep track of a growing variety of content.

Benefits of E-discovery

There are many benefits to preparing for e-discovery; some of them include:

Prepare for an e-discovery request during litigation
Early preparedness for e-discovery can help an organization perform early case assessment and understand its legal position. It can also result in discovering evidence that gives it an advantage over the opposition. This can reduce legal fees and other expenses.

Reduces risk of lawsuits on the whole
If a strict e-discovery policy is followed in an organization, it can monitor employee behavior for actionable statements or activities, and reform and enforce corporate policies quickly.

Saves time and money
Preparing for e-discovery can save the time and effort of staff and legal counsel during an e-discovery request, resulting in a prompt response. It can also reduce costs such as legal counsel expenses, email and records management software, and storage, and provide a number of other savings.

Saves from unfavorable legal judgments, fines and bad press
Good e-discovery capabilities allow organizations to make prompt and accurate decisions, as they are equipped with better information. This increases the chances of a favorable judgment at trial, reduces legal costs and decreases the time required to resume normal business operations. It also protects against fines, damage to goodwill, negative press and other more dramatic outcomes.

DMS for E-discovery

A document management system allows organizations to prepare for e-discovery with the help of tools like document profiling (metadata preservation), version control, audit trails, security and document retention.
Docsvault offers three document management editions: Docsvault Small Business, Docsvault Enterprise and Docsvault Ultimate. While the SB edition offers a full range of document management features, the Enterprise edition combines the benefits of records management and web access with document management, and the Docsvault Ultimate edition comes with all the features of our document management software along with power-pack add-ons developed by Docsvault. Let us briefly look at the tools that can help companies prepare for e-discovery.

How to Prepare for E-discovery?

Prior to litigation

- Train staff to recognize electronic data such as files and communication for "potential litigation"
- Form an e-discovery records policy with stakeholders including IT, HR, Legal and regional managers
- Understand the know-how of the document management and retention plan
- Understand processes that might affect collection, tampering, preservation, and deletion of data
- Adopt a software system to ensure smooth document processing
- Create a data map to ascertain where data is stored and check whether it is accessible; classify data such that only authorized staff can access it
- Familiarize staff with the document management and retention plan
- Perform routine computer and IT checks to ensure you can take advantage of the Safe Harbor rule
- Standardize processes for repeatable operations like document collection, review, privacy and security issues, etc.

Reasonable Anticipation of Litigation or Known Litigation

- Implement your litigation hold policy
- Identify imminent spoliation concerns and take immediate action
- Determine scope of discovery, including file types, 3rd-party data sources, number of custodians, accessibility/inaccessibility of data, metadata, backup media and date ranges
- Send written notices to involved personnel
- Follow up orally or in e-mail at least quarterly with personnel
- Prepare for your meeting with opposing counsel after conferring with IT personnel
- Determine how the court prefers to handle e-discovery issues, i.e. local rules or standing orders
- Begin the budgeting process
- Determine scope, devices, accessibility of data, number of custodians and volume of data
- Determine the reasonableness and proportionality of what the discovery should cost in terms of the exposure of the case
- Implement collection of data, both paper and electronic
- Determine who will collect the data, i.e., staff, custodians, attorneys, and/or IT personnel
- Choose & process data
- Scan and code paper documents
- Review data
- Define a standardized review process for privacy, privilege, audit log, etc.
- Produce data in the form agreed upon. If there is no agreement, give opposing counsel notice of the form you intend to use for production, and give them time to request an alternate form
- Notify custodians that litigation has ended and the hold is removed. Instruct them to follow normal document retention procedures with regard to the data held. Notify them that some of the data may be pertinent to other legal holds and should be maintained per those hold instructions

Federal Rules for E-discovery

According to the amendments to the FRCP that went into effect on December 1, 2006, organizations must manage their data in such a way that it can be produced in a timely and complete manner when necessary, such as during legal discovery proceedings.
The key issues that brought about these amendments are:

- Electronic data is normally stored in greater volume than hard-copy documents
- Electronic data is dynamic and can be easily modified or spoiled
- Electronic data may become incomprehensible when separated from the systems that created it
- Electronic data contains non-apparent information, or metadata, that describes the context of the information and provides other useful and important information

The main issue during litigation is identifying the relevant data to be presented. DMS features like document metadata and folder sorting can help ease the process of understanding the universe of information required and responding to e-discovery requests.

Preservation is the critical step that ensures data is protected from spoliation and tampering. If spoliation occurs, the consequences can be expensive. DMS offers fine-grained security to protect data from spoliation.

During the collection phase, all relevant ESI is gathered from the various sources that contain it, including emails, backup tapes, file servers, desktops, laptops, employees' home computers and other sources. If a strict records management policy is followed with the help of a software system, this step becomes less expensive and time consuming. Collected data should also be processed into a) data relevant during litigation and b) data that is not relevant. A shared repository and a review-and-comment process can make this step easier.

The analysis phase involves different kinds of activities, such as determining exactly what the data means in the context of the litigation, developing summaries of relevant information, and determining the key issues on which to focus.

The production of data involves bringing the relevant data to any parties or systems that are concerned. It also includes the activities focused on delivering data in the appropriate form(s), including DVDs, CD-ROMs, paper, etc.
It is a recognized issue that some electronic data becomes incomprehensible outside the system where it was created. It is important to adopt software that does not change the format of data at any stage of the document process.

What should you do next?

While the above may make it seem easy to prepare for e-discovery, putting this information into action rather than leaving it as a theoretical exercise is easier said than done. Don't ignore this exercise on the assumption that it won't be needed in your organization; take the next step.

What does e-discovery mean for a particular organization?

The first step is to determine in which way e-discovery is important to a business or organization. Apart from law and regulatory compliance, there are many other factors, such as potential litigation or audits, that make it important for organizations to prepare for e-discovery. Since every organization, with its policies, business processes and technologies, is different, the way it prepares for e-discovery is different too. It is important to consider industry standards, budget and policy to justify future e-discovery efforts before forming any policy.

How do you form a data retention and deletion policy?

Every organization must set data retention and deletion schedules for all its data in order to meet current or potential e-discovery or other requests. Out of fear of deleting important data, most organizations retain more data than required. This can lead to higher storage expenses and more time spent analyzing, reviewing and presenting data when an e-discovery request arrives. Legal counsel can be helpful in forming a data retention and deletion policy.

What e-discovery tools are needed?

The next step is to adopt tools that enable organizations to prepare for e-discovery. These technological tools help by making data more accessible and easily reviewable early in the case cycle.
Tools like a document management system can help classify data from the time it is created (with the help of metadata properties) until the time of discovery (through fast search features).

How do you implement legal hold capability?

If a legal action is expected, or once one is in progress, the organization must immediately begin identifying and preserving all relevant data, such as all emails sent from staff to the individuals concerned and all data relating to the litigation. If properly prepared for e-discovery, organizations can immediately place a hold on data when requested by a court or regulator, or on the advice of legal counsel, and retain it for as long as necessary. An organization that is unable to place a hold on data when required can face a variety of negative outcomes, such as legal fines, bad publicity and loss of face.

Docsvault document management software provides records management features such as data retention and deletion. You can even put data on hold during anticipated or ongoing litigation with the help of this feature.

Being prepared for e-discovery is no longer optional but mandatory. It allows an organization to proactively manage legal actions, regulatory audits and other such activities and respond to them quickly. An organization has many obligations in preparing for e-discovery, including becoming completely aware of all past, current and potential information retention and deletion obligations; effectively managing and retaining documents and data; implementing software technology that can ease processes such as preservation, deletion, production and retention; being prepared for litigation; and taking steps to minimize the risk of non-compliance with e-discovery obligations.
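A legal hold interacts with retention in a simple, mechanical way: records under hold are exempt from retention-driven deletion. The sketch below illustrates that rule in Python; the record structure and function names are hypothetical, invented for illustration, and are not Docsvault's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    """A hypothetical document record with age and any active legal holds."""
    name: str
    age_days: int
    holds: set = field(default_factory=set)

def purge_candidates(records, retention_days):
    """Return records past retention that are NOT under any legal hold."""
    return [r for r in records if r.age_days > retention_days and not r.holds]

docs = [
    Record("invoice-2014.pdf", age_days=3000),                      # past retention
    Record("email-thread.eml", age_days=3000, holds={"case-123"}),  # held: kept
    Record("draft-memo.docx", age_days=10),                         # still current
]
print([r.name for r in purge_candidates(docs, retention_days=365)])
# ['invoice-2014.pdf']
```

Only the record that is both past its retention period and free of holds is eligible for deletion; releasing the hold (clearing the `holds` set) would return the email thread to the normal retention schedule, as the white paper describes.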
Docsvault helps organizations comply with e-discovery obligations through features such as comprehensive full-text search, document profiling, email integration, version control and audit trails, which ease e-discovery processes such as preservation, analysis, production and presentation.
There are many different background noises known to boost productivity or to help you fall asleep faster. Here's how to find the right one for you.

When it comes to working, some of us prefer absolute silence, while others are more productive when background noise is present. Whether it's the sounds of a coffee shop, leaves rustling, or just "noise," technology has made it easier to tailor our preferences. No matter what environment you're in, you can now generate a specific type of noise through an app, or simply by searching online for specific sounds. But how do you know where to start?

We've all heard of "white noise," but there are actually many more types of broadband sounds, including pink, white, red, and brown noise. The answer to why each has its own characteristics and different purpose is complex, so we asked musician and audio engineer K. Marie Kim to break it down for us. Kim has worked on tour or in studio sessions with artists like Lorde, Rosanne Cash, and Dead Kennedys. She is a monitors engineer at various music venues in New York City, and was recently touring as keyboardist for the artist Mitski.

What are the different types of broadband noises, and why are they assigned different "colors"?

Kim: There are about six defined colors of broadband noise. Pink noise and white noise are used most commonly in audio and electronic applications. I've also seen some references to the use of red noise (also known as Brownian noise, after Robert Brown) for sleep aid or masking usage. The color name assignments are loosely correlated to colors of light: where white light contains all visible wavelengths, red light has higher energy at lower-frequency waves, and pink is the in-between.

How would you describe what you hear between the different types of broadband noise?

Kim: It helps to know that humans perceive varying frequencies as varying pitch. It's especially easy to tell them apart if you hear them closely in series.
Pink noise is often heard as being even across all frequencies (it can be somewhat comparable to the fullness of a commercial pop or rock record, or perhaps closer to a dense heavy metal or noise record), mostly because human ears are generally less sensitive to low frequencies. White noise sounds a little hissy or thin, like a small faucet running at high pressure. Red or Brownian noise, on the other hand, sounds rumbly and lacks brightness, like a herd of animals moving along.

In terms of productivity, is having these types of noise better than having just silence?

Kim: There actually isn't enough research to fully support this claim. There have been some studies that show there may be a correlation between noise being present during memorization and recall tasks, but only certain people performed better with significantly loud noise compared to silence. One possible explanation is that our brain associates the noise stimuli with the memorization task, thus enabling more robust recall when the same stimuli are present. However, the likely explanation of why some people think they perform better with noise present is that it masks other distracting noises in the work environment. Even our quietest environment will have distracting sounds (like cute bird chirps!), and music is often an emotive and attention-sucking stimulus. So out of those options, the information-vacant stream of broadband noise can help reduce distractors and improve productivity.

Are different frequencies better for different situations, i.e. sleeping or concentration, and what's the best way to implement them?

Kim: If you have unwanted sounds you want to mask, using broadband noise on headphones can be helpful. Using pink or even red noise may prove more helpful than white noise, since lower frequencies can mask higher frequencies, but not as much the other way around.
Broadband noise has also been shown to help you fall asleep faster: not only does it mask environmental sounds, but at the right volume, the randomly fluctuating nature of pink or white noise also helps increase brainwave synchronization, which quickens sleep onset.

Some apps and sound machines use nature sounds. Is this as effective for concentration? Do sounds in nature have different frequencies?

Kim: Actually, a lot of these nature sound models create broadband noises. Rainfall, rustling leaves, and ocean waves are great examples of broadband nature noises that are similar to pink noise. A big difference is that white/pink/red noise are constant, randomized, mathematically generated noises, while sounds in nature vary over time in power density distribution. In other words, nature sounds have audible movement over a longer time. Depending on the listener's preference, they may find constant generated noise best for concentrating, or more natural, gradually modulating noise more effective; some may prefer layers of both kinds of noises (like me!) or a synthesized version meant to represent something in between the two worlds.

Using noise to help us calm down, focus, or fall asleep quicker is an easy and safe application, but always be aware of the temporary and permanent hearing damage that can occur from prolonged exposure to loud sound pressure levels.
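The "mathematically generated" noises Kim mentions can be sketched in a few lines of code. The snippet below, standard library only, generates one second of white noise and integrates it into Brownian ("red") noise; the sample rate and the normalization step are illustrative assumptions, not details from the interview.

```python
import random
from itertools import accumulate

random.seed(0)
n = 48_000  # one second of samples at a 48 kHz sample rate (an assumption)

# White noise: independent Gaussian samples; power is spread evenly
# across all frequencies, which is why it sounds "hissy."
white = [random.gauss(0.0, 1.0) for _ in range(n)]

# Brownian ("red") noise: a running sum (integral) of white noise; power
# concentrates at low frequencies, giving the "rumbly" character described.
brown = list(accumulate(white))
peak = max(abs(x) for x in brown)
brown = [x / peak for x in brown]  # normalize to [-1, 1] before playback

print(len(white), len(brown))  # 48000 48000
```

Pink noise sits between the two (power falling off at 3 dB per octave) and needs a filtering step, which is why generators usually approximate it rather than derive it from a simple sum.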
Document Management is an Art

Document Management Do's and Don'ts

Documentation is intellectual property that, when properly managed, can give businesses an edge and a competitive advantage even when producing or delivering uncompetitive products or services. Documents serve as historical references, guiding processes, valued communications and trusted reporting. On the other hand, and in many instances, documentation can be nothing more than a collection of inconsistent, irrelevant and inapplicable sets of records. So, what should organizations do to maintain healthy documentation of their activities? Here are some key observations.

Have a Clear Purpose

Each document produced, or that needs to be produced, should have a well-defined purpose. Key questions should be asked regarding what role a document serves, who its key audience is, what decisions are made from it, and what expectations the audience has of it. Often, after answering these questions, different documents are found to have the same purpose and in turn can be combined to reduce redundancy and duplication.

Document templates should always start with a brief description of what the document is for. Aside from defining the format of a document, templates should include references for guiding the content as well. Each section or header should include directions on how to complete it and generate meaningful content. Sometimes having a list of questions to be answered for each section can be very helpful in creating consistency.

Know the Building Blocks

It is important to understand that a document is a view of a collection of records at a certain date and time. That means that its content can come, or be fed, from other records. At the same time, portions of a document can be needed in another document. This is essential to note because some documents can be automatically created and formatted by connecting different but unique sources.
For example, let's assume a statement of work needs to be created that includes project objectives, a list of deliverables, a timeline and cost. The project objectives are probably not being defined for the first time in the statement of work; they were most likely defined in a previously authored document like a creative brief. Thus, that portion of the creative brief needs to make its way into the statement of work. Copying and pasting is one option, but it will lead to inconsistent information and discrepancies if both documents continue to develop separately.

Format and Appeal are Key

There are different audiences for different documents, so proper formatting should take into account legibility and ease of navigating content. It is OK sometimes to have multiple variations of document content for different settings, as long as the content is not being duplicated but rather fed into both documents from a single repository. A status report for top management should be formatted differently than a status report for peers or the project team. Yes, the first is a summarized view and a bit higher level, but key information like milestones, due dates and overall status should be the same and come from the same data source.

Keep Ease of Search and Accessibility Top of Mind

There is nothing more frustrating than knowing that a document you need to review or reference exists but is nowhere to be found, especially if you're under a time crunch. That happens due to either poor filing or poor naming of the document. A strict naming convention and versioning scheme must be followed. What I mean by this is not simply stating that a document should have a publishing date as part of its name, but rather specifically noting what the format of that name should be.

A Document Management Application May Not Be the Answer
There are hundreds if not thousands of document management systems and applications. None of them can do magic without a predefined plan for how documents should be managed, and without setting standards for people to follow regardless of the application selected. A document management application is a tool that does what you tell it to (most of the time). Investing in one without knowing the agency's overall document management needs can be disadvantageous. It can give authors and owners of documents a way out of doing the due diligence to properly store a document. Uploading a document to the system is not sufficient without common knowledge of where to upload each document, how to name it and how to manage archiving.

Archiving is Not for Saving Older Versions

Archiving is important, but what's more important is knowing what to archive. You never want your archive locations, folders, directories or repositories to be a dumping ground for purposeless documents, or for documents people don't know where else to place. It is very easy and convenient to move version 1 of a document to the archive after it has been updated multiple times, leading to a version 5. But is this really right? Why keep version 1? Going back to purpose again: what purpose does version 1 serve now? I can understand if there are two different variations of the document, meaning both have different but equally important information, e.g. SOW A has options X and Y where SOW B covers options X and Z. However, if version 5 now encompasses everything previously recorded in versions 1 through 4, then those previous versions are no longer needed. Maybe keep just one previous version, but the older ones should be purged and not archived.
I say all that to show that document management is an art - ok, maybe that's a stretch - but it can be, or will need, that aspect. It is clear that any document solution requires a proper naming convention, storage repository, archiving mechanism and search capability. However, when it comes to controlling document clutter, avoiding redundancies and maintaining quality content, it calls for reaching beyond the traditional document management guidelines to find scalable, applicable methods that suit one's industry and organizational culture.
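The naming-convention advice above is easiest to enforce when the convention is written down as a machine-checkable pattern. The sketch below uses a made-up convention (project_doctype_vMAJOR.MINOR_YYYY-MM-DD.ext) purely for illustration; any real convention would come from the organization's own policy.

```python
import re

# Hypothetical convention, for illustration only:
#   <project>_<doctype>_v<major>.<minor>_<YYYY-MM-DD>.<ext>
#   e.g. acme_sow_v2.1_2024-03-07.docx
NAME_RE = re.compile(
    r"^(?P<project>[a-z0-9]+)_"
    r"(?P<doctype>[a-z]+)_"
    r"v(?P<major>\d+)\.(?P<minor>\d+)_"
    r"(?P<date>\d{4}-\d{2}-\d{2})\."
    r"(?P<ext>docx|pdf|xlsx)$"
)

def check_name(filename: str) -> bool:
    """Return True if the filename follows the convention exactly."""
    return NAME_RE.match(filename) is not None

print(check_name("acme_sow_v2.1_2024-03-07.docx"))  # True
print(check_name("SOW final FINAL(2).docx"))        # False
```

A check like this can run as a pre-upload hook or a periodic audit, so non-conforming names are caught when the document is filed rather than when someone is searching for it under a time crunch.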
Celebrities are in a unique position to tell us about the dangers of poor cybersecurity. They have the power to influence many people and make them aware of what they should do to protect themselves. Essentially, they have the power to save lives. There are many celebrities in Hollywood who have spoken about the importance of cybersecurity.

Hack schools. Or don't, because it is illegal and causes huge issues for schools, colleges and universities and the people involved. Schools and universities are a major target for hackers because of their lack of cybersecurity measures, and this is not just a problem in the U.S.

Cyber and reputation risk from deepfakes, AI, AR and VR is growing rapidly. We are living in the age of AI (artificial intelligence) combined with MI (machine intelligence), where everything is becoming digital and automated. The problem is that this also opens up a lot of doors for cyber attacks.

Cyber reputation risk from disinformation is growing globally. Disinformation is a type of misinformation that is spread deliberately to deceive, mislead, or confuse.

Attack surface areas explode with risk. In today's world, we are surrounded by devices that are connected to the internet, which means there are more devices for hackers to use to get into our networks. This can lead to a significant increase in risk to your company's reputation and assets. Devices are also becoming more intelligent; for example, voice-activated systems like Amazon's Alexa and Google Home can be either a pro or a con depending on the organization.

In this digital age, it is important for people to understand how these tools affect their personal and business reputation. While some brands have taken the right steps to create social media policies and guidelines, a lot of companies don't take the time to do this.

Quantum computing is a new technology that will change the world.
It will be a game changer in cybersecurity and will also change how we think about privacy and reputation. Quantum computing is the next big technology revolution, and it will create new opportunities in cybersecurity, reliability and anonymity.

It is important to know the basics of how your phone works and what you can do to protect it. Some key advice: update your software; use a password and update it regularly; use two-factor or multifactor authentication when available; and use an antivirus program (the more up to date, the better) to protect against malware infections, viruses, and phishing attacks.

Getting trapped in someone else's version of your reputation can be very scary, especially in today's always-on society. Even worse is when a reputational attack affects you and your family's wealth protection and growth. It can lead to cybersecurity attacks and to feeling very vulnerable. Reputation and wealth management combine. We all have a reputation; it is up to us to define it, scale it and defend it.
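The two-factor advice above can be made concrete. The sketch below derives a time-based one-time password (TOTP) per RFC 6238 using only the standard library; it is an illustration of how authenticator codes work, not a substitute for an audited authenticator app or security library.

```python
import hmac
import hashlib
import struct
import time

def totp(secret: bytes, for_time=None, step=30, digits=6) -> str:
    """Minimal RFC 6238 TOTP (HMAC-SHA-1) sketch.

    The shared secret is provisioned once (usually via a QR code); after
    that, server and phone derive the same short-lived code independently.
    """
    t = time.time() if for_time is None else for_time
    counter = int(t // step)                          # 30-second time window
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # RFC 4226 dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret, T=59 -> "94287082"
print(totp(b"12345678901234567890", for_time=59, digits=8))  # 94287082
```

Because the code depends only on the shared secret and the current time window, a phished password alone is not enough to log in, which is what makes this second factor effective.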
[TTL] field in IPv4). Because there is no checksum in the IPv6 header, the router can decrement the field without recomputing the checksum; on IPv4 routers the recomputation costs processing time.

Source Address: This field has 16 octets or 128 bits. It identifies the source of the packet.

Destination Address: This field has 16 octets or 128 bits. It identifies the destination of the packet.

Extension Headers: The extension headers, if any, and the data portion of the packet follow the eight fields. The number of extension headers is not fixed, so the total length of the extension header chain is variable.

Cisco CCNP ROUTE IPv6 Header

The IPv6 header has 40 octets, in contrast to the 20 octets in IPv4. IPv6 has a smaller number of fields, and the header is 64-bit aligned to enable fast processing by current processors. Address fields are four times larger than in IPv4. When multiple extension headers are used in the same packet, the order of the headers should be as follows:

1. IPv6 header: This is the basic header described in the previous figure.
2. Hop-by-hop options header: When this header is used for the router alert (Resource Reservation Protocol [RSVP] and Multicast Listener Discovery version 1 [MLDv1]) and the jumbogram, this header (value = 0) is processed by all hops in the path of a packet. When present, the hop-by-hop options header always follows immediately after the basic IPv6 packet header.
3. Destination options header (when the routing header is used): This header (value = 60) can follow any hop-by-hop options header, in which case the destination options header is processed at the final destination and also at each visited address specified by a routing header. Alternatively, the destination options header can follow any Encapsulating Security Payload (ESP) header, in which case the destination options header is processed only at the final destination. For example, mobile IP uses this header.
4. Routing header: This header (value = 43) is used for source routing and mobile IPv6.
5. Fragment header: This header is used when a source must fragment a packet that is larger than the MTU for the path between itself and a destination device. The fragment header is used in each fragmented packet.
6. Authentication header and Encapsulating Security Payload header: The authentication header (value = 51) and the ESP header (value = 50) are used within IPsec to provide authentication, integrity, and confidentiality of a packet. These headers are identical for both IPv4 and IPv6.
7. Upper-layer header: The upper-layer (transport) headers are the typical headers used inside a packet to transport the data. The two main transport protocols are TCP (value = 6) and UDP (value = 17).

Cisco CCNP ROUTE IPv6 Address Structure

IPv6 is the solution to many of the limitations in addressing that are inherent to IPv4. Why aren't we all using it yet? Well, there would be the overwhelming task of readdressing networks and upgrading applications. IPv6 quadruples the number of address bits available in IPv4, providing 128 bits for addressing versus IPv4's 32 bits. IPv6 addresses are represented in hexadecimal rather than the dotted-decimal format of IPv4: colons separate eight 16-bit hex fields, which are portions of the 128-bit address. Here are the rules that govern the IPv6 address format:

- Hex digits are not case sensitive
- Leading 0s in any 16-bit field can be dropped
- A pair of colons (::) indicates that successive 16-bit fields of 0s have been dropped. It can represent any number of 0 fields, so FF00:0000:0000:0000:0000:0000:0000:00AB could also be written as FF00::AB. Only one pair of colons is allowed in any address, because otherwise the process would not be able to tell how many 0s should be replaced in each location.
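These formatting rules can be verified with Python's standard ipaddress module, which always emits the canonical compressed form defined in RFC 5952 (lowercase hex, leading zeros dropped, longest run of zero fields collapsed to "::"):

```python
import ipaddress

# Any textual form parses to the same 128-bit address; .compressed applies
# the rules above, and .exploded restores the full eight 16-bit fields.
a = ipaddress.IPv6Address("FF00:0000:0000:0000:0000:0000:0000:00AB")
b = ipaddress.IPv6Address("1026:0000:1999:0000:0000:0AC0:1016:2002")

print(a.compressed)  # ff00::ab
print(b.compressed)  # 1026:0:1999::ac0:1016:2002
print(a.exploded)    # ff00:0000:0000:0000:0000:0000:0000:00ab
```

Note that when an address contains more than one run of zero fields, the canonical form collapses only the longest run (here fields 4-5 of the second address), leaving the single zero field written out as "0".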
Given the above rules, let's now evaluate the IPv6 address:

1026:0000:1999:0000:0000:0AC0:1016:2002

Following our stated rules, this address can also be written as:

1026:0:1999::AC0:1016:2002

Cisco CCNP ROUTE IPv6 Address Scope Types

Broadcasting in IPv4 results in a number of problems. Broadcasting generates interrupts in every computer on the network and, in some cases, triggers malfunctions that can completely halt an entire network. This disastrous network event is known as a "broadcast storm." In IPv6, broadcasting does not exist. Broadcasts are replaced by multicasts and anycasts.

Multicast enables efficient network operation by using a number of functionally specific multicast groups to send requests to a limited number of computers on the network. The multicast groups prevent most of the problems that are related to broadcast storms in IPv4. The range of multicast addresses in IPv6 is larger than in IPv4. For the foreseeable future, allocation of multicast groups is not being limited.

IPv6 also defines a new type of address called an anycast address. An anycast address identifies a set of devices or nodes, and therefore identifies multiple interfaces. A packet sent to an anycast address is delivered to the closest interface—as defined by the routing protocols in use—identified by the anycast address. Anycast addresses are syntactically indistinguishable from global unicast addresses because anycast addresses are allocated from the global unicast address space.

Cisco CCNP ROUTE IPv6 Unicast Addressing

The IPv6 global unicast address is the equivalent of the IPv4 global unicast address. A global unicast address is an IPv6 address from the global unicast prefix. The structure of global unicast addresses enables aggregation of routing prefixes, which limits the number of routing table entries in the global routing table.
Global unicast addresses used on links are aggregated upward through organizations and eventually to the Internet service providers (ISPs). Global unicast addresses are defined by a global routing prefix, a subnet ID, and an interface ID. The IPv6 unicast address space encompasses the entire IPv6 address range, with the exception of FF00::/8 (1111 1111), which is used for multicast addresses.

The current global unicast address assignment by the Internet Assigned Numbers Authority (IANA) uses the range of addresses that start with binary value 001 (2000::/3), which is one-eighth of the total IPv6 address space and is the largest block of assigned addresses. Addresses with a prefix of 2000::/3 (001) through E000::/3 (111), with the exception of the FF00::/8 (1111 1111) multicast addresses, are required to have 64-bit interface identifiers in the extended universal identifier (EUI)-64 format. The IANA is allocating the IPv6 address space in the ranges of 2001::/16 to the registries.

The global unicast address typically consists of a 48-bit global routing prefix and a 16-bit subnet ID. In the now-obsolete RFC 2374, An IPv6 Aggregatable Global Unicast Address Format, the global routing prefix included two other hierarchically structured fields called Top-Level Aggregator and Next-Level Aggregator. Because these fields were policy-based, the Internet Engineering Task Force (IETF) decided to remove them from the RFCs. However, some existing IPv6 networks deployed in the early days might still be using networks based on the older architecture. A 16-bit subnet field called Subnet ID could be used by individual organizations to create their own local addressing hierarchy and to identify subnets. This field allows an organization to use up to 65,536 individual subnets. (RFC 2374 has since been replaced by RFC 3587, IPv6 Global Unicast Address Format.)

Cisco CCNP ROUTE IPv6 Multicast

The multicast addresses FF00:: to FF0F:: are reserved.
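The 48-bit routing prefix / 16-bit subnet ID / 64-bit interface ID split is easy to see in code. The sketch below is an illustration, not from the study guide; it uses a made-up organization prefix under the 2001:db8::/32 documentation range and enumerates the /64 subnets a /48 yields with Python's `ipaddress` module:

```python
import ipaddress

# A hypothetical organization's 48-bit global routing prefix.
site = ipaddress.IPv6Network("2001:db8:acad::/48")

# Carving the 16-bit subnet ID out of the /48 yields /64 subnets,
# each leaving a 64-bit interface ID portion for hosts.
subnets = list(site.subnets(new_prefix=64))
print(len(subnets))   # 2**16 = 65536 possible subnets
print(subnets[0])     # 2001:db8:acad::/64
print(subnets[-1])    # 2001:db8:acad:ffff::/64
```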
Within that range, the following are some examples of assigned addresses (there are many more assignments made; assignments are tracked by the Internet Assigned Numbers Authority [IANA]):

FF02::1 — All nodes on link (link-local scope)

FF02::2 — All routers on link

FF02::9 — All Routing Information Protocol (RIP) routers on link

FF02::1:FFXX:XXXX — Solicited-node multicast on link, where XX:XXXX is the rightmost 24 bits of the corresponding unicast or anycast address of the node. (Neighbor solicitation messages are sent on a local link when a node wants to determine the link-layer address of another node on the same local link, similar to Address Resolution Protocol [ARP] in IPv4.)

FF05::101 — All Network Time Protocol (NTP) servers in the site (site-local scope)

The site-local multicast scope has an administratively assigned radius and has no direct correlation to the (now deprecated) site-local unicast prefix of FEC0::/10.
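The solicited-node mapping described above is purely bitwise: keep the rightmost 24 bits of the unicast or anycast address and prepend the fixed FF02::1:FF00:0/104 prefix. Here is an illustrative sketch (the helper name is ours, not a standard API) in Python:

```python
import ipaddress

SOLICITED_NODE_PREFIX = int(ipaddress.IPv6Address("ff02::1:ff00:0"))

def solicited_node(unicast: str) -> ipaddress.IPv6Address:
    """Map a unicast/anycast address to its solicited-node multicast
    address (FF02::1:FFXX:XXXX) by keeping its rightmost 24 bits."""
    low24 = int(ipaddress.IPv6Address(unicast)) & 0xFFFFFF
    return ipaddress.IPv6Address(SOLICITED_NODE_PREFIX | low24)

print(solicited_node("2001:db8::aabb:ccdd"))  # ff02::1:ffbb:ccdd
```

This is the same computation a node performs before joining its solicited-node group for neighbor discovery.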
A clinical trial of more than 10,000 heart attack patients reported today supports a novel way to protect them from a stroke or a second attack: with drugs that stop inflammation. The approach has been advanced by some scientists for years, but this is the first trial to conclusively show that it works. Cardiologists hailed it as vindication for the heart attack–inflammation link, which hadn't been proved in people. The effect on future heart attacks is modest, but even skeptics were swayed. "I have to congratulate" those running this trial, says Terje Pedersen, a cardiologist at Oslo University who had been skeptical in the past.

Called the Canakinumab Anti-inflammatory Thrombosis Outcomes Study (CANTOS) and funded by the drug giant Novartis, the trial also found fewer cases of lung cancer in those on the treatment, rekindling interest in basic research findings hinting that the same inflammatory pathway may initiate or spur the growth of such tumors. Nearly 2% of people in the placebo group were diagnosed with lung cancer during the study compared with 1% on the treatment. The actual disparity in number of cases between the two groups was small, with 129 lung cancers in all, and the trial wasn't set up to study that disease.

CANTOS grew out of years of ups and downs in the heart disease field, as scientists tried to trace the role of inflammation, a complex cascade of immune signals and various white blood cells that occurs in response to wounds, infections, and more. One detective was cardiologist Peter Libby at Brigham and Women's Hospital in Boston, who determined that various molecules recruit macrophages and other immune cells to blood vessels. That leads to inflammation and, eventually, arterial plaques. Back in the 1980s, when Libby's work began, "I was a lonely voice" advocating a connection, he says.
Now, researchers believe "the whole atherosclerosis process begins as an inflammatory event," explains Mark Creager, director of the heart and vascular center at Dartmouth-Hitchcock Medical Center in Lebanon, New Hampshire, who wasn't involved in the study. But was inflammation relevant to the triggering of actual heart attack—which often occurs when an arterial plaque ruptures and blocks an artery—not just the long process leading up to it?

While Libby worked away in his lab, another cardiologist at Brigham and Women's, Paul Ridker, began testing this premise in people. Ridker showed that high levels of inflammation molecules in a person's blood can help predict a heart attack. One such marker is called C-reactive protein (CRP). In patients, Ridker found that statin drugs, widely known to prevent heart attacks by lowering cholesterol, lower CRP levels, too, suggesting they blunt inflammation—something Libby had seen in animals. But neither physician could promise that this anti-inflammatory power had anything to do with statins' heart protection.

Meanwhile, results from these and other animal and observational studies appeared to clash with how anti-inflammatory drugs affected human hearts. In 2004, a large clinical trial found that arthritis sufferers taking the nonsteroidal anti-inflammatory drug (NSAID) Vioxx for their condition had double the normal risk of a heart attack, and the drug was yanked from the market. Steroids, which are potent anti-inflammatory drugs, also didn't prevent heart attacks or strokes in people taking them for other reasons, several studies showed. Ridker and Libby speculated that for an anti-inflammatory to work, it needed to be much more specific; NSAIDs and steroids have broad effects all over the body, and NSAIDs can spur inflammation along with blunting it.
The pair focused on the monoclonal antibody canakinumab, already approved for juvenile arthritis, because it selectively targets a molecule called IL-1β, which is part of the pathway driving atherosclerosis. Together, they persuaded Novartis to support a study. The heart attack patients who enrolled all had high CRP levels and were given the best treatments available, including aggressive statin therapy. Half also received four infusions of canakinumab each year, at one of three different doses. And in the end, those infusions made a difference, if a modest one. People receiving the placebo had about a 4.5% risk of a second cardiovascular event after a year versus 3.86% for those on the medium dose of the drug. This meant they were about 15% less likely to suffer a heart attack or stroke or die from cardiovascular disease. Over about 3.5 years, 535 of 3344 people in the placebo group suffered such an “event,” compared with 642 of 4547 getting the medium and high doses. Participants were also about 30% less likely to need a stent or cardiac bypass surgery if they got canakinumab, suggesting that damping down inflammation helps arteries stay healthy. Ridker and others say that even a 15% reduction is exciting, because it came on top of the already significant benefits from the best standard treatment. They’re especially happy with the larger drop in invasive treatments. “I’m pinching myself,” Ridker says. This is “exactly what we had hoped for the last 30 years that we’d get. … It makes absolutely clear that if you lower inflammation, you lower risk.” The results were presented this morning at the European Society of Cardiology meeting in Barcelona, Spain, with the heart data appearing simultaneously in The New England Journal of Medicine and the cancer analysis in The Lancet. Where to go from here is complicated. For one, canakinumab is expensive, at about $16,000 per infusion. 
Cardiologists agree that Novartis will have to lower the drug’s price to make it more competitive with existing heart treatments. Doctors will also need to carefully balance the risks with benefits. About 1% of those on canakinumab died from an infection during the multiyear trial, nearly double the rate of infection deaths on placebo; those who died were often older and had diabetes. Asked whether he’d offer this therapy to people with high CRP who’ve never had a heart attack, Ridker was unequivocal: No way. “I think this is going to be a therapy for very high risk patients,” he says. More information may come from another large trial Ridker is running. This one is federally funded and is testing the much cheaper but less targeted anti-inflammatory methotrexate in a similar population. Results will be available in a couple of years. The lung cancer piece is murkier, if provocative. “There needs to be more work in this area,” says Colin Baigent, an epidemiologist at the University of Oxford in the United Kingdom, who sees the observation as fodder for a future trial. “It’s not convincing on its own.” Baigent is most excited that CANTOS firmly backs the notion that blocking inflammation can prevent heart disease. This first salvo is now sure to be followed by many more, he and others say. “It’s a gateway to a very wide variety of therapies that are going to be developed,” says Steven Nissen, a cardiologist at the Cleveland Clinic in Ohio who has long sided with Ridker. “This is as big as anything we’ve seen in a while.”
Under the mandatory access control model, also known as MAC, both users and resources are assigned security labels. To access a resource, the user must have a security clearance matching or exceeding the resource's security classification. Unlike under discretionary access control, users under mandatory access control cannot readily hand out access at their discretion. Instead, access is set by a high-level administrator. Under many MAC systems, obtaining a new security clearance often requires the approval of multiple administrators and security professionals.

Mandatory access control is a highly secure access control model, making it the model of choice for matters of national security. However, it is highly bureaucratic by nature, and can be burdensome to maintain. Though it can be absolutely worth it to protect critical assets, its inflexibility makes mandatory access control a poor fit for many business applications.

How Mandatory Access Control Works

Mandatory access control relies on a system of security labels. Every resource under MAC has a security classification, such as Confidential, Secret, and Top Secret. Likewise, every user has one or more security clearances. To access a given resource, the user must have a clearance matching or exceeding the resource's classification. So if Greg wants to access a Secret file on the Hoover Dam, he would need to have a Secret or Top Secret security clearance on that topic. These security labels tend to be fairly specific. Greg's Top Secret clearance for the Hoover Dam would not grant him access to the nuclear plant in Poughkeepsie. Instead, he would have to apply for an additional security clearance to access resources pertaining to the Poughkeepsie Nuclear Plant.
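The check at the heart of MAC is simple to express in code. Below is an illustrative sketch (ours, not any particular MAC implementation): a read is allowed only if the user's clearance level meets or exceeds the resource's classification and the user holds every topic-specific compartment label the resource requires:

```python
from enum import IntEnum

class Level(IntEnum):
    CONFIDENTIAL = 1
    SECRET = 2
    TOP_SECRET = 3

def can_read(clearance: Level, held: set, classification: Level, needed: set) -> bool:
    """MAC read check: the clearance must meet or exceed the classification,
    and the user must hold every compartment the resource requires."""
    return clearance >= classification and needed <= held

# Greg's Top Secret clearance is scoped to the Hoover Dam compartment,
# so it does not extend to the Poughkeepsie plant.
print(can_read(Level.TOP_SECRET, {"hoover_dam"}, Level.SECRET, {"hoover_dam"}))          # True
print(can_read(Level.TOP_SECRET, {"hoover_dam"}, Level.SECRET, {"poughkeepsie_plant"}))  # False
```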
These national security designations each have a clear definition, as defined by the Code of Federal Regulations:

Top Secret refers to that national security information which requires the highest degree of protection, and shall be applied only to such information as the unauthorized disclosure of which could reasonably be expected to cause exceptionally grave damage to the national security. Examples of exceptionally grave damage include armed hostilities against the United States or its allies, disruption of foreign relations vitally affecting the national security, intelligence sources and methods, and the compromise of vital national defense plans or complex cryptologic and communications systems. This classification shall be used with the utmost restraint.

Secret refers to that national security information or material which requires a substantial degree of protection, and shall be applied only to such information as the unauthorized disclosure of which could reasonably be expected to cause serious damage to the national security. Examples of serious damage include disruption of foreign relations significantly affecting the national security, significant impairment of a program or policy directly related to the national security, and revelation of significant military plans or intelligence operations. This classification shall be used sparingly.

Confidential refers to other national security information which requires protection, and shall be applied only to such information as the unauthorized disclosure of which could reasonably be expected to cause identifiable damage to the national security.

Any resources with no security classification would be considered unclassified, and would be available to the public. Note that 'unclassified' is not itself a security label; rather, it is the absence of one. A resource cannot be assigned an unclassified label. But by being stripped of its security label, it becomes unclassified.
Security classifications can change over time – in fact, they're designed to change. All classified documents undergo an automatic classification review after 25 years, after which most documents are declassified. There are nine exemptions that can prevent a document from being declassified. But at the 50-year mark, only two of these exemptions remain valid, and at the 75-year mark, a document can only remain classified via special permission.

The Need-to-Know Principle

To ensure maximum security, mandatory access control often goes hand-in-hand with the need-to-know principle. This rule holds that users should only have access to the resources they need to do their job. To access something under a strict MAC system, you would need not only the right clearance, but also a clear justification as to why you need to access the resource.

Under mandatory access control, obtaining a new security clearance often requires multiple levels of approval. To obtain a new security clearance on the Poughkeepsie plant, for instance, Greg would ask a security officer, who would then submit a request to a higher-up official. This official would then submit their approval to an IT officer, who would then put the new clearance into effect. Even with those layers of approval, Greg would still have to provide a need-to-know justification each time he wanted to access classified resources pertaining to the Poughkeepsie plant.

As you can see, mandatory access control demands a great deal of bureaucracy. While it's worth it to protect matters of national security, all this administrative upkeep can make MAC impractical for most business uses.

Commercial Security Labels

When businesses implement mandatory access control, they often classify data based on the following levels of access:

- Internal information is open to employees. This might include company newsletters or announcements.
At companies with a high degree of transparency, it can even include information such as detailed revenue breakdowns. Though this information is not publicly available, it would not cause tangible harm to the company if made public.

- Confidential or Sensitive information requires a specific authorization to access. This can include information such as company strategy, product plans, and any dealings that have not yet been announced to the company at large. This information could cause serious damage to the company if made public.

- Restricted or Highly Sensitive information would cause severe damage to the company if made public. This includes personally identifiable information such as Social Security numbers and credit card numbers, the exposure of which would present a serious legal risk to the company.

Alternatives to Mandatory Access Control

Mandatory access control comes with some real strengths and weaknesses. It's the most secure access control model, which is why it is the method of choice for sensitive government matters. But it's also a very involved and bureaucratic system, making it a poor fit for many business uses.

More frequently, businesses will use the more flexible discretionary access control model. Under this system, every resource has an owner, who can then give out access at their discretion. Though this model is very flexible, it can often be insecure if not implemented correctly. It can also get convoluted as it scales – it's much easier to manage a company with 20 employees than one with 1,000, especially when each of those employees might be the owner of specific resources.

Many businesses use role-based access control. This model allows a company to group users and then set access based on those groups or roles. An employee in the marketing group, for instance, would have access to the resources they need to accomplish their work in marketing.

These systems are not mutually exclusive.
It might make sense to implement discretionary access control across a company, for instance, and then layer in mandatory access control to protect the most sensitive assets, such as customers’ personal information. The Windows operating system does this. Though Windows operates on a foundation of discretionary access, the operating system itself and its security features are protected under a system of mandatory access control. By implementing an extra layer of security around these key areas, Microsoft was able to seriously reduce the number of malware attacks happening in Windows, shoring up a critical vulnerability through a combination of access control models.
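A layered scheme like this can be sketched in a few lines. The example below is illustrative only (the resource names, grant table, and label names are invented, and this is not how Windows actually implements its integrity controls): a discretionary owner/grant check runs first, and a mandatory label check is layered on top for the most sensitive assets:

```python
# Hypothetical data: discretionary ownership and grants, plus MAC labels
# on the most sensitive assets.
OWNERS = {"newsletter": "alice", "customer_records": "bob"}
GRANTS = {("alice", "newsletter"), ("bob", "customer_records"),
          ("carol", "customer_records")}          # owner-granted access (DAC)
MAC_LABEL = {"customer_records": "restricted"}    # label-protected assets (MAC)

def allowed(user: str, labels: set, resource: str) -> bool:
    # DAC layer: the user must be the owner or hold a grant.
    if (user, resource) not in GRANTS and OWNERS.get(resource) != user:
        return False
    # MAC layer: sensitive resources additionally require the right label.
    required = MAC_LABEL.get(resource)
    return required is None or required in labels

print(allowed("alice", set(), "newsletter"))                 # True
print(allowed("carol", set(), "customer_records"))           # False: grant, no label
print(allowed("carol", {"restricted"}, "customer_records"))  # True
```

The point of the design is that a discretionary grant alone is never enough for a labeled resource; the mandatory check cannot be handed out at an owner's discretion.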
Storing diesel fuel is vital for a wide range of industries — your stored fuel allows you to use generators for energy and industrial vehicles during emergencies. Fuel quality directly relates to performance — higher quality fuel works better, igniting quicker. It also protects equipment, which can sustain damage when exposed to poor-quality fuel. Understanding diesel fuel quality is vital for businesses that store and use large quantities of diesel fuel. Proper storage methods allow for preserving fuel quality between uses. Regular quality monitoring will help you protect your equipment and ensure you have access to usable fuel when needed. Learn more about fuel quality — how to measure it, why it's important and what best practices you can follow to ensure the highest quality fuel when you need it.

What Is Fuel Quality?

The definition of fuel quality depends on the type of fuel and its intended uses. Diesel fuel quality relates to its performance, which depends on its cleanliness and its contents. The contaminants that mingle with stored fuel degrade its performance and quality. High-quality fuel is essential for operating generators, vehicles and other equipment. Additives and stabilizers can help to preserve fuel quality, as can appropriate storage procedures.

Fuel Quality Standards in the US

Several different fuel quality standards apply throughout the United States. The Environmental Protection Agency (EPA), federal government and state governments set regulations with which fuel producers must comply. Diesel fuel is subject to the American Society for Testing and Materials (ASTM) D975 standard specification for diesel fuel, which describes 13 tests and their acceptable limits. For instance, the EPA restricts how much sulfur diesel fuel can contain. Fuel quality comparison depends on the results of these standardized tests. These standards help to reduce pollution and make diesel fuel safer for the environment and public health.
Since fuel quality impacts performance, it's crucial to understand how to measure it and how to preserve it.

How Is Diesel Fuel Quality Determined?

The main way to measure diesel fuel quality is through the cetane number, which describes the fuel's ignition quality. A higher cetane number means the fuel ignites quicker and burns more thoroughly. In that way, the cetane number rates the quality of combustion. Higher cetane numbers boost engine performance. Determining a fuel's cetane number requires a standardized testing method using a single cylinder in a variable compression ratio diesel engine. Pure cetane has the highest possible cetane number of 100. Contaminants, additives and other properties determine cetane numbers and fuel quality levels. Several contaminants and other factors can affect fuel performance. Following best practices and keeping up with routine maintenance can help decrease contaminant buildup and retain fuel quality.

Storage Contaminants That Can Affect Diesel Fuel Quality

Several contaminants impact the quality and performance of diesel fuel. Improper storage or prolonged lack of use can cause these particles to appear. Listed below are the biggest threats to fuel quality.

The presence of water degrades fuel quality by causing a reaction called hydrolysis. If condensation accumulates within the diesel storage tank, water droplets fall into the diesel fuel. The resulting chemical reaction breaks down the diesel fuel. Hydrolysis is a general term for any chemical reaction in which water breaks chemical bonds. Water in the storage tank also makes the fuel susceptible to microbial growth, since organisms like bacteria and fungi require water to thrive. Water condensation is a common issue when it comes to diesel tanks. The fuel has essentially no vapor pressure to displace air. If the tank is warm, expanding air gets forced out. Then, as the tank cools, humid air gets sucked back in, causing water to condense inside.
Avoiding condensation buildup is one of the reasons proper diesel fuel storage is a necessity. Since microbial life requires water to live and reproduce, these tiny organisms accumulate wherever water has entered the diesel fuel. Bacteria, fungi and other microbes degrade fuel quality and performance. Microbial contaminants:

- Produce acids that eat away at diesel fuel.
- Clog tank filters, limiting filtration.
- Restrict the flow of fluid in and out of the tank.
- Corrode the storage tank itself.
- Damage the engines in which the fuel is used.

Microbial contaminants are a greater concern today than they were in the past, following the Environmental Protection Agency's (EPA's) regulation of sulfur content in diesel fuel. Since 2017, the cap for sulfur in gasoline has been 10 parts per million (ppm). Restricted sulfur content reduces air pollutants and allows for stricter emissions standards. For that reason, it's essential for reducing environmental impacts and preserving public health. At the same time, restricted sulfur content makes fuel more susceptible to microbial growth.

Another major threat to fuel quality is dirt. Sediment buildup in the form of sand, dust, rust and other debris can lead to serious performance issues upon use. Using dirty diesel fuel can cause poor engine performance, trouble starting up, stalling, misfiring and part failures. In that way, dirt within diesel fuel can reduce equipment life spans.

Other Factors That Can Affect Fuel Quality

Certain factors tend to impact fuel quality by increasing the opportunities for contamination. Proper storage and regular fuel replacement can decrease these factors and protect fuel quality. Here are some of the concerns regarding stored diesel fuel.

The longer stored fuel sits, the more likely it is to undergo a decline in quality. Diesel fuel will remain usable in storage for a matter of months, but adverse storage conditions shorten its shelf life.
Over time, various environmental factors will affect the fuel, causing chain reactions. Long-stored diesel may appear sludge-like in consistency rather than oily. The change in consistency is due to microbial buildup and chemical reactions. Thick, sludge-like fuel will not perform well and may damage equipment. In extreme cases, it can prevent an engine from starting.

Another factor impacting fuel quality is temperature. Diesel fuel composition varies depending on region and temperatures. Fuel intended for warm weather will not store well in cold weather and vice versa. Warm ambient temperatures can speed the deterioration of diesel fuel. On the other hand, cold ambient air can contribute to condensation within the tank. As a result, temperature control surrounding a diesel storage tank is essential.

Because water is a major diesel contaminant, humidity levels also affect fuel quality. The amount of water in the air correlates to the amount of water condensing in a diesel storage tank. Diesel fuel in storage will maintain its quality longer in low-humidity settings. You can also cut back on condensation buildup by limiting the amount of empty space in a storage tank — empty space allows water droplets to form.

Oxidation is the chemical reaction that occurs due to oxygen exposure. As diesel fuel mingles with oxygen, the resulting chemical reaction leads to higher acidity levels and sludge buildup. The acidity corrodes the tank while the sludge clogs filters. Both effects decrease diesel performance and damage equipment like storage tanks, fuel injectors, fuel lines and engines. Of course, exposure to oxygen is inevitable. However, certain fuel additives can lengthen diesel fuel's shelf life by slowing down oxidation.

How to Store Diesel Fuel to Protect Fuel Quality

It's essential to practice proper storage methods to maintain the best quality fuel possible.
How you store your diesel fuel can guard against contaminant buildup, ensuring you always have access to good quality fuel and protecting your equipment from wear or damage. Here are some best practices for storing diesel fuel.

1. Keep the Tank Cool

You should ensure your storage tank is cool. High temperatures speed up the oxidation process, resulting in sludge accumulation and higher acidity levels. You might consider investing in an underground storage tank to maintain consistent cool temperatures. Another method is to build an enclosure for your storage tank. In addition to reducing ambient temperatures, an enclosure will limit water and humidity exposure.

2. Treat the Fuel

Treated fuel will last longer in storage than untreated fuel. You'll want to look into fuel additives and stability treatments to maintain diesel fuel quality. Additives and treatments can increase your fuel's shelf life by:

- Dispersing water molecules or particulates in the storage tank.
- Counteracting microbial growth or killing microbial life.
- Stabilizing the fuel.

Which additives will help you maintain your fuel depends on how and when you plan to use it — experts can determine a customized additive content for your fuel.

3. Only Use Diesel-Specific Treatments

Be sure to stick to additives and treatments meant specifically for diesel fuel rather than generic options. Not all treatments and additives will help to preserve diesel fuel quality — some might even speed up degradation. It's essential to use the right solutions for diesel, specific to your unique needs.

4. Empty the Tank Every 10 Years

Long-term stored fuel management calls for deep tank cleaning at least once every decade. Schedule a time to empty and clean your storage tanks once every ten years. Doing so will:

- Ensure high-quality fuel.
- Extend the life span of the tank itself.
- Protect equipment from accumulated tank contaminants.
Regular preventive maintenance can help you avoid unexpected downtime and equipment damage. While the tank is empty, a thorough inspection should occur and any necessary tank or component replacements should take place. Replacing storage tanks and accessories as needed will help make stored fuel last longer.

5. Consider Underground Storage

Where and how you place your storage tanks can also affect diesel fuel quality. For instance, underground storage provides better surroundings for a storage tank by minimizing exposure to heat, humidity and other environmental influences. In addition to keeping the tank cooler and dryer, it will also decrease opportunities for damage or leaks. The initial expenses related to underground storage might be greater. However, the superior fuel protection it provides is worth the investment in the long run. If you plan on implementing long-term diesel fuel storage, consider investing in underground tanks. If you opt for an aboveground storage tank, be sure to follow placement and maintenance guidelines for fuel preservation and safety.

6. Monitor the Tank for Water Buildup

Regular monitoring is also a fundamental part of fuel storage. You should check for water buildup on a regular basis, especially during humid or rainy parts of the year. Humid conditions lead to increased condensation within the tank, and pooled water on the top of the tank will lead to rusting in metal containers. Monitor for water in, on and around your fuel storage tank.

7. Keep the Fuel Away From Ignition Sources

For safety and fuel preservation, it's vital to keep your storage tank away from any ignition sources. Diesel fuel is flammable, with a flashpoint of 100 to 204 degrees Fahrenheit. Make sure any nearby electrical outlets are rated for explosive atmospheres. You should also put up signage prohibiting smoking or open flames surrounding the tank. Any fuel storage tanks should be located in isolated areas away from residences.

8. Keep the Tanks Full

Minimizing empty space within the tank limits water buildup and microbial growth. It's a good idea to enroll in a regular, automatic fill program to keep your tanks full. Doing so will ensure you always have access to high-quality fuel for routine or emergency use. The last thing you'd want during a power outage is to find your stored fuel has degraded and is no longer usable to power your generators or equipment. Unexpected power outages can result in lost revenue, slowed inventory turnover, decreased productivity and other undesirable effects. Being prepared with usable diesel fuel can prevent these issues. Keeping your tanks full will also preserve your tank and related equipment by reducing contaminant accumulation. In that way, maintaining full tanks can help you extend your equipment's life span. Be sure to keep your diesel storage tanks full with regular refilling services.

Contact Foster Fuels' Mission Critical Division

Storing large quantities of diesel fuel is a necessity across many industries. Those who do not use diesel fuel regularly may still require stored fuel to run generators and power equipment during emergencies. No matter why you need to store diesel fuel, following best practices will help to preserve its quality, maintain high performance and protect your equipment. Contaminants to avoid include water, microbial life and dirt. Fuel additives, proper storage and regular maintenance will help to minimize contaminant buildup. It's essential to store your diesel fuel tank in cool, dry and covered conditions far from any possible ignition sources. Underground storage may be the best option, but enclosures can make aboveground storage viable, as well. Regular tasks should include deep cleaning the tank once per decade, monitoring for water buildup and scheduling refillings. If you're looking for diesel fuel services, consider Foster Fuels.
Here at Foster Fuels, we help preserve high fuel quality through testing, analysis and customized additive solutions. We also provide emergency fuel services for immediate fuel needs and fuel pump-outs due to tank leaks or contamination. To learn more about Foster Fuels' diesel fuel services, reach out to us today.
A new, hair-sprouting dollop of human skin created in the lab might one day help prevent hair loss. Organoids are small, lab-grown cell groupings designed to model real-world organs, in this case, skin. A paper published in Nature describes the hairy creation as the first hair-bearing human skin organoid made with pluripotent stem cells, or the master cells present during early stages of embryonic development that later turn into specific cell types. The hirsute organoid's development was led by Karl Koehler, Ph.D., formerly of Indiana University School of Medicine and now at Boston Children's Hospital. An Oregon Health & Science University graduate student, Benjamin Woodruff, contributed by helping make the organoids as a post-baccalaureate research technician in the Stanford University lab of Stefan Heller, Ph.D. "This makes it possible to produce human hair for science without having to take it from a human," explained Woodruff, who is now completing his first year of cell and developmental biology graduate studies at OHSU. "For the first time, we could have, more or less, an unlimited source of human hair follicles for research." Having access to more hair-growing skin can help researchers better understand hair growth and development, and maybe even provide clues needed to reverse a retreating hairline.

Figure: Steps toward human hair follicle regeneration. Cell isolation: cell sources for bioengineering can be follicular (bulge stem cells, dermal papilla, and dermal sheath cells) and non‐follicular (keratinocytes, skin‐derived progenitors, and mesenchymal stem cells). Cell expansion: mesenchymal and epithelial cell sources are cultured in vitro. Bioengineering: cell clustering in 3D instructive hair bulbs. Implantation: bulbs could generate functional hair follicles.
Hair loss (alopecia) is a disease that affects a growing number of people worldwide and impacts individuals' physical, psychological, and social well‐being.1 Patients with hair disorders suffer from emotional stress, embarrassment, and depression that severely compromise their quality of life.2 To date, treatments include pharmacological and surgical (autologous hair transplant) interventions. Although hair restoration surgery is currently the most effective method, the scarcity of donor hair follicles (HFs) is often its major limitation.3 Besides, pharmacological treatments still do not fully satisfy patients' needs and can entail drastic side effects.4 Thus, the limited efficacy and possible side effects of current treatments have fostered the search for alternative therapeutic solutions capable of generating an unlimited number of HFs de novo. Notably, stem cell‐based tissue engineering is emerging as the most thriving approach, aiming to reconstruct HFs in vitro to replace HFs lost or damaged as a consequence of disease, injury, or aging. HF bioengineering approaches are based on the accumulated knowledge of the reciprocal epithelial‐mesenchymal (EM) interactions controlling embryonic organogenesis and postnatal HF cyclic growth. However, despite recent progress in the field, clinical applications of tissue engineering strategies for hair loss are still missing. Neogenesis of human follicles derived from cultured HF dermal cells has not yet been achieved. This review focuses on the research approaches being developed to tackle the major limitations of human HF bioengineering, namely the loss of cellular function following in vitro expansion of HF cells, the loss of in vivo tissue context/architecture, and the reconstruction of autologous functional HF germs for clinical procedures.

HF MORPHOGENESIS AND CYCLING: STEM CELL POPULATIONS

The HF is a mini‐organ that forms during embryonic skin development.
Its functional and cycling activities rely on coordinated communication between the different cell populations of epithelial, mesenchymal, and neural crest stem cell origin,5 which additionally regulates adult skin homeostasis and wound repair.6, 7 Therefore, understanding HF anatomy, as well as the stem‐cell populations operating during postnatal cyclic regeneration, is crucial for tissue engineering‐based solutions. Follicular dermal stem cells (skin‐derived precursors, SKPs) exist in the dermis and are able to regenerate the dermal sheath (DS) and repopulate the dermal papilla (DP) at every growth cycle.8 Both the DS and DP comprise mesenchymal cells with multi‐lineage differentiation capacity.9 In the mature HF, the DP is adjoined to the connective tissue sheath (DS), together forming the dermal component of the mature HF10 (Figure 1). The DP is thought to be a master regulator of HF cycling, which consists of serial phases of growth (anagen), apoptosis‐driven regression (catagen), and rest (telogen).11 On the human scalp, anagen lasts 1‐6 years and involves the complete regeneration of the cycling portion of the HF (Figure 1). At the telogen‐to‐anagen transition, the DP stimulates epithelial hair follicle stem cells (HFSCs) from the bulge region, which are adult multipotent cells holding self‐renewal capability and kept quiescent in their niche surrounded by the sebaceous gland (SG) in the outer root sheath.12, 13 When DP stimulatory signaling overcomes the threshold imposed by the inhibitory bulge microenvironment,14 HFSCs divide, generating a new pool of progenitors at the bulge base called the secondary germ cells,15 which survive catagen‐driven apoptosis.16 These primed hair germ cells migrate to the bulb, while expanding and differentiating into transit‐amplifying cells (HF‐TACs) that attach to the basement membrane surrounding the lower half of the DP.
HF‐TACs likely sit in place throughout much of anagen to fuel HF growth by differentiating into eight distinct epithelial lineages (eg, shaft, inner root sheath, and companion layer cells) and SGs17, 18, 19 (Figure 1). In addition to HFSCs, melanocyte stem cells also reside in the bulge. During anagen, they are coordinately activated with HFSCs to generate mature melanocytes that produce and distribute pigment granules to the adjacent differentiating cells to form pigmented hair fibers.20 The catagen phase can last between 4 and 6 weeks, during which keratinocytes and melanocytes undergo apoptotic processes. This apoptosis‐driven regression causes the DP to move upward, bringing it closer to the epithelial bulge.21 Following complete regression, the HF enters a quiescent phase (telogen), which can last several months. The replacement of the old hair shaft fiber by the forming club fiber at the end of telogen is called "exogen."22 Finally, the HF macroenvironment also encompasses adipose tissue containing adipose‐derived stem cells closely involved in hair growth regulation, as evidenced by increased adipose tissue thickness in anagen.6 Altogether, the DP, differentiated epithelial cells, and the hair matrix constitute the cycling portion of the postnatal HF, being actively renewed each HF cycle. Conversely, the upper portion of the follicle (including the bulge, SG appendage, and the infundibular epidermis) constitutes the permanent portion of the postnatal HF, being formed during embryonic development and kept throughout life.23, 24 Any aberrant signaling affecting the communication between the mesenchymal DP and the surrounding epithelial cells will disrupt cyclic hair regeneration during postnatal life.
HF BIOENGINEERING: CELL SOURCES AND CHALLENGES

Stem cell‐based regenerative medicine is emerging as the most thriving approach for hair loss treatment by holding the potential of HF cloning, that is, the production of bioengineered instructive germs from human HF cells expanded in vitro to generate fully functional HFs upon transplant into the patient's bald scalp. Rationally, such a regenerative therapy may only be possible if combining receptive‐epithelial and inductive‐mesenchymal populations to mimic the well‐orchestrated interactions controlling lifelong HF cycles, which are deeply affected during hair loss.25 Ideally, a cell‐based regenerative medicine therapy would be autologous, that is, resort to patients' cells derived from small amounts of tissue biopsies (eg, HF punch). Thus, researchers in the field have been mainly focused on developing therapeutic bioengineering solutions using dissociated HFSCs and DP cells (DPCs) isolated from HF biopsies. HFSCs and DPCs retrieved from nonbalding scalp follicles should first be expanded in culture to produce bioengineered structures in vitro with hair regenerative potential. Still, an allogeneic cell source could alternatively be used for HF regenerative therapy. Two decades ago, trans‐gender transplantation of microdissected DP and DS was shown to successfully induce HFs.26 This study not only pointed to the need for an inductive dermal component for HF regeneration, but also disclosed the possibility of using an allogeneic cell source for therapy. Indeed, the HF proved to be an immune‐privileged site, as it does not express MHC (major histocompatibility complex) class I antigens.27, 28 Regardless of autologous vs allogeneic therapy, the relevance of HFSCs, and mainly DPCs, to tissue‐engineering approaches for treating alopecia has been the focus of intensive research over the last decade.

1. Marks DH, Penzi LR, Ibler E, et al.
The medical and psychosocial associations of alopecia: recognizing hair loss as more than a cosmetic concern. Am J Clin Dermatol. 2018;20:195‐200. [PubMed] [Google Scholar]

2. Hadshiew IM, Foitzik K, Arck PC, et al. Burden of hair loss: stress and the underestimated psychosocial impact of telogen effluvium and androgenetic alopecia. J Investig Dermatol. 2004;123:455‐457. [PubMed] [Google Scholar]

8. Rahmani W, Abbasi S, Hagner A, et al. Hair follicle dermal stem cells regenerate the dermal sheath, repopulate the dermal papilla, and modulate hair type. Dev Cell. 2014;31:543‐558. [PubMed] [Google Scholar]

13. Rompolas P, Deschene ER, Zito G, et al. Live imaging of stem cell and progeny behaviour in physiological hair‐follicle regeneration. Nature. 2012;487:496‐499. [PMC free article] [PubMed] [Google Scholar]

15. Ito M, Kizawa K, Hamada K, et al. Hair follicle stem cells in the lower bulge form the secondary germ, a biochemically distinct but functionally equivalent progenitor cell population, at the termination of catagen. Differentiation. 2004;72:548‐557. [PubMed] [Google Scholar]

21. Cotsarelis G, Sun T‐T, Lavker RM. Label‐retaining cells reside in the bulge area of pilosebaceous unit: implications for follicular stem cells, hair cycle, and skin carcinogenesis. Cell. 1990;61:1329‐1337. [PubMed] [Google Scholar]

22. Higgins CA, Westgate GE, Jahoda CA. From telogen to exogen: mechanisms underlying formation and subsequent loss of the hair club fiber. J Invest Dermatol. 2009;129:2100‐2108. [PubMed] [Google Scholar]

25. Garza LA, Yang CC, Zhao T, et al. Bald scalp in men with androgenetic alopecia retains hair follicle stem cells but lacks CD200‐rich and CD34‐positive hair follicle progenitor cells. J Clin Invest. 2011;121:613‐622. [PMC free article] [PubMed] [Google Scholar]

More information: Jiyoon Lee et al, Hair-bearing human skin generated entirely from pluripotent stem cells, Nature (2020). DOI: 10.1038/s41586-020-2352-3
The Internet of Things (IoT) is changing the way subscribers use data and operators manage devices. Discover why open device management and support for multiple standards and protocols are essential to the future of IoT applications. This content stream includes:
- IoT and M2M: What's the Difference – Explore the key differences between the Internet of Things and Machine-to-Machine communication.
- IoT Has a New Friend: 802.11ah – Take a deep dive into the IEEE draft for the 802.11ah standard.
- Managing Residential and Industrial IoT – Learn how OMA-DM, SNMP, TR-069, and IoT-specific protocols accelerate smart home and industrial IoT service deployments.
The Common Vulnerability Scoring System (CVSS) is an open standard for assessing the severity of security vulnerabilities. "Common" is the keyword, indicating that CVSS is designed not only to be independent of any specific vendor or industry, but also interoperable across systems that vary in size and scope. This is not only a great initiative, but it also provides an open scoring standard that is understood and actively contributed to by the security community, making it effective and efficient to use in many different fields and industries. A CVSS score classifies a vulnerability based on the potential impact inflicted on the host where the vulnerability resides. This takes into consideration the nature of the data that may be compromised by evaluating a series of metrics such as Attack Vector, Attack Complexity, and Privileges Required, a new metric introduced in CVSS version 3. Network vulnerabilities were far more common in the past, and CVSS v2 did a very good job of classifying them. Over the years there has been a rise in web application vulnerabilities, which demanded a more granular and accurate scoring system to reflect the severity of both network and web application vulnerabilities. Since June 2012, FIRST (the Forum of Incident Response and Security Teams) has been working on the newer version, CVSS version 3, which became available on June 10, 2015. CVSS version 3 aims to provide clearer, more consistent, and more accurate scores for modern-day vulnerabilities. As an example, let's look at the OpenSSL Heartbleed vulnerability (CVE-2014-0160), a vulnerability that took the Internet by storm. Heartbleed's CVSS v2 Base Score is 5.0 out of 10. Such a score was deemed too low for a vulnerability that could potentially disclose sensitive or secret information from a vulnerable server's memory.
With the change in score interpretation from CVSS v2 to CVSS v3, as well as the new CVSS v3 metrics (namely, Privileges Required and Scope), vulnerabilities such as Heartbleed now score a more accurate Base Score of 7.5 out of 10, in contrast to the 5.0 out of 10 Base Score in CVSS v2. Other examples include a Cross-site Request Forgery vulnerability in SearchBlox (CVE-2015-0970), which has a CVSS v2 score of 6.8 and a CVSS v3 score of 8.8, as well as a Stored SQL Injection vulnerability in MySQL (CVE-2013-0375), scoring 5.5 in CVSS v2 and 6.4 in CVSS v3. This goes to show that CVSS version 3 improves the accuracy and consistency of scores for web application vulnerabilities, which makes it more relevant to a web application scanner such as Acunetix. Acunetix provides CVSS as a scoring guideline for professionals who need to use CVSS for compliance, or when the vulnerabilities identified by Acunetix need to be prioritised alongside bugs identified by other vulnerability management systems. Acunetix Web Vulnerability Scanner v10.5 ships with support for CVSS v3 to allow users to better categorise the web vulnerabilities identified by Acunetix.
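To make the scoring mechanics concrete, a CVSS v3.0 Base Score can be computed directly from a vector string using the formulas published in the FIRST CVSS v3.0 specification. The sketch below implements the Base metrics only (no Temporal or Environmental adjustments); the function name is our own, and the second vector shown is the one commonly published for CVE-2015-0970, not stated in this article.

```python
import math

# Metric weights from the CVSS v3.0 specification (Base metrics only).
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2},
    "AC": {"L": 0.77, "H": 0.44},
    "PR": {  # Privileges Required weights depend on Scope
        "U": {"N": 0.85, "L": 0.62, "H": 0.27},   # Scope unchanged
        "C": {"N": 0.85, "L": 0.68, "H": 0.50},   # Scope changed
    },
    "UI": {"N": 0.85, "R": 0.62},
    "CIA": {"H": 0.56, "L": 0.22, "N": 0.0},
}

def roundup(x):
    """CVSS 'round up to one decimal place'."""
    return math.ceil(x * 10) / 10

def cvss3_base(vector):
    """Compute a CVSS v3.0 Base Score from a vector such as
    'AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N'."""
    if vector.startswith("CVSS:3.0/"):
        vector = vector[len("CVSS:3.0/"):]
    m = dict(part.split(":") for part in vector.split("/"))
    scope_changed = m["S"] == "C"

    # Impact sub-score
    isc_base = 1 - (1 - WEIGHTS["CIA"][m["C"]]) * \
                   (1 - WEIGHTS["CIA"][m["I"]]) * \
                   (1 - WEIGHTS["CIA"][m["A"]])
    if scope_changed:
        impact = 7.52 * (isc_base - 0.029) - 3.25 * (isc_base - 0.02) ** 15
    else:
        impact = 6.42 * isc_base

    # Exploitability sub-score
    expl = 8.22 * WEIGHTS["AV"][m["AV"]] * WEIGHTS["AC"][m["AC"]] \
                * WEIGHTS["PR"]["C" if scope_changed else "U"][m["PR"]] \
                * WEIGHTS["UI"][m["UI"]]

    if impact <= 0:
        return 0.0
    if scope_changed:
        return roundup(min(1.08 * (impact + expl), 10))
    return roundup(min(impact + expl, 10))

# Heartbleed (CVE-2014-0160): matches the 7.5 discussed above.
print(cvss3_base("AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N"))   # 7.5
# Assumed vector for the SearchBlox CSRF (CVE-2015-0970): scores 8.8.
print(cvss3_base("AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H"))   # 8.8
```

Running the calculator on Heartbleed's vector reproduces the 7.5 quoted above: an Impact sub-score of 6.42 × 0.56 plus an Exploitability sub-score of about 3.89, rounded up to one decimal place.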
The arrival of the General Data Protection Regulation (GDPR) in May 2018 is the most sweeping and comprehensive European legislation to address the issues of personal data protection and online privacy in more than 20 years. GDPR defines personal data broadly and aims to give consumers control over how their data is stored, transferred, and used by third parties. According to the McDermott and Ponemon GDPR Survey Results, sixty percent of respondents said that GDPR has "significantly changed their organizations' workflows for collecting, using, and protecting personal information." With so much of today's personal data being collected and stored in cloud environments, the rush to meet GDPR compliance has the potential to hasten the adoption of edge computing. According to MachNation, edge computing is a distributed technology and processing architecture that brings computational and analytics capabilities near the point of data generation. Edge computing enables certain processes to be decentralized and to occur in a more optimal physical location. This creates more secure, reliable, and scalable IoT deployments, while also offering new opportunities for IoT solutions to generate business value. Here are a few use-case examples where edge computing can help enterprises move towards GDPR compliance. Virtually any data collected about a patient falls within the bounds of GDPR or HIPAA-type regulatory requirements. As healthcare providers adopt IoT solutions to deliver enhanced patient care, a heightened set of security and privacy concerns presents new challenges. Leveraging edge capabilities in medical devices and services allows patient data to remain close to the source, limiting the risk of a privacy breach. By restricting the movement and storage of personally identifiable information (PII), users are able to choose when, where, and for how long their data is accessible to third-party applications or their medical provider.
This a la carte approach to data management offers end-user options to tailor their devices to individual healthcare needs and keep medical providers securely connected to their patients. These on-device capabilities not only improve the standard of care offered, but empower patients to have greater control over their information and allow service providers to assume less risk. As the adoption of smart devices at home continues to grow, so do the privacy concerns. Some data from smart home devices falls within the confines of GDPR. Consumer products like Amazon Echo or Google Home must transmit data to and from the cloud to function. For the consumer, this means linking a home to a variety of third-party services, which collect and store potentially vulnerable and sensitive data elsewhere. Edge computing in the smart home has the potential to give control of personal data back to consumers – one of the primary goals of GDPR. By integrating edge capabilities into their core services, providers of smart home accessories offer users control of the data, whether they transmit it to the cloud or store and process it locally. This has particular implications for the transmission of financial, health, and location data. Virtually all utility usage data collected from homes falls within the privacy bounds contemplated by GDPR. For example, power companies collect data at both macro and micro levels to monitor electricity usage and encourage reduction in consumption. GDPR non-compliance risks arise during the collection of private usage data. Edge capabilities help utilities meet privacy requirements by providing localized and secured data streams. By monitoring and analyzing data in a delimited geographical perimeter, rather than aggregating individual consumer metrics, utility providers are able to create more granular analyses of usage patterns while remaining in compliance with GDPR. 
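The utility scenario above can be sketched in code: an edge gateway aggregates per-home meter readings into neighbourhood-level statistics before anything leaves the local network, so no individual household's usage pattern is ever transmitted. This is an illustrative sketch only, not any vendor's API; the function name, field names, and the minimum group size are invented for the example.

```python
from statistics import mean

# Minimum number of households per aggregate: publishing smaller groups
# would risk re-identifying an individual home's usage (illustrative value).
MIN_GROUP_SIZE = 5

def aggregate_at_edge(meter_readings, group_size=MIN_GROUP_SIZE):
    """Reduce per-home readings {household_id: kwh} to a single
    neighbourhood-level record that is safe to send upstream."""
    if len(meter_readings) < group_size:
        # Too few homes to anonymise — keep the data local.
        return None
    usage = list(meter_readings.values())
    return {
        "households": len(usage),        # count only, no household IDs
        "total_kwh": round(sum(usage), 2),
        "mean_kwh": round(mean(usage), 2),
        "peak_kwh": round(max(usage), 2),
    }

readings = {"home-1": 1.2, "home-2": 0.8, "home-3": 2.4,
            "home-4": 1.9, "home-5": 0.6}
print(aggregate_at_edge(readings))
```

Only the aggregate record crosses the network boundary; the raw per-household readings never leave the gateway, which is the kind of localized, delimited analysis the passage describes.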
GDPR is a landmark legislation that is here to stay, and serves as a blueprint for driving industry advancements towards increased privacy and protection. GDPR has wide-reaching implications around which industries will have to model their internal data privacy and protection playbooks. As IoT solutions continue to permeate throughout the globe, so too will the challenges of keeping sensitive data secure.
This ArcGIS Toolbox provides many tools to manipulate and analyze GIS data. Geoprocessing frequently uses the output of one tool as the input for a second tool, which creates a set of tools that are chained together. ArcGIS provides Model Builder, a graphical user interface (GUI) which can be used to drag and drop tools to create models that will run the tools in sequence. Model Builder also allows any model to be exported to Python code.

Tools can be used to manipulate and analyze GIS data. Tools are located in Toolboxes. To view the tools in ArcMap, click on the Catalog tab at the far right, as shown in Figure 1.

Figure 1: Locate the Catalog tab in ArcMap

1. In the Catalog window, click the plus sign to expand Toolboxes > System Toolboxes.
2. Expand Analysis Tools > Extract, as shown in Figure 2.

Figure 2: Analysis Tools in the System Toolbox

3. Double-click on the Clip tool.

Each geoprocessing tool has inputs and outputs that are required, as shown in Figure 3. Input parameters are the values that the tool uses to start its work. Output parameters are the results of finishing its work.

Figure 3: Clip tool parameters

There is a Show Help button in the lower-right corner of the tool which provides an explanation of any parameter that is clicked on.

Figure 4: Show Help on Clip Features

Tools can be run one at a time. If you need to set up tools to run in a specific sequence, Model Builder is the solution. A set of chained tools is also referred to as a model. Models are created when there is not a suitable solution provided by one of the existing tools in the box. With Model Builder, you can drag and drop tools from the Catalog window into the model so that the tools are connected and in the proper sequence. Model Builder also allows any model to be exported to Python code.
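To connect the Clip tool walkthrough above to code: a single run of a geoprocessing tool corresponds to one arcpy call. The sketch below shows what such a call typically looks like; the dataset paths are invented placeholders, and the script requires an ArcGIS installation (which provides the arcpy site package) to run.

```python
import arcpy

# Input parameters: the features to clip and the clip boundary
# (hypothetical paths — substitute your own data).
in_features = r"C:\data\roads.shp"
clip_features = r"C:\data\study_area.shp"

# Output parameter: where the result is written.
out_feature_class = r"C:\data\roads_clipped.shp"

# Run the Clip tool from the Analysis Tools > Extract toolset.
arcpy.Clip_analysis(in_features, clip_features, out_feature_class)
```

Chaining tools into a model amounts to passing one call's output (here, `out_feature_class`) as the input of the next call, which is exactly what Model Builder generates when a model is exported to Python.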
Sometimes it is easier to create a model in Model Builder and then export the model to Python code to see how the corresponding Python code is constructed. Read What is Model Builder? to learn how to lay out a workflow using Model Builder.

After setting up a tool and running it manually, the Results window shown in Figure 5 tracks the tools that have been executed. Click Geoprocessing > Results if the Results window is not open.

Figure 5: Results Window

Right-click on one of the executed tools and select Copy As Python Script.

Figure 6: Accessing Features via right-click

Paste the code into the Python window.

Figure 7: Python code generated by an executed tool

Fairly complex inputs can be quickly created using the toolbox. Even if you are familiar with Python, this method can be very useful to get the correct quotation marks and options expected by a function. The "#" indicates a comment, and that line will not execute. Comments can be used as placeholders or as a means of documenting a script.

Read Tools and toolboxes for an explanation of how tools can be chained together to create models by following the steps below.
- Browse to ArcGIS Resources.
- Select Desktop.
- Select Geoprocessing.
- Select Introduction.
- Click on A quick tour of geoprocessing.

ArcGIS Model Builder

The Model Builder utility within ArcGIS is used to create, edit and manage models. Read through A quick tour of Model Builder for an explanation of the Model Builder interface. In Figure 8, an input has been dragged into a new model and the inputs and outputs for the parameters have been set.

Figure 8: Model Builder

Read A quick tour of creating tools with Model Builder for a step-by-step explanation based on a given scenario. From the Model window menu, choose Model > Export > To Python Script and choose a location to save the new .py text file. This will give a more comprehensive script that can run independently of ArcMap, with inputs supplied from the command line or from other Python code.
It keeps the defaults chosen from the toolbox but allows those to be changed. Figure 9: Creating an Independent Python Script It also uses the environment variables and sets those temporarily to the settings that were in effect while within Model Builder. It then shows an example of how to use variables when calling the function, rather than sending a text string with the values. Finally, it sets the environment variables back to their original values. Watch the video, Using Model Builder in ArcGIS 10.x
But risk can never be completely neutralized, officials said. Risk management is the name of the game around the world as governments move to electronic voting, and there is no single answer for gaining public trust, officials said this week. No matter what system a government adopts, there is no technology that can provide complete security and prevent tampering, said Julian Bowrey, program manager for local e-government in the United Kingdom. "In any system, you have to understand the risks and manage the risks," he said, speaking at the Government Solutions Forum in Washington, D.C. "There are many ways you can manage the risk, but you can't guarantee there is no risk at all," said Cameron Quinn, U.S. elections advisor at the International Foundation for Election Systems, a nonprofit organization that advises on all areas of election management. Voter identity verification is one of the biggest concerns. Officials in Ontario, Canada, have set up a two-step registration process to ID citizens before they come to the polling place or vote online, and identification must be presented when they vote. It's a simple measure, but citizens are embracing the new voting systems and processes so far, using both the Internet and touch-screen systems without any qualms, said Sheila Birrell, town clerk in Markham, Ontario. The United Kingdom held several local pilot tests of e-voting systems in 2002 and 2003, and in all cases auditors found no identity fraud, Bowrey said. There is no way to tell what will happen as the pool of voters gets larger, but officials need to weigh that possibility along with the risks that even a paper-based system can face, Bowrey said. 
The possibility that the systems themselves could be tampered with is another concern, but other than testing and validating previous tests, the best way to mitigate risk seems to be to standardize on a system so everyone faces the same risks, said David Walsh, assistant principal in the franchise section of Ireland's Department of the Environment, Heritage and Local Government. Electronic voting has faced heavy opposition in Ireland, and the government there recently delayed its move to an e-voting system because of concerns about paper audit trails -- an issue being debated currently in the United States. California is one of several states considering laws to require paper records, and the federal Election Assistance Commission last month started developing recommendations for addressing the issue.
A Key Management System (KMS), also known as a cryptographic key management system (CKMS) or enterprise key management system (EKMS), is an integrated approach for generating, distributing and managing cryptographic keys for devices and applications. It may cover all aspects of security, from the secure generation of keys, through the secure exchange of keys, to secure key handling and storage on the client. Thus, a KMS includes the backend functionality for key generation, distribution, and replacement, as well as the client functionality for injecting keys and for storing and managing keys on devices.

Key management components

To ensure online data remains protected, it's critical to understand the different components of an encryption key management service, so that you know the right questions to ask when evaluating new and existing KMS technologies that can be implemented.

- Key storage: As a general principle, the person or company who stores your encrypted content should not also store the encryption keys for that content (unless you're comfortable with them accessing your data).
- Policy management: While the primary role of encryption keys is to protect data, they can also deliver powerful capabilities to control encrypted information. Policy management is what allows an individual to add and adjust these capabilities. For example, by setting policies on encryption keys, a company can revoke, expire, or prevent the sharing of the encryption keys, and thus of the unencrypted data, too.
- Authentication: This is needed to verify that the person given a decryption key should be allowed to receive it. When encrypting digital content, there are several ways to achieve this.
- Authorization: Authorization is the step that verifies the actions that people can take on encrypted data once they've been authenticated. It's the process that enforces encryption key policies and ensures that the encrypted content creator has control of the data that's been shared.
- Key transmission: This is the final step in the overall encryption key management process. It concerns how keys are transmitted to the people who need them while access is still restricted for those who do not.
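The generation, policy and authorization components above can be made concrete with a toy in-memory key manager. This is an illustrative sketch only, not a production design: the class and method names are invented for the example, and a real KMS would keep keys in an HSM or a managed service rather than in process memory.

```python
import secrets
import time

class ToyKMS:
    """Illustrative in-memory key manager: generation, policy, authorization."""

    def __init__(self):
        # key_id -> key material plus its policy metadata
        self._keys = {}

    def generate_key(self, key_id, authorized_users, ttl_seconds=3600):
        # Key generation: a fresh 256-bit random key with an expiry policy.
        self._keys[key_id] = {
            "key": secrets.token_bytes(32),
            "expires": time.time() + ttl_seconds,
            "revoked": False,
            "users": set(authorized_users),
        }

    def revoke(self, key_id):
        # Policy management: revoking the key denies all future access.
        self._keys[key_id]["revoked"] = True

    def get_key(self, key_id, user):
        # Authorization: enforce revocation, expiry and the user allow-list.
        entry = self._keys[key_id]
        if entry["revoked"] or time.time() > entry["expires"]:
            raise PermissionError("key revoked or expired")
        if user not in entry["users"]:
            raise PermissionError("user not authorized for this key")
        return entry["key"]

kms = ToyKMS()
kms.generate_key("doc-1", authorized_users={"alice"})
assert len(kms.get_key("doc-1", "alice")) == 32
kms.revoke("doc-1")
```

After `revoke("doc-1")`, any further `get_key` call for that key raises `PermissionError`, which is the "revoke the key, and thus the data" behavior described in the policy management bullet.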
Computers get hot, and data centres use a huge amount of energy to deal with it. Remarkably, the waste heat from a large data centre could provide hot water for 11,000 homes. But moving heat is hard: it requires new infrastructure or a pre-existing heat network. Moving bits and bytes is easy, so we've taken the servers to where heat is needed: in people's homes. We harness the heat from compute to provide free hot water for those who need it, turning a compute problem into a social benefit.
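A back-of-envelope calculation shows how a figure like 11,000 homes can arise. Every input below is an assumption chosen for illustration, not Heata's actual numbers: a 5 MW facility running year-round, essentially all electrical power ending up as recoverable heat, and roughly 4,000 kWh of hot-water demand per home per year.

```python
# Back-of-envelope estimate (all inputs are illustrative assumptions,
# not figures from Heata).
it_load_kw = 5_000              # assumed electrical load of a large data centre
hours_per_year = 8_760
recovery_fraction = 1.0         # assume nearly all power becomes recoverable heat
hot_water_kwh_per_home = 4_000  # assumed annual hot-water demand per home

heat_kwh_per_year = it_load_kw * hours_per_year * recovery_fraction
homes = heat_kwh_per_year / hot_water_kwh_per_home
print(round(homes))  # on the order of 11,000 homes under these assumptions
```

Changing any assumption (facility size, recovery fraction, per-home demand) scales the result linearly, which is why published estimates of this kind vary.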
While the threat of IoT security issues is apparent, people and the processes they create are often more problematic. In the early 1990s, Kevin Mitnick was one of the most notorious hackers on the planet. Now, however, he’s a security rockstar — a best-selling author and popular speaker who has recast himself as a trusted adviser to the Fortune 500 and international governments. Hackers like Mitnick should remind enterprise companies of the human element of hacking. Mitnick has long been an expert in social engineering, which he defines in his book “The Art of Deception” as “getting people to do things they wouldn't ordinarily do for a stranger.” Threat actors have long used social engineering to target traditional computer networks and computing platforms. But the technique is also perilous for enterprise IoT devices, nearly half of which have been breached in the past two years, according to a survey of 400 IT executives from Altman Vilandrie & Co. A post on the Mitnick Security blog, for instance, explains how social engineering was likely used in the Stuxnet attack against the Natanz nuclear facility in Iran. The plant’s network may have been isolated from the public internet, but all it took to launch the attack was for a single worker to plug a USB flash drive into a computer within the facility. Stuxnet, one of the first examples of an IoT-based digital weapon, caused Iranian nuclear centrifuges to fail and reportedly explode in 2010. “It is common for organizations to focus on technology-based cybersecurity risks while not focusing sufficiently on people and process, both of which are common failure points,” said T.J. Laher, senior solutions marketing manager at Cloudera and host of the Cybersecurity On Call podcast. A May feature in Harvard Business Review reaches a similar conclusion: “The major sources of cyber threats aren’t technological. 
They’re found in the human brain, in the form of curiosity, ignorance, apathy, and hubris.” Another recent HBR piece considers the behavioral economics of why executives tend to underinvest in cybersecurity. (Note: Cloudera is sponsoring an HBR webinar on the subject of cybersecurity for the C-suite to be held on Aug. 3.) Such biases can also create trouble for cutting-edge networks designed to confront IoT security issues posed by networks with thousands or millions of IoT devices, said Ofer Amitai, CEO and co-founder of security startup Portnox. Consider, for instance, intuitive networking, which relies on machine learning and artificial intelligence to facilitate network administration and threat detection. “One of the most impressive aspects of Cisco’s Network Intuitive [platform], for instance, is that it claims to be able to identify malware in encrypted web traffic without the need to decrypt the information and breach privacy,” Amitai said. “However, if this tool is based on network context, it could create space for social engineering and put the network under threat from potentially dangerous malware ‘disguised’ as regular encrypted traffic.” For example, a hacker could disguise a phishing campaign so that it resembles regular behavior and actions carried out by employees on the network, thereby easily gaining entry into the network and access to its assets, Amitai added. “Additionally, a hacker could use social engineering to gain access to the network and then send out what look like regular encrypted commands, which are actually network attack vectors. This would fly under the radar of network admins if they aren’t decrypting traffic to check for malware threats.” In addition, an employee with low-level internet etiquette could “miseducate” the network and expose the organization to cyberthreats. For many enterprises, it may still be too early to automate network access and control to be “intuitive,” Amitai concluded.
Another consideration is that relatively few executives worry sufficiently about IoT security issues. This is often the case for organizations fortunate enough never to have been hacked. “We see buyers who think of security as a cost center who want to achieve as much security as possible at the lowest cost,” Laher said. “But if a CEO has ever been part of an organization that has been hacked before, cybersecurity has a bigger budget. They might even have a blank check,” he explained. Another common hurdle is that executives think of IoT security issues as external. Many breaches, however, are aided or abetted by people within the company. IBM’s 2016 Cyber Security Intelligence Index reported that 60% of such attacks were from insiders. An example might be an engineer unwittingly deploying an insecure network of IoT devices, or it might be a disgruntled cybersecurity professional. “We are seeing forward-looking organizations embrace this concept of ‘watching the watcher,’” Laher said. “A lot of cybersecurity professionals are ex-hackers. They were black-hat [hackers] at one point or [hacktivists].” In the end, the triad of people, process and things is interwoven. “Ultimately, the notion of watching the watcher becomes a technology problem,” Laher noted. “You need to do a complete audit so you can track what everybody is doing and what they are accessing and modifying. You need to have all of your data encrypted and secure so that only one or two people can access it.” With the explosion of IoT devices, “the future of networking is really more about having visibility to all devices connected to the network in real time and the ability to control and manage them in a way that protects the network,” Amitai said.
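"Watching the watcher" ultimately means keeping tamper-evident records of who accessed and modified what. One common building block for such auditing is a hash-chained log, where each entry commits to the previous entry's hash, so a silent edit anywhere in the history breaks the chain. The sketch below is a generic illustration of that idea, not any vendor's implementation.

```python
import hashlib
import json

def append_entry(log, actor, action):
    """Append an audit entry whose hash covers the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"actor": actor, "action": action, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

def verify(log):
    """Recompute every hash in order; any edited entry breaks the chain."""
    prev = "0" * 64
    for entry in log:
        body = {"actor": entry["actor"], "action": entry["action"], "prev": prev}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, "admin1", "read:customer-db")
append_entry(log, "admin2", "modify:firewall-rules")
assert verify(log)
log[0]["action"] = "read:nothing"  # tampering with history...
assert not verify(log)             # ...is detected on verification
```

Real audit systems add timestamps, signing keys and append-only storage, but the chaining principle is the same.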
Peter Tran, GM and senior director of RSA's Advanced Cyber Defense division, says that it is noble to aim to achieve a perfect triad between people, process and technology, but stresses that it is challenging “given the disparate nature of IoT” and “today’s rush to migrate to the cloud.” “The scales tend to get tipped pretty heavily towards technology when IT and sensors come together,” he said. Source: Internet of Things Institute
Databricks: The Future of Data Ethics

Artificial Intelligence solutions are on the rise. A recent study from Tractica found that the market for AI software products and services will surpass $89 billion by 2025. Moreover, investments in AI startups and solutions totalled more than $15 billion in 2017 alone. With such rapid growth rates ahead, it is no wonder that SMEs and enterprises alike are fighting to jump onto the AI trend. Don’t get me wrong: that is fantastic. More market participants generally means innovation happens faster than usual. Plus, having businesses participate in the AI revolution helps everyone get closer to the holy grail of developing General AI. Artificial Intelligence will transform from a particular, use-case-driven technology into an intelligent companion that makes sensible decisions.

What is Artificial Intelligence?

Artificial Intelligence is arguably one of the most debated fields in contemporary science. Although modern AI is rooted in the 1930s, we are still unsure how to define it. The recent media attention on the topic hasn’t helped to settle these debates. Whether because we lack better names or to jump on the trend, we now have many applications labelled as artificial intelligence when they are just statistical analysis or manually encoded “if-then” rules. Moreover, the field is still consolidating as old topics transition to other fields and new ones are added at the forefront. For example, automated search and planning were once at the forefront of AI research, but are now taught to undergraduates everywhere. Defining AI becomes even trickier because we tend to associate intelligence with complex tasks such as chess, Go, or proving maths theorems, while tasks like walking, recognising our loved ones in images, or picking up soft objects seem mere commonalities.
In reality, AI algorithms excel at solving tasks like playing complex games because the rules are predetermined, and they can handle a much larger amount of information than we can. On the other hand, “soft” tasks are still close to a mystery. Take a look at the evolution of autonomous driving: we are still trying to figure out all the variables involved and the decisions that a human driver makes almost instinctively. Thus, we should be careful when calling something an AI solution or system. It could be that it looks intelligent when in fact it is just performing a long list of complex operations. Artificial Intelligence is often described as an attempt to replicate human thinking and decision making. To some extent that is true, but artificial intelligence is about much more than replicating human cognitive abilities. Many applications and solutions today perform human tasks such as speaking, translation, or driving. However, AI solutions should also reason, solve problems, acquire and use knowledge, make decisions, and communicate in natural language. The whole purpose of AI is to create systems that are more efficient, more effective, and more productive than humans are. In other words, AI systems should process unfathomable amounts of data in a timely fashion to arrive at practical solutions. An AI algorithm is a method of computation for when the steps towards solving a problem are unknown. In other words, artificial intelligence uses information about the situation to work out how to react to it. Most of the time, the starting point is a basic model of the problem domain and a collection of data. The data are either labelled or unlabelled. The algorithm adjusts the parameters of the model to best represent the data. In other words, the algorithm adjusts itself so that its representation of the problem domain is in tune with what the examples reveal about the world. When the model reaches a state that best represents the world, the algorithm stops learning.
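The parameter-adjustment loop just described can be made concrete with a minimal example: gradient descent fitting a linear model to labelled examples, with the parameters settling once the fit stops improving. This is a deliberately simple sketch of the general idea, not a production learning system.

```python
# Minimal "learning from examples": fit y = w*x + b by gradient descent.
# The labelled data come from the (unknown to the algorithm) rule y = 2x + 1.
data = [(x, 2.0 * x + 1.0) for x in range(-5, 6)]
w, b, lr = 0.0, 0.0, 0.01  # model parameters start uninformed

for _ in range(2000):
    # Gradient of mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    # Adjust the parameters so the model better represents the examples.
    w -= lr * grad_w
    b -= lr * grad_b

# The parameters have converged near the rule hidden in the data.
assert abs(w - 2.0) < 0.05 and abs(b - 1.0) < 0.05
```

The same loop, with a vastly larger model and dataset, is what most of today's narrow AI systems run underneath.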
Thus, any solution that can improve its performance by learning from experience while performing tasks in complex environments without supervision can be considered artificial intelligence.

Narrow vs General AI

Artificial Intelligence seeks to create adaptive and autonomous systems. In its pursuit, research commonly proceeds along two parallel routes: narrow (or weak) AI and general (or strong) AI. The first route leads to the systems we see in practice today: highly specialised algorithms that solve one very well-defined task or problem. Siri is one example of narrow AI. Even if it appears to perform a plethora of functions, it can only operate within a given domain and is not self-aware. In contrast, general AI (or strong AI) seeks to create a system that is self-aware and capable of solving any problem. Unfortunately, we are still far from building general AI applications, but systems like Jarvis of Marvel’s Iron Man series are a good representation of general AI.

AI and Business

Although recent years have witnessed an explosion in applications with one form or another of AI, business applications are still limited to a restricted set. Typical examples of AI solutions for business are recommender systems for content or products, chatbots to ease customer service, intelligent virtual assistants to facilitate internal communication and workflows, fraud detection, bots to improve customer experience, and programmatic advertising and automated marketing solutions. Industrial sectors also benefit from AI developments, mostly through intelligent control systems for industrial assets and processes, predictive maintenance, process monitoring and optimisation, autonomous trucks and other smart assets. Although overwhelming, these examples are only the tip of the iceberg when it comes to what AI can do. Most of the applications I mentioned are prime examples of narrow AI. They are designed to resolve specific use cases.
Nevertheless, these systems are working, and businesses have reported improvements in efficiency and productivity after adopting them. However, these improvements don’t come for free. The significant drawbacks on the road to broad AI adoption are not technical but ethical and skills-related. In a recent survey from InfoSys, senior business decision makers revealed that they are concerned about the lack of clear ethical standards when it comes to AI. Furthermore, business leaders are worried about the job displacement that AI will cause. However, AI is not just about automating processes and having machines perform human tasks. In reality, as AI adoption increases, new jobs will emerge, from robotic and intelligent systems designers and builders to supervisors and other positions where creativity and intuition are essential. A recent PwC study revealed that sectors like professional services, science, and education will see a significant rise in the number of jobs available. Sectors where people perform repetitive tasks, such as manufacturing or transport, will see a drop in jobs. Nevertheless, if people, businesses, and governments alike invest in the continual development of employees’ knowledge, especially in STEM subjects and the arts, companies stand only to benefit from adding AI to their workforce. The future of AI for business is not all gloomy. However, how can companies transition from single, isolated AI solutions to enterprise-wide systems? One solution is to create networks of specific AI solutions and intelligent agents. These agents adapt to their environment and are autonomous. In other words, they continually learn from situations and from each other about how best to perform their tasks. Soon, intelligent agents will augment the whole enterprise and support employees. As they grow and evolve, these systems become ubiquitous, from internal systems that monitor and control processes to client-facing applications and services.
Thus, the enterprise backbone shifts from human decision makers to a mix of software or hardware agents performing repetitive tasks and adaptive and autonomous agents augmenting people.
The Role of a Project Manager
- April 4, 2016
- Posted by: Juan van Niekerk
- Category: Project Management

Overview of Project Management

We hear the phrase “project management” on a daily basis, but few of us know what the term entails and what the role of a Project Manager actually is. There is a misconception that Project Managers are constantly inundated with paperwork and that they mostly run around arranging meetings. Quite the opposite is, in fact, true. Paperwork can be a large part of the job, but many organisations have moved their operations to the cloud, significantly cutting down on time spent doing physical paperwork. Although a Project Manager spends around 90% of their time communicating, meetings are held to a minimum and kept as short as possible to ensure only the most relevant information is discussed and that productivity is not affected by long meetings.

What exactly is a project?

To determine what a Project Manager actually does, it is necessary first to understand what a project is. Think of a project as a temporary goal. To achieve this goal, a set of operations needs to be put into place and executed correctly for the outcome to be successful. A project has a predetermined start and end, and a definable size with a certain amount of resources available to complete the goal. The “goal” may be a physical product that needs to be produced to a certain quality standard, a bridge that needs to be erected by a certain time or even an IT solution that needs to be implemented by a certain date.

Roles and Responsibilities of a Project Manager

The Project Manager is the individual who is held responsible for the outcome of the project. They will need to initiate, plan, design, monitor, control and, ultimately, close the project, whether it ends successfully or in failure.
Common Project Manager Duties

Seeing as the Project Manager is held accountable for the project’s status at any point, they will have various duties to perform during the lifecycle of a project. They need to ensure that any products (or parts of a product) that are completed are delivered within the agreed time scale, while ensuring that the costs involved in completing that product are within budget. Quality is a major factor in projects. If the product is not of a desirable quality, it will be rejected by the client and the project will either have to be restarted or scrapped altogether. The Project Manager needs to ensure that this does not happen. A Project Manager will also ensure that the progress being made by the project team is sufficient to justify the cost of that phase of the project, ensuring that the project remains viable. A project needs to meet the expectations of any stakeholders or sponsors that are invested in its outcome. The Project Manager will supply regular reports on progress against the set guidelines. There is always the chance of unforeseen risks cropping up in any project, and this needs to be monitored by the Project Manager. Risks need to be identified in a timeous manner and suggested remedies reported to the project board. The approved fixes will then be implemented to ensure the project stays on track. Change in the project needs to be monitored and addressed while ensuring that the objectives of the project remain within the agreed-upon constraints. The above are but a few of the tasks and responsibilities that a Project Manager will face on a daily basis. In short, they need to monitor and manage the project and project staff in such a way that the project stays within the boundaries that were set during its initiation.
Skills Required by a Project Manager

The skills required to be a competent Project Manager relate to leadership and management, for example:

Team-Building Skills: Creating a common purpose for a team to work towards creates a sense of camaraderie between team members, as they are all working toward a common goal that will, ultimately, benefit each member of the team.

Ability to Delegate Tasks: Delegating tasks to your team members enhances the sense that they are trusted in their skills and that the Project Manager has faith that the task will be completed as expected. Project Managers who constantly check up on their fellow workers are seen as micro-managers, breaking the trust that has been instilled in their team.

Scheduling: Working according to a plan is essential if a project is to have the desired outcome. Ensuring the right tasks are carried out by the right person at the right time will see targets being met more often than not.

Resource allocation: The right resources need to be allocated to the right departments when they are needed. The individuals to whom the resources are allocated may need to be coached or trained according to their skillset to ensure that resources are not wasted or used incorrectly.

Risk management: When an issue arises during a project, there needs to be a plan in place, and it needs to be followed calmly and according to a risk management strategy. This will also be created by the Project Manager.

Budgeting: There are serious boundaries and constraints attached to every project. One of the most common reasons for project failure is exceeding the budget. This could arise not only from overspending on resources, but also from bad time management of staff or the hiring of contractors to provide specialised work that cannot be sourced from inside the organisation.

Issue management: Akin to risk management, issues that arise could seriously hinder the progress and ultimate conclusion of a project.
These also need to be identified and dealt with swiftly to keep the project on its intended path.

Soft skills are interpersonal skills that can make a very big difference, especially since Project Managers work with people. Some examples are:

Communication: As with delegation, project staff need to be aware of their duties and responsibilities and of the progress of the overall project at all times. It is up to the Project Manager to ensure that everyone is on the same wavelength, especially when not everyone is working on the same site.

Integrity: Best practice is not only applicable to the processes used by an organisation, but also to the way a Project Manager attends to their duties. A Project Manager cannot expect project staff to work ethically when they are not doing the same.

Empathy: Project Managers will do well to remember that each individual working on the project has a life outside the project and that each of those individuals may have their own ideas and feelings about the work being done. Retaining a human element in the work being done binds teams together.

Staying calm under pressure: We’ve all been in a situation where we feel that we are in over our heads. When this happens, the worst possible thing to do is to lose control and start pointing fingers. A good Project Manager will collect themselves and face the problem head-on in a calm and collected manner.

With the right mindset and trust in the training that has been undertaken, project management can be a long and fulfilling career. The schedule of a Project Manager is often a hectic one, but once a project has been completed successfully, there is a real sense of achievement and the dividends can be enjoyed.
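The budget and schedule monitoring described above is often quantified with earned value management (EVM), a standard project-control technique. The figures below are invented purely for illustration: a cost performance index (CPI) below 1 means the project is over budget, and a schedule performance index (SPI) below 1 means it is behind schedule.

```python
# Earned value management with illustrative (made-up) figures.
planned_value = 100_000  # budgeted cost of the work scheduled to date
earned_value = 80_000    # budgeted cost of the work actually completed
actual_cost = 95_000     # what has actually been spent so far

cpi = earned_value / actual_cost    # cost performance index
spi = earned_value / planned_value  # schedule performance index

# Both indices below 1: this project is over budget and behind schedule.
assert cpi < 1 and spi < 1
print(f"CPI={cpi:.2f}, SPI={spi:.2f}")
```

A Project Manager reporting these two numbers in each progress report gives stakeholders an at-a-glance view of whether the project remains within its agreed constraints.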
In DNA printing, genetic code becomes computer code. This transformation occurs when the chemical bases adenine, thymine, cytosine and guanine present in a chemical mix or gene sequence are translated by computer through gel electrophoresis technology into their representative letters: A/T, T/A, C/G, G/C. This alphabet code was formalized in 1970 by the International Union of Pure and Applied Chemistry (IUPAC) for integration into a text-based bioinformatics format, called “FASTA,” in which nucleotides are represented symbolically using single letters. Also known as “artificial gene sequencing, synthesis and protein production,” DNA printing is a method in synthetic biology that is used to create artificial genes in the laboratory. What sets it apart from molecular cloning and polymerase chain reaction (PCR) is that scientists can use DNA printing to make a completely synthetic double-stranded DNA molecule artificially, without the need for preexisting DNA sequences. The science behind DNA printing of rDNA and proteins is known as “phosphoramidite chemistry” and “solid-phase DNA synthesis.”

Artificial DNA in a Jar

“This means you can buy in jars chemicals which are derived from sugar cane, and the chemical phosphoramidites in these four bottles end up being the four bases of DNA … A/T, C/G, T/A, G/C … in a form that can be readily assembled,” explained Drew Endy, assistant professor of bioengineering at Stanford University, in a 2008 Long Now Foundation presentation titled “Creating Synthetic DNA.” “So, you hook these bottles up to a machine, and into the machine comes information from a computer, a sequence of DNA … whatever you would like to build, and that machine will stitch the genetic materials together from scratch,” he continued. “It’s DNA synthesis … . You take information and the raw chemicals and you compile genetic material.
It’s practically speaking the coolest, most impressive/scary technology I’ve encountered.” Artificial DNA synthesis involves building a man-made version of the nucleic acid strands that form genetic code. Currently, solid-phase synthesis is carried out automatically using computer-controlled instruments. A “gene of interest” fragment sequence FASTA file is downloaded to an automated synthesizer. The synthesizer computer’s onboard synthesis program applies this code to an actual phosphoramidite chemical mix of nucleobase pairs, the building blocks of DNA — adenine/thymine, cytosine/guanine — represented in the computer as the letters AT/CG. The desired AT/CG sequence is entered on a keyboard and the system’s microprocessor automatically opens the valves of the containers of successive AT/CG phosphoramidite nucleotide bases, reagents and solvents needed at each step, into a synthesizer column, which is packed with tiny microbeads (called a “resin”) made of controlled pore glass (CPG), polystyrene or silica. These beads provide support on which DNA molecules are assembled. The phosphoramidite building blocks are coupled sequentially to the beads that support the growing nucleotide chain in the order required by the sequence of the “gene of interest” and the intended downstream protein product (e.g., a vaccine, biologic). The chemical succinyl acts as a sequence-specific linker of phosphoramidite molecules to target beads. Upon the completion of the chain assembly process and after all steps are finished, the synthesized compound is cleaved chemically from the solid-phase beads, released to solution and deprotected, and the resulting strand of synthetic gene or genes is collected for purification. The method has been used to generate functional bacterial or yeast chromosomes containing approximately 1 million base pairs. (By comparison, the human genome is made up of 3 billion base pairs). 
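The FASTA files that feed such a synthesizer are simple enough to parse by hand: a header line starting with ">" followed by one or more sequence lines. A minimal reader (illustrative only; real bioinformatics work would use an established library) might look like this:

```python
def parse_fasta(text):
    """Parse FASTA text into a dict mapping each header to its sequence."""
    records, header, chunks = {}, None, []
    for line in text.strip().splitlines():
        if line.startswith(">"):
            if header is not None:
                records[header] = "".join(chunks)
            header, chunks = line[1:].strip(), []
        else:
            chunks.append(line.strip())
    if header is not None:
        records[header] = "".join(chunks)
    return records

# A made-up "gene of interest" fragment for illustration.
example = """>gene_of_interest example fragment
ATGGTTCGA
TTACCGGAT
"""
records = parse_fasta(example)
assert records["gene_of_interest example fragment"] == "ATGGTTCGATTACCGGAT"
```

The joined A/T/C/G string is exactly the base-by-base instruction stream the synthesizer's microprocessor walks through when opening the phosphoramidite valves.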
Making a Protein – Proteomics in Action

Once purified, the gene is ready to make a protein. The journey from gene to protein is complex and tightly controlled within each cell. Isolation of a specific gene begins with scientists constructing a DNA library — a comprehensive collection of cloned DNA fragments from a particular cell, tissue or organism. The DNA containing the target gene(s) is split into fragments using restriction enzymes or the protein Cas9 (or CRISPR-associated), an enzyme that acts like a pair of “molecular scissors” capable of cutting strands of DNA. The target gene of interest in a segment of DNA is isolated and inserted into the purified DNA genome of a self-replicating genetic element — generally a virus or a bacterial plasmid. The gene of interest merges with the plasmid’s DNA to make a recombinant DNA molecule known as a plasmid “cloning expression vector.” Cloning vectors are plasmids used primarily to propagate DNA. An expression vector is a specialized type of cloning vector designed to allow transcription of the genetic information into messenger RNA (mRNA) and translation into a protein. Because bacteria divide rapidly, they can be used as “factories” to copy DNA fragments in large quantities. E. coli is used widely in laboratories as a host organism because it is easy to manipulate and inexpensive to grow. E. coli is the most common prokaryotic (no membrane-bound nucleus) organism used in research. It is an excellent host for producing various proteins, and was one of the first organisms to have its genome sequenced, in 1997. Once the vector is inserted into an E. coli bacterial cell (transformation) for amplification, the rDNA molecule replicates inside the host E. coli cell while the host cell divides, forming a clone of cells called a “library.” DNA contains the instructions to assemble amino acids in a specific order.
Each cell type only “turns on” (or expresses) the genes that have the code for the proteins it needs to use. Double-stranded DNA “breathes” (frays) in a rhythmic unwrapping and rewrapping, zippering and unzippering — a dynamic opening and closing of “bubbles” between the two strands that leads to the breaking apart of base pairs. The bubble opening between the two strands results in a transient single-stranded DNA region containing one or more bases, allowing proteins to gain their initial access to DNA through ribonucleic acid (RNA), a long, single-stranded chain of cells that process protein. There are four types of RNA, and each is encoded by its own type of gene: mRNA (messenger RNA) encodes amino acid sequence of a polypeptide; tRNA (transfer RNA) brings amino acids to ribosomes during translation; rRNA (ribosomal RNA), along with ribosomal proteins, makes up the ribosomes — the organelles that translate the mRNA; and snRNA (small nuclear RNA), along with proteins, forms complexes that are used in RNA processing. Gene DNA sequences instruct cells to produce particular proteins. RNA enzymes read the information in a DNA molecule and transcribe it into the intermediary messenger ribonucleic acid (mRNA) molecule. Transcription begins when an enzyme called “RNA polymerase” attaches to the newly opened DNA template strand and begins assembling a new chain of nucleotides to produce a complementary RNA strand. The Universal Genetic Code contained in DNA sequences enables a cell to translate the nucleotide “language” of DNA into the amino acid “language” of proteins made of long chains of amino acids joined end to end. Amino acids have many functions, but the most well known is that they are the building blocks for protein synthesis. The genes in RNA that code for proteins are composed of codons, a triplet of adjacent nucleotides (ATC/GAC, etc.) in the messenger RNA (mRNA) chain. 
Each codon codes for a single, specific amino acid in the synthesis of a protein molecule. Here's where the gene of interest begins morphing into the protein of interest.

When the DNA segment carrying the gene of interest is fully transcribed, each base of DNA corresponds to one base of the resulting mRNA. The mRNA molecule then carries DNA's coded instructions for making a protein; during translation, that information is converted into the "language" of amino acids, the building blocks of proteins. Together, transcription and translation are known as "gene expression" or "protein synthesis." In eukaryotic cells, transcription occurs in the nucleus and translation in the cytoplasm — the cell substance between the nucleus and the outer membrane; in bacteria such as E. coli, which lack a nucleus, both steps occur in the cytoplasm.

After transcription, the mRNA molecule carries the DNA message out of the cell nucleus into the cytoplasm, to the protein-manufacturing ribosomes. Ribosomal ribonucleic acid (rRNA), the RNA component of the ribosome, is essential for protein synthesis. During translation, ribosomal subunits assemble like a sandwich around the strand of mRNA newly arrived from the nucleus with its genetic code for creating a protein.

The ribosomal subunits attract transfer RNA (tRNA) molecules tethered to amino acids; tRNA transfers amino acids from the cytoplasm to the ribosome. (E. coli keeps amino acids within the cell, or can pull them into the cytoplasm from an outside environment such as a nutrient mix.) The ribosomal complex physically moves along the mRNA molecule like a train on a track, catalyzing the assembly of amino acids into a protein chain; it also binds tRNAs and various accessory molecules necessary for protein synthesis. A long chain of amino acids emerges as the ribosome decodes the mRNA sequence into a polypeptide chain — a new protein. As the cloned genes produce recombinant protein, the E. coli host cells begin accumulating the product.
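The ribosome's decoding loop — read a codon, look up its amino acid, stop at a stop codon — can be sketched directly. The table below is only a small slice of the standard genetic code (six of the 64 codons), enough to translate a toy mRNA:

```python
# Translation: the ribosome reads the mRNA three bases (one codon) at a
# time, appending the amino acid each codon encodes, until a stop codon.
CODON_TABLE = {
    "AUG": "Met", "CUG": "Leu", "AAA": "Lys",
    "GGC": "Gly", "UUC": "Phe", "UAA": "STOP",
}

def translate(mrna: str) -> list[str]:
    """Decode an mRNA into its amino acid chain (three-letter codes)."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("AUGCUGAAAUAA"))  # → ['Met', 'Leu', 'Lys']
```

The start codon AUG doubles as the code for methionine, which is why real proteins typically begin with that amino acid.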
Surviving clones that carry the protein of interest form a colony, which is grown into a large culture. The next task is to collect and purify the specific product — the desired recombinant protein.

The first step in collecting a recombinant protein expressed in E. coli is lysis (breaking open) of the cell to release the protein of interest. In the cell lysis process, the bacterium's cell membrane is ruptured, exposing the contents, and lipids from the membrane are broken down with detergents and surfactants. Lysis can be performed by enzymatic, chemical or mechanical means.

Extraction, separation and purification are the techniques used to concentrate the protein of interest. Purification is a necessary step after the newly created target protein is extracted from the E. coli bacterium: it must be separated from cell debris and other insoluble material, contaminants, plasmid DNA, and the host's other proteins and macromolecules.

Most commercial proteins are formulated in phosphate-buffered saline solutions. Liquid formulations usually are preferred for injectable protein therapeutics, in terms of convenience for the end user and ease of preparation for the manufacturer. The most common liquid product containers are bottles, flasks, vials and trays.

The liquid form is not always feasible, however, given the susceptibility of proteins to denaturation and aggregation under stresses such as heating, freezing, pH changes and agitation — all of which could result in the loss of biological activity. Lyophilization, also called "freeze-drying," is a method of drying biological materials that minimizes damage to their internal structure, and it generally results in improved stability profiles. Lyophilized protein products can be shipped and stored in powder form in plastic and glass jars and bottles; at time of use, the original liquid formulation is reconstituted.
The protein can be supplied in a two-chamber cartridge, with the lyophilized powder in the front chamber and a diluent in the rear chamber; a reconstitution device mixes the diluent and powder. Some proteins designed for oral consumption can be distributed as capsules consisting of powder or jelly enclosed in a dissolvable gelatin container, or as tablets — compressed powder in solid form.

DNA synthesizers are machines used to custom-build DNA molecules containing a particular sequence of nucleotides. They can create specific DNA molecules for use in treating a variety of diseases by replacing a faulty or damaged section of DNA with a repaired section. The devices accept digital representations of DNA in the FASTA file format over the Internet, and reconstruct the molecules from the chemical building blocks corresponding to the four nitrogenous nucleotide bases — A, T, C and G — that make up DNA.

Following are some examples of leading commercial DNA synthesis platforms:

- the GenPlus Next-Gen HT Gene Synthesis platform from GenScript Biotech Corp.;
- a semiconductor-based synthetic DNA manufacturing process featuring a high-throughput silicon platform from Twist Bioscience Corp.;
- the Invitrogen GeneArt GeneAssembler gene synthesis platform from Thermo Fisher Scientific; and
- the Gene Designer from ATUM (formerly DNA2.0).
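The FASTA format that synthesizers accept is a simple plain-text convention: a header line beginning with ">" names the sequence, and the following lines hold the bases. A minimal reader sketch (the record name below is hypothetical):

```python
# Minimal FASTA reader: each record is a ">" header line followed by one
# or more sequence lines, which are concatenated into a single string.
def parse_fasta(text: str) -> dict[str, str]:
    """Map each FASTA record name to its full sequence."""
    records: dict[str, str] = {}
    name = None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith(">"):
            name = line[1:]
            records[name] = ""
        elif name is not None:
            records[name] += line
    return records

sample = """>example_fragment
ATGGCC
CTGTGG"""
print(parse_fasta(sample))  # → {'example_fragment': 'ATGGCCCTGTGG'}
```

Because the format is just text, a sequence designed on one machine can be e-mailed or uploaded and synthesized anywhere — which is what makes the "DNA printing in the cloud" model workable.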