The U.S. government operates the most powerful supercomputers in the world. Using some of that power to process government health care records would be difficult to set up, but the benefits could be worth it, says William Jackson.

The application of information technology to health care records promises to improve not only medical care but also health care processes, as critical information is digitized for greater portability and easier access. Medical claims processing has been computerized for years, but it remains a fragmented and time-consuming process filled with inefficiencies. The U.S. government is the nation’s largest health care provider, and its system of paying first, evaluating claims later, and finally chasing down inappropriate payments months after the fact has resulted in waste estimated in the hundreds of billions of dollars a year.

But the government also operates the most powerful supercomputers in the world. Why not use some of that power to process government health care records? That is the idea of Andrew Loebl, a senior researcher in the Computational Science and Engineering Division at Oak Ridge National Laboratory. The benefits would be threefold: real-time processing of claims as a whole could help identify fraudulent, wasteful and inappropriate claims before they are paid; it could provide a better understanding of the relationship between treatment and outcome; and because the data would remain in government hands, it could ensure privacy.

The technology is not a problem, Loebl said. Oak Ridge’s Jaguar supercomputer, ranked the fastest in the world at 2.3 petaflops (a petaflop is a thousand trillion floating-point operations per second), has more than enough memory and processing power to handle a year’s worth of claims in minutes without interfering with ongoing research. And the software to process and analyze claims already exists. “There’s nothing rocket science about this,” Loebl said. “None of this will take any sophisticated software.”

The hard part would be getting the myriad government health programs, from Medicare and Medicaid to those run by the Defense and Veterans Affairs departments, to combine their data into a single pool for processing and analysis. “The idea is unbelievable to the decision-makers,” Loebl said. “We’re a long way from persuading people that this is practical.”

Such a program would not completely reform government health care delivery and payment systems, of course. The folks at Oak Ridge might know nuclear fission and supercomputers, but they are not physicians, and they would not be handling the actual claims payments and enforcement. There still would be plenty of work for administrators and contractors to do. But the economies of scale possible from applying supercomputing power to heretofore distributed business processes could help stem waste, fraud and abuse in the health care system while protecting the integrity of sensitive information.

Combining data from a variety of programs into a single coherent stream for processing would not be a trivial task. It would involve a level of cooperation not often seen among agencies, in addition to substantial changes in business policies and processes. It is likely that legislation would be necessary to allow or require it. But the potential benefits could well be worth the effort. Administrators and legislators should take a careful look at Loebl’s idea, determine its feasibility, evaluate costs and benefits, and decide whether it is worth moving forward with.
After all, as Loebl said, it’s not rocket science.
https://gcn.com/cloud-infrastructure/2010/01/why-not-use-a-supercomputer-to-process-health-care-records/293490/?oref=gcn-next-story
The Department of Energy (DoE) is hoping that investing in research and development (R&D) for supercomputers will help the department achieve its clean energy goals. The department announced that it has allocated $28 million in funding for five research projects to develop software that will “fully unleash the potential of DoE supercomputers to make new leaps in fields such as quantum information science and chemical reactions for clean energy applications.”

“DoE’s national labs are home to some of the world’s fastest supercomputers, and with more advanced software programs we can fully harness the power of these supercomputers to make breakthrough discoveries and solve the world’s hardest to crack problems,” said U.S. Secretary of Energy Jennifer Granholm. “These investments will help sustain U.S. leadership in science, accelerate basic research in energy, and advance solutions to the nation’s clean energy priorities.”

The funding awards were made through the DoE’s Scientific Discovery through Advanced Computing (SciDAC) program and will bring together a variety of experts in science and energy research, applied mathematics, and computer science to take maximum advantage of DoE’s supercomputers, allowing them to quicken the pace of scientific discovery. The projects are sponsored by the Offices of Advanced Scientific Computing Research (ASCR) and Basic Energy Sciences (BES) within the Department’s Office of Science through the SciDAC program.

The DoE said the selected projects will focus on computational methods, algorithms, and software to further chemical and materials research, specifically for simulating quantum phenomena and chemical reactions. Research teams will partner with either or both of the SciDAC Institutes.

The DoE provided a list of the selected research institutions, as well as the research proposal titles. The selected research institutions and projects are:

- California Institute of Technology: Traversing the “death valley” separating short and long times in non-equilibrium quantum dynamical simulations of real materials;
- Florida State University: Relativistic Quantum Dynamics in the NonEquilibrium Regime;
- Berkeley National Laboratory: Large-scale algorithms and software for modeling chemical reactivity in complex systems;
- University of California, Santa Barbara: Real-time dynamics of driven correlated electrons in quantum materials; and
- University of California, Riverside: DECODE: Data-driven Exascale Control of Optically Driven Excitations in Chemical and Material Systems.

The DoE said the research projects were chosen by competitive peer review under a DoE Funding Opportunity Announcement open to universities, national laboratories, and other research organizations.
https://origin.meritalk.com/articles/doe-invests-28m-in-supercomputing-rd-to-meet-clean-energy-goals/
Having a calibration programme is just part of the process towards generating a valid result. It provides assurance that suitable equipment is being used and that results are traceable to SI units. Validation, however, is the process of providing objective evidence that a method can perform consistently to meet specified requirements that are adequate for the intended use of the method. Requirements could be that a specific (low enough) detection level or a high enough accuracy can be achieved. ISO 17025 also requires that a laboratory monitors and evaluates its performance by comparison with other laboratories, through proficiency testing (PT) or interlaboratory comparisons. All these activities are required to show technical competency.
https://community.advisera.com/topic/calibration-program/
Cybercriminals regularly seize on popular news stories to take advantage of public fears. Case in point: the COVID-19 coronavirus outbreak. As reported cases and death tolls rise worldwide, malicious actors are using the pandemic to entice people to click on links, open attachments, and generally forget their security best practices and information awareness training.

Here are four common cyber threats to watch out for—and potential ways to keep your employees, data, and organization safe during the COVID-19 pandemic.

1. Misleading “health and safety” emails

In the most common COVID-19 cyber threat, emails promise valuable information, but instead deliver dangerous malware for cyberespionage, ransomware installation, and credential theft. Examples include:

- Ransomware through a fake statement about coronavirus in Hong Kong, which referenced “Dr. Chuang Shuk-kwan, Head of the Communicable Disease Branch” to add an appearance of legitimacy
- A remote access trojan through a PDF of coronavirus safety measures
- Information-stealing malware through a coronavirus-themed email campaign about the shipping industry
- A virus through a coronavirus-themed document
- A malware bot through an email titled “Emergency Regulations” that looks like it’s from the Chinese Ministry of Health
- “Coronavirus” ransomware that used a fake version of the WiseCleaner site for Windows system utilities

Many examples of coronavirus social engineering so far have masqueraded as public health or official government announcements. However, as the virus spreads to the United States, some actors may adjust their tactics to pose as other prominent public officials, including politicians and local health authorities.

2. Dangerous websites and maps

Not all websites with COVID-19 in their URL are legitimate or safe. In late February 2020, Check Point reported 3% of all COVID-19-themed domains to be malicious and another 5% as suspicious, out of a sample of more than 4,000 domains. As people search for information about the virus’ geographic spread, cybercriminals are also using online maps—and selling coronavirus-themed malware loaders online. In a well-publicized case, spoofed versions of Johns Hopkins University’s COVID-19 tracking map distributed information-stealing malware.

3. Phishing scams

Pretending to offer infection-prevention measures, information about new cases, and general COVID-19 “awareness,” phishing campaigns target Microsoft Outlook and Office365—and credit card data. Scammers promise you can:

- Donate food, water, and medical care, sometimes with a QR code for “donating” bitcoins
- Access non-public information that “is not being told to you by your government”
- Buy hand sanitizers, vitamins, supplements, and other supplies to fight infection
- Purchase a COVID-19 vaccine, payable by bitcoin through a fake PayPal page [Note: There is currently no vaccine to prevent coronavirus disease.]

4. State-sponsored campaigns

Nation-state actors are suspected to be actively using coronavirus themes in malware campaigns. While data remains relatively limited and it’s unclear how frequent this activity is, it seems clear that government-backed actors are utilizing mentions of the coronavirus to social engineer victims. At the moment, state-sponsored campaigns appear to be geared predominantly toward cyberespionage. However, other types of campaigns, such as those targeting intellectual property, may be possible.
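The domain statistics above suggest how even a crude screening heuristic can help. Below is a minimal, hypothetical Python sketch that flags coronavirus-themed domains and near-misses of trusted health sites; the keyword list, trusted domains, and similarity threshold are illustrative assumptions, not Check Point's methodology.

```python
# Hypothetical screening heuristic for pandemic-themed lookalike domains.
# Keywords, trusted domains, and the 0.75 threshold are assumptions.
from difflib import SequenceMatcher

TRUSTED = ["jhu.edu", "who.int", "cdc.gov"]
THEME_KEYWORDS = ("covid", "corona", "vaccine")

def is_suspicious(domain: str) -> bool:
    d = domain.lower()
    themed = any(k in d for k in THEME_KEYWORDS)
    # Flag near-misses of trusted domains (e.g., "jhu.edu.co"),
    # but not exact matches of the trusted domains themselves.
    lookalike = any(
        0.75 <= SequenceMatcher(None, d, t).ratio() < 1.0 for t in TRUSTED
    )
    return themed or lookalike

for candidate in ["covid19-live-map.com", "who.int", "jhu.edu.co"]:
    print(candidate, "->", "suspicious" if is_suspicious(candidate) else "ok")
```

A real filter would of course sit alongside reputation feeds and certificate checks; this only illustrates the kind of signal defenders look for.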
https://www.boozallen.com/insights/covid-19/coronavirus-related-cyber-threats.html
Technology shapes the way we live, work and play. As we continue through the fourth industrial revolution, Industry 4.0, more and more technologies are being introduced that make different aspects of our daily lives easier and more digitised. The Internet of Things (IoT) plays a significant role in this technological era, where the cyber and physical worlds are integrated to create higher levels of automation, increase quality, improve efficiencies and drive hyper customisation.

Working as the data backbone for Industry 4.0, IoT enables the convergence of data between operational technology and information technology. It works with all critical technology such as cloud, 5G, edge computing and AI to drive real-time controls and decision making.

“IoT delivers real-time access to data on the location, health and performance of a particular industrial asset, making that information available for analysis and action by AI and human experts and helping them make sense and pull actionable and meaningful insights. This in turn is essential for improving asset efficiency, increasing equipment uptime, reducing risks, and minimising costs,” says Bjorn Andersson, Senior Director, Global IoT at Hitachi Vantara.

Adding to this, David Beamonte Arbués, Product Manager (IoT & Embedded Products) at Canonical, notes: “IoT is a core part of Industry 4.0 because it allows us to build digital networks of machinery, devices, and infrastructure. By using IoT, organisations can assemble smart factories and supply chain processes which continuously collect data.”

“Businesses can then apply AI and Machine Learning (ML) technologies which, once synchronised, remove silos in the supply chain process and allow unprecedented levels of transparency, automation, insight and control. Industry 4.0 focuses heavily on interconnectivity, automation, ML and real-time data. It marries physical production and operations with smart technology – none of which would be possible without IoT at its core,” he continues.

Managing supply chain disruptions with IoT

Many, if not all, industries can benefit deeply from the integration of IoT, but the supply chain is where technologists could reap the most benefits. Because IoT devices use sensors and software to measure specific elements of the supply chain – including location, temperature and movement – they can be incredibly helpful in flagging issues before they become too significant. The technology can also authenticate products and shipments, streamline the problematic movement of goods, and provide accurate timescales of when those items will arrive – thus revolutionising supply chain management by providing new levels of transparency, automation, insight and control of the overall supply chain process. Ultimately, the technology enables organisations to make smarter, cost-effective and timely decisions.

“Some of the recent supply chain disruptions we have seen are due to a lack of real-time visibility across the supply chain. For a well-functioning supply chain, across multiple levels, you must collect, integrate and analyse data to provide a single view of the supply chain,” says Hitachi Vantara’s Andersson. “This view should include data from sensors and devices, such as data associated with temperature and vibration. IoT enables this level of visibility, allowing you to see if suppliers can meet their commitments, or to spot when a proactive action needs to be taken to prevent a disruption in production.”

He continues: “As more and more data is fed in, you can start exploring technologies such as digital twins. By using data to build digital twins, you can run scenarios that help optimise the supply chain whilst preparing for any unexpected scenarios.”

Interestingly, it is not just supply chain management where IoT devices prove their worth for organisations, but delivery and fulfilment too, as Janet White, Industrial Products Leader at IBM Consulting, explains: “Delivery and fulfilment companies can gain efficiencies and improve customer satisfaction through IoT. From an efficiency perspective, the simple task of scheduling resources around the availability of a product or service from suppliers can be improved to allow the allocation of the right storage space, vehicle, equipment and people to match the products or services that are ready to ship. IoT can provide information to fulfilment companies across multiple time scales, from advance planning to on-the-day resource allocation.”

Responding to worldwide events with the help of IoT

Although cost reduction and productivity remain key drivers of supply chain enhancements, the complexities brought about by the coronavirus pandemic and increasing geopolitical unrest have added a new dimension to the needs and capabilities of supply chains. This pressure has threatened the survival of many businesses as they grapple to meet customer demand and expectations. As a result, many organisations have shifted their focus from cost and productivity to business continuity by ensuring resilience and flexibility. Supported by IoT, this pivot is made a lot easier as organisations try to build resilient and flexible processes.

Adding to this, Beamonte Arbués provides an example: “Leveraging IoT correctly allows organisations to develop a resilient, dynamic approach to inventory forecasting. This involves combining data intelligence with pattern analysis, which, over time, allows accurate forecasting of stock demands. More importantly, it provides insights that can aid necessary interventions in the event of faulty operations or unexpected, external pressures and demands, such as those caused by the aftermath of COVID-19 or the war in Ukraine.”

Additionally, the data collected by IoT sensors, combined with advanced data analytics and AI, helps build a clearer picture of the entire supply chain. “Once that data is analysed, you can start applying it to look ahead. In practice, that can be playing out the ‘what-if’ scenarios to see the knock-on impact of a disruption at a specific point in the supply chain. This helps identify the critical areas of a given supply chain. Once that has been mapped out, organisations can develop their back-up plans for when a disruption occurs,” explains Andersson.

He concludes: “When a crisis erupts, like a border closure for example, the company able to respond quickest will see the greatest benefit. It could be as simple as having a contract in place that allows transport to be routed around that location. But that contract can only be foreseen through the use of IoT data in the earlier stages of contingency planning.”
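To make the sensor-driven flagging described above concrete, here is a minimal Python sketch that raises alerts when shipment readings cross thresholds; the reading schema, shipment IDs, and threshold values are illustrative assumptions, not any vendor's platform.

```python
# Minimal sketch: flagging supply chain sensor readings before they
# become disruptions. The thresholds and reading schema are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Reading:
    shipment_id: str
    temperature_c: float   # e.g., cold-chain cargo temperature
    vibration_g: float     # shock/vibration sensor

def check(reading: Reading, max_temp: float = 8.0, max_vib: float = 2.0) -> list:
    alerts = []
    if reading.temperature_c > max_temp:
        alerts.append(f"{reading.shipment_id}: temperature {reading.temperature_c}C exceeds {max_temp}C")
    if reading.vibration_g > max_vib:
        alerts.append(f"{reading.shipment_id}: vibration {reading.vibration_g}g exceeds {max_vib}g")
    return alerts

for r in [Reading("SHIP-001", 4.2, 0.3), Reading("SHIP-002", 11.5, 2.8)]:
    for alert in check(r):
        print(alert)
```

A production system would stream readings continuously and feed them into the analytics and digital twin tooling the article describes; the point here is only the shape of the rule.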
https://aimagazine.com/technology/ibm-canonical-hitachi-vantara-iot-and-the-supply-chain
Have you ever been lost inside a large building? What if there was technology available to point you in the right direction? A National Institute of Standards and Technology (NIST) research team collected data from four different smartphone models over an 18-month period to further the development of indoor navigation apps. This type of technology would assist users in finding their way inside unfamiliar buildings, help emergency responders find victims, direct visitors to specific works of art in large museums, and more.

“The user community has expressed the need for careful testing of indoor localization solutions,” Nader Moayeri, NIST’s principal investigator on the project, said in a press release. “Fire departments, for example, strongly desire ways to find a comrade who’s fallen inside a burning building, and who may die because he cannot determine the exit location due to low visibility from smoke or some other reason. Fire departments need to know how well these solutions are going to work before they invest their limited financial resources in them.”

Developing apps for this technology is vital: the Federal Communications Commission estimates that more than 10,000 lives can be saved annually with better and timely location information for 911 calls placed from cellphones. Therefore, to speed the development of these apps, NIST is sponsoring PerfLoc, a competition for app developers to make this technology a reality. To assist developers, NIST has made the collected data available to the public, and is accepting submissions through Aug. 17. The top three submissions will receive cash prizes, and the first-place winner will also be flown to a conference in Japan to present their idea and do a live demonstration of their app.

“Of course, the biggest reward will not be the cash prize,” Moayeri said. “The prestige that goes with it will matter to the designer.”
https://origin.meritalk.com/articles/nist-research-to-support-indoor-navigation-technology/
World Password Day: Healthy Password Etiquette 101

World Password Day is dedicated to reminding people about the importance of protecting themselves online using strong passwords. Founded in 2013, it’s also a day to commemorate security researcher Mark Burnett’s book, Perfect Password: Selection, Protection, Authentication; Burnett was an advocate for people to use stronger and smarter passwords. So, in honour of today we’d like to share with you our tips for choosing and managing your passwords more securely so that you’ll be safer online.

Remembering a multitude of passwords for our various accounts can be frustrating, and many of us resort to simple, easy-to-remember passwords that we reuse everywhere online, from our social media accounts to our banking sites. This is a recipe for disaster in the form of identity theft or an account takeover. When you reuse passwords, every account that uses the same password becomes vulnerable if one account is compromised. Hackers use a multitude of techniques to crack well-known passwords. Fortunately, a tool called a password manager can create and keep track of robust, unique passwords, helping you to avoid identity theft and other forms of online fraud.

Why you should join the 20% who use a password manager

A password manager can store all your passwords securely, so you don’t have to worry about remembering them. Password managers can generate complex, random passwords for all the unique sites you visit. They store these credentials encrypted in a secure virtual ‘vault’, so that when you return to a site, the password manager will automatically fill in your login details for you. Your passwords are stored using a unique key that only you can access with your “master” password, so it’s vital that you use a strong master password to control access to your password manager. Password managers can also synchronise your passwords across your different devices, making it easier to log on, wherever you are and whatever you’re using.

Password managers also let you know if you’re reusing the same password across different accounts and notify you if your password appears within a known data breach, so that you know if you need to change it.

As with any technology, password managers aren’t perfect. If you forget your master password, you’ll be locked out of the password manager’s database, forcing you to reset the password for all your accounts. But security researchers would argue that the benefits you get from a password manager outweigh the drawback of a possible vulnerability associated with it.

Why password managers are trusty

The main reason there is so much advocacy for password managers is that they are cryptographically secure. They may seem to introduce a single point of failure, especially if your master password is insecure in the first place. However, the systems in place are more secure than most password handling in general, which greatly improves the level of trust.

LastPass is one of the more popular password managers. It derives your vault key by combining your email address with your master password and hashing the result using PBKDF2, a key derivation algorithm that iterates this hashing process 100,000 times. The vault key is then hashed again with your password before it’s stored on the cloud. While this process might seem complicated, that’s the whole point!
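To make the derivation concrete, here is a minimal Python sketch of PBKDF2-style key stretching, loosely modelled on the scheme described above. The iteration count follows the article; the exact salt composition and parameters of any real product will differ.

```python
# Minimal sketch of PBKDF2-style vault key derivation, loosely modelled
# on the LastPass scheme described above. The salt composition and the
# second hashing step are illustrative assumptions.
import hashlib

email = "user@example.com"                       # acts as the salt
master_password = "correct horse battery staple"

# Derive the vault key: hash the password, salted by the email,
# iterated 100,000 times so brute force becomes expensive.
vault_key = hashlib.pbkdf2_hmac(
    "sha256",
    master_password.encode(),
    email.encode(),
    100_000,
)

# Hash once more with the password before anything is stored
# server-side, so the server never holds a value from which the
# password itself is recoverable.
auth_hash = hashlib.pbkdf2_hmac("sha256", vault_key, master_password.encode(), 1)

print(vault_key.hex()[:16], auth_hash.hex()[:16])
```

Each extra iteration multiplies an attacker's work while costing a legitimate user only a fraction of a second, which is the whole design idea behind key stretching.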
The reason hashing is so important in security is that it’s a one-way function. Unlike encryption, there’s no way to reverse a hash and recover the original input. So not only would an attacker be unable to learn your password if they theoretically compromised the whole of LastPass, but LastPass themselves don’t know your password either! This is generally why websites ask you to reset your password when you’ve forgotten it, rather than simply reminding you what it was.

In addition to their strong security, password managers have become more elegant in their usability over the years. 1Password is an example of a password manager that has incorporated biometrics into its authentication, allowing users to log in to their account through Touch ID instead of their master password. This considerably improves the user experience, as it eliminates the need to type out the same password with each attempt, especially as we already use the same authentication to unlock our mobile phones. 1Password also provides features that extend it beyond just a password manager, such as notifying the user whenever an account of theirs is compromised, as well as offering other categories of storage such as credit cards. It even goes as far as providing an emergency kit: in case you forget your master password, you are provided with a secret key to sign yourself back in. The only issue with this system is that there is no way to recover your secret key if you lose that as well.

How to create strong passwords

While password managers can create unique and complex passwords for you, you still need good password etiquette if you decide to create your own. As mentioned above, the consequences of a badly chosen password can be severe. Cyber criminals are becoming more resourceful when it comes to cracking passwords. In the era of COVID-19 and remote working, it is more important than ever to secure not only your passwords but also your network with a strong password that can protect your sensitive information from unwanted attention.

Password length and complexity are the essentials of good password hygiene. Long and complex passwords require more effort and time on behalf of the adversary. Passwords should contain at least ten characters and include a combination of special characters, as well as upper-case and lower-case letters and numbers. It is important to strike a balance between a memorable password and a complex one; an overly complex password that you will more than likely forget is no use to you. Make use of passphrases where you can: arranging unrelated words in an odd order can create a powerful password (see the sketch at the end of this article). Having said all of this, the rule of thumb for a strong password is that you should never reuse it. As we mentioned earlier, reusing passwords is a massive red flag and can leave your accounts susceptible to being compromised all at once.

Find out more about current cybersecurity issues on the Adarma blog, or if you’re looking for more specific support then read up on the cybersecurity services offered by Adarma.
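As referenced above, here is a minimal sketch of passphrase generation using cryptographically secure randomness; the 12-word inline list is purely for illustration, and a real generator should draw from a much larger word list.

```python
# Minimal sketch of the passphrase advice above: a few unrelated words
# in an odd order. The tiny inline word list is an assumption for
# illustration; use a large list (e.g., a diceware list) in practice.
import secrets

WORDS = ["lantern", "orbit", "velvet", "cactus", "thunder", "marble",
         "pickle", "glacier", "falcon", "ribbon", "copper", "mosaic"]

def passphrase(n_words: int = 4, sep: str = "-") -> str:
    # secrets.choice uses the OS's cryptographic random source,
    # unlike random.choice, which is predictable.
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())  # e.g., "cactus-ribbon-orbit-thunder"
```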
https://adarma.com/world-password-day/
For people outside the tech industry, an API is an obscure concept that’s often misunderstood. Businesses that serve users over the internet often package their APIs as products; for example, access to Bloomberg’s server API requires an active account on Bloomberg and runs a company upwards of thousands of dollars. What role does the API play for the companies that make use of it? Let’s explain.

The Server Client Model

Before we can get into APIs themselves, we’ll first need to brush up on how requests are processed over the internet. When you load up a website on your phone or desktop browser, or open up an app that retrieves information from the internet, you’re basically submitting a request to the website or application’s server. This makes you the client; the client communicates with the server with the aim of servicing a need of some kind. The need itself could be anything in particular; you could be searching for something on Google or trying to upload a picture to your Instagram account.

To explain what an API, or Application Program Interface, is, let’s look at the Google example first. You search for a term and are returned results relevant to that search term. To generate those results, your search term needs to be communicated to Google’s server, and the server’s response needs to be communicated to your device. All of this communication takes place via an API.

The Instagram example is slightly different. When uploading a picture, you not only need to retrieve information from Instagram’s server, but you also need to update information on the server itself for the picture to make it onto your page. In this case, you will first communicate to the server that you need to upload a picture. Once the server is primed and prepared to accept a picture, you’ll submit that picture to Instagram’s API via your browser or application, and the API will ferry it to the server.

A good real-world example of what an API does would be that of a waiter in a restaurant. If people were storming into the kitchen every time they needed more ice in their drinks, the kitchen itself wouldn’t be able to get anything done. The waiter (the API) acts as an intermediary between the kitchen (the server) and the customer (the client), making sure communication is structured, streamlined, and results-oriented.

But speed isn’t everything. The greatest benefit of APIs is that they limit the amount of communication that needs to occur between a client and the server. In doing so, they limit the amount of exposure you have to the server and the server has to you. Instead of communicating directly with the other application, your browser and the server each only exchange small packets of data via APIs. From an error handling and load balancing perspective, it’s much easier to build an API that only accepts data in a certain format than it is to configure your server to handle all the different kinds of requests that could possibly come in.

In this way, APIs lend structure to the wider network framework; their development is highly regimented and governed by strict principles, and they have their own unique software development life cycle (SDLC). This also means that they ensure connectivity and backward compatibility for emergent technologies. Think of it this way: any future data stream can be developed independently of considerations for present-day interfaces. Once it’s been fully developed, all you’ll need is an API that can facilitate communication between the existing network and the new data stream.

There are multiple API protocols (SOAP, REST, and RPC) and different types of APIs built for different kinds of communication (private, public, internal, and composite). The protocols offer different tradeoffs in terms of development; RESTful API development offers greater flexibility to developers and is generally preferred when developing public APIs that are meant to cater to a wide variety of simple queries. SOAP, meanwhile, is useful for internal or enterprise-level communication, where security and standardization are both priorities. RPC is limited in its scope and in the security considerations it can handle, but it’s useful when the speed of data transactions is a priority and security isn’t, for example, when the employees of a company are querying the company’s own internal database.

Conclusion for what is an API?

To recap, an API is a piece of software that acts as an intermediary between two different programs or applications. It allows for secure, precise communication between the two programs by lending structure to incoming and outgoing messages. This makes APIs very useful and lucrative tools for organizations that benefit from or rely upon exchanging information and/or network resources. To learn more about what an API is, contact us at Cloud Computing Technologies.
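To ground the client/server exchange described above in code, here is a minimal Python sketch of a client issuing a GET request to a REST API. The endpoint (jsonplaceholder.typicode.com) is just a public demo service used for illustration, not one of the APIs discussed in the article.

```python
# Minimal sketch of the client/server exchange: the client sends a
# structured request, the API validates and routes it, and a structured
# JSON response comes back. The endpoint is a public demo service.
import json
import urllib.request

url = "https://jsonplaceholder.typicode.com/posts/1"
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)   # the API returns a small, well-defined packet

print(data["title"])
```

Note how little either side needs to know about the other: the client only knows the URL and the response format, and the server only ever sees requests shaped the way its API allows.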
https://cloudcomputingtechnologies.com/what-is-an-api-an-introduction-to-application-program-interfaces/
Ransomware is a fast-emerging threat and a successful attack method faced by organisations and individuals alike. Attackers use malware to break into computers and lock victims out of patient records, sensitive documents, case information, and other irreplaceable data. Victims then face a tough choice: pay the ransom or lose access to their data forever.

Since ransomware first appeared in 1989, malware authors have used increasingly advanced and more complex techniques. Today, ransomware is rapidly spreading through phishing emails that contain an infected Word document, as well as through a technique called “malvertising,” which works by inserting malicious code into an online ad network. When a person visits a website featuring the malicious ad, the code executes and the browser automatically loads the malicious content.

With new variants appearing daily, ransomware is a rapidly growing threat. In fact, by the end of 2011 nearly 100,000 different variants of ransomware had been seen in the wild. In the first quarter of 2015, the number of known unique variants had jumped to more than 750,000. In light of these numbers, organisations large and small should take some practical steps to reduce their chances of falling victim to this threat.

When securing their data, organisations are usually encouraged to reduce the overall volume of data they hold and focus on protecting the data that means the most to their customers and to the organisation itself. However, ransomware encourages us to turn this concept on its head.

Backup, backup, and backup again

Your most reliable defence is something you should be doing already: backing up your data on a regular basis. If you have a complete backup of your important data and fall victim to a ransomware attack, you can simply wipe your computer clean, recover your data from your backup, and avoid paying the ransom that your attackers are demanding.

However, it’s not enough just to back up your data every now and then. Instead, it’s critical to keep the backup up to date. What’s more, it’s a good idea to keep multiple backups, in case your system automatically backs up after the ransomware takes hold and overwrites your backup with compromised data. Just remember to keep your backups offline and isolated from your network, as some ransomware will try to encrypt networked and removable drives. You should also regularly test your backups by restoring from them; this will give you confidence that the backup data is safe, and you’ll know what to do in the event of a real problem.

Keeping your attack surface in check

At this stage, you’re probably thinking that creating backups presents a bit of a contradiction, as it can only add to your attack surface by generating more potentially vulnerable data. The trick is to address the issue of what data you store and what value it has. For example, you probably no longer need emails that are four years old, or notes about every meeting you attended three years ago. However, losing that important document you’re working on just a few hours before your deadline would have a devastating effect. When it comes to managing your data, best practice is to remove ROT (redundant, obsolete, and trivial) data, appropriately secure the most important information, and protect it against the worst-case scenario.

Taking a well-rounded approach

While regular backups will help protect against ransomware, there are additional practices you should also adopt. In order to implement defence in depth, there are some tips you should consider:

- Ensure you have strong and up-to-date antivirus and firewall solutions
- Only use escalated account privileges – through which an account gains elevated access to resources that are normally protected from an application or user – when they are needed
- Set up good email rules and filters to protect against potentially dangerous attachments, such as .exe files
- Enable software – such as Windows System Restore, Mac OS Time Machine, or the equivalent in your operating system – which will allow you to restore your computer's system files to an earlier point in time
- Disable remote access tools if you don’t need them
- Ensure all your software is up to date
- Stay aware of security issues
- Consider using a Cryptolocker prevention utility to guard against this most common form of ransomware

Limiting the damage

So what should you do if you do fall victim to ransomware? The first step is to immediately isolate the infected machine from the network to contain the infection. Take some time to consider your options before acting, and avoid the initial instinct to pay the ransom to make the problem go away. Depending on the type of attack, you may have better options than paying your attacker. For example, the Windows volume shadow copy service can help you recover data or encryption keys from certain ransomware variants.

The dangers of ransomware aren’t going away anytime soon. The ease with which attackers can compromise a machine means it will be a threat for years to come. Fortunately, enterprises can take a few simple steps to protect themselves. By backing up your data, not opening attachments from unknown senders, and following the suggestions above, you can greatly decrease the risk and impact of becoming a victim.

Stuart Clarke, Chief Technical Officer, Cybersecurity, Nuix
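In the spirit of the "regularly test your backups" advice above, here is a minimal Python sketch that verifies a backup directory against a saved SHA-256 manifest; the paths and the manifest format are illustrative assumptions.

```python
# Minimal sketch: verify a backup directory against a saved SHA-256
# manifest, so corruption or missing files are caught before you need
# the backup. Paths and manifest format are illustrative assumptions.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(backup_dir: Path, manifest_file: Path) -> list:
    # Manifest format assumed: {"relative/path": "hexdigest", ...}
    manifest = json.loads(manifest_file.read_text())
    problems = []
    for rel, expected in manifest.items():
        p = backup_dir / rel
        if not p.exists():
            problems.append(f"missing: {rel}")
        elif sha256_of(p) != expected:
            problems.append(f"corrupted: {rel}")
    return problems

# Example usage (hypothetical mount point for an offline backup drive):
# print(verify(Path("/mnt/offline-backup"), Path("manifest.json")))
```

A checksum pass is no substitute for an actual restore drill, but it is a cheap, automatable first check that a compromised or silently degraded backup would fail.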
https://www.itproportal.com/features/overcoming-the-threat-of-ransomware/
Email provides us a convenient and powerful communications tool. Unfortunately, it also provides scammers and other malicious individuals an easy means for luring potential victims. To protect yourself from these scams, you should understand what they are, what they look like, how they work, and what you can do to avoid them.

Unsolicited commercial email, or “spam,” is the starting point for many email scams. Before the advent of email, a scammer had to contact each potential victim individually by post, fax, telephone, or through direct personal contact. These methods would often require a significant investment in time and money. Email has changed the game for scammers. The convenience and anonymity of email, along with the capability it provides for easily contacting thousands of people at once, enables scammers to work in volume.

The different kinds of emails we are likely to receive are:

- Fraud schemes such as bogus business opportunities, chain letters, work-at-home schemes, health and diet scams, easy money, “free” goods, investment opportunities, bulk email schemes, cable descrambler kits, and “guaranteed” loans or credit
- Bogus business opportunities, for example emails with subjects like ‘Make a Regular Income with Online Auctions’, ‘Put your computer to work for you!’, ‘Get Rich Click’, ‘Use the Internet to make money’, etc.
- Health and diet scams, for example emails with subjects like:
  - Need to lose weight for summer
  - Reduce body fat and build lean muscle without exercise
  - Increase Your Sexual Performance Drastically
  - Young at any age
  - CONTROL YOUR WEIGHT!!
  - Takes years off your appearance
- Discount software offers that consist of advertisements for cheap versions of commercial software, like the latest package of Windows or Photoshop
- Phishing emails that are crafted to look as if they’ve been sent from a legitimate organization, e.g., from a bank. These emails attempt to fool you into visiting a bogus web site to either download malware (viruses and other software intended to compromise your computer) or reveal sensitive personal information. The bogus site will look astonishingly like the real thing, and will present an online form asking for information like your account number, your address, your online banking username and password—all the information an attacker needs to steal your identity and raid your bank account.

How to detect phishing emails

Following are the different ways to detect phishing emails:

1. The email is sent from a public email address

Look at the sender’s email address, as this can help identify if the person is truly who they claim to be. Often, the criminal will use a public email address such as gmail.com. If your bank or colleague is going to email you, it will come from a company email account with the company name in the email address. (A simple automated version of this check, and of the link check in point 4, is sketched at the end of this article.)

2. Strange attachments

If you receive an unexpected email or an email from someone you don’t know asking you to open an attachment, do not open it. These attachments can contain malware that can harm your computer and capture your personal data.

3. The creation of a sense of urgency

Phishing emails often ask recipients to verify personal information, such as bank details or a password. They can create a sense of urgency by warning that your account has experienced suspicious activity or pretending to be someone you know who is in urgent need of financial help. These are massive warning signs. If you are ever unsure, contact the company or person using the contact details you already have for them or that are on their legitimate website. Never use any contact details or click any links provided in the email.

4. Links to unrecognised sites or URLs that misspell a familiar domain name

Phishing emails may ask you to click a link within the email. By hovering your mouse over the link or address, you can see the linked site’s true URL. These URLs can be slightly misspelled or completely different to what you are expecting, so always double check before you click.

5. Poor spelling and grammar

You can often detect a phishing email by the way it is written. The writing style might be different to that usually used by the sender, and it might contain spelling mistakes and poor grammar.

How to protect yourself from phishing emails

- Filter spam – Most email applications and web mail services include spam-filtering features, or ways in which you can configure your email applications to filter spam. Consult the help file for your email application or service to find out what you must do to filter spam.
- Regard unsolicited email with suspicion – Don’t automatically trust any email sent to you by an unknown individual or organization. Never open an attachment to unsolicited email. Most importantly, never click on a link sent to you in an email.
- Install antivirus software and keep it up to date
- Install a personal firewall and keep it up to date
- Use common sense

Additional resources can be found in the Centre for Internet Security article on How to Spot Phishing Messages Like a Pro here: https://www.cisecurity.org/newsletter/how-to-spot-phishing-messages-like-a-pro/

Annabelle Graham, 5 ways to detect a phishing email, https://www.itgovernance.co.uk/blog/5-ways-to-detect-a-phishing-email/

US-CERT Technical Cyber Security Alert, Recognizing and Avoiding Email Scams, https://www.us-cert.gov/sites/default/files/publications/emailscams_0905.pdf

Centre for Internet Security, How to Spot Phishing Messages Like a Pro, https://www.cisecurity.org/newsletter/how-to-spot-phishing-messages-like-a-pro/
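As referenced in points 1 and 4 above, here is a minimal Python sketch automating two of the warning signs: a sender hiding behind a public mailbox provider while claiming to represent an organization, and a link whose visible text does not match its real destination. The domains used are illustrative assumptions.

```python
# Minimal sketch of two phishing checks described above. The provider
# list and example domains are illustrative assumptions.
from urllib.parse import urlparse

PUBLIC_PROVIDERS = {"gmail.com", "yahoo.com", "hotmail.com", "outlook.com"}

def sender_flag(from_address: str, claimed_org_domain: str) -> bool:
    # Flag senders on public providers, or whose domain doesn't match
    # the organization they claim to represent.
    domain = from_address.rsplit("@", 1)[-1].lower()
    return domain in PUBLIC_PROVIDERS or domain != claimed_org_domain

def link_flag(visible_text: str, href: str) -> bool:
    # Flag links whose displayed domain differs from the true URL,
    # i.e., what hovering over the link would reveal.
    shown = urlparse(visible_text if "//" in visible_text else "//" + visible_text).hostname
    actual = urlparse(href).hostname
    return shown is not None and actual is not None and shown != actual

print(sender_flag("support@gmail.com", "mybank.com"))          # True
print(link_flag("www.mybank.com", "http://mybank-login.xyz"))  # True
```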
https://www.btcirt.bt/how-to-recognise-and-protect-against-email-scams/
A simple guide on how to be GDPR compliant

What is GDPR?

The General Data Protection Regulation (GDPR) is a data privacy legislation introduced and approved by the European Commission and European Parliament, which went into effect in May 2018. The GDPR provides rules and guidance to both European and non-European companies that collect, share, and manage data of their European users. It gives EU residents the right to know what data is collected about them as well as how it’s stored, protected, and transferred. The GDPR also includes the right to be forgotten and the right to access. That means customers can request to see the collected data and ask for it to be deleted. (A hypothetical sketch of these two rights as API endpoints appears at the end of this article.)

Do I need to be GDPR-compliant?

All companies that collect data of users in the European Union, no matter where they are based, must comply with the GDPR. Non-compliance could result in hefty GDPR fines of up to 20 million euros or 4% of annual worldwide turnover, whichever is greater.

Protecting your users’ personal information by following the GDPR will affect the whole company, as most of your procedures may have to be revised and adapted. However, there are no clear rules that would apply to every single organization. How to protect data will depend on the type of data your company processes. Some GDPR consultants say that there’s no such thing as being 100% GDPR-compliant, and that meeting GDPR requirements is more about reviewing your data handling and processing activities from an ethical standpoint rather than ticking boxes on a checklist. A good starting point is going through the 7 principles of the GDPR.

7 principles of GDPR

- Lawfulness, fairness, and transparency. Data should be processed in a lawful, fair and transparent way.
- Purpose limitation and data minimization. Data should only be collected for specific and legitimate business purposes.
- Accuracy. All efforts, where necessary, should be made to keep the data up to date. If data is inaccurate or outdated, it should be deleted.
- Storage limitation. The data should only be stored for the amount of time needed to provide products or services. It can be kept for longer only for archiving purposes in the public interest, for scientific or historical research purposes, or for statistical purposes.
- Integrity and confidentiality. The company should do all it can to ensure the security of personal data. It should protect data from unlawful access such as data breaches, as well as accidental loss, destruction or damage.
- Accountability. Most companies are required to keep records of data processing and are required to present them to supervisory authorities when needed.

How to be GDPR-compliant

Please note that the following information should only be taken as rough guidance. It is intended for general information purposes only and does not constitute legal advice. The GDPR legislation consists of 11 chapters, 99 articles, and nearly two hundred recitals, so to fully comply with the GDPR, we suggest getting legal advice from your legal counsel or the supervisory authority.

Review all your data handling procedures

Sit down and draw a map of how your company collects data from start to finish of your customer journey. It should help to identify points that need closer inspection. For example:

- You may need to review your mailing and emailing lists. If you do not have legitimate grounds for processing your customers' data for marketing or other purposes, you cannot use such personal data. See if it is useful to create segmented lists for your European customers.
- You need to check if you have legitimate grounds (e.g., consent, legitimate interest) for processing personal data for all different data collection channels, including events, newsletter subscriptions, or even paid lists.
- Review your future EU marketing campaigns that might aim to collect user data — you may need to adapt the processes.

At this stage, it’s also advisable to appoint one person (or a whole team) in your marketing department to consult with lawyers who specialize in the GDPR. This person or team should work closely with a data protection officer (DPO), if a DPO is appointed in the company. They will be able to review and approve your marketing campaigns.

Make your website GDPR-friendly

If you have a website, you’re no doubt collecting data in one way or another. To make your website comply with the GDPR, you should consider:

- Including a cookie consent. All web forms should have a cookie consent informing visitors of the type of data you collect and giving them an option to opt in if they agree to such tracking.
- Creating age verification. If your visitors are younger than 16 (the age limit might be different in some EU countries), the GDPR requires their parental consent to collect data. Make sure to include such verification.
- Updating your data collection forms. These should state in easy-to-understand language what data is being collected and for what purpose. (A full list of what information needs to be presented to a user can be found in GDPR Articles 13 and 14.) If your company operates outside of the EU, you should also consider adding a ‘Country of residence’ field, so you can separate your databases if needed.

Update your current database

It’s advisable to update your database regularly. You can do so by sending your customers an email with an option to choose what type of information they want to receive. Then it’s more likely your customers won’t unsubscribe altogether. Any correspondence should also include an ‘Unsubscribe’ or ‘Update your preferences’ button. Also, don’t contact users who have previously unsubscribed; it’s prohibited by the Privacy and Electronic Communications Directive.

Be prepared for the worst

In case of a breach, the GDPR requires you to report it within 72 hours (with some exceptions). Thus it’s a good idea to prepare a data breach plan and educate your employees on what to do in such circumstances. You should consider:

- How your customer-facing employees should respond to customers;
- How you will handle social media channels, and whether you will have enough staff to respond to all messages;
- What channels you will use to inform the affected parties, like your customers and vendors, if necessary;
- How you will inform the media and what channels you will use to provide updates;
- How you will communicate about the breach internally;
- What procedures you have in place if your customers want to file complaints or get refunds;
- How you will ensure that this doesn’t happen again.

The GDPR isn’t a one-off project, and you shouldn’t treat it as such. It is about continuously working on improving your company’s privacy and security standards.
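As noted at the top of this guide, here is a hypothetical Python (Flask) sketch of the right to access and the right to be forgotten expressed as API endpoints. The route names, the in-memory storage, and the omission of identity verification are all assumptions for illustration only; a real implementation must verify the requester's identity before exporting or erasing anything.

```python
# Hypothetical sketch of GDPR "right to access" (export) and "right to
# be forgotten" (erasure) endpoints. Routes, storage, and the missing
# identity-verification step are illustrative assumptions.
from flask import Flask, jsonify

app = Flask(__name__)
USER_DATA = {"42": {"email": "ada@example.com", "newsletter": True}}  # toy store

@app.route("/users/<user_id>/data", methods=["GET"])
def export_data(user_id):
    # Right to access: return everything held about the user.
    record = USER_DATA.get(user_id)
    return (jsonify(record), 200) if record else ("not found", 404)

@app.route("/users/<user_id>/data", methods=["DELETE"])
def erase_data(user_id):
    # Right to be forgotten: delete the record on a verified request.
    return ("deleted", 204) if USER_DATA.pop(user_id, None) else ("not found", 404)
```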
https://nordpass.com/lt/business-password-manager/what-is-gdpr-compliance/
When it comes to fixing a root cause, there are two questions. The first is “Who is able to apply the fix?”, and the second is “Who is responsible for applying the fix?” This article explains what we get wrong about cybersecurity, how and why we get it wrong, and what it’s going to take to fix it. Fair warning: it’s going to be a long and bumpy ride. Those bumps include a healthy dose of counterintuitive assertions, cybersecurity heresy, and no mincing of words.

Organizations are continually striving to assess and mitigate their cybersecurity risk, working to minimize the likelihood that their brand name will be splashed across newspapers nationwide because they’ve fallen prey to a high-profile hack. In cybersecurity, at least the way it is currently practiced, risk is not quantifiable.

Two different classes of identifiers must be tested to reliably authenticate things and people: assigned identifiers, such as names, addresses and social security numbers, and some number of physical characteristics. For example, driver’s licenses list assigned identifiers (name, address and driver’s license number) and physical characteristics (picture, age, height, eye and hair color and digitized fingerprints). Authentication requires examining both the license and the person to verify the match. Identical things are distinguished by unique assigned identities such as a serial number. For especially hazardous or valuable things, we supplement authentication by checking provenance — proof of origin and proof that tampering hasn’t occurred.

Our current concept of cybersecurity is to defend against attacks and remedy failure by erecting more and better defenses. That’s a fundamental mistake in thinking that guarantees failure. Why? Because it’s mathematically impossible for a defensive strategy to fully succeed, as explained in the previous installment of this article series. Another, even more fundamental, mistake in thinking is that cyberattackers are the cause of our woes. They aren’t. They’re the effect.

This article is the second in a series on the physicality of data. Cybersecurity failures have been trending sharply upwards in number and severity for the past 25 years. The target of every cyberattack is data — i.e., digitized information that is created, processed, stored and distributed by computers. Cyberattackers seek to steal, corrupt, impede or destroy data. Users, software, hardware and networks aren’t the target; they’re vectors (pathways) to the target. To protect data, the current strategy, “defense in depth,” seeks to shut off every possible vector to data by erecting layered defenses. Bad news: that’s mathematically impossible.
https://www.absio.com/tag/cyberattack/
Our Easy-to-understand cyber security awareness training program includes the following features An Introduction to Phishing, How It Works, and How to Prevent It. Social Engineering 101 How to Recognize and Mitigate Social Engineering Attacks IT Security Made Easy: Security Basics for Non-Technical Users Why Phishing Awareness Training is Important Often, it’s about tricking human beings into divulging the secrets that security systems are trying to protect. The social engineering aspect of cyber security is as important as the technological one. Phishing is one of the most common methods social engineers use to defraud their targets. There is no way to secure your business without training your employees to identify phishing attack attempts. Keep Up With Employee Progress with Our Dedicated Training Portal Our mobile-friendly training portal allows users to complete their studies on any device. Every employee has access to a unique portal and dashboard. Users can take these 100% online security education courses at any time they wish. Our training platform is designed to minimize disruption to employee productivity. Get Up-to-the-Minute Reports on Every User Use our comprehensive reporting tool to obtain real-time data on each employees’ progress through the security training curriculum. Group employees by department and delegate managers to oversee progress in order to make sure every employee is up-to-date. Our reports compile data in real-time and support automated email reminders for late users. Automatically Enroll Employees Who Fail Phishing Simulation Tests Our security curriculum works in tandem with our phishing simulation package. Run tests on your employees to determine your overall security resilience and automatically assign security courses to users who fail the tests. Employees who fail phishing simulation tests represent a serious risk to the entire organization. Our auto-enroll feature makes sure that each employee gets the training they need as quickly as possible. Streamline Training From Start to Finish Our training platform starts with the fundamentals of cyber security awareness and progresses to advanced concepts in a streamlined manner. Incentivize your employees to dive into our comprehensive library of advanced security courses. We offer targeted phishing awareness training that addresses some of the unique challenges that every industry faces. Our vast library also includes security courses from other vendors. Customize Your Own Course Our flexible system can help you create training courses on a wide variety of subjects not just cyber security awareness. Feel free to customize an existing course to meet your needs, or create a new course altogether. You can easily create your own questions and answers for organization-specific training modules. Invest in Repeat Testing and Continuing Education Two out of every three cases of cyber espionage begin with a phishing email. Keeping your employees informed on the latest developments in the world of cyber security will help you keep your business and its customers safe. Your employees are the front line of defense against phishing attacks, data breaches, and ransomware. A well-trained workforce will present far better resistance to sophisticated phishing attacks. We highly recommend combining our cyber security training curriculum with our Phishing Simulator package to guarantee best-in-class defense against the latest threats. 
Join the thousands of organizations that use DuoCircle. Find out how affordable it is for your organization today and be pleasantly surprised.
<urn:uuid:ce510631-b2ee-4ac6-996e-338173c1d2cb>
CC-MAIN-2022-40
https://www.duocircle.com/phishing-awareness-training
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00334.warc.gz
en
0.922714
674
2.546875
3
As Photoshop is one of our most popular training subjects here at New Horizons Ireland, we thought we would put together this tutorial for our students and others to enjoy.
Get your map
First things first, we need our map or aerial photo. I got mine using Google Earth, which allows you to get different angles rather than just the overhead view you get in Google Maps. If you don't already have Google Earth, you can download it for free here. Once in Google Earth, simply use the search bar to find your location and use the tools to pan around until you get the desired location and angle. I recommend not zooming in too close to the area you want, as we need a margin around the outside to allow us to cut out our map. Then either use the Google Earth tools to download a snapshot or simply take a screen grab.
Once we have our image from Google Earth, we can open a new document in Photoshop by hitting Ctrl & N. If you choose 'Clipboard' from the preset menu in the new document dialogue box, it will automatically match the dimensions of your copied screenshot. If you saved your map image rather than copied it, then just open your saved image in Photoshop.
Cut it out
Once we have our map in Photoshop, we need to cut out the desired area. I recommend a rectangular shape that roughly follows the roofs and roads in your map. I used the Polygonal Marquee, which allows you to click on one corner of your building and click again on the next corner to trace a straight line between them. If you encounter a curved line, simply use shorter clicks in order to achieve a curved shape. Continue this method all the way around until you have achieved your rough overall rectangle shape. If you ever need to start over, hit Ctrl & D to deselect, then use the marquee tool to make a new selection.
Mask out the excess
Once you have your rectangle marked out with the marquee tool, hit the mask button at the bottom of the layers panel, which will hide everything except the area you have marqueed. Now, in order to add depth to the map, we want to create a new layer to add our new shape to. To create a new layer below your current layer, hold the Ctrl key and hit the new layer button at the bottom of the layers panel. Holding the Ctrl key ensures the new layer is created below. I used the Polygonal Marquee tool again to mark out a shape on the new layer. I then used the eyedropper tool to pick a colour that roughly matches the layer above, and selected Edit > Fill > Foreground Colour to fill the marqueed shape with that colour. I also added an inner shadow to add more depth by choosing Layer > Layer Style > Inner Shadow. Play with the distance, choke and size of the inner shadow until you get the desired effect. While we are playing with shadows, we may as well add a drop shadow to the map layer by clicking on the map layer in the layers panel, then selecting Layer > Layer Style > Drop Shadow. Again, play with distance, choke and size until you get a suitable shadow.
Now we will add a gradient colour to the background. To achieve this, click on your bottom layer, hold Ctrl and click the new layer button to create a new layer on the bottom. Once in this new layer, select the gradient tool and choose appropriate foreground and background colours for your gradient. I chose sky blue and white to give a sky effect. Then, with the gradient tool in radial mode, click where you would like the white (the sun) to go and drag out a short distance. The distance you drag before releasing the mouse button will determine the size of your sun.
So try it several times until you are happy with it. And that's it. If you're happy with the result, go to File > Save For Web and choose your format and size to save your image. I recommend the JPEG file format and a width of 900px, as this should be sufficient for sharing online. If you would like to be able to come back and edit your image again at a later date, then you need to save a Photoshop file by clicking File > Save As; then name your file, choose a folder and save it as a PSD file. In my image I added labels by using the line tool on a new layer, then used the type tool to add text. I also added Hue/Saturation and Levels adjustment layers above the map layer to adjust the colour and contrast of the map image.
Share Your Images
Feel free to send us your own map images on Twitter @NewHorizonsIRL or Facebook at facebook.com/NHIreland/
View all our Adobe courses here
<urn:uuid:1da9a5bd-243b-475e-b402-2bf8abc02a9a>
CC-MAIN-2022-40
https://ireland.newhorizons.com/blog/make-a-map-island-in-photoshop
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00334.warc.gz
en
0.880707
979
2.53125
3
The solution offered the twin benefits of being very fast – it's not an ordinary storage area network (SAN) – and being able to deliver input/output (I/O) across several floors at the ICM's premises for its high-performance computing (HPC) needs.
The ICM was formed in 2010 to bring together the work of 700 researchers. Information collected during medical imaging and microscopy undergoes some processing at workstations, but between data capture and analysis the data is also centralised in the basement datacentre at the ICM.
"The challenge with the deployment is that to traverse the floors that separate the labs and the datacentre, traffic has to travel via Ethernet cables and switches that also handle file shares," says Caroline Vidal, technical lead at the ICM.
"Historically, an installation like this would have used NAS [network-attached storage], which wouldn't really have the performance to match the read and write speeds of the instruments. With the latest microscopes, data takes longer and longer to save, and making it available at workstations can mean researchers waiting hours in front of their screens.
"Initially, we chose a NAS from Active Circle that had a number of features we considered essential, such as data security. But we came to realise that losing data wasn't really an issue for our researchers – the real pain point was the wait to get to their data.
"By 2016, we'd decided to abandon the NAS and to share all findings via the Lustre storage on our supercomputer, because it is built for rapid concurrent access," says Vidal.
Like other research institutes, the ICM's datacentre is built around its supercomputer. Data being processed is stored in a Lustre file system cluster, then archived in object storage, with data in use by scientists made available from a NAS. But after three years, the 3PB of capacity on the Lustre file system was saturated with observation data. There just wasn't room for any more.
NVMe/RoCE: speed of a SAN, easy deployment like NAS
Vidal adds: "In 2019, we started to think about decentralising storage from the workstations in the sense of distributing all-flash storage between floors. The difficulty was that our building isn't well-adapted to deploying things in this way. We would have needed mini datacentres in our corridors, and that would have meant a lot of work."
So, one of Vidal's technical architects approached Western Digital, which proposed that ICM carry out a proof-of-concept of a then-unreleased NVMe-over-fabrics solution.
"What was interesting about the OpenFlex product was that, with NVMe/RoCE, it would be possible to install it in our datacentre and to connect it to workstations on a number of floors via our existing infrastructure," says Vidal. "Physically, the product is easier to install than a NAS box. It is also faster than the flash arrays we would have deployed right next to the labs."
NVMe-over-fabrics is a storage protocol that allows NVMe solid-state drives (SSDs) to be treated as extensions of non-volatile memory connected via the server PCIe bus. It does away with the SCSI protocol as an intermediate layer, which tends to form a bottleneck, and so allows for flow rates several times faster than a traditionally connected array.
NVMe using RoCE (RDMA over Converged Ethernet) is an implementation of NVMe-over-fabrics that uses pretty much standard Ethernet cables and switches. The benefit here is that this is an already-deployed infrastructure in a lot of office buildings.
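To put those link speeds in perspective, here is a rough back-of-the-envelope Python sketch of transfer times for a single large acquisition; the 2TB dataset size is an invented figure for illustration, not one from the article.

```python
# Illustrative arithmetic only: time to move one acquisition over links
# of different speeds, ignoring protocol overhead and latency.
DATASET_TB = 2.0                      # assumed acquisition size
DATASET_BITS = DATASET_TB * 1e12 * 8  # terabytes -> bits

for gbps in (10, 100):
    seconds = DATASET_BITS / (gbps * 1e9)
    print(f"{gbps:>3} Gbps link: ~{seconds / 60:.1f} minutes per acquisition")
```

At 10Gbps the same transfer takes ten times longer than at 100Gbps, which is consistent with the jump Vidal describes below.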
NVMe-over-RoCE doesn't make use of TCP/IP layers. That distinguishes it from NVMe-over-TCP, which is a little less performant and doesn't allow storage and network traffic to pass across the same connections.
"At first, we could connect OpenFlex via network equipment that we had in place, which was 10Gbps. But it was getting old, so in a fairly short time we moved to 100Gbps, which allowed OpenFlex to flex its muscles," says Vidal.
ICM verified the feasibility of the deployment with its integration partner 2CRSi, which came up with the idea of implementing OpenFlex like a SAN in which the capacity would appear local to each workstation.
"The OpenFlex operating system allows it to connect with 1,000 client machines," says 2CRSi technical director Frédéric Mossmann. "You just have to partition all the storage into independent volumes, with up to 256 possible, each becoming the drive for four workstations. Client machines have to be equipped with Ethernet-compatible cards, such as those from Mellanox, that communicate at 10Gbps to support RoCE."
Vidal adds: "We carried out tests, and the most prominent result was latency, which was below 40µs. In practice, that allows for image capture in a completely fluid manner, so a workstation can view sequences without stuttering."
The E3000 chassis was deployed at the start of 2020 and occupied 3U of rack space. Five of its six vertical slots are provided with 15TB NVMe modules for a total of 75TB. According to Western Digital, each of these offers throughput of 11.5GBps for reads and writes, and around 2 million IOPS. All these elements are directed by a Linux controller accessible via command line, or from a Puppet console when partitioning drives or dynamically allocating capacity to each user.
"One thing that really won us over is the openness of the system. We are very keen on free technologies in the scientific world," says Vidal.
"The fact of knowing that there is a community that can quickly develop extensions for use cases that we need, but also that any maker can provide compatible SSD modules, reassures us even though we have chosen a relatively untested innovative solution," adds Vidal, explaining how the ICM is playing the part of a test case for OpenFlex.
At ICM, OpenFlex supports SSD modules that can expand in raw capacity to 61.4TB. At the back end, each SSD module has two 50Gbps Ethernet ports in QSFP28 optical connector format.
"The array offers a multitude of uses," says Vidal. "While waiting to modernise our Ethernet infrastructure, we have connected OpenFlex with several client machines. In time, we will connect it to diskless NAS for backup in the labs. These are connected to workstations via a traditional network so as to limit the expense of deploying Mellanox RoCE cards.
"At the same time, we have connected OpenFlex to the rest of the datacentre to validate that we can provide Lustre metadata during heavy processing."
Vidal says the Covid-19 pandemic has slowed down the deployment, but she has already seen benefits. "Our scientists are no longer limited by the slow speed of data movement in their clinical analysis pipeline. They can now work on images at four times the resolution they had previously. We don't doubt that this will help to deepen the understanding of neurological illnesses and to help the rapid introduction of new treatments," she adds.
Read more about HPC storage
- How the University of Liverpool balances HPC and the cloud. The University of Liverpool has been running a hybrid HPC environment since 2017, which uses PowerEdge nodes and AWS public cloud services.
- Panasas storage revs up parallelisation for HPC workloads. Panasas storage software focuses on a novel method to boost capacity and efficiency. Dynamic Data Acceleration in PanFS shuttles data to disk or flash based on file size.
<urn:uuid:7da7f9fb-f1d8-42f8-964e-f9fdad603daf>
CC-MAIN-2022-40
https://www.computerweekly.com/feature/Brain-researchers-get-NVMe-over-RoCE-for-super-fast-HPC-storage
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00334.warc.gz
en
0.955471
1,701
2.71875
3
The Merge rows (diff) step compares and merges data within two rows of data. This step is useful for comparing data collected at two different times. For example, the source system of your data warehouse might not contain a timestamp of the last data update. You could use this step to compare the two data streams and merge the dates and timestamps in the rows. Based on keys for comparison, this step merges reference rows (previous data) with compare rows (new data) and creates merged output rows. A flag in the row indicates how the values were compared and merged. Flag values include:
- identical: The key was found in both rows, and the compared values are identical.
- changed: The key was found in both rows, but one or more compared values are different.
- new: The key was not found in the reference rows.
- deleted: The key was not found in the compare rows.
If the rows are flagged as deleted, the merged output rows are created based upon the original reference rows stream. For identical, new, and changed rows, the merged output rows are created based upon the original compare rows stream. You can also send values from the merged and flagged rows to a subsequent step in your transformation, such as the Switch-Case step or the Synchronize after merge step. In the subsequent step, you can use the flag field generated by Merge rows (diff) to control updates/inserts/deletes on a target table.
Select an Engine
You can run the Merge rows (diff) step on the Pentaho engine or on the Spark engine. Depending on your selected engine, the transformation runs differently. Select one of the following options to view how to set up the Merge rows (diff) step for your selected engine:
- Using Merge rows (diff) on the Pentaho engine: Learn how to set up this step when using the Pentaho engine.
- Using Merge rows (diff) on the Spark engine: Learn how to set up this step when using the Spark engine.
For instructions on selecting an engine from your transformation, see Run configurations.
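To make the flag logic concrete, here is a minimal Python sketch of the same comparison, assuming both streams fit in memory as dicts keyed by the comparison key. The real step operates on sorted row streams inside Pentaho, and the flag field name below is illustrative.

```python
# Minimal sketch of the Merge rows (diff) flag logic. Illustrative only;
# the real step works on sorted row streams inside Pentaho.
def merge_rows_diff(reference, compare, value_fields):
    merged = []
    for key in reference.keys() | compare.keys():
        ref_row, cmp_row = reference.get(key), compare.get(key)
        if ref_row is None:
            flag, row = "new", cmp_row          # only in compare stream
        elif cmp_row is None:
            flag, row = "deleted", ref_row      # only in reference stream
        elif all(ref_row[f] == cmp_row[f] for f in value_fields):
            flag, row = "identical", cmp_row
        else:
            flag, row = "changed", cmp_row      # compare stream wins
        merged.append({**row, "flagfield": flag})
    return merged

# Hypothetical usage: customer rows captured yesterday vs today.
ref = {1: {"id": 1, "city": "Berlin"}, 2: {"id": 2, "city": "Oslo"}}
cmp_ = {1: {"id": 1, "city": "Berlin"}, 3: {"id": 3, "city": "Riga"}}
for row in merge_rows_diff(ref, cmp_, ["city"]):
    print(row)
```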
<urn:uuid:5a8fadda-3e34-4146-b7e6-a6f10623835d>
CC-MAIN-2022-40
https://help.hitachivantara.com/Documentation/Pentaho/9.3/Products/Merge_rows_(diff)
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00334.warc.gz
en
0.910157
444
2.515625
3
How to Capture the Right Type of Data
When it comes to collecting data with industrial IoT in the world of manufacturing, it isn't just about quantity. It's also about the quality of the information you gather from various machines, and about the manufacturing data analytics that let you analyze it and make decisions both now and for the long term.
What is Deep Data vs. Big Data?
Read about the industrial IoT and you'll find a lot of information on the usefulness of big data and predictive analytics. This is absolutely true.
"While the volume, speed, and variety that Big Data provide can no doubt unveil important effects that escape both the human eye and traditional methods of empirical research, that is simply the first step in the process of creating valuable insights that derive in evidence-based interventions. Predicting outcomes is helpful, but explaining them—understanding their causes—is far more valuable, both from a theoretical and practical perspective." —Tomas Chamorro-Premuzic, Professor of Business Psychology at University College London (UCL) and Columbia University
The author, in this case, was referring to HR processes, but his assertions are just as valid for the factory floor as they are for the hiring department. Big data is about capturing the vast quantities of data that are already available and analyzing them; in other words, looking at the data in a different way. Some of the data won't be helpful, nor will some of the results, but much will. Deep data takes that analysis down to a more granular level. By eliminating data that isn't relevant to a certain course of investigation, and focusing on streams that provide richer information, deep data yields analysis that is more detailed and specific. The predictive trends that result from analyzing deep data are likely to be more accurate overall.
What's the Difference Between Sensor Data and PLC Data?
Sensor data is all data from a specific sensor on a machine, within a designated time frame. It is designed to monitor something specific, like a vibration, which might tell the operator that a machine is on vs. off. That data may or may not be meaningful when reviewed or analyzed. A PLC (Programmable Logic Controller) is able to pull a large number of data items that, together with the sensor data, give you a fuller picture of what's going on with any given machine. It can monitor inputs and outputs to and from a machine, and can make logical decisions when necessary, based on its programming.
Why Having Both Sets of Data is Optimal
The key to high-quality analytics and results is a platform that can capture deep PLC data AND the data from sensors, which monitor more specific items that might not be available via the PLC.
PLC Data Benefits
For example, as noted above, while a sensor may provide the vibration limits on a certain machine or part of a machine, the PLC data from that machine might include parameters that signal a fault is in the process of occurring in production. With PLC data comes the ability to control operations, including the sequence of activity that a machine might be engaged in, timing for certain tasks, and so on. When the data shows that one of these programmed elements is out of line, the operator can respond more quickly than if they had to manually investigate an issue with the output.
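To illustrate how the two streams complement each other, here is a hedged Python sketch of fusing a vibration sensor reading with PLC context; all field names and thresholds are invented for the example.

```python
# Hedged sketch: fusing a vibration sensor stream with PLC state data.
# Field names (state, rpm, vibration) and limits are assumptions.
SENSOR_LIMIT = 4.0  # mm/s RMS vibration threshold (example value)

def evaluate(plc_reading, sensor_reading):
    """Flag conditions that neither stream could reveal alone."""
    alerts = []
    if sensor_reading["vibration"] > SENSOR_LIMIT:
        alerts.append("vibration over limit")
    # PLC context: vibration while the machine reports idle is anomalous
    if plc_reading["state"] == "idle" and sensor_reading["vibration"] > 0.5:
        alerts.append("vibration while idle: possible mechanical fault")
    # PLC parameter outside its programmed window
    if not (plc_reading["rpm_min"] <= plc_reading["rpm"] <= plc_reading["rpm_max"]):
        alerts.append("spindle rpm outside programmed window")
    return alerts

plc = {"state": "idle", "rpm": 0, "rpm_min": 0, "rpm_max": 6000}
sensor = {"vibration": 2.1}
print(evaluate(plc, sensor))
```

Neither of the context-dependent checks is possible with the sensor stream alone: it is the PLC's reported state and programmed rpm window that give the raw reading its meaning.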
Optimal Machine Function and Data Analysis
Logic can be programmed into a PLC to ensure that the data being returned matches what is desired and that the machine is functioning at an optimal level. This is more in-depth than the notion of whether a machine is on or off, vibrating or not. Having both sets of data analyzed and returned to the user provides much more information than sensors on their own would, giving the operator the flexibility to collect the necessary data in a timely fashion and avoid costly downtime and maintenance issues. Instead, planned downtime and proactive, predictive maintenance can take place, increasing efficiency and boosting the bottom line.
Download "The Machine Builders' Guide to Remote Machine Monitoring" e-book to find out more about:
- Three ways Machine Builders can leverage IIoT
- How to capture the right type of data
- How Edge Devices Are Changing the Connectivity Landscape in Manufacturing
- How Edge Devices Are Enabling a New World Of Analytics
- How Industrial IoT Creates a New Value Stream for OEMs
- Challenges with Remote Monitoring
- The Future Opportunities for OEMs, Beyond Predictive Maintenance
- MachineMetrics Service for Machine Builders and Distributors
Graham Immerman is Director of Marketing for MachineMetrics, a venture-backed manufacturing analytics platform. Graham has quickly become an authority on digital transformation and the application of IIoT technology for the manufacturing industry. He is an accomplished leader and experienced start-up veteran with an integrated background in digital, social, traditional, and account-based marketing, growth strategies, and business development.
<urn:uuid:dbb18f8b-b145-4ac1-9d03-1323b0fbf026>
CC-MAIN-2022-40
https://www.iiot-world.com/predictive-analytics/analytics/how-to-capture-the-right-type-of-data/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00334.warc.gz
en
0.920494
1,059
2.640625
3
What is phishing?
Phishing is a scam where psychological manipulation is used to scare or trick victims into giving away sensitive data like passwords, or into paying money through the use of fraudulent invoices.
What types of phishing are there?
Spear phishing – emails that are targeted at specific individuals, typically personalized with details about the recipient to appear legitimate.
Whaling/business email compromise – targeting upper management, usually C-level, to trick them into releasing sensitive information or making fraudulent payments.
Clone phishing – a previously delivered legitimate email that contains an attachment or link has its content copied and replaced with a malicious version, then resent so it appears to come from the original sender.
General phishing – untargeted emails sent in bulk to as many recipients as possible.
How do I spot phishing scams?
Grammar – since the majority of phishing email creators are not native English speakers, they tend to make mistakes in their writing. Words will be misspelled, formatting such as spacing may be off, and the usage of words may not sound natural. These are all major telltale signs that the email you are viewing is not legitimate.
Impersonal – since the sender often does not know much about the recipient, the email will open with a generic greeting rather than your name.
Email header – the From address, Reply-To address, and Return-Path in the header often do not match the organization the email claims to come from.
Asking for a quick reply – phishing emails often manufacture urgency, pressuring you to act before you have time to think or verify the request.
See our blog post on how to spot phishing emails.
How do I stop phishing scams from succeeding in my organization?
Security awareness training – for when phishing emails do get past your prevention systems, you need users that are knowledgeable and vigilant.
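As an illustration of the header check above, here is a hedged Python sketch that flags messages whose From domain disagrees with the Reply-To or Return-Path domain. Legitimate mail sometimes fails this test (mailing lists, bulk senders), so treat it as one signal among many; the sample message is invented.

```python
# Hedged sketch of a header-mismatch check using the standard library.
from email import message_from_string
from email.utils import parseaddr

def domain(addr_header):
    """Extract the domain part of an address header, lowercased."""
    return parseaddr(addr_header or "")[1].rpartition("@")[2].lower()

def suspicious_headers(raw_message):
    msg = message_from_string(raw_message)
    from_dom = domain(msg.get("From"))
    findings = []
    for hdr in ("Reply-To", "Return-Path"):
        dom = domain(msg.get(hdr))
        if dom and dom != from_dom:
            findings.append(f"{hdr} domain {dom!r} != From domain {from_dom!r}")
    return findings

raw = (
    "From: support@yourbank.com\n"
    "Reply-To: helpdesk@mail-yourbank.ru\n"
    "Subject: Verify your account now\n\n"
    "Please reply quickly."
)
print(suspicious_headers(raw))
```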
<urn:uuid:03924482-c4de-4ea2-a86f-c79d47fba4c6>
CC-MAIN-2022-40
https://www.clearnetwork.com/email-phishing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00334.warc.gz
en
0.939205
297
3.015625
3
If you ask around the cyber-security circles who or what the real Fsociety is, they'll likely say that it's Anonymous, or try to be clever and answer "Fun Society." The truth is sometimes lamer than fiction. Fsociety is a new ransomware infection, and it's not created by an anti-corporate revolutionary group, but by common cyber-criminals. The virus's name is inspired by the hit TV show Mr. Robot. It infects PCs, encrypts their files and asks for money in return for their decryption. Fsociety uses the strong AES-256 algorithm.
Fsociety ransomware virus – Way of Infection
The developers of Fsociety ransomware are reportedly distributing it by spamming email messages containing malicious URLs and attachments. Users are often too trusting of emails that look like they were sent from PayPal, Microsoft, or other big-name companies. This is a quick way to get infected, as cyber-criminals use these disguises to win the user's trust and then infect their computer. The best defense against this type of spam is to avoid opening emails containing archives, shortcuts, or weird URLs.
How does the Fsociety ransomware work?
When the Fsociety virus sneaks into your computer, it'll create legitimate-looking files in the following folders:
C:\Users\[Windows username]\AppData\Local
C:\Windows\Temp
C:\Users\[Windows username]\AppData\Roaming
C:\Users\[Windows username]\AppData
The virus's files would be generated to look like your average Windows system file, with names like notepad.exe, patch, update, and setup.exe. Chances are that most users already have similar files on their computer, so they wouldn't notice if some more popped up. Fsociety is reportedly a variant of the EDA2 project. That is to say, Fsociety is based on code from EDA2 and is not 100% original work. Recently, another project surfaced, called the Shark ransomware project.
After Fsociety infects your computer, it'll start searching for particular files to encrypt. The virus targets files of the following types:
.PNG .PSD .PSPIMAGE .TGA .THM .TIF .TIFF .YUV .AI .EPS .PS .SVG .INDD .PCT .PDF .XLR .XLS .XLSX .ACCDB .DB .DBF .MDB .PDB .SQL .APK .APP .BAT .CGI .COM .EXE .GADGET .JAR .PIF .WSF .DEM .GAM .NES .ROM .SAV
CAD Files: .DWG .DXF
GIS Files: .GPX .KML .KMZ
.ASP .ASPX .CER .CFM .CSR .CSS .HTM .HTML .JS .JSP .PHP .RSS .XHTML .DOC .DOCX .LOG .MSG .ODT .PAGES .RTF .TEX .TXT .WPD .WPS .CSV .DAT .GED .KEY .KEYCHAIN .PPS .PPT .PPTX .INI .PRF
Encoded Files: .HQX .MIM .UUE
.7Z .CBR .DEB .GZ .PKG .RAR .RPM .SITX .TAR.GZ .ZIP .ZIPX .BIN .CUE .DMG .ISO .MDF .TOAST .VCD .SDF .TAR .TAX2014 .TAX2015 .VCF .XML
Audio Files: .AIF .IFF .M3U .M4A .MID .MP3 .MPA .WAV .WMA
Video Files: .3G2 .3GP .ASF .AVI .FLV .M4V .MOV .MP4 .MPG .RM .SRT .SWF .VOB .WMV
3D Files: .3DM .3DS .MAX .OBJ
.BMP .DDS .GIF .JPG .CRX .PLUGIN .FNT .FON .OTF .TTF .CAB .CPL .CUR .DESKTHEMEPACK .DLL .DMP .DRV .ICNS .ICO .LNK .SYS .CFG
Once the files are encrypted, the cyber criminals hold the key to their decryption. They promise to give it to you if you pay them enough money, but nothing obligates them to keep their promise. If your computer gets infected by the Fsociety virus, it's best to try to solve the problem by other means before paying the crooks.
Ransomware naming and Fsociety
While ransomware viruses are an awful trend on the Web, at least some of the crooks have an original way of naming their viruses. Fsociety is an interesting example, but other, more extreme cases include the Bart virus and the Hitler virus. Another virus that's named after a currently hot property is the Pokemon Go ransomware that our team reported on recently.
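For defenders, the drop locations and filenames listed above suggest a simple triage check. The following Python sketch is purely illustrative: the name list and the .exe suffixes on "patch" and "update" are assumptions, and a real scanner would verify signatures and hashes rather than names alone.

```python
# Defensive illustration only: look for executables under the staging
# folders listed above that impersonate common system binaries.
import os

SUSPECT_DIRS = [
    r"C:\Windows\Temp",
    os.path.expandvars(r"%LOCALAPPDATA%"),
    os.path.expandvars(r"%APPDATA%"),
]
IMPERSONATED_NAMES = {"notepad.exe", "patch.exe", "update.exe", "setup.exe"}

def find_suspects():
    hits = []
    for base in SUSPECT_DIRS:
        for root, _dirs, files in os.walk(base):  # silently skips missing dirs
            for name in files:
                if name.lower() in IMPERSONATED_NAMES:
                    hits.append(os.path.join(root, name))
    return hits

if __name__ == "__main__":
    for path in find_suspects():
        print("review:", path)
```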
<urn:uuid:9f22e176-a437-440a-b4e0-24ca7b758850>
CC-MAIN-2022-40
https://bestsecuritysearch.com/real-fsociety-virus-inspired-mr-robot/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00534.warc.gz
en
0.836068
1,130
2.640625
3
Many people think of scientific disciplines, such as chemistry or physics, as purely fact-based endeavors, not concerned with the fuzzy field of politics. That's rarely the case because when humans are involved, things often get messy. A perfect example is the question of cold fusion. Back in 1989, scientists Stanley Pons and Martin Fleischmann announced they had discovered cold fusion, or nuclear energy that could be released at room temperature and would produce clean, cheap energy. A media frenzy followed, but excitement over the announcement quickly dissipated when others had trouble replicating their results. Whether or not cold fusion will eventually work on a consistent basis is still up in the air. But the political fallout from the Pons and Fleischmann announcement was so bad that it almost completely wiped out research in an extremely important field. Because of this announcement, and the subsequent failure to reproduce results, cold-fusion research became stigmatized and regarded by many scientists as a hoax.
What Happened to Persistence?
In 1999, Time magazine called cold fusion one of the 100 worst ideas of the century, and others ridiculed it as nothing more than an "Elvis sighting." But not everyone agrees. Scientists such as SRI International's Michael McKubre and Peter Hagelstein, who designed the X-ray laser that was to be a part of President Reagan's "Star Wars" anti-ballistic missile system, are betting cold fusion can work. And governments around the world are putting money into research. Given that there are smart, competent people on both sides of the debate, one might wonder what happened to the American attitude of accepting past failures and trying to build on them. In this respect, the scientific community could learn a lot from Silicon Valley. When smart, well-regarded people in California's tech mecca fail, they pick up the pieces and the community pats them on the back for taking a risk in the name of progress. Heck, some entrepreneurs even take a different stab at the same idea with the hope that they'll be able to do it better. So why does the pure science community play by different rules?
Slaves to Data
Perhaps it's because there's a public perception that scientifically derived data cannot be subject to interpretation, and that skews behavior. Or, as some researchers have suggested, maybe it's because the scientific community acts under a paternalistic type of data-releasing regime that says results should not be announced to the impressionable public until they are sanctioned by the top dogs of the group. This scientific McCarthyism has a chilling effect on research and could be holding America back from major scientific breakthroughs. If we could figure out cold fusion, we'd have a clean, cheap energy source that would last for an incredibly long time. And that would mean less reliance on oil-exporting countries, as well as a cleaner environment and a better standard of living. So even if some experts say it's a long shot, isn't it worth working towards? Yet the U.S. Department of Energy continues to tiptoe around the issue, and the U.S. Patent and Trademark Office refuses to grant a patent on any invention claiming cold fusion. That's almost a categorical denial of any research money for this important field. Further, getting an article on cold fusion published in any scientific journal is almost impossible. The scientific community is starting to look pretty regressive and reactionary.
Saving Good Ideas
"We have always been open to proposals that have scientific merit as determined by peer review," said the Energy Department's James Decker. But what happens when the peers in question might lose their hot fusion research money if cold fusion were possible? Or consider the comments of an embittered Fleischmann to a Wired reporter in 1998: "What you have to ask yourself is who wants this discovery? Do you imagine the seven sisters [the world's top oil companies] want it? … And do you really think that the Department of Defense wants electrochemists producing nuclear reactions in test tubes?" The answer is that Americans want a clean, cheap and abundant energy source if they can get it. And they certainly don't want some other country, potentially one with terrorists, to figure it out first. Bureaucracy in both the private and public sectors can kill good ideas. America needs a return to the days when renaissance men and women populated the field of scientific discovery. If the cold fusion issue is indicative of where scientific inquiry is today, creativity and thinking outside the bureaucratic box appear to be sorely needed. Our world depends on it.
Sonia Arrison, a TechNewsWorld columnist, is director of Technology Studies at the California-based Pacific Research Institute.
<urn:uuid:c758f81d-3444-4cdb-8190-63a71c101e90>
CC-MAIN-2022-40
https://www.ecommercetimes.com/story/the-big-science-chill-39360.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00534.warc.gz
en
0.950116
999
2.859375
3
Blockchain is a technology that can keep records of transactions secure. One of the most common and well-known blockchain implementations is the transactional backbone of Bitcoin and a number of other cryptocurrencies. Now, both tech and financial services companies such as IBM, Deutsche Bank, HSBC, CIBC, Barclays, Intel, Wells Fargo, and Bank of America Merrill Lynch are counting on blockchain technology to help secure the financial sector. But what exactly is blockchain, and will it do society any good?
What is Blockchain?
Blockchain is a type of distributed ledger technology. It's composed of "blocks" which are "chained" together by cryptographic hashing. As financial transactions, or other sorts of loggable data, occur through a particular blockchain system, new blocks are added to the ledger and connected to all of the other blocks through the hashing function. When an entity has the right keys, they can refer to transactions in the chain. The data in the chain should be ciphertext – text encrypted by strong algorithms – so when used properly, the data shouldn't be accessible to unauthorized parties. Blockchain is designed to protect both the confidentiality and integrity concepts of the CIA triad.
There are two major types of blockchains: public blockchains and permissioned blockchains. Public blockchains are implemented by Bitcoin (BTC), Ethereum (ETH), and a number of other technologies. Usually anyone with the right software can add blocks to the ledger, validate transactions, and view the ledger. When Bitcoin works properly, a Bitcoin user can make payments or receive money in the cryptocurrency, add those transactions to the blockchain, and the software can validate that data.
Permissioned blockchains are stricter in some ways than public blockchains. Only a limited number of parties are granted access to a permissioned blockchain. The authentication standards for the specific authorized parties are usually rigorous, and there usually must be a record of an individual's legal name or "true" identity. For example, if I were an authorized party to a permissioned blockchain, the operators would have to know that my full legal name is Kimberly Faye Crawley; I couldn't simply go by "Crowgirl." They may also want other details about me, such as my home or business street address. Permissioned blockchain implementations can integrate more traditional cybersecurity measures such as access control lists. Both public and permissioned blockchains have their respective strengths and weaknesses. The financial services industry is generally more interested in permissioned blockchains.
How is the Financial Services Industry Incorporating Blockchain?
A number of big names in the financial sector are working on implementing blockchain technology with the goal of improving their overall cybersecurity. Full details about Utility Settlement Coin (USC) were publicly announced on August 30, 2017. USC is a collaborative effort between HSBC, CIBC, MUFG, Deutsche Bank, Credit Suisse, UBS, State Street, Barclays, NEX, BNY Mellon, and Santander. Blockchain startup Clearmatics is working on the technical development. USC will be a new digital currency standard. UBS head of strategic development and fintech innovation Hyder Jaffrey discussed USC in June 2017. "We think a distributed ledger can help banks better manage risk and increase capital efficiency," Jaffrey explains. "By moving post-trade processes onto a distributed ledger, banks can reduce settlement risk, counterparty risk and market risk.
But in order to do that, the cash that is at the root of everything banks do has to be represented on the ledger. USC is a way of representing cash on a ledger." He goes on to add: "We don't see USC as a cryptocurrency, we see it as 'cryptocash.' It isn't a new currency, it's a way to represent existing currencies like dollars or pounds or euros on a distributed ledger. If a client presents £100 they will be issued the corresponding value in sterling USC and the value would always remain £100. It means the cash is on the ledger and will always be backed by real cash held at the central bank – in much the same way cash is technically a promissory note that used to be backed by physical gold. It helps to think of the world of digital currencies as a spectrum. At one end of it you have bitcoin, which is unregulated and operates outside of government control. At the other end you have central bank digital currencies – digital versions of existing currencies. USC is positioned right in the middle, with some of the benefits of Bitcoin, such as the real-time transfer of value, while taking on some characteristics of 'real money' issued by central banks. It is pegged to those fiat currencies and will always have the same value."
So, USC wouldn't be a cryptocurrency like Bitcoin or Litecoin (LTC). It'll be a digital currency standard that fiat currencies can be transacted through. The banks participating in USC can then implement the 'cryptocash' technology to conduct the kind of financial transactions they've been doing for years.
Interestingly enough, HSBC, Unicredit, KBC, Natixis, Societe Generale, Deutsche Bank, and Rabobank support IBM's Hyperledger Fabric project. That means HSBC and Deutsche Bank are interested in both USC and the Hyperledger Fabric project. This is an exciting development for many reasons. IBM's Hyperledger Fabric project is a trade finance platform which will go through IBM Cloud. The technology is designed to be highly scalable, which could make it easy for many other financial institutions around the world to use the platform. It's an open source framework, so many other developers may be able to improve Hyperledger Fabric's security and functionality as time goes on.
Loyyal is a universal loyalty and rewards platform, built with blockchain and smart contract technology. Loyyal's Chief Architect Shannon Code is one of many developers who have been working with Hyperledger Fabric. He's optimistic about the technology's potential. "The Hyperledger Project is an obvious first step at global adoption and standardization," he notes. "Blockchain and distributed ledger technology can't get the attention it deserves without sharing and discovering the technology's strengths and weaknesses. Loyyal joined the Hyperledger Project early because we understand this need for coopetition. Fabric has done a fantastic job combining distributed ledger technology in a way that can be used to meet the needs of businesses. The focus on security and privacy combined with modularity means that some of the hard questions that get asked now have answers."
R3 is a blockchain consortium which is supported by Wells Fargo, ING, Bank of America Merrill Lynch, Temasek, and SBI Group. Tech giant Intel is also involved. R3 is also a contributor to the Hyperledger Fabric project. Corda is R3's open source financial platform. R3 CEO David Rutter says they're developing an "operating system for finance." Corda will be a blockchain-based platform which banks can use to develop apps.
Clearly, Rutter believes that Corda is the most promising blockchain implementation for the financial services industry. He notes: "Corda is a completely open system that is going to empower entrepreneurs to be able to build Corda apps, roll them out, and actually have them be adopted because they will work with the current financial rails, in a way that is cognizant of and compliant with the regulatory regime. Corda and R3 has just been legitimised by not just a $107 million investment, but we're now majority owned by the world's largest financial institutions. There's no safer bet in the world."
Opportunities and Risks for Blockchain and Finsec
Proper blockchain implementation could do wonders to improve finsec (financial security), and also to improve the functionality and efficiency of banks' digital backends. But like anything else, there are also risks involved. And absolutely nothing is 100% secure. Microsoft just released a report on blockchain's potential for financial cybersecurity, Advancing Blockchain Cybersecurity: Technical and Policy Considerations for the Financial Services Industry (PDF). Microsoft sees a lot of potential in how aspects of permissioned blockchain technology can improve the cybersecurity of the financial services industry. The distributed architecture of permissioned blockchains can improve resiliency against cyberattacks. According to the new Microsoft report:
"The distributed architecture of a permissioned blockchain is an advantage that can deter or minimize the effect of cyber attacks. Threat actors generally prefer to target a centralized database that, once compromised, would infect and destabilize the system as a whole. A distributed network structure, however, provides inherent operational resilience because there is no single point of failure. With the risk of compromise dispersed among various nodes, an attack on one or a small number of participants would not result in the loss or compromise of the ledger stored on computer nodes not subject to attack. This distributed architecture, for example, makes permissioned blockchains less appealing targets for ransomware attacks since a ledger securely stored in multiple nodes is less susceptible to lock down by a hacker than centrally stored information."
The transparent nature of permissioned blockchains is another advantage. Microsoft elaborates:
"Transparency in permissioned blockchain networks provides another degree of cybersecurity protection. For example, the transparency of a permissioned blockchain among participants makes it more challenging for hackers to place malware in the network to collect information and to transmit it covertly to another database managed by the hacker. Because each participant has an identical copy of the ledger, the network creates the opportunity for deploying enhanced compliance processes including, among other things, real-time auditing or monitoring by other participants or by regulators granted limited access to the network. As a result, vulnerabilities and threats may be identified quickly if good risk management and compliance controls are implemented."
Of course, the implementation of encryption is a key security feature of permissioned blockchains. "Permissioned blockchain networks employ multiple forms of encryption at different points, providing multilayered protections against cybersecurity threats...
Strong key management preserves the integrity of the public and private key encryption mechanism, and helps fortify the ledger and the network against cyber attacks."
But permissioned blockchain systems can be quite vulnerable if not implemented with care. Here are some of the risks which Microsoft identifies. Any cryptographic system is only as good as its key management:
"Perhaps the single most important risk to blockchain security is key management. Maintaining the confidentiality, integrity, and availability of private keys requires thoughtful and robust cybersecurity controls. Some individuals reportedly have lost or misplaced their private keys, resulting in the loss of assets stored on a blockchain because private keys, by design, are not recoverable. To minimize individual mistakes, service providers, including digital wallet providers and CSPs (communications service providers), have emerged to provide key management services, which has become a critical feature of all types of blockchains. To date, the majority of cyber attacks related to blockchains have not attacked the blockchains themselves, but have targeted providers of key management services in attempts to steal private keys."
There are, however, software vulnerabilities in everything. Permissioned blockchain implementation can only be reasonably effective if care is taken to develop secure code.
"As with any computer IT system, human coding errors can introduce cybersecurity risk into blockchains. Permissioned blockchains are built on software code, as are numerous off-chain applications that interface with such blockchains. No software is 100% free from defects, and any defect has the potential to be exploited to compromise a cybersecurity program. For example, hackers in 2016 exploited a coding defect in the source code of a virtual company, known as the Distributed Autonomous Organization (DAO), which resulted in the theft of $55 million."
Attack vectors are always evolving. Permissioned blockchain systems can only maintain adequate security if the entities which implement them keep on their toes. Security is a process, not a product!
"It is reasonable to expect new strategies and threats to emerge to exploit unforeseen vulnerabilities in blockchains. One longer-term risk that is gaining attention among observers is the possibility of quantum computing-based attacks that leverage enhanced computational power to weaken or compromise existing cryptographic algorithms used in existing IT systems and in blockchains. As a general matter, all participants in blockchain systems need continuing education to anticipate and protect against threats from new attack vectors, and to adapt and upgrade security protocols as necessary to ensure the success and viability of the network."
Is Blockchain the Future of Finsec?
The implementation of permissioned blockchain systems in financial security looks really promising. Perhaps sooner rather than later, my own checking account transactions will involve Canadian Dollars, American Dollars, and British Pounds going through USC. I may at some point use an app that my bank developed with Corda. It's also quite likely that my transactions will go through Hyperledger Fabric, and I won't even be able to tell as a mere consumer. If all of these technologies are developed and implemented properly, my money and financial activities may be more secure from cyberattacks. And I'm just a private individual, not a business. But there's an awful lot of hype about blockchain in general.
It’s vital that the financial services industry understands that permissioned blockchain systems aren’t a panacea against all cyberattacks, and they’ll only be effectively secure if they’re implemented and maintained with tremendous vigilance.
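To make the chaining described earlier concrete, here is a minimal Python sketch of blocks linked by SHA-256 hashes. It is illustrative only: production ledgers add consensus, digital signatures, and Merkle trees on top of this basic idea.

```python
# Minimal hash-chained ledger sketch using the standard library.
import hashlib, json, time

def make_block(transactions, prev_hash):
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

genesis = make_block(["genesis"], prev_hash="0" * 64)
block1 = make_block(["alice pays bob 10"], prev_hash=genesis["hash"])

# Tampering with an earlier block breaks every later link:
genesis["transactions"] = ["genesis", "forged entry"]
payload = {k: v for k, v in genesis.items() if k != "hash"}
recomputed = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
print(recomputed == block1["prev_hash"])  # False: the chain detects the edit
```

Editing any historical block changes its hash, which no longer matches the prev_hash stored in its successor; that is the integrity property the CIA-triad discussion above refers to.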
<urn:uuid:29d58464-cdbf-4d47-87b5-e537d665f05d>
CC-MAIN-2022-40
https://blogs.blackberry.com/en/2018/07/will-blockchain-improve-financial-cybersecurity
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00534.warc.gz
en
0.932278
2,844
3.0625
3
This is the second of a series of articles that introduces and explains API security threats, challenges, and solutions for participants in software development, operations, and protection.
Growth of APIs
When Salesforce and eBay became the first major Internet players to focus on making their systems available to external programs via an API (versus traditional means such as a command line interface), they ushered in a new era of so-called open computing. Rather than close off their software to the external world, as was the general practice before 2000, open computing encouraged systems to let others connect their software directly. As one might guess, the result was explosive growth on the Internet. Imagine, for example, how difficult it might have been for Amazon (which published its first API just after Salesforce and eBay) to have grown so quickly if it had walled off its applications from other systems on the Internet. Without open computing, it would have had trouble integrating security protections, purchasing partners, supply chain management, authentication services, and so on. All the things we have come to expect from a modern Internet service now depend on open computing and APIs.
More recently, API usage has seen even greater exponential growth, driven by several factors – the first of which is the ubiquitous mobile device. By making the Internet accessible anywhere, anytime, and to everyone, mobility increased the demand for more connected and integrated services. It's hard to imagine API-heavy services such as Salesforce, eBay, and Amazon experiencing such great success without the explosion of mobile device usage. Additional factors driving API usage might be less familiar to normal users. Software designers have moved, for example, to modular applications with standard interfaces, which makes it easier for them to add features more quickly and to iterate more rapidly during software development. Network architects have also begun to adopt an approach known as a service mesh, which depends on hyper-connectivity between software workloads. As one might expect, this connectivity is achieved through the use of APIs.
Invention of the REST API
In 2000, Roy Fielding completed his PhD at the University of California at Irvine. His PhD thesis, unlike most such works, includes arguably the first meaningful description of what we would now refer to as an API. Specifically, "Architectural Styles and the Design of Network-Based Software Architectures" ushered in a new era of programming style for the web, using a technique referred to as Representational State Transfer, or REST. The specific details of REST APIs are beyond the scope of this short summary, but we can outline some of the more salient constraints that help define this uniform set of software connector interfaces. The first design constraint in the REST style of programming involves stateless processing for all client-server interactions. By reducing API requests to a single transaction (versus including history), it becomes much easier to create proper "visibility, reliability, and scalability," as Fielding explains in his thesis. Additionally, cache constraints are added to the REST API model to reduce the latency of interactions. The most central design constraint of REST APIs, however, is the uniformity of the interfaces that is inherent in the overall design.
This is complemented by design layering, which reduces the complexity at a given layer (via abstraction of lower layers), and code-on-demand, which "allows client functionality to be extended by downloading and executing code in the form of applets or scripts," again, as Fielding describes in his work. The implications of REST API design from Fielding's PhD thesis were immediately felt across the entire web community. Soon after publication of the thesis, companies like Salesforce and eBay began to demonstrate how the programming style and associated uniform connector model could substantially increase their reach to the web. They quickly saw that APIs not only made their interfaces more standard, but made the services they provided to the external community much more accessible and more popular.
Contributing author: Matthew Keil, Director of Product Marketing, Cequence.
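As a small illustration of the stateless constraint, here is a hedged Python sketch using the requests library against a hypothetical endpoint; every call carries all the context (credentials and paging state) the server needs, so no session history accumulates server-side.

```python
# Hedged sketch: a self-contained, stateless REST call. The base URL,
# token, and parameters are assumptions for the example.
import requests

BASE = "https://api.example.com/v1"
TOKEN = "demo-token"  # assumed token-based auth

def list_orders(page):
    # Identical requests are self-contained and cacheable; the server
    # never has to remember which page this client saw last.
    resp = requests.get(
        f"{BASE}/orders",
        params={"page": page, "per_page": 50},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(list_orders(page=1))
```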
<urn:uuid:f9c8d8ac-6ce4-47b3-9033-b1d3c4ce876b>
CC-MAIN-2022-40
https://www.helpnetsecurity.com/2020/05/01/growth-of-apis/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00534.warc.gz
en
0.96532
825
2.578125
3
More about PCE: Path computation in the SDN world
Increasingly, network operators and service providers are adopting Traffic Engineering (TE) and WAN Software-Defined Networking (SDN) to optimize their IP/MPLS networks and provide better services to their customers. Following up on our introduction to Path Computation Element (PCE), here's how it facilitates TE and SDN in multi-domain, multi-layer carrier networks.
Why do we need PCE?
In MPLS-TE tunnels, path computation is done by the provider's head-end (ingress) router that receives the path request from the customer. But computing constraint-based paths for different flows can overwhelm the router's CPU, thereby affecting overall routing performance. Path computation becomes even more complex when it involves inter-domain and inter-AS routing and multi-layer networks. This creates the need for a dedicated path computation server that can compute paths within large, complex, meshed networks as well as across domains, irrespective of the underlying network layer. The separation of path computation from the router can be brought about using a PCE. PCE architecture allows for the path computation to be done on the router or on a separate server, including a network management system. The architecture also allows a PCE in one domain to communicate with PCEs in other domains, enabling it to compute end-to-end paths spanning multiple domains.
Path computation and SDN
By extending the concept of PCE to SDN, an SDN controller or domain orchestrator, which originally does not have knowledge of the paths across domains, gains the ability to compute end-to-end paths across multi-domain, multi-layer networks. Here's how: SDN architecture allows a software controller to manage the flow of packets from source to destination without the router having to make intelligent forwarding decisions. To determine how traffic will flow across multiple network nodes and domains, the SDN controller must have the ability to compute multiple end-to-end paths for different traffic flows, some spanning different domains, while also considering constraints such as bandwidth, QoS, and latency requirements.
Network operators, especially service providers, face a few challenges when implementing SDN in their IP/MPLS networks for traffic engineering purposes. To start, for an SDN protocol such as OpenFlow to relay end-to-end path information to routers, the OpenFlow controller must understand the concept of MPLS forwarding and have all the logic of an MPLS router implemented in it. Another major challenge is that SDN requires all the network nodes along the path to support the SDN protocol in use. This might require replacing or upgrading all network nodes, potentially increasing expenses, downtime, and change management risks.
With a dedicated PCE server, the path is computed and sent to the head-end router to enable source-routing-based forwarding of packets. Running PCE from a dedicated server also avoids overloading the processors in head-end routers. Further, PCE provides all existing MPLS-TE functionality without the need to deploy protocols such as RSVP-TE and the associated overhead of running additional protocols in the network. Finally, with PCE, only the head-end router needs to be upgraded to understand the path computation messages, thereby saving time and significant expense for the provider. Adoption of PCE with SDN also paves the way for the Path Computation Element Communication Protocol (PCEP) to become an SDN protocol.
As such, PCEP can be used by SDN controllers for path computation, or by an SDN orchestrator to interact with other SDN controllers and provision different types of paths: an end-to-end Label Switched Path (LSP), a segment of an LSP, or even forwarding instructions for a single node. In fact, extending the PCE components to function as an SDN central controller allows an existing network to easily evolve into an SDN-enabled network with minimal changes to the current infrastructure. This concept, known as PCECC, is discussed in more detail here: https://www.ietf.org/proceedings/88/slides/slides-88-pce-5.pdf
The increasing adoption of PCE in service provider networks has led SDN controllers such as ONOS to implement PCE and PCECC in their releases to provide IP/MPLS SDN capabilities. If you're ready for PCE and automation with SDN, Blue Planet Route Optimization and Assurance (ROA) includes PCE capabilities and can work with any SDN controller or network orchestrator to automate service provisioning and congestion avoidance.
This content was originally published on the Packet Design blog and has been updated since the acquisition by Blue Planet.
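A toy version of what a PCE does internally might look like the following Python sketch: a shortest-path search that only considers links satisfying a bandwidth constraint. The topology and figures are invented, and a real PCE also handles latency, shared-risk groups, and multi-domain state.

```python
# Hedged sketch of constraint-based path computation.
import heapq

# (neighbor, cost, available_bandwidth_gbps) per node -- invented topology
TOPOLOGY = {
    "A": [("B", 1, 10), ("C", 4, 40)],
    "B": [("D", 5, 10), ("C", 1, 2)],
    "C": [("D", 1, 40)],
    "D": [],
}

def compute_path(src, dst, min_bw):
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, link_cost, bw in TOPOLOGY[node]:
            if bw >= min_bw:  # prune links violating the constraint
                heapq.heappush(queue, (cost + link_cost, nbr, path + [nbr]))
    return None

print(compute_path("A", "D", min_bw=20))  # (5, ['A', 'C', 'D'])
```

With min_bw=20, the nominally cheaper route through B is pruned because its links lack the bandwidth, and the computation returns A-C-D instead.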
<urn:uuid:2fb43902-4af3-4a21-bcfa-f8dc0bdcee6c>
CC-MAIN-2022-40
https://www.blueplanet.com/blog/More-about-PCE-Path-computation-in-the-SDN-world.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00534.warc.gz
en
0.90492
997
2.796875
3
Homes and businesses worldwide are vulnerable to attacks from cyber thieves and other bad actors – and not just through their computer networks. The embedded electronics inside appliances present an easy path of entry. According to Business Insider and Proofpoint, one of the first refrigerator incidents occurred in late 2013, when a refrigerator-based botnet was used to attack businesses. Some of these attacks, such as infecting appliances with botnet malware, don't really have much effect upon a family's security and safety. In fact, if a "smart" refrigerator gets infected by a bot, the homeowner might not even notice anything wrong. However, connected-appliance-based cyberattacks are not limited to just refrigerators – and they are rarely one-off incidents. Almost any type of appliance can be hacked and used to host a botnet that could attack the web. According to Wired Magazine, a botnet of compromised water heaters, space heaters, air conditioners and other big power-consuming home appliances could suddenly turn on simultaneously, creating an immense power draw that could cripple the country's power grid.
A bot, quite simply, is an infected computer. Many cyberattacks, like the Mirai malware and the Dyn attacks, infect a network of computers, including "smart" connected devices such as home appliances, security cameras, baby monitors, air conditioning/heating controls, televisions, etc., and turn them all into compromised servers. These compromised servers then act as nodes in an attack and together create a botnet. They can participate in a variety of coordinated attacks, infecting other devices and expanding the network of bots, or participating in denial-of-service attacks. Without any apparent symptoms or notice, a criminally enhanced refrigerator could be generating and sending out thousands of attacks every minute. The homeowner or business manager may never realize what is going on, and these attacks may be unstoppable until the machine itself is disconnected from its web connection. Additionally, the infected refrigerator could spread malware from the kitchen to the home's "smart" TVs, to the home's computer networks, to other smart devices in the home, and even to connected smart phones. Every target could be transformed into a malicious bot that distributes millions of infected spam messages and cyber-attacks.
So How Do We Combat This Threat?
Unfortunately, end users really have no power to fix this problem. There is probably no way for a homeowner, office manager – or even an experienced refrigerator repair person – to talk to a refrigerator's electronics. No way to get into the appliance's software and middleware to identify and kill an infection. However, if the homeowner suspects an infection, they could disconnect the refrigerator from its internet connection to make it "dumb" again.
So how do manufacturers combat this type of attack? How can they ensure that appliances in homes and offices do not get infected to cause mayhem? Security starts in the design process for the refrigerator itself, as well as for the appliance's various electronic components and control surfaces. Most appliance manufacturers get their control sub-assemblies from a wide network of smaller manufacturers, sometimes with a worldwide supply chain. These suppliers need to make sure that the chips and sub-assemblies they use are secure from hacks. Two important security practices can be utilized by appliance makers:
- An Embedded Firewall with blacklist and whitelist support.
Most consumer and device manufacturers have heard about the potential for attacks on smart devices like door locks, baby monitors, and home thermostats, but this risk awareness needs to expand to all types of connected systems – including appliances. An infected refrigerator sending out malware is not just a funny story: these systems have been attacked and used to spread malware. Ensuring the security of these devices is necessary to protect home networks, slow the spread of malware, and even protect credit card numbers and other personal data stored in smart home devices.
<urn:uuid:32b213a0-7d44-49d1-a989-8f4b06fe101b>
CC-MAIN-2022-40
https://www.mbtmag.com/home/blog/21102311/when-refrigerators-attack-how-criminals-infect-appliances-but-mfrs-can-stop-them
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00534.warc.gz
en
0.939986
911
2.859375
3
It has been a long road already, and the future holds a lot of uncertainty. While the struggle of governments with the pandemic is plain to see, this war was for the most part waged in uncharted territory dotted with unknowns – so it's worth looking at the trials and failures, the weak sides and the strong. How did governments deal with digital tools in their battles against the epidemic? What were the problems and challenges of the public sector in this area? Did any country fare significantly better than others, and why are all eyes turning to Germany?

The public sector vs the pandemic

Whereas businesses mainly had to maintain their continuity and secure their economic results – or, sometimes, seize new opportunities presented by the pandemic – the challenge for the public sector could be summarized in one sentence: stop the epidemic from developing in order to protect human life and health, and be prepared for problems emerging in the course of doing so. But to assess the activities of the public sector and look at the problems encountered, it is necessary to zoom in to a more detailed level. Broadly speaking, the most important tasks that fell on the shoulders of public authorities during the pandemic have been:

- Reducing the number of new infections
- Providing the resources needed to treat as many patients as possible
- Minimizing the negative social and economic effects of epidemic restrictions
- Convincing as many people as possible to get vaccinated and organizing an effective vaccination process

What do all the above have in common? All of these tasks can be described with numbers. And it is the numbers – and getting them right – that became one of the major challenges for the public sector in fighting the epidemic. The tasks of the public sector related to the management of the epidemic crisis were carried out on the basis of poor-quality data. What does that mean? In short: there were no common definitions of basic terms, no common methodologies for gathering the data, and no transparent procedures.

In terms of data, what went wrong with the tasks carried out by the public sector?

- In reducing the number of new infections: Statistics on infections have been severely tainted by comparability issues since the pandemic started. In some countries they are based on screening tests; other countries test only those who report COVID-19 symptoms. It is impossible to draw rational conclusions from comparing statistics produced under these two different policies – impossible to tell which country's epidemic is developing faster or which is more effective at preventing it. When you add the differing sensitivity of the tests and their types (antibody and PCR), a rather disturbing conclusion arises: regarding the pace of the epidemic, the contagiousness of the virus and how it penetrates populations, we are still in the dark.
- In providing the resources needed to treat as many patients as possible: To follow the development of the epidemic, we describe it with statistics. We observe the number of infections, the number of recoveries and, of course, the death rate. And here comes the next data-related problem – different states have adopted different definitions of death after contracting COVID-19. For example, if someone with coronary heart disease died of a heart attack while infected with the coronavirus, in one country they would be classified as a COVID-19 victim, and in another as a victim of a heart attack.
This means that today's staggering 3.5 million deaths from COVID-19 worldwide shown on worldometers.info is really just a rough approximation.

- In minimizing the negative social and economic effects of epidemic restrictions: There is an ongoing debate as to whether lockdown costs outweigh the health benefits. The rise in unemployment due to the closure of many companies, and limited access to health care, have been taking their toll for over a year now, and researchers from Harvard Medical School, Johns Hopkins University, and Duke University have shown this can be measured. According to their calculations, unemployment caused by lockdowns will result in more than 0.8 million additional deaths over the next 15 years in the United States alone. Are governments measuring the social and economic costs of epidemic restrictions? In most countries, cyclical lockdowns have been the only way to contain the virus for over a year.
- In convincing as many people as possible to get vaccinated and coordinating an effective vaccination process: Vaccinations against COVID-19 are progressing at varying rates. Countries are struggling with the supply of vaccines and face organizational difficulties. According to global research conducted by Gallup, over 1 billion people expressed reluctance to vaccinate – such a staggering number must be partly due to a lack of trust in governments.

Germany – a success story on the pandemic battlefield?

Over the long run of the pandemic, Germany made several important decisions and developed relatively successful procedures to manage it.

- Cooperation of key health and science institutions. Both local and national public health institutions, as well as partners from the scientific community, developed analyses and collected data on an ongoing basis. Already at the beginning of the pandemic, national crisis management was mobilized to understand the epidemiology of the coronavirus.
- The government also mobilized state-run and private laboratories to rapidly increase the volume of tests. One of the first tests was carried out in a hospital in Berlin. Subsequently, Germany became a leader in RT-PCR testing, which is now the standard method for diagnosing COVID-19.
- Germany implemented additional security measures to minimize transmission in long-term care facilities. This and other measures significantly reduced the infection rate among Germans aged 70 and over.

All this translates into an overall reduced fatality rate, and a relaxation of restrictions that did not result in significant recurrences of the epidemic. As of May 2020, the death rate amounted to 4.6 percent, compared with 13.1 percent and 12 percent in Italy and Spain, respectively. South Korea is portrayed as an equivalent example to Germany in terms of managing to protect the over-70 population from infection (11 percent of all cases). Such data demonstrates significant success in isolating the highest-risk groups.

In April 2020, the Robert Koch Institute, together with SAS, announced the creation of an information and forecasting platform for intensive care beds with ventilators that provides an overview of existing capacity as well as demand. The platform is an example of how analytical software can help solve one of the greatest challenges during a pandemic like SARS-CoV-2: coordinating intensive care based on forecasting, so that personnel and resources are available exactly where and – most importantly – when they are needed.
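The article does not describe the platform's internals, but the kind of regional aggregation such a system performs can be sketched in a few lines. The records and field names below are invented for illustration and are not the RKI/DIVI schema:

```python
import pandas as pd

# Made-up hospital reports; the real platform ingests ~1,300 hospitals
# in near real time.
reports = pd.DataFrame([
    {"hospital": "A", "region": "Berlin", "icu_total": 40, "icu_free": 5},
    {"hospital": "B", "region": "Berlin", "icu_total": 25, "icu_free": 0},
    {"hospital": "C", "region": "Bayern", "icu_total": 60, "icu_free": 18},
])

# Aggregate capacity per region and flag likely bottlenecks.
by_region = reports.groupby("region")[["icu_total", "icu_free"]].sum()
by_region["occupancy"] = 1 - by_region["icu_free"] / by_region["icu_total"]
print(by_region[by_region["occupancy"] > 0.9])  # regions that may need transfers
```

Even this toy version shows why standardized reporting matters: the aggregation is only meaningful if every hospital counts "free ICU beds" the same way.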
As C&F, we took part in the project, covering the platform's maintenance and performing its quite sophisticated security audit. SAS's solution enables the management of resources that are instrumental in treating COVID-19 patients – and therefore helps avoid scenarios known from the first wave of the pandemic, when among the main causes of the high mortality rate were limited access to resources (e.g. oxygen and ventilators), the formation of local "bottlenecks", and the lack of central information systems to support operational management. The system now used in Germany is in fact a real-time data acquisition and analysis environment for intensive care unit (ICU) bed capacities and aggregated case numbers. It covers 1,298 hospitals and provides real-time information on available COVID-19 resources at all organizational levels.

We are still far from announcing the end of the crisis – the epidemic is still ongoing – although we can hope that it is slowly coming to an end. Certainly, however, we can already draw conclusions for the future. Scientists say that there is a risk of further pandemics, and we have to be better prepared. A great deal can be done in the area of data use which, as the Germans have proved, can be of great help. Getting rid of factors that negatively impact data quality is particularly important for health system capacity (ICU and HCP capacity), the number of total and new deaths, and the normalization of infection counts and geographical variances. Authorities should make sure the collection of critical data rests on solid foundations: simple but up-to-date databases, a rigorous data collection process, and efficient data reporting and democratization. This approach, enhanced by regionally or globally standardized data collection, reporting and analysis, will lead to the public sector obtaining a true picture of the situation and the ability to make rational decisions. Data-driven decisions.

This article was prepared in close cooperation with Maciej Kornacki. His knowledge and experience constitute a strong contribution to this article and the entire project.
<urn:uuid:35eeb835-f8cb-46f4-aeec-5d8ef188ffd4>
CC-MAIN-2022-40
https://candf.com/articles/data-driven-public-healthcare-resources-management-done-right/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00534.warc.gz
en
0.954858
1,807
2.8125
3
At Fort Belvoir in Springfield, Va., less than 20 miles from downtown Washington, D.C., sits the massive headquarters of the National Geospatial-Intelligence Agency, where intelligence analysts pore over satellite images of everything from natural disasters to missile equipment and the movements of terrorist networks. NGA analysts at the 2.3 million-square-foot headquarters collect and examine geospatial intelligence and distribute data to the Defense Department and national security community. And in the not-too-distant future, they may have a lot more help from computers.

NGA is one of the federal agencies most interested in computer vision technology. Director Robert Cardillo has said on the record several times that he hopes to use the technology to augment (not replace) human analysts. The goal is to use the artificial intelligence that underpins computer vision to automate certain kinds of image analysis, which will free up analysts to perform higher-level work.

Computer vision is more sophisticated than traditional image processing, but it is still a relatively nascent technology, certainly within the federal government. Although it is promoted by tech giants such as Google and Microsoft, the government is still in the exploratory stages of adopting computer vision. However, the technology has the potential to reshape how certain agencies achieve their missions, especially those that involve object detection and threat analysis.

In May, the White House convened a summit on AI and established a Select Committee on Artificial Intelligence under the National Science and Technology Council. The committee will advise the White House on interagency AI R&D priorities; consider the creation of federal partnerships with industry and academia; establish structures to improve government planning and coordination of AI R&D; and identify opportunities to leverage federal data and computational resources to support our national AI R&D ecosystem. Notably, the federal AI activities the committee is charged with coordinating include "those related to autonomous systems, biometric identification, computer vision, human-computer interactions, machine learning, natural language processing, and robotics," according to the AI summit's summary report.

Computer vision is clearly on the minds of many across the government. But what is the technology, how is it different from image processing, and how might federal agencies use it to advance their missions?

What Is Computer Vision Technology?

Computer vision is not one technology but several, combined to form a new kind of tool. In the end, it is a method for acquiring, processing and analyzing images, and can automate, through machine learning techniques, what human visual analysis can perform. One way to imagine computer vision technology, industry analysts say, is as a stool with three legs: sensing hardware, software (algorithms, specifically) and the data sets they produce when combined.

First, there is the hardware: camera sensors that acquire images. These can be on surveillance cameras or satellites and other monitoring systems. "The majority of use cases that are out there incorporate looking at video," says Carrie Solinger, a senior research analyst for IDC's cognitive/artificial intelligence systems and content analytics research.

Computer vision technology can aid agencies in object detection.
Photo: ShashiBellamkonda, Flickr/Creative Commons

However, Solinger says, the analysis is often retrospective as opposed to real-time streaming, because the software to perform that kind of analysis is not mature enough yet. The computing power required to analyze images has advanced significantly, but Solinger contends the algorithms haven't quite caught up. However, she notes that Google, Microsoft and other tech companies are trying to build algorithms that process images in real time or near real time.

Werner Goertz, a Gartner analyst who covers AI, says that algorithms continue to exploit improving camera and sensor technology. "The data sets are a result of the commoditization of cameras, the corresponding improvements algorithmically," he says. "All of this leads to more and more data sets." The more computer vision systems are deployed, Goertz notes, the more opportunity exists to create, define and exploit the data sets they create. Large technology companies "are all recognizing that we are on the threshold where this stuff can scale, can become affordable and can become a disruptive and major part of all of our lives, because it's going to affect us all in one shape or form."

Computer Vision vs. Image Processing: Understand the Difference

What separates computer vision from image processing? Image processing works off of rules-based engines, Goertz notes. For example, one can apply rules to a digital image to highlight certain colors or aspects of the image. Those rules generate a final image. Computer vision, on the other hand, is fueled by machine learning algorithms and AI principles. Rules do not govern the outcome of the image analysis – machine learning does. And with each processing of an image by the algorithms that underpin computer vision platforms, the computer refines its techniques and improves. This, Goertz notes, means that computer vision results "in a higher and higher probability of a correct interpretation the more times you use it."

Solinger adds that the major difference between computer vision and image processing is that image processing is actually a step in a computer vision process. The main difference, she says, is "the methods, not the goals." Computer vision encompasses hardware and software. Image processing tools look at images and pull out metadata, and then allow users to make changes to the images and render them how they want. Computer vision uses image processing, and then uses algorithms to generate data for computer vision use, Solinger says.

How Are Computer Vision Algorithms Used?

The most sophisticated computer vision algorithms are based on a kind of artificial intelligence known as a convolutional neural network. A CNN is a type of artificial neural network, and is most often used to analyze visual imagery. What computer vision algorithms bring to the table, Goertz contends, is scalability and the ability to memorize outcomes. "We no longer need to capture and store large amounts of video data," he says. Instead, computer systems can store the fact that a person was present at a certain location at a certain time and wandered from point A to point B. Computer vision can automate the process, extract metadata about an image or video, and then store the metadata without the image having to be stored. "We'll have to then think about where it's desirable for this information to be captured," he says.
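The core operation of a CNN is sliding small filters over an image; in a real network the filter weights are learned from data. The following toy sketch – not any agency's pipeline – applies one fixed edge-detecting filter to show the mechanics:

```python
import numpy as np

def convolve2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid-mode 2D convolution, the building block of a CNN layer."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((6, 6)); image[:, 3:] = 1.0  # toy image: dark and bright halves
edge_filter = np.array([[-1.0, 1.0]])         # responds to vertical edges
print(convolve2d(image, edge_filter))         # peaks at the boundary column
```

A rules-based image processor would stop at an output like this; a CNN stacks many such filters and adjusts their weights during training, which is what lets its interpretations improve with use.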
There are still biases that affect computer vision systems, especially for facial recognition, Solinger says. Many are able to identify white males with about 90 percent accuracy, but falter when trying to recognize women or people of other races. Computer vision systems are obviously not totally accurate. Indeed, last week, Amazon's facial recognition tools "incorrectly identified Rep. John Lewis (D-Ga.) and 27 other members of Congress as people arrested for a crime during a test commissioned by the American Civil Liberties Union of Northern California," the Washington Post reports.

Computer vision can be used not just for facial recognition but for object detection, Solinger says. Computer vision can also explore the emotion and intent of individuals. For example, if surveillance cameras capture footage of an individual who is in a database of those considered by law enforcement to be a threat, and that person is walking toward a federal building, computer vision tools could analyze the person's gait and determine that they are leaning heavily to one side, which could mean the individual is carrying a bomb or other dangerous object, she says.

NGA, TSA Interested in Using Computer Vision to Enhance Missions

NGA Director Robert Cardillo has been a proponent of computer vision. In June 2017, at a conference in San Francisco hosted by the United States Geospatial Intelligence Foundation, Cardillo said that at some point computers may perform 75 percent of the tasks NGA analysts currently do, according to a Foreign Policy report. At the time, Cardillo said the NGA workforce was "skeptical," if not "cynical" or "downright mad," about the idea of computer vision technology becoming more of a presence in analysts' work and potentially replacing them. According to Foreign Policy, Cardillo said he sees AI as a "transforming opportunity for the profession" and is trying to show analysts that the technology is "not all smoke and mirrors." The message Cardillo wants to get across is that computer vision "isn't to get rid of you – it's there to elevate you. … It's about giving you a higher-level role to do the harder things."

Cardillo wants machine learning to help analysts study the vast amounts of imagery of the Earth's surface. Foreign Policy reports: "Instead of analysts staring at millions of images of coastlines and beachfronts, computers could digitally pore over images, calculating baselines for elevation and other features of the landscape. NGA's goal is to establish a 'pattern of life' for the surfaces of the Earth to be able to detect when that pattern changes, rather than looking for specific people or objects."

And in September, Cardillo said the NGA was in talks with Congress to swap years' worth of historical data the agency holds for computer vision technology from private industry. Such a "public-private partnership" could help the agency overcome its challenges in deploying AI, Federal News Radio reports.

An interior view of the atrium of the National Geospatial-Intelligence Agency Campus East. Photo: Marc Barnes, Flickr/Creative Commons

"The proposition is, we have labeled data sets that are decades old that we know have value for those that are pursuing artificial intelligence, computer vision, algorithmic development to automate some of the interpretation that was done strictly by humans in my era of being an analyst," he said at an intelligence conference hosted by Georgetown University, according to Federal News Radio.
"And so that partnership is one that we're discussing with the Hill now to make sure we can do it fairly and openly."

Solinger notes that computer vision can help agencies categorize and analyze images, because manually coding everything can be extremely time-consuming and expensive. "The federal government is sitting on tons and tons and tons of image data, and they have all of this historical data that they could process that can help them understand different security concerns or learn from past mistakes," she says.

Meanwhile, in May, the Transportation Security Administration and the Department of Homeland Security's Science and Technology Directorate released a solicitation for new and innovative technologies to enhance security screening at airports. The solicitation, under S&T's Silicon Valley Innovation Program, is called "Object Recognition and Adaptive Algorithms in Passenger Property Screening." TSA says its goal is to "automate the detection decision for all threat items to the greatest extent possible through the application of artificial intelligence techniques."

Analysts say that this can be done but will require TSA to take other measures. "To detect nefarious objects or materials in luggage or even on persons, it will probably take more than traditional camera sensors within the spectrum of light that humans can capture," Goertz says. Solinger says TSA will need to feed real-time 3D videos into a computer vision system for it to accurately detect threats in luggage or cargo. "You can't look at a flat image," she says, adding that "it is going to be prudent to have that information in real time."

And in June, the DHS S&T unit said it is "looking to equip drones with different sensors useful in search-and-rescue, reconnaissance, active shooter response, hostage rescue situations, and a myriad of border security scenarios." Throughout 2018, S&T will be selecting commercially available sensors and will demonstrate them at Camp Shelby in Mississippi, according to the agency. Notably, DHS says that on land, "some drones may need to be able to use a variety of computer-vision enhancements and navigational tools as well as specialized deployment methods (i.e.: truck-mounted, back-packable, tethered, etc.)."
<urn:uuid:69f54b8d-1e8a-44a4-b5f9-3896c3af9a51>
CC-MAIN-2022-40
https://fedtechmagazine.com/article/2018/08/computer-vision-how-feds-can-use-ai-advance-beyond-image-processing-perfcon
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00734.warc.gz
en
0.935959
2,563
2.859375
3
What motivates modern hackers? Ever wondered why hackers do what they do? Thycotic, a software firm specializing in privileged access password protection, conducted a survey of 127 hackers at Black Hat USA 2014 to try to understand their thinking.

The company found that more than half of the hackers (51 percent) were driven by the fun and thrill of it, while 19 percent were in it for the money. Few hackers fear getting caught, with 86 percent confident they will never face repercussions for their activities. Some 99 percent said they believed that simplistic hacking tactics such as phishing are still effective, and when asked which types of employees they would most likely target first in order to gain login credentials for a particular company, 40 percent said they would start with a contractor. A smart move, given that Edward Snowden was a contractor and used his privileged access to steal sensitive NSA documents.

"The motivations and inner workings of today's hacker community have always been somewhat mysterious, but the damage they can do to an enterprise is painfully clear," said Jonathan Cogley, founder and CEO of Thycotic. "Understanding why hackers do what they do is the first step as IT security teams take measures to better control and monitor access to company secrets. Organizations need to do a better job of protecting the passwords and privileged login credentials associated with contractors and IT administrators, as these employees are a huge target for cybercriminal activity."

The full findings were released in infographic form.
<urn:uuid:088444f4-405c-4820-9396-4ae333a50458>
CC-MAIN-2022-40
https://betanews.com/2014/08/14/what-motivates-modern-hackers/?ref=hackernoon.com
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00734.warc.gz
en
0.969237
296
2.546875
3
October 12, 2020 – The Supreme Court on Wednesday publicly struggled with the copyrightability of software in a hotly contested case between Google and Oracle, the outcome of which could play a significant role in the future of software development in the United States.

The oral arguments were the culmination of a battle that started 10 years ago, when tech company Oracle accused Google of illegally copying its code. Oracle owns the copyright to the Java application programming interface that Google utilized to establish a new mobile operating system, and the company has sued Google for more than $9 billion in damages. Yet Google claimed a "fair use" defense to its copying; Google copied less than 1 percent of the Java code.

Even though the law generally treats computer programs as copyrightable, Google's attorney before the Supreme Court, Thomas Goldstein, said that by adapting Oracle's code to serve a different purpose, Google's use was "transformational" and entitled to fair use protections. Goldstein said that this form of unlicensed copying is completely standard in software, saves developers time and lowers barriers to innovation. He referenced a famous Supreme Court precedent about public domain works, Baker v. Selden, which in 1880 declared that once information is published to the public, the public has a right to use it. "Google had the right to do this," said Goldstein.

Still, Oracle attorney Joshua Rosenkranz asserted that the Java code is an expressive work eligible for copyright protections. Rosenkranz further argued that Google's use of the code was not transformational. Justice Sonia Sotomayor appeared to suggest that jurors in the lower court case properly found Google's use to be transformational because it took the APIs from a desktop environment to smartphones.

"Interfaces have been reused for decades," said Goldstein, adding that Google had to reuse Oracle's code to respond to interoperability demands. "It has always been the understanding that this purely functional, non-creative code that is essentially the glue that keeps computer programs together could be reused, and it would upend that world to rule the other way," he said.

Supreme Court observers said that the high court appeared to be leaning toward upholding the 2016 jury verdict vindicating Google's fair use defense.

Public Knowledge Celebrates 20 Years of Helping Congress Get a Clue on Digital Rights

February 27, 2021 – The non-profit advocacy group Public Knowledge celebrated its twentieth anniversary in a Monday event revolving around the issues that the group has made its hallmark: copyright, open standards and other digital rights issues.

Group founder Gigi Sohn, now a Benton Institute for Broadband and Society senior fellow and public advocate, said that it was through her professional relationship with Laurie Racine, now president of Racine Strategy, that she became "appointed and anointed" to help start the interest group. Together with David Bollier – who had also worked on public interest projects in broadcast media with Sohn, and is now director of the Reinventing the Commons program at the Schumacher Center for a New Economics – the two cofounded a small and scrappy Public Knowledge that has become a non-profit powerhouse.

The secret sauce? Timing, which couldn't have been better, said Sohn. Being given free office space at DuPont Circle at the New America Foundation by Steve Clemons and the late Ted Halstead, then head of the foundation, was instrumental in Public Knowledge's launch.
The cofounders met with major challenges, Sohn and others said. The nationwide tragedy of September 11, 2001, occurred weeks after the group's official founding. The group continued its advocacy of what was then more commonly known as "open source," a related grandparent to today's "net neutrality," she said. In the aftermath of September 11, a bill by the late Sen. Ernest "Fritz" Hollings, D-S.C., represented a bid by large copyright interests to force technology companies to effectively be the copyright police. Additional copyright-maximalist measures were launched almost every month, she said.

Public Knowledge grew into something larger than was probably imagined by the three co-founders. Still, they shared setbacks and losses that accompanied their successes and wins. "We would form alliances with anybody, which meant that sometimes we sided with internet service providers [on issues like copyright] and sometimes we were against them [on issues like telecom]," said Sohn. An ingredient in the interest group's success was its willingness to work with everyone.

Congress didn't have a clue on digital rights

What drove the trio together was a shared view that "Congress had no vision for the future of the internet," explained Sohn. Much of the group's early work was spent explaining to Congress how digitization works, she said. The 2000s were a time of great activity and massive growth in the digital industry, and lawmakers on the Hill were not well acquainted with screens, computers, and the internet. The cofounders took on the role of explaining to members of Congress what the interests of their constituents were when it came to digitization. Public Knowledge helped popularize digital issues and, by "walking [digital information] across the street to [Capitol Hill]," it "created an operational reality with digitization," said Bollier.

Racine remarked on the influence Linux software maker Red Hat had at the time of its initial public offering. She said the founders of Red Hat pushed open source beyond a business model and into a philosophy in ways that hadn't been done before. During the early days of Public Knowledge, all sorts of legacy tech was being rolled out: Apple's iTunes, Windows XP, and the first Xbox launched, while Nokia and Sony were the leaders in cellphones, augmenting the rise of technology in the coming digital age. Racine said consumers needed someone in Washington who could represent their interests amid the new software and hardware and embrace the idea of open source technologies for the future.

Also speaking at the event was Public Knowledge CEO Chris Lewis, who said Public Knowledge has been at the forefront of new technology issues, holding 3D printing symposiums before Congress when the technology was totally unfamiliar.

Fair Use is Essential But its Enforcement is Broken, Says Senate Intellectual Property Subcommittee

July 28, 2020 — "Fair use" is an essential doctrine of copyright law that is unevenly applied, said participants in a Senate Intellectual Property Subcommittee hearing Tuesday. The hearing, "How Does the DMCA Contemplate Limitations and Exceptions Like Fair Use," saw participants discuss whether the Digital Millennium Copyright Act still permits fair uses of copyrighted content that would otherwise be infringing. The DMCA, passed in 1998, criminalizes the manufacture, sale or other distribution of technologies designed to decrypt encoded copyrighted material. This ban on anti-circumvention tools does not appear to account for fair use.
The fair use exception to copyright law allows the republication or redistribution of copyrighted works for commentary, criticism or educational purposes without having to obtain permission from the copyright holder. However, Joseph Gratz, partner at Durie Tangri, said that fair use often clearly applies but is not enforced, leaving users of legally obtained content to deal with automated content censors. "Fair use depends on context, and machines can't consider context," he said. "A video, for example, that incidentally captures a song playing in the background at a political rally or a protest is clearly fair use but may be detected by an automated filter."

When an automated filter detects a song on a platform like YouTube, it redirects advertising revenue from the creator of the video to the creator of the song, often erroneously. Rick Beato, who owns a music education YouTube channel with over one and a half million subscribers, said that he does not receive ad revenue from hundreds of his videos. "One of my recent videos called 'The Mixolydian Mode' was manually claimed by Sony ATV because I played ten seconds of a Beatles song on my acoustic guitar to demonstrate how the melody is derived from the scale," he said. "This is an obvious example of fair use, I would argue."

Grammy-winning recording artist Yolanda Adams testified that she sees the problems of fair use enforcement as about more than simply receiving money. "As a gospel artist, I'm not just an entertainer," she said. "I see my mission as using my gift to spread the gospel — so for me, fair use is not just about money. It's about access."

Digital Millennium Copyright Act Insufficient, Artists Testify in Senate Intellectual Property Subcommittee Hearing

June 3, 2020 — The protections against redistribution of copyrighted content as enumerated in the Digital Millennium Copyright Act are insufficient, said participants in a Senate Intellectual Property Subcommittee hearing on Tuesday. The Subcommittee hosted several artists of various trades to testify about the ways the DMCA has affected them, and many expressed concern at what they see as the legislation's shortcomings.

Don Henley, lead vocalist for the Eagles, said that big tech companies have repeatedly abused the DMCA to illegally use licensed music for free. "When a simple online search for a song returns an endless list of sites that never asked the copyright owner for permission, never received a license and never passed on a penny to the artist for the use of their music, the system is not working," he said.

The DMCA criminalizes the production of equipment or services intended to distribute copyrighted material illegally. At the same time, Section 512 of the law creates a "notice and takedown" process to streamline the removal of allegedly infringing material from tech platforms. So long as the tech platforms follow Section 512's procedures, they remain immune from contributory copyright infringement. This provision of the law has provided a great deal of certainty in the internet content ecosystem. But panelists said that various provisions of the law, signed in 1998, do not properly address the impact of newer digital technologies that can reproduce and redistribute digital media.
Kerry Muzzey, an instrumental soundtrack composer, said that large companies have used his work illegally under the guise of "fair use," and that despite his attempts to remedy the situation and receive payment, the companies have been mostly unresponsive and sometimes hostile. "I began to send DMCA takedown requests on these tens of thousands of uses, and I quickly learned just how broken the DMCA was," Muzzey said. "As I filed takedowns, I began receiving counter-notifications forwarded by YouTube. These notifications were from the companies and organizations using my music, as well as individual YouTube users — all of whom said that their use of my music in their ads, commercials and fundraisers was fair use under U.S. Copyright Law."

Other participants reported similar experiences. Photographer Jeffrey Sedlik claimed that tech companies do not do all they could to protect against illegal use of his photographs. "Instead of using readily available technologies to identify and mitigate copyright infringement," he said, "service providers [ignore] illegal activity, allowing infringers to infringe, exploit and monetize my work with impunity." One of Sedlik's proposed solutions is a government requirement that online service providers perform recognition scans to identify and act on illegal uses.

"This is not the effective, balanced system envisioned by Congress when it enacted the DMCA," he said. "The fact that millions of takedown notices are issued each day is not a sign of success. It is a sign of an unbalanced system under strain and on the verge of failure, if not beyond."
<urn:uuid:8263537b-1d8b-4c52-a026-3094eee60c6a>
CC-MAIN-2022-40
https://broadbandbreakfast.com/2020/10/in-google-v-oracle-supreme-court-hears-landmark-fair-use-case-on-software-copyright/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00734.warc.gz
en
0.945158
2,831
2.6875
3
Due to be submitted to Ofqual next week, OCR's new Computer Science GCSE aims to equip students with the computer skills that could see them become future cyber-spooks for MI5. A major feature of the new qualification is a focus on cyber security, including phishing, malware, firewalls and social engineering. Signalling the first time that cyber security has been taught at GCSE level, pupils will also learn the ethical and legal concerns around computer science technologies.

Since September 2014, compulsory computing in the curriculum has moved away from using applications and toward learning how to create them. Central to this new qualification is "computational thinking," which will represent 60% of the course content. This will involve breaking a complex problem down into smaller parts, establishing a pattern, ignoring unnecessary information and designing a solution through programming. A further 20% of the GCSE will be focused on applying newly learnt programming skills to an independent coding project.

OCR will partner with Codio, a specialist education technology company, to provide schools with a cloud-based programming and course content platform where students can learn the theory and apply it in real-life situations, in any computing language. The platform will not only help school students with programming, but also support teachers in enhancing their own computer science knowledge and skills.

Rob Leeman, Subject Specialist for Computer Science and ICT at OCR, said: "This specification builds on OCR's pioneering qualification development in this subject area. We have consulted with companies such as Google, Microsoft and Cisco, as well as teachers and higher education academics and organisations like Computing At School (CAS) to ensure that the content is relevant.

"There is growing demand for digital skills worldwide. Whether students fancy themselves as the next cyber-spook, Mark Zuckerberg or Linus Torvalds, our new qualification will be the first exciting step towards any career that requires competence in computing."

James Lyne, Global Head of Research at Sophos, said: "The specific inclusion of cyber security in the GCSE curriculum is long overdue so this is a welcome move from OCR. Not only is it key that we develop skilled professionals to close the existing skills gap and protect our future technology and infrastructure, but being able to secure yourself is a key skill for any member of modern society who is connected to the Internet, from a very early age onwards."
<urn:uuid:096118f4-a625-4b1c-b659-4884645603a1>
CC-MAIN-2022-40
https://techmonitor.ai/technology/cybersecurity/cyber-security-gcse-to-equip-future-cyber-spooks-4582929
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00734.warc.gz
en
0.943078
497
3.21875
3
By David Ashamalla, Director of Security Operations

What is Phishing?

Phishing (pronounced "fishing") is the art of sending emails under a false name (misrepresenting the actual sender) in order to trick an end user into voluntarily giving up personal information, running a program, or granting access to funds. These attacks are frequent, and they take advantage of the fact that sending email is an inexpensive proposition.

Early versions of phishing were slightly more suspicious, as attackers assumed the identities of celebrities popular with the press at the time, such as Bill Gates and Paris Hilton. To bypass early spam filters, attackers would adjust the spelling of the (faked) sender's name, replacing letters with similar-looking characters and numbers, like "Par1s" and "Bi11". The goal was to get people to view a single web page, usually to drive up online advertising revenue. More recent variations of phishing campaigns include tricking the targets into running software that mines Bitcoin on behalf of the phisherman!

In phishing, attackers cast a wide net, sending these emails to large groups of targets, knowing at least a few will fall for it. A more targeted form of attack is called "spear phishing." Like hunting with a spear, these attacks are customized using information already gleaned from public sources like LinkedIn, Facebook, and other internet sources to make you trust the attacker. The attacker can then request that you make bank transfers or, as we have recently seen, buy gift cards in bulk. For example: "Please buy 50 Apple gift cards. No need to send them, just email me the codes." (The codes are all anyone needs to use the cards over the internet.) Modern spear phishing uses personal and professional details to establish trust and sell the attacker's fake identity.

- According to PhishMe's yearly report, phishing attempts have grown 65% this year.
- According to Verizon's Data Breach Investigations Report, 30% of phishing messages get opened, and 12% of users click the link.
- According to the SANS Institute, 95% of all attacks on enterprise networks are the result of successful spear phishing.

A Case Study

Phishing exploits trust and assumption, and can be devastating to a business. A recent spear phishing attack on a customer nearly cost them $36,000! (Note: we have been given permission from the business to share the story and have removed identifying information.)

The initial request was designed to look like it was from one of the founders. An email was received directly by an employee in the accounting department. It was sufficiently vague to prompt a response. The tone and content of the employee's reply indicated that they believed it was from the falsified sender, which told the attacker that they had the employee "on the line." The attacker then initiated a call to action: they requested that the employee make a wire transfer in the amount of over $18,000 and provided the employee with all the details necessary to do so.

The employee then received a second request, seemingly from another founder, who simply asked, "Are you in the office?" The employee verified that they were, and the attacker had them hooked again. In this second email thread, the attacker again provided account numbers and requested a second bank transfer in the amount of approximately $17,000. Luckily for the business, the phisherman mistyped the account details for the second transfer and the transaction was rejected.
The employee created a new email to follow up with the partner, only this one was to the correct email address. This is where they began to realize what had happened, and the incident response process began. The local police determined that the account details of one of the partners had been compromised. They were able to freeze the account used in the first transfer; unfortunately, that transfer had already gone through and the money was lost.

Determining the Target

The attacker was able to use a lot of public information when choosing their target:

- The company had been the subject of numerous news articles highlighting its phenomenal growth.
- Five days before the phishing attack, the news of this particular employee's hire was announced on a local news channel's website.
- The founders' names, email addresses, and work and education histories are readily accessible on the internet.
- There is no evidence that the email accounts were actually compromised. In fact, the phisherman's reply-to addresses were clearly similar, only slightly changed from the actual domain name – for example, something like email@example.com (a crude automated check for such lookalike domains is sketched at the end of this article).

Reducing Your Risk

- Utilize a process that verifies bank transfer requests verbally rather than solely via computer.
- Train employees to be vigilant and able to spot attacks. Your best line of defense is your employees.

CIO Solutions provides solutions for training employees to be aware, suspicious, and therefore vigilant forces in protecting your business from attack.
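One low-cost control implied by this case is automatically flagging reply-to domains that nearly match, but do not exactly match, a known-good domain. Here is a hedged sketch; the domains are hypothetical, and real mail gateways use more sophisticated checks:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

LEGIT_DOMAINS = {"example.com"}  # hypothetical company domain

def is_suspicious(reply_to: str) -> bool:
    """Flag domains that are close to, but not exactly, a legitimate one."""
    domain = reply_to.rsplit("@", 1)[-1].lower()
    if domain in LEGIT_DOMAINS:
        return False
    return any(edit_distance(domain, legit) <= 2 for legit in LEGIT_DOMAINS)

print(is_suspicious("ceo@examp1e.com"))  # True  -- one character swapped
print(is_suspicious("ceo@example.com"))  # False -- the real domain
```

A check like this complements, rather than replaces, the verbal verification and employee training practices above.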
<urn:uuid:5397d47a-2838-4285-9a5f-75815a0c7557>
CC-MAIN-2022-40
https://www.ciosolutions.com/dangers-of-phishing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00734.warc.gz
en
0.962468
1,064
3.421875
3
Published at SCmagazine on August 28, 2018 by Raz Rafaeli

Spaceships and little green men; self-driving cars and self-flying drones; virtual reality and alternative universes; and, if you can believe it, the cybersecurity authentication in use today – all "predicted" decades ago. The movies aren't just a place to go to get away from it all; they're a place to go to learn about what we can expect in the future. The list of future tech predicted by Hollywood is very impressive – maybe even in some cases prophetic – when you look at it in aggregate.

But among the most surprising predictions from Hollywood over the years have been those relating to cybersecurity – or, more specifically, security authentication to access systems, offices, buildings, etc. At a time when "security" meant two locks on the front door, and the term "cybersecurity" wasn't even a gleam in the eye of Webster, films depicted the dangers that could ensue for advanced systems that were unprotected – and presented ways to ensure that they remained secure. Here are some examples:
<urn:uuid:8a647c10-a584-4104-be77-7f172c149d94>
CC-MAIN-2022-40
https://doubleoctopus.com/news-events/in-the-news/authentication-and-the-movies-how-hollywood-predicted-our-cybersecurity-present/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00734.warc.gz
en
0.951858
239
2.625
3
The world is full of complexity, and we are surrounded by problems that need to be unraveled. Perhaps it's not surprising that problem-solving is one of the most in-demand skills: if you can break an issue apart and come up with a solution, your abilities will always be needed. Here we will discuss one kind of problem-solving capability, termed system analysis. It involves looking at the wider system, taking the parts apart, and working out how they fit together to achieve a specific goal. System analysis is one of the crucial methods that gives an organized and broad perspective for understanding, analyzing, and creating systems to fulfil particular objectives. We will discuss the following topics:

1. About System Analysis
2. Tools and Techniques of System Analysis
3. Benefits of System Analysis
4. What is System Analysis and Design?

About System Analysis

System analysis is the process of observing a system for troubleshooting or improvement purposes. It is applied in information technology, where computer-based systems need to be characterized for examination according to their makeup and design. In the information technology domain, system analysis can include looking at end-user usage of a software package or product, or looking in depth at source code to characterize the methods used in building the software. System analysis experts are regularly called upon to look critically at systems and to redesign them or suggest changes as necessary. Inside and outside of the business world, system analysis helps to assess whether a system is feasible and effective within the context of its overall design, and helps to reveal the alternatives available to the employing business or other party. Accordingly, system analysts differ from system administrators, who maintain systems day to day; the analyst's role normally involves a high-level perspective on a system to determine its overall effectiveness according to its design. (Related blog: Dark side of Information Technology industry)

Objectives of the system analysis

Most importantly, system analysis helps in planning systems whose subsystems may have conflicting objectives, and it enables the comprehension of complicated structures. It also helps to achieve compatibility and unity among the subsystems, and it offers the advantage of understanding and comparing the functions of the subsystems against the system as a whole.

"With the subsequent strong support from cybernetics, the concepts of systems thinking and systems theory became integral parts of the established scientific language and led to numerous new methodologies and applications - systems engineering, systems analysis, systems dynamics, and so on." - Fritjof Capra

(Also check: OpenCV: Applications and Functions)

Tools and Techniques of System Analysis

Now we will take a glimpse at some tools and techniques of system analysis.

Grid charts are a tabular method of representing the relationship between two sets of elements. A grid chart analysis is valuable in eliminating unnecessary reports or superfluous data items from reports. It can also be used for identifying the responsibilities of various managers for a particular subsystem, and grid charts can be used effectively to trace the flow of various transactions and reports through the organization.

Simulation involves the development of a model that is usually mathematical.
Rather than directly describing the overall behavior of the system, the simulation model describes the operation of the system in terms of individual events of the system's individual components. Simulation, in essence, is the method of conducting experiments on a model of the system.

Some decisions involve a series of steps: the result of the first choice informs the second, the third choice depends on the result of the second, and so on. In such decision-making situations, uncertainty surrounds each step, so we face uncertainty piled on uncertainty. Decision trees are the model used to manage such problems. They are also valuable for decision making in probabilistic situations, where the various opinions or choices can be drawn as the branches of a tree and the ultimate outcomes can be perceived.

A system flow chart is a diagram or pictorial representation of the logical flow of operations and information in an organization. It depicts the relationship between inputs, processing and outputs, considering the whole system. A standard set of symbols is generally used for the construction of system flow charts.

Decision tables are a graphical means of representing a sequence of logical decisions. A decision table is prepared in tabular form, listing every possible condition and the associated set of actions. It consists of four sections: the condition stub, condition entries, action stub, and action entries. (Also read: What is the role of technology in business?)
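To make the four-section structure concrete, here is a minimal sketch of a decision table represented as data. The order-handling rules are invented for illustration:

```python
# Condition stub: (customer_in_good_standing, order_over_credit_limit).
# Each combination of condition entries maps to an action entry.
DECISION_TABLE = {
    (True,  False): "approve order",
    (True,  True):  "require manager sign-off",
    (False, False): "require prepayment",
    (False, True):  "reject order",
}

def decide(good_standing: bool, over_limit: bool) -> str:
    """Look up the action for a given combination of condition entries."""
    return DECISION_TABLE[(good_standing, over_limit)]

print(decide(True, True))  # -> "require manager sign-off"
```

Because every combination of conditions appears exactly once, a decision table makes it easy to spot missing or contradictory rules – which is precisely its value in system analysis.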
Benefits of System Analysis

There are several reasons why you might want to analyze a system; here are some of the benefits of system analysis.

Analyzing the plans to be undertaken by any business is very important. There can indeed be no 'perfect path'; still, when the steps to be taken are properly analyzed before implementation, the analysis can prove to be of great benefit. Costs will be reduced in certain places, the chances of fatal errors are minimized – preventing the downfall of the business – and, last but not least, choosing the correct path reduces the scope for future errors.

Effective skill use

Another important aspect of system analysis is that it is not very difficult to learn. It does not require a degree or specialized professional skills, and it can be easily taught. Employers can therefore teach system analysis to employees using diagrams, which makes training less time-consuming and more cost-efficient. (Recommended read: 7 Types of Agile Methodologies)

System analysis also ensures that a product is made properly and delivered on time. This may seem a small detail, but it plays a vital role in the field of business. When the process of making a product is analyzed properly, the scope for errors is greatly reduced, and timely delivery of products ensures consumer satisfaction. It also provides the capability to use human resources to their full potential. (Also read: What is edge computing?)

Enable better management

System analysis makes managing the business easier and much more convenient. If products are finalized without analysis, there is a strong possibility of many errors in the final products. Also, when system analysis is implemented, it makes the software more flexible, so that it can adapt to the changing business environment. Otherwise, the software will have to be rebuilt from scratch, which will cost a lot of time, money, and resources. (Must read: 7 Top Trends in Software Development)

What is System Analysis and Design?

Now let's move to another interesting part: System Analysis and Design, or SAD. Systems development is an organized procedure that involves phases such as planning, analysis, design, deployment, and maintenance. Here we will focus on two of them: system analysis and system design.

System analysis, as covered above, is a procedure of gathering and analyzing information, identifying problems, and decomposing a system into its elements. Why is it conducted? It is conducted to survey a system, or its portions, in order to specify its objectives. It is a problem-solving method that examines the system and guarantees that all its segments work proficiently to achieve their purpose; analysis specifies what the system ought to do.

System design is a process of planning a new business system, or replacing an existing one, by defining its components or modules so that they satisfy the specified requirements. Before planning, you need to understand the old system thoroughly and determine how computers can best be used to make it work more efficiently. Thus, system design centers on how to accomplish the objective of the system.

The key to success in business is the ability to gather, organize, and interpret information. System analysis and design is a proven methodology that enables both large and small businesses to reap the benefits of using information to its full capacity. As a systems analyst – the person in the organization most engaged in system analysis and design – you will enjoy a rich career that enhances both your computer and interpersonal skills. (Must read: What is Agile Software Development?)

"A design system acts as the connective tissue that holds together your entire platform" - Drew Bridewell

System Analysis and Design (SAD) is an energizing, dynamic field in which analysts continually learn new techniques and approaches to develop systems more effectively and efficiently. The major objective of system analysis and design is to improve organizational systems; regularly, this process includes developing or procuring application software and preparing workers to use it. System analysis and design therefore concentrates on systems – with their properties and elements – and then on processes and technology. (Referred read: What is Neuromorphic Computing?)

So, in conclusion, we can say that system analysis is a problem-solving strategy that involves looking at the broader system, breaking apart its parts, and figuring out how it works in order to accomplish a specific objective. Another definition is the examination of a specific system to identify areas for improvement and to prepare any necessary enhancements. (Recommended blog: Robotics with IoT)

It is one of the significant stages in the growth of any system, detailing the requirements that will accommodate future changes. We hope this blog has made clear every aspect of system analysis: its definition, objectives, benefits, system analysis and design, and so on.
As more and more data is stored in various databases, it is no surprise that data breaches and leaks are becoming more common as well. You don't need to be an IT professional to know that data breaches are a big deal and can cause a lot of harm. Whenever a large data breach occurs, the media is sure to report it, especially if a lot of data has been stolen or the target of the breach is a major organization.

Luckily, by being prepared and learning more, you can minimize the risk of becoming the victim of a data breach or a leak. And if a breach does happen and confidential data gets into the wrong hands, there are things you can do to prevent the situation from getting worse.

First, let's get the definitions straight. A data breach happens when someone gets through security barriers and gains unauthorized access to a database. Naturally, this is done without the knowledge of the owner of the data. Data breaches are therefore a form of cybercrime and punishable by law. The data accessed, stolen, or destroyed in a breach is often sensitive and confidential in nature: people's financial information, medical records, passwords, and other personal details.

For individuals, the biggest threats posed by data breaches are account takeovers and identity theft carried out with the stolen information. Criminals may also threaten companies and organizations after successfully carrying out a data breach. Both large and small businesses and organizations can become the targets of a security incident in which data is stolen. Criminals often use the stolen data to blackmail the companies involved: after the breach, the targeted company is threatened with the release of the stolen information unless it pays a ransom. This extortion should not be confused with ransomware, which is a type of malware.

Now you know what a data breach is, but what about data leaks? The two terms are often used as synonyms, so it's understandable to be confused. Although both data breaches and data leaks involve confidential data getting into the wrong hands, the way the data is obtained differs: a breach is the result of a deliberate attack, whereas a leak usually means the data was exposed unintentionally, for example through misconfiguration or human error.

Many data breaches and leaks have made international news as the personal data of hundreds of thousands, or even millions, of individuals has been stolen. Here are a couple of well-known examples from recent history.

The Equifax data breach (2017)

What makes the Equifax data breach famous is both its size and the type of data that was compromised. Equifax is a large American consumer credit agency, and the breach compromised the information of more than 147 million American citizens as well as millions of people in Great Britain. The personal IDs, credit card numbers, and other highly confidential information of more than 200,000 Americans were stolen. The Equifax data breach was traced back to a group of Chinese hackers.

Yahoo data breaches

Personally identifiable information of more than 3 billion Yahoo users was compromised in multiple data breaches spanning several years. The Yahoo data breaches include two large incidents in 2013 and 2014 that were made public only in 2016. Together they constitute the largest such security incident in the history of the internet, with affected individuals in several countries.
The stolen data includes names, email addresses, telephone numbers, birth dates, and more.

It is not only individuals whose data is stored who need to take data breaches seriously; large and small companies must as well. The consequences for businesses and organizations that fall victim to a data breach can be severe. According to a 2021 report by IBM, the average cost of a data breach was more than 4.2 million USD; the financial damage, in other words, is significant. On top of that, a breach does great harm to a business's reputation. Businesses and organizations must also report a breach within 72 hours of discovering it if the breach poses a risk to individuals. In more severe cases, where the individuals whose data has been compromised are at risk, those individuals should be informed personally by the data processor.

Luckily, there are things you can do to protect yourself against data breaches. Although you may not control the databases where your sensitive data is stored, you can minimize the amount of information those databases hold about you, and thereby limit what criminals can steal in a breach. If they steal your password, you can change it and prevent them from using it to access your data. The unfortunate fact about data breaches is that in many cases you are not the one in control of the stored data, so you cannot prevent a breach from taking place yourself.

Has your personal information been exposed in a data leak? Check for free with F-Secure Identity Theft Checker. Identity theft is no small nuisance: data stolen in a breach or leak can be used against you. If your personal data has been compromised, it can be used to make purchases in your name, and leaked personal information can also be used to impersonate you on social media.

F-Secure ID PROTECTION helps you avoid identity theft. It comes with around-the-clock data breach monitoring as well as a password vault for easy logins and password storage. If a data breach occurs and your personal data is compromised, ID PROTECTION alerts you, giving you time to secure your personal information online and minimize the risks. You will also receive advice from our cyber security experts. Read more and try ID PROTECTION for free.
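Services that check whether your credentials have been exposed often rely on public breach corpora. As an independent illustration of how such a lookup can be done without revealing the secret being checked (this is not how F-Secure's checker is implemented), the sketch below queries the publicly documented Have I Been Pwned "Pwned Passwords" range API, which uses a k-anonymity scheme: only the first five characters of the password's SHA-1 hash ever leave your machine.

```python
import hashlib
import requests  # third-party: pip install requests

def pwned_count(password: str) -> int:
    """Return how often a password appears in the Pwned Passwords corpus.

    Only the first 5 hex characters of the SHA-1 hash are sent; the API
    returns every known hash suffix sharing that prefix, so the full
    password (or its full hash) is never disclosed to the service.
    """
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(
        f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10
    )
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    print(pwned_count("password123"))  # a depressingly large number
```

A non-zero result means the password has appeared in known breaches and should never be used again anywhere.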
When people think about artificial intelligence (AI) today, they might think of computers that can speak to us, like Alexa or Siri, or grand projects like self-driving cars. These are very exciting and attention-grabbing, but the reality of AI is actually thousands of tools and apps running quietly behind the scenes, making our lives more straightforward by automating simple tasks or making predictions.

This is true across every industry and business function, and particularly true in marketing, where leveraging AI to put products and services in front of potential customers has been standard practice for some time, even though we may not always realize it!

In business today, the term AI is used to describe software that is capable of learning and getting better at doing its job without input from humans. This means that while we've become used to using machines to help us with the heavy lifting, now they can start to help us with jobs that require thinking and decision-making, too. A huge number of questions that would previously have needed human intervention to answer, such as "will this person be interested in my products?" or "what results will I get from this advertising campaign?", can now be answered by machines, if they are given the right data. And because machines can answer questions far more quickly than humans, they can easily chain together complex strings of queries to come up with predictions, such as who is most likely to buy your products and where the best places to advertise might be.

That's the basic principle behind all business AI today: automating the processes of learning and decision-making in order to create knowledge (usually referred to as "insight") that helps to improve performance. And marketing is one area where it's certainly been put to good use!

The high-level use case for AI in marketing is that it improves ROI by making your marketing, often one of a company's biggest expenses, more efficient. In the old days, before online advertising, businesses would pay huge amounts of money for TV, radio, or newspaper adverts, in the full knowledge that only a small number of the people who saw their ads would ever become customers. This was tremendously inefficient, but companies didn't have any choice if they wanted to position themselves as market leaders.

In the online age, we've developed the ability to learn a great deal about who is or isn't interested in our products and services. The first breakthroughs came thanks to the likes of Amazon, with their recommendation engine technology, and Google and Facebook, with their targeted advertising platforms. Today, each of those platforms has been augmented with machine learning technology that allows them to become increasingly effective as they are fed more data on customers and their buying habits.

AI-driven content marketing

The rise in social media marketing and our growing appetite for online content has made content-based marketing the dominant form of marketing in many industries. AI lends a hand here by helping us work out what type of content our customers and potential customers are interested in, and what the most efficient ways are to distribute our content to them. Advertising creatives have always strived to find formulas for creating adverts that will get people talking and sharing the message with their friends. Now, this can be done automatically using any number of AI-powered tools.
For example, headline-generation algorithms can monitor how successful they are and tweak their output to achieve better metrics, such as the open rate of emails or the share rate of social media posts. Taking this a step further, AI is developing the ability to take care of the entire content generation process itself, creating copy and images that it knows are likely to be well received by its audience.

A huge buzzword in this space will be personalization, where individual customers are served content specifically tweaked for them, perhaps using information and reference points that the AI knows are relevant to them, intertwined with the overall marketing messages.

AI will also increasingly be useful for identifying what stage of the buying process a customer is at. If it detects that they are "shopping around", comparing the products and services available, it can serve content designed to differentiate your product or service from those of competitors. If it detects that they are ready to make a purchase, it can target them with promotions urging them to "act now" to take advantage of a limited-time offer.

A digital marketing agency called 123 Internet has embraced these industry developments by using various AI-based technologies to improve service delivery. Scott Jones, CEO, said: "We've been using AI tools for a while now, in particular automatically checking website designs in hundreds of screen and browser types; this speeds up our design and development process." The team also uses an AI-generated website audit, which can be downloaded from their website and runs without human interaction.

Influencers are another huge trend in marketing right now, and AI algorithms are already in use to make sure the personalities most likely to appeal to you appear in your search results and social feeds. Increasingly, advertisers will also use AI to identify smaller influencers who are most likely to gel with their brands and audiences. This has led to the emergence of "micro-influencers": typically everyday people, rather than celebrities, who have specialist knowledge they've used to build a niche audience that cares about their opinion. AI enables companies to find the micro-influencers with the right audiences for them, across a large number of niches and audience segments. AI helps establish when it makes sense to pay 100 people $1,000 each to talk about a product, rather than pay $100,000 to Justin Bieber or a Kardashian. Once again, it is about creating efficiency by following the data, rather than simply doing what a marketer thinks or feels is the best plan.

AI in CRM

Customer relationship management is an essential function for any marketer to master, as existing customers are often the most important source of a company's revenue. Here, AI can be used to reduce the risk of customer "churn" by identifying patterns of behavior that are likely to lead to customers heading elsewhere; a minimal sketch of such a churn model appears just after this section. These customers can then be automatically targeted with personalized promotions or incentives in the hope of restoring their loyalty. AI-augmented marketers are also increasingly turning to chatbot technology, powered by natural language processing. This can segment incoming customer inquiries, meaning those requiring a quick response can be urgently catered to, minimizing dissatisfaction.
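As a minimal illustration of the churn-prediction idea described above, the sketch below trains a logistic regression on synthetic customer data. Every feature name, threshold, and coefficient here is invented for the example; a real CRM model would be trained on the company's own behavioral data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1_000

# Hypothetical behavioral features: days since last purchase,
# support tickets opened, and monthly spend.
X = np.column_stack([
    rng.integers(0, 365, n),      # days_since_last_purchase
    rng.poisson(1.5, n),          # support_tickets
    rng.gamma(2.0, 30.0, n),      # monthly_spend
])

# Synthetic ground truth: long inactivity and many tickets raise churn risk.
logit = 0.01 * X[:, 0] + 0.5 * X[:, 1] - 0.02 * X[:, 2] - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")

# Score an individual customer and flag them for a retention offer.
risk = model.predict_proba([[200, 4, 25.0]])[0, 1]
print(f"churn risk: {risk:.0%}")
```

Customers whose predicted risk crosses a chosen threshold would then be routed into the personalized-promotion workflows described above.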
AI-driven CRM will also allow businesses to forecast sales more accurately across all the markets where a company operates, meaning stock and resources can be distributed more efficiently. Additionally, it can be used to maintain the quality of data in the CRM system by identifying customer records likely to contain errors or duplicates.

The future of the marketer

If you work in marketing, you would be forgiven for worrying that we're heading for a future in which humans in your role are redundant. You can take heart, though, from current predictions that AI will end up creating more jobs than it destroys. It is inevitable that your job will change: marketers will spend less time on technical tasks such as forecasting or segmenting customers and more time on creative and strategic tasks. Those who are competent at working with technology, and at identifying new technological solutions as they become available, will be hugely valuable to their companies and are likely to have a bright future!
Information and data stored in your emails are highly sought after by cyber-criminals. The defensive measures put in place by email services like Gmail tend not to be strong enough to stop the most persistent hackers from reading your emails. It's no surprise that email encryption services play a big role for cyber security companies as a whole.

Encrypted vs Unencrypted Emails

When you use unencrypted email, it is possible for third parties to access the email content. In other words, you could potentially be exposing important information. Depending on the industry you're in, you could also be violating data privacy regulations by sending unencrypted emails. Sending an unencrypted email containing important information is like sending traditional mail with confidential information written on the outside of the envelope: to see the information, all anyone needs to do is look at it. Nothing protects it.

In contrast, when your email is encrypted, it is close to impossible for somebody to read the content, because they don't have the decryption key. Email encryption also stops spam messages or malware from being spread in your name.

How Does Email Encryption Work?

You can digitally sign your emails with an email certificate with the help of an IT services provider in CT. This allows the recipient to verify that the message is legitimate, and it keeps the content of the email secure. The typical method for email encryption uses Public Key Infrastructure (PKI). PKI works through the interaction of public and private keys: you keep the private key for yourself and give the public key to others. People who have the public key can use it to encrypt messages sent to you, but you are the only person who can decrypt and read them, because the private key, which only you hold, is required for decryption.

To get the most out of PKI, you should encrypt all your messages, not just the ones with important or confidential information. This throws a smokescreen over all of your emails, so hackers won't know which messages are worth decrypting. This is also the reasoning behind why companies use infrastructure-wide email encryption, and why you should use it as well.

Does your business need email encryption? CorCystems can help implement email encryption for your organization. Our team of data security experts has successfully helped dozens of businesses in Connecticut and New York. To learn more about the solutions we use, reach out to us at (203) 431-1341.
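To make the public/private key mechanics described above concrete, here is a minimal sketch using Python's cryptography library. It shows only the raw asymmetric encrypt/decrypt step; real email encryption standards such as S/MIME and PGP layer certificates and hybrid (symmetric plus asymmetric) encryption on top of this primitive.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The recipient generates a key pair and shares only the public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# Anyone holding the public key can encrypt a message to the recipient...
ciphertext = public_key.encrypt(b"Q3 numbers attached. Confidential.", oaep)

# ...but only the private-key holder can decrypt it.
plaintext = private_key.decrypt(ciphertext, oaep)
print(plaintext.decode())
```

This asymmetry is exactly why encrypting everything works as a smokescreen: an eavesdropper who lacks the private key cannot tell a mundane message from a sensitive one.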
Scratch the surface of virtually any business, and you'll find open-source technologies used for everything from web servers, to databases, to operating systems for mobile devices. Although cost is often a factor in choosing an open-source solution over a proprietary one, most businesses aren't just looking for a free solution when they choose open source: they are usually also interested in the fact that open-source products are often more innovative, more secure, and more agile in responding to the needs of the user community than their proprietary counterparts.

Indeed, the most successful open-source projects are built by communities based on principles of inclusion, transparency, meritocracy, and an "upstream first" development philosophy. By adhering to these principles, they can deliver significant value to both device manufacturers and service providers that transcends what's offered by makers of proprietary platforms.

Open-source Principle No. 1: Inclusion

Inclusion, rather than exclusion, has always been a key tenet of open source. The idea is simple: no matter how many smart people your company employs, there are many, many other smart people in the world. Wouldn't it be great if you could get them to contribute to your project? Successful open-source projects realize that contributions can come from anywhere and everywhere. By harnessing the capabilities of a larger community, open-source developers can deliver solutions that are often superior to proprietary ones. Contributions range from writing and debugging source code to testing new releases, writing documentation, translating software into other languages, and helping other users.

Open-source Principle No. 2: Transparency

For a community development effort to work, members of the community need transparency: they need to know what's happening at all times. By putting forums in place to encourage public discussion, such as mailing lists and IRC, creating open-access problem-tracking databases using tools like Bugzilla, and building systems for soliciting requests for new features, open-source developers create a culture of trust that embraces transparency and tears down the barriers between contributors that can stifle innovation.

For device manufacturers and service providers, this transparency translates directly into the ability to improve their own time-to-market and compete on a level playing field. The "release early, release often" philosophy that frequently accompanies a transparent culture means they can evaluate new features earlier in the development cycle than they can with proprietary products. They can also provide feedback in time to influence the final release of a product. In contrast, when a developer withholds source code until final release, those involved in the development process have a built-in time-to-market advantage over those without early access.

Open-source Principle No. 3: Meritocracy

Unlike the hierarchy of a traditional technology firm, the open-source community is based on merit. Contributors to an open-source product prove themselves through the quality and quantity of their contributions. As their reputation for doing good work grows, so does their influence. This merit-based reward system creates much more stable development environments than those based on seniority, academic pedigree, or political connections. The Linux kernel is probably the best-known open-source project, and it operates on the principle of meritocracy.
Linus Torvalds, founder of the Linux project, and a number of other maintainers coordinate the efforts of thousands of developers to create and test new releases of the Linux kernel. The Linux maintainers have achieved that status as a result of proven contributions to the project over a number of years. Although many of the key Linux maintainers are employed by major corporations, such as Intel and Red Hat, their status is a result of their contribution, not their company affiliation.

Open-source Principle No. 4: "Upstream First" Philosophy

Finally, open-source developers who take advantage of existing open-source software projects, rather than adopting a "not invented here" attitude, tend to be innovation leaders, especially in the fastest-paced markets. While it is sometimes tempting to simply take source code from a project and modify it for your needs, without worrying about whether or not the original (upstream) project will accept your modifications, this approach typically leads to suboptimal results. First, you will be stuck maintaining this forked version of the project for as long as you need to use it. Second, if everyone adopted this approach, the upstream project would not benefit from the improvements made by others, which defeats one of the key benefits of open source.

So, the most successful projects that rely on components from other upstream projects have adopted an "upstream first" philosophy, which means that the primary goal is to get the upstream project to adopt any modifications you have made to its source code. With this approach, your platform and other derivative (downstream) projects benefit from those upstream enhancements, and the platform does not incur the expense of maintaining a branched version of the upstream project.
While we strengthen our roots in the land of digitization and technological advancement, cyber-attacks have become unavoidable. There is a lot at stake, and any flaw or error in cybersecurity puts it all at risk. Some companies acknowledge that they still have a long cybersecurity path to tread, while others tend to live in false glory: according to Forbes, 78% of companies lack confidence in their present security model. In his article for Security Intelligence, C.J. Haughey defines database security and outlines its benefits.

Defining Database Security

Database security is an information security model comprising tools, controls, and processes. It significantly enhances confidentiality and integrity, and it safeguards databases from unauthorized access and from malicious ransomware, malware, and spyware. 2020 witnessed sudden growth in ransomware attacks, especially in the education and healthcare sectors, and several retail companies have begun to face the threat of distributed denial-of-service (DDoS) attacks. To tackle such problems, and to protect company assets in general, it is better to invest in a reliable database security system.

Lessens Human Error

Varonis, in its report, states that 95% of cybersecurity breaches are the result of human error. When you have bigger problems to face, you cannot waste resources on reviewing and cross-checking for human error. Database security enables automated detection of vulnerabilities and security breaches, leaving no margin for error. Not only is automation quicker and more accurate, it also analyzes patterns, making operations significantly faster.

Enhances Business Dynamics

Nowadays, clients and consumers are quite vigilant about sharing their personal information with a company. It should be your organization's responsibility to acknowledge their concerns and create a transparent and secure environment. Throughout that process, database security helps companies address the concerns of their target market.
A personal data breach is a security incident that affects personal data in some way. If a breach occurs, the data controller has certain obligations, and depending on how severe the breach is, the data controller has to act in different ways. A data processor, for its part, should always report a breach to the data controller.

What does a personal data breach mean?

A personal data breach may put data subjects' rights and freedoms at risk. The risks can be physical, material, or non-material. Examples include identity theft, fraud, and other financial loss; other cases include damage to reputation or social disadvantage.

When must it be reported, and to whom?

When it is unlikely that the breach will lead to risks, reporting is not necessary. However, if risks are likely, the breach must be reported. In cases where the breach is likely to lead to high risks, data controllers must also inform the affected individuals. The information given to individuals needs to include the potential consequences of the breach, along with what is being done to minimise the subsequent risks. Sometimes the supervisory authority may instruct data controllers to inform data subjects, and such instructions may also cover how to inform them.

Companies and organisations must report the data breach to the supervisory authority concerned within 72 hours. The 72-hour rule was newly introduced with the GDPR; the clock starts counting down once the data controller becomes aware of the breach.

What should the report include?

The following are examples of what the report to the supervisory authority needs to include:

- a description of the nature of the personal data breach;
- the contact details of the Data Protection Officer or other relevant people;
- the likely consequences of the personal data breach; and
- what actions have been taken or proposed to resolve the personal data breach.
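As an illustration only, a breach notification record covering the bullet points above might be structured like the Python sketch below. The field names are invented for this example; any real report must follow the format required by the relevant supervisory authority.

```python
# Hypothetical structure for a GDPR breach notification record.
# Field names are invented for illustration; check your supervisory
# authority's actual reporting form before relying on any schema.
breach_report = {
    "nature_of_breach": {
        "description": "Unauthorised access to customer database",
        "categories_of_data": ["names", "email addresses"],
        "approx_data_subjects_affected": 1200,
    },
    "contact": {
        "data_protection_officer": "Jane Doe",
        "email": "dpo@example.com",
    },
    "likely_consequences": "Phishing attempts against affected users",
    "measures_taken_or_proposed": [
        "Credentials rotated and access revoked",
        "Affected individuals notified by email",
    ],
    "reported_within_72_hours": True,
}
```

Keeping such a record in a structured form makes it easier to show, after the fact, that the 72-hour obligation was met and that each required element was addressed.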
Two-factor authentication (2FA) is a good way to secure your online accounts at a time when data breaches and stolen account credentials hit the news headlines on a daily basis. This article explains the terminology, what 2FA is, and why you should use it. Then, taking the Google account as an example, it describes how you can enable 2FA for your online accounts and use an authenticator mobile app to generate your 2FA codes.

In a presentation at Usenix's Enigma 2018 security conference in California, Google software engineer Grzegorz Milka revealed that less than 10 per cent of active Google accounts use two-factor authentication. That was over a year ago, but my guess is that the adoption rate of 2FA has not exceeded 25% since then. All the more reason to write this article on how to set up 2FA.

- What is two-factor authentication?
- Why you should use 2FA
- 2FA? There's an app for that!
- How to enable 2FA for your (Google) account
- Setting up the Authenticator app
- Enable 2FA for other accounts

What is two-factor authentication?

Authentication is the process of verifying the identity of a user. When, for example, Gmail or Facebook asks you to log in with a password, it assumes you are the only one who knows the password belonging to that username, thus identifying you as the rightful owner of the account. In this example the password is one identifying factor for the account, but in fact anything you know (password, PIN, etc.), have (physical key, smart card, phone, etc.) or are (biometrics like a fingerprint or iris scan) that uniquely identifies you can be used as an authenticating factor for your account.

Multi-factor authentication (also called two-factor authentication or 2-step verification) means that more than one factor is used to identify the owner of a piece of property (e.g. a house, a vault, or a user account). Ideally each factor comes from a different category (know, have, are). With multi-factor authentication you are asked to provide all required factors to access that property. Some examples of multi-factor authentication you might already use or know are the following:

- a credit card and a PIN for a bank account,
- a password and a one-time generated code (by an app or device) for an email account,
- a physical key and a fingerprint for a house, or
- a secret door-knock and passphrase for kids playing in a tree-house.

In this article I use the term 2FA, but know that it is interchangeable with MFA or 2-step verification.

Why you should use 2FA

Consider account authentication with one factor, like a password. If that single factor fell into the wrong hands, the entire account could be compromised. With the ever-growing list of data breaches, chances are this password will be leaked somewhere in the future, or already has been. Proper password hygiene, like using strong passwords and never reusing a password, reduces the risk of your account being compromised when it is part of a breach. However, when the breached service stores passwords as plain text or uses a weak hashing algorithm, or you have your password written down on a post-it, you may still have a problem: any account you own may be the victim of a hack. While the new passwordless Web Authentication (WebAuthn) standard developed by the W3C and the FIDO Alliance is still being adopted, multi-factor authentication remains a great way to secure your accounts. This article describes how to do that.
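The authenticator apps discussed below generate time-based one-time passwords (TOTP, standardized in RFC 6238). As a minimal sketch of what happens under the hood, the snippet below uses the pyotp library; the secret here is freshly generated for the example, whereas in practice it is the value your provider shares with you via a QR code.

```python
import pyotp  # third-party: pip install pyotp

# In real setups this shared secret comes from the service's QR code.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()          # the 6-digit code, changes every 30 seconds
print(f"current code: {code}")

# The server runs the same computation over the shared secret and compares:
print(totp.verify(code))   # True while the 30-second window is open
```

Because both sides derive the code from the shared secret and the current time, no code ever needs to travel over the network ahead of time, and an intercepted code expires within seconds.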
Sidenote: if you would like to verify whether your e-mail address (or even a password) was exposed in any previous data breaches, I highly recommend checking out the https://haveibeenpwned.com/ website set up by Troy Hunt.

By the end of this article you will have an authenticator app on your phone that generates a new code every 30 seconds. This code is your second factor and is also known as a time-based one-time password (TOTP). When you log in to an account that has 2FA enabled, you will be prompted for a code after you've submitted your username and password. Check the app for the current code and enter it. When the code and the password are correct, you will have access to your account.

2FA? There's an app for that!

Before we start, let's install an authenticator app on your mobile phone. There are several popular authenticator apps on the market. While the choice depends on your personal preferences and requirements, I recommend using an app that allows you to create a backup of (or synchronize) your codes; more on that below. The following are a few of the most popular apps:

- Authenticator Plus website (App: iOS, Android): free, or a one-time $3.99 to unlock pro features
- Authy website (App: iOS, Android): 100 authentications free per month (more pricing options available)
- Google Authenticator website (App: iOS, Android): free (read the remarks below)

Google Authenticator does not allow you to create backups of your codes, so losing your phone will require you to recreate an authenticator code for each of your accounts. Since you won't be able to access an account without a code, unless you have written down the backup codes you received when you enabled 2FA on the account (and you should, by the way), you might be in trouble.

In the past I decided to go with Authenticator Plus because of its pricing and usage model, the fact that it can import from a previous Google Authenticator installation, and the option to use your own cloud provider for backups. I'm positive Authy is a great tool that can do many of these things as well nowadays. This article includes setup guides for both Authy and Authenticator Plus.

In a general sense, it is a good idea to regularly consider what data or access you may lose when a device you own is no longer accessible (broken, stolen, etc.) and what you can do to prevent that loss. If your phone is stolen, how much trouble will it cause you? Will you lose all your precious photos, or do you make regular backups? Can you recover from those backups easily, or at all? These are important questions to ask yourself whenever you put valuable assets in the hands of any medium.

How to enable 2FA for your account?

Many online services that require an account have the option to enable 2FA. You usually find the page to enable it in your account settings for that service. As an example, this article describes the process for Google accounts, because Google is widely used and enabling 2FA for Google involves a little more complexity than activating it for other services. We'll first enable 2FA in the Google account, and then register it in the authenticator app of your choice:

- Enable 2FA in Google
- Account registration in the Authenticator app
- Finishing up
- Using 2FA

Enable 2FA in Google

Turning 2FA on for Google will enable this feature in all Google products, like Google Drive, Gmail, Calendar, etc.

- Sign in to your Google account.
- In the top-right corner you'll see an icon with the picture you chose for your account, or a letter of your first name (in my case, C). Click that icon and then 'Google Account'. This opens your Google account settings.
- In the menu on the left, click 'Security'. This shows a page with your security settings.
- On the Security page, in the category 'Signing in to Google', notice that '2-Step Verification' is set to off.
- Click the '2-Step Verification' line, press 'Get Started', and log in again with your username and password.
- In the popup that appears, enter your phone number. This serves as an extra, temporary means of verification until you have 2FA with your app enabled. Select how you want to receive the temporary code and click 'Next'.
- On the phone number you entered, you will receive an SMS or phone call with a verification code. On the next screen, enter that code and press 'Next'.
- If the code is correct, a confirmation screen appears stating that everything is set to turn on 2-step verification. Click 'Turn On'. 2FA is now enabled for your account.

If you stopped the process at this stage, an SMS (or phone call) with a verification code would be sent to you every time you log in to Google. Using your phone number as the primary means of 2FA is a bad idea: granted, it is better than no 2FA, but hackers can fairly easily intercept SMS traffic. We therefore want to add the Google account to the authenticator app we installed in the previous chapter.

Account registration in Authenticator app

After clicking 'Turn On' in the last step of the previous chapter, Google shows a page where you can add an authenticator app as a second factor; in other words, where you register the Google 2FA account within the authenticator app on your mobile phone. Click the 'Authenticator app' option. Note that it has a Google Authenticator icon, but it doesn't matter which authenticator app you use to set this up.

Clicking the 'Authenticator app' option pops up a wizard dialog that lets you choose between an Android and an iPhone app. The choice doesn't really matter for the end result; it only determines whether the next page links to the Android or the Apple version of the Google Authenticator app. Make a choice and click 'Next'. The following page shows a QR code, which encodes a unique registration code for your Google 2FA account. This is the moment to take your authenticator app of choice and scan the QR code. This article describes the registration process for Authy and Authenticator Plus.

Register 2FA account with Authy

When you start the Authy app for the first time, you will see a big plus sign on the screen. Click this + sign to add an authenticator account. A dialog appears where you can choose between scanning a QR code and entering a code manually; some services offer manual entry when they do not support QR codes. The example we use here fortunately supports QR codes, so press the blue bar with the camera and use your phone's camera to scan the QR code provided by Google in the previous chapter. If all went well, Authy prompts you for an account name. Enter any name that will allow you to recognize the code later. After you press done, your first account is created.
Authy's main screen will immediately start generating codes for this account every 30 seconds. You have now successfully registered your 2FA token for Google in the Authy app; continue with the chapter 'Finishing up'.

Register 2FA account with Authenticator Plus

I'm using Authenticator Plus myself, so my app's main screen already contains a few entries. Click the big + sign on the bottom right to add an authenticator account. A pop-up menu appears with three options; the bottom one is the one we're aiming for: 'Scan QR code'. You can also add an account manually if the service you are adding does not support QR codes, and the 'More ways to add' option offers alternatives such as using a third-party scanner. For this example, select 'Scan QR code' and use your phone's camera to scan the QR code provided by Google in the previous chapter. As soon as you've done this, a new account entry is added to the app's main screen. You have now successfully registered your 2FA token for Google in the Authenticator Plus app; continue with the next chapter, 'Finishing up'.

Finishing up

After you have successfully scanned Google's QR code and added the Google 2FA account to your favourite authenticator app, press 'Next' on the dialog with Google's QR code on it. To make sure you have added the code correctly, Google asks for one of the generated codes as a verification step. Type the code you see in your authenticator app for the account you just added and press 'Verify'. If all is well, the dialog closes. The Google security page will now have an entry stating that your Google account is protected with two-factor authentication (or, as Google displays it, '2-Step Verification') via an authenticator app.

Using 2FA

Now that you've set up 2FA in Google and have your Google account in the authenticator app of your choice, you will be prompted for a 2-step verification code every time a Google service asks for your Google password. When prompted, just fill in the code your authenticator app shows at that moment, exactly as you did during setup.

Enable 2FA for other accounts

For completeness, here is where you can enable 2FA for some other common accounts.

Facebook: to enable two-factor authentication for your Facebook account, go to your account's Settings, then the 'Security and login' tab. You can also follow this link directly: https://www.facebook.com/settings?tab=security. You'll find the 'Use two-factor authentication' option on this page.

Instagram: to enable two-factor authentication for your account, go to your account's Settings (the little cogwheel next to your name on your profile page), then choose 'Privacy and security'. You can also follow this link: https://www.instagram.com/accounts/privacy_and_security/. Halfway down the page you'll find the 'Two-Factor Authentication' option that lets you edit its settings (direct link: https://www.instagram.com/accounts/two_factor_authentication/). Note: at the time of writing, Instagram does not allow setting up 2FA with an authenticator app on the web. The mobile app does allow this option, so for now I suggest using your Instagram mobile app to set it up.
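As a closing technical note: the QR codes you scanned during setup typically encode a standard otpauth:// provisioning URI containing the shared secret, the account label, and the issuer. A minimal sketch using the pyotp library (with a made-up account and issuer) shows what that payload looks like:

```python
import pyotp  # third-party: pip install pyotp

# Hypothetical account details; a real URI is generated by the service.
secret = pyotp.random_base32()
uri = pyotp.TOTP(secret).provisioning_uri(
    name="alice@example.com",
    issuer_name="ExampleService",
)
print(uri)
# otpauth://totp/ExampleService:alice%40example.com?secret=...&issuer=ExampleService
```

Authenticator apps simply parse this URI after scanning and start computing TOTP codes from the embedded secret, which is why any standards-compliant app works with Google's QR code.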
The Real-Time Computer Complex (RTCC) is located at the NASA Mission Control Center in Houston, TX. In 1962, the RTCC housed several IBM large-scale mainframe digital computers. Think of the RTCC as the computing brain that processed mountains of data to guide nearly every portion of a NASA spaceflight mission. Flight controllers and engineers in the Mission Control Center depended on the RTCC.

Apollo 13 launched on April 11, 1970, and two days later, while the spacecraft was halfway to the moon, a portion of its service module exploded. Numerous voices from flight controllers in the Mission Control room desperately attempted to ascertain how serious the situation was while communicating with the astronauts aboard the Apollo 13 command module. NASA Flight Director Gene Kranz directed his Mission Control team by clearly and firmly saying, "OK, listen up … Quiet down, people. Procedures, I need another computer up in the RTCC." The quick thinking and resourcefulness of NASA flight controllers and engineers, along with the courage and professionalism of the Apollo 13 astronauts, resulted in their safe return to Earth. Credit for their safe return should also go to the five high-performance IBM System/360 Model 75 computers in the RTCC.

About 16 years earlier, the 1954 IBM 704 digital mainframe computer operated using a low-level assembly language and high-speed magnetic core storage memory, replacing the electrostatic tube storage used in previous IBM computers. In 1957, Sputnik 1, Earth's first artificial satellite, was tracked during its orbit around the planet by two IBM 704 computers.

In 1959, the IBM 1401 mainframe computer was built with support for a high-level programming language: FORTRAN (Formula Translation/Translator), the coding system created by IBM programmer John Backus in 1957 and first tested on the IBM 704. Backus said FORTRAN reduced what had previously required 1,000 machine-statement instructions to only 47 statements, significantly increasing computer programmer productivity.

In 1961, NASA launched two crewed Mercury suborbital flights. IBM 7090 computers installed at the NASA Ames Research Center assisted engineers and mission flight controllers by quickly performing thousands of calculations per second. The 1965 NASA Gemini spacecraft's 59-pound onboard digital guidance computer was manufactured by IBM; it used a 7.143-kilohertz processor clock and could execute more than 7,000 calculations per second.

In 1969, IBM's computer reliability was credited with keeping Apollo 12 on its proper trajectory after a potentially catastrophic event. On Nov. 14, 1969, about 37 seconds after the Apollo 12 Saturn V rocket left the launchpad at Cape Canaveral, two lightning bolts struck it, knocking out all of the command module's onboard instrumentation systems and its telemetry link with Mission Control in Houston.

"What the hell was that?" shouted Apollo 12 command module pilot Richard Gordon after lightning struck the Saturn V rocket traveling at 6,000 mph. Fortunately, two-way radio communications were still functioning between Mission Control and the command module. "I just lost the whole platform," Apollo 12 mission commander Charles Conrad Jr. radioed Mission Control. "We had everything in the world drop out," he added. The static discharge from the lightning caused a voltage outage, knocking out most of the Apollo 12 command module's control systems, including its vital telemetry link with Mission Control.
Loud, overlapping voices could be heard in Mission Control as engineers and flight controllers worked out what course of action to take. Fortunately, the Apollo 12 Saturn V rocket did not deviate from its planned trajectory: the IBM 60-pound Launch Vehicle Digital Computer (LVDC), housed inside the Instrument Unit section of the rocket's third stage, contained the processing power required to keep the Saturn V on its programmed course.

Meanwhile, Mission Control engineers saw strange data patterns on their control screens and worked desperately to find a solution. NASA flight controller and engineer John Aaron recalled seeing similar data patterns during simulation tests and remembered that they meant the Signal Conditioning Electronics (SCE) were down. "Flight, try SCE to AUX," Aaron recommended to Mission Flight Director Gerry Griffin. Griffin instructed that the recommendation be radioed to the astronauts in the command module.

One minute after the lightning strike, Mission Control radioed the Apollo 12 command module: "Apollo 12, Houston. Try SCE to Auxiliary. Over." There was a brief pause as the astronauts heard what they thought was the acronym "FCE" instead of "SCE". "Try FCE to Auxiliary. What the hell is that?" Conrad asked Mission Control. "SCE – SCE to Auxiliary," Mission Control slowly repeated with emphasis. Apollo 12 pilot astronaut Alan Bean was familiar with the SCE switch inside the command module, so, turning around in his seat, he flipped SCE to AUX, which restored and normalized the command module's instrumentation data and telemetry transmissions. Apollo 12 was able to complete its mission to the moon, thanks in significant part to the reliability of the IBM LVDC and, of course, Aaron's "SCE to AUX."

In 1962, science fiction writer Arthur C. Clarke witnessed a demonstration at Bell Labs where its scientists used an IBM 7094 computer to create a synthesized human voice singing the song "Daisy Bell (Bicycle Built for Two)." The demonstration inspired Clarke to write a much-remembered scene in the 1968 science fiction movie "2001: A Space Odyssey" featuring the somewhat sentient "Heuristically programmed ALgorithmic" computer known as the HAL 9000. In the movie, the HAL 9000 sings "Daisy Bell (Bicycle Built for Two)" while being deactivated to inoperability as astronaut David Bowman removes its computing modules. For the record, the HAL 9000 was not an IBM computer.

Wendy Chen is the CEO of Omnistream, a retail automation company helping retailers bring joy to consumers.

Every innovator, at some point, faces the same challenge. You've built a revolutionary mousetrap, but you need to convince people to actually take a chance on your product, and to stop using whatever solution they're currently using to keep the rodent population under control. That's a tough sell because, by definition, your new product is unproven. Even if you've been around a while and have a clear record of success, and even if you can show on paper how much ROI your product will generate, customers quite reasonably worry about the potential for things to go wrong. To drive things forward, it's important to build your sales pipeline, and even your product itself, with your customers' pain points in mind. Here are five ways to convince your customers to bet on innovation and take a chance on your product.

Understand The Friction

It isn't enough to show your buyer that your product is better than the alternative.
You need to understand and account for the friction that keeps them from wanting to make changes. That isn't just conservatism; it's a rational disinclination toward any sort of change. Some industries, some companies, and some product categories bring more inherent friction than others. It's up to you to understand that and find ways to lubricate the wheels and create momentum for change.

Minimize The Risk

The biggest source of friction, of course, is the risk inherent in trying something new. If there's a working product in place, then making any change brings a non-zero chance that things will stop working, and that usually ends with someone getting fired. Understandably, people in positions to make these decisions often prioritize minimizing risk rather than maximizing value, and it's up to you to account for that fact. One smart approach: instead of trying to sell customers on a widespread rollout, offer to run a low-cost, low-risk pilot project. My company is a retail tech solutions vendor, and we often use pilot projects or small-scale tests with a handful of stores across one or two product categories to convince potential customers to try us out. We then measure their incremental growth, and the resulting store-level profitability from using our solutions, against control stores.

Keep Costs Low

Nobody wants to spend money on unproven technology, and no matter how great your product, every customer will view it as unproven until they've seen it delivering consistent results for their specific use case. Finding creative ways to keep costs low, especially during the early stages, is vital. Some SaaS companies now use consumption-based pricing, rather than regular monthly subscriptions, to reassure customers they'll only pay for what they use. Others, like my company, peg their price to the increased performance they deliver. It's important to do everything necessary to make sure your retail clients succeed, so they know they're always coming out ahead.

It's also important to ensure your product plays nicely with legacy infrastructure and complements the customer's existing investments: it doesn't matter how great your product is if it requires the customer to completely rebuild their backend IT or POS systems. Simple integration into existing core systems ensures speedy execution. Another great option is a modular offering, which allows customers to choose only the processes they want, ensuring full integration into their existing supply chain, retail planning, and forecasting systems.

Help Your Advocates Communicate Your Value

As the saying goes, nobody gets fired for buying IBM. Your goal during the pilot project is to develop advocates for your product: people at all levels, from end user to the C-suite, who are willing to stick their necks out and say your product is worth implementing more broadly. To do that, you need to deliver at all levels of the organization: change management support for the implementation team, a streamlined experience for users, real benefits for their supervisors, and clear metrics that document your product's value and allow it to be easily communicated up the chain of command.

Make Your Pilot Scalable

Once you've secured buy-in for your product, you need to be able to communicate a clear strategy for scaling up the pilot and delivering broader value.
This needs to be baked into the DNA of your pilot: if you've focused on a handful of stores for one or two product categories, for instance, then make it easy to add a couple more stores or categories, or to quickly scale up and add entire regions. For bonus points, make your product more valuable as it scales. You've shown your product works across a couple of locations, but can you offer additional learnings and customer insights as you bring more locations into your network? You'll also need to show willingness to customize your product to serve your customers' unique needs and fringe cases, and to stay aligned with their own strategy for growth, so they're motivated to lean into the relationship as they expand.

We're raised to view innovators as mavericks, people who think differently and change the world by the sheer force of their creativity and contrarianism. But the reality is that innovation is a team sport, and it's only by convincing other people to join your mission that you'll be able to win top-to-bottom buy-in and truly bring your product to scale. To succeed as B2B software innovators, we need to spend as much time thinking about how to turn our customers into innovators as we do on planning our own innovations.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.

A survey by IBM Security has revealed that data breaches are higher-impact and costlier than ever before, with the global average reaching an all-time high of $4.35 million. Conducted on behalf of IBM by the Ponemon Institute, the 2022 Cost of a Data Breach Report was based on in-depth analysis of real-world data breaches experienced by 550 organisations globally between March 2021 and March 2022.

The report showed breach costs rising by nearly 13 per cent over the past two years. The results suggest these incidents may also be contributing to the rising costs of goods and services: 60 per cent of surveyed organisations reported having raised their product or service prices because of a breach. The survey also showed that 83 per cent of the organisations studied had experienced more than one data breach. Another factor shown to be growing over time is the lingering aftereffect of a breach, with 50 per cent of breach costs incurred more than a year after the incident.

Other key findings of the report revealed that ransomware victims who decided to pay the threat actors' ransom demands incurred only $610,000 less in breach costs than those who chose not to pay. The study shows that 80 per cent of the critical infrastructure organisations studied do not adopt 'zero trust' strategies, and their average breach costs rose to $5.4 million, a $1.17 million increase compared with those that do. Immature cloud security practices, with 43 per cent of organisations reporting being only in the early stages of applying security measures to the cloud, resulted in breach costs $660,000 higher on average than at organisations with mature security across their cloud environments.

Commenting on the report, Charles Henderson, global head of IBM Security X-Force, said: "This report shows that the right strategies coupled with the right technologies can help make all the difference when businesses are attacked."
Could artificial intelligence (AI) help companies meet growing expectations for environmental, social and governance (ESG) reporting? Certainly, over the past couple of years, ESG issues have soared in importance for corporate stakeholders, with increasing demands from investors, employees and customers. According to S&P Global, in 2022 corporate boards and government leaders "will face rising pressure to demonstrate that they are adequately equipped to understand and oversee ESG issues — from climate change to human rights to social unrest."

ESG investing, in particular, has been a big part of this boom: Bloomberg Intelligence found that ESG assets are on track to exceed $50 trillion by 2025, representing more than a third of the projected $140.5 trillion in total global assets under management. Meanwhile, ESG reporting has become a top priority that goes beyond ticking off regulatory boxes. It is used as a tool to attract investors and financing, as well as to meet the expectations of today's consumers and employees.

But according to a recent Oracle ESG global study, 91% of business leaders currently face major challenges in making progress on sustainability and ESG initiatives. These include finding the right data to track progress, and time-consuming manual processes for reporting on ESG metrics.

"A lot of the data that needs to be collected either doesn't exist yet or needs to come from many systems," said Sem J. de Spa, senior manager of digital risk solutions at Deloitte. "It's also way more complex than just your company, because it's your suppliers, but also the suppliers of your suppliers."

That is where AI has increasingly become part of the ESG equation. AI can help manage data, glean insights from it, operationalize it and report against it, said Christina Shim, VP of strategy and sustainability, AI applications software at IBM. "We need to make sure that we're gathering the mass amounts of data when they're in completely different silos, that we're leveraging that data to improve operations within the business, that we're reporting that data to a variety of stakeholders and against a very confusing landscape of ESG frameworks," she said.

According to Deloitte, although a BlackRock survey found that 92% of S&P companies were reporting ESG metrics by the end of 2020, 53% of global respondents cited "poor quality or availability of ESG data and analytics" and another 33% cited "poor quality of sustainability investment reporting" as the two biggest barriers to adopting sustainable investing.

Making progress is a must, experts say. "Increasingly, these ESG and sustainability commitments are no longer simply nice to have," said Shim. "It's really becoming kind of like a basis of what organizations need to be focused on, and there are increasingly higher standards that have to be integrated into the operations of all businesses," she explained.

"The challenge is huge, especially as new regulations and standards emerge and ESG requirements come under more scrutiny," said De Spa. This has led to hundreds of technology vendors flooding the market with AI-based tools to help tackle these issues. "We need all of them, at least a lot of them, to solve these challenges," he said.

On top of the operational challenges around ESG, the Oracle study found that 96% of business leaders admit human bias and emotion often distract from the end ESG goals. In fact, 93% of business leaders say they would trust a bot over a human to make sustainability and social decisions.
On top of the operational challenges around ESG, the Oracle study found 96% of business leaders admit human bias and emotion often distract from the end ESG goals. In fact, 93% of business leaders say they would trust a bot over a human to make sustainability and social decisions. “We have people who are coming up now who are hardwired for ESG,” said Pamela Rucker, CIO advisor and instructor for Harvard Professional Development, who helped put together the Oracle study. “The idea that they would trust a computer isn’t different for them. They already trust a computer to guide them to work, to provide them directions, to tell them where the best prices are.” But, she added, humans can work with technology to create more meaningful change, and the survey also found that business leaders believe there is still a place for humans in ESG efforts, including managing change (48%), educating others (46%) and making strategic decisions (42%). “Having a machine that might be able to sift through some of that data will allow the humans to come in and look at places where they can add some context around places where we might have some ambiguity, or we might have places where there’s an opportunity,” said Rucker. “AI gives you a chance to see more of that data, and you can spend more time trying to come up with the insights.” Seth Dobrin, chief AI officer at IBM, told VentureBeat that companies should get started now on using AI to harness ESG data. “Don’t wait for additional regulations to come,” he said. Getting a handle on data is essential as companies begin their journey towards bringing AI technologies into the mix. “You need a baseline to understand where you are, because you can make all the goals and imperatives, you can commit to whatever you want, but until you know where you are, you’re never gonna figure out how to get to where you need to get to,” he said. Dobrin said he also sees organizations moving from a defensive, risk management posture around ESG to a proactive approach that is open to AI and other technologies to help. “It’s still somewhat of a compliance exercise, but it’s shifting,” he said. “Companies know they need to get on board and think proactively so that they are considered a thought leader in the space and not just a laggard doing the bare minimum.” One of the key areas IBM is focusing on, he added, is helping clients connect their ESG data and data monitoring with the real operations of the business. “If we’re thinking about business facilities and assets, infrastructure and supply chain as something that’s relevant across industries, all the data that’s being sourced needs to be rolled up and integrated with data and process flows within the ESG reporting and management piece,” he said. “You’re sourcing the data from the business.” Deloitte recently partnered with Signal AI, which offers AI-powered media intelligence, to help the consulting firm’s clients spot and address supplier risks related to ESG issues. “With the rise of ESG and as businesses are navigating a more complex environment than ever before, the world has become awash in unstructured data,” said David Benigson, CEO of Signal AI. “Businesses may find themselves constantly on the back foot, responding to these issues reactively rather than having the sort of data and insights at their fingertips to be at the forefront.” The emergence of machine learning and AI, he said, can fundamentally address those challenges.
“We can transform data into structured insights that help business leaders and organizations better understand their environment and get ahead of those risks, those threats faster, but also spot those opportunities more efficiently too – providing more of an outside-in perspective on issues such as ESG.” He pointed to recent backlash around “greenwashing,” including by Elon Musk (who called ESG a “scam” because Tesla was removed from the S&P 500’s ESG Index). “There are accusations that organizations are essentially marking their own homework when it comes to scoring their performance and alignment against these sorts of ESG commitments,” he said. “At Signal, we provide the counter to that – we don’t necessarily analyze what the company says they’re going to do, but what the world thinks about what that company is doing and what that company is actually doing in the wild.” Deloitte’s de Spa said the firm uses Signal AI for what it calls a “responsible value chain” – basically, supplier risk management. “For example, a sustainable organization that cleans oceans and rivers of all kinds of waste asked us to help them get more insight into their own value chain,” he said. “They have a small number of often small suppliers they are dependent on and you cannot easily keep track of what they’re doing.” With Signal AI, he explained, Deloitte can follow what is happening with those companies to identify if there are any risks – if they are no longer able to deliver, for example, if there is a scandal that puts them out of business, or if the company is causing issues related to sustainability. In one case, Deloitte discovered a company that was not treating its workers fairly. “You can definitely fight greenwashing because you can see what is going on,” he said. “You can leverage millions of sources to identify what is really happening.” As sustainability and other ESG-related regulations begin to proliferate around the world, AI and smart technology will continue to play a crucial role, said Deloitte’s de Spa. “It’s not just about carbon, or even having a responsible value chain that has a net zero footprint,” he said. “But it’s also about modern slavery and farmers and other social types of things that companies will need to report on in the next few years.” Going forward, a key factor will be how to connect and integrate data together using AI, said IBM’s Dobrin. “Many offer a carbon piece or sell AI just for energy efficiency or supply chain transparency,” he said. “But you need to connect all of it together in a one-stop shop – that will be a total game-changer in this space.” No matter what, said Rucker, there is certainly going to be more for AI-driven tools to measure when it comes to ESG. “One of the reasons I get excited about this is because it’s not just about a carbon footprint anymore, and those massive amounts of data mean you’re going to have to have heavy lifting done by a machine,” she said. “I see an ESG future where the human needs the machine and the machine needs the human. I don’t think that they can exist without each other.”
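The supplier-monitoring workflow de Spa describes can be pictured with a toy example. Real systems such as Signal AI’s apply large ML models across millions of sources; the sketch below substitutes a simple keyword scan, and the supplier names, risk terms and headlines are all invented.

```python
# Toy stand-in for media-intelligence supplier monitoring: scan incoming
# headlines for watched suppliers co-occurring with risk language, and
# surface matches for human review. All names and terms are hypothetical.
SUPPLIERS = {"Acme Plastics", "Delta Shipping", "Nordwind Textiles"}
RISK_TERMS = {"strike", "fraud", "spill", "child labor", "bankruptcy",
              "sanction", "recall"}

def flag_headline(headline: str):
    """Return (supplier, matched_terms) when a watched supplier appears
    alongside risk language; otherwise return None."""
    lowered = headline.lower()
    supplier = next((s for s in SUPPLIERS if s.lower() in lowered), None)
    if supplier is None:
        return None
    hits = sorted(term for term in RISK_TERMS if term in lowered)
    return (supplier, hits) if hits else None

feed = [
    "Nordwind Textiles faces child labor allegations at subcontractor",
    "Quarterly results: Acme Plastics beats estimates",
    "Delta Shipping hit by dockworker strike in Rotterdam",
]
for headline in feed:
    match = flag_headline(headline)
    if match:
        print(f"REVIEW: {match[0]} -> {match[1]} | {headline}")
```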
I've written many times about having joined the investment industry in 1969 when the "Nifty Fifty" stocks were in full flower. My first employer, First National City Bank, as well as many of the other "money-center banks" (the leading investment managers of the day), were enthralled with these companies, with their powerful business models and flawless prospects. Sentiment surrounding their stocks was uniformly positive, and portfolio managers found great safety in numbers. For example, a common refrain at the time was "you can't be fired for buying IBM," the era's quintessential growth company. I've also written extensively about the fate of these stocks. In 1973-74, the OPEC oil embargo and the resultant recession took the S&P 500 Index down a total of 47%. And many of the Nifty Fifty, for which it had been thought that "no price was too high," did far worse, falling from peak p/e ratios of 60-90 to trough multiples in the single digits. Thus, their devotees lost almost all of their money in the stocks of companies that "everyone knew" were great. This was my first chance to see what can happen to assets that are on what I call "the pedestal of popularity." In 1978, I was asked to move to the bank's bond department to start funds in convertible bonds and, shortly thereafter, high yield bonds. Now I was investing in securities most fiduciaries considered "uninvestable" and which practically no one knew about, cared about, or deemed desirable... and I was making money steadily and safely. I quickly recognized that my strong performance resulted in large part from precisely that fact: I was investing in securities that practically no one knew about, cared about, or deemed desirable. This brought home the key money-making lesson of the Efficient Market Hypothesis, which I had been introduced to at the University of Chicago Business School: If you seek superior investment results, you have to invest in things that others haven't flocked to and caused to be fully valued. In other words, you have to do something different. In 2006, I wrote a memo called Dare to Be Great. It was mostly about having high aspirations, and it included a rant against conformity and investment bureaucracy, as well as an assertion that the route to superior returns by necessity runs through unconventionality. The element of that memo that people still talk to me about is a simple two-by-two matrix:

| | Conventional Behavior | Unconventional Behavior |
| --- | --- | --- |
| Favorable Outcomes | Average good results | Above average results |
| Unfavorable Outcomes | Average bad results | Below average results |

Here's how I explained the situation: Of course, it's not easy and clear-cut, but I think it's the general situation. If your behavior and that of your managers are conventional, you're likely to get conventional results - either good or bad. Only if the behavior is unconventional is your performance likely to be unconventional... and only if the judgments are superior is your performance likely to be above average. The consensus opinion of market participants is baked into market prices. Thus, if investors lack insight superior to the average of the people who make up the consensus, they should expect average risk-adjusted performance. Many years have passed since I wrote that memo, and the investing world has gotten a lot more sophisticated, but the message conveyed by the matrix and the accompanying explanation remains unchanged. Talk about simple - in the memo, I reduced the issue to a single sentence: "This just in: You can't take the same actions as everyone else and expect to outperform."
The best way to understand this idea is by thinking through a highly logical and almost mathematical process (greatly simplified, as usual, for illustrative purposes):

- A certain (but unascertainable) number of dollars will be made over any given period by all investors collectively in an individual stock, a given market, or all markets taken together. That amount will be a function of (a) how companies or assets fare in fundamental terms (e.g., how their profits grow or decline) and (b) how people feel about those fundamentals and treat asset prices.
- On average, all investors will do average. If you're happy doing average, you can simply invest in a broad swath of the assets in question, buying some of each in proportion to its representation in the relevant universe or index. By engaging in average behavior in this way, you're guaranteed average performance. (Obviously, this is the idea behind index funds.)
- If you want to be above average, you have to depart from consensus behavior. You have to overweight some securities, asset classes, or markets and underweight others. In other words, you have to do something different.
- The challenge lies in the fact that (a) market prices are the result of everyone's collective thinking and (b) it's hard for any individual to consistently figure out when the consensus is wrong and an asset is priced too high or too low.
- Nevertheless, "active investors" place active bets in an effort to be above average. Investor A decides stocks as a whole are too cheap, and he sells bonds in order to overweight stocks. Investor B thinks stocks are too expensive, so she moves to an underweighting by selling some of her stocks to Investor A and putting the proceeds into bonds. Investor X decides a certain stock is too cheap and overweights it, buying from investor Y, who thinks it's too expensive and therefore wants to underweight it.
- It's essential to note that in each of the above cases, one investor is right and the other is wrong. Now go back to the first bullet point above: Since the total dollars earned by all investors collectively are fixed in amount, all active bets, taken together, constitute a zero-sum game (or negative-sum after commissions and other costs). The investor who is right earns an above-average return, and by definition, the one who's wrong earns a below-average return.
- Thus, every active bet placed in the pursuit of above-average returns carries with it the risk of below-average returns. There's no way to make an active bet such that you'll win if it works but not lose if it doesn't. Financial innovations are often described as offering some version of this impossible bargain, but they invariably fail to live up to the hype.

The bottom line of the above is simple: You can't hope to earn above-average returns if you don't place active bets, but if your active bets are wrong, your return will be below average.
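The zero-sum bullet above can be verified with a toy calculation. Everything in it is invented for illustration - a ten-asset "market," an equal-weighted index, and a single overweight funded by selling a slice of the index, with another investor taking the other side:

```python
import random

random.seed(0)
asset_returns = [random.gauss(0.07, 0.15) for _ in range(10)]  # one period
market = sum(asset_returns) / len(asset_returns)               # equal-weighted "index"

# Investor A overlays the index with +10 points of asset 0, funded by
# selling 10 points of the index; investor B takes the opposite side.
tilt = 0.10
active = tilt * (asset_returns[0] - market)
ret_a = market + active
ret_b = market - active

print(f"market {market:+.2%}   A {ret_a:+.2%}   B {ret_b:+.2%}")
deviation_sum = (ret_a - market) + (ret_b - market)
print(f"Active bets sum to {deviation_sum:+.6f} before costs")  # always zero
```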
Investing strikes me as being very much like golf, where playing conditions and the performance of competitors can change from day to day, as can the placement of the holes. On some days, one approach to the course is appropriate, but on other days, different tactics are called for. To win, you have to either do a better job than others of selecting your approach or executing on it, or both. The same is true for investors. It's simple: If you hope to distinguish yourself in terms of performance, you have to depart from the pack. But, having departed, the difference will only be positive if your choice of strategies and tactics is correct and/or you're able to execute better. In 2009, when Columbia Business School Publishing was considering whether to publish my book The Most Important Thing, they asked to see a sample chapter. As has often been my experience, I sat down and described a concept I hadn't previously written about or named. That description became the book's first chapter, addressing one of its most important topics: second-level thinking. It's certainly the concept from the book that people ask me about most often. The idea of second-level thinking builds on what I wrote in Dare to Be Great. First, I repeated my view that success in investing means doing better than others. All active investors (and certainly money managers hoping to earn a living) are driven by the pursuit of superior returns. But that universality also makes beating the market a difficult task. Millions of people are competing for each dollar of investment gain. Who'll get it? The person who's a step ahead. In some pursuits, getting up to the front of the pack means more schooling, more time in the gym or the library, better nutrition, more perspiration, greater stamina, or better equipment. But in investing, where these things count for less, it calls for more perceptive thinking... at what I call the second level. The basic idea behind second-level thinking is easily summarized: In order to outperform, your thinking has to be different and better. Remember, your goal in investing isn't to earn average returns; you want to do better than average. Thus, your thinking has to be better than that of others - both more powerful and at a higher level. Since other investors may be smart, well-informed, and highly computerized, you must find an edge they don't have. You must think of something they haven't thought of, see things they miss, or bring insight they don't possess. You have to react differently and behave differently. In short, being right may be a necessary condition for investment success, but it won't be sufficient. You have to be more right than others... which by definition means your thinking has to be different. Having made the case, I went on to distinguish second-level thinkers from those who operate at the first level: First-level thinking is simplistic and superficial, and just about everyone can do it (a bad sign for anything involving an attempt at superiority). All the first-level thinker needs is an opinion about the future, as in "The outlook for the company is favorable, meaning the stock will go up." Second-level thinking is deep, complex, and convoluted. The second-level thinker takes a great many things into account:

- What is the range of likely future outcomes?
- What outcome do I think will occur?
- What's the probability I'm right?
- What does the consensus think?
- How does my expectation differ from the consensus?
- How does the current price for the asset comport with the consensus view of the future, and with mine?
- Is the consensus psychology that's incorporated in the price too bullish or bearish?
- What will happen to the asset's price if the consensus turns out to be right, and what if I'm right?

The difference in workload between first-level and second-level thinking is clearly massive, and the number of people capable of the latter is tiny compared to the number capable of the former. First-level thinkers look for simple formulas and easy answers.
Second-level thinkers know that success in investing is the antithesis of simple. Speaking about difficulty reminds me of an important idea that arose in my discussions with my son Andrew during the pandemic (described in the memo Something of Value, published in January 2021). In the memo's extensive discussion of how efficient most markets have become in recent decades, Andrew makes a terrific point: "Readily available quantitative information with regard to the present cannot be the source of superior performance." After all, everyone has access to this type of information - with regard to public U.S. securities, that's the whole point of the SEC's Reg FD (for fair disclosure) - and nowadays all investors should know how to manipulate data and run screens. So, then, how can investors who are intent on outperforming hope to reach their goal? As Andrew and I said on a podcast where we discussed Something of Value, they have to go beyond readily available quantitative information with regard to the present. Instead, their superiority has to come from an ability to: better understand the significance of the published numbers, better assess the qualitative aspects of the company, and/or better divine the future. Obviously, none of these things can be determined with certainty, measured empirically, or processed using surefire formulas. Unlike present-day quantitative information, there's no source you can turn to for easy answers. They all come down to judgment or insight. Second-level thinkers who have better judgment are likely to achieve superior returns, and those who are less insightful are likely to generate inferior performance. This all leads me back to something Charlie Munger told me around the time The Most Important Thing was published: "It's not supposed to be easy. Anyone who finds it easy is stupid." Anyone who thinks there's a formula for investing that guarantees success (and that they can possess it) clearly doesn't understand the complex, dynamic, and competitive nature of the investing process. The prize for superior investing can amount to a lot of money. In the highly competitive investment arena, it simply can't be easy to be the one who pockets the extra dollars. There's a concept in the investing world that's closely related to being different: contrarianism. "The investment herd" refers to the masses of people (or institutions) that drive security prices one way or the other. It's their actions that take asset prices to bull market highs and sometimes bubbles and, in the other direction, to bear market territory and occasional crashes. At these extremes, which are invariably overdone, it's essential to act in a contrary fashion. Joining in the swings described above causes people to own or buy assets at high prices and to sell or fail to buy at low prices. For this reason, it can be important to part company with the herd and behave in a way that's contrary to the actions of most others. Contrarianism received its own chapter in The Most Important Thing. Here's how I set forth the logic: Markets swing dramatically, from bullish to bearish, and from overpriced to underpriced. Their movements are driven by the actions of "the crowd," "the herd," and "most people." Bull markets occur because more people want to buy than sell, or the buyers are more highly motivated than the sellers. The market rises as people switch from being sellers to being buyers, and as buyers become even more motivated and the sellers less so.
(If buyers didn't predominate, the market wouldn't be rising.) Market extremes represent inflection points. These occur when bullishness or bearishness reaches a maximum. Figuratively speaking, a top occurs when the last person who will become a buyer does so. Since every buyer has joined the bullish herd by the time the top is reached, bullishness can go no further, and the market is as high as it can go. Buying or holding is dangerous. Since there's no one left to turn bullish, the market stops going up. And if the next day one person switches from buyer to seller, it will start to go down. So at the extremes, which are created by what "most people" believe, most people are wrong. Therefore, the key to investment success has to lie in doing the opposite: in diverging from the crowd. Those who recognize the errors that others make can profit enormously from contrarianism. To sum up, if the extreme highs and lows are excessive and the result of the concerted, mistaken actions of most investors, then it's essential to leave the crowd and be a contrarian. In his 2000 book, Pioneering Portfolio Management, David Swensen, the former chief investment officer of Yale University, explained why investing institutions are vulnerable to conformity with current consensus belief and why they should instead embrace contrarianism. (For more on Swensen's approach to investing, see "A Case in Point" below.) He also stressed the importance of building infrastructure that enables contrarianism to be employed successfully: Unless institutions maintain contrarian positions through difficult times, the resulting damage imposes severe financial and reputational costs on the institution. Casually researched, consensus-oriented investment positions provide little prospect for producing superior results in the intensely competitive investment management world. Unfortunately, overcoming the tendency to follow the crowd, while necessary, proves insufficient to guarantee investment success... While courage to take a different path enhances chances for success, investors face likely failure unless a thoughtful set of investment principles undergirds the courage. Before I leave the subject of contrarianism, I want to make something else very clear. First-level thinkers - to the extent they're interested in the concept of contrarianism - might believe contrarianism means doing the opposite of what most people are doing, so selling when the market rises and buying when it falls. But this overly simplistic definition of contrarianism is unlikely to be of much help to investors. Instead, the understanding of contrarianism itself has to take place at a second level. In The Most Important Thing Illuminated, an annotated edition of my book, four professional investors and academics provided commentary on what I had written. My good friend Joel Greenblatt, an exceptional equity investor, provided a very apt observation regarding knee-jerk contrarianism: "... just because no one else will jump in front of a Mack truck barreling down the highway doesn't mean that you should." In other words, the mass of investors aren't wrong all the time, or wrong so dependably that it's always right to do the opposite of what they do. Rather, to be an effective contrarian, you have to figure out: what the herd is doing; why it's doing it; what's wrong, if anything, with what it's doing; and what you should do about it.
Like the second-level thought process laid out in bullet points above, intelligent contrarianism is deep and complex. It amounts to much more than simply doing the opposite of the crowd. Nevertheless, good investment decisions made at the best opportunities - at the most overdone market extremes - invariably include an element of contrarian thinking. There are only so many topics I find worth writing about, and since I know I'll never know all there is to know about them, I return to some from time to time and add to what I've written previously. Thus, in 2014, I followed up on 2006's Dare to Be Great with a memo creatively titled Dare to Be Great II. To begin, I repeated my insistence on the importance of being different: If your portfolio looks like everyone else's, you may do well, or you may do poorly, but you can't do differently. And being different is absolutely essential if you want a chance at being superior... I followed that with a discussion of the challenges associated with being different: Most great investments begin in discomfort. The things most people feel good about - investments where the underlying premise is widely accepted, the recent performance has been positive, and the outlook is rosy - are unlikely to be available at bargain prices. Rather, bargains are usually found among things that are controversial, that people are pessimistic about, and that have been performing badly of late. But then, perhaps most importantly, I took the idea a step further, moving from daring to be different to its natural corollary: daring to be wrong. Most investment books are about how to be right, not the possibility of being wrong. And yet, the would-be active investor must understand that every attempt at success by necessity carries with it the chance for failure. The two are absolutely inseparable, as I described earlier. In a market that is even moderately efficient, everything you do to depart from the consensus in pursuit of above-average returns has the potential to result in below-average returns if your departure turns out to be a mistake. Overweighting something versus underweighting it; concentrating versus diversifying; holding versus selling; hedging versus not hedging - these are all double-edged swords. You gain when you make the right choice and lose when you're wrong. One of my favorite sayings came from a pit boss at a Las Vegas casino: "The more you bet, the more you win when you win." Absolutely inarguable. But the pit boss conveniently omitted the converse: "The more you bet, the more you lose when you lose." Clearly, those two ideas go together. In a presentation I occasionally make to institutional clients, I employ PowerPoint animation to graphically portray the essence of this situation: A bubble drops down, containing the words "Try to be right." That's what active investing is all about. But then a few more words show up in the bubble: "Run the risk of being wrong." The bottom line is that you simply can't do the former without also doing the latter. They're inextricably intertwined. Then another bubble drops down, with the label "Can't lose." There are can't-lose strategies in investing. If you buy T-bills, you can't have a negative return. If you invest in an index fund, you can't underperform the index. But then two more words appear in the second bubble: "Can't win." People who use can't-lose strategies by necessity surrender the possibility of winning. T-bill investors can't earn more than the lowest of yields.
Index fund investors can't outperform. And that brings me to the assignment I imagine receiving from unenlightened clients: "Just apply the first set of words from each bubble: Try to outperform while employing can't-lose strategies." But that combination happens to be unavailable. The above shows that active investing carries a cost that goes beyond commissions and management fees: heightened risk of inferior performance. Thus, every investor has to make a conscious decision about which course to follow. Pursue superior returns at the risk of coming in behind the pack, or hug the consensus position and ensure average performance. It should be clear that you can't hope to earn superior returns if you're unwilling to bear the risk of sub-par results. And that brings me to my favorite fortune cookie, which I received with dessert 40-50 years ago. The message inside was simple: The cautious seldom err or write great poetry. In my college classes in Japanese studies, I learned about the koan, which Oxford Languages defines as "a paradoxical anecdote or riddle, used in Zen Buddhism to demonstrate the inadequacy of logical reasoning and to provoke enlightenment." I think of my fortune that way because it raises a question I find paradoxical and capable of leading to enlightenment. But what does the fortune mean? That you should be cautious because cautious people seldom make mistakes? Or that you shouldn't be cautious, because cautious people rarely accomplish great things? The fortune can be read both ways, and both conclusions seem reasonable. Thus the key question is, "Which meaning is right for you?" As an investor, do you like the idea of avoiding error, or would you rather try for superiority? Which path is more likely to lead to success as you define it, and which is more feasible for you? You can follow either path, but clearly not both simultaneously. Thus, investors have to answer what should be a very basic question: Will you (a) strive to be above average, which costs money, is far from sure to work, and can result in your being below average, or (b) accept average performance - which helps you reduce those costs but also means you'll have to look on with envy as winners report mouth-watering successes? Here's how I put it in Dare to Be Great II: How much emphasis should be put on diversifying, avoiding risk, and ensuring against below-pack performance, and how much on sacrificing these things in the hope of doing better? And here's how I described some of the considerations: Unconventional behavior is the only road to superior investment results, but it isn't for everyone. In addition to superior skill, successful investing requires the ability to look wrong for a while and survive some mistakes. Thus each person has to assess whether he's temperamentally equipped to do these things and whether his circumstances - in terms of employers, clients and the impact of other people's opinions - will allow it... when the chips are down and the early going makes him look wrong, as it invariably will. You can't have it both ways. And as in so many aspects of investing, there's no right or wrong, only right or wrong for you. The aforementioned David Swensen ran Yale University's endowment from 1985 until his passing in 2021, an unusual 36-year tenure. He was a true pioneer, developing what has come to be called "the Yale Model" or "the Endowment Model."
He radically reduced Yale's holdings of public stocks and bonds and invested heavily in innovative, illiquid strategies such as hedge funds, venture capital, and private equity at a time when almost no other institutions were doing so. He identified managers in those fields who went on to generate superior results, several of whom earned investment fame. Yale's resulting performance beat almost all other endowments by miles. In addition, Swensen sent out into the endowment community a number of disciples who produced enviable performances for other institutions. Many endowments emulated Yale's approach, especially beginning around 2003-04 after these institutions had been punished by the bursting of the tech/Internet bubble. But few if any duplicated Yale's success. They did the same things, but not nearly as early or as well. To sum up all the above, I'd say Swensen dared to be different. He did things others didn't do. He did these things long before most others picked up the thread. He did them to a degree that others didn't approach. And he did them with exceptional skill. What a great formula for outperformance. In Pioneering Portfolio Management, Swensen provided a description of the challenge at the core of investing - especially institutional investing. It's one of the best paragraphs I've ever read and includes a two-word phrase (which I've bolded for emphasis) that for me reads like sheer investment poetry. I've borrowed it countless times: ...Active management strategies demand **uninstitutional behavior** from institutions, creating a paradox that few can unravel. Establishing and maintaining an unconventional investment profile requires acceptance of uncomfortably idiosyncratic portfolios, which frequently appear downright imprudent in the eyes of conventional wisdom. As with many great quotes, this one from Swensen says a great deal in just a few words. Let's parse its meaning: Idiosyncratic - When all investors love something, it's likely their buying will render it highly priced. When they hate it, their selling will probably cause it to become cheap. Thus, it's preferable to buy things most people hate and sell things most people love. Such behavior is by definition highly idiosyncratic (i.e., "eccentric," "quirky," or "peculiar"). Uncomfortable - The mass of investors take the positions they take for reasons they find convincing. We witness the same developments they do and are impacted by the same news. Yet, we realize that if we want to be above average, our reaction to those inputs - and thus our behavior - should in many instances be different from that of others. Regardless of the reasons, if millions of investors are doing A, it may be quite uncomfortable to do B. And if we do bring ourselves to do B, our action is unlikely to prove correct right away. After we've sold a market darling because we think it's overvalued, its price probably won't start to drop the next day. Most of the time, the hot asset you've sold will keep rising for a while, and sometimes a good while. As John Maynard Keynes said, "Markets can remain irrational longer than you can remain solvent." And as the old adage goes, "Being too far ahead of your time is indistinguishable from being wrong." These two ideas are closely related to another great Keynes quote: "Worldly wisdom teaches that it is better for the reputation to fail conventionally than to succeed unconventionally." Departing from the mainstream can be embarrassing and painful.
Uninstitutional behavior from institutions - We all know what Swensen meant by the word "institutions": bureaucratic, hidebound, conservative, conventional, risk-averse, and ruled by consensus; in short, unlikely mavericks. In such settings, the cost of being different and wrong can be viewed as highly unacceptable relative to the potential benefit of being different and right. For the people involved, passing up profitable investments (errors of omission) poses far less risk than making investments that produce losses (errors of commission). Thus, investing entities that behave "institutionally" are, by their nature, highly unlikely to engage in idiosyncratic behavior. Early in his time at Yale, Swensen chose to:

- minimize holdings of public stocks;
- vastly overweight strategies falling under the heading "alternative investments" (although he started to do so well before that label was created);
- in so doing, commit a substantial portion of Yale's endowment to illiquid investments for which there was no market; and
- hire managers without lengthy track records on the basis of what he perceived to be their investment acumen.

To use his words, these actions probably appeared "downright imprudent in the eyes of conventional wisdom." Swensen's behavior was certainly idiosyncratic and uninstitutional, but he understood that the only way to outperform was to risk being wrong, and he accepted that risk with great results. To conclude, I want to describe a recent occurrence. In mid-June, we held the London edition of Oaktree's biannual conference, which followed on the heels of the Los Angeles version. My assigned subject at both conferences was the market environment. I faced a dilemma while preparing for the London conference because so much had changed between the two events: On May 19, the S&P 500 was at roughly 3,900, but by June 21 it was at approximately 3,750, down almost 4% in roughly a month. Here was my issue: Should I update my slides, which had become somewhat dated, or reuse the LA slides to deliver a consistent message to both audiences? I decided to use the LA slides as the jumping-off point for a discussion of how much things had changed in that short period. The key segment of my London presentation consisted of a stream-of-consciousness discussion of the concerns of the day. I told the attendees that I pay close attention to the questions people ask most often at any given point in time, as the questions tell me what's on people's minds. And the questions I'm asked these days overwhelmingly surround: the outlook for inflation, the extent to which the Federal Reserve will raise interest rates to bring it under control, and whether doing so will produce a soft landing or a recession (and if the latter, how bad). Afterward, I wasn't completely happy with my remarks, so I rethought them over lunch. And when it was time to resume the program, I went up on stage for another two minutes. Here's what I said: All the discussion surrounding inflation, rates, and recession falls under the same heading: the short term. And yet:

- We can't know much about the short-term future (or, I should say, we can't dependably know more than the consensus).
- If we have an opinion about the short term, we can't (or shouldn't) have much confidence in it.
- If we reach a conclusion, there's not much we can do about it - most investors can't and won't meaningfully revamp their portfolios based on such opinions.
- We really shouldn't care about the short term - after all, we're investors, not traders.
I think it's the last point that matters most. The question is whether you agree or not. For example, when asked whether we're heading toward a recession, my usual answer is that whenever we're not in a recession, we're heading toward one. The question is when. I believe we'll always have cycles, which means recessions and recoveries will always lie ahead. Does the fact that there's a recession ahead mean we should reduce our investments or alter our portfolio allocation? I don't think so. Since 1920, there have been 17 recessions as well as one Great Depression, a World War and several smaller wars, multiple periods of worry about global cataclysm, and now a pandemic. And yet, as I mentioned in my January memo, Selling Out, the S&P 500 has returned about 10½% a year on average over that century-plus. Would investors have improved their performance by getting in and out of the market to avoid those problem spots... or would doing so have diminished it? Ever since I quoted Bill Miller in that memo, I've been impressed by his formulation that "it's time, not timing" that leads to real wealth accumulation. Thus, most investors would be better off ignoring short-term considerations if they want to enjoy the benefits of long-term compounding. Two of the six tenets of Oaktree's investment philosophy say (a) we don't base our investment decisions on macro forecasts and (b) we're not market timers. I told the London audience our main goal is to buy debt or make loans that will be repaid and to buy interests in companies that will do well and make money. None of that has anything to do with the short term. From time to time, when we consider it warranted, we do vary our balance between aggressiveness and defensiveness, primarily by altering the size of our closed-end funds, the pace at which we invest, and the level of risk we'll accept. But we do these things on the basis of current market conditions, not expectations regarding future events. Everyone at Oaktree has opinions on the short-run phenomena mentioned above. We just don't bet heavily that they're right. During our recent meetings with clients in London, Bruce Karsh and I spent a lot of time discussing the significance of the short-term concerns. Here's how he followed up in a note to me: ...Will things be as bad or worse or better than expected? Unknowable... and equally unknowable how much is priced in, i.e. what the market is truly expecting. One would think a recession is priced in, but many analysts say that's not the case. This stuff is hard...!!! Bruce's comment highlights another weakness of having a short-term focus. Even if we think we know what's in store in terms of things like inflation, recessions, and interest rates, there's absolutely no way to know how market prices comport with those expectations. This is more significant than most people realize. If you've developed opinions regarding the issues of the day, or have access to those of pundits you respect, take a look at any asset and ask yourself whether it's priced rich, cheap, or fair in light of those views. That's what matters when you're pursuing investments that are reasonably priced. The possibility - or even the fact - that a negative event lies ahead isn't in itself a reason to reduce risk; investors should only do so if the event lies ahead and it isn't appropriately reflected in asset prices. But, as Bruce says, there's usually no way to know.
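To put the memo's "time, not timing" point in rough numbers, the sketch below simply compounds the ~10½% average annual return cited above over a few horizons. The horizons are mine, chosen to show how much of the payoff arrives in the later years; this is back-of-envelope arithmetic, not a market model.

```python
# Compound $1 at the ~10.5% average annual S&P 500 return cited above.
annual = 0.105

for years in (25, 50, 100):
    growth = (1 + annual) ** years
    print(f"$1 at {annual:.1%} for {years:>3} years -> ${growth:>12,.2f}")

# The final quarter-century contributes the overwhelming bulk of the
# total - the arithmetic behind staying invested rather than trading
# in and out.
```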
At the beginning of my career, we thought in terms of investing in a stock for five or six years; something held for less than a year was considered a short-term trade. One of the biggest changes I've witnessed since then is the incredible shortening of time horizons. Money managers know their returns in real-time, and many clients are fixated on how their managers did in the most recent quarter. No strategy - and no level of brilliance - will make every quarter or every year a successful one. Strategies become more or less effective as the environment changes and their popularity waxes and wanes. In fact, highly disciplined managers who hold most rigorously to a given approach will tend to report the worst performance when that approach goes out of favor. Regardless of the appropriateness of a strategy and the quality of investment decisions, every portfolio and every manager will experience good and bad quarters and years that have no lasting impact and say nothing about the manager's ability. Often this poor performance will be due to unforeseen and unforeseeable developments. Thus, what does it mean that someone or something has performed poorly for a while? No one should fire managers or change strategies based on short-term results. Rather than taking capital away from underperformers, clients should consider increasing their allocations in the spirit of contrarianism (but few do). I find it incredibly simple: If you wait at a bus stop long enough, you're guaranteed to catch a bus, but if you run from bus stop to bus stop, you may never catch a bus. I believe most investors have their eye on the wrong ball. One quarter's or one year's performance is meaningless at best and a harmful distraction at worst. But most investment committees still spend the first hour of every meeting discussing returns in the most recent quarter and the year to date. If everyone else is focusing on something that doesn't matter and ignoring the thing that does, investors can profitably diverge from the pack by blocking out short-term concerns and maintaining a laser focus on long-term capital deployment. A final quote from Pioneering Portfolio Management does a great job of summing up how institutions can pursue the superior performance most want. (Its concepts are also relevant to individuals): Appropriate investment procedures contribute significantly to investment success, allowing investors to pursue profitable long-term contrarian investment positions. By reducing pressures to produce in the short run, liberated managers gain the freedom to create portfolios positioned to take advantage of opportunities created by short-term players. By encouraging managers to make potentially embarrassing out-of-favor investments, fiduciaries increase the likelihood of investment success. Oaktree is probably in the extreme minority in its relative indifference to macro projections, especially regarding the short term. Most investors fuss over expectations regarding short-term phenomena, but I wonder whether they actually do much about their concerns and whether it helps. Many investors - and especially institutions such as pension funds, endowments, insurance companies, and sovereign wealth funds, all of which are relatively insulated from the risk of sudden withdrawals - have the luxury of being able to focus exclusively on the long term... if they will take advantage of it.
Thus, my suggestion to you is to depart from the investment crowd, with its unhelpful preoccupation with the short term, and to instead join us in focusing on the things that really matter.

In April, Jim Hannon ascended to CEO at Altus Group after almost two years as president of Altus Analytics, a subsidiary. He’s looking to continue the company’s longstanding policy of aggressive acquisition of proptech startups that feed its valuation, tax appeal, project management and due diligence platform for real estate investors and owners. Founded in 2005, the publicly traded, Toronto-based Altus Group was an early proponent of providing real estate technology data as what it calls “intelligence as a service.” Commercial Observer spoke with Hannon in late July from his home in Naples, Fla., about Altus’ role in the real estate investment and ownership world and about his views on proptech in the near and longer term. The interview has been edited for length and clarity. Commercial Observer: With a $2 billion market cap, Altus Group is a huge company in the proptech sector, and one with many services. As CEO, what’s your elevator pitch for Altus? Jim Hannon: In a nutshell, we’re No. 1 in providing valuations via technology advisory services for commercial real estate. We are the No. 1 or 2 player in the core markets that we serve to make it easier to do tax appeals and have successful outcomes in lowering your taxes and getting better returns out of your assets. We help developers determine when and where, or if, they should invest. And if they choose to invest, we help them project-manage large investments and development. So those are the things we do: valuation, tax appeal, project management, and due diligence. Our clients are investors, asset managers, developers, lenders, and, for the tax business, property owners. Is Altus too large, or not large enough, for what you’re trying to accomplish as a technology source for your clients? That’s an interesting observation. I started my career at IBM, so this doesn’t feel very large to me at all. Actually, it’s a very tight-knit community inside Altus. It came together through acquisitions over the years. But it feels like a tightly focused company from my chair compared to the size of the companies that I’ve been at. How big is Altus in employees and revenue? We have 2,600 employees. We’re in a blackout period right at the moment, so I can’t get too specific, but I can tell you that last year we did $625 million Canadian in revenue ($485 million today). As you mentioned, Altus has grown quite a bit through acquisition. What does that look like these days? Is there more opportunity to acquire proptech startups that fit your platform, or have innovative startup opportunities slowed down? There’s always opportunity to acquire proptech startups. We keep a close eye on the market, as well as on our capital structure, making sure we’re deploying investments in the right areas. Last year, we did three significant acquisitions. We purchased a company in Paris called Finance Active. We’re heavily in the valuation business around equity investments in commercial real estate. Finance Active put us into the debt management side of those investments and it significantly increased the size of our international footprint.
In March of last year, we bought a company called Stratodem, which gave us an analytics engine and thousands of macroeconomic data points to pull into our advanced analytics. And, in November, we purchased a company in New York City called Reonomy, which gave us a significant amount of data on about 53 million commercial real estate assets in the U.S. It also gave us the underlying technology to link attributes of assets to the drivers of performance. This year we purchased a tax technology company called Rethink Solutions, which gave us automated workflow and some predictive analytics capabilities for taxes, as well. What made those proptech companies attractive to Altus? On the tax side, we want technology that improves workflow, or improves the predictability of a successful outcome of a tax appeal. In the Canadian market, we’re the No. 1 commercial real estate tax appraisal adviser. So, basically, we help make the process of appealing tax assessments easier. In the U.K., we’re No. 1. In the U.S., it’s hard to get the exact size of the market, but our estimate is that we’re No. 2, though still with a single-digit market share. It’s a very fragmented market in the U.S., so acquisitions that can help us automate the processes or predict which assets are going to have the highest probability of a successful outcome are interesting technology for us. It allows us to expand our market to clients who want to self-serve, or have a lighter advisory touch if they choose, or if they want to leverage the expertise of our teams. On the analytics side, our core franchises have been in commercial real estate valuations — mark to market. We are by far the leaders, whether it’s from a technology perspective with Argus Enterprise, our flagship product, or through our advisory services. As we generate valuations, we throw off a tremendous amount of exhaustive data, which allows us to look at the commercial real estate market and say, “OK, what drove performance of various types of assets?” How do you see the industry in the midst of so much technological change? The industry is at an inflection point. To me, it feels very similar to financial services over a decade ago: there’s fantastic technology, and expert services to go along with that technology, to say, “What just happened in the market? How do I get a better understanding of what’s going on around me?” The next step is, “Why did that happen?” We can draw correlations using our analytics technology, especially with our recent acquisitions. Then, most importantly, it’s, “What’s going to happen next? Where should I invest? Why should I invest? And how do I think about asset performance across vast portfolios of investments?” That’s where we were going with our acquisitions last year. What is the most exciting thing you have found in becoming CEO? It’s the opportunity to be in front of the whole industry. We’re very early in the adoption curve of advanced analytics, in thinking about the investment side of commercial real estate. There are great firms out there, they have their own data strategies, and some of them are significantly larger than we are. But this is what we do: Investment firms should have data strategies, and we’re here to enable those data strategies for them.
Putting together assets like Stratodem with Reonomy to create advanced offers, and pairing them with Argus and our advisory business, and even the data we split off in our tax franchise, there’s no other company in the world that has our data set and the potential to change this industry like we do. And it was just too much fun of an opportunity to pass on. On the demand side, how do your clients view the adoption of proptech? They’re hungry for it. If we put it in context of today’s economic situation: When you look at rising interest rates and headwinds, that’s going to change investment theses and the way owners think about how they maximize their return on their assets. They are focused on the tenant experience, as they should be. I think that side of the business has as much potential as our side, the investment and performance management side. There’s so much opportunity to improve the services inside buildings and to bring all sorts of technology to bear in this current economic cycle. It’s even more important to be thinking about productivity, efficiency and differentiation. The various proptech companies that are out there, they’re all coming at it with some angle on that. I think the owners understand that investments in technology are going to enable their future growth and the best outcomes with their tenants. We’re seeing strong demand. We’re in about 100 markets overall in six core countries — Canada, U.S., U.K., Germany, France and Australia — and we see the addressable market for those six countries alone at about a $5 billion opportunity. When you add in the rest of the world, our model says that globally it’s a $10 billion market. What kinds of data questions are clients asking Argus about? The first set of conversations that I had with CXO-level folks in the industry were, surprisingly to me, about just the core management of data. “How do I harmonize data from investments in three different countries to get a portfolio view?” I understood that problem. If this is where they’re at now, even the most advanced ones are still trying to figure out how to corral their data and look at it on a country or global basis. Then think about all the various attributes of performance. That’s a core problem across the industry, and the technology we’re building organically with the acquisitions that we executed last year directly addresses that problem. Is there any particular sector of real estate that you’re concentrating on for your clients, whether it be construction or office or residential? There’s a blurring of the lines that happens. We stick to our core strategy, which is commercial real estate. However, as investors are moving into single-family residential rentals as a commercial asset class, that changes our perspective on what is commercial. The legacy definitions don’t necessarily hold if you’re looking at it from an investor perspective. So that’s not where our core strength is, but we’re building up those analytics capabilities. In our Stratodem acquisition, we actually picked up a tremendous amount of data on macro-residential information, which we built into our models. It informs the performance of commercial real estate assets. Across the classes of commercial real estate, we’re building up data and analytics on all of it. We have our tax practices. We look to target and segment into areas of growth like data centers or green energy.
For the rest of this year, or in the near future, how do you view the adoption and use of technology in real estate, and how will that affect Altus’ strategy? I have to be careful to not answer a specific question about the rest of the year that could in any way come across as guidance. I’ll talk about the industry in general and our positioning. We’re in a great place. In markets that go up or down, you’re going to have investors either looking to buy or looking to sell. We’ve gone through various economic cycles over the last 15 years, and we are very resilient, because buyers and sellers are looking for that next piece of information to determine what they should do next. We’ve been there with expert services, information and analytics capabilities, and the adoption of that technology is accelerating. That puts us in a great place as a trusted partner to many of the world’s largest investors.

Autism is known as a spectrum disorder because every autistic person is different, with unique strengths and challenges. Varney says many autistic people experienced education as a system that focused on these challenges, which can include social difficulties and anxiety. He is pleased this is changing, with recent reforms embracing autistic students’ strengths. But the unemployment rate of autistic people remains disturbingly high. ABS data from 2018 shows 34.1 per cent of autistic people are unemployed – three times the rate of people with any type of disability and almost eight times that of people without a disability. “A lot of the time people hear that someone’s autistic and they assume incompetence,” says Varney, who was this week appointed the chair of the Victorian Disability Advisory Council. “But we have unique strengths, specifically hyper focus, great creativity, and we can think outside the box, which is a great asset in workplaces.” In Israel, the defence force has a specialist intelligence unit made up exclusively of autistic soldiers, whose skills are deployed in analysing, interpreting and understanding satellite images and maps. Locally, organisations that actively recruit autistic talent include software giant SAP, Westpac, IBM, ANZ, the Australian Tax Office, Telstra, NAB and PricewaterhouseCoopers. Chris Pedron is a junior data analyst at Australian Spatial Analytics, a social enterprise that says on its website “neurodiversity is our advantage – our team is simply faster and more precise at data processing”. He was hired after an informal chat. (Australian Spatial Analytics also often provides interview questions 48 hours in advance.) Pedron says the traditional recruitment process can work against autistic people because there are a lot of unwritten social cues, such as body language, which he doesn’t always pick up on. “If I’m going in and I’m acting a bit physically standoffish, I’ve got my arms crossed or something, it’s not that I’m not wanting to be there, it’s just that new social interaction is something that causes anxiety.” Pedron also finds eye contact uncomfortable and has had to train himself over the years to concentrate on a point on someone’s face. Australian Spatial Analytics addresses a skills shortage by delivering a range of data services that were traditionally outsourced offshore. Projects include digital farm maps for the grazing industry, technical documentation for large infrastructure and map creation for land administration.
Pedron has always found it easy to map things out in his head. "A lot of the work done here at ASA is geospatial, so having autistic people with a very visual mindset is very much an advantage for this particular job." Pedron listens to music on headphones in the office, which helps him concentrate and stops him from being distracted. He says the simpler and clearer the instructions, the easier it is for him to understand. "The less I have to read between the lines to understand what is required of me, the better."

Australian Spatial Analytics is one of three jobs-focused social enterprises launched by Queensland charity White Box Enterprises. It has grown from three to 80 employees in 18 months and, thanks to philanthropist Naomi Milgrom, who has provided office space in Cremorne, has this year expanded to Melbourne, enabling Australian Spatial Analytics to create 50 roles for Victorians by the end of the year. Chief executive Geoff Smith hopes they are at the front of a wave of employers recognising that hiring autistic people can make good business sense. "Rather than focus on the deficits of the person, focus on the strengths. A quarter of National Disability Insurance Scheme plans name autism as the primary disability, so society has no choice – there's going to be such a huge number of people who are young and looking for jobs who are autistic. There is a skills shortage as it is, so you need to look at neurodiverse talent."

In 2017, IBM launched a campaign to hire more neurodiverse candidates ("neurodiverse" covers a range of conditions including autism; attention deficit hyperactivity disorder, or ADHD; and dyslexia). The initiative was in part inspired by software and data quality engineering services firm Ultranauts, which boasted at an event that "they ate IBM's lunch at testing by using an all-autistic staff". The following year Belinda Sheehan, a senior managing consultant at IBM, was tasked with rolling out a pilot at its client innovation centre in Ballarat. "IBM is very big on inclusivity," says Sheehan. "And if we don't have diversity of thought, we won't have innovation. So those two things go hand in hand."

Sheehan worked with Specialisterne Australia, a social enterprise that assists businesses in recruiting and supporting autistic people, to find talent using a non-traditional recruitment process that included a week-long task. Candidates were asked to work together to find a way for a record shop to connect with customers when the bricks-and-mortar store was closed due to COVID. Ten employees were eventually selected. They started in July 2019 and work in roles across IBM, including data analysis, testing, user experience design, data engineering, automation, blockchain and software development. Another eight employees were hired in July 2021.

Sheehan says clients have been delighted with their ideas. "The UX [user experience] designer, for example, comes in with such a different lens. Particularly as we go to artificial intelligence, you need those different thinkers." One client said if they had to describe the most valuable contribution to the project in two words it would be "ludicrous speed". Another said: "automation genius."

IBM has sought to make the office more inclusive by creating calming, low-sensory spaces. It has formed a business resource group for neurodiverse employees and their allies, with four squads focusing on recruitment, awareness, career advancement, and policies and procedures.
And it has hired a neurodiversity coach to work with individuals and managers. Sheehan says challenges have included some employees getting frustrated because they did not have enough work. "These individuals want to come to work and get the work done – they are not going off for a coffee and chatting." Increased productivity is a good problem to have, Sheehan says, but as a manager she needs to come up with ways they can enhance their skills in their downtime. There have also been difficulties around different communication styles, with staff finding some autistic employees a bit blunt. Sheehan encourages all staff to do a neurodiversity 101 training course run by IBM. "Something may be perceived as rude, but we have to turn that into a positive. It's good to have someone who is direct; at least we all know what that person is thinking."

Chris Varney is delighted to see neurodiversity programs in some industries but points out that every autistic person has different interests and abilities. Some are non-verbal, for example, and not all have the stereotypical autism skills that make them excel at data analysis. "We've seen a big recognition that autistic people are an asset to banks and IT firms, but there's a lot more work to be done," Varney says. "We need to see jobs for a diverse range of autistic people."

Covering COVID-19 is a daily Poynter briefing of story ideas about the coronavirus and other timely topics for journalists, written by senior faculty Al Tompkins. Sign up here to have it delivered to your inbox every weekday morning.

We saw this play out with President Joe Biden's bout with COVID-19: it takes longer than you might expect to test negative. Indeed, the CDC found, "Between 5 and 9 days after symptom onset or after initial diagnosis with SARS-CoV-2 infection, 54% of persons had positive SARS-CoV-2 antigen test results." The LA Times says the rule of thumb "five days and you are clear" is a misconception: "If your test turns out to be positive after five days, don't be upset because the majority of people still test positive until at least Day 7, to Day 10 even," Dr. Clayton Chau, director of the Orange County Health Care Agency, said during a briefing Thursday. "So that's the majority. That's the norm."

Dr. Robert Kosnik, director of UC San Francisco's occupational health program, said at a campus town hall in July that there's an expectation people will test negative on Day 5 and can return to work the next day. "Don't get your hopes up," Kosnik told his colleagues. "Don't be disappointed if you're one of the group that continues to test positive." In fact, some 60% to 70% of infected people still test positive on a rapid test five days after the onset of symptoms or their first positive test, meaning they should still stay in isolation, Kosnik said. "It doesn't significantly fall off until Day 8," he said.

The California Department of Health gives clear guidance on what to do once you test positive for COVID-19: If you test positive or have symptoms of COVID-19, you should stay away from others, even at home and even if you have been vaccinated. Isolate for at least 5 full days after your symptoms start, or after your first positive test date if you don't have symptoms.
Ending isolation: You can end isolation after 5 days if you test negative (use an antigen test) on Day 5 or later, as long as you do not have a fever and your symptoms are getting better. If you still test positive on or after Day 5, or if you don't test, isolate for 10 full days and until you don't have a fever. It is strongly recommended that you wear a well-fitting mask around others, especially when indoors, for 10 days, even if you stop isolating earlier.

For those of you who have been traveling this summer (maybe you attended a journalism conference or other big event), the guidelines say, "If you have been exposed to someone with COVID-19, even if you are vaccinated, test 3-5 days after your exposure. Isolate if you test positive. If you had COVID-19 in the last 90 days, only test if you have new symptoms, using an antigen test." And let's face it: if you have been anywhere more than 50 feet from your front door this summer, you have been exposed to someone who has COVID-19.

There are 7 million Americans who need insulin to control their diabetes. That number alone makes the Inflation Reduction Act important to a lot of your viewers/readers/listeners. Two administrations representing both political parties have promised they would do something to control insulin prices. And once again, the plan is stalled in Congress. Senate Democrats hoped to include a provision in the Inflation Reduction Act that would cap the cost of insulin at $35 a month not only for people with private health insurance but also for people covered by Medicare. But because the legislation before the Senate over the weekend was a budget reconciliation bill, it had to comply with reconciliation rules, and the Senate parliamentarian ruled that the private-insurance portion of the provision fell outside them. Democrats tried to keep the provision in the final legislation anyway, but it failed even with the support of seven GOP senators: Bill Cassidy (Louisiana), Susan Collins (Maine), Josh Hawley (Missouri), Cindy Hyde-Smith (Mississippi), John Kennedy (Louisiana), Lisa Murkowski (Alaska) and Dan Sullivan (Alaska). The final vote was 57-43, but under Senate rules it needed 60 votes to pass. Interestingly, more Republicans supported the proposal when Donald Trump backed such a plan to limit the cost of insulin.

The Senate vote means that the plan heading for a House vote next caps out-of-pocket costs only for Medicare patients who use insulin, around a quarter of whom pay more than $35 per month right now. Some states have imposed a $30 monthly cap on insulin for some patients with private insurance.

Let me give you an idea of how many people this affects. FierceHealthcare summarized the latest findings: the Kaiser Family Foundation looked at 2018 enrollee data for all individual and small group Affordable Care Act plans sold on and off the exchanges. It also looked at claims data from that year from people who had large employer coverage, using IBM MarketScan data. Overall, the analysis covered 110 million of the 160 million Americans with private insurance. Kaiser added that about 1 million people among those studied got an insulin prescription filled in 2018. Researchers looked at how many enrollees paid more than $420 a year out-of-pocket on insulin, which works out to the $35-a-month average. It found that 26% in the individual market and another 31% in the small group market paid more than $420 a year. The large employer market had only 19% of people who paid more than that figure annually, as this group tends to have lower deductibles and copayments.
People who work for smaller employers and people who do not have employer-sponsored health insurance pay the most, as you would suspect. The Kaiser Family Foundation says a $35-a-month cap on out-of-pocket insulin costs could benefit more than one in four Americans on the individual and small group markets and one in five in large employer-sponsored plans. Critics say insulin costs a few dollars to produce, but for some people it has become so expensive that they are rationing their care.

Bloomberg reports that as women return to the office, they are also finding that old shoe-wearing habits are a big pain in the bunion. Podiatrists are seeing an uptick in injuries brought on by a return to the office, in-person conferences and other professional events that require a return to more formal footwear. Dr. Miguel Cunha of Gotham Footcare in Manhattan said his offices have recently seen an influx of overuse injuries, from shin splints to plantar fasciitis, among patients wearing heels again after ditching them for two years. During the pandemic, lower levels of activity and going barefoot led to weakness and tightness of muscles and tendons. "Once the restrictions of the pandemic were lifted, many women resumed their use of heels for work without giving their body adequate time to transition back to their pre-pandemic activity levels," Dr. Cunha said. For many, that's led to intensified foot pain and discomfort. "The body doesn't like any kind of abrupt change," said Dr. James Hanna, former president of the New York State Podiatric Medical Association. "Whenever you're forced to do something all at once, suddenly you're going back to the office, and now you're wearing these shoes you haven't worn in two years, that's really like asking for trouble."

During my time in Las Vegas last week for the NABJ/NAHJ convention, every conversation I had with cab drivers centered on the weather and water. As I watched fountains flow and thought of the water the skyscraping hotels must use, I was interested to learn how Vegas, oddly, may be modeling how other cities, desperate for water, may adapt. CBS News found that the city is so dry that it is ripping up what little grass it maintains. A new law, the first of its kind in the nation, bans non-functional grass, defined as grass that is used to make roadways and roundabouts look good while serving no other purpose. The city has already pulled up about four million square feet of grass on public property this year, because thirsty green parkways are something it just can't afford anymore. "The grass that you see behind me is not long for this world," Mack told correspondent Tracy Smith. "In fact, within the next couple of months to a year, this grass will be completely eliminated, and it'll be replaced with drip-irrigated trees and plants."

And John Entsminger, the general manager of the Southern Nevada Water Authority, said: "Everything we use indoors is recycled. If it hits a drain in Las Vegas, we clean it. We put it back in Lake Mead. You could literally leave every faucet, every shower running in every hotel room, and it won't consume any more water."

In the past two decades, Lake Mead has dropped a startling 180 feet due to the ongoing megadrought, made worse by climate change and the rapid growth of cities and agriculture in the Southwest. Southern Nevada, though, has beaten the odds by cutting its overall water use by 26% while adding 750,000 people to its population since 2002.
How is your community thinking about sustainability in parks, along highways and streets? As cities grow, what requirements are communities placing on developers to keep sustainability in mind for what they plant and how much water the landscaping will require?

The Natural Resources Defense Council warns that even while attempting to save water, it is a bad idea to rip up greenspace. NRDC promotes "green infrastructure": Green infrastructure encompasses a variety of water management practices, such as vegetated rooftops, roadside plantings, absorbent gardens, and other measures that capture, filter, and reduce stormwater. In doing so, it cuts down on the amount of flooding and reduces the polluted runoff that reaches sewers, streams, rivers, lakes, and oceans. Green infrastructure captures the rain where it falls. It mimics natural hydrological processes and uses natural elements such as soil and plants to turn rainfall into a resource instead of a waste. It also increases the quality and quantity of local water supplies and provides myriad other environmental, economic, and health benefits, often in nature-starved urban areas.

NRDC recommends that cities build rain gardens, planted strips between sidewalks and streets where runoff flows instead of flooding the street. It also points to porous pavement for sidewalks, which "allows rainfall to seep through to underlying layers of pollutant-filtering soil before making its way to groundwater aquifers. Once installation costs are factored in, it can cost as much as 20 percent less up front than conventional pavement systems, and it can be cheaper in the long run to maintain."

Journalists, the summer of 2022 has been jam-packed with floods and storms. The climate experts tell us worse is coming. America is spending billions on new infrastructure. Wouldn't this be the time to adapt in anticipation of our future rather than just react to the past?

As summers get hotter, homeowners may be tempted to ask, "Do we need a pool?" Thankfully, the answer in my household has always been that our kids had friends who had pools and we lived fairly close to a nice public pool, so we avoided digging up the backyard. But now, this being 2022 and all, let me introduce you to the concept of a "plunge pool": a shallow in-ground tub, maybe 10 by 20 feet, that is big enough to wallow in but not big enough to swim in. You avoid the expensive maintenance and cause less harm to tree roots, all for about half the price. The New York Times will show you pictures.

I have a heavy week of travel and teaching ahead, so I will be away from the newsletter for a bit. See you soon-ish. We'll be back soon with a new edition of Covering COVID-19. Sign up here to get it delivered right to your inbox. Al Tompkins is senior faculty at Poynter. He can be reached at email@example.com or on Twitter, @atompkins.
A bug is an unexpected and relatively small defect, fault, flaw, or imperfection in an information system or device. These small defects or faults generally stem from human error in writing the source code or in the design of the system, and they can cause the system to crash or simply not work. Bugs can sometimes end up as security vulnerabilities that need to be patched through updates to the relevant software or device.

The popular origin story for the term dates to 1947, when operators of the room-sized Harvard Mark II computer found a moth trapped in one of its relays. This "bug" caused the machine to malfunction, and the incident helped bring the word "bug" into common computing usage.

What Does A Bug Mean For My SMB?

SMBs need solutions in place to manage bugs. These typically come in the form of a patch management solution that quickly installs software fixes from vendors when they are released to the public. Patches often address important security vulnerabilities. SMBs and MSPs need to plan ahead by creating policies that dictate how quickly to react based upon the criticality of a particular vulnerability; a minimal sketch of an automated version check appears at the end of this entry. For CyberHoot users, the Policy Template library contains a Vulnerability Alert Management Process (VAMP). With this process in place, you have clear guidelines for when to jump, and how high, for a given vulnerability or exposure.

Consider deploying a cloud-based patch management solution to automatically update software whenever and wherever necessary. Most Managed Service Providers leverage one of the big three Remote Monitoring and Management (RMM) solutions (ConnectWise, Datto, and Kaseya) for patching their managed systems. These RMM solutions also provide monitoring and remote access in addition to tested and validated patching services for their clients.

SMB PROTECTIONS BEYOND PATCH MANAGEMENT

In addition to adopting a patch management system, CyberHoot recommends the following best practices to protect individuals and businesses against, and limit damages from, online cyber attacks:
- Adopt a password manager for better personal/work password hygiene
- Require two-factor authentication on any SaaS solution or critical accounts
- Require 14+ character passwords in your governance policies
- Train employees to spot and avoid email-based phishing attacks
- Check that employees can spot and avoid phishing emails by testing them
- Backup data using the 3-2-1 method
- Incorporate the Principle of Least Privilege
- Perform a risk assessment every two to three years
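To make the version-tracking side of patch management concrete, here is a minimal sketch of the kind of check an automated patching job might run. The inventory, the advisory list, and the package names are all hypothetical, and real RMM platforms do this with far more rigor; this only illustrates the idea of comparing installed versions against minimum safe versions:

```python
# Hypothetical data: in practice an RMM or patch-management tool
# would supply both the software inventory and the advisory feed.
INSTALLED = {"openssl": "1.1.1k", "nginx": "1.20.1", "curl": "7.79.0"}
ADVISORIES = {"openssl": "1.1.1n", "curl": "7.79.0"}  # minimum safe versions

def as_tuple(version: str):
    """Split '1.1.1k' into comparable parts: numbers as ints, suffixes as strings."""
    parts = []
    for piece in version.replace("-", ".").split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        suffix = piece[len(digits):]
        parts.append((int(digits) if digits else 0, suffix))
    return tuple(parts)

def outdated(installed: dict, advisories: dict):
    """Yield (package, installed, required) for anything below the safe version."""
    for pkg, required in advisories.items():
        current = installed.get(pkg)
        if current is not None and as_tuple(current) < as_tuple(required):
            yield pkg, current, required

for pkg, current, required in outdated(INSTALLED, ADVISORIES):
    print(f"PATCH NEEDED: {pkg} {current} -> {required}")
```

Run against the invented data above, this flags only openssl; a policy like VAMP would then dictate how quickly that finding must be remediated.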
A rootkit is a hacking program or collection of programs that gives a threat actor remote access to, and control over, a computing device. While there have been legitimate uses for this type of software, such as providing remote end-user support, most rootkits open a backdoor on victim systems to introduce malicious software, including viruses, ransomware, and keyloggers, or to use the system in further security attacks. Rootkits can often hide from detection by antivirus software because they subvert the operating system, and sometimes the underlying firmware, beneath the layer where antivirus runs, which can make them effectively invisible.

Rootkits can be installed in a number of ways, including phishing attacks or social engineering strategies that trick users into giving hackers permission to install malware on the victim system, often granting cybercriminals remote administrative access. Once installed, a rootkit gives the hacker access to and control over almost every aspect of the operating system (OS). Older antivirus programs often struggled to detect rootkits. Today, some antimalware programs can scan for and remove rootkits hiding within a system, but not all. If you suspect a problem with your system, it's best to have it checked by a cybersecurity professional. Know that some rootkits infect the BIOS of your computer, and in the worst cases there may not be any way to remove the rootkit.

What does this mean for an SMB?

Rootkits are designed to be difficult to detect and remove; rootkit developers strive to hide their malware from users and administrators, as well as from many types of security products. Once a rootkit compromises a system, the potential for malicious activity is very high. Typically, rootkit detection requires specific add-ons to antimalware packages, special-purpose "anti-rootkit" scanning software, or booting off trusted external media to analyze the root partition of a hard disk drive for malware. (A minimal sketch of one classic detection idea, the cross-view comparison, appears after the checklist below.)

While anti-malware solutions are great, the best way to keep your business secure is by preventing an infection from happening in the first place. SMBs can improve their chances of preventing rootkit infections and other malware through employee awareness programs and by governing employees with prescriptive policies. Below are CyberHoot's ten steps every SMB should take to protect themselves from cyber attacks:
- Train employees on cybersecurity best practices.
- Phish test employees to keep them vigilant in their inboxes.
- Govern staff with policies to guide behaviors and independent decision-making.
- Adopt a password manager for all employees.
- Enable two-factor authentication on all critical Internet-enabled services.
- Regularly back up all your critical data using the 3-2-1 approach.
- Implement the Principle of Least Privilege. Remove administrator rights from employee local Microsoft Windows workstations.
- Build a robust, properly segmented network at your firm. Network segmentation is to computer networks what watertight compartments are to submarines: it allows damaged sections of a company, or a submarine, to be completely isolated so the whole network, or vessel, doesn't sink.
- Implement email security, including third-party SPAM protection and DNS security for Mail Exchange records (DMARC, DKIM, and SPF), combined with external email banners to give employees a fighting chance.
- Finally, if and when a breach does occur, buy enough Cyber Insurance to cover your recovery from a catastrophic breach event.
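As noted above, here is a minimal sketch of the cross-view comparison idea that many anti-rootkit scanners build on: enumerate running processes through two independent interfaces and flag anything visible in one view but missing from the other. This Linux-only illustration is a teaching example, not a production detector; it can expose a user-mode rootkit that hooks the ps utility, while a kernel-mode rootkit that filters /proc itself would evade it:

```python
import subprocess
from pathlib import Path

def pids_from_proc() -> set:
    """View 1: enumerate numeric entries directly under /proc."""
    return {int(p.name) for p in Path("/proc").iterdir() if p.name.isdigit()}

def pids_from_ps() -> set:
    """View 2: ask the ps utility, which a user-mode rootkit may have hooked."""
    out = subprocess.run(["ps", "-e", "-o", "pid="],
                         capture_output=True, text=True, check=True)
    return {int(tok) for tok in out.stdout.split() if tok.isdigit()}

def hidden_candidates() -> set:
    """PIDs visible in /proc but missing from ps output are suspicious.
    Short-lived processes can race between the two snapshots, so real
    tools take repeated samples before raising an alert."""
    return pids_from_proc() - pids_from_ps()

if __name__ == "__main__":
    suspects = hidden_candidates()
    if suspects:
        print("Possible hidden processes:", sorted(suspects))
    else:
        print("No discrepancy between /proc and ps views.")
```

The design point is independence: the more the two views differ in how they reach the kernel, the harder it is for a single hook to lie to both of them consistently.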
In this article, we will cover:
1. What is a Business Glossary
2. What is a Data Dictionary
3. What is a Data Catalog
4. Adopting Best Practices for Data Initiatives

Data is a critical asset for any business. We know that. It doesn't matter the size of an organization (large, medium, or small); its data is essential to making business decisions and to remaining competitive. We also know that as the volume of data continues to grow, companies need to make managing their data a priority if they want to understand what has happened in the business, answer questions about why it happened, and make informed decisions going forward.

Data management needs to be part of the overall business strategy so that everyone in the organization understands data and uses it in the same way. But where do you start? There are three tools we recommend that will help keep you organized and will enhance your data management strategy: a business glossary, a data dictionary, and a data catalog. All three can help an organization better manage its data. Although they are related, they are in fact very different tools that your organization can use for different purposes. In this blog, we will define all three and discuss what's needed to build and govern each, as well as pros and cons to consider.

What is a Business Glossary?

A business glossary contains concepts and definitions of business terms frequently used in day-to-day activities within an organization, across all business functions, and is meant to be the single authoritative source for commonly used terms for all business users. It is the entry point for any organization with a data initiative in play. A business glossary is the red thread that connects business terms and concepts to policies, business rules, and associated terms within the organization. When creating a business glossary, you should have:
- Cross-functional input, as well as consensus/approval for agreed-upon understandings of key business concepts and business terms.
- Accessibility to common business terms (words, phrases, acronyms, or business concepts) across the organization so that everyone speaks the same business language.
- Cross-referenced terms and their relationships. This provides context and helps business users easily identify relationships between terms.

Although you do not need a data governance program in place to build, use, and maintain a business glossary, you should still have a governance strategy for the glossary itself. In order to reach cross-functional consensus, you need stakeholders from all business functions whose responsibility it is to meet regularly to discuss terms and concepts that might overlap departments. This allows for approval and documentation of definitions, which is important, especially if two departments define the same metric differently. It's fine to have two different definitions so long as the stakeholders have verified that it is an acceptable deviation, and it is documented and made accessible for the business users who need it. In some cases, you may have a tie-breaking decider, such as a CEO, choosing one definition over the other. Once the business term or concept is defined and approved, the designated stakeholders need to ensure that definition is used consistently throughout the organization.
A business glossary is a key artifact for any data-driven organization and will help in setting up future data initiatives as the company's analytics needs mature. Here's what to consider when creating a business glossary:
- Pro: You don't need to invest in new technology to create a business glossary. You can use something as simple as Microsoft Excel or Google Sheets to set up the glossary and place it in SharePoint or Google Workspace to provide access to your business users.
- Pro: It is the lexicon of business language, which allows for cross-functional collaboration among your business users.
- Pro: It can be used as an onboarding and coaching tool for new employees within your organization.
- Con: If not implemented correctly, it could lead to misunderstandings, with emphasis on bureaucracy, as well as introduce bias into your business language.

As stated earlier, a business glossary is the starting point for any data initiative, but it is also a prerequisite to building a data dictionary.

[Image caption: Alation's Business Glossary enables the creation of definitions, policies, rules, and KPIs through a rich, user-friendly interface. A business glossary can be initiated with Microsoft Excel or Google Sheets to get the process started and ensure that it's working properly. Photo: Alation]

What is a Data Dictionary?

A data dictionary is a more technical and thorough documentation of data and its metadata. It consists of detailed definitions and descriptions of data dimension and measure names (in databases, data tables, etc.), their calculations, their types, and related information. Whereas a business glossary provides definitions for terms and concepts, a data dictionary provides information on the type of data you have and everything related to it. This information is most commonly useful for technical users who work on the back end of your systems and applications, so that they can more easily design a relational database or data structure to meet business requirements. When creating a data dictionary, you should have:
- A business glossary already in place, along with a governance strategy to ensure your business users are using it.
- A data integration tool that will automate the process of building and maintaining the data dictionary; a minimal harvesting sketch follows this list. The effort required to do this manually is not worth the value you will get out of it. Take advantage of tools with built-in capabilities, such as dbt, where you can enter descriptions as you program and they are automatically documented to create a data dictionary. dbt also includes an automated data impact and lineage graph. Many tools have these built-in capabilities, so check whether your existing tool does, or look for one that fits your purposes.
- Attributes such as data type, size, allowed values, default values, and constraints, plus any other relevant technical metadata, included in your data dictionary.
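As a hedged illustration of that automation point, here is a minimal sketch that harvests column-level metadata from a database into data-dictionary rows. It uses Python's built-in sqlite3 module and an invented orders table purely for demonstration; dbt and commercial tools do the equivalent against real warehouses:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical table standing in for a real warehouse schema.
conn.execute("""
    CREATE TABLE orders (
        order_id   INTEGER PRIMARY KEY,
        customer   TEXT    NOT NULL,
        amount_usd REAL    DEFAULT 0.0,
        created_at TEXT
    )
""")

def data_dictionary(conn):
    """Harvest table and column metadata into data-dictionary rows."""
    rows = []
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"
    ).fetchall()
    for (table,) in tables:
        # PRAGMA table_info returns (cid, name, type, notnull, default, pk).
        for cid, name, col_type, notnull, default, pk in conn.execute(
            f"PRAGMA table_info({table})"
        ):
            rows.append({
                "table": table,
                "column": name,
                "type": col_type,
                "nullable": not notnull,
                "default": default,
                "primary_key": bool(pk),
                "description": "",  # filled in by stewards / glossary links
            })
    return rows

for entry in data_dictionary(conn):
    print(entry)
```

The empty description field is the point of intersection with the business glossary: automation captures the technical metadata, while stewards supply the agreed-upon meaning.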
Taking the time to do this upfront and making your data dictionary more user-friendly will help with data quality across the organization.

[Video: the CTO of iFit discusses how having a data dictionary empowered their data teams and removed data engineering bottlenecks.]

Unlike a business glossary, a data dictionary will likely require a more formal data governance program, with a governance committee made up of individuals from both the business and IT sides. The business team should be responsible for requesting changes to a metric's definition, while the IT team should be responsible for implementing the change and communicating it to the organization. Establishing lines of communication between the two groups will promote trust. Here's what to consider when creating a data dictionary:
- Pro: A data dictionary ultimately serves as a lexicon of business language for technical teams across the organization and helps with metadata management, allowing them to do their jobs more effectively. The technical metadata of each data element helps clarify business requirements for the IT team working on the back end of systems or applications.
- Pro: A data dictionary helps improve master data management, ensure data quality across the organization, and integrate data from multiple sources more efficiently. Depending on the tool you use, you can enter a definition once and use it for multiple applications.
- Con: Although a data dictionary helps reduce the overall time and cost of data initiatives when implemented properly, it does require an extra step for data integration developers. When building code for a new job that integrates two data sources, they will need to either look to the business glossary for definitions or work with the data governance committee to get the definitions and add them to the code. If the data dictionary is not automated, the developer will have to manually document the new definition in the data dictionary in addition to adding it to the code.

A data dictionary is a subset of a business glossary, but both are required to build a data catalog.

[Image caption: Whether your data is stored in a data warehouse, data lake, or lakehouse, running dbt docs will propagate table and column definitions to create an automated data dictionary. Source: dbt]

What is a Data Catalog?

A data catalog is the pathway, or bridge, between a business glossary and a data dictionary. It is an organized inventory of an organization's data assets that informs users, both business and technical, about the datasets available on a topic and helps them locate those datasets quickly. Users have a clear, accessible view of what data the organization has, where it came from, where it is located now, who has access to it, and what risks or sensitivities may be involved, all in one central location. When creating a data catalog, you should have:
- A business glossary and a data dictionary already in place, plus a data governance committee to ensure your business and technical users are using both.
- A tool that can automate the process. A data catalog should not be set up manually; you will need a tool to set it up and to maintain it. There are many tools to choose from, including Alation, Alteryx, and Qlik, to name a few. You may also have cataloging capabilities built into existing tools, whether in your source system or a business intelligence (BI) tool.
- Subject-matter experts.
Because a data catalog is a comprehensive artifact built for both business and technical users, you will need individuals who have competencies in both. In terms of governance, you should follow the same structure as with a data dictionary. However, you should also have another committee, a subset of individuals with both technical and business competencies, that works alongside the data governance committee set up for the data dictionary. The best way to maintain a data catalog is to integrate it as naturally and intuitively as possible into existing processes; for example, whenever a new data source is added, updating the data catalog should be part of whatever process exists for doing that job.

Here's what to consider when creating a data catalog:
- Pro: A data catalog supports regulatory compliance by providing quick and easy access to where certain data is stored and who uses it.
- Pro: It fosters a data culture throughout the organization by providing data and content for self-service applications. It allows users to get what they want when they need it, and to trust that it is accurate because of the transparency a data catalog provides.
- Con: Although a data catalog helps reduce risk and improves data efficiency and analysis, it requires skill to develop. Individuals who can do this are rare and in high demand, as creating a good data catalog requires both business and technical abilities.

A data catalog is an organized inventory of data assets and provides knowledge of all aspects of metadata. Users can access a data catalog without access to the data asset itself. This saves time and improves employee productivity, as well as promoting transparency and trust in the data.

Adopting Best Practices for Data Initiatives

Although the terms business glossary, data dictionary, and data catalog sound similar, they play very different roles within your organization. Each is valuable, but not strictly necessary for every organization, at least not right away. It depends on where you are with your analytics maturity and how much time and resources you have to dedicate to building and maintaining each artifact. As you consider your options, start with:
- Building a Business Glossary: This is the easiest way to get started, and it is also a prerequisite for any data initiative you have. Once you create a business glossary and take the necessary steps to maintain it, you will be one step closer to building a data-driven culture within your organization and to scaling up with your data and analytics maturity.
- Examine Your Existing Tools: Before you make any new technology purchases, take the time to see what capabilities are built into your existing tools. If you find that you have data dictionary capabilities, use them and start building them into your data integration processes to update and maintain.
- Promote a Data Culture in Your Organization: The key thing for any of these programs to work is the willingness of the organization to do it. Just because you ask business or technical users to adopt a program doesn't mean they fully understand and endorse it. The more you encourage a data culture and communicate the importance behind it, the more natural it becomes for everyone to get on board.
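As a closing illustration of the catalog concept, here is a minimal sketch of what one catalog entry might capture. The field names and values are invented for the example rather than taken from any particular product's schema; the key idea is that the entry holds metadata about the asset, not the data itself:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class CatalogEntry:
    """One asset in a minimal data catalog: metadata only, no data access."""
    name: str
    description: str            # pulled from the business glossary
    location: str               # where the asset physically lives
    owner: str                  # accountable data steward
    source_systems: list        # lineage: where the data came from
    sensitivity: str            # e.g. "public", "internal", "PII"
    tags: list = field(default_factory=list)

entry = CatalogEntry(
    name="orders",
    description="One row per confirmed customer order (see glossary: 'order').",
    location="warehouse.sales.orders",
    owner="sales-data-team@example.com",
    source_systems=["webshop_db", "pos_exports"],
    sensitivity="internal",
    tags=["sales", "finance"],
)
print(asdict(entry))
```

Notice how the entry draws its description from the glossary and its structural details from the data dictionary, which is exactly the bridge role described above.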
When you think of a hacker, you always get the image of some scruffy young guy (normally a teenager) stuck away in a dark, untidy room. His room, as you can no doubt imagine, will be strewn with half-eaten pizzas and empty drink cans, and the walls will be decorated with posters of 'underground' bands. This, of course, is the stereotypical image, and like all good stereotypes, it is false. Many hackers don ties and work in plush offices, although some don't; many hackers hide out in dark rooms, others work from their university... But they do all have one thing in common: the computer.

But what kind of computer do they use? In 99.9 percent of cases it is a PC. The processor is the same as in almost all other PCs around the world (Intel, AMD or Motorola). The operating system may be a commercial release or a free version, but it will be widely distributed (Windows, Linux or OS X). With these computers they launch attacks on other similar systems, i.e., simple desktop PCs or corporate servers, but with the same basic equipment. It is true that they also launch attacks against systems on platforms that don't coincide, such as large servers belonging to governments, universities or multinational companies. But remember, these attacks are few and far between, and are carried out by a very small group of hackers.

Hackers today, or at least those that call themselves hackers, focus their 'research' on the security of the most vulnerable and easy-to-attack systems: home user PCs or systems in small and mid-sized businesses. It is easier to be successful against weak targets, as there is less need for in-depth IT knowledge. Exploits for many systems can be found on many web pages, along with applications for carrying out intrusions; all you need to do is run a program. However, these self-proclaimed IT security researchers conspicuously avoid targets that could really demonstrate their knowledge and keenness for research.

I'm talking about the new IT technologies being implemented around the globe. Have any of these hackers ever tried tampering with a grid system? Have they ever managed, even just for a moment, to use the calculation time that grid-connected computers offer to researchers around the world? I guess not, as here they won't be able to steal any personal data or find any current accounts they can use for financial gain. Neither will they be able to sell the hijacked computers in order to send spam or launch denial-of-service attacks. These systems are designed for scientific research; they are not kids' Christmas presents used to download pirate software from the Internet.

Another possible target for hackers could be the so-called supercomputers. These systems would really be a challenge, as their processing capacity is the spearhead of today's IT technology; these systems really ought to be in the sights of hackers. There are now many supercomputers installed around the world, such as "Mare Nostrum", "Earth Simulator", "Blue Gene/L" or "Columbia". All of them are dedicated to tasks that do not offer, at least on paper, any direct financial benefit to an attacker, simply the spiritual pleasure of having triumphed over the security barriers. These systems are used for researching the human genome, protein folding, medicines, climate change... basically the highest levels of scientific investigation. In these systems there are no credit cards, nor users entering their bank details on websites.
Moreover, the creators of these systems have a thoroughly different concept of security from hackers of desktops and bargain-basement computers. Each teraflop is highly expensive, and therefore monitoring of each and every process in execution is extremely rigorous. Each wasted clock cycle represents a financial loss for system owners, so anyone even dreaming that a process could be hidden on such a system is, frankly, on another planet. Hacking today is often simply about making money or crashing systems which, half the time, crash anyway whenever their owners install a new game. So this is hacking? This is just crass computer violence.
Keeping Us Digitally Connected During the COVID-19 Crisis

For years, internet and mobile network operators have been improving their networks to increase coverage and bandwidth in preparation for natural disasters, terrorist attacks, medical emergencies, cyberattacks, and other public safety incidents, like the coronavirus (COVID-19) pandemic. Although the worldwide outbreak has strained networks, the convergence of technology has improved operators' ability to provide continuous service through traffic management, alternative technologies for long-distance backhaul routing, and security.

As a result of COVID-19, we're in a time of unprecedented speed of change on a global level. Aggressive moves have been implemented to help manage bandwidth and ensure we remain digitally connected, despite how the ways we work and live have changed overnight. Companies across the globe have shut down their offices, moving many employees to work from home. Schools have closed, forcing students to study remotely. Businesses and restaurants have closed their doors in many countries to allow only limited takeaway and delivery, varying by country and region.

Can the internet and networks handle the strain? The new situation produces a dramatic shift in internet and wireless usage patterns, locations, and available bandwidth. The original wireless and fixed coverage and usage maps are turned upside down by remote work, increased video usage to maintain human connections (which is what the platforms were designed for), and heightened video game usage as a result of more free time. Industry data shows internet traffic during peak hours is up more than 41% this month due to working from home in response to COVID-19, according to OpenVault, which measures network operator traffic. Fixed and wireless networks assumed coverage, density, and usage maps based on fixed office and school locations. Now, with millions of people changing locations, cell towers, and usage demands all at once, the people relying on those networks are feeling the effects.

Most providers have been accommodating the changing bandwidth demand but, of course, this level of change is unprecedented. Several large U.S. internet and wireless providers have eased data limits for customers, allotted free hot spots, increased bandwidth, and expanded discounted services to help families in financial need. In several European countries, telecom providers gave users guidance on preserving bandwidth and network usage in order to think about the bigger picture and maintain network continuity. David Belson, senior director of internet research and analysis at the Internet Society, believes the internet will be able to absorb the increased traffic.

Automation-enabled bandwidth continuity for all

Intelligent automation is one way to help automatically monitor networks to identify and manage bandwidth configuration, peak traffic, and usage patterns. Through systematic reporting by Robotic Process Automation (RPA), it is possible to pinpoint and resolve issues more quickly. Further complexity can be managed by mapping business and school closures and disease hot spots to new business needs and communities. In some cases, blanket access, such as providing rural communities with improved bandwidth for tele-education, is a necessity. In other cases, timing the access is the key, such as accommodating a video peak for business in morning hours.
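As a hedged illustration of the kind of check such monitoring automation might run, here is a minimal sketch that derives link utilization from two interface byte-counter samples and flags congestion. The counter values, link names, and the 80% threshold are invented for the example; a real system would pull counters via SNMP or a vendor API:

```python
# Two samples of each interface's transmitted-bytes counter, taken
# POLL_SECONDS apart (e.g. via SNMP ifHCOutOctets in a real deployment).
POLL_SECONDS = 60
LINK_CAPACITY_BPS = 1_000_000_000  # assume a 1 Gbps link
ALERT_THRESHOLD = 0.80             # flag sustained use above 80%

def utilization(bytes_before: int, bytes_after: int) -> float:
    """Fraction of link capacity used over the polling interval."""
    bits_sent = (bytes_after - bytes_before) * 8
    return bits_sent / (POLL_SECONDS * LINK_CAPACITY_BPS)

# Invented counter readings for illustration.
samples = {
    "uplink-1": (9_120_000_000, 16_020_000_000),
    "uplink-2": (4_500_000_000, 5_100_000_000),
}

for link, (before, after) in samples.items():
    u = utilization(before, after)
    status = "CONGESTED - consider rerouting" if u > ALERT_THRESHOLD else "ok"
    print(f"{link}: {u:.0%} {status}")
```

With the invented numbers above, uplink-1 reports about 92% utilization and gets flagged, which is the signal an automation workflow would use to open a ticket or trigger a reroute.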
Software bots can also identify problem areas and the need to reroute network traffic. In many cases, telecommunications companies have been able to remotely pinpoint, diagnose, troubleshoot, and resolve issues. This avoids sending a technician on-site and the expense of that maintenance. It also reduces customer friction through fewer calls or queries to the contact center. Many mobile network operators have temporarily closed their retail stores. Thankfully, many activities are possible online through customer care or mobile apps. Operators are also keeping Wi-Fi hot spots available where possible and waiving disconnections where the inability to pay is linked to COVID-19. In the U.S., for example, several carriers are following the Federal Communications Commission's "Keep Americans Connected" pledge in public-private partnerships.

Empowering humans to solve customer problems

The disruptions worldwide lead to more calls and chats to customer care and the contact center, the need for additional bandwidth at home, and so on. A prime area of successful automation deployment is the contact center, with Digital Workers working alongside human workers to handle repetitive manual tasks and dramatically reduce call time. Through attended automation, humans are able to be more available to customers while subtasks such as routing for signature, account validation, and credit checks run simultaneously in the background.

In the end, we can only plan so much for the unknown. The work the tech community has done over the last several years in data advances, cyber protection, increased bandwidth, and capacity planning has clearly empowered a temporary remote workforce and school system. The current crisis will no doubt further our advances toward an even more digitally connected society.
Fiber-optic cable has two propagation modes: multimode and single mode. They perform differently with respect to both attenuation and time dispersion, with single-mode fiber providing much better performance and lower attenuation. To understand the difference between these types, you must understand what is meant by "mode of propagation."

Light has a dual nature and can be viewed as either a wave phenomenon or a particle phenomenon that includes photons and solitons. Solitons are special localized waves that exhibit particle-like behavior. For this discussion, let's consider the wave mechanics of light. When a light wave is guided down a fiber-optic cable, it exhibits certain modes. These are variations in the intensity of the light, both over the cable cross section and down the cable length, and they are numbered from lowest to highest. In a very simple sense, each of these modes can be thought of as a ray of light. For a given fiber-optic cable, the number of modes that exist depends on the dimensions of the cable and the variation of the indices of refraction of both core and cladding across the cross section. The various mode types include multimode step index, single-mode step index, single-mode dual-step index, and multimode graded index.

Multimode Step Index

Consider the illustration in Figure 3-8, which corresponds to multimode propagation with a refractive index profile called step index. As you can see, the diameter of the core is fairly large relative to the cladding, and there is a sharp discontinuity in the index of refraction as you go from core to cladding. As a result, when light enters the fiber-optic cable on the left, it propagates down toward the right in multiple rays, or multiple modes; this yields the designation multimode. As indicated, the lowest-order mode travels straight down the center, along the cylindrical axis of the core. The higher modes, represented by rays, bounce back and forth as they travel down the cable to the right. The higher the mode, the more bounces per unit distance.

Figure 3-8: Multimode Step Index

The illustration also shows the input pulse and the resulting output pulse. Note that the output pulse is significantly attenuated relative to the input pulse, and it suffers significant time dispersion. The reasons are as follows. The higher-order modes, the bouncing rays, tend to leak into the cladding as they propagate down the fiber-optic cable, losing some of their energy as heat; this results in an attenuated output signal. The input pulse is also split among the different rays that travel down the fiber-optic cable. The bouncing rays and the lowest-order mode traveling down the center axis traverse paths of different lengths from input to output, so they do not all reach the far end of the fiber at the same time. When the output pulse is reconstructed from these separate ray components, the result is time dispersion, known as modal dispersion.

Fiber-optic cable that exhibits multimode propagation with a step index profile is thereby characterized as having higher attenuation and more time dispersion than the other propagation candidates. However, it is also the least costly and is widely used in the premises environment. It is especially attractive for link lengths up to 5 kilometers, and it can be fabricated from glass, plastic, or PCS. Usually, MMF core diameters are 50 or 62.5 µm.
Typically, 50-µm MMF propagates only about 300 modes, as compared to 1100 modes for 62.5-µm fiber (a worked mode-count sketch appears at the end of this section). The 50-µm MMF supports 1 Gbps at 850-nm wavelengths for distances up to 1 kilometer, versus 275 meters for 62.5-µm MMF. Furthermore, 50-µm MMF supports 10 Gbps at 850-nm wavelengths for distances up to 300 meters, versus 33 meters for 62.5-µm MMF. This makes 50-µm MMF the fiber of choice for low-cost, high-bandwidth campus and multitenant unit (MTU) applications.

Single-Mode Step Index

Single-mode propagation is illustrated in Figure 3-9, which corresponds to single-mode propagation with a refractive index profile called step index. As the figure shows, the diameter of the core is fairly small relative to the cladding. Because of this, when light enters the fiber-optic cable on the left, it propagates down toward the right in just a single ray, a single mode, which is the lowest-order mode. In extremely simple terms, this lowest-order mode is confined to a thin cylinder around the axis of the core, and the higher-order modes are absent.

Figure 3-9: Single-Mode Step Index

Consequently, little or no energy is lost to heat through the leakage of higher modes into the cladding, because they are not present; all energy is confined to the single, lowest-order mode. Because the higher-order mode energy is not lost, attenuation is not significant. Also, because the input signal is confined to a single ray path, that of the lowest-order mode, very little dispersion occurs.

Single-mode propagation exists only above a certain specific wavelength called the cutoff wavelength, the shortest operating wavelength at which an SMF propagates only the fundamental mode. At this wavelength, the second-order mode becomes lossy and radiates out of the fiber core. As the operating wavelength grows longer than the cutoff wavelength, the fundamental mode becomes increasingly lossy: the more the operating wavelength exceeds the cutoff wavelength, the more power is transmitted through the fiber cladding. As the fundamental mode extends into the cladding material, it becomes increasingly sensitive to bending loss.

Comparing the output pulse and the input pulse, note that there is little attenuation and time dispersion. Lower dispersion results in higher bandwidth. However, single-mode fiber-optic cable is also the most costly in the premises environment. For this reason, it has been used more with metropolitan- and wide-area networks than with premises data communications, though it has been getting increased attention as local-area networks are extended over corporate campuses. The core diameter for this type of fiber-optic cable is exceedingly small, ranging from 8 to 10 microns, and the standard cladding diameter is 125 microns.

SMF step index fibers are manufactured using the outside vapor deposition (OVD) process. OVD fibers are made of a core and cladding, each with slightly different compositions and refractive indices. The OVD process produces consistent, controlled fiber profiles and geometry. Fiber consistency is important for producing seamless spliced interconnections using fiber-optic cable from different manufacturers. Single-mode fiber-optic cable is fabricated from silica glass; because the core is so thin, plastic cannot be used to fabricate single-mode fiber-optic cable. Note that not all SMFs use a step index profile.
Some SMF variants use a graded index method of construction to optimize performance at a particular wavelength or transmission band.

Single-Mode Dual-Step Index

These fibers are single-mode and have a dual cladding; depressed-clad fiber is also known as doubly clad fiber. Figure 3-10 corresponds to single-mode propagation with a refractive index profile called dual-step index. A depressed-clad fiber has the advantage of very low macrobending losses. It also has two zero-dispersion points and low dispersion over a much wider wavelength range than a singly clad fiber. SMF depressed-clad fibers are manufactured using the inside vapor deposition (IVD) process. The IVD, or modified chemical vapor deposition (MCVD), process produces what is called depressed-clad fiber because of the shape of its refractive index profile, with the index of the glass adjacent to the core depressed. Each cladding has a refractive index that is lower than that of the core, and the inner cladding has a lower refractive index than the outer cladding.

Figure 3-10: Single-Mode Dual-Step Index

Multimode Graded Index

Multimode graded index fiber has a higher refractive index in the core that gradually decreases as it extends from the cylindrical axis outward; the core and cladding are essentially a single graded unit. Consider the illustration in Figure 3-11, which corresponds to multimode propagation with a refractive index profile called graded index. Here the variation of the index of refraction is gradual as it extends out from the axis of the core through the core to the cladding, with no sharp discontinuity in the indices of refraction between core and cladding. The core here is much larger than in the single-mode step index case previously discussed.

Multimode propagation exists with a graded index. As illustrated, however, the paths of the higher-order modes are somewhat confined; they appear to follow a series of ellipses. Because the higher-mode paths are confined, the attenuation through them due to leakage is more limited than with a step index. The time dispersion is also more limited than with a step index; therefore, attenuation and time dispersion are present, but limited.

Figure 3-11: Multimode Graded Index

In Figure 3-11, the input pulse is shown on the left and the resulting output pulse on the right. Comparing the two, note that there is some attenuation and time dispersion, but not nearly as much as with multimode step index fiber-optic cable. Fiber-optic cable that exhibits multimode propagation with a graded index profile is therefore characterized as having levels of attenuation and time dispersion that fall between the other two candidates, and its cost likewise falls somewhere between them. Popular graded index fiber-optic cables have core diameters of 50, 62.5, and 85 microns, with a cladding diameter of 125 microns, the same as single-mode fiber-optic cables. This type of fiber-optic cable is extremely popular in premises data communications applications. In particular, the 62.5/125 fiber-optic cable is the most popular and most widely used in these applications. Glass is generally used to fabricate multimode graded index fiber-optic cable.
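As promised in the multimode discussion, the mode counts quoted above can be estimated from the fiber's normalized frequency, or V number: V = (2*pi*a/lambda)*NA, where a is the core radius, lambda the wavelength, and NA the numerical aperture. A step-index fiber is single-mode when V < 2.405; a step-index multimode fiber carries roughly V^2/2 modes, and a graded-index fiber roughly V^2/4. Here is a minimal sketch using assumed, typical NA values (about 0.20 for 50-µm and 0.275 for 62.5-µm graded-index fiber, and an effective NA of about 0.115 for standard SMF; actual datasheet values vary), which roughly reproduces the ~300 and ~1100 mode counts cited earlier:

```python
import math

def v_number(core_diameter_um: float, wavelength_nm: float, na: float) -> float:
    """Normalized frequency V = (2 * pi * a / lambda) * NA."""
    a_um = core_diameter_um / 2.0        # core radius in microns
    lam_um = wavelength_nm / 1000.0      # wavelength in microns
    return 2.0 * math.pi * a_um * na / lam_um

# (name, core diameter in um, assumed NA, graded-index profile?)
fibers = [
    ("50-um graded-index MMF",   50.0, 0.200, True),
    ("62.5-um graded-index MMF", 62.5, 0.275, True),
    ("8.2-um step-index SMF",     8.2, 0.115, False),
]

for name, diameter, na, graded in fibers:
    for wavelength in (850, 1310):
        v = v_number(diameter, wavelength, na)
        if v < 2.405:
            verdict = "single-mode (V < 2.405)"
        else:
            modes = v * v / 4 if graded else v * v / 2
            verdict = f"~{modes:.0f} modes"
        print(f"{name} @ {wavelength} nm: V = {v:5.1f} -> {verdict}")
```

Note how the SMF row only satisfies V < 2.405 at 1310 nm, not at 850 nm, which is the cutoff-wavelength behavior described in the single-mode discussion: the same fiber is multimode below its cutoff wavelength.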
How Data Science Can Add Value to a Business
June 13, 2022 | Posted by: Aanchal Iyer | Category: Data Science

As modern technology enables the creation and storage of ever-increasing quantities of information, data volumes have exploded; more than 90% of the world's data has been created in the last two years alone. A treasure, right? It depends on how this data is used and managed. The huge amount of data that is continuously being collated and stored can bring excellent benefits to societies and organizations around the world, but only if we can interpret it. This is where Data Science can help.

What is Data Science?

Data Science refers to the mining of meaningful information from raw data. The discipline combines various fields, including scientific methods, statistics, and data analysis. Data scientists combine a large variety of skills to analyze data collected from the web, customers, smartphones, sensors, and other sources. Data Science reveals trends and creates intelligence that organizations can use to make smarter decisions, predict change, and, of course, create more innovative services and products. It enables machine learning (ML) models to learn from the huge amounts of data available to them.

Data Science as a Service (DSaaS) is a complete package of data science capabilities and resources. This includes algorithms, people, data, and a cloud-based platform that allows organizations to be data-driven. Speed-to-value here is important, as many organizations struggle to start with data science because they are fearful of "bad data." Every organization has some bad data, as data can never be perfect. DSaaS helps get a business' data to work as-is and, in the process of that work, cleans it and makes it Artificial Intelligence (AI) ready. Once data is AI-ready, it can quickly deliver value. Another advantage of DSaaS is that it utilizes the scalability of the cloud, just like other software-as-a-service offerings. Using the cloud allows organizations to do more in terms of AI and predictive data science.

How is Data Science useful in different sectors?

Data science can be applied to many different sectors. Here are some examples:

Healthcare

The healthcare sector receives great benefits from Data Science applications.
- Medical image analysis: Google has designed the LYNA tool, which detects breast cancer tumors that metastasize to nearby lymph nodes.
- Drug development: Data science applications and ML algorithms simplify and shorten the process of drug development.
- Virtual assistance for patients and customer care: AI-powered mobile applications can offer basic healthcare, typically as chatbots.

Logistics and Transportation

The most significant advance that data science has brought to the field of transportation is the introduction of autonomous cars. It offers safer driving environments, enhanced vehicle performance, greater autonomy for the driver, and much more.

UPS: Optimizing Package Routing

UPS uses data science to improve package transportation. Network Planning Tools (NPT) is a platform that incorporates Artificial Intelligence and Machine Learning (ML) to solve logistics challenges.

UBER EATS: Home Delivery

Data scientists at Uber Eats work to ensure hot food is delivered quickly. Making that happen requires ML, advanced statistical modelling, and even on-staff meteorologists.

Applying DSaaS throughout an organization can add value in multiple ways across decision making, training, recruiting, marketing, and more.
Data analysis leads to well-informed decisions that enable an organization to grow in smart, strategic ways. Taking the time to apply data science is an investment that every organization should find valuable.
The scope of artificial intelligence (AI) is far greater than most people realize. While even tech-averse individuals are increasingly aware of machine learning and natural language processing, many have yet to discover the exciting world of computer vision. A high-level technology and area of interdisciplinary study, computer vision holds the potential to transform a variety of industries, as well as habits we currently take for granted. Keep reading to learn more about its development and its implications for the future of artificial intelligence.

What Is Computer Vision and How Does it Work?

The human vision system is surprisingly complicated, to the point that it has taken considerable effort to create any semblance of it in computer form. However, we have finally accomplished what once seemed impossible: computers can now view digital images or watch videos to glean high-level insights. This ability is known as computer vision.

The quest for computer vision spanned several decades. The concept began to gain traction in the 1950s, when early experiments sought to determine whether it was a viable addition to the growing tech sector. It wasn't long before devices with computer vision could tell the difference between typed and handwritten text.

Today, computer vision involves processing at the pixel level. Computers are exposed to a vast amount of visual data to facilitate pattern recognition. They then use algorithms to extract relevant information and determine how to move forward with such insights.
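As a toy illustration of pixel-level pattern recognition (far simpler than any production vision system, and invented here purely for demonstration), the following sketch learns one average pixel pattern per class from synthetic 8x8 images and classifies a new image by the nearest pattern:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": 8x8 grids. Class 0 is bright on the left half, class 1 on the right.
def make_image(klass):
    img = rng.normal(0.2, 0.05, (8, 8))
    if klass == 0:
        img[:, :4] += 0.6
    else:
        img[:, 4:] += 0.6
    return img.ravel()  # flatten to a pixel vector

X = np.stack([make_image(k) for k in (0, 1) for _ in range(50)])
y = np.array([0] * 50 + [1] * 50)

# "Training": compute one centroid (average pixel pattern) per class
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

# "Inference": assign a new image to the class with the nearest centroid
test = make_image(1)
pred = np.argmin(np.linalg.norm(centroids - test, axis=1))
print(f"predicted class: {pred}")  # expected: 1
```

Real computer vision systems replace the centroids with deep neural networks, but the workflow, learning patterns from many labeled images and matching new pixels against them, is the same in spirit.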
How Can Computer Vision Be Used in Everyday Life?

Computer vision is no longer the stuff of science fiction. Already, this exciting technology is being implemented in several areas of daily life; you may already take advantage of technologies powered by computer vision without even knowing it. The following are a few of the most intriguing applications already present in several industries and around the house.

At one time, keeping digital photos organized was a hassle, with users forced to enter dozens of tags manually. Today, however, many photo storage solutions integrate computer vision and geotagging to help users keep track of images based not only on who is featured but also on where, and under what circumstances, they were taken. Of course, photo-oriented computer vision is valuable for individual users with thousands of photos. However, it's even more helpful for businesses with a huge volume of image-based data and limited time to sort through visual media manually.

In the world of eCommerce, computer vision can optimize digital marketing and user experience. For example, object detection solutions can be deployed to identify objects within images. This, in turn, allows customers to easily shop for multiple products highlighted within a single picture. Likewise, computer vision allows for more accurate image classification in eCommerce listings. This is crucial for many online businesses that may rely on precise product categorization to ensure that customers find the right items.

Computer vision can also play a significant role in shopping at physical locations. For example, this technology powers Amazon's checkout-free grocery store. This unique retailer aims for a "Just Walk Out" experience, in which visitors can simply grab what they need off the shelf and leave without waiting in line or fumbling with cash.

Banks are always on the hunt for new ways to increase consumer security while also making everyday transactions more convenient. Computer vision lies behind many of these efforts. Select banks allow users to make ATM withdrawals with help from facial recognition technology. Likewise, computer vision can aid Know Your Customer (KYC) processes by linking digital images. For example, customers can submit both selfies and photo identification to verify their identity.

Internet of Things

The integration of computer vision with the Internet of Things (IoT) allows for the efficient gathering and analysis of huge volumes of data to make users' lives more efficient and convenient. As such, computer vision can make a smart home that much more effective. Already, thermostat systems observe the patterns of building inhabitants to determine when certain features or settings may be required and when energy can be conserved. Even more impressive? Samsung has developed an advanced system designed to identify items stored in refrigerators. This solution's AI-powered View Inside camera lets users know if new items have been added or if existing products have been depleted. Integration with Smart Recipes allows for optimized shopping lists and recipes that take current ingredient availability into account.

NerdsToGo: Your Resource for All Things AI

If you're ready to implement computer vision or other promising AI technologies into your business functions or daily life, look to the experts at NerdsToGo for help. Our certified Nerds are passionate about all things technology, and they make a point of remaining at the cutting edge of the growing world of AI.
In today's IT environments, there are many different concepts that businesses must keep up with and integrate into their own IT strategy. One key concept that all businesses should be aware of is data resilience. Here's what you should know about data resilience and how to incorporate it into your overall IT strategy.

What is data resilience?

Put simply, data resilience refers to the durability of an IT system when faced with potential issues. Data is resilient when tools and systems can automatically detect and mitigate problems that could result in data loss; that also includes restoring compromised data. Data resilience is also intimately tied to the IT concept of high availability, in which systems are designed to function for as long as possible without failure.

Though replication is an important part of an overall data resilience strategy, it is important not to confuse resilience with data replication. Data archiving, for example, can be useful for creating a copy of critical data, which itself is a critical component of having resilient data. However, backing up or archiving data alone isn't enough to achieve robust data resilience.

Why data resilience is so important for businesses

Data resilience is fundamentally important to businesses of all sizes, chiefly because of its ability to reduce downtime and support business continuity. Because of the heavily data-dependent nature of modern business, IT downtime can cost your company considerable sums of money. In fact, unplanned downtime is estimated to cost companies a minimum of $926 per minute, with maximum amounts running well into the thousands of dollars per minute.

Data resilience can also help your business prepare for a variety of IT contingencies. Disaster recovery, for example, is closely linked to data resilience, as having continual access to data is an important part of bringing technology assets back online after a disaster occurs. By pursuing a data resilience strategy, you can give yourself a leg up in other data-related areas of IT planning.

How can you achieve data resilience?

The first step in achieving data resilience is to pursue a data reliability strategy. Data reliability involves replicating data in multiple forms, usually including both onsite and off-site storage media. Cloud-based data storage solutions can also be helpful in achieving data reliability.

Simply replicating data, though, is only one part of data resilience. Another important component is risk management, both within your own business and at the vendor level. By consistently identifying and mitigating IT risks, you can help prevent potential failures from occurring in the first place.

Cybersecurity also plays a critical role in achieving data resilience. With some 14 million American businesses subject to major risks from hacking, strong security protocols are needed to ensure that your business' data remains safe and available. Layered security systems that protect your data in multiple ways can be very effective in counteracting various cyber threats.

By pursuing a data resilience strategy, you can ensure that important data remains available while reducing the amount of downtime your business has to deal with. Whether your existing IT team can make your data more resilient or you need an outside IT consultant to help, a strong data resilience strategy is well worth investing in.
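As a small, concrete illustration of the replicate-and-verify idea described above, the sketch below compares checksums between a primary directory and a replica. The paths are hypothetical, and real resilience tooling would add scheduling, alerting, and off-site copies.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    # Stream the file in chunks so large files don't exhaust memory
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_replicas(primary_dir: str, replica_dir: str) -> list:
    """Return relative paths whose replica is missing or differs from the primary."""
    problems = []
    primary = Path(primary_dir)
    for src in primary.rglob("*"):
        if not src.is_file():
            continue
        dst = Path(replica_dir) / src.relative_to(primary)
        if not dst.exists() or sha256_of(src) != sha256_of(dst):
            problems.append(str(src.relative_to(primary)))
    return problems

# Hypothetical paths, for illustration only:
# print(verify_replicas("/data/primary", "/mnt/backup"))
```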
British company ARM is one of the three major non-x86 players, the others being MIPS and Texas Instruments, both based in the US. However, ARM is by far the most dominant force in the booming mobile arena, with all major semiconductor companies (even Intel) licensing its intellectual property. The company, which can cite Apple as one of its co-founders, operates a very different business model to its main rival, Intel, which explains its exponential rise. Rather than designing, manufacturing and selling chips (system-on-chip products or processors), ARM kicks off the first part of the process and allows its plethora of partners to do the rest, only collecting royalties and license fees as and when they make money.

As a result, ARM generates significantly less revenue than Intel (£0.5bn vs £34bn), but its capital expenditure is minimal (Intel spent more than $10 billion in CapEx in 2011) and its business model is more resilient, since its stakeholders include the majority of the global top 20 semiconductor companies. Hence fighting ARM's ecosystem would mean taking on the likes of Samsung, LG, Panasonic, Nvidia and others.

The emergence of ARM on the wider consumer market is due almost entirely to the booming mobile device market. Of the hundreds of millions of smartphones and tablets that have been sold over the last few years, more than 90 per cent are powered by ARM technology found in so-called System-on-Chip products. To make life easier for you, we've compiled a handy list of all the mainstream System-on-Chip families from all the major manufacturers on the market. But first let's clear up some confusion.

(a) In layman's terms, the differences between the terms SoC, Processor and Chipset, which are (unfortunately) often used interchangeably, are as follows: SoC is the term used generally by technical media and analysts, and describes the packaging that houses the CPU, the graphics subsystem, memory and more. It is sometimes referred to as an Application Processor. The term processor (or mobile processor) is used increasingly because it is a "consumer-friendly" term, while the word chipset refers to additional components on top of the system-on-chip, such as the baseband chip.

(b) ARM doesn't manufacture processors. It provides licenses that its partners can then use according to their needs. At its simplest, the time it takes to bring an SoC to market can be cut drastically by using off-the-shelf designs (a ready-to-use but rather inflexible, so-called Hard Macro implementation). A handful of partners have an architecture license that gives them carte blanche to be more creative. Apple, Nvidia, Cavium, Marvell, TI, Qualcomm, AppliedMicro, Microsoft and Intel all have those precious and expensive laissez-passer.

(c) ARM also designs an important part of the SoC puzzle, the Mali GPU (Graphics Processing Unit), using IP from the purchase of Falanx in June 2006. This means that a company can pick the GPU and the processor from ARM and other parts from other companies, assemble the SoC jigsaw and either choose to innovate (which will extend the time to market and require R&D investment) or stick to a vanilla version and bring parts to market as quickly as possible.

(d) ARM has fielded around 900 licenses, with nearly 300 of them for the latest generation of ARM processor, Cortex. Two billion ARM technology-based chips shipped in Q2 2012, earning ARM, on average, a mere 4.8 cents (£0.03) per chip.
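As a quick back-of-the-envelope check, combining the two figures just quoted shows why ARM's revenue is so much smaller than Intel's despite its enormous unit volumes:

```python
chips_shipped_q2_2012 = 2_000_000_000  # two billion ARM-based chips in the quarter
avg_royalty_usd = 0.048                # 4.8 cents per chip, on average

quarterly_royalties = chips_shipped_q2_2012 * avg_royalty_usd
print(f"~${quarterly_royalties / 1e6:.0f}M in royalties for the quarter")  # ~$96M
```

Roughly $96 million in quarterly royalties, a rounding error next to Intel's chip revenues, yet earned with almost no capital expenditure.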
According to a report published by market research firm Strategy Analytics in August, Qualcomm SoC revenues accounted for nearly $1.1bn, or roughly 44 per cent of the entire market, making it the biggest player. It is followed by Samsung, Texas Instruments, Broadcom, Mediatek and Marvell. Unlike the x86 market, where there are two main players (Intel and AMD) and one main OS (Windows), the ARM mobile ecosystem is made up of more than a dozen companies that operate across a number of mobile platforms (iOS, Android, Windows Phone, Blackberry OS, Tizen), making apples-to-apples performance comparisons a highly speculative and risky exercise.

Below are the device families powered by the semiconductor companies that have products aimed at the smartphone and tablet industries:
- Sony Xperia, the HTC One series, the Motorola Razr M, and all LTE smartphones on the market
- Most Samsung Galaxy tablets and smartphones
- Apple iPhone and iPad
- Entry-level Sony Xperia models (U and Sola)
- Most Android tablets, the Microsoft Surface tablet, the HTC One X series, and the LG Optimus 4X series
- Archos tablets, some Huawei smartphones, and older Motorola phones
- Some Nokia handsets
- Mostly Chinese tablets and smartphones (this applies to several of the vendors)
- The HP Palm Pre
- Huawei Ascend P1D and MediaPad
- Many Chinese-branded tablets; Panasonic AV products
According to a recent Krebs on Security article, a group of hackers released four million debit and credit card records stolen from four different restaurant chains across the Midwest and Eastern US. The hackers breached the remote access service used to maintain the chains' payment processing systems and spread malicious code through it. It is not clear whether the remote access services were poorly configured. They distributed the malware to approximately 50% of the more than 1,750 locations. The stolen data was then sent out of the POS system to a hacker's server relatively slowly, over almost four months, to avoid detection by making the transfers appear like regular traffic.

Once installed in a merchant's POS system, the malware gave the hacker unrestricted control over the POS terminals. The malicious code works by capturing payment data when a card is swiped through the checkout machine of a retail store. To extract card data, the malware scrapes the POS terminal's RAM, where the data can be found in decrypted form.

The Significance of the Event

The breach illustrates that consumers' payment card data is susceptible to cybersecurity attacks at their preferred retail stores, including restaurant chains. A majority of consumers and retail chains are not diligent when it comes to payment card security. While secure chip-based cards and security standards for companies that handle payment cards have been rolled out, a majority of consumers are yet to switch to the cards, and many restaurant chains have not yet implemented the standards. Additionally, the restaurant chains have not purchased the secure chip-based readers that would let their customers switch to the more secure cards. Conversely, firms that have implemented the standards and deployed chip-card readers have noted a decrease in the amount of payment card data that can be compromised. Krebs (2019) indicates that among the 80% of businesses that accept chip cards, counterfeit fraud has dropped by 87% where both consumers and retail owners have upgraded to chip cards. Therefore, firms and consumers need to invest in more secure chip cards to reduce breaches of crucial data from businesses' POS systems.

Steps to Solve the Issue

To prevent such breaches proactively, retailers can use software that offers end-to-end encryption, install two-factor authentication for remote access to their POS, install an antivirus on the POS system, and fully comply with PCI standards. Usually, end-to-end encryption tools provide protection by encrypting card data right after the POS device receives it and before it is sent out to the software's server. Thus, the data is secured regardless of where the hackers may install malware. What is more, businesses can install endpoint protection software on the POS system to thwart infiltration by malicious software. The antivirus scans the POS software and detects any problematic files or apps that should be removed immediately; it can also provide alerts concerning areas that may be affected, to facilitate cleanup and help guarantee that malicious code does not obtain any data. Ensuring that all elements of the POS system, including online shopping carts, servers, card readers, networks, and routers, are PCI compliant can reduce the chances of malware infiltrating the system.
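To make the end-to-end encryption idea concrete, here is a minimal sketch using the third-party `cryptography` package (pip install cryptography). It is an illustration only: the key handling, names, and data format are assumptions, and real point-to-point encryption schemes keep keys in the reader's secure hardware rather than in application memory.

```python
from cryptography.fernet import Fernet

# Key provisioned into the card reader at setup time (illustrative only;
# in practice this lives in the reader's secure element and the processor's HSM)
processor_key = Fernet.generate_key()
reader = Fernet(processor_key)

# Card data is encrypted the moment the swipe is read...
track_data = b"4111111111111111|12/26|JANE DOE"  # stand-in test data
ciphertext = reader.encrypt(track_data)

# ...so RAM-scraping malware on the POS terminal sees only ciphertext:
print(ciphertext[:40], b"...")

# Only the payment processor, holding the key, can recover the card data:
print(Fernet(processor_key).decrypt(ciphertext))
```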
Avoiding Being a Victim of Counterfeiting Fraud

First, I would recommend that consumers upgrade their payment cards from the conventional model to the more secure chip cards. According to Krebs (2019), it is more expensive to counterfeit chip cards than traditional cards, which discourages cyber-thieves from attempting to crack their details. Secondly, I would do some basic research to establish which retail shops accept chip cards, so that I use payment cards only at stores that accept chip cards while insisting on cash at stores that are yet to upgrade their card readers. This step reduces the chances of my payment cards being intercepted at one of the branches of the vulnerable restaurant chains. Thirdly, I would notify stores that I often shop at about the need to upgrade their card readers to improve the security of their POS systems and, in turn, the details of their customers' payment cards. The susceptibility of POS systems provides the main point of breach, and improving their security would significantly reduce the incidence of successful cyberattacks.

How can CBI help?

CBI can be of much assistance to retail companies and restaurant chains with vulnerable POS systems when it comes to preventing breaches. We can assess the vulnerability of a merchant's POS system through penetration tests. Additionally, CBI has personnel who stay current on matters concerning POS system security, making them a suitable choice for analyzing a firm's POS system. Upgrade projects can be relatively expensive and complex for the businesses that need them, but CBI can recommend how implementation can be accomplished in phases depending on the client's budget. Vulnerable firms can also seek the assistance of CBI when implementing PCI standards to guarantee efficiency.
Every organization has valuable information assets -- whether it's intellectual property; commercially valuable information and IT systems; or data on employees, customers and suppliers. An IT system failure, therefore, will adversely impact the organization to some degree. IT professionals are charged with the often-daunting task of providing an assessment of the risk -- and potential damage -- associated with specific threats to company information systems. Complicating the task is the need to explain to senior management how a risk, and the likelihood that it will cause harm to the organization, was calculated.

With IT-related risks, you can't construct measurement tools that satisfy formal measurement theory. Even ISO Standard 27005 -- information security risk management, which is designed to help the implementation of information security based on a risk management approach -- doesn't specify, recommend or even name any specific risk analysis method.

Indeed, measuring the level of risk an organization faces is a big undertaking, so it makes sense to split risk assessments into defined areas of the business. These could include a physical location, such as a call center, or a business process, such as order fulfillment. Documenting all the threats and quantifying the associated risks -- even for a small office or basic process -- usually takes a few weeks and can last up to several months for more complex regulated entities. Even if your company contracts with an outside consultant, internal staff will need to be involved. It's therefore essential that everyone understands the terminology and concepts behind a risk assessment. Any reports to senior management should begin by explaining these key concepts. The terms may seem basic, but it is important that everyone involved is using the same vocabulary and applying the terms in the same context.

A threat is something that can potentially cause damage to the organization. A vulnerability is a weakness within the organization that can be exploited by a threat. Risk is the possibility that a threat exploits a vulnerability and causes damage to the organization. The estimated damage to the organization is its impact.

It should be made clear at this point that every organization has to live with threats; you cannot eliminate the threat of lightning strikes, malicious cyber attacks or even physical attacks. The first task, then, is to identify all the threats to your assets within the scope of the risk assessment.

To learn more about how to conduct a risk assessment -- and the tools that can be used to measure and report risk -- download the free guide to measuring IT security risk.
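As a concrete illustration of the four terms defined above, here is a minimal risk-scoring sketch in which risk is estimated as the likelihood that a threat exploits a vulnerability, multiplied by the impact. The 1-5 scales and the example entries are illustrative assumptions, not part of any standard.

```python
threats = [
    # (threat, likelihood 1-5, impact 1-5) -- invented example values
    ("Phishing against call-center staff", 4, 3),
    ("Ransomware via unpatched server",    3, 5),
    ("Lightning strike on data center",    1, 4),
]

# Rank threats by score so the report leads with the biggest risks
for name, likelihood, impact in sorted(threats, key=lambda t: t[1] * t[2], reverse=True):
    score = likelihood * impact
    level = "high" if score >= 12 else "medium" if score >= 6 else "low"
    print(f"{score:>2}  {level:<6}  {name}")
```

Simple as it is, a table like this gives senior management exactly what the article asks for: a transparent account of how each risk figure was calculated.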
These days, it may not be enough to send your college freshman off to school with new clothes, a set of twin sheets, and a shower caddy. The Identity Theft Resource Center (ITRC) says it's equally important to arm them with a cross-cut shredder, a locking storage box, and knowledge about identity theft and other scams that they may encounter while living on their own for the first time.

Why Would College Students Be a Target for Identity Theft?

Many college students may think that identity theft won't affect them. Students aren't usually known for having a lot of money or great credit scores, so what could possibly be attractive to an identity thief? But the truth is that identity theft isn't just about stealing money or other financial assets; it's about stealing personal or financial information and using those details to try to open credit card accounts, secure a loan, or commit other fraudulent acts.

In fact, students can be a target for identity theft. During this transitional time, their identifying information may be in a lot of different places because of life changes, such as moving into a dorm or apartment, filling out background checks to sign a lease or activate utilities, or applying for colleges or employment. The Federal Trade Commission (FTC) notes that 20 percent of identity theft incidents reported in its Consumer Sentinel Network Data Book in 2018 were committed against victims ages 29 and under.

Recent Graduates and College Students: Consider These Steps to Help Better Protect Yourself from Identity Theft

- Be Cautious With Your Social Security Number – Rather than carrying your Social Security card with you, consider keeping it in a locked, safe place. Also, be thoughtful about whom you share your Social Security number with. You may be able to provide an identifier other than a Social Security number when you have to access or open an account. In addition, most schools now use a student identification number instead of a Social Security number.
- Use a Parent's Address or P.O. Box for Important Mail – It may be best to avoid mailing important documents to a dorm or apartment where your mailbox may not be secure. Instead, consider using a parent or relative's address or getting a post office box.
- Sort and Shred Mail and Documents – Instead of letting mail pile up where others can easily access it, consider getting a shredder, and shred all important documents, such as bank statements, credit card offers, and anything that contains an account number or Social Security number. Make sure any items you throw away, including prescription drug containers, do not contain personal information.
- Secure Your Laptop and Other Devices – Consider storing your laptop and other devices in a locking storage box if you leave them in your dorm room or apartment. It's a good idea to log out of secure sites, such as online banking, and make sure your web browser doesn't automatically save login and password combinations for sensitive sites.
- Surf and Shop Wisely – Look for the "https" and padlock icon on websites, as websites that don't use proper encryption may make you an easier target for thieves. Avoid making payments on public WiFi, as these networks may not be secure.
- Use Stronger Passwords – Consider creating stronger passwords, such as a "passphrase" that would be difficult for hackers to guess, and use different passwords for different accounts.
You may want to use a secure password manager, or memorize your username and password combinations rather than storing them on your computer. (A minimal passphrase-generator sketch appears at the end of this article.)
- Be Cautious When Sharing on Social Media – Students who are comfortable sharing details about their lives on social media sites may post a lot of personal details over time. Keep in mind that fraudsters may be able to mine social media posts for information that could help them get past account security questions and allow them to hack into various sites.
- Learn to Spot Phishing Emails – Be wary of emails that "phish" for information. Phishing emails and texts often try to get you to click through to what looks like a legitimate site but is actually a website controlled by cybercriminals, where your personal information may be recorded. Read: Smishing 101: Steps to Help Better Detect and Avoid Text Message Scams
- Check Your Credit Reports – Once you have established credit, check your credit reports with the three nationwide credit bureaus at least annually. If you have never established credit, you may not have a report yet. If there is a credit report in your name, review it to make sure that none of the information is a result of fraudulent activity.

If you find suspicious activity, the FTC recommends informing the organizations where the fraud occurred about the potential identity theft and placing fraud alerts on your credit reports, so lenders will be encouraged to take extra steps to confirm your identity before opening new credit. You might also consider placing a security freeze, which could provide additional protection by helping prevent identity thieves from opening new accounts in your name. If you believe you or one of your family members have been a victim of identity theft, report it to the FTC at https://identitytheft.gov/.
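As promised in the stronger-passwords tip above, here is a minimal passphrase-generator sketch. The twelve-word list is an illustrative stand-in; a real generator would draw from a large dictionary, such as the EFF diceware list, to get enough randomness.

```python
import secrets

# Tiny illustrative word list; far too small for real security
WORDS = ["river", "planet", "copper", "violet", "sparrow", "meadow",
         "anchor", "lantern", "orchid", "timber", "ember", "harbor"]

def passphrase(n_words: int = 4, sep: str = "-") -> str:
    # secrets.choice uses a cryptographically secure random source,
    # unlike the random module
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())  # e.g. "copper-meadow-anchor-violet"
```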
Published Friday, Nov 13, 2020, by Inside Telecom Staff

As contracts, partnerships, and collaborations are being signed left, right and center by telecoms operators around the world for the swift rollout of 5G networks, China is looking a step further.

China has successfully launched what has been described as "the world's first 6G satellite" into space to test the technology, sending it into orbit alongside 12 other satellites from the Taiyuan Satellite Launch Center in Shanxi Province. The technology involves the use of high-frequency terahertz waves to achieve data-transmission speeds 100 times faster than 5G is likely to be capable of. According to Chinese news services, the satellite also carries technology that will be used for crop disaster monitoring and forest fire prevention.

The telecoms industry is still several years away from agreeing on 6G's specifications, so it is not yet certain whether the tech being trialed will make it into the final standard. However, Chinese telecom experts have heralded the launch as a breakthrough in the exploration of terahertz space communication technologies in China's space field.

As of today, only 38 countries are using 5G networks. According to China's Science and Technology Ministry, it has brought together an expert team to begin research and development on the sixth generation of wireless telecom technology. In parallel, the government has already launched several R&D projects to explore the feasibility and applications of 6G across the nation's industries.

However, the experts have argued that 6G technology is "still developing and it will need to overcome many hurdles in basic research, hardware design, and its environmental impact before it becomes viable for commercial use." 6G will involve the use of new frequency ranges, new infrastructure, and enhanced integration of space-air-ground-sea communication technologies, which some scientists worry might affect astronomical instruments or public health, or be too expensive for researchers to use.
In the wake of Superstorm Sandy last week, people in the northeast of the US were left not only with huge amounts of damage to their property and infrastructure but also without mobile communications, a crucial resource in the days following such an event. The New York Times highlighted the importance of mobile communications, with many people unnerved by the lack of access to information about the situation, such as whether water was safe to drink or when power was likely to be restored.

The operators reacted quickly as they received notifications from their network equipment and feedback from customers via Twitter and other mediums, and engineering teams were despatched to assess the situation and fix what they could. Immediately following the storm, the Federal Communications Commission (FCC) suggested that 25 percent of cell sites in the affected area were out of operation. By Friday, this had improved to 15 percent as operators and the FCC worked to improve matters.

The speed at which service has been resumed is impressive. With the storm's high winds and water surge, many cell sites were flooded or cut off from power, with other connecting infrastructure suffering damage. Engineering teams were challenged by a lack of power and sites that were difficult to access (even Verizon Wireless' office in Manhattan was flooded). It therefore took time to gain access to sites while generators were brought in to provide temporary power.

By Sunday, just five days after Sandy hit, Verizon Wireless, AT&T and T-Mobile were all reporting drastic improvements to the operational level of their networks. Verizon said its network in the northeast was 98.1 percent operational, and T-Mobile, 95 percent. AT&T added that its network was 90 percent operational in New York City and up to 80 percent in Manhattan. By Monday, AT&T said its network was up to 98 percent operational in the affected region, with 95 percent in New York City.

While repair work was carried out, AT&T and erstwhile acquisition target T-Mobile USA agreed to open up their networks so customers of either company could gain access to mobile services if their own network had poor availability. AT&T deployed temporary towers to boost coverage and, along with Verizon Wireless, continues to provide mobile charging stations for anyone affected by power outages, regardless of whether they are customers. This kind of collaboration is rare in what is a very competitive industry, but it shows that, in certain circumstances, working together rather than competing makes sense. Operators combined their assets and technical expertise to help get the northeast United States back on its feet.

As well as reacting so quickly to restore service, operators also took action to help relieve the other issues people are facing in the affected areas, including lack of power, fuel, food and, in some cases, a roof over their heads, with appeals for donations for the Red Cross relief effort. Sprint, AT&T and T-Mobile committed US$500,000, US$250,000 and US$100,000 respectively to the Red Cross, while Verizon customers and the Verizon Foundation pledged a combined total of US$3 million to the cause.

Of course the companies will soon return to more day-to-day concerns of LTE coverage battles and potential mergers, but the extreme weather event of a week ago serves to illustrate what the US mobile industry can achieve in the face of adversity.
The editorial views expressed in this article are solely those of the author(s) and will not necessarily reflect the views of the GSMA, its Members or Associate Members
This blog post was written by Vikas Taneja.

Attackers use all kinds of attack vectors to steal sensitive information from their targets. Their efforts are not limited to zero-day vulnerabilities; malware authors often exploit old vulnerabilities because a large number of organizations still use old, vulnerable software. The Trojan Travnet, which steals information, is a classic example of malware that takes advantage of unpatched software. We have recently observed malicious Travnet RTF and Excel documents that exploit old vulnerabilities, such as CVE-2010-3333, in Microsoft Office. During our investigation we identified some samples associated with this campaign that have been active since 2009.

Once Travnet infects a machine, it searches for all document files, such as PDF, PPT, and DOC, and uploads this data to remote servers. To evade detection by network-monitoring appliances, such as intrusion detection and prevention systems, the malware sends the stolen data in encrypted format. To reduce the data size, it first applies a compression algorithm and then a Base64 encoding.

We have observed the following files actively used in this campaign:
- План проведения учения ВМС на 2013 года.xls ("Plan for conducting naval exercises for 2013.xls")

These files exploit old vulnerabilities in Office and drop executable files that are embedded in the original malicious files. We found that IP address 18.104.22.168 hosted many domains that are part of this campaign; during our investigation we found that these servers are now hosted at different IPs. Most of the campaign's recent domains are registered with the same registrar and name server. Some sites are registered to Li Ming and Zhang Lan, which could be fake names; however, their email IDs are also associated with many similar websites that are part of this campaign. The stolen data is hosted on several servers, and on one of them we found other malicious files using different domains.

The malware injects a DLL into the Internet Explorer process IEXPLORE.EXE and starts collecting information and sending it to the remote server. On the server side, the stolen data is parsed by nettraveler.asp.

As we write this blog, we continue to analyze the samples to ascertain the nature of the data being collected on the remote servers, the potential victims, and the attacker(s). We are also investigating whether this attack is an advanced persistent threat.

McAfee protects against this threat through on-demand User Defined Signatures. Coverage will be included in the Network Security Platform's next signature release.

Thanks to fellow researchers Anil Aphale, Amit Malik, Arunpreet Singh, and Umesh Wanve for their analysis. And thanks to Ravi Balupari and Benjamin Cruz for their valuable input.
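The compress-then-encode exfiltration described above can sometimes be reversed by an analyst who captures the traffic. The sketch below assumes a zlib-compatible compression step purely for illustration; Travnet's actual algorithm may differ.

```python
import base64
import binascii
import zlib

def try_decode_exfil(blob: bytes):
    """Attempt to reverse a compress-then-Base64 encoding.
    Returns the decompressed bytes, or None if the blob doesn't match."""
    try:
        raw = base64.b64decode(blob, validate=True)
        return zlib.decompress(raw)
    except (binascii.Error, zlib.error):
        return None

# Round-trip demonstration with stand-in "stolen document" data:
stolen = b"confidential.doc contents..." * 10
wire = base64.b64encode(zlib.compress(stolen))
assert try_decode_exfil(wire) == stolen
print(f"{len(stolen)} bytes shrunk to {len(wire)} bytes on the wire")
```

The size reduction also shows why attackers bother with compression at all: smaller transfers blend more easily into regular traffic.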
The nationally publicized security disaster of April 2014 known as the "Heartbleed" bug has certainly drawn attention to the growing need to prioritize security when dealing with information systems. A single, flawed line of code written into an extension of the widely used web encryption software OpenSSL granted access to stored private and personal data to those who sought to obtain it illegally. While the extension was built to maintain periodic open connections between servers in order to regulate operation, the flawed line of code inadvertently allowed up to 64 kilobytes of server memory to be read by a web attacker each time an open connection was established. Furthermore, given that the process was periodic, an individual extracting information illegally could accumulate valuable data over time by continuously exploiting each open connection. As a consequence, hackers were able to acquire usernames, passwords, credit card information, and each server's private digital key, which made classified internal documents available to unauthorized parties.

Although this vulnerability was ultimately patched, the event made an example of the problems that arise when software is not monitored for exploitable weaknesses. With disaster, however, came useful lessons, and perhaps the best way to stop a heart from bleeding is not simply to patch it up but to prevent whatever is causing it in the first place.
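At its core, the flaw was a missing bounds check: the heartbeat handler echoed back as many bytes as the requester claimed to have sent, rather than as many as it actually sent. The following is a toy illustration of that bug class in Python, not the actual OpenSSL C code:

```python
# Secrets that happen to sit in memory next to the heartbeat buffer
MEMORY = bytearray(b"user=alice&password=hunter2" + b"\x00" * 37)

def heartbeat_vulnerable(payload: bytes, claimed_len: int) -> bytes:
    buf = bytearray(payload) + MEMORY     # payload stored adjacent to other data
    return bytes(buf[:claimed_len])       # BUG: claimed_len is never validated

def heartbeat_fixed(payload: bytes, claimed_len: int) -> bytes:
    if claimed_len > len(payload):        # FIX: reject inconsistent lengths
        raise ValueError("claimed length exceeds actual payload")
    return payload[:claimed_len]

# The attacker sends 4 bytes but claims 40, and receives 36 bytes of
# adjacent memory, credentials included, along with the echo:
print(heartbeat_vulnerable(b"ping", 40))
```

Repeated over many heartbeats, exactly this kind of over-read is how attackers accumulated credentials and private keys from vulnerable servers.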
Consequently, extensive research has been conducted in order to understand popular trends in cybercrime as well as practical preventative measures. It may be a general cognizance of the intentions and methods of web attackers that allows both private consumers and employees of businesses to identify potential security threats. This approach, of course, requires knowledge, and security software developer Symantec has certainly reached some important conclusions. In its 2015 Website Security Threat Report, Symantec explored various components of cybercrime, including categories and contexts. By understanding this information, as well as the conclusions provided by other sources, individuals can take more effective steps to protect information that they do not want shared.

When confronting cybercrime, it is important to understand the motives of web attackers. Common motives include stealing data for sale in the underground economy, blackmail, extortion for money, distraction from additional cybercrimes, personal revenge, and the social movement that has been given the title of hacktivism. While these certainly follow logically from what someone would expect as the motivation for cybercrime, these observations are covertly invaluable. In each interaction that any individual has with the internet in which they are being asked to perform an action, one should ask whether the request being made could be correlated with one of these motives. Even if this method is not practiced faultlessly, it is the general knowledge that these motives exist that should be integrated into your understanding of the internet experience.

In addition to understanding motives, it is also important to acknowledge what kinds of cybercrime exist altogether. Firstly, it should be observed that not all cybercriminals developed the attacking software themselves, nor are they all acting on their own behalf. Web attacking software, attack services, and stolen information are all purchasable goods and services within the underground economy. Therefore, the world of cybercrime transcends the epitomic image of the single computer hacker ceaselessly pressing every key on his keyboard and stealing trade secrets. Instead, there are several components that constitute the collective meaning of cybercrime.

One of these components is a method called malvertising (malicious advertising), in which a link to a dangerous download disguises itself as an authentic advertisement in order to entice a website user to click on it. Sometimes, this can lead an individual to an inescapable web page indicating that if they do not pay a certain fine, they will be charged by the authorities for downloading illegal materials. This method is called ransomware, and it is a popular means of collecting profit as a web attacker. Ransomware has further evolved into a program that, when downloaded, encrypts all of the files on the user's computer and gives them the choice of paying a given amount of money in order to regain access. There is also general malware (malicious software), which often plays a role in both malvertising and ransomware, that infects a computer with a virus or attempts to steal personal information. While there are a significant number of approaches for web attackers to utilize, these three are important to understand when browsing the internet. If you can recognize the approach of a web attacker, you can better identify when you are dealing with one.

Although security software continues to develop and to challenge cybercrime, awareness is underrated as an IT security solution. While web attackers often exploit vulnerabilities such as the Heartbleed bug, they do not hesitate to exploit the ignorance of the individual web user as well. The mere recognition that familiar fixtures of the web, such as advertisements, can let us down is an important step toward ensuring a secure IT environment. Additionally, it is important to regularly upgrade security features, as each new upgrade publicizes the vulnerabilities that are being corrected. To run obsolete software is to invite cybercriminals to take advantage of a vulnerability of which they have been made explicitly aware.

Fortunately, as operating systems and other technology continue to advance, security is becoming a priority. Microsoft has made it clear that Windows 10 will have frequent, automatic security updates that seek to quell vulnerability concerns in general. Moreover, websites are being protected by SSL configurations in which keys apply only to data exchanged within a limited time period, so that cybercriminals cannot gain an all-access pass by obtaining an SSL certificate's private key. These are certainly powerful weapons against cybercrime, but it may be cognizance and education that serve as the most effective line of defense.

Symantec. Website Security Threat Report 2015. Rep. Print.
Have you ever been to a website where, before you can submit or view information, you have had to verify that you are not a robot? While this might be a bit annoying (of course you are not a robot!), there is a very good reason for doing this: to stop automated software, or "bots," from abusing the website. There are any number of "bad actors" out there who would like to exploit weaknesses in your site.

What is CAPTCHA/reCAPTCHA?

To try to sort the humans from the bots, CAPTCHA was invented as a way of testing who you were: a man or a machine. CAPTCHA is an acronym that stands for Completely Automated Public Turing test to tell Computers and Humans Apart. In the bad old days, CAPTCHAs were usually images of squiggly text or numbers that you had to read and type into a text box. They were not very friendly or pretty.

In 2009 Google acquired one of the CAPTCHA systems, called reCaptcha. In the words of Google: "reCAPTCHA is a free service that protects your website from spam and abuse. reCAPTCHA uses an advanced risk analysis engine and adaptive CAPTCHAs to keep automated software from engaging in abusive activities on your site. It does this while letting your valid users pass through with ease."

reCaptcha is more user-friendly, asking users to verify they are not robots by showing squares of images and asking them to select the ones that meet a criterion, such as those containing a street sign. You first tick the "I'm not a robot" box and then complete the test.

So, if you have a public-facing website that allows users to view or submit information, you will probably be interested in stopping bots from running amok by using something like reCaptcha. What if your public-facing form is built with Nintex K2? Not to worry: you can quite easily add reCaptcha to your Nintex K2 forms.

How to add Google reCAPTCHA to your Nintex K2 Forms

The first thing you will need is a Google account. If you don't have one, visit Google and create one. Then visit the reCaptcha site, click on the Get reCaptcha button, and enter your Google login details. Once you have logged in, you will need to register your site. Give it a label and select the reCaptcha V2 option. You will need to enter the domain name for your Nintex K2 site (e.g. denallix.com). Once you have done this, you will be shown the information needed to add reCaptcha to your site, including your site key and secret key.

To add reCaptcha to your Nintex K2 form, follow these steps:
- Create a new view (e.g. named "Google.Recaptcha.Verify.Item")
- Add a data label to the view named "Recaptcha Script Data Label"
- Add an expression to your data label containing the reCaptcha script (the standard reCAPTCHA v2 embed: a script include for Google's api.js plus a g-recaptcha div that carries your site key), taking care to replace the site-key placeholder with the site key obtained above
- Make sure you mark your data label as a literal
- Add another data label called "Result Data Label"

At this point, you can run your view, and you should be able to verify you are not a robot. Once you are verified, your Result Data Label should contain a long string.

Great! You have verified you are not a robot; however, this is only half the story. Even though Google has returned a response (that long string), you have not fully verified, because you now need to send that response off to Google and get back a final confirmation. To do this you will need to call Google's URL.
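For reference, the verification call that the REST service described below will wrap is a single HTTP POST to Google's documented siteverify endpoint. A minimal Python sketch of the same call, using the third-party requests package (the key and response values here are placeholders):

```python
import requests

def verify_recaptcha(secret_key: str, client_response: str) -> bool:
    # Google's documented verification endpoint for reCAPTCHA
    resp = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data={"secret": secret_key, "response": client_response},
        timeout=10,
    )
    body = resp.json()
    if not body.get("success"):
        print("error codes:", body.get("error-codes"))
    return bool(body.get("success"))

# verify_recaptcha("YOUR_SECRET_KEY", result_data_label_value)
```

The Nintex K2 SmartObject built in the next steps makes exactly this request, with "secret" and "response" as its two input parameters.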
The best and easiest way to make this call from Nintex K2 is via a REST service. To create a reCaptcha REST service instance you will need a swagger file. A swagger file describes the service and is used by Nintex K2 to create the Service Instance and related Service Objects. The quickest and easiest way to create one is by using a service like REST United. There is only one endpoint needed, and it can be defined in REST United's wizard.

Once you have the swagger file (you can export this from REST United: go to Test & Export, select Swagger, and then Export), place it in a location (i.e. a file share or website) accessible by Nintex K2. You can now create a new REST Service Instance and add this location to the "Descriptor Location" setting for your service instance.

Once your Google reCaptcha service is created, you can generate the SmartObject you need to perform the verification. The Service Object you need will be called "ValidationResponse" if you followed the steps above for creating the swagger file in REST United. You can ask Nintex K2 to create the SmartObject for you by using the "Generate SmartObjects" button on the Service Instances page in Management.

This SmartObject will have a single "Verify" method that you need to call. The method takes two parameters: secret and response. The secret parameter maps to the secret key that was generated for you by Google when you registered your site. The response parameter is the long string that was returned to your Result Data Label.

Instead of having to supply the secret parameter every time you call this Verify SmartObject method, you can add it as a specific value in the SmartObject method definition. Edit the SmartObject method and choose to bind "secret" to a specific value, then paste your secret key in (don't worry if Nintex K2 reports the value as "undefined"; it will retain the value). Now when you call this method on your form you do not need to pass the secret key in.

You can now complete the final piece of the puzzle: sending Google your response and secret, and getting verification that the user is not a robot.
- Add a new data label to your view called "Recaptcha Valid Data Label". Optionally, you can add another data label to hold any error codes returned by the verify call (e.g. an "Error Codes Data Label")
- Add a new unbound rule to your view called "Verify Recaptcha Response" and add an action to call a SmartObject method
- Select your "ValidationResponse" SmartObject created above and the "Verify" method
- Configure the action: map the "Result Data Label" to the "response" input parameter, and map the "Success" property to the "Recaptcha Valid Data Label" in the output mappings
- You can now finish this view and add it to a form

On your form, you can now call the "Verify Recaptcha Response" rule in response to an event, such as a button click. You will first want to check that the "Result Data Label" has a value, so that you can tell whether the user has attempted the reCaptcha verification. Once you have called the rule to verify the response, you can check the value in "Recaptcha Valid Data Label" to see if it is true (passed) or false (failed), and then act accordingly.

You now have a re-usable view and pattern for implementing Google reCaptcha on any of your Nintex K2 forms!
It is always tempting to dive straight into prototyping and algorithm development activities, but taking some time at the outset to better understand data always leads to more nuanced insights and relevant results, because teams are able to identify issues and nuances early on. The best way to know whether data contain useful and relevant information for solving a problem using machine learning is to visualize and study the data. Where possible, it is better to work with SMEs who are familiar with the problem statement and the nature of the business while visualizing and understanding the relevant data sets.

In a customer attrition problem, for example, a business SME may be intimately familiar with aggregate data patterns such as higher attrition caused by pricing changes and lower attrition for customers who use a particularly sticky product. Such insights, observed in the data, will increase confidence in the overall data set or, more importantly, will help data scientists focus on data issues to address early in the project.

We typically recommend various visualizations of data in order to best understand issues and develop an intuition about the data set. Visualizations may include aggregate distributions, charts of data trends over time, graphs of subsets of the data, and visualizations of summary statistics, including gaps, missing data, and mean/median values. For problem statements that support supervised modeling approaches, it is beneficial to view data with clear labels for the multiple classes defined, for example customers who have left versus existing customers. When examining data distributions across multiple classes, it is helpful to confirm that the classes display differences. If data differences, however minor, are not apparent during inspection and manual analysis, it is unlikely that AI/ML systems will be successful at discovering them effectively.

The following figure shows an example of plotting data distributions across positive (in blue) and negative (in orange) classes across different features in order to ascertain whether there are differences between the two classes. In customer attrition problems, customers who have left may account for a disproportionate number of inbound requests to call centers. Such an insight during data visualization may lead data scientists to explore customer engagement features more deeply in their experiments.

Figure 29: Understanding the impact of individual features on outcomes or class distributions can offer significant insight into the learning problem

We also recommend that teams physically print out data sets on paper to visualize and mark up observations and hypotheses. It is often incredibly challenging to understand and absorb data trends on screens. Physical copies are more amenable to deep analysis and collaboration, especially if they are prominently displayed for team members to interact with, for instance on the walls of a team's room. Using wall space and printed paper is more effective than even a very large projector. The following figure shows one of our conference room walls covered in data visualizations.

Figure 30: Conference room walls covered in data visualizations to facilitate understanding and collaboration during model development

In the following figure, from an AI-based predictive maintenance example, aligning individual time series signals makes it possible to rapidly scan them for changes that occur before failures.
Figure 31: Example of a time series data visualization exercise as part of an AI-based predictive maintenance prototype. By visualizing these data, scientists were able to identify small patterns that can later be learned by algorithms.
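To make the per-class distribution check concrete, here is a minimal Python sketch using pandas and matplotlib. The data set and the column names (inbound_calls, churned) are invented for illustration; in practice you would load your own labeled data and loop over each candidate feature.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Synthetic stand-in for a labeled attrition data set; all names are invented.
rng = np.random.default_rng(seed=0)
df = pd.DataFrame({
    "inbound_calls": np.concatenate([rng.poisson(2, 900),    # retained customers
                                     rng.poisson(6, 100)]),  # churned customers
    "churned": np.concatenate([np.zeros(900, dtype=int), np.ones(100, dtype=int)]),
})

# Overlay the per-class distributions of one feature, as in Figure 29.
fig, ax = plt.subplots()
for label, group in df.groupby("churned"):
    ax.hist(group["inbound_calls"], bins=20, density=True, alpha=0.5,
            label="churned" if label else "retained")
ax.set_xlabel("inbound call-center requests")
ax.set_ylabel("density")
ax.legend()
plt.show()

If the two histograms sit on top of each other for every feature you try, that is an early warning, per the reasoning above, that the feature set may not carry a learnable signal.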
<urn:uuid:41b2cc08-2f53-4f6c-9b51-c7b2f2913c7c>
CC-MAIN-2022-40
https://c3.ai/introduction-what-is-machine-learning/getting-started-by-visualizing-data/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00735.warc.gz
en
0.925198
658
2.890625
3
A new antibody cocktail developed by Israeli researchers may provide natural immunity against COVID-19 for weeks, and possibly months, according to a preliminary study.

The recently released study, published last week in bioRxiv and not yet peer-reviewed, was conducted by a team of Israeli scientists led by Dr. Natalia Freund and PhD student Michael Mor at Tel Aviv University's Sackler Faculty of Medicine. The team isolated and characterized six antibodies derived from the blood of severely ill COVID-19 patients, and then combined three antibodies at a time for a potent in-vitro cocktail against SARS-CoV-2, the virus that causes COVID-19, according to a university announcement. The scientists have filed for a patent on the antibodies through Ramot, the technology transfer arm of Tel Aviv University. Their study is also currently under consideration by the peer-reviewed medical journal PLOS Pathogens.

"The antibodies bind to the virus in different spots and neutralize the virus using different mechanisms," Dr. Freund explains to NoCamels in a phone interview. The antibodies "were more effective when combined as opposed to by themselves, and we got very efficient neutralization of the virus in tissue culture." Pending successful clinical trials with humans, the data from the study support "the use of combination antibody therapy to prevent and treat COVID-19," the scientists wrote in their study.

The quality of antibodies

The study began in April, approximately a month into Israel's first national coronavirus lockdown. The team sought to understand the antibody response to COVID-19 infection and its development in patients with no symptoms, mild symptoms and severe symptoms, "with regard to both the quality and quantity of the anti-viral antibodies produced by the immune system." The scientists recruited 18 of Israel's earliest COVID-19 patients: 10 with mild or no symptoms, and eight severely ill patients who were hospitalized, and some even ventilated, at Ichilov and Kaplan Medical Centers. (All the participants recovered from the disease.)

They found that only a small portion of the mildly ill participants developed neutralizing antibodies, and some developed none. Meanwhile, the blood of all severely ill patients contained neutralizing antibodies. These findings suggest that those with a mild or asymptomatic infection may possibly contract the disease a second time, while those who recovered from a more severe infection may be protected, according to the university statement.

"We sequenced the cells that are producing these antibodies; these cells are called B lymphocytes, these are cells of the immune system," Dr. Freund explains, emphasizing that they are found naturally in the body. "We sequenced thousands of these cells, and eventually we were able to isolate six different antibodies that were neutralizing [of the virus] and produced them in the lab. So we basically reproduced the antibodies and made monoclonal antibodies (lab-made antibodies produced by identical immune cells that are all clones of a unique parent cell), and we mixed these and tested their use in neutralizing the virus, alone and in combination with each other."

The concept, she says, is the same as the experimental antibody cocktail administered to US President Donald Trump earlier this month, when he was found to have tested positive for COVID-19 and began developing symptoms.
The cocktail treatment the president received was developed by American biotech company Regeneron; early studies have shown that it can be effective in patients whose immune systems had not mounted their own antibody response. Trump himself called the treatment a "miracle" and has touted it as a cure, but the cocktail has not yet been fully clinically evaluated.

The Israeli scientists' cocktail, explains Dr. Freund, is derived naturally from the patients' immune systems, which means that it is probably safer for use "because it's natural human antibodies, and there are no expected adverse effects."

"Since these antibodies are stable in the blood, a preventive injection can provide protection for several weeks, and possibly even several months."

This type of treatment also differs from other antibody therapies, such as those derived from plasma. "We are not taking the antibodies from the plasma and purifying them and using them on patients, or it's not our intention to," says Dr. Freund. The vision is that, in the future, the cocktail will be used to treat COVID-19 patients "until the much-awaited vaccine finally arrives," says Dr. Freund.

COVID-19 vaccine development

Currently, close to 200 candidate vaccines or treatments for COVID-19 are in development, 42 of which are in clinical evaluation as of October 2020. These include a promising vaccine candidate developed by Massachusetts-based company Moderna, which is currently in Phase III trials, and a vaccine developed by the University of Oxford, which signed a distribution agreement with drugmaker AstraZeneca.

In August, Israeli researchers from the government-run Israel Institute for Biological Research (IIBR) indicated that they expect to begin human trials for the COVID-19 vaccine candidate they developed after the high holidays. The institute announced several developments over the past several months. In June, researchers indicated that a vaccine they developed for SARS-CoV-2, the virus that causes COVID-19, was found to be effective in trials involving hamsters, paving the way for testing with humans. Previously, the center reported "significant progress" on the vaccine and initial trials.

The secretive institute has also been working on researching potential treatments, and in early May announced that it made a breakthrough on a treatment involving a discovered antibody that neutralizes the virus. That same month, it further announced that researchers found that a combination of two existing antiviral drugs for Gaucher disease appears to inhibit the growth of SARS-CoV-2, and may work against other viral infections, including a common flu strain.

On Sunday, the Hebrew-language Walla news site reported that clinical trials for the IIBR vaccine may begin later this month, pending the approval of the Health Ministry and that of the Helsinki Committee, a medical panel composed of physicians and advocates that weighs research approval for human experiments. The study is expected to unfold in three stages: the first will consist of a trial involving a hundred healthy participants aged 18-50; the second will consist of 1,000 participants and is expected to start in December; the third stage will involve some 30,000 volunteers and may begin early next year.

The interactions between antibodies, SARS-CoV-2 and immune cells contribute to the pathogenesis of COVID-19 and protective immunity.
To understand the differences between antibody responses in mild versus severe cases of COVID-19, we analyzed the B cell responses in patients 1.5 months post SARS-CoV-2 infection. Severe, and not mild, infection correlated with high titers of IgG against the Spike receptor binding domain (RBD) that were capable of viral inhibition. B cell receptor (BCR) sequencing revealed two VH genes, VH3-38 and VH3-53, that were enriched during severe infection. Of the 22 antibodies cloned from two severe donors, six exhibited potent neutralization against live SARS-CoV-2 and inhibited syncytia formation. Using peptide libraries, competition ELISA and RBD mutagenesis, we mapped the epitopes of the neutralizing antibodies (nAbs) to three different sites on the Spike. Finally, we used combinations of nAbs targeting different immune sites to efficiently block SARS-CoV-2 infection. Analysis of 49 healthy BCR repertoires revealed that the nAbs' germline VHJH precursors comprise up to 2.7% of all VHJHs. We demonstrate that severe COVID-19 is associated with unique BCR signatures and multi-clonal neutralizing responses that are relatively frequent in the population. Moreover, our data support the use of combination antibody therapy to prevent and treat COVID-19.

Reference link: https://www.biorxiv.org/content/10.1101/2020.10.06.323634v1.full
<urn:uuid:630e8ab7-f58a-4cf5-b36c-2de08e691569>
CC-MAIN-2022-40
https://debuglies.com/2020/10/12/covid-19-israeli-researchers-have-developed-a-new-antibody-cocktail-that-provides-natural-immunity-for-months/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00735.warc.gz
en
0.956234
1,725
2.859375
3
At the turn of this century, semiconductor makers were facing a conundrum. Traditionally, they would simply bump up the clock speed to get more performance out of their processors. After doing this so effectively for two decades, the gimmick was running out of gas. More clock speed meant more heat, and processors went from requiring no cooling, to passive heat sinks, to heat sinks with large fans the size of a Rubik's Cube.

"As AMD and Intel have learned, if you stick with one big core you get one big heating problem," Tony Massimini, chief of technology for Semico Research, told internetnews.com.

The solution became multi-core processing. Multi-core is no different from the old days of multiprocessor computers, only instead of two physical chips, there are two cores acting like two CPUs. Windows sees two CPUs, just as it would if there were two physical, single-core chips. Two dual-core chips means the system sees four CPUs, and so on.

"Multi-core gives you the ability to divvy up the work for a complex app and be able to then run these cores at lower frequencies," said Massimini. "You are looking at efficient use of power to handle the power dissipation."

These processors ran in the 2.0GHz to 2.6GHz range, slower than the 3.8GHz of the Pentium IV, but the two cores let them execute faster under a workload. They also ran cooler, at around 80 to 90 degrees Fahrenheit on average, as opposed to the 100 to 110 degrees or more of a Pentium IV.

Two cores eventually turned to four for Intel with its Kentsfield line, released as the Core 2 Extreme line in November 2006. Clovertown, the first quad-core Xeon processor for servers, was also released in November 2006.

It's built, but will they come?

So, if Intel and AMD built all of this technology, will the apps come? Not for a while, it seems. As to what applications parallelization will lend itself to, that's more open to debate.

"There is a class of applications that parallelize very well," Jerry Bautista, director of technology management in the microprocessor research lab at Intel, told internetnews.com. "They span from cinematic content creation like Pixar and DreamWorks through home video, gaming and even in financial analytics. These are all a broad class of apps that we typify as taking advantage of model-based computing."

Margaret Lewis, director of commercial solutions for AMD, sees things differently. "The killer app for multi-core is virtualization," she said. "For the desktop it's going to be a little harder for it to take off. In the desktop world, you are one user to a machine. The server is beautiful for multi-core because the server is multiple users, and multiple users means multiple threads."

Virtualization can only happen in the 64-bit world, because processors are now free from the restriction of the 4GB of addressable memory space in a 32-bit processor. A 64-bit chip can, in theory, access 16 exabytes of memory, although hardware vendors are for now sticking to terabytes as the theoretical limit for memory.

"This could only happen with the advent of 64-bits," said Lewis. "The beauty of what's happening is all of these [technologies] are starting to come together to enable virtualization."

Gartner and IDC trends for the last two months reflect this, with fewer machines sold than in the past. But the machines are considerably more "decked out" with much more memory.
One dual- or quad-processor system with 16GB or 32GB of memory is ideal for running dozens of servers in a virtualized environment.

To parallelize an application

The next trick, then, becomes parallelizing an application so it can march in two or more rows rather than single file. Some processes can never be parallelized, but rather have to be done sequentially, such as applications where one step is determined by the results of the previous step. In other cases, you just can't make the whole application parallel, said Lewis. "You may go to one part of the code that can be highly parallelized, then you go to another part of the code and it's highly serialized and can't be made parallel."

Intel's Bautista agreed. "The major challenges are on the software side. Do we have an appropriate programming environment, benchmarks, and optimizers? That's an issue. A lot of the research is around those areas today."

The problem facing hardware and software vendors alike is that parallel programming is a rare skill and extremely hard to master. Parallel processing has been around for decades, but programming effectively for multiple processors is hardly a commodity skill like Java or C++ programming. Massimini said there has to be a breakthrough in that area, just as there has been in every other area of computing. "Someone's going to have to crack [parallel programming]. Otherwise you will never have a better game or computer or app because we will hit a wall. Saying we can't do it is not the answer people want to hear."

The solution has been to put parallel code, libraries and intelligence into compilers to detect segments of code that parallelize well. "We've got high-level languages today, where I don't think anyone who programs in C thinks about [assembly language]," said Massimini. "You're going to have to develop that underlying layer in the software where someone can program in a higher level of code that translates it back into the op code to provide that parallelism."

Intel has announced a new set of tools to do just that, as has Sun. Intel's new C compiler looks for code that could operate in parallel and is "parallelized," as Bautista put it. "Would it be as good as a programmer skilled at parallel processing? No, but it can come close."

Lewis agreed this is the best short-term solution. "Long-term, we all need to look at what are some different methods for parallelization," she said. "But for now, the things we need to do are shielding the developer from having to understand some of the intricacies of parallelization."

The trick then, is for programmers to catch up to the hardware. Intel is quad-core now. AMD will be when Phenom and Barcelona ship later this year. Then Intel goes to eight cores, and the race continues.

Massimini is the most optimistic that the industry will make full use of every core. "I like to say software is like a gas. It will expand or contract to fill its available volume. If the hardware community gives them more power, it will suck it up like a leech and take more power."
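To ground the parallelization discussion above in something concrete, here is a minimal, hypothetical Python sketch that spreads an embarrassingly parallel workload across cores with the standard multiprocessing module. The work function is invented for illustration; real speedups depend on the task being divisible and on per-process overhead, exactly the serial-versus-parallel split Lewis describes.

import multiprocessing as mp
import time

def work(n):
    # Stand-in CPU-bound task: sum of squares up to n.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [200_000] * 8

    start = time.perf_counter()
    serial = [work(n) for n in jobs]
    t_serial = time.perf_counter() - start

    start = time.perf_counter()
    with mp.Pool() as pool:  # one worker process per core by default
        parallel = pool.map(work, jobs)
    t_parallel = time.perf_counter() - start

    assert serial == parallel
    print(f"serial: {t_serial:.2f}s  parallel: {t_parallel:.2f}s")

Note that only the mapped loop speeds up; any remaining sequential section still bounds the overall gain, which is why the articles above stress that some code is "highly serialized and can't be made parallel."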
<urn:uuid:05a4e78a-52c6-4cc3-bd61-8e0a974d45d1>
CC-MAIN-2022-40
https://www.datamation.com/applications/multi-core-what-is-it-good-for/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00735.warc.gz
en
0.953983
1,542
3.1875
3
At the time of its invention, the supply chain was an ingenious idea, made to streamline and facilitate the journey of raw goods as they made their way from their source to factories and ateliers, were turned into products and consumables, and were finally shipped to retailers and consumers. But that was more than a century ago, before our economies became as globalized and fragmented as they are now. Today, things have become much more complicated, and the old model no longer fulfills the needs of the supply chain as it spans dozens or hundreds of stages and travels across half the globe during its lifetime.

Today's supply chain can at best be described as an opaque black box that makes it extremely difficult to validate the costs and quality of goods, and offers little in the way of tracking their geographical movement. Fortunately, blockchain, the technology that underlies the famous bitcoin cryptocurrency, can help solve many of the problems that are riddling the supply chain, and also create new opportunities. Here's what you need to know.

What is blockchain?

I've discussed blockchain thoroughly in this post. But in a nutshell, blockchain is a decentralized ledger that uses cryptography to store transactions immutably. The database is replicated across participating nodes, and there's no central server or trusted authority to control and verify transactions. Here's what it's got in stock for the supply chain.

The lack of transparency is endemic in today's supply chain model and technologies. Consumers and other stakeholders have no idea about the materials, labor and costs that go into producing the products they buy from stores. And aside from the monetary expenses, we have no way to track other factors, such as the slavery, child labor and violence that go into harvesting the materials used in the products we buy. Many companies engage in questionable practices to keep their costs low and profits high, such as hiring labor from regions where wages are low and work standards are substandard. Tracking these things takes a lot of time and effort. Furthermore, other issues, such as whether quality-grade materials have been used in the products, are very tough to track. And how about getting to know how much profit manufacturers, suppliers and retailers are making?

This is something that can be remedied with blockchain technology, which can create the necessary infrastructure for a transparent, ethically minded, community-driven supply chain management system. In a blockchain-based system, every raw material and commodity would be registered on the ledger. Transfers of goods between different parties would be immutably recorded on the ledger as well, so you can precisely track what materials were purchased and used for the production of a particular product, who the vendors were, and what their reputation is. Furthermore, information is stored on the ledger as products move through the supply chain and change ownership between different parties until they finally reach retailers and go into the hands of consumers. This is extremely transparent, as the exact cost, timing, geography and parties involved in every leg of the journey can be tracked.

One of the positive consequences of having everything stored on the blockchain ledger is being able to hold everyone accountable. So the next time a food poisoning crisis erupts, you can trace every node back to the point of contamination and determine the cause.
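As a rough illustration of how an append-only, hash-linked record makes tampering evident, here is a hypothetical Python sketch of a provenance log. It is not a real blockchain (there is no consensus and no decentralization), and all the field names and item identifiers are invented.

import hashlib
import json
import time

def entry_hash(body):
    # Hash the canonical JSON form of an entry body.
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

ledger = []

def record_transfer(item_id, seller, buyer):
    # Each entry embeds the hash of the previous one, forming a chain.
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    body = {"item": item_id, "from": seller, "to": buyer,
            "time": time.time(), "prev": prev}
    ledger.append({**body, "hash": entry_hash(body)})

def verify_chain():
    prev = "0" * 64
    for entry in ledger:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev"] != prev or entry["hash"] != entry_hash(body):
            return False
        prev = entry["hash"]
    return True

record_transfer("coffee-lot-42", "farm-coop", "roaster")
record_transfer("coffee-lot-42", "roaster", "retailer")
print(verify_chain())  # True; altering any earlier entry breaks every later hash

In a real blockchain, the replication of this chain across many independent nodes is what turns "tamper-evident" into "practically tamper-proof."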
Also, the extended visibility will better enable us to fight counterfeit products and forgery, because you can trace the history of each product down to the provider of the nuts and bolts that were used, and eventually to the initial raw material used to produce every component. Blockchain can also reveal and prevent illegal and unethical practices, such as purchasing from unauthorized sources, as is the case in some industries, such as the trade in blood diamonds.

One of the greatest features of blockchain is its secure nature and highly tamper-resistant structure. The fact that there's no central authority makes it extremely difficult to alter records that have been previously stored in the ledger, and there's no single point of failure and no single point of compromise. This is especially useful in cases where sensitive data is circulated through the supply chain. An example would be the healthcare and pharmaceuticals industry, where the need to store personal information across the chain is vital.

An added value of the blockchain would be the possibility of creating a supply circle, a system that would allow consumers to become directly engaged in the supply chain and that establishes a platform for cooperation and collaboration. For instance, in the case of food, this paradigm can help deal with the issue of vast swathes of urban areas having become deprived of fresh and affordable produce. In a Medium article that elaborates on the concept, the folks at ConsenSys describe how blockchain and smart contracts help consumers become "prosumers," or consumers that produce as well.

Automated purchases and new markets

In an article posted on CoinDesk, Reid Williams describes some interesting uses for bitcoin and blockchain in the supply chain. One of them involves using smart contracts to make automated purchases when certain conditions are met. For instance, a supplier publishes wares on the blockchain at a certain price. A consumer sets up a smart contract that will automatically purchase a predefined amount of the product in question when the price drops below a certain threshold (a sketch of this trigger logic appears at the end of this piece). These smart contracts will enable suppliers to make calculated decisions on their pricing policies, and help consumers automate their purchases when opportunities arise.

The piece also presents the concept of shared marketplaces, where consumers publish and trade goods on the ledger, just like the good old days before money was invented.

There's much more to it

The blockchain's disruptive working model creates a lot of unprecedented possibilities in supply chain management. There's only so much I can cover here. Stay tuned for more on this in future articles I will write.
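As promised above, here is the automated-purchase trigger sketched in ordinary Python, just to show the logic a smart contract would encode. Real smart contracts run on-chain (for example, in Solidity on Ethereum); everything below (the names, prices and offer feed) is invented for illustration.

from dataclasses import dataclass

@dataclass
class Offer:
    supplier: str
    product: str
    unit_price: float

@dataclass
class PurchaseContract:
    product: str
    max_price: float
    quantity: int
    filled: bool = False

    def on_offer(self, offer):
        # Trigger: buy automatically when a matching offer drops below our limit.
        if (not self.filled and offer.product == self.product
                and offer.unit_price <= self.max_price):
            self.filled = True
            return {"supplier": offer.supplier, "qty": self.quantity,
                    "total": self.quantity * offer.unit_price}
        return None

contract = PurchaseContract(product="steel-coil", max_price=90.0, quantity=10)
for offer in [Offer("a", "steel-coil", 95.0), Offer("b", "steel-coil", 88.5)]:
    order = contract.on_offer(offer)
    if order:
        print("order placed:", order)  # fires only for the 88.5 offer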
<urn:uuid:472b1f2e-e147-4fb8-ad92-7a3ab7df9216>
CC-MAIN-2022-40
https://bdtechtalks.com/2016/10/10/the-blockchains-potential-for-revolutionizing-the-supply-chain/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00735.warc.gz
en
0.960003
1,193
2.9375
3
The Details tab in the OpenESQL Assistant shows detailed information on the columns in a table. The columns on the Details tab describe the columns in the selected table as follows:

- The column name of each column in the table. Notice that each column name is prefixed by the table alias. For example, a column named CustID in a table aliased A shows in the Column Name column as A.CustID. A tick in the box to the left of Column Name indicates that the column is currently selected.
- The data type of the column. These are the data types used by the data source to which you are connected. The data type of a column must match the COBOL picture clause of the host variable that is used to pass values for that column to and from the data source. OpenESQL Assistant can generate a copybook, tablename.cpy, in the current directory that declares all of the necessary host variables, matching them with COBOL picture clauses generated using column data types. See the topic Auxiliary Code for more details.
- The total number of digits in numerical columns, or the length of the column for text columns.
- The number of digits to which the column is rounded, where relevant.
- The value of the generated host variable for the column. Generated host variable names take the form:
- The value of the generated indicator variable for the column. Generated indicator variable names take the form:
<urn:uuid:50c8035a-8973-424b-b8f1-7f0ebc8338b0>
CC-MAIN-2022-40
https://www.microfocus.com/documentation/visual-cobol/vc50pu8/VS2019/HCOMDBSASSS014.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00735.warc.gz
en
0.789906
319
2.578125
3
Certificate Authority Authorisation (CAA) records let you specify which Certificate Authorities (CAs) are allowed to issue SSL certificates for your domain. The record can help make the SSL certificate for your domain more trustworthy.

SSL certificates, like much of the internet, depend on trust. When you buy something online, you trust that the website is genuine and not a fake website that is trying to steal your credit card details. Similarly, when you buy an SSL certificate for your domain, you trust that the CA is trustworthy.

In reality, the internet is a messy place. There are hundreds of certificate authorities, and not all of them are trustworthy. For instance, would you trust a CA run by an authoritarian government? Probably not.

CAs can also be sloppy. The most famous example is Symantec. In 2017, browser vendors decided SSL certificates issued by the authority – at the time one of the largest in the business – could no longer be trusted because of a long list of issues.

And certificate authorities can also be compromised. An example is DigiNotar, a Dutch root CA that suffered a security breach in 2011. The CA had issued malicious SSL certificates for, among many others, the domain google.com. The attacker used the SSL certificate to target Iranian Gmail users. Users wouldn't see anything wrong with the fake Gmail website, and the attack only came to light because the Google Chrome browser has an extra check for domains owned by Google. There is an excellent post mortem of the compromise on slate.com. And if you want to learn more about root certificates and the chain of trust, we got an article about root and intermediate certificates that explains how it all works.

For most website owners these types of attacks are unlikely, but they do happen. CAA records can help prevent an attacker from issuing an SSL certificate for your domain, as the record only allows specific providers to issue a certificate. If your CAA record specifies that only Let's Encrypt can issue SSL certificates for your domain, then it doesn't matter if ExampleCA has been compromised. They won't be able to issue an ExampleCA SSL certificate for your domain, as they are not Let's Encrypt.

CAA records have a flag, a tag and a value. In the below example the flag is "0", the tag is "issue" and the value is "letsencrypt.org":

example.com. CAA 0 issue "letsencrypt.org"

The last part is the easiest to understand. Each CA has its own CAA value. If you have a Let's Encrypt SSL certificate then you can add "letsencrypt.org". For other CAs you may need to do an online search (it is typically the provider's domain name).

The flag can be an integer between 0 and 255, though currently only 0 and 128 are used: 0 marks the record as non-critical, while 128 marks it as critical, meaning a CA that does not understand the record's tag must not issue a certificate.

The three most common tags are issue, issuewild and iodef: issue names a CA that may issue certificates for the domain, issuewild names a CA that may issue wildcard certificates, and iodef specifies a contact (such as a mailto: address) to which CAA violations can be reported.

You can have multiple CAA records. For instance, it is fine to create a CAA record for more than one CA. Similarly, if you want to be notified of CAA check failures then you can add a CAA record with the iodef tag. So, it is perfectly fine to have records like these:

example.com. CAA 1 issue "digicert.com"
dev.example.com. CAA 1 issue "letsencrypt.org"
example.com. CAA 0 iodef "mailto:firstname.lastname@example.org"

You can easily add one or more CAA records via cPanel's Zone Editor – the record type is one of the options in the Add Record drop-down menu.

Image: adding a CAA record via cPanel's Zone Editor.

The record itself includes the flag, tag and value fields described above. Here, I am adding a CAA record for letsencrypt.org.

Image: the flag, tag and value fields.
And, as mentioned, you can add multiple CAA records. For instance, here I got a second record using the iodef tag. Any CAA check failures will be reported to the specified email address. Image: my two CAA records. The second record uses the iodef tag. Finally, it is always a good idea to double-check the records. Here, I check the records using dig: $ dig @ns5.catalyst2.net example.com CAA +short 0 issue "letsencrypt.org" 0 iodef "mailto:email@example.com"
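If you prefer scripting the check, the same lookup can be done in Python with the third-party dnspython library. This is a sketch assuming dnspython 2.x is installed; the domain is a placeholder, and a real script would want broader error handling (timeouts, servfail and so on).

import dns.resolver

def caa_records(domain):
    # Returns (flags, tag, value) tuples, or an empty list if none exist.
    try:
        answer = dns.resolver.resolve(domain, "CAA")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []
    return [(r.flags, r.tag.decode(), r.value.decode()) for r in answer]

for flags, tag, value in caa_records("example.com"):
    print(f'{flags} {tag} "{value}"')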
<urn:uuid:f8adb95b-2f92-4f77-8b0c-b2c5e4bfa9d2>
CC-MAIN-2022-40
https://www.catalyst2.com/knowledgebase/dns/caa-records/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00135.warc.gz
en
0.914717
960
2.890625
3
A conductive ink (CI) is a thermoplastic viscous paste that conducts electricity by incorporating conductive materials such as silver and copper. The ink comprises a binder, a conductor, a solvent, and surfactants used during its manufacturing process.

The Asia-Pacific conductive inks market was valued at $2,349.2 million in 2012 and is projected to reach $2,654.6 million by 2018, growing at a CAGR of 3.0% from 2013. The Asia-Pacific conductive inks market has grown considerably during the past few years and is expected to grow at a more rapid pace in the next five years. Silver flakes are the major type of conductive ink and are in huge demand in Asia-Pacific.

The binder helps to hold together all conductive materials in the ink and provides strong support for the product. It is particularly used in applications that require high reliability and flexibility. The conductor is another important part of the ink, which allows the passage of electricity. The different types of conductors used in conductive inks are silver, copper, nickel, aluminum, and so on. Similarly, the solvent is used for the formation of the solution, whereas the surfactants help in uniform mixing of the ink. Conductive inks have various applications, such as photovoltaics, membrane switches, automotive, RFID/smart packaging, biosensors, printed circuit boards, and others.

The continuous rise in production of end products for use within the region and for exports drives a huge demand for these chemicals. The growing demand and policies, including emission control and environmentally friendly products, have led to innovation and developments in the industry, making the region a strong chemical hub globally. The rapid growth and innovation, along with industry consolidations, are expected to ensure a bright future for the industry in the region.

China is the major consumer of conductive inks in Asia-Pacific, accounting for 67.7% of the total consumption, followed by Japan and India. The key countries covered in the Asia-Pacific conductive inks market are China, Japan, India, and others. The types of conductive inks studied include conductive silver ink, conductive copper ink, conductive polymers, carbon nanotube ink, dielectric inks, carbon/graphene ink, and others.

Further, as part of the qualitative analysis, the Asia-Pacific conductive inks market research report provides a comprehensive review of the important drivers, restraints, opportunities, and burning issues in the conductive inks market. The report also provides an extensive competitive landscape of the companies operating in this market, including the company profiles of, and competitive strategies adopted by, various market players, including Applied Nanotech Holdings Inc. (U.S.), Conductive Compounds Inc. (U.S.), Creative Materials Inc. (U.S.), and E.I. Du Pont De Nemours and Company (U.S.). With market data, you can also customize MMM assessments to meet your company's specific needs.
Customize to get comprehensive industry standards and deep-dive analysis of the following parameters:
- Market size and forecast (deep analysis and scope)
- Competitive landscape with a detailed comparison of each company's portfolio, mapped at the regional and country level
- Analysis of forward and backward chain integration to understand the prevailing business approach in the Asia-Pacific conductive inks market
- Detailed analysis of competitive strategies, such as new product launches, expansions, and mergers and acquisitions, adopted by various companies, and their impact on the Asia-Pacific conductive inks market
- Detailed analysis of various drivers and restraints and their impact on the Asia-Pacific conductive inks market
- Upcoming opportunities in the conductive inks market
- Trade data for the CI market
- SWOT analyses for top companies in the conductive inks market
- Porter's five forces analysis for the conductive inks market
- PESTLE analysis for major countries in the conductive inks market
- New technology trends in the CI market

Please visit http://www.micromarketmonitor.com/custom-research-services.html to specify your custom research requirements.
<urn:uuid:7d2bd93e-7143-488d-8848-f420c9cd9f81>
CC-MAIN-2022-40
http://www.micromarketmonitor.com/market/asia-pacific-conductive-inks-6719939256.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00135.warc.gz
en
0.936171
914
2.703125
3
As the telecommunications industry expands its fiber networks to meet the market's demand for increased bandwidth, higher speeds and lower latency, micro-trenching is seeing increased use in infrastructure construction.

Two of the most common barriers to fiber deployment within a city revolve around construction cost and community disruption. In the past, city councils had few options for deploying fiber to the home beyond shutting down streets to dig up, on average, 30-by-30-inch trenches, causing, as you could imagine, much disruption and a larger budget. Unlike past methods, micro-trenching causes minimal disruption at minimal cost, helping providers keep up with fiber demand more efficiently. By using a specialized machine to make a small incision - typically no wider than 1.5 inches, and no deeper than 12 inches - micro-trenching leads to a dramatic improvement in the timeline, project scope, and budget of laying fiber.

What are the benefits?

With the decrease in trench size, crews no longer need to be as large as in the past. The roughly five crew members needed for a micro-trenching project is a fraction of the team size required to complete older, traditional fiber deployment methods. In part because of the lighter, more specialized, and easier-to-handle construction equipment, and partially due to the significantly smaller project scale as a whole, companies are saving money on manpower so they can continue to focus their profits on expanding their fiber networks even further and faster.

Before micro-trenching, there were frequent community complaints of severed gas and water lines and other utility interference during fiber deployment. The smaller trench depth allows companies to avoid disrupting existing infrastructure, which is typically buried three feet underground. Being closer to the surface, there is no concern that micro-trenching will interfere with existing utilities, and the ducts are still buried deep enough to avoid issues with road resurfacing crews in the future, as those crews typically only remove the top two inches of material.

With the ability to open more feet of ditch and install more cable per day, companies are maximizing their ROI by minimizing the cost per home or facility passed during construction. Mike Leddy, manager of network deployment and operations with Google Fiber, says that "micro-trenching is vastly more efficient in areas with urban sidewalks" because "they can micro-trench as many as 50 customers in the time it used to take to bore to one customer" (1).

Paired with the decrease in project timeframe is the decrease in community disruption throughout construction. Because of the smaller crews and equipment required, and because the need to dig out large trenches has been eliminated, there are few if any traffic lane closures and virtually zero complete road closures. The same method of construction is applied to sidewalks leading from the roads to houses or businesses, so there are minimal accessibility issues that would strain families or business profits during the project. Because of the ability to connect 50 points in one day, where it used to take a month, communities hardly even have time to notice the construction before it is completed.
The image to the right, taken from Broadband Properties' BBC Magazine, was snapped the day after installation was completed (2). An average observer would hardly notice a micro-trenching project had taken place just the day before.

Companies are realizing that, to deliver the advancements promised by 5G, more and more fiber must be deployed. Because the micro-trenching technique lets these companies complete their projects in a fraction of the time and at a fraction of the cost of older methods, they are able not only to keep up with today's demand for fiber more easily, but also to prepare for the continued, anticipated growth in fiber demand in the future.
<urn:uuid:688e0343-4726-437b-82fa-227789a5e4a6>
CC-MAIN-2022-40
https://blog.3-gis.com/blog/micro-trenching-bringing-fiber-to-the-city
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00135.warc.gz
en
0.961411
843
2.859375
3
If you have been following the rise of software-defined networking (SDN) recently as it makes its way into the thinking of enterprises the world over, it is likely you will have come across OpenFlow. But what exactly is OpenFlow? And what role does it have to play in SDN?

It is important to point out that OpenFlow is not SDN, and vice versa. It is one of the building blocks of a software-defined network – and a very important one at that. OpenFlow is a protocol that separates the control of a switch from the switch itself. The high-level control of a switch, which includes packet routing and so on, is moved to a centralised server. The OpenFlow protocol allows switches and controllers that support OpenFlow to communicate with each other. Enabling the network to be programmed independently of its gear makes the whole system much more agile and flexible, not to mention cost-effective.

Gartner analysts Joe Skorupa and Mark Fabbi listed OpenFlow as one of the "trigger technologies" in their Hype Cycle for Networking and Communications 2012 report. "In an OpenFlow network, the topology is centrally managed," their description of the protocol reads. "The data plane still resides on the switch/router, while high-level routing decisions are moved to a separate controller. OpenFlow defines a set of messages that are exchanged by the controller and switch. OpenFlow's distinction is in a new implementation that provides discrete software options for network definition and management. OpenFlow promises to ease provisioning of large, complex datacentre networks."

The origins of OpenFlow

Skorupa and Fabbi go on to say OpenFlow is such a new technology that widespread adoption is still at least two years away. However, the origins of OpenFlow can be traced back to 2006, when Martin Casado, a PhD student at Stanford University in Silicon Valley, California, developed something called Ethane. Intended as a way of centrally managing global policy, it used a "flow-based network and controller with a focus on network security", according to OpenFlowNetworks.com, a site dedicated to tracking the emerging technology, along with SDN. That idea eventually led to what become known as OpenFlow, thanks to more research conducted jointly by teams at Stanford and the University of California, Berkeley.

Although still a nascent technology, it was gaining a fair amount of traction in Silicon Valley. Nicira and Big Switch Networks, early backers of OpenFlow, raised significant amounts of venture capitalist funding to help push their products. In 2011, the Open Networking Foundation was established, with the aim of standardising emerging technologies propelling software to the forefront of networking and datacentre management. While the first version of the OpenFlow protocol (listed as version 1.1) was released in February 2011, the second (version 1.2) was overseen by the Open Networking Foundation, which retains control over the specification. Founding members include the likes of Google, Facebook and Microsoft, while the likes of Citrix, Cisco, Dell, HP, F5 Networks, IBM, NEC, Huawei, Juniper Networks, Oracle and VMware have since joined the Foundation.
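To make the controller/switch split described above concrete, here is a toy Python model of the idea: a controller installs match-action rules into a switch's flow table, and the switch merely forwards packets. This is not the real OpenFlow wire protocol, and all names, fields and actions are invented for illustration.

class FlowTable:
    """Toy model of an OpenFlow-style match-action table (not the real protocol)."""

    def __init__(self):
        self.rules = []  # (match_dict, action), searched in install order

    def install(self, match, action):
        # In real OpenFlow the controller would send a flow-mod message here.
        self.rules.append((match, action))

    def forward(self, packet):
        for match, action in self.rules:
            if all(packet.get(field) == value for field, value in match.items()):
                return action
        return "punt_to_controller"  # table miss

controller_rules = [
    ({"dst_ip": "10.0.0.2"}, "output:2"),
    ({"eth_type": 0x0806}, "flood"),  # flood ARP traffic
]

switch = FlowTable()
for match, action in controller_rules:
    switch.install(match, action)

print(switch.forward({"dst_ip": "10.0.0.2", "eth_type": 0x0800}))  # output:2
print(switch.forward({"dst_ip": "10.0.0.9", "eth_type": 0x0800}))  # punt_to_controller

The "punt to controller" branch is the key point: on a table miss, a real switch forwards the packet to the central controller, which decides what to do and typically installs a new rule so subsequent packets are handled in hardware.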
The promotion of SDN and OpenFlow will be one of the main benefits of the Foundation, according to Stu Bailey, founder and CTO of InfoBlox.

"It is a standards body driven by consumers – not suppliers – that have proven by heavy investment over the past 10 years that SDN approaches create better economies and scales within IT," he says. "Now they are sharing much of that knowledge with the rest of the market through backing emerging standards."

Networking hardware and software becoming a commodity

But membership of the Open Networking Foundation does not necessarily translate to full support for the OpenFlow protocol. There is one simple reason for this: commoditisation. Currently, a lot of the control and management of switches and routers is done through proprietary software installed on each bit of network kit. Moving that management to a central server, as OpenFlow does, means those boxes will become commodities. That is bad news for the likes of Cisco and Juniper Networks, which have built vast businesses based on selling their proprietary hardware and software paired together. As Gartner's report on OpenFlow states: "If customers adopt OpenFlow, then lock-in to a supplier's routing protocols is eliminated and customers may force switch suppliers to increasingly compete on price."

Balancing open networking with proprietary business

The report goes on to suggest those companies not fully supporting OpenFlow "may attempt to delay market acceptance by offering proprietary variants or to position OpenFlow as a set of application programming interfaces (APIs) to proprietary management systems to avoid the possibility of a loss of account control and product margin erosion".

That is one approach Cisco is taking. As the technology giant explained to Computer Weekly for our software-defined networking buyer's guide, it is helping customers to embrace open networking through its ONE strategy. This includes a set of APIs, as well as a controller framework that supports OpenFlow.

Juniper Networks may be threatened by the emerging OpenFlow protocol, but it is still aware of the importance of the general shift towards more open networking. Speaking to Computer Weekly, Nigel Stephenson, head of cloud services marketing at the firm, spells out his company's position, but also the importance of open standards going forward.

"In 2012, we released the JUNOS V App Engine, an appliance that allows you to run third-party applications; the other angle is to take our services and run them on other platforms," he says. "All of this has to be in a standards-based environment. It is possible to use proprietary protocols, but that affects costs and causes lock-in, which customers do not want.

"OpenFlow is one of those standards; it has a purpose within this architecture, but it's not the only one, and we expect more to be developed. Where there are standard protocols, we will use them. Where there are not, we need to work as an industry to make sure we standardise whatever we go forward with."

Embracing OpenFlow and SDN

While the approach taken by the likes of Cisco and Juniper is partly aimed at protecting their proprietary business and partly at progressing along with the rest of the networking industry, other suppliers have embraced OpenFlow and SDN wholeheartedly.
HP, which is currently undergoing a huge shift to become an end-to-end enterprise technology provider, is one company pushing its OpenFlow credentials. It has been collaborating on the OpenFlow protocol for over five years now, and in February 2012 made its big play, announcing the introduction of a portfolio of OpenFlow-enabled switches, covering 16 models in total and supporting more than 10 million installed ports. HP also plans to extend OpenFlow support to its entire range of FlexNetwork products by the end of 2012. Transforming the future of networking There is no doubt that software-defined networking will ultimately transform the networking industry, as the benefits it can bring are simply too huge to ignore. "The combination of OpenFlow and SDN has the potential to transform the networking market from a bundled hardware and software market to one of separate hardware and software components. It can dramatically simplify network operations and lower complexity and costs,” says Gartner’s Skorupa. However, while OpenFlow will undoubtedly play an important role in software-defined networking, there is a likelihood other similar protocols will emerge to operate alongside it. For now, though, OpenFlow is the clear leader and will remain so for the next few years as this technology matures and propels the networking industry forward.
<urn:uuid:ebc85a40-b365-41d8-85f1-9e80341e4d16>
CC-MAIN-2022-40
https://www.computerweekly.com/feature/The-history-of-OpenFlow
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00135.warc.gz
en
0.949389
1,756
2.96875
3
Security-Forward Organizations Ought to Develop Understanding of How to Apply Quantum Computing to Cybersecurity (Entrepreneur)

Hackers are evolving in parallel with technological advancements. Engineers, mathematicians and physicists are simultaneously working on innovative concepts that push classical encryption methods forward. New devices are utilizing principles of quantum physics and deploying sophisticated, powerful algorithms for safe communication.

Utilization of public keys for encryption and private keys for decryption (each of which is created by algorithm-fueled random number generators) is called asymmetric cryptography. Genuine randomness is considered unachievable by purely classical means, but can be accomplished with the added application of quantum physics.

There are two methods by which large-scale quantum and classical computers can compromise private information. Method #1: recover the key generated during the key agreement phase. Method #2: break the encryption algorithm itself.

Quantum key distribution (QKD) is a quantum cryptographic primitive designed to generate unbreakable keys. QKD ensures secure key agreement; well-known QKD protocols include the BB84 and E91 algorithms. In 2017, a Chinese team successfully demonstrated that satellites can perform safe and secure communications with the help of symmetric cryptography and QKD.

QKD alone can't satisfy all protection requirements, but there are other mechanisms for security enhancement that use "quantum-safe" encryption algorithms based on hard mathematical problems instead of the laws of quantum physics. The United States National Institute of Standards and Technology is presently evaluating 69 such methods, known as "post-quantum cryptography," or PQC.

Quantum computing offers a promising potential answer to cybersecurity and encryption threats. Any security-forward organization ought to develop an understanding of crypto agility.
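As an illustration of the QKD idea mentioned above, here is a minimal Python simulation of the sifting step in BB84. It models only the ideal case, with no eavesdropper and no channel noise, and is purely illustrative: real BB84 runs over an actual quantum channel and adds error estimation and privacy amplification.

import secrets

n = 32
# Sender picks random bits and random bases (0 = rectilinear, 1 = diagonal).
alice_bits  = [secrets.randbelow(2) for _ in range(n)]
alice_bases = [secrets.randbelow(2) for _ in range(n)]
# Receiver measures each qubit in a randomly chosen basis.
bob_bases   = [secrets.randbelow(2) for _ in range(n)]

# With matching bases Bob reads Alice's bit; otherwise his result is random.
bob_bits = [bit if a == b else secrets.randbelow(2)
            for bit, a, b in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: both publicly compare bases and keep only the matching positions.
sifted_alice = [bit for bit, a, b in zip(alice_bits, alice_bases, bob_bases) if a == b]
sifted_bob   = [bit for bit, a, b in zip(bob_bits, alice_bases, bob_bases) if a == b]
assert sifted_alice == sifted_bob  # matching bases guarantee agreement (no noise)

print(f"kept {len(sifted_alice)} of {n} bits:", "".join(map(str, sifted_alice)))

On average half the bits survive sifting; an eavesdropper measuring in the wrong basis would introduce detectable errors into the kept bits, which is what makes the keys "unbreakable" in principle.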
<urn:uuid:25b84aa6-f0dd-4051-a17d-ae26459af48d>
CC-MAIN-2022-40
https://www.insidequantumtechnology.com/news-archive/security-forward-organizations-ought-to-develop-understanding-of-how-to-apply-quantum-computing-to-cybersecurity/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00135.warc.gz
en
0.922691
356
3.109375
3
One of the biggest challenges that individuals deal with when it comes to mobility is keeping devices charged. Anyone with multiple computers, smartphones and tablets within a household knows what a tangled mess is created at a charging station when charging cords are all plugged into the same outlet.

According to a recent article from Forbes, Apple has been granted two patents related to wireless charging. These were filed two years ago by researchers and outline a system that allows a mouse, keyboard, iPhone and other mobile devices to be charged wirelessly through a MacBook Air when it is plugged into a power supply.

"The idea is that the computer would create 'a charging region' that would transfer wireless power 'to any number of suitably configured devices.' The technical term for this is 'near field magnetic resonance' or NFMR. It would include an area about one meter wide," the article reports.

Wireless charging is far from being a new idea. Legendary inventor Nikola Tesla laid out the concept over a century ago, and a number of companies have applied for patents covering some version of wireless charging. However, so far it has yet to catch on, which makes Apple's acquisition of these patents even more significant: Apple has jump-started a number of technologies that went on to become mainstream.

While it is unlikely that this feature will be available in the immediate future, it is something that could lead more people to start adopting Mac systems. An Apple support service can help any business get itself in the best position to adopt new hardware.
<urn:uuid:80b37bf8-0975-453b-a435-517201e0f7b1>
CC-MAIN-2022-40
https://www.mcservices.com/post/apple-granted-patents-for-wireless-charging
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00135.warc.gz
en
0.970185
317
2.640625
3
Welcome to the heart of networking: the routing protocols. We begin our series of looks at these protocols with one of the originals, and the most simple: RIP, the Routing Information Protocol.

RIP, like all routing protocols, is designed to disseminate network information pertinent to routers. At the most basic level, routers need to know what networks are reachable and how far away they are. RIP does this, and it's still widely used today. In fact, we received feedback regarding last week's Routing Overview from a reader who happily uses RIP at the ISP he works for.

Many people have nasty things to say about RIP. They say it converges slowly, it doesn't scale, it's insecure because authentication is only plain text, and it suffers from split-horizon issues. This is all true, but it's still very useful. We hope this article will shed some light on these issues, and help you understand one of the most widely used interior gateway protocols (IGPs) out there.

RIP comes in two flavors: version one and version two. RIPv1 is very limiting because it doesn't support CIDR addressing. This means it is classful only, and you can't slice up, for example, a /24 space into smaller units. RIPv1 also uses broadcast to send its information, which means that hosts can't ignore RIP advertisements. Remember that each time a broadcast is sent, every host in the broadcast domain will receive an interrupt, and it must process the packet to determine if it's something it cares about. RIPv2 uses multicast, which will be covered in a future installment of Networking 101. For now, just trust that hosts can ignore multicast without having to process the packet.

Remember, we said that RIP is a distance-vector protocol. The distance part refers to the hop count in RIP, and the vector is the destination. Other distance-vector protocols may use other criteria for the vector, like an AS path in BGP.

Both RIP flavors send their information every 30 seconds to UDP port 520. But what do they send? If you assumed "their routing information," you are correct. A router can send specific information about networks it can reach, and also advertise itself as the default gateway (by sending 0.0.0.0 with a metric of 1).

The RIPv2 packet contains headers, just like any other protocol. Note that RIP runs above UDP, so it is essentially an application-layer protocol. Every RIP packet contains a command, a version number, and a routing domain. Then up to 25 routes will follow in the same packet.

The Command
A RIP command is either a request or a response. When hosts, either a Unix box running gated or a router, first boot, they need to obtain routing information somehow. The "request" is just that: a request broadcast to ask for a routing table. The "response" is a normal RIP message, which will be sent in response to a request, or simply broadcast every 30 seconds.

The Version Number
The version number is either one or two.

The Routing Domain
A routing domain in RIP is an identifier used to specify the routing instance. More than one RIP instance can exist on the same network by specifying that a message is intended only for routers in a specific domain.

The Rest of the Packet
Then the real RIP information starts. This contains up to 25 routes, which entail the following information:
- Network Address: to identify the start of the network.
- Netmask: to say how large the network is.
- Next-hop IP Address: i.e. the router that can get you there.
- Metric: how many hops away this network is.
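Because the layout is fixed, a RIPv2 response is easy to decode by hand. The Python sketch below uses the standard struct module on a hand-built example packet: it decodes the 4-byte header and the 20-byte route entries described above, ignores authentication entries, and does no validation, so treat it as illustrative rather than production parsing code.

import socket
import struct

def parse_rip(data):
    # Header: command (1 byte), version (1 byte), routing domain (2 bytes).
    command, version, domain = struct.unpack("!BBH", data[:4])
    routes = []
    # Each route entry is 20 bytes: AFI, route tag, network, mask, next hop, metric.
    for off in range(4, len(data), 20):
        afi, tag, addr, mask, nhop, metric = struct.unpack(
            "!HHIIII", data[off:off + 20])
        routes.append({
            "network": socket.inet_ntoa(struct.pack("!I", addr)),
            "netmask": socket.inet_ntoa(struct.pack("!I", mask)),
            "next_hop": socket.inet_ntoa(struct.pack("!I", nhop)),
            "metric": metric,
        })
    return command, version, routes

# A hand-built response (command 2, version 2) carrying one route:
# 192.168.1.0/24 via 10.0.0.1 at 2 hops.
pkt = struct.pack("!BBH", 2, 2, 0) + struct.pack(
    "!HHIIII", 2, 0,
    int.from_bytes(socket.inet_aton("192.168.1.0"), "big"),
    int.from_bytes(socket.inet_aton("255.255.255.0"), "big"),
    int.from_bytes(socket.inet_aton("10.0.0.1"), "big"),
    2)
print(parse_rip(pkt))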
An important characteristic of RIP is that it will tell you about networks it heard from other people. You may hear these types of routing protocols called "routing by rumor." The way this works is that the metric field is incremented before a router broadcasts a RIP packet. If router A tells you that it can get you to router B in two hops, then you know router A can talk directly to B, because it's only one hop away. Ergo, router A has a link in the same broadcast domain as router B, but you do not.

When the metric, or hop count, reaches 16 you're in trouble. The number 16 means infinity in RIP. Infinity being equal to 16 is a mechanism used to stop the problem of "count to infinity." This happens because of the "routing by rumor" design. This can get tricky, but bear with this three-router example:

Router A knows that it can get to router C in two hops, via B. The picture in your mind should involve a straight line with B in the middle, and A and C on the ends. Now, since B has a direct connection to C, it will know when C goes down. But before B gets a chance to tell A about C being down, A chimes in with a RIP update, which will include "I can get to C in two hops!" Router B will of course believe A, which means it thinks that A can get to C. Of course, A cannot, since its path was through B. But B doesn't know this, because the only information in RIP is the next-hop address, which is A. Finally, when B sends its next update, it will include the route to C, which is now 3 hops. A believes B, because after all, B was the only way to C. This happens a few more times, and we're at 16. The route is dropped instead of this continuing forever.

How is this problem solved? Not with a distance-vector protocol. When we "tell our neighbors about the world" without providing detailed information about each network, count to infinity is possible. Link-state protocols provide an entire view of the network to all routers, and avoid this issue. "Split-horizon" is another method that will help obviate this bug, but it is flawed as well. Split-horizon means we would keep track of the interface an update came in on, and pay attention to updates from other routers that could conflict. This helps, but similar situations to the one above can still exist when more routers are involved. That example gets really complex, but if you're interested in RIP, feel free to invent a scenario in which count-to-infinity could happen, even with split-horizon capabilities.

The final "issue" with RIP is that it converges slowly. This is true, mostly because of the 30-second wait between updates, but in smaller organizations it doesn't matter. RIPv2 is implemented on nearly all hardware, even those cheap "home routers" that you buy to NAT a broadband connection. Even if you don't use RIP exclusively as an IGP, it's still useful to know about, because hosts can use it as well, as an alternative (or supplement) to manually configuring a default gateway. Finally, if you're small enough to be using all static routes, RIPv2 is sure to help make your life easier.
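The count-to-infinity behavior described above can be reproduced in a few lines. This hypothetical Python simulation tracks only each router's metric to network C and applies the naive distance-vector update rule, with 16 standing in for infinity.

# Distance (hop count) each router believes it is from network C.
INF = 16
dist = {"A": 2, "B": 1}  # A reaches C via B; B is directly connected.

# C's link to B fails; B's only remaining source of information is A.
dist["B"] = INF
step = 0
while dist["A"] < INF or dist["B"] < INF:
    # B hears A's stale advertisement and adds one hop; A then hears B's.
    dist["B"] = min(INF, dist["A"] + 1)
    dist["A"] = min(INF, dist["B"] + 1)
    step += 1
    print(f"after update {step}: A={dist['A']}, B={dist['B']}")

print("metric reached 16: both routers finally drop the route")

Running it shows the metrics marching upward two hops per exchange until they hit 16, which is exactly why RIP caps infinity at such a small number: with a larger cap, the bad route would linger far longer.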
The term "cloud services" refers to a wide range of services delivered on demand to companies and customers over the internet. These services are designed to provide easy, affordable access to applications and resources, without the need for internal infrastructure or hardware. From checking email to collaborating on documents, most employees use cloud services throughout the workday, whether they're aware of it or not.

Cloud services are fully managed by cloud computing vendors and service providers. They're made available to customers from the providers' servers, so there's no need for a company to host applications on its own on-premises servers.

When deciding how to leverage cloud services, organizations must also decide which type of environment works best for the business: public cloud, private cloud, or a mix of both.

Services that a provider makes available to numerous customers over the web are referred to as public cloud services. The SaaS, IaaS, and PaaS examples described below all provide public cloud-based services. The biggest benefit of using public cloud services is the ability to share resources at scale, allowing organizations to offer employees more capabilities than would likely be possible alone.

Services that a provider does not make generally available to corporate users or subscribers are referred to as private cloud services. With a private cloud services model, apps and data are made available through the organization's own internal infrastructure. The platform and software serve one company alone, and are not made available to external users. Companies that work with highly sensitive data, such as those in the healthcare and banking industries, often use private clouds to leverage advanced security protocols and extend resources in a virtualized environment as needed.

In a hybrid cloud environment, a private cloud solution is combined with public cloud services. This arrangement is often used when an organization needs to store sensitive data in the private cloud, but wants employees to access apps and resources in the public cloud for day-to-day communication and collaboration. Proprietary software is used to enable communication between the cloud services, often through a single IT management console.

Generally speaking, there are three basic types of cloud services:

The most widely recognized type of cloud service is known as software as a service, or SaaS. This broad category encompasses a variety of services, such as file storage and backup, web-based email, and project management tools. Examples of SaaS cloud service providers include Dropbox, G Suite, Microsoft Office 365, Slack and Citrix Content Collaboration. In each of these applications, users can access, share, store, and secure information in "the cloud."

Infrastructure as a service, or IaaS, provides the infrastructure that many cloud service providers need to manage SaaS tools, but don't want to maintain themselves. It serves as the complete datacenter framework, eliminating the need for resource-intensive, on-site installations. Examples of IaaS are Amazon Web Services (AWS), Microsoft Azure and Google Compute Engine. These providers maintain all storage servers and networking hardware, and may also offer load balancing, application firewalls, and more. Many well-known SaaS providers run on IaaS platforms.

The cloud service model known as platform as a service, or PaaS, serves as a web-based environment where developers can build cloud apps. PaaS provides a database, operating system and programming language that organizations can use to develop cloud-based software, without having to maintain the underlying elements. Many IaaS vendors, including the examples listed above, also offer PaaS capabilities.

Key advantages of using cloud services include:

Because the cloud service provider supplies all necessary infrastructure and software, there's no need for a company to invest in its own resources or allocate extra IT staff to manage the service. This, in turn, makes it easy for the business to scale the solution as user needs change, whether that means increasing the number of licenses to accommodate a growing workforce or expanding and enhancing the applications themselves.

Many cloud services are provided on a monthly or annual subscription basis, eliminating the need to pay for on-premises software licenses. This allows organizations to access software, storage, and other services without having to invest in the underlying infrastructure or handle maintenance and upgrades.

With cloud services, companies can procure services on an on-demand, as-needed basis. If and when there's no longer a need for a particular application or platform, the business can simply cancel the subscription or shut down the service.

As the availability of cloud services continues to expand, so will their applications in the corporate world. Whether a company chooses to extend existing on-premises software deployments or move 100% to the cloud, these services will continue to simplify how organizations deliver mission-critical apps and data to the workforce. From application delivery to desktop virtualization solutions, plus a vast array of options in between, cloud services are transforming how people work and the ways businesses operate.

With Citrix, it's easy to adopt cloud services based on what works best for your business. Whether you need to keep business-critical apps in a private cloud or gradually move to multiple public cloud services, Citrix DaaS makes it easy to leverage a full range of cloud service providers such as AWS, Google Cloud, and Microsoft Azure Virtual Desktop. This flexibility allows organizations to scale quickly, making it possible to securely support hundreds or thousands of users on any device and from any location.
The most common digital security technique used to protect both media copyright and Internet communications has a major weakness, University of Michigan computer scientists have discovered.

RSA authentication is a popular encryption method used in media players, laptop computers, smartphones, servers and other devices. Retailers and banks also depend on it to ensure the safety of their customers' information online.

The scientists found they could foil the security system by varying the voltage supply to the holder of the "private key," which would be the consumer's device in the case of copy protection and the retailer or bank in the case of Internet communication. It is highly unlikely that a hacker could use this approach on a large institution, the researchers say. These findings would be more likely to concern media companies and mobile device manufacturers, as well as those who use them.

Andrea Pellegrini, a doctoral student in the Department of Electrical Engineering and Computer Science, will present a paper on the research at the upcoming Design, Automation and Test in Europe (DATE) conference in Dresden on March 10.

"The RSA algorithm gives security under the assumption that as long as the private key is private, you can't break in unless you guess it. We've shown that that's not true," said Valeria Bertacco, an associate professor in the Department of Electrical Engineering and Computer Science.

These private keys contain more than 1,000 digits of binary code. To guess a number that large would take longer than the age of the universe, Pellegrini said. Using their voltage-tweaking scheme, the U-M researchers were able to extract the private key in approximately 100 hours. They carefully manipulated the voltage with an inexpensive device built for this purpose.

Varying the electric current essentially stresses out the computer and causes it to make small mistakes in its communications with other clients. These faults reveal small pieces of the private key. Once the researchers caused enough faults, they were able to reconstruct the key offline. This type of attack doesn't damage the device, so no tamper evidence is left.

"RSA authentication is so popular because it was thought to be so secure," said Todd Austin, a professor in the Department of Electrical Engineering and Computer Science. "Our work redefines the level of security it offers. It lowers the safety assurance by a significant amount."

Although this paper only discusses the problem, the professors say they've identified a solution. It's a common cryptographic technique called "salting" that changes the order of the digits in a random way every time the key is requested.

"We've demonstrated that a fault-based attack on the RSA algorithm is possible," Austin said. "Hopefully, this will cause manufacturers to make a few small changes to their implementation of the algorithm. RSA is a good algorithm and I think, ultimately, it will survive this type of attack."

The paper is called "Fault-based Attack of RSA Authentication", and you can get it here.
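One widely used family of countermeasures against fault attacks on RSA, not necessarily the exact "salting" fix the researchers propose, is to re-check the private-key computation with the public key before releasing the result, so that a glitched signature is never emitted. A toy sketch with deliberately tiny textbook numbers:

```python
# Textbook-sized RSA parameters for illustration only; real keys are 2048+ bits.
n, e, d = 3233, 17, 2753          # n = 61 * 53

def sign(message: int) -> int:
    """Sign with the private key, then verify with the public key before output."""
    sig = pow(message, d, n)      # the fault-prone private-key operation
    if pow(sig, e, n) != message % n:
        # A voltage glitch corrupted the computation; leak nothing.
        raise RuntimeError("fault detected, signature withheld")
    return sig

assert pow(sign(65), e, n) == 65  # a clean run round-trips correctly
```

The check costs one extra public-key exponentiation, which is cheap because the public exponent is small, and it denies the attacker the faulty outputs the key-extraction technique depends on.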
The simplicity of the term "proxy" belies the complex topological options available. Understanding the different deployment options will enable your proxy deployment to fit your environment and, more importantly, your applications.

It seems so simple in theory. A proxy is a well-understood concept that is not peculiar to networking. Indeed, some folks vote by proxy, they speak by proxy (translators), and even, on occasion, marry by proxy. A proxy, regardless of its purpose, sits between two entities and performs a service.

In network architectures the most common use of a proxy is to provide load balancing services to enable scale, reliability and even performance for applications. Proxies can log data exchanges, act as a gatekeeper (authentication and authorization), scan inbound and outbound traffic for malicious content and more. Proxies are a key strategic point of control in the data center because they are typically deployed as the go-between for end-users and applications. These go-between services are often referred to as virtual services, and for purposes of this blog that's what we'll call them. It's an important distinction because a single proxy can actually act in multiple modes on a per-virtual-service basis.

That's all pretty standard stuff. What's not simple is when you start considering how you want your proxy to act. Should it be a full proxy? A half proxy? Should it route or forward? There are multiple options for these components and each has its pros and cons. Understanding each proxy "mode" is an important step toward architecting a suitable solution for your environment, as the mode determines the behavior of traffic as it traverses the proxy.

Standard Virtual Service (Full Application Proxy)
The standard virtual service provided by a full proxy fully terminates the transport layer connections (typically TCP) and establishes completely separate transport layer connections to the applications. This enables the proxy to intercept, inspect and ultimately interact with the data (traffic) as it's flowing through the system. Any time you need to inspect payloads (JSON, HTML, XML, etc.) or steer requests based on HTTP headers (URI, cookies, custom variables) on an ongoing basis, you'll need a virtual service in full proxy mode. A full proxy is able to perform application layer services. That is, it can act on protocol and data transported via an application protocol, such as HTTP.

Performance Layer 4 Service (Packet-by-Packet Proxy)
Before application layer proxy capabilities came into being, the primary model for proxies (and load balancers) was layer 4 virtual services. In this mode, a proxy can make decisions and interact with packets up to layer 4, the transport layer. For web traffic this almost always equates to TCP. This is the highest layer of the network stack at which SDN architectures based on OpenFlow are able to operate. Today this is often referred to as flow-based processing, as TCP connections are generally considered flows for purposes of configuring network-based services. In this mode, a proxy processes each packet and maps it to a connection (flow) context. This type of virtual service is used for traffic that requires simple load balancing, policy network routing or high availability at the transport layer. Many proxies deployed on purpose-built hardware take advantage of FPGAs that make this type of virtual service execute at wire speed. A packet-by-packet proxy is able to make decisions based on information related to layer 4 and below.
It cannot interact with application-layer data. The connection between the client and the server is actually "stitched" together in this mode, with the proxy primarily acting as a forwarding component after the initial handshake is completed, rather than as an endpoint or originating source as is the case with a full proxy.

IP Forwarding Virtual Service (Router)
For simple packet forwarding where the destination is based not on a pooled resource but simply on a routing table, an IP forwarding virtual service turns your proxy into a packet-layer forwarder. An IP forwarding virtual service can be provisioned to rewrite the source IP address as the traffic traverses the service. This is done to force data to return through the proxy and is referred to as SNATing traffic. It uses transport layer (usually TCP) port multiplexing to accomplish stateful address translation. The address it chooses can be load balanced from a pool of addresses (a SNAT pool), or you can use an automatic SNAT capability.

Layer 2 Forwarding Virtual Service (Bridge)
For situations where a proxy should be used to bridge two different Ethernet collision domains, a layer 2 forwarding virtual service can be used. It can be provisioned to be an opaque, semi-opaque, or transparent bridge. Bridging two Ethernet domains is like an old-timey water brigade. One guy fills a bucket of water (the client) and hands it to the next guy (the proxy), who hands it to the destination (the server/service), where it's thrown on the fire. The guy in the middle (the proxy) just bridges the gap (you're thinking what I'm thinking, that's where the term came from, right?) between the two Ethernet domains (networks).
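To make the full-proxy idea concrete, here is a deliberately minimal TCP relay in Python. It is an illustrative sketch only, with no vendor-specific behavior implied: the key point is that the client and the backend each get their own TCP connection, terminated at the proxy, so the relay loop is exactly where payload inspection or header-based steering could happen.

```python
import socket
import threading

def relay(src, dst):
    """Copy bytes one way until EOF. A real full proxy would inspect here."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        # ...payload inspection / header-based steering would go here...
        dst.sendall(data)
    dst.close()

def full_proxy(listen_port, backend):
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", listen_port))
    srv.listen()
    while True:
        client, _addr = srv.accept()                   # connection 1: client side
        upstream = socket.create_connection(backend)   # connection 2: server side
        threading.Thread(target=relay, args=(client, upstream), daemon=True).start()
        threading.Thread(target=relay, args=(upstream, client), daemon=True).start()

# Example: full_proxy(8080, ("127.0.0.1", 80)) terminates clients on :8080
# and opens completely separate TCP connections to the backend on :80.
```

A packet-by-packet proxy, by contrast, would never own either socket; it would forward frames or packets after the handshake, which is precisely why it cannot see the application payload.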
IBM, in conjunction with global retailer Walmart and Tsinghua University in Beijing, China, is harnessing blockchain technology to bring safer food to the table in China. The partnership was announced this morning as part of the opening of the new Walmart Food Safety Collaboration Center in Beijing. The collaboration will pilot blockchain for food authentication and record-keeping in the supply chain, providing a permanent record of every transaction.

"China's rapid economic growth has led to massive opportunities for innovation, but it has also presented quality of life challenges, including ensuring that food sold in the country is safe to eat," Professor Chai Yueting, National Engineering Laboratory of Electronic Commerce Transaction Technology, Tsinghua University, said in a statement today. Yueting hopes that Tsinghua University's work with IBM and Walmart will serve as a global model for food safety that others will be able to follow and replicate.

Blockchain is a distributed database that consists of blocks of items: each block is a timestamped batch of valid individual transactions and the hash of the previous block, creating a link between the two. Because each timestamp includes the previous timestamp in its hash, it forms a chain. Each new transaction must be authenticated across the distributed network of computers that form the blockchain before it can form the next block in the chain. (A minimal code sketch of this chaining appears at the end of this article.)

A digital food chain
The partners say blockchain will enable digitally tracking food products from an ecosystem of suppliers to store shelves and finally on to consumers. It will digitally connect food items to product information including farm origination details, batch numbers, factory and processing data, expiration dates, storage temperatures and shipping details. The relevant information will be entered into the blockchain at every step of the process of moving food from suppliers to consumers. The information in each transaction is agreed upon by all members of the business network; once there is a consensus, it becomes a permanent record that can't be altered. This helps assure that all information about the item is accurate.

The pilot project is designed to trace pork as it moves from suppliers to Walmart's shelves. By the time food is sold to a consumer at a Walmart store, each individual item will have been authenticated using blockchain technology to create a transparent, security-rich and traceable record. The record created by the blockchain can also help retailers like Walmart better manage the shelf-life of products in individual stores, and further strengthen safeguards related to food authenticity.

"Advanced technology has reached into so many aspects of modern life, but it has lagged in food traceability, and in particular in creating more secure food supply chains," Bridget van Kralingen, senior vice president, Industry Platforms, IBM, said in a statement Wednesday. "Food touches all of us, everywhere, and ensuring the safety of what we eat is a global effort, so we are experimenting in China with Walmart and Tsinghua given the size and scale of food consumption in this country."

China is the world's leading consumer of pork. According to the University of Pennsylvania's Penn Wharton Public Policy Initiative, China consumed 57 million metric tons of pork in 2014, more than half of global pork consumption in that year.
Although the country accounts for half of the world's pork production, in recent years it has begun importing pork to meet consumers' demands. In 2014, Mainland China was the world's third-largest importer of pork (13 percent), behind Japan (21.1 percent) and Mexico (13.1 percent). In fact, pork is so popular in China that fluctuations in its price have the potential to send the country's economy into a tailspin. According to The Economist, in 2007 an estimated 45 million pigs in China died from "blue ear pig disease." The resulting shortage caused the price of pork to skyrocket, in turn causing the annual rate of increase of the consumer price index to hit a 10-year high.

Getting strategic about pork
Recognizing that food security was a vital national interest, the Chinese government implemented the world's first strategic pork reserve, a combination of frozen and live pork that can be used to soften the effects of vicissitudes in the market. The Chinese government tapped the reserve earlier this year, releasing 6.1 million pounds of frozen pork into the market over a period of two months in an effort to ease prices that had surged by more than 50 percent.

The concerns aren't just economic. Those economic pressures can lead unscrupulous suppliers to cut corners. In 2011, according to Penn Wharton, China's largest processed pork manufacturer was discovered using illegal feed additives that contaminated its product. In the wake of many other such food safety scandals, the government has been blamed for its inability to ensure a safe food system.

The collaboration between Walmart, IBM and Tsinghua University aims to change that perception. IBM Research – China brings its expertise in rapidly evolving blockchain technology to the table (the IBM Blockchain is based on the open source Hyperledger Project fabric from the Linux Foundation). Top experts in transaction security and authentication technology from Tsinghua University are working alongside them, while Walmart brings its team of supply chain, logistics and food safety experts to the project.

"As advocates of promoting greater transparency in the food system for our customers, we look forward to working with IBM and Tsinghua University to explore how this technology might be used as a more effective food traceability solution," Frank Yiannis, vice president, Food Safety, Walmart, said in a statement Wednesday.
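As promised above, here is a minimal sketch of the hash-chaining idea that underpins any blockchain, food-tracing or otherwise. The field names are invented for illustration; a real ledger such as Hyperledger Fabric adds consensus, signatures and much more.

```python
import hashlib
import json
import time

def make_block(transactions, prev_hash):
    """Bundle transactions with a timestamp and the previous block's hash."""
    block = {
        "timestamp": time.time(),
        "transactions": transactions,   # e.g. farm, batch and shipping records
        "prev_hash": prev_hash,
    }
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

genesis = make_block([{"batch": "PORK-001", "origin": "Farm A"}], "0" * 64)
shipped = make_block([{"batch": "PORK-001", "shipped_to": "Store 42"}],
                     genesis["hash"])

# Tampering with an earlier block changes its hash, which no longer matches
# the prev_hash recorded in the next block: the chain visibly breaks.
genesis["transactions"][0]["origin"] = "Farm B"
recomputed = hashlib.sha256(json.dumps(
    {k: genesis[k] for k in ("timestamp", "transactions", "prev_hash")},
    sort_keys=True).encode()).hexdigest()
assert recomputed != shipped["prev_hash"]
```

This is what makes the record "permanent": altering one farm-origin entry would require rewriting every later block across every participant's copy of the ledger.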
A blockchain partnership is to help document and preserve the evidence of man's exploration of the Moon and outer space.

Blockchain financial services marketplace TODAQ Financial has announced a partnership with For All Moonkind, a non-profit organisation dedicated to preserving mankind's space exploration heritage, such as the sites of the Apollo Moon landings. The two organisations will work together to map the Moon, registering and documenting human cultural artefacts and landing sites to bring "accountability and immutability" to these records.

The aim of the project isn't to prove conspiracy theorists wrong about faked moon missions, though that will be a useful byproduct, but to prevent the desecration of these sites as the privately funded space age begins in earnest.

Catch that Buzz
In 2019, it will be 60 years since the first man-made object to reach another celestial body, the Soviet Union's Luna 2 probe, crashed onto the Moon's surface, and 50 years since Neil Armstrong and Buzz Aldrin set foot there. Yet while mankind has not returned to the Moon since the Apollo 17 mission in 1972, the new space race means that our nearest neighbour is "about to get very crowded", explained For All Moonkind.

Japan, China, Russia, and the US are all considering crewed Moon missions within the next decade. India and China intend to have rovers back on the lunar surface as early as this year, and a number of private companies have plans for landings by 2020. According to space market data service New Space Global, there are now more than 1,000 companies in the commercial space sector, up from just 125 in 2011.

"Each of the Apollo lunar landing sites, and the robotic sites that preceded and followed the Apollo missions, are evidence of humanity's first tentative steps off our planet Earth and to the stars," said the partnership in an announcement this morning. "They mark an achievement unparalleled in human history, and one that is common to all humankind. They hold valuable scientific and archaeological information. They also serve as poignant memorials to all those who work — and have worked in the past — to make the spacefaring human a reality.

"In short, they are unique and irreplaceable cultural and scientific resources. They must be protected from intentional or accidental disturbance or desecration."

A giant leap for blockchain
The partners will work to create, develop, publicly disseminate and maintain a decentralised grid-referenced system of cultural heritage sites on the Moon that they believe should be designated for preservation and international protection. These include sites such as Apollo 11's Tranquility Base, where Armstrong and Aldrin became the first human beings to set foot on the Moon on 20 July 1969, as well as locations around Mare Imbrium, where Luna 2 hit the surface and the first remote-controlled rover, Lunokhod 1, later explored.

"Creating an accountable register of human cultural artefacts and sites on the Moon is a first step towards documenting, protecting, and celebrating our history before it is erased," said Michelle Hanlon, co-founder of For All Moonkind.

For its contribution, TODAQ will use the TODA layer-zero blockchain protocol to build The For All Moonkind Moon Register.

"While the vast majority of economic and human activity is here on Earth, none of our modern world would function without the contribution of our collective space-based efforts and technology," said Hassan Khan, CEO and co-founder of TODAQ.
"Building an immutable framework powered by the TODA Protocol that can help preserve our common heritage and lay a foundation for future societal interactions in space is vital to start now, and we're proud to support this initiative."

Internet of Business says
Space exploration and a range of technologies, such as blockchain, AI, and analytics, are a natural fit for each other, as outer space generates some of the biggest data sets there are. But while human beings' exploration of space is inevitable and of incalculable value in technological and environmental terms (we don't just explore space from near-Earth orbit, but also look back at our planet from space), the costs associated with physical exploration will remain a challenge for decades to come.

This is why many space exploration programmes are now Earth-based, using vast telescope arrays and supercomputers to crunch data about gravity and magnetism on the universal scale, map black holes, track the origins and evolution of the universe, and explore the nature of dark matter and dark energy. For example, stage one of the UK-headquartered Square Kilometre Array (SKA) project will generate five exabytes a day of raw data, and produce archives with growth rates of up to 500 petabytes a year. Stage two of the project may generate 62,000 petabytes of raw data that needs to be crunched and analysed, meaning that today's science programmes need to plan ahead for computer processing power and speeds that don't yet exist.
Ciphers, also called encryption algorithms, are systems for encrypting and decrypting data. A cipher converts the original message, called plaintext, into ciphertext, using a key to determine how it is done.

Ciphers are generally categorized according to how they work and by how their key is used for encryption and decryption. Block ciphers accumulate symbols in a message into groups of a fixed size (the block), while stream ciphers work on a continuous stream of symbols. When a cipher uses the same key for encryption and decryption, it is known as a symmetric key algorithm or cipher. Asymmetric key algorithms or ciphers use a different key for encryption and decryption.

Ciphers can be complex algorithms or simple ones. A common cipher, ROT13 (or ROT-13), is a basic letter substitution cipher, shorthand for "rotate by 13 places" in the alphabet. In a message, ROT13 replaces each letter of the alphabet with the letter that is thirteen places ahead of it.

"The government of ancient Rome was among the first civilizations to use ciphers to transmit sensitive information such as military conversations."
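Because the alphabet has 26 letters, applying ROT13 twice returns the original text, so the same function both encrypts and decrypts, making ROT13 a tiny example of a symmetric cipher. A minimal Python sketch:

```python
def rot13(text: str) -> str:
    out = []
    for ch in text:
        if ch.isascii() and ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + 13) % 26 + base))
        else:
            out.append(ch)   # digits, spaces and punctuation pass through
    return "".join(out)

assert rot13("Attack at dawn") == "Nggnpx ng qnja"
assert rot13(rot13("Attack at dawn")) == "Attack at dawn"   # 13 + 13 = 26
```

Here the "key" is the fixed rotation of 13, which is also why ROT13 offers no real secrecy: anyone who knows the algorithm knows the key.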
Race to Patch Known Cybersecurity Vulnerabilities

More than a month since Microsoft rolled out its April 30, 2018 update for Windows 10, the company said nearly 250 million, or one-third, of the nearly 700 million computers using Windows 10 have applied this update. This Microsoft data shows that nearly 450 million, or two-thirds, of machines using Windows 10 as their operating system (OS) haven't applied the April 2018 patch.

Prevalence of Delayed Patching
A patch is a piece of code that's inserted (or patched) into an existing software program. It's meant to improve performance or usability, or to fix known cybersecurity vulnerabilities. It's a known fact that many organizations don't patch immediately. Researchers at Rendition revealed that more than a month after Microsoft released its March 2017 update, over 148,000 machines hadn't applied this particular update.

Microsoft's March 2017 update, in particular, fixes the cybersecurity vulnerabilities which were leaked by the group of hackers who called themselves Shadow Brokers a month after Microsoft's March 2017 update. The cybersecurity vulnerabilities leaked by Shadow Brokers are believed to have been used by the U.S. National Security Agency (NSA). The malicious software (malware) WannaCry infected and locked the files of hundreds of thousands of computers in May 2017 by exploiting the cybersecurity vulnerability called "EternalBlue", a vulnerability fixed by Microsoft in its March 2017 update.

Delayed patching isn't limited to proprietary software like Microsoft operating systems. Users of open-source software are delaying patching as well. Out of the 14,700 cybersecurity vulnerabilities listed by the National Vulnerability Database (NVD), 4,800 were open-source cybersecurity vulnerabilities. According to Black Duck, many software applications now contain more open source code (57%) than proprietary code (43%). Seventy-eight percent of open-source codebases examined by Black Duck contained at least one cybersecurity vulnerability, with an average of 64 vulnerabilities per codebase.

Apache Struts is one example of open-source software. This software is used by many organizations to create web applications. In March 2017, the U.S. Computer Emergency Readiness Team (US-CERT) sent out an alert of the need to patch a security vulnerability in Apache Struts version 2. The said security vulnerability allows an attacker to take control of a computer system containing this vulnerability, regardless of the geographical location of the affected system. In the same month, Struts 2 users were encouraged to switch to the newer versions Struts 2.3.32 or Struts 2.5.10.1, as these updates fix the security vulnerability in Struts 2.

In September 2017, Equifax, one of the world's largest credit reporting agencies, revealed that information on over 148 million U.S. consumers, nearly 700,000 U.K. residents and more than 19,000 Canadian customers had been compromised. Data on millions of Equifax's customers were compromised as a result of the company's failure to patch the known security vulnerability in Apache Struts version 2, which the company used in its online disputes portal web application. The Equifax data breach was detected as early as July 2017.

"We are sorry to hear news that Equifax suffered from a security breach and information disclosure incident that was potentially carried out by exploiting a vulnerability in the Apache Struts Web Framework," the Apache Struts Project Management Committee said in a statement.
"Most breaches we become aware of are caused by failure to update software components that are known to be vulnerable for months or even years."

Why It's Important to Patch as Soon as Possible
Once patches or security updates for cybersecurity vulnerabilities are released, attackers are quick to scan the internet for computer systems that fail to apply the needed patch. Attackers simply automate the process of scanning the internet to look for unpatched computer systems. According to Rendition researchers, many organizations don't patch for 30 to 60 days or more.

In trying to find out how many organizations hadn't patched the EternalBlue vulnerability via Microsoft's March 2017 update, Rendition researchers in late April 2017 and the first few days of May 2017 scanned the internet using a "special ping" to make contact with DoublePulsar malware, another spying tool leaked by Shadow Brokers and believed to have been used by the NSA. "When the DoublePulsar malware is present, the ping command returns a special response," Rendition researchers said. "Using this response, we can conclusively determine which machines have been compromised."

While Rendition researchers used their automated process for research purposes, attackers could have similarly used an automated process to scan the internet looking for unpatched computers vulnerable to EternalBlue, leading the attackers to launch the WannaCry attack in the second week of May 2017. In a similar manner, attackers using the early versions of SamSam ransomware simply used an automated process to scan the internet looking for vulnerable computers. One of the earliest versions of SamSam ransomware victimized unpatched servers running Red Hat's JBoss enterprise products. SamSam attackers used Jexboss, an open-source tool that scans the internet looking for unpatched servers running Red Hat's JBoss enterprise products.

Many cyberattacks are the result of the failure of many organizations to patch software components that are known to be vulnerable for months or even years. A cyberattack is a race between attackers trying to exploit unpatched computer systems and organizations and individuals trying to roll out patches in time. Patching is one of cybersecurity's best practices. It's important to establish a process for your organization to quickly roll out a patch or security update once it's available. It's essential to roll out critical and important patches in terms of hours or a few days, not weeks, months or years.

Contact us today if your organization needs assistance in rolling out critical patches like updating your organization's server operating system. At GenX, our security experts will help you minimize the risk of a data breach.
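A first step toward patching in hours rather than months is simply knowing which deployed versions fall below the fixed release. The sketch below illustrates the idea with an invented inventory; a real programme would feed it from a vulnerability scanner or a CVE data feed rather than hard-coded dictionaries.

```python
from packaging import version   # third-party library: pip install packaging

# Hypothetical data for illustration: the minimum version in which a known
# flaw is fixed, versus what an asset inventory says is actually deployed.
FIXED_IN = {"apache-struts2": "2.3.32", "jboss-as": "7.1.2"}
DEPLOYED = {"apache-struts2": "2.3.5", "jboss-as": "6.1.0"}

for package, fixed in FIXED_IN.items():
    deployed = DEPLOYED.get(package)
    if deployed and version.parse(deployed) < version.parse(fixed):
        print(f"URGENT: {package} {deployed} is below patched release {fixed}")
```

Running a check like this on a schedule, and treating every hit as an incident rather than a backlog item, is what turns patching from a 30-to-60-day lag into a same-week routine.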
A pandemic is a worldwide spread of a disease. This is a higher order of magnitude than an epidemic. In other words:
• an outbreak is the occurrence of disease cases in excess of what's normally expected
• an epidemic is more than a normal number of cases of an illness, specific health-related behavior or other health-related events in a community or region
• a pandemic occurs on a wider scale than an epidemic, and immunity does not exist

A pandemic plan is a documented strategy for how an organization plans to provide essential services when there is a widespread outbreak of an infectious disease. Pandemic plans should be sufficiently flexible to effectively address a wide range of possible effects that could result from a pandemic.

A review of a specific component of a plan by personnel (other than the owner or author) with appropriate technical or business knowledge for accuracy and completeness.

A measurable outcome.

A process of determining measurable results.

Personal Protective Equipment (PPE)
Personal protective equipment, commonly referred to as "PPE", is equipment worn to minimize exposure to hazards with the potential to cause serious workplace injuries and illnesses. These injuries and illnesses may result from contact with chemical, radiological, physical, electrical, mechanical, or other hazards. PPE may include items such as face masks or coverings, face shields, gloves, safety glasses and shoes, earplugs or muffs, hard hats, respirators, or coveralls, vests and full body suits. If PPE is to be used, a PPE program should be implemented. This program should address the hazards present; the selection, maintenance, and use of PPE; the training of employees; and monitoring of the program to ensure its ongoing effectiveness.

A structured method for doing or achieving a specific desired result. It involves establishing goals, setting objectives, and defining actions by which goals and objectives are attained. Common types of plan in the industry are Crisis Management Plan, Emergency Management Plan, Emergency Response Plan, etc.

Plan, Do, Check, Act (PDCA)
A model used to plan, establish, implement and operate, monitor and review, maintain and continually improve the effectiveness of a management system or process.

The management process of keeping an organization's business continuity management plans up to date and effective.

The intentions and direction of an organization as formally expressed by its Top Management.
Published Monday, Sep 28, 2020, by Yehia El Amine

Take a moment to think about your appliance drawer at home; everyone has one, and more often than not, it's filled with piles of once beloved but now outdated devices, from smartphones and tablets to cameras and the like. While these devices might not look dangerous, they find themselves at the heart of a major environmental and economic debate as the world edges closer to the global rollout of 5G: the e-waste problem.

Electronic waste, or e-waste, refers to all electronic products that have been discarded without the intent to refurbish or reuse. It's hard for many to conceive the fate of these devices: in plains, fields, and factories, workers hammer away at them to remove hazardous components such as lithium-ion batteries, copper and platinum. The scene is like a twisted Pixar movie, with doomed gadgets riding an unrelenting conveyor belt into a machine that shreds them to pieces.

While 5G markets itself as the tidal wave of worldwide technological change, a revolution of this kind comes at a price and could usher in an unprecedented wave of electronic waste we're simply not prepared for.

"The exponential growth we're going to see in electronic waste, especially around obsolescence … [given] the technology curve for these 5G connected devices is so steep at the moment, is a concern," says Dr. Miles Park, a senior lecturer in industrial design at the University of New South Wales.

According to studies done by the United Nations Environment Program, with the annual cycle of modern-day consumer electronics, up to 50 million tons of e-waste are produced yearly. Cell phones, tablets, computers, and televisions: lots of old technology already makes its way into landfills.

"It is already difficult enough when a new product is launched on the market every 12 months, and it's just driving obsolescence of the previous generation of product because there is so much innovation and redundancy happening in this field," Dr. Park added.

Some environmental organizations are already calling on tech companies to foot the bill for recycling the electronics they manufacture and sell. This has picked up traction in some parts of Europe, Canada, and some US states, which have passed so-called Extended Producer Responsibility (EPR) laws requiring manufacturers to set up and fund systems to recycle or collect obsolete products.

An example of this can be seen at Apple, which in 2018 developed a smartphone-recycling robot called Daisy that can take apart 200 iPhones per hour; the company says it diverted 48,000 metric tons of electronic waste away from landfills. But that's a drop in a bucket compared with the 50 million tons of e-waste generated globally last year. With 5G being a stone's throw away, a flood of worldwide e-waste is on the horizon, and recycling alone won't be enough.

Yet there are a number of ideas and solutions being developed, researched, and implemented across the globe that may be enough to inspire the adoption of better practices in the fight against climate change. The world is in need of safer and more durable electronics that are repairable and recyclable, built, in essence, from less hazardous materials. Currently, chemical engineers at Stanford University are working on the world's first fully biodegradable electronic circuit, which uses dyes that dissolve in acid with a pH 100 times weaker than vinegar.
Meanwhile, a group of researchers from the Indian Institute of Science in Bangalore, India, in collaboration with Rice University in Texas, US, has aimed to pulverize electronic printed circuit boards in specialized mills at ultra-sub-zero temperatures, turning them into reusable nano-dust. In parallel, Ronin8, a Vancouver-based e-waste management company, has developed a technology that uses minimal water and energy to separate metals from non-metals via sonic vibrations in recycled water.

Widening the scope of EPR will hold tech companies and electronics manufacturers responsible for managing and disposing of their devices at the end of their working lives; this lays the ground for materials such as copper, platinum, and gold to be recycled and reused in the development of newer products. This can be seen in the New York State Electronic Equipment Recycling and Reuse Act, which requires manufacturers to provide consumers with free and convenient e-waste recycling.

This effort needs all the help it can get, and commercializing the recycling process for the masses will allow tech companies, as well as waste management companies, to collect older devices for treatment. An example of this can be seen with EcoATM, a US-based e-waste management company, which incentivizes people to turn in their older products at one of its 2,700 kiosks across the U.S. The EcoATM evaluates devices based on model and condition, and directly hands you a sum of money based on that evaluation.

Across the Pacific Ocean, China's biggest internet company, Baidu, has developed a smartphone app in collaboration with the UN Development Program called Baidu Recycle. Users specify the device in question and enter its measurements along with a convenient pickup date, accompanied by their name and address; an accredited recycler will then pick it up within 24 hours. Some 11,000 devices were recycled in the span of two months after its release.

A circular economy is one that aims to keep products and all their materials in circulation at their highest value at all times, or for as long as possible. According to Stephanie Kersten-Johnston, an adjunct professor in the Sustainability Management program at Columbia University and Director of Circular Ventures at The Recycling Partnership, "highest value" means what's closest to the original product, to get the most out of the value of the material and the labor that went into creating the product. Europe has made the circular economy a goal for the whole continent.

"Right now, over the length of the contract, you gradually buy outright the phone so the provider can recoup the cost of manufacturing that phone in the first place," Kersten-Johnston was quoted as saying, using the example of smartphones. "But at the end of the contract, you're left with a phone that's worth basically nothing, that you've had to pay for all that time and you can't do anything with it. That's a flawed model. But imagine a system where the provider or manufacturer retained ownership of the device through the contract so customers would pay a lower monthly fee and be expected to return the device for an upgrade. The value could be recaptured in the form of parts for remanufacture or materials for recycling, and customers would still get their upgrades," she added.
Kersten-Johnston considers that this business model will happen sooner rather than later, stating that millennials and the younger generation do not value ownership in the same way older generations do, while being more vocal in demanding responsible business practices.

On another note, reusing and recycling the materials from these old gadgets brings a myriad of economic benefits across the board. According to the International Telecommunication Union (ITU), the circular economy could generate opportunities worth over $62.5 billion annually and create millions of new jobs worldwide. With this in mind, the UN has set a target to increase global e-waste recycling to 30 percent, reaching 50 percent in countries with legislation on e-waste.
Promoting the rights of First Nations People: A personal perspective
Posted on August 8, 2022

I remember the first time I heard a Welcome to Country shared at a corporate event. It was a conference, about a decade ago now. I was sitting in a packed ballroom at one of many round tables waiting for the opening keynote. The organizer announced that a Welcome to Country would be delivered by an Elder of the Gadigal people in the Eora nation. As I listened, I felt emotional and proud as a descendent of Aboriginal and Torres Strait Islander people, the First Nations (or Indigenous) people of this country.

International Day of the World's Indigenous Peoples is about sharing the message about protecting and promoting the rights of Indigenous peoples around the world. This can be a complex topic, and a simple thing anyone can do is to choose to learn. This year's theme is the role of Indigenous Women in the Preservation and Transmission of Traditional Knowledge. As a First Nations descendent and a proud Tjupan Pinhi woman, I aspire to play a role in this process by sharing what I have learned, and what I continue to learn.

I did not learn a lot about Aboriginal and Torres Strait Islander history at school; it was not something that was well covered when I was growing up in Australia. And what I did learn in school, I now know was not the whole truth. I learned most of what I know from my own mob (family) and through my own curiosity and research. What I knew in my youth as facts about our mob, that we came from Mogumber and had been at Sister Kate's, materialized in later years as my understanding that my grandfather and so many of our mob were of the Stolen Generation. Mogumber (also known as Moore River Native Settlement) and Sister Kate's were just two of many locations where Aboriginal and Torres Strait Islander children were forcibly taken under the premise of child protection.

Learning about my own culture is a continuous experience. Many aspects of it are deeply personal and sensitive, and not something that I have always felt comfortable or safe to discuss, particularly in a workplace context. I have been learning from family, researching, and collecting information since my early school days. My heritage and experience have led me to be deeply passionate about Inclusion and Diversity. Understanding and recognizing the past, however ugly, is a crucial step to reconciliation.

Avanade's approach to Inclusion and Diversity was a key factor in my decision to join the team. I am incredibly pleased to have joined at a time when Avanade Australia is taking positive action to support reconciliation in Australia by developing their first Reconciliation Action Plan. I am proud that I can take part in the reconciliation process at Avanade, and I am very conscious that my experience is not the same lived experience as most of my mob. Aboriginal and Torres Strait Islander peoples are the oldest surviving culture in the world yet represent only 3.2% of the recorded population in Australia. I would never claim to speak for them, and I feel a right and responsibility to help represent my culture and share knowledge.

Truly complex challenges cannot have simple solutions, but that does not mean that we cannot all do simple things to contribute to change.
If you are wondering what you can do on International Day of the World's Indigenous Peoples, I recommend learning a little about the First Nations people where you live, and then having a conversation with a friend or family member to share that knowledge.

Acknowledgement
I acknowledge the Wurundjeri Woi Wurrung people of the Kulin Nation where I live and work in Naarm, the Whadjuk Noongar people of the land where I was raised in Boorloo, and the Wongi from whom I descend. I pay respect to all past, present, and future Traditional Custodians and Elders of this nation, and the continuation of cultural, spiritual, and educational practices of Aboriginal and Torres Strait Islander peoples.

About the Artwork and Artist
Ancestors is a piece by proud Tjupan Pinhi woman Danielle Ashwin (cousin of Rebecca Jackson). Danielle is an emerging contemporary artist who shares her deep connection to culture through her work. This piece represents Danielle's and Rebecca's elders, their Wally Pop (grandfather) and his five siblings, and their extended families as descendants. Ancestors is copyright, used here with permission of the artist for this article. © Ashwin Aboriginal Art
How Does a Trojan Horse Work?
Trojan horses wrap malicious functionality in a seemingly benign package. Depending on the level of sophistication of the program, the malware may actually perform the benign function, making it more difficult for the victim to catch onto the attack, or may just be designed to achieve execution. The trojan malware may be created wholly by the malware author or be a modified version of a legitimate program. In the second case, the attacker adds the malicious functionality, leaving the rest of the program unchanged and able to perform its original function.

The Threat of Trojan Horse Viruses
The term "Trojan horse" covers many different types of malware because it simply refers to the fact that malicious functionality is built into a legitimate program. Various types of malicious functions can be integrated into a trojan horse, and the impact of the malware depends on the exact malicious functionality included in the malware.

Types of Trojan Horse Viruses
Trojan horses can perform various malicious functions. Some common types of trojan horses include:
- Remote Access Trojan (RAT): A RAT is a trojan horse that is designed to gain access to a target system and provide the attacker with the ability to remotely control it. RATs are often built as modular malware, allowing other functionality or malware to be downloaded and deployed as needed.
- Mobile Trojan: Mobile trojans are trojan malware that targets mobile devices. Often, these are malicious mobile apps that appear in app stores and pretend to be well-known or desirable software.
- Spyware: Spyware is malware that is designed to collect information about the users of an infected computer. This could provide access to an online account, be used in fraud, or help target advertising to a particular user.
- Banking Trojans: Banking trojans are malware designed to steal the login credentials of users' online bank accounts. With this information, an attacker can steal money from the accounts or use it for identity theft.
- Backdoor: A backdoor provides access to an infected computer while bypassing the traditional authentication system. Like a RAT, backdoors allow the attacker to remotely control an infected computer without needing the credentials of a legitimate user account.
- Botnet Malware: Botnets are collections of infected computers that attackers use to perform automated attacks. Trojan horse malware is one of the methods by which an attacker can gain access to a computer to include it within a botnet.
- DDoS Trojan: DDoS trojans are a particular type of botnet malware. After gaining access to and control over the infected machine, the attacker uses it to perform DDoS attacks against other computers.
- Downloaders/Droppers: Trojan horses are well suited to gaining initial access to a computer. Droppers and downloaders are malware that gain a foothold on a system and then install and execute other malware to carry out the attacker's goals.

How To Protect Against Trojan Viruses
Trojan horses can infect an organization's systems in various ways, requiring a comprehensive security strategy. Some best practices for protecting against trojans include:
- Endpoint Security Solutions: Endpoint security solutions can identify known trojan horse malware and detect zero-day threats based on their behavior on a device. Deploying a modern endpoint security solution can dramatically reduce the threat of this malware.
- Anti-Phishing Protection: Phishing is one of the leading methods by which cybercriminals deliver malware to a device and trick users into executing it. Phishing prevention solutions can identify and block messages carrying trojan malware from reaching users' inboxes.
- Mobile Device Management (MDM): Mobile trojans are malicious apps that are often sideloaded onto a device from unofficial app stores. MDM solutions that check mobile apps for malicious functionality and can restrict the apps that can be installed on a device can help prevent mobile malware infections.
- Secure Web Browsing: Trojan malware commonly masquerades as a legitimate and desirable program to get users to download and execute it from a webpage. Secure web browsing solutions that inspect files before allowing them to be downloaded and executed can block these attacks (a small checksum-verification sketch follows at the end of this piece).
- Security Awareness Training: Trojan horses often come with a promise that "seems too good to be true", such as a free version of desirable software. User security awareness training can help employees to understand that anything that seems too good to be true is probably malware.

Prevent Trojan Horse Infections with Check Point
Trojan horses are a common type of malware, but they are one of several cyber threats that companies face. For more information about the current cyber threat landscape, check out Check Point's 2022 Cyber Security Report. Check Point Harmony Endpoint provides comprehensive threat prevention against trojans and other types of malware. To see Harmony Endpoint in action, feel free to sign up for a free demo today.
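As promised above, one simple habit that complements the controls listed: before running a downloaded installer, verify its checksum against the digest the vendor publishes, since a trojanized copy of a legitimate program will not hash to the same value. A minimal sketch; the expected digest below is a placeholder, not a real vendor value, and in practice it would come from the vendor's download page or a signed release manifest.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file without loading it all at once."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder digest for illustration only.
EXPECTED = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

if sha256_of("installer.exe") != EXPECTED:
    raise SystemExit("Digest mismatch: do not run this file.")
```

A mismatch does not say what was changed, only that the file is not the one the vendor shipped, which is exactly the signal a trojan horse tries to avoid giving off.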
The EU Network and Information Security (NIS) Directive

The EU Network and Information Security (NIS) Directive sets out the first EU-wide rules on cyber security. This is in addition to the new requirements for data protection detailed in the General Data Protection Regulation (GDPR).

Among other provisions, the Directive requires operators of essential services (private or public organisations that provide services in critical sectors such as energy, transport, banking, finance and health) and digital service providers (online marketplaces, search engines and Cloud computing services) to implement appropriate security measures to protect, and ensure the continuity of, the network and information systems used to support "essential services".

The Directive entered into force in August 2016. EU member states – including the UK – have until May 2018 to translate the Directive into national laws, and a further six months to identify the "operators of essential services and digital service providers" it applies to. Penalties for non-compliance will be "effective, proportionate and dissuasive".

Free green paper

For more information on the Directive, download our free green paper: The EU Network and Information Security (NIS) Directive: Compliance guidance. It summarises the Directive's requirements, lists the sectors that will be affected, and explains how organisations can use international standards to demonstrate compliance.

The NIS Directive requirements

Improving national cyber security capabilities

Member states must adopt national NIS strategies that define strategic objectives and appropriate policy and regulatory measures. They must also designate "national competent authorities" to monitor the application of the Directive, and set up Computer Security Incident Response Teams (CSIRTs) to handle incidents and risks.

Increasing cooperation between EU member states

The Directive establishes a cooperation group and a network of national CSIRTs. The cooperation group will comprise representatives of member states, the EU Commission and ENISA (the European Union Agency for Network and Information Security), with the Commission acting as secretariat. The CSIRTs network will comprise representatives of member states' CSIRTs and CERT-EU (the Computer Emergency Response Team for the EU institutions, agencies and bodies), with the Commission participating as an observer and ENISA acting as secretariat. The UK's computer security incident response team, CERT-UK, was formed in 2014.

Risk management and incident reporting obligations for operators of essential services and digital service providers

Operators of essential services

Operators of essential services must notify the relevant national authority of serious incidents and take appropriate technical and organisational security measures. These measures must be proportionate to identified risks and include "documented security policies". Evidence of their implementation – such as the results of an independent audit – must also be maintained. The CPNI (Centre for the Protection of National Infrastructure) has identified 13 sectors that comprise the UK's national infrastructure, and it is likely that this will form the basis of the UK's definition of "essential services" for the purpose of the Directive.

Digital service providers

Digital service providers must also notify the relevant national authority of serious incidents and take appropriate technical and organisational security measures. These measures must be proportionate to identified risks and take into account, among other considerations, business continuity management and compliance with international standards.

International standards and NIS Directive compliance

Article 19 of the Directive states that, for operators of essential services and digital service providers alike, "Member States shall, without imposing or discriminating in favour of the use of a particular type of technology, encourage the use of European or internationally accepted standards and specifications relevant to the security of network and information systems."

The only relevant international standards against which organisations can achieve independently accredited certification are ISO 27001, which sets out the requirements for a risk-based ISMS (information security management system), and ISO 22301, the international standard for a BCMS (business continuity management system).

IT Governance NIS products and services

IT Governance has more than a decade's experience helping organisations all over the world to carry out governance, risk management and compliance projects. We've led more than 400 successful ISO 27001 certification projects, and offer a 100% guarantee of successful certification. Here are a few ways in which we can help meet your NIS Directive compliance needs.

ISO 27001 packaged solutions

IT Governance has created six ISO 27001 consultancy packages that combine the products and services you need to implement the Standard at a speed and for a budget that is appropriate for your needs and preferred project approach.

ISO 22301 consultancy

Our business continuity consultants can assess your current BCM plans, policies and procedures, and develop an executive report containing prioritised recommended activities and solutions aligned with ISO 22301. Fixed-price Health Check and FastTrack packages are also available.

IT Governance's Cyber Security Incident Response consultancy service can help you develop the resilience to protect against, remediate and recover from a wide range of cyber incidents. It is based on best-practice frameworks developed by CREST, as well as ISO 27001 and ISO 27035 (the international standard for cyber incident response).

Documentation toolkits

Creating documentation for your management system is never easy, and can run to hundreds of pages. IT Governance's documentation toolkits contain fully customisable policies and procedures that have been written by our consultants to comply with international standards. The ISO 27001 ISMS Documentation Toolkit provides you with a comprehensive set of pre-written ISMS documents that comply with ISO/IEC 27001:2013. The ISO 22301 BCMS Implementation Toolkit contains expert guidance and consultant-created content to help you implement an ISO 22301-compliant BCMS quickly and easily, and mitigate the effects of unplanned business disruptions.

IT Governance's publishing arm, ITGP, sources and publishes a wide range of IT GRC books, from pocket guides to implementation manuals.

vsRisk™ is the industry-leading ISO 27001-compliant risk assessment tool. Proven to save huge amounts of time, effort and expense when tackling complex risk assessments, vsRisk delivers an information security risk assessment quickly and easily.

The ISO 27001 Learning Pathway will equip you with the knowledge and skills required to plan, implement, maintain and audit a best-practice ISMS in your organisation. The ISO 22301 Learning Pathway provides delegates with the knowledge and skills to implement and audit an ISO 22301-compliant BCMS. All courses are available in classroom and Live Online formats.

Penetration testing is the most effective way of identifying the exploitable vulnerabilities in your company's Internet-facing applications so that you can take steps to reduce your exposure to cyber attack. IT Governance is a CREST member company, meaning that clients can rest assured that our penetration tests will be carried out to the highest standards by qualified and knowledgeable individuals.

To discuss your NIS Directive compliance requirements, please call us on 00 800 48 484 484 or email firstname.lastname@example.org.
Companies of all sizes are victimized by clever hackers regularly. Business email compromise (BEC) often occurs simply because a smooth criminal posed as a trusted source. You may have heard the term "social engineering" before, and that's essentially what it is: malicious "social engineers" using manipulation, deception and influence to persuade an employee or contractor into unwittingly disclosing secure information, or into performing an action which grants unauthorized access to your information systems. And social engineering happens more than you think: it is one of the top two techniques used by criminals to steal from organizations just like yours.

Educate your staff on the real dangers of social engineering by showing them a few examples of how a hacker might strike:

1. Hackers target via phishing emails or phone calls.

One of the most common forms of social engineering is phishing, in which a hacker attempts to get your employee to click or download a malware-injected attachment that infects a company device, giving the bad guys a doorway in. These crafty emailers often masquerade as important leadership figures, pretending to be a manager or vendor that your staff member can trust. They also often instill a sense of urgency to open a file or perform a specific task, or even use fear to rush the recipients into making a rash judgement call.

But phishing emails aren't the only practice; some hackers use pretext phone calls, aka voice phishing (vishing), calling business extensions and posing as authoritative figures to get your workers to share secrets or insider knowledge that will help hackers steal information too. We've all received threatening voicemails from people claiming you were late on a payment or breaking compliance, eager to get you to call back in a panic and share your personal information (PI).

Whenever your staff find an email in their mailbox with an attachment, remind them to think before they click. If they receive a suspicious voicemail, they should research and call the company directly to confirm the call was legitimate.

2. Hackers can imitate a contact in your phone and text you.

There's been buzz around tricky text messages for years, in which hackers spam phone numbers with intimidating messages that say things like, "$500 was just withdrawn from your bank account, did you do it? If not, call this phone number," as NBC News illustrated in one example. But hackers have picked up new tactics, now using software to pose as a trusted contact, so that you never really know who you're messaging behind the screen. In one live keynote, for instance, Kevin Mitnick shows how easy it is to spoof a text from your partner or friend, discreetly asking you to do something (about 50 minutes in).

A criminal can easily attempt this tactic by posing as you to your employees. They simply request an action and specify, "don't reply right now, I'm in a meeting," or another excuse that will buy them just enough time to get what they want before the target notices anything suspicious. Because of this, it's always best to ask your staff to call and verify any request out of the norm before complying. Instill this sense in your employees, or better yet, create a protocol to double-verify any request from an authority figure via text or email.

3. Hackers can find an easy way in if they know a mother's maiden name.

Have you ever been asked to share your mother's maiden name during a security screening? This answer was once thought of as a big trip-up for bad guys who stole names and credit card info, stopping them in their tracks. But today's elite hackers can access databases with easy search functionality for maiden names. All the bad actor needs to know is a first and last name and a rough estimate of your age to find it. And with the massive amount of personal information on public social media profiles, it's not too hard to fill in the blanks with the PI commonly asked for in security inquiries.

As always, requiring multi-factor authentication is preferred, to avoid false authorization into your account. Some professionals even recommend providing incorrect PI answers when filling out your security questions, and storing your responses somewhere safe for reference, so as to avoid your questions being guessed. Be very cautious about who you share your mother's maiden name or other personal information with, both online and in person, for this seemingly innocent info could be used to gain entrance into private portals.

4. Hackers can use social engineering tactics in person too, by gaining false entrance or asking to plug in an infected drive or cable.

Hackers aren't exclusively cyber predators: they can take physical action to gain access to your systems as well. Besides the obvious break-in where the bad guy steals files or devices straight from your office, others can walk right through your door and steal info right before your nose. Bad actors can use a device to steal employee credentials off proximity access cards. Depending on the strength of their toolset, they can identify an individual staff member's card and site IDs just by standing a few feet, or even inches, away from the person carrying the fob. These clever cyber thieves can then gain access to the building after hours, and plug into a server to steal information.

Or, in other more public settings, criminals can create a doorway through your security by simply plugging a malware-infected USB stick or cable into your employee's computer. All it could take is a simple question, "Hey, can I plug this in to print something?" or, "Do you mind if I charge my phone on this laptop?" to quickly give them remote access to your worker's desktop and the company servers beyond. To avoid this type of social engineering scheme, always remind your staff to think before plugging an unknown device into their computer, and be stern about not allowing unknown drives or cables to be plugged into company devices.

Show Live Examples of Social Engineering Threats

Hackers are always developing new ways to trick innocent people into exposing sensitive information for monetary gain. Are you confident that your employees would know how to spot a social engineering attempt if it happened to them? If not, why not show them what one looks like in person? Kevin Mitnick and his Global Ghost Team™ deliver live hacking demonstrations before audiences small and large, revealing exactly how bad actors target people. More importantly, they show you and your team exactly what you can do to prevent it. Learn more about our presentation, "How Hackers Attack & How to Fight Back" and book the world's leading authority on social engineering to build better security awareness today.
Introduction to LAN

Computer networks can be divided into various types depending upon their size and usability. The size of a network can be assessed by its geographical distribution: it can be as small as a room with a few devices/computers, or as widespread as the whole world, with millions of interconnected devices. Some of the most important types of computer networks are LANs (local area networks), MANs (metropolitan area networks) and WANs (wide area networks). In this article, we will try to understand LAN in detail.

What is LAN?

LAN is an abbreviation for Local Area Network. A LAN is a network covering a small geographic area and connecting various end devices like computers and printers. A LAN is usually limited to a home, office, school or building. Speeds are high (up to 1 Gbps, or even 10 Gbps) and the setup is relatively inexpensive. The end devices like computers and printers are connected either with an Ethernet cable to the router or through a wireless router. Multiple LANs can be connected over a telephone line or radio waves. The setup is managed and controlled by each customer itself. A LAN is considered fairly secure and easy to manage.

Types of LANs

There are two types of LANs:
- client/server LANs
- peer-to-peer LANs

Client/server LANs

In this type of LAN, several devices/clients are connected to a central server. This central server is responsible for managing access to the printer, storage of files and all the traffic through the network. A single device/client can be a PC, a tablet, a laptop or another similar device capable of running applications. The connection between the central server and the devices/clients can be made either with Ethernet cables or through a Wi-Fi connection. (A minimal code sketch of the client/server model appears at the end of this article.)

Peer-to-peer LANs

In this type of LAN, there is no central server, so the network cannot handle heavy workloads. Each PC/device shares equally in running the network. These devices share all the resources and data, and are connected either through a wired or wireless connection to a router. The most prominent example of a peer-to-peer LAN is the home network.

Advantages of LAN

The major advantages of using LAN computer networks are:
- It is cost-effective, as it significantly reduces hardware costs.
- In terms of software, it is also economical, as there is no need to purchase separate licensed software for each client in the network.
- It offers increased operational efficiency, as all the data is stored on the central server.
- It provides ease of communication, as transferring data between the connected devices is possible in real time.
- With the advent of Wi-Fi technology, the spectrum of devices that can be connected to a LAN has broadened.

Disadvantages of LAN

The main disadvantages of LAN are:
- Any discrepancy in the security of the centralized data repository can result in unauthorized access to critical data.
- The initial cost of installing a Local Area Network can be quite high.
- Privacy can be an issue, as the LAN administrator has access to the personal data files of every LAN user.
- Constant LAN administration is required to cope with issues related to software setup and hardware failures.

If you want to learn more about LAN, then check our e-book on LAN Interview Questions and Answers, in easy-to-understand PDF format, explained with relevant diagrams (where required) for better ease of understanding.
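As promised in the client/server section above, here is a minimal sketch of that model using Python's standard socket module. The server address and port are hypothetical; in practice, run_server() would run on the central machine and run_client() on a workstation elsewhere on the LAN.

```python
import socket

PORT = 5000
SERVER_IP = "192.168.1.10"  # hypothetical LAN address of the central server

def run_server():
    # The central server: listens for client connections on the LAN.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("0.0.0.0", PORT))  # accept connections on all interfaces
        srv.listen()
        conn, addr = srv.accept()    # wait for one client to connect
        with conn:
            data = conn.recv(1024)                      # receive a request
            conn.sendall(b"ACK from server: " + data)   # send a reply

def run_client():
    # A client workstation: connects to the server and exchanges a message.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((SERVER_IP, PORT))
        cli.sendall(b"hello from a workstation")
        print(cli.recv(1024))
```

In a peer-to-peer LAN, by contrast, there is no dedicated run_server() machine: each device plays both roles as needed.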
They're there to be used, but nobody ever speaks of it. Instead, both home users and IT professionals alike just pretend they don't exist. For some, it's ignorance. For others, it's tradition. In the end, though, there's no logic behind it: they're there to be used. But suggest this to many people, regardless of IT credentials, and they recoil like you've suggested something sacrilegious. "You don't use those. You just don't!" they cry. But ask why not, and they stare at you in stunned silence. That's because they've never asked the question.

The Mysterious, Unaccounted-For Drive Letters

If you're a geek of a certain vintage, then you may know of what I speak. That is, the A: and B: drive letters. When it comes to drive assignments, everything starts at C: and goes through the alphabet from there. But why? The knee-jerk reaction is to mention that the A: and B: drives used to be reserved for floppy disk drives. But when you think on it, that's a very strange reason to hold two drive letters aside.

Back in the day (we're talking the late 1970s), personal computers didn't have a hard drive. You had one floppy drive, which was assigned to A:, and that was it. It cost a whopping $1k for another drive, so B: didn't really come into the equation. Originally, you had to do a lot of disk-switching: one disk to boot, switch to another disk with your programs and data, then switch back to the boot disk again to run the command line. And so, computers started coming with a second floppy disk drive, so you could run your common processes and specific programs at the same time. Luxury!

When hard drives arrived, it was naturally assumed that they would be assigned to C:. This allowed people of the time to retain backwards compatibility with the two-disk programs they'd been using.

Fast forward to 2017: the age of virtual reality headsets, tablets, wi-fi, and cloud storage. You won't see a home PC with one floppy disk drive anymore, let alone two. Only the unbelievably retro would use a 1.44 MB floppy over a 2 TB USB stick. So this raises the question: why do we reserve those two letters and go straight to C:? The backwards compatibility argument doesn't hold much weight. Not only are there very few programs of that era that would even operate on a modern Windows OS, there's hardly a thriving demand for the feature.

Even Experts Find It Unnerving

"You just wouldn't do it." "It's the floppy disk drive." "It's tradition." "It just feels weird." These are just some of the responses I got from IT experts upon posing them the question, "Would you assign a hard disk to A: or B:?"

The strongest (and only) argument against using an A: or B: drive was really an argument against installing your OS on anything but C:. Some bad software developers hard-code the assumption that you've installed Windows on C:, and therefore, for compatibility reasons, you always install Windows on C:. Thankfully, many software developers are catching up with the fact that people might install Windows on other drives. But nonetheless, this doesn't explain why A: and B: shouldn't have drives assigned to them.

One expert told me this story: there had been an incident where he had run out of drive letters to use (from removable drives, virtual machines, etc.). And so, at a loss, he stared at the computer wondering what to do. It took him a few minutes to realize that he could assign things to A: and B:. The tradition is just this pervasive; we simply don't think of these letters as being usable, even though there's no reason not to.
I have worked at a place with a shared B: drive. New employees have referred to it as feeling unusual and "not like a real drive", despite it having the same properties as F:, G: and H:.

B: is for Backup

Some IT professionals have begun to break the mold, particularly when it comes to making memorable letter assignments. E.g.: B: for Backups, P: for personal data, S: for Shared, Q: for QuickBooks, etc. But even many of these users keep their hands off the A: drive, which for some reason is voiced as more of a no-go zone than B:.

So what are your thoughts on the matter? Would you ever map a drive to A: or B:? Share your thoughts in the comment section below! (And if you'd like to try it yourself on Windows, a small script follows.)
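Here is a minimal sketch for mapping a folder to B: on Windows. It shells out to the built-in `subst` command via Python; the folder path is just an example, and note that `subst` mappings do not survive a reboot.

```python
import subprocess

def map_drive(letter, folder):
    # e.g. `subst B: C:\Backups` presents the folder as a drive letter
    subprocess.run(["subst", f"{letter}:", folder], check=True)

def unmap_drive(letter):
    # `subst B: /D` removes the mapping again
    subprocess.run(["subst", f"{letter}:", "/D"], check=True)

if __name__ == "__main__":
    map_drive("B", r"C:\Backups")  # B: now behaves like any other drive
```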
If you are thinking about investing in a SCADA system, it's important to keep in mind that the data transport you will use is one of the most important decisions you will make. This is because the data your RTUs are gathering is only useful if you can access it reliably at all times. If not, you won't have the visibility over your network necessary to react to alarms and prevent problems. Do you need to brush up on the basics of SCADA? You can start here.

There are many ways (radio, cellular, LAN, etc.) that your RTU can transmit remote site information to you or to your master station. And if you can choose what's best for you, even if it's a backup transport, then that's a great opportunity. As a trusted remote monitoring and control solutions provider, we know that every network is unique, so there's no such thing as a one-size-fits-all SCADA data transport. In fact, many factors contribute to this choice being unique to you. However, there are common choices that do work for many companies. Some of them are Spread Spectrum (unlicensed) radio, licensed radio, and cellular. To help you decide between these options (and make sure they would work for you at all), let's dive into some information about radio and cellular transports and their advantages and disadvantages.

Spread Spectrum radio (aka unlicensed radio) is a very common method of SCADA data transmission. It utilizes radio waves to send information from your RTU to your master station. This type of data transport uses a Federal Communications Commission (FCC) unregulated band of radio waves in the 900 MHz range (maximum of 1 Watt) to send your information.

Spread Spectrum radio is a system originally developed for military applications, to provide secure communication and prevent detection and interception of signals. This is done by spreading the noise-like signal over a large frequency band. The fundamental idea of Spread Spectrum is to use more bandwidth than the original message while maintaining the same signal power. A Spread Spectrum signal doesn't have an obvious peak in the spectrum, which makes it hard to distinguish from noise, and therefore difficult to intercept.

As an example, you might set your radio to send the first bit of data on 921 MHz, the second on 925 MHz, the third on 929 MHz, and the fourth on 936 MHz. The device receiving this transmission is configured to understand the pattern, which ensures that you get all the information while avoiding sticking around on the same channel for long. (A short simulation of this idea appears below.)

This type of transmission can bring you many advantages. First of all, it's unregulated by the FCC, which means that you can buy, install, activate, and tune a system as needed. You don't have to have a permit. The radio equipment itself is not expensive, and there is no subscription required. After the initial equipment costs, there are no recurring expenses other than maintenance. You will pay one time and own your equipment and broadcasting rights. Also, radio transport is a very secure way to transmit information. This is due to over-the-air encryption, the specialized knowledge and gear necessary to intercept signals, and distance limitations (anyone trying to intercept your signal would have to be within the limited range of the radio waves).
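Before moving on to the disadvantages, here is a minimal simulation of the channel-hopping idea described above, in Python. The channel list matches the example given; the shared seed standing in for a pre-agreed hop pattern is a simplifying assumption, not how any particular radio vendor implements it.

```python
import random

CHANNELS_MHZ = [921, 925, 929, 936]  # the example channels from above
SHARED_SEED = 42                     # pre-agreed by both radios (illustrative)

def hop_sequence(seed, n_hops):
    # Both ends derive the same pseudo-random channel sequence,
    # so the receiver always knows where to listen next.
    rng = random.Random(seed)
    return [rng.choice(CHANNELS_MHZ) for _ in range(n_hops)]

message = b"PUMP1 OK"
tx_hops = hop_sequence(SHARED_SEED, len(message))
rx_hops = hop_sequence(SHARED_SEED, len(message))

for byte, tx, rx in zip(message, tx_hops, rx_hops):
    assert tx == rx  # transmitter and receiver hop in lockstep
    print(f"byte {byte:#04x} sent on {tx} MHz")
```

To an eavesdropper who doesn't know the sequence, the transmission is scattered across the band and looks much like noise, which is exactly the interception-resistance property described above.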
Now, let's talk about the Spread Spectrum disadvantages. To start, keep in mind that it needs a direct line of sight between transmitter and receiver. This means that the signal can be completely affected by physical interference, such as trees, foliage, buildings, and mountains. Due to this limitation, the antenna towers will need to be very tall in order to transmit over obstacles. If you have ideal weather and antenna conditions, Spread Spectrum can reach up to 15 miles. However, in the real world, where the conditions will usually be average, expect your signal to reach only 1-2 miles.

Another important radio disadvantage to remember is the general threat of lightning strikes. Although most radio antennas will have good lightning protection, you can't prevent this natural electric discharge from happening. And when it does happen, it can cause major network downtime and major expenses.

Spread Spectrum radio can be used efficiently if you have a short-range application and already have the infrastructure required to receive and transmit signals, such as towers and poles. It can also be very useful for networks located in remote sites where cell coverage is faulty or nonexistent. Although the average SCADA system doesn't use much data, the Spread Spectrum radio's 900 MHz range makes it possible for your devices to send information more quickly (in ideal conditions) than licensed radio or cellular transmission.

Although it has a couple of key differences from Spread Spectrum radio, licensed radio is quite similar to it. Licensed radio, as you probably guessed, is regulated by the FCC. This means that you need to apply for a permit, and the FCC will tell you where on the radio spectrum to broadcast. This usually falls below 500 MHz, but there are some situations that allow for the use of 900 MHz. Also, you'll see that the power level of a licensed radio is increased to up to 10 W.

The main difference between Spread Spectrum radio and licensed radio is that licensed bands can be used only by the company that licensed them, while anyone can use the unlicensed bands. Most of the radio spectrum is licensed by the FCC to certain users, such as television and radio broadcasters. These users pay a licensing fee for the exclusive right to transmit on an assigned frequency within a geographical area.

With licensed radio, just like with Spread Spectrum, you will own your own equipment and there are no recurring subscription fees. But you'll also get greater reliability and better performance. The benefit of paying a fee to the FCC is not only the exclusive right to transmit on a given frequency, but also the assurance that nothing will interfere with your transmission. FCC regulation will ensure that interference is kept to a minimum. So, if you do get interference in a licensed band, you can work with the FCC to correct the problem. Furthermore, you'll see better data transmission performance due to the increased broadcasting power, lower noise, and less competition. Licensed radio doesn't have the same strict line-of-sight requirements as Spread Spectrum, so it can better avoid obstacles and you can send your signal much further: in ideal conditions, it can reach up to 35 miles.

Although licensed radio has better performance, it still requires a general line of sight. So, taller antenna towers will still be needed to ensure the integrity of your transmissions. Also, keep in mind that at the higher end of licensed radio's range, the curvature of the earth comes into play, and you'll need even higher towers (a quick way to estimate these distances appears a little further down). And remember that you won't be free to retune your radios if needed.
You'll be locked into a certain range unless you are allowed to switch. Licensed radio can be used at most of the same applications as Spread Spectrum. The initial time and investment to get it up and running are higher for the licensed radio, but this could be a good solution for you if you have a situation where the distance or interference make Spread Spectrum unreliable. Cellular networks are very popular and you are probably with this technology. Imagine the radio data transmission: it goes from point A directly to point B. On the other hand, cellular technology uses a network of towers, forwarding point A's data from "cell" to "cell" to reach point B. Cellular communication eliminates one of the most important radio's disadvantages - physical interference in the line of sight transmission. When thinking about transporting your SCADA information via cellular networks, remember your cell phone - the principle is the same. Cell providers are, of course, interested in keeping their networks online at all times. This means that they do have a huge amount of resources committed to keeping their networks constantly active. This will ultimately benefit your SCADA system as it will become more reliable. Also, distance is not an issue - if you have network coverage, you'll get data. And there's no need for large towers and lighting-prone antennas. This decreases your costs with maintenance and with possible network downtime. The main disadvantage of cellular data transport is coverage. If you don't have cell signal, then you won't get your data. Another limitation of cellular is that there are recurring subscription fees for as long as you want to use the system. SCADA systems don't use much data, so you won't have expensive costs. Furthermore, it's important to keep the security aspect in mind. Radio has the inherent security that we've talked about previously, but standard cell data transmissions are online. This means that a malicious person can attack your system even from a distance. The cellular transmission is most likely the most versatile option among those three. So, unless you don't have cell coverage at your sites, it can be used in almost any situation. It also requires less investment and has a simpler implementation than either of the previous radio solutions. Is radio transport the right solution for you? Or you'd be better off with cellular communication? Either way, it's important to make sure your RTU can efficiently receive and send information through your chosen transport method. Your optimal situational awareness starts with competent equipment. The NetGuardian 832A G5 RTU with wireless connectivity that can give complete network visibility over your network. It includes a wireless IP modem and antenna (GSM or CDMA, depending on your chosen build option) for alarm reporting and remote access. This RTU has three different wireless connection modes that you can select from depending on your application. The NetGuardian 832A has many other features and capabilities that make it the perfect wireless RTU for many types of networks. As you surely noticed, there are many aspects that can affect the SCADA data transport you choose. These are common transport solutions, though, because they can work for many different applications. However, in most cases, network managers can't really choose the data transport method they want. Simply because it's too expensive to replace an already existent one. 
Is radio transport the right solution for you? Or would you be better off with cellular communication? Either way, it's important to make sure your RTU can efficiently receive and send information through your chosen transport method. Your optimal situational awareness starts with competent equipment.

The NetGuardian 832A G5 is an RTU with wireless connectivity that can give you complete visibility over your network. It includes a wireless IP modem and antenna (GSM or CDMA, depending on your chosen build option) for alarm reporting and remote access. This RTU has three different wireless connection modes that you can select from depending on your application. The NetGuardian 832A has many other features and capabilities that make it the perfect wireless RTU for many types of networks.

As you surely noticed, there are many aspects that can affect the SCADA data transport you choose. These are common transport solutions, though, because they can work for many different applications. However, in most cases, network managers can't really choose the data transport method they want, simply because it's too expensive to replace an existing one. But they can still slowly migrate to newer, more efficient data transport technology. If that's your case, don't worry. We are SCADA experts, and we can custom-make any of our remote monitoring and control devices to fit the transport you have at your remote sites and the ones you are migrating to. In fact, no matter the kind of SCADA data transport (radio, cellular, satellite, fiber, LAN, etc.), we can help you. Just treat us like we were your engineering department. Telling us what you're broadly trying to accomplish allows us to come up with inventive solutions. Our catalog is just a collection of what people have needed before, and we always like to expand it.

You need to see DPS gear in action. Get a live demo with our engineers. Have a specific question? Ask our team of expert engineers and get a specific answer! Sign up for the next DPS Factory Training! Whether you're new to our equipment or you've used it for years, DPS factory training is the best way to get more from your monitoring. Reserve your seat today.
Knowing Your eMBB from Your mMIMO: A Guide to 5G Terminology

5G is undoubtedly one of the hottest topics not just in technology at the moment, but in the world of business and enterprise at large. Everywhere you turn, people are talking about 5G, websites and magazines are publishing articles about 5G, and trade organisations and technology leaders are hosting conferences about 5G worldwide. The hype is huge, but sometimes it can feel tough to keep up with the conversations. Why? Because like most cutting-edge developments in technology, 5G comes with its own jargon, its own technical language, its own sometimes bewildering lists of abbreviations.

Ahead of 5G World 2020, we're publishing a series of blogs all about 5G, aiming to give you a complete lowdown on what it is, what it promises, who the key players are and how rollout is progressing. And to help you feel more confident holding your own in conversations around 5G going forward, here's our essential guide to all the key terminology and main technological concepts you need to know.

Let's throw ourselves right in the deep end and start with one of the more technical topics you are likely to encounter in discussions around 5G: standards. Just to give a little background, standards, which can be thought of as sets of technical specifications, play a crucial role in how mobile technology works. One of the great strengths of mobile, and something which we as mobile users have come to expect, is that it offers consistency in network access and connectivity wherever you go. That would not work if all of the world's operators were building their networks differently, or if device manufacturers were using their own proprietary technologies to connect to networks. For consistency and universal access, you need everyone singing from the same hymn sheet. You need standards.

Mobile standards are therefore very important and technically very detailed pieces of documentation, setting out everything from how networks and base station cells should be configured and which spectrum is used, to access protocols and security. With each new generation of mobile, new standards come along (sometimes more than one), usually as a result of work by large mobile industry bodies.

In reading around and talking about the subject, you are likely to come across references to two different sets of 5G standards: the IMT-2020, a piece of documentation drawn up by the International Telecommunication Union's Radiocommunication wing (the ITU-R), and the 5G New Radio (NR) standard developed by 3GPP, a body which has taken the lead on mobile standards development since 3G. How do these two 5G standards relate to each other? Well, rather than being in competition, the IMT-2020 is best understood as a theoretical framework for what 5G should look like and be able to achieve (such as the peak 20 Gbps download target), while 5G NR is a practical proposal for how these objectives can be met. We will outline some of the key technologies described in the 5G NR standard below.

Three Use Cases

One of the best-known features of the IMT-2020 documentation is that it outlines three specific use cases for fifth-generation mobile which, since its publication in 2015, have served as the primary source of reference in most discussions about what 5G might achieve. This is a classic example of the abbreviated names for these proposed use cases sounding much more technical and daunting than they actually are, so let's unpick what they actually refer to.
- Enhanced Mobile Broadband (eMBB): Since the arrival of 3G and then 4G mobile, mobile phones have become as much about accessing data services and the internet as making calls and sending texts. eMBB is the promise of 5G taking mobile broadband access to the next level - rivalling and even exceeding wireline broadband services in speed and efficiency, allowing us to access data intensive applications like ultra-HD video streaming on the move without a hitch, perhaps even providing a platform for eventually replacing fibre connections with wireless. - Ultra-Reliable Low-Latency Communications (uRLLC): In setting out its 5G standard, the ITU-R was clear that it wanted the technology to go much further than simply improving on mobile services as we already know them. It wanted 5G to break the glass ceiling on cellular capabilities and enable a much wider range of use cases for mobile, ushering in a brand new era for wireless connectivity. uRLLC is one specific example of this ambition - to create mobile connections so reliable and data communications so close to instantaneous that 5G would become a viable option for use cases where any slip, any lag in the connection could literally be a matter of life or death. Examples of where uRLLC is expected to be applied include autonomous vehicles which need to respond to signals from on-road infrastructure and other vehicles with minimal delay in order to avoid collisions, and surgical robots performing procedures under the guidance of specialists based at a distant hospital, where any issue in data communication could put the well-being of the patient at risk. - Massive Machine-Type Communications (mMTC): Finally, the ITU-R recognised that if licensed cellular services were to become the network option of choice for industrial IoT applications in sectors like manufacturing, utilities and agriculture, they would have to deliver solutions that were more focused on connection density than data speeds, and low cost rather than low latency. Individual IoT sensors only transmit data in low volumes, so speed and capacity is not such an issue. But to make something like a fully automated smart factory or warehouse viable, you have to deploy such sensors in their thousands. So while much of the focus on 5G centres on the potential for data-intensive, mission-critical applications, mMTC outlines how 5G can also become a key enabler in the next phase of development of IoT, by delivering low data, low energy connections at massive scale. So how will 5G deliver on all of these promises? Here are some of the key technologies set out by 3GPP’s 5G NR standard and proposed elsewhere. - Network virtualization: Virtualization is a concept that has, over the course of the past decade or so, been perfected in IT and is best known as the underlying technology which enables cloud computing. In brief, virtualization involves using software to mimic the functions of a physical asset (like a computer server or a mobile network). This is often referred to as ‘abstracting’ functions from the underlying physical resource. The key benefits are that virtualized versions of servers, routers, networks and so on are much more efficient, flexible and scalable than the hardware equivalents. Network virtualization started with 4G, but 5G networks will be completely virtualized, meaning mobile services are managed and provisioned by software, not by physical hardware. 
This is viewed as critical to achieving many of the core ambitions of 5G, from increasing available capacity through more efficient use of spectrum, to handling millions of connections within a relatively small area simultaneously, to managing and prioritising traffic so there are no signal log jams causing latency. For example, virtualization opens the door to techniques like 'network slicing', a multiplexing approach which makes it possible to run multiple services over the same piece of spectrum.

- Millimeter Wave (mmWave) Spectrum: Radio waves operate at a range of frequencies defined by their wavelength. Lower frequencies, with their longer wavelengths, have long been used by TV and radio because they are capable of travelling long distances, but their bands are narrow and are now close to capacity in terms of how much signal traffic they can carry. 3G and 4G mobile in particular have made extensive use of mid-range bands, but again, we're getting close to a stage where these bands are overcrowded. Higher frequencies, or so-called millimeter wave spectrum, represent a huge untapped spectrum resource, not least because shorter wavelengths also mean wider channels, which means lots and lots of extra capacity (a quick calculation below makes this concrete). 5G is unique amongst the five generations of mobile network technology to date in that it proposes to make use of spectrum at low, mid and high frequencies, as well as high-capacity wide bands and lower-capacity, lower-power narrow bands (i.e. for mMTC). The drawback with short waves is that they only travel short distances. Network providers are proposing to solve this problem by creating incredibly dense networks made up of very small cells. Although such high-density networks are expensive to build, in heavily populated urban centres especially they are viewed as a more efficient long-term solution than fibre-to-the-premises (FTTP) wired internet connections. High-frequency, high-density 5G networks would therefore provide the foundations for a wholesale switch to wireless broadband, a long-term ambition of eMBB, as well as provide infrastructure for public IoT initiatives like Smart Cities.

- Massive MIMO: 5G also promises to transform the way devices connect to a mobile network. All previous generations of mobile technology have involved base stations transmitting spectrum like a floodlight across their allocated area, with phones connecting to this broadly dispersed signal. But this leads to a lot of waste. 5G will make use of a technology known as multiple input multiple output (MIMO), which transmits signal as beams rather than dispersed fields, with individual beams tracking a user throughout the cell area. Massive MIMO (massive here being a reference to the high density of beams required) is expected to vastly increase how efficiently available spectrum can be used, which in turn will lead to increases in speed and available capacity.
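The "wider channels mean more capacity" point above can be made concrete with the Shannon capacity formula, C = B · log2(1 + SNR), where capacity grows linearly with channel bandwidth B. Here is a rough Python sketch; the channel widths are typical figures used for illustration, and the 20 dB SNR is an assumption, not a number from any standard.

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_linear):
    # Theoretical channel capacity: C = B * log2(1 + SNR)
    return bandwidth_hz * math.log2(1 + snr_linear)

snr = 10 ** (20 / 10)  # assume a 20 dB signal-to-noise ratio

for label, bw_hz in [("20 MHz channel (typical of 4G)", 20e6),
                     ("100 MHz mid-band channel", 100e6),
                     ("400 MHz mmWave channel", 400e6)]:
    gbps = shannon_capacity_bps(bw_hz, snr) / 1e9
    print(f"{label}: ~{gbps:.2f} Gbps theoretical ceiling")
```

The takeaway is simply that, at a fixed signal quality, a twenty-times-wider channel buys roughly twenty times the theoretical capacity, which is why the untapped width of mmWave bands matters so much.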
Encoding data transforms it into another format so that it can be read by another party that has the system (algorithm, cipher) used to encode it. Encoded data is made widely available by the sender, quite deliberately, so that it is easily consumed by the recipient(s). The ease of access ensures that the message is delivered, and although no key is needed (as there would be with encryption), the recipient does need to have the algorithm that makes and unmakes the transformation between formats. Once the encoded message is run through that algorithm or cipher and is back in its original format, it is said to be decoded. (A short code illustration appears at the end of this entry.)

"The 2002 American film 'Windtalkers' starring Nicolas Cage and directed by John Woo depicts a true story of US Marines in the Solomon Islands during World War II. The USMC detail is charged with protecting Navajo code talkers, or Native American marines who encoded sensitive messages simply by translating them into Navajo, a language with which the Axis adversaries were unfamiliar."
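As a concrete illustration of the encode/decode round trip described above, here is a short Python example using Base64, one of the most common encodings. Note the defining property: no secret key is involved, so anyone who knows the algorithm can decode the message.

```python
import base64

message = b"attack at dawn"
encoded = base64.b64encode(message)  # transform into another format
decoded = base64.b64decode(encoded)  # anyone with the algorithm can reverse it

print(encoded)   # b'YXR0YWNrIGF0IGRhd24='
assert decoded == message
```

This is exactly why encoding provides no confidentiality on its own; the Navajo example above worked only because the "algorithm" (the language itself) was unfamiliar to the adversary.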
In this video, I've explained an introduction to Python programming:
i.) The way a program works
ii.) What is a program?
iii.) Basic instructions that appear in every language
iv.) Getting started with Python
v.) Downloading and installing Python
vi.) Features of Python
vii.) Running Python
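To accompany the "Running Python" item, here is the kind of first program such an introduction typically builds up to. The file name is just an example.

```python
# Save this as hello.py and run it from a terminal with:  python hello.py
name = "world"
print(f"Hello, {name}!")  # prints: Hello, world!
```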
How to Neutralize Quantum Security Threats (TowardDataScience). Quantum computers are on the brink of maturity, and they're so powerful that they can solve complex mathematical problems in minutes and threaten to shatter today's security protocols. With signs of progress, experts expect quantum use cases, such as simulations for research in medicine, finance, or other fields, to take place as soon as 2022. Full-blown applications should be in use by 2026, and commercial use of quantum computing should be widespread by 2030. These projections also mean that, by the end of this decade, virtually any encryption we're using today could be useless. In the worst-case scenario, an irresponsible pioneer in quantum computing could break into the systems of governments, enterprises, or global organizations and wreak pure havoc. We need to think about building encryption that outsmarts quantum computers so that we can reap the benefits of these machines without letting them compromise our security.

Two different systems of cryptography exist today. The first one, symmetric or private-key encryption, uses the same key to both encrypt and decrypt the data. This type is used for all kinds of communications and stored data. The second system, asymmetric or public-key encryption, uses two keys that aren't identical but are mathematically linked. It's used to exchange private keys, but also for any kind of digital authentication. (A brief code sketch of the two systems appears at the end of this post.)

The U.S. government is aware of the threat that quantum computing poses to cryptography. In 2018, the White House published a national strategy for quantum IT, which includes goals regarding quantum security. Congress then passed the National Quantum Initiative Act, which requires the president to be advised about developments in the field as well. In addition, this act puts the National Institute of Standards and Technology (NIST) in charge of checking up on quantum development, notably quantum cybersecurity. The NIST has taken its role seriously: by 2022, it aims to publish a new set of standards for post-quantum cryptography. These standards would include algorithms that even quantum computers can't crack.

As the World Economic Forum suggests, we also need to build so-called quantum literacy among government officials. This training would make them less dependent on constant advice and allow them to make fundamental decisions faster. This guideline doesn't only apply to the government, though. Enterprise leaders should be fluent in quantum technology too. For businesses, there are important preparatory steps that go beyond educating their leaders and adopting security protocols. Enterprises should aim to get their whole infrastructure and their products crypto-agile, i.e., able to adopt new security protocols as soon as they become available.

As with most worst-case scenarios, a quantum security apocalypse is not the likeliest of all cases. The fact that the U.S. government is investing heavily in post-quantum security and that top tech firms are involved in the development of new protocols is reassuring. Still, you shouldn't pretend that the threat doesn't exist for you.

NOTE: Excellent and lengthy article to share with business and enterprise colleagues.
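To make the two-cryptosystem distinction above concrete, here is a brief sketch using the third-party Python `cryptography` package (installed with `pip install cryptography`). The messages and key sizes are illustrative only; RSA, shown here, is precisely the kind of public-key scheme that a sufficiently large quantum computer is expected to be able to break.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Symmetric (private-key): one shared key both encrypts and decrypts.
key = Fernet.generate_key()
f = Fernet(key)
token = f.encrypt(b"stored records")
assert f.decrypt(token) == b"stored records"

# Asymmetric (public-key): a mathematically linked key pair.
# Encrypt with the public key, decrypt with the private key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(b"session key", oaep)
assert private_key.decrypt(ciphertext, oaep) == b"session key"
```

In practice, as the article notes, the asymmetric half is often used exactly as the variable names suggest: to exchange the private keys that symmetric encryption then uses for the bulk of the data.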
What Is Data Hygiene and Why Is It Important?

Many organizations are already cashing in on the promise of big data, hailed as the world's most valuable resource. However, this crude resource requires refining in the form of data hygiene. Data errors and inconsistencies cost companies millions of dollars a year. Businesses that aren't able to implement the tools, strategies, and training required often find big data to be more of an obstacle than an advantage. Until business leaders invest in strong data hygiene practices, big data's promise will continue to remain elusive.

What is data hygiene?

As you design your approach, it's helpful to start with the data hygiene definition. Data hygiene is the process of cleaning datasets or groups of data to ensure they're accurate and organized. "Clean" data is data that is error-free, simple to understand, organized, and easy to duplicate. Data hygiene is a little more complex than simply correcting spelling errors. Data can be outdated, incomplete, duplicated, or inaccurate; as a result, it takes more than using spellcheck to ensure clean data.

Why is clean data important?

Dirty data is an expensive problem. A survey of global businesses by Experian found that "Over three quarters (77%) say that inaccurate data hurt their ability to respond to market changes during the pandemic, while 39% say poor quality data has negative effects on customer experience." Experian estimates that dirty data can cost the average business 15-25% of revenue: a $3 trillion loss to the US economy each year.

Bad data comes from a variety of sources. Human error and poor internal communication are the root causes of most dirty data, but these issues are compounded by the lack of a data strategy in many organizations. "When different departments are entering related data into separate data silos, even good data strategy isn't going to prevent fouling downstream data warehouses, marts, and lakes," wrote one expert. "Records can be duplicated with non-canonical data such as different misspellings of names and addresses. Data silos with poor constraints can lead to dates, account numbers or personal information being shown in different formats, which makes them difficult or impossible to automatically reconcile."

Improving data hygiene can also be a time-consuming task when there are no data strategies in place. One estimate found that knowledge workers spend up to 50% of their time manually finding and correcting inaccurate data. Therefore, instituting data hygiene best practices can not only improve financial outcomes but also reduce the amount of time and resources dedicated to correcting dirty data.

Data hygiene best practices

Seeking to improve data hygiene at your organization? Here are some steps to follow to reduce the costs of dirty data and optimize the data needed for key business decisions.

Perform a data audit

Before you invest in tools and processes to improve your data hygiene, it's important to establish a baseline. According to Forbes, "About 27% of business leaders aren't sure how much of their data is accurate." Determine the quality of your data to set achievable, quantifiable data hygiene KPIs. Your audit should examine all the systems that your company uses to collect, use and store data.
Within each system, determine which data fields are necessary; for both compliance and efficiency, your business should only collect the data it needs. Note any naming conventions or formatting differences from one system to the next.

Practice data governance

Data governance is the principled approach to managing data during its life cycle: from the moment you generate or collect data to its disposal. By mapping out how data is used throughout your business processes, you can identify points where entry errors or communications mistakes may occur. Assess how data moves through the organization: Where is it collected? Where is it being stored? Who is accessing it, and on what device? Not only can this show you where there is room for error, but it can also reveal where security vulnerabilities may exist. [Read more: 4 Data Governance Best Practices]

Standardize data input

Create rules for users across the organization who work with datasets. Naming conventions, formatting, and other constraints should be enforced through training. Set rules for things such as:
- Abbreviations (Ave., St. vs avenue and street)
- Salutations (such as Ms. or Mr.)
- Numbers (1,000 or 1000)
- Home vs business address (which will you collect?)
- Phone numbers (123-1342 vs 1231342)

A good general rule of thumb is to keep data entry as simple as possible. Don't use capitalizations or abbreviations, since these can easily mess up a data set. Try to eliminate fussy formatting to reduce the potential for human error. (A small cleanup script illustrating these rules appears at the end of this section.)

Use data cleansing tools

Data monitoring and cleansing tools can help root out instances of inaccurate or messy data. These tools use natural language searching, data modeling, and machine learning to identify patterns and anomalies. Data cleansing tools come in a range of different prices and capabilities. Some tools, like DeDupley, specialize in one area of data cleansing, such as removing duplicates. Other options, such as Experian Data Quality, can help you check emails, addresses, and telephone numbers in bulk. As you explore different tools, look for software that can automate some of the time-intensive manual processes that often result in mistakes.

A data loss prevention tool like Nightfall adds an important layer to improve data security. Nightfall automatically scans both structured and unstructured data in cloud security programs for instances in which PII, PHI, PCI, credentials, or secrets have been shared insecurely. This can help improve data hygiene, as detectors can send an alert when a formatting error or dirty data has created a vulnerability in your system.

Reduce organizational siloes

Finally, a key aspect of data hygiene is sharing consistently among internal teams. For instance, reducing siloes within teams like sales and marketing can significantly improve data hygiene. "Every year, sales departments lose approximately 550 hours in selling time (the equivalent to 27% of each rep's total selling time) as a result of poor CRM prospect data," wrote Forbes. "Marketing departments are similarly crippled by the very real pain associated with dirty data. 60% of marketers don't trust the health of their data."

Training, standardization, and the right tools are all key components of improving data hygiene. By implementing a more streamlined, accurate approach to collecting and using company data, organizations can immediately start saving time and money.
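As promised above, here is a small sketch of what enforcing those standardization rules can look like in practice, using the pandas library. The column names, formats, and sample records are hypothetical examples, not a prescription.

```python
import pandas as pd

df = pd.DataFrame({
    "name":   ["Ms. Jane Doe", "jane doe", "Mr. Bob Ray"],
    "phone":  ["123-1342", "1231342", "(555) 123-1342"],
    "street": ["12 Main St.", "12 Main Street", "9 Oak Ave."],
})

# Enforce one phone format: digits only.
df["phone"] = df["phone"].str.replace(r"\D", "", regex=True)

# Normalize case and strip salutations from names.
df["name"] = df["name"].str.lower().str.replace(r"^(ms|mr|mrs)\.\s*", "", regex=True)

# Standardize spelled-out street types to one abbreviation.
df["street"] = df["street"].str.replace("Street", "St.", regex=False)

# Drop the duplicate records that standardization has just revealed.
df = df.drop_duplicates(subset=["name", "phone"])
print(df)
```

Notice that the first two rows only become recognizable as duplicates after the formats have been standardized, which is exactly why standardization and deduplication belong together.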
Increasingly, data security professionals are using the term data hygiene to refer to their security posture within cloud environments. For example, listen to a clip from our podcast episode with Bent Lassi, the CISO of Bluecore, where he discusses what the term means to him.

Within the context of data security, data hygiene specifically refers to the practice of ensuring that sensitive data is only stored within sanctioned environments and that any inappropriately disclosed sensitive data is removed from environments where it doesn't belong. The risk of poor data hygiene within the context of data security is that misplaced information can be discovered by unauthorized parties. In the case of PII or other customer data, such intrusions can constitute data breaches, which can cost organizations upwards of tens to hundreds of thousands of dollars, if not much more. However, even in instances where PII or other customer data isn't exposed, sensitive data leakage can provide opportunities for lateral movement or privilege escalation into more sensitive areas of an organization's tech stack. For example, when API keys are posted to GitHub and discovered by threat actors, they can be used by unauthorized parties to access third-party accounts and services. This risk exists across all SaaS and cloud environments and requires that organizations adopt a zero trust posture towards data security, generally through tools like Nightfall, which enable continuous data security and compliance. The rise of cloud misconfigurations and supply chain attacks are two trends that have only increased the urgency of this need.

Data security hygiene is often wrapped together with the concept of cyber hygiene, or more generally, the act of identifying vulnerabilities and lowering the risk of insider threats before they become costly liabilities.

Want to learn more? You can find out more about data security hygiene and get started with Nightfall by scheduling a demo. Nightfall is the industry's first cloud-native DLP platform that discovers, classifies, and protects data via machine learning, and is designed to work with popular SaaS applications like Slack, Google Drive, GitHub, Confluence, Jira, and many more via our Developer Platform.
Organizations of all sizes have a complex array of hardware, software, staff, and vendors. Each of those assets comes with complex configurations and relationships between them. Visualizing and tracking these configurations and relationships over time is critical to responding quickly to incidents. Plus, it helps inform business decisions, especially regarding future IT components and upgrades.

Any organization familiar with the ITIL framework will know the term configuration management database (CMDB). This unique database aims to track a company's assets and all of the complex relationships between them. However, designing a configuration management database is not that easy. You must consider what to include, how to find it, the intricacies of maintaining it, and everything in between.

Are you interested in implementing a configuration management database in your IT department, or do you need help improving a CMDB project gone wrong? If so, this guide will help you design a CMDB that is feasible to maintain and accessible to everyone who needs it.

- What Is a CMDB?
- Why Is a CMDB Important?
- How Does a CMDB Work?
- Characteristics of a CMDB
- Should You Implement a CMDB?

What Is a CMDB?

A configuration management database (CMDB) is unlike other databases because it's designed entirely for internal management and control purposes. A CMDB acts as a central repository. It's used to track and control the relationships between various IT assets and their established configurations. For any company implementing the Information Technology Infrastructure Library (ITIL) framework, a CMDB is crucial to IT processes.

The ITIL framework lays out many crucial IT standards and processes. These pertain to incident response, availability, deployment management, and other key activities. The framework makes suggestions to help better align these IT activities with business objectives. Doing so recognizes that the most up-to-date and accurate information must inform these processes and the resulting decisions. So, to execute the framework, IT departments require good configuration management. That means enlisting the help of a CMDB.

Configuration management aims to give a team the context it needs to evaluate an asset. Instead of viewing it in a silo, the IT department can look at the CMDB to see how it relates to other assets. They can then see how changing its configuration will impact the organization. This information allows IT managers and administrators to make better-informed decisions. Thus, a CMDB helps plan releases, deploy new components, and respond to incidents.

For example, if something disrupts the business's network and impacts all workstations in a given department, an IT administrator would have difficulty manually tracking down the routers and servers involved in the issue. This would lead to a great deal of trial and error or information hunting just to start step one of resolving the issue. On the other hand, if that administrator has a CMDB to reference, they can immediately figure out the routers, servers, and other infrastructure involved.

Even with a basic example such as this, it's clear to see that a CMDB is incredibly valuable for IT professionals. A CMDB takes time to set up and maintain. However, its ability to speed up incident resolution, simplify deployments, and better inform IT decisions means the investment will pay off rapidly.
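To make the router example concrete, here is a minimal Python sketch of a CMDB as a dependency graph, with a breadth-first walk to find every CI affected by a failure. The CI names and relationships are invented for illustration; a real CMDB tracks far richer attributes and relationship types per CI.

```python
from collections import defaultdict, deque

# Toy CMDB: map each CI to the CIs that depend on it. All names invented.
dependents = defaultdict(list)

def add_dependency(ci: str, depends_on: str) -> None:
    dependents[depends_on].append(ci)

add_dependency("server-01", "router-01")        # server sits behind the router
add_dependency("workstation-17", "server-01")
add_dependency("workstation-18", "server-01")
add_dependency("payroll-app", "server-01")

def impacted_by(failed_ci: str) -> set[str]:
    """Breadth-first walk downstream of a failed CI."""
    impacted, queue = set(), deque([failed_ci])
    while queue:
        for dependent in dependents[queue.popleft()]:
            if dependent not in impacted:
                impacted.add(dependent)
                queue.append(dependent)
    return impacted

print(impacted_by("router-01"))
# {'server-01', 'workstation-17', 'workstation-18', 'payroll-app'}
```

Instead of hunting manually, the administrator asks the graph one question and immediately knows the blast radius of the failed router.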
Why Is a CMDB Important?

The role of a CMDB in the IT department is clear. With all of the information in front of them, IT professionals can make better decisions about incident resolution, system updates, and infrastructure upgrades. The result is more efficient resource utilization and less trial and error. In turn, that helps the entire organization continue running smoothly.

In addition to giving IT insight into how an organization's data assets are being controlled and connected, a CMDB also reveals data that's siloed in various departments. This information helps organizations restore accessibility and visibility at scale. A CMDB improves data governance. In turn, that helps support the mission-critical activities of the company's planners, accountants, and operations staff.

- The IT department is empowered to resolve issues more quickly by understanding the connections between affected systems at a glance. Likewise, they have the information they need to inform decisions. Integrations, upgrades, and deployments happen more smoothly. Visibility minimizes issues and downtime.
- The planning department needs the CMDB in order to plan high-level enterprise architecture. Technology managers also need the insights a CMDB provides to manage assets and capacity at a more granular level.
- The accounting department requires a more detailed overview of various assets and their associated costs. This supports accurate billing and budgeting.
- The operations department relies on the CMDB to inform change and incident management. The CMDB helps identify root causes, changes, risks, and other key indicators. The Ops team needs this information to prevent issues and keep processes running smoothly.

As you can see, the CMDB's role has a far-reaching impact that ultimately touches every facet of an organization. A lack of visibility will directly impact operations, compliance, and reporting. That's why implementing a CMDB helps businesses overcome inefficiencies.

How Does a CMDB Work?

CMDBs work by gathering data from different sources and storing information about IT assets and other configuration items in a common place that is easily accessible. Even for a small company, a CMDB is necessary. Once an IT department begins analyzing all of its assets and the complicated relationships between them, it will discover a substantial amount of information that must be stored. Plus, that information needs to be updated often. Using a CMDB is regarded as the most efficient way to store IT information. After all, it can track complicated configurations, relationships, and dependencies with ease.

When designing a CMDB, you should plan to enter all known assets. These assets are referred to as "configuration items" (CIs). Once all assets are entered, it is then the responsibility of the IT department to connect the dots. That means defining the various relationships between the CIs. There are several assets that a department may need to track. Some examples include hardware, software, documentation, and vendors.

Both manual and automated tools exist to help IT departments discover their assets and the relationships between them. While it's not possible to achieve and maintain complete accuracy, departments should strive to keep the CMDB as up to date as possible. If it's not updated, the CMDB won't be able to serve its purpose effectively.

As far as who should be in charge of creating the CMDB, it's a group effort. Once the CIs have been identified, their respective owners should be brought into the process as early as possible.
These individuals will hold helpful knowledge about the asset and its complex relationships. The involvement of these stakeholders helps to make sure that the CMDB is accurate and complete.

Once data has been brought into the CMDB, the challenge becomes maintaining it. Certain characteristics set a good, usable CMDB apart from those that are ultimately abandoned. Failing to prioritize these characteristics could mean the CMDB eventually falls out of use due to inefficiencies and resource consumption.

Characteristics of a CMDB

Now you have a big-picture understanding of how a CMDB works and the role it plays in IT and the ITIL framework. However, it's also important to approach it in a more practical sense. A CMDB may store hundreds, if not thousands, of CIs. How are these discovered, maintained, and utilized on a day-to-day basis? That depends on the exact features and characteristics of the CMDB you're designing.

The first characteristics that need to be identified relate to the creation and maintenance of the database itself. Departments will need to pull in data both manually and with API-driven integrations. There should also be automation involved. Without automated discovery, accurately creating and maintaining the CMDB will prove challenging. So, incorporating scanning tools into the CMDB should be a top priority.

During the creation and throughout its use, the department needs to maintain a graphical representation of all the CIs in the database. You should be able to see how CIs depend on each other at a glance. This is known as service mapping. Some CMDB tools can generate a service map automatically. Visualization is important for the organization-wide understanding of the assets. It's also essential for quickly communicating potential challenges when considering changes to the IT infrastructure.

Once established, a CMDB should be intuitive, accessible, and visual whenever possible. This starts by implementing dashboards that track specific metrics about the CIs and their relationships. For instance, IT departments should be able to pinpoint how a change or release impacts the health of relevant CIs. The dashboard should also reveal patterns in incident reports, outstanding issues, and associated costs.

The IT department should also have visibility into compliance, especially when working at the enterprise level. Auditors need to know the state of CIs and have access to historical incidents and changes. For that reason, transparency and reporting are critical characteristics of a CMDB.

Users need access to the database, but what they can view and change must be limited. For that reason, access controls are another essential characteristic. A lack of access controls will lead to significant data integrity and compliance challenges.

As you can see, the design of a CMDB can grow very complicated very fast. This is why the IT department must gather key stakeholders. Teams must discuss the organization's compliance needs and other considerations before they implement a CMDB. With a well-informed team in place, a business is empowered to design underlying infrastructure that's feasible to maintain and use daily.

Should You Implement a CMDB?

Implementing a CMDB helps organizations manage their assets and regain visibility into their data and infrastructure. Any organization following the ITIL framework needs a CMDB. However, smaller companies may feel that they will not be able to realize great value from one.
In truth, companies of all sizes—including small businesses—are finding that a CMDB is becoming more important. No matter the size of your operations, you are not exempt from complying with data privacy and protection regulations. As data governance standards grow more strict, visibility is crucial. In addition, a CMDB helps companies improve the observability of their systems. Even smaller companies struggle as data and assets become more distributed across the cloud, on-premises, and third-party applications. With all that in mind, a CMDB is likely a worthy investment for your business. The good news is that you do not have to build your CMDB from scratch. There are several solutions providers that can help your company establish a CMDB. They even come with the associated dashboards, tracking, and access controls in place. The result is a CMDB that’s easier to implement, use, and maintain. Achieving that reality takes the right partners.
A Battery of Options

Connectivity can disappear in the blink of an eye, and in the data center world, time is critical to maintaining reliability. In the fractions of a second between an unexpected power outage and back-up power generation firing up, servers could be forced into reboot if they lose power. That can disconnect vital data streams for minutes or even hours – unacceptable in today's hyper-connected world.

Data centers use an Uninterruptible Power Supply (UPS), a battery back-up system that fills in the crucial few milliseconds of gap between a power outage and back-up power generation to ensure no loss of service. Typically, this means valve-regulated lead acid (VRLA) batteries or lithium-ion batteries, but each has its positives and negatives.

VRLA offers a certain set back-up time, and CyrusOne has chosen to use VRLA at an 8-to-10-minute interval. When used, its voltage drops off drastically for a brief period and then rebounds. That means users must size the battery to accommodate the voltage dip. At the beginning of its life, it has an 8- to 10-minute back-up time. By the end of its life, it has a four-minute back-up time. In total, VRLAs have a five-year lifespan.

About 96% of utility outages last 10 seconds or less – a short period of time when a data center would have to run on batteries. The idea is to provide a power source allowing two chances for the back-up power generator to start. Normally, generators start on the first try. Occasionally they don't, and they'll go through a timing sequence and then try a second time.

When a VRLA battery is hit with a load, it degrades slightly. Over a five-year period, it can sustain about 500 hits before needing replacement. VRLAs are also heavy. If a data center wanted to set up a VRLA battery for a 2-megawatt UPS, the VRLAs would weigh about 55,000 pounds. That requires more storage space within a data center. The time from when the manufacturer ships VRLA batteries until they get installed cannot exceed about 30 days. Once shipped, they must be installed and charged, or they'll start losing their capacity. In terms of cost, a VRLA battery needs more frequent charging. While cheaper than lithium-ion, the more frequent need for replacement makes VRLAs more expensive in the long term. Plus, there are other costs and environmental considerations when having to dispose of them safely every five years.

Lithium-ion batteries also provide a certain set back-up time and can be hit with a load anywhere from 5,000 to 17,000 times over the course of their 15-year lifespans. CyrusOne uses lithium-ion with a 3-minute back-up time. And often, lithium-ion is used in peak shaving and other applications besides just transitioning to generator power because it can be hit so many times without affecting its overall life. If a data center wanted to set up a lithium-ion battery for a 2-megawatt UPS, it would weigh about 14,000 pounds, compared with 55,000 for a VRLA. Additionally, lithium-ion can go six to eight months without needing to be charged. It's a more versatile option than VRLA.

No two lithium-ion batteries are alike – each manufacturer has a different recipe or chemistry. Some are safer, some have better run times. A data center must pick the right chemistry to suit its needs, which requires investigation. In terms of cost, a lithium-ion battery lasts longer, and environmentally safe disposal is therefore also less frequent. But lithium supplies are increasingly limited, making cost fluctuate based on supply and demand.
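A back-of-the-envelope way to see the long-term cost trade-off is to count replacements over a common horizon. The Python sketch below uses the lifespans quoted above, but the unit and disposal prices are placeholder assumptions, not CyrusOne figures.

```python
import math

HORIZON_YEARS = 15  # compare both chemistries over one lithium-ion lifespan

def total_cost(unit_price: float, lifespan_years: int, disposal_cost: float) -> float:
    """Units purchased (and disposed of) across the horizon, times unit cost."""
    replacements = math.ceil(HORIZON_YEARS / lifespan_years)
    return replacements * (unit_price + disposal_cost)

# Placeholder prices for a 2-megawatt UPS string; lifespans follow the article.
vrla = total_cost(unit_price=100_000, lifespan_years=5, disposal_cost=5_000)
li_ion = total_cost(unit_price=250_000, lifespan_years=15, disposal_cost=5_000)

print(f"VRLA total over {HORIZON_YEARS} years:        ${vrla:,.0f}")
print(f"Lithium-ion total over {HORIZON_YEARS} years: ${li_ion:,.0f}")
```

Even with a much higher sticker price, a battery that is replaced once instead of three times can come out ahead once disposal is counted.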
Lithium-ion batteries have also developed a reputation for being a fire hazard. This might have been true with earlier lithium-ion technologies, but today, lithium-ion systems all have computerized monitoring systems that prevent those fires. Every cell in a lithium-ion application is monitored by a computer. If a cell starts to overheat, it is taken out of the circuit. So, the chances of actually having a lithium-ion fire are small. But because of past incidents, using lithium-ion in a data center depends on the market. Each market will have different regulations and inspectors who must approve a lithium-ion system to ensure safety. CyrusOne is currently testing lithium-ion systems on-site to ensure efficiency and safety.

Nickel-zinc could lead the charge

The future of UPS could be nickel-zinc (NiZn), an option that doesn't have the same supply issues as lithium-ion and offers more power density and a smaller footprint than VRLA. These batteries can operate at a temperature of 85 degrees Fahrenheit, while VRLA batteries require a top temperature of 77 degrees. The lifespan for NiZn batteries is 10 to 15 years, with a warranty of 10 years – significantly longer than VRLA. They also weigh 33% less than VRLA. NiZn is a cheaper option, too, with a total cost of ownership over its lifespan some 60% less than VRLA. Even better, NiZn is more sustainable – 98% can be recovered to battery-grade specs without the use of high heat during recycling.
In situations with little or no lighting, a security camera can use infrared light to illuminate the area, and the ICR plays an important role in that. In this article, you will learn what the infrared cut filter is and what the day/night feature available in professional security cameras means.

What is ICR (Infrared Cut filter Removal)?

Before talking about ICR, let's understand a little bit about how a security camera works with light. During the day there is a lot of light in the environment due to sunlight. A security camera needs to filter out the excess light that hits the image sensor in order to generate good-quality video, and that's done by using an infrared light filter that blocks unwanted light coming through the camera lenses.

ICR (Infrared Cut filter Removal) refers to a filter that sits between the lens and the image sensor (CCD or CMOS) of a security camera and is used to filter the excess light that comes through the camera lenses. This filter helps the camera produce good-quality images with accurate color. See the following image with a diagram that represents the light reaching the camera lens and passing through the filter before hitting the image sensor. When buying a security camera, you can look for information about the existence of this filter (low-cost cameras usually do not have the IR cut filter).

Removing the filter at night-time

Due to the reduced light available in the environment at night, the camera automatically removes the infrared filter from its position. This is done to let more light reach the sensor, and the camera then shows black-and-white images. You can watch the removal of the filter at the moment it occurs; if you are curious to see that, just access the camera menu and remove the filter manually.

The Day/Night function setup menu

The security camera works according to the ambient light: with sunlight it operates in "Day mode," and at night-time it operates in "Night mode." That's why, when buying a security camera, you can choose a model that has the Day/Night function and works with the infrared filter (ICR).

The difference between true and electronic Day/Night

There are camera models that have the infrared filter (ICR) and therefore correct the excess light through such a filter; these are considered cameras with True Day/Night (they have the IR cut filter). Cameras that do not have this physical filter correct the light using an electronic process and are therefore considered cameras with electronic Day/Night (they do not have the IR cut filter).

The Day/Night function activation menu

See below the activation menu of the Day/Night function, which is available in an IP camera. (You can also activate this function in professional analog cameras.) The following picture shows the image during the day with the filter manually removed. As you can see, it's black and white because the ICR filter is out of position.

What kind of Day/Night is the best?

The "True Day/Night" is the best because there is a physical filter that is used during the day and removed at night; it's an optical process. Cameras that have this physical filter work much better at night when infrared illumination is used, so my suggestion is that you give preference to these types of cameras every time you need to buy surveillance devices.

The Day/Night activation methods

Usually, a security camera that has the Day/Night function automatically removes the infrared filter as the ambient light drops to a certain level.
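The switching logic is essentially a light-level threshold with some hysteresis, so the camera doesn't flip back and forth at dusk. The Python sketch below is a hypothetical illustration of that behavior only; the lux thresholds are invented and are not taken from any vendor's firmware.

```python
# Hypothetical day/night controller with hysteresis so the camera doesn't
# flip modes repeatedly at dusk. The lux thresholds are invented values.
DAY_LUX = 120    # above this, reinsert the IR cut filter (color day mode)
NIGHT_LUX = 80   # below this, remove the filter (black-and-white night mode)

class DayNightController:
    def __init__(self) -> None:
        self.filter_in_place = True  # start in day mode

    def update(self, ambient_lux: float) -> str:
        if self.filter_in_place and ambient_lux < NIGHT_LUX:
            self.filter_in_place = False          # remove the ICR filter
        elif not self.filter_in_place and ambient_lux > DAY_LUX:
            self.filter_in_place = True           # put the filter back
        return "day (color)" if self.filter_in_place else "night (B/W + IR)"

camera = DayNightController()
for lux in (300, 100, 60, 100, 200):   # a dusk-then-dawn light sequence
    print(f"{lux:>3} lux -> {camera.update(lux)}")
```

Notice that 100 lux keeps whichever mode the camera is already in; that gap between the two thresholds is what prevents flickering at twilight.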
If necessary, you can change the method of removing this filter: just use the camera menu and look for the option to keep the filter always in place. You can also schedule the function to be activated at specific times of the day or connect an external sensor to the camera to indicate when the Day/Night function should be activated.

The ICR information in the camera catalog

Before buying a security camera, you can consult the catalog or product manual and look for the type of Day/Night function that is available. Usually, you will find the information listed as Electronic Day/Night, True Day/Night, or ICR Day/Night (which is the same as True Day/Night). See the following picture with the information in a catalog of an IP camera.

Problems with the infrared

Sometimes it is possible to have problems related to the use of infrared light, which can blur the camera image at night. To learn more about this topic, read the other article here on the blog: Security camera blurry at night.

Before buying your security camera, make sure you are getting the model you need for the installation. If using infrared is very important, give preference to cameras with the True Day/Night function or look for the ICR information.

Want to become a better professional?

If you want to become a professional CCTV installer or designer, take a look at the material available in the blog. Just click the links below:

Please share this information with your friends...
August 17, 2022 What Is Malware? Almost 600,000 pieces of new malware are detected every day. Malware, a type of software designed to seriously damage or disable computer systems, can lead to data theft. It can spread through email attachments, downloads or physical media (think DVDs/Blu-rays). Malware can also be disguised as legitimate software. Once installed, malware can collect sensitive information, delete files or damage system hardware. Malware can be difficult to remove, and it often requires special software or manual removal techniques. Some types of malware, such as ransomware, can also encrypt files and demand payment for the decryption key. What are the different types of malware? There are many different types of malware. Each type of malware has its own specific purpose and method of operation. Types of malware include: Viruses are the best-known type of malware. They are small pieces of code that can replicate themselves and spread from one computer to another. Once a virus has infected a computer, it can cause the system to crash or delete important files. Worms are similar to viruses, but they don’t need to attach themselves to a program in order to spread. Instead, they can duplicate themselves and spread through networks of computers. Trojans are another type of malware used by hackers to gain access to a computer system. A Trojan disguises itself as a benign file or program, but once it is installed on a computer, it can allow a hacker to take control of the system. Spyware collects information about a computer user without their knowledge or expressed consent. Spyware can track a user's online activity and even record keystrokes. This helps the attacker steal sensitive information, such as passwords and credit card numbers. How does malware end up on a computer? In most cases, malware is spread through email attachments or downloads from untrustworthy websites. Once it's on a computer, it can do everything from deleting files to stealing personal information. That's why it's important to be careful when opening email attachments and downloading files from the Internet. How to prevent malware There are a few things that can be done to protect against malware. Install and maintain up-to-date anti-virus and anti-malware software. This software will help to detect and remove malware from your system. Use caution when downloading. Be careful about what you download and install on your computer, as malicious software can often be disguised as legitimate programs. Keep your operating system and other software up-to-date. Outdated software can often be exploited by malware. By taking these steps, you can help to protect your computer from malware. Malware defense supports business continuity Businesses that rely on computers are at high risk for malware attacks. This makes having a strong malware defense strategy critical for any business that wants to avoid downtime and business disruption. While there are many different types of malware, they all share the same goal: Harm a business by disrupting its operations or stealing its data. A strong malware defense strategy and a solution designed to detect malware will help to protect a business against these threats. It ensures that operations can continue in the event of an attack. Malware defense starts with a comprehensive understanding of the possible threats a business could face. By knowing which types of malware are most likely to target its systems, a business can take steps to protect itself. 
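One small building block of that protection is signature-based file scanning. The Python sketch below hashes files and compares them against a blocklist of known-bad SHA-256 digests. The digest shown is a placeholder, the directory name is hypothetical, and real anti-malware pairs signature checks like this with heuristics and behavioral analysis.

```python
import hashlib
from pathlib import Path

# Placeholder blocklist of SHA-256 digests of known-bad files.
KNOWN_BAD_SHA256 = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large files don't need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan_directory(root: str) -> list[Path]:
    """Return every file under root whose digest appears on the blocklist."""
    return [path for path in Path(root).rglob("*")
            if path.is_file() and sha256_of(path) in KNOWN_BAD_SHA256]

print(scan_directory("./downloads"))  # hypothetical folder to sweep
```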
Businesses that handle sensitive customer data should invest in robust anti-malware solutions. Similarly, companies that rely heavily on online operations or cloud computing should consider investing in malware defense measures. These might include firewalls and intrusion detection systems. By taking these steps, businesses can make it much harder for attackers to succeed in causing harm and business disruption. Businesses should also develop comprehensive incident response plans. These plans should be designed to minimize the impact of an attack and help the business recover quickly. For example, businesses should have procedures in place for backing up data and restoring systems. They should also have plans for communicating with customers and other stakeholders in the event of an attack. With these plans in place, businesses can mitigate the damage that a malware attack can cause and quickly resume operations. Managing threats to cybersecurity Malware defense is essential for any business that wants to protect itself from the growing threat of malware attacks. By taking steps to understand the threats it faces and investing in defensive technologies, businesses can make it much harder for attackers to succeed.
Cybersecurity in the Ballot Box, the Bistro and the Bedroom October is National Cybersecurity Awareness Month, a time when organizations across America join together to educate the public about cyberthreats like social engineering (especially phishing attacks). This year, it’s also the last full month to decide your vote for the 2020 election. As citizens consider the future of our country, we see the tech giants coming together to prevent election crime, while tech users struggle to keep up with device security. With online fraud on the rise, how do you know your business is protected from a cyberattack, especially when considering advanced techniques like social engineering? National Cybersecurity Month comes to us from organizations that promote assertiveness, rather than paranoia. We don’t have to be afraid of our connectivity or our devices. On the contrary, we need to embrace them holistically and attentively (and with a little help from the cybersecurity experts). How to stop social engineering attacks at work and at home Do Your Part. #BeCyberSmart. Home Connectivity: This week’s cybersecurity awareness theme is “Securing Devices at Home and Work.” When reviewing the year, did you spend time working from home? Did you have children suddenly in Zoom classes, rather than in a traditional classroom? Did you have the resources you need (virus, malware, and ransomware protection) to stay safe online? Business Technology: Your business couldn’t operate without digital interactions with devices outside of your office walls. Furthermore, your business can’t operate without a dedicated plan for protecting employee and customer data. How do hackers get into your system? Common external penetration methods include baiting, phishing, and spear phishing. Baiting: Curiosity killed the network First of all, baiting attacks can begin with hardware or with software. For example, a hacker can leave a corrupted flash drive on your desk, and the attack begins with the physical action of a user plugging it into a laptop and then clicking through files that install malware throughout the system. How to stop this social engineering technique from attacking your business begins with employee cybersecurity awareness training. October is a perfect month for bringing in external cybersecurity resources to help bolster your team. To begin, we can provide system assessments that surface hacker access points. Then, our engineers can test your users. For example, our security technicians can engineer a scareware drill to make users think they’re clicking to patch, when really they’re getting tricked into a click. If your employees understand the various forms of baiting, then you can prevent a data breach. Phishing: The one that got away Did you ever see a prompt to “click here” or “download now” from an email that was obviously fake? In the past, phishing emails were more obvious. A strange font or a missing signature was clue enough. Unfortunately, advanced social engineering technology now lets a cybercriminal twin a real user’s software behaviors. Because phishing is the most common social engineering tactic, NIST recently developed the Phish Scale, a cybersecurity tool that helps businesses surface network vulnerabilities by assessing cues, click rates, and user interactions in regard to phishing email difficulty levels. This new method of testing phishing attempts assists cybersecurity experts by evaluating spoofed emails through advanced data analysis. 
CIOs, CISOs, and other technology experts can use this tool to optimize phishing awareness and training programs. Spear Phishing: In IT together Often, a phishing email comes to your inbox addressed specifically to you but without personal information as part of its composition. Therefore, signs of imitation are more easily observed. “Click to download” prompts hesitancy if the email comes with a generic invitation. When an email comes through with more personalized data, like a personal email signature or an attached thread of coworkers, it can trick you into thinking the sender is legit. In this case, a hacker follows the digital footprints of a user and engineers that data to create a personalized phishing attack. Think of this as the Shakespeare of social engineering, and the play is written for you and with you as the inspiration. When organizations create security strategies in an effort to prevent social engineering attacks, phishing prevention is always a sign of a thorough plan. When considering phishing emails, keep in mind that malware can stay undetected in a system for months before the IT department discovers the penetration. Spear phishing can prompt a sly malware that quickly infects an entire network. Vote to Stop Cybercrime At EstesGroup, we know how to stop social engineering attacks from harming your business. Furthermore, we know how to take the worry out of IT (with managed IT). Protecting everything from saved credentials to individual clicks, our cybersecurity experts defend your business while you do the work you love. Do your coworkers need practice in recognizing the fraudulent behaviors fueling social engineering attacks? October is a perfect month to initiate new security policies and procedures, and to test your cybersecurity plan. EstesGroup is a 2020 National Cybersecurity Awareness Month Champion. We provide the most secure cloud solutions available to businesses. Read more about National Cybersecurity Month at the National Cyber Security Alliance (NCSA) or at the Cybersecurity & Infrastructure Security Agency (CISA).
Of the myriad technological advances of the 20th and 21st centuries, one of the most influential is undoubtedly artificial intelligence (AI). From search engine algorithms reinventing how we look for information to Amazon's Alexa in the consumer sector, AI has become a major technology driving the entire tech industry forward into the future. Whether you're a burgeoning start-up or an industry titan like Microsoft, there's probably at least one part of your company working with AI or machine learning. According to a study from Grand View Research, the global AI industry was valued at $93.5 billion in 2021.

AI as a force in the tech industry exploded in prominence in the 2000s and 2010s, but AI has been around in some form or fashion since at least 1950 and arguably stretches back even further than that. The broad strokes of AI's history, such as the Turing Test and chess computers, are ingrained in the popular consciousness, but a rich, dense history lives beneath the surface of common knowledge. This article will distill that history and show you AI's path from mythical idea to world-altering reality.

Also see: Top AI Software

From Folklore to Fact

While AI is often considered a cutting-edge concept, humans have been imagining artificial intelligences for millennia, and those imaginings have had a tangible impact on the advancements made in the field today. Prominent mythological examples include Talos, the bronze automaton of Greek myth that protected the island of Crete, and the alchemical homunculi of the Renaissance period. Characters like Frankenstein's Monster, HAL 9000 of 2001: A Space Odyssey, and Skynet from the Terminator franchise are just some of the ways we've depicted artificial intelligence in modern fiction.

One of the fictional concepts with the most influence on the history of AI is Isaac Asimov's Three Laws of Robotics. These laws are frequently referenced when real-world researchers and organizations create their own laws of robotics. In fact, when the U.K.'s Engineering and Physical Sciences Research Council (EPSRC) and Arts and Humanities Research Council (AHRC) published their five principles for designers, builders and users of robots, they explicitly cited Asimov as a reference point, though stating that Asimov's Laws "simply don't work in practice." Microsoft CEO Satya Nadella also made mention of Asimov's Laws when presenting his own laws for AI, calling them "a good, though ultimately inadequate, start."

Also see: The Future of Artificial Intelligence

Computers, Games, and Alan Turing

As Asimov was writing his Three Laws in the 1940s, researcher William Grey Walter was developing a rudimentary, analogue version of artificial intelligence. Called tortoises or turtles, these tiny robots could detect and react to light and contact with their plastic shells, and they operated without the use of computers. Later, in the 1960s, Johns Hopkins University built its Beast, another computer-less automaton, which could navigate the halls of the university via sonar and charge itself at special wall outlets when its battery ran low.

However, artificial intelligence as we know it today would find its progress inextricably linked to that of computer science. Alan Turing's 1950 paper Computing Machinery and Intelligence, which introduced the famous Turing Test, is still influential today. Many early AI programs were developed to play games, such as Christopher Strachey's checkers-playing program written for the Ferranti Mark I computer.
The term "artificial intelligence" itself wasn't codified until 1956's Dartmouth Workshop, organized by Marvin Minsky, John McCarthy, Claude Shannon, and Nathan Rochester, where McCarthy coined the name for the burgeoning field. The workshop was also where Allen Newell and Herbert A. Simon debuted their Logic Theorist computer program, which was developed with the help of computer programmer Cliff Shaw. Designed to prove mathematical theorems the same way a human mathematician would, Logic Theorist would go on to prove 38 of the first 52 theorems found in the Principia Mathematica. Despite this achievement, the other researchers at the conference "didn't pay much attention to it," according to Simon.

Games and mathematics were focal points of early AI because they were easy to apply the "reasoning as search" principle to. Reasoning as search, also called means-ends analysis (MEA), is a problem-solving method that follows three basic steps:
- Determine the ongoing state of whatever problem you're observing (you're feeling hungry).
- Identify the end goal (you no longer feel hungry).
- Decide the actions you need to take to solve the problem (you make a sandwich and eat it).

This early forerunner of AI's rationale: if the actions did not solve the problem, find a new set of actions to take and repeat until you've solved the problem.
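For a modern flavor of that idea, here is a toy Python sketch of reasoning as search: states, actions, and a breadth-first hunt for a plan that reaches the goal. The sandwich domain is invented to mirror the example above, and the code is only a loose illustration of the MEA spirit, not a reconstruction of any historical program.

```python
from collections import deque

# A tiny state space mirroring the sandwich example. All names are invented.
TRANSITIONS = {
    ("hungry", "make_sandwich"): "have_sandwich",
    ("hungry", "order_pizza"): "waiting_for_pizza",
    ("have_sandwich", "eat"): "not_hungry",
}

def solve(start: str, goal: str) -> list[str] | None:
    """Breadth-first search for a sequence of actions reaching the goal."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, plan = queue.popleft()
        if state == goal:
            return plan
        for (from_state, action), to_state in TRANSITIONS.items():
            if from_state == state and to_state not in seen:
                seen.add(to_state)
                queue.append((to_state, plan + [action]))
    return None  # no sequence of actions reaches the goal

print(solve("hungry", "not_hungry"))  # ['make_sandwich', 'eat']
```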
Neural Nets and Natural Languages

With Cold-War-era governments willing to throw money at anything that might give them an advantage over the other side, AI research experienced a burst of funding from organizations like DARPA throughout the '50s and '60s. This research spawned a number of advances in machine learning. For example, Simon and Newell's General Problem Solver, while using MEA, would generate heuristics: mental shortcuts that could block off problem-solving paths unlikely to arrive at the desired outcome. First proposed in the 1940s, the artificial neural network saw its first implementation in 1958, thanks to funding from the United States Office of Naval Research.

A major focus of researchers in this period was trying to get AI to understand human language. Daniel Bobrow helped pioneer natural language processing with his STUDENT program, which was designed to solve word problems. In 1966, Joseph Weizenbaum introduced the first chatbot, ELIZA, an act for which Internet users the world over are grateful. Roger Schank's conceptual dependency theory, which attempted to convert sentences into basic concepts represented as a set of simple keywords, was one of the most influential early developments in AI research.

Also see: Data Analytics Trends

The First AI Winter

In the 1970s, the pervasive optimism in AI research from the '50s and '60s began to fade. Funding dried up as sky-high promises were dragged to earth by a myriad of real-world issues facing AI research. Chief among them was a limitation in computational power. As Bruce G. Buchanan explained in an article for AI Magazine: "Early programs were necessarily limited in scope by the size and speed of memory and processors and by the relative clumsiness of the early operating systems and languages." This period, as funding disappeared and optimism waned, became known as the AI Winter.

The period was marked by setbacks and interdisciplinary disagreements amongst AI researchers. Marvin Minsky and Seymour Papert's 1969 book Perceptrons discouraged the field of neural networks so thoroughly that very little research was done in the field until the 1980s.

Then, there was the divide between the so-called "neats" and the "scruffies." The neats favored the use of logic and symbolic reasoning to train and educate their AI. They wanted AI to solve logical problems like mathematical theorems. John McCarthy introduced the idea of using logic in AI with his 1959 Advice Taker proposal. In addition, the Prolog programming language, developed in 1972 by Alain Colmerauer and Philippe Roussel, was designed specifically as a logic programming language and still finds use in AI today.

Meanwhile, the scruffies were attempting to get AI to solve problems that required AI to think like a person. In a 1975 paper, Marvin Minsky outlined a common approach used by scruffy researchers, called "frames." Frames are a way that both humans and AI can make sense of the world. When you encounter a new person or event, you can draw on memories of similar people and events to give you a rough idea of how to proceed, such as when you order food at a new restaurant. You might not know the menu or the people serving you, but you have a general idea of how to place an order based on past experiences in other restaurants.
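A loose, dictionary-based Python illustration of the frame idea follows: default slot values describe the typical case, and a new situation overrides only the slots that differ. This is a toy sketch, not a faithful rendering of Minsky's 1975 formulation.

```python
# Default slots for a typical restaurant visit; all values are invented.
restaurant_frame = {
    "greeting": "host seats you",
    "ordering": "choose from a printed menu",
    "payment": "pay after the meal",
}

def instantiate(frame: dict, **overrides) -> dict:
    """Fill in a concrete situation, keeping defaults for unknown slots."""
    return {**frame, **overrides}

fast_food_visit = instantiate(
    restaurant_frame,
    greeting="order at the counter",
    payment="pay before the meal",
)
print(fast_food_visit)
```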
From Academia to Industry

The 1980s marked a return to enthusiasm for AI. R1, an expert system implemented by the Digital Equipment Corporation in 1982, was saving the company a reported $40 million a year by 1986. The success of R1 proved AI's viability as a commercial tool and sparked interest from other major companies like DuPont. On top of that, Japan's Fifth Generation project, an attempt to create intelligent computers running on Prolog the same way normal computers run on code, sparked further American corporate interest. Not wanting to be outdone, American companies poured funds into AI research. Taken altogether, this increase in interest and shift to industrial research resulted in the AI industry ballooning to $2 billion in value by 1988. Adjusting for inflation, that's nearly $5 billion in 2022.

Also see: Real Time Data Management Trends

The Second AI Winter

In the 1990s, however, interest began receding in much the same way it had in the '70s. In 1987, Jack Schwartz, the then-new director of DARPA, effectively eradicated AI funding from the organization, though already-earmarked funds didn't dry up until 1993. The Fifth Generation Project had failed to meet many of its goals after 10 years of development, and as businesses found it cheaper and easier to purchase mass-produced, general-purpose chips and program AI applications into the software, the market for specialized AI hardware, such as LISP machines, collapsed and caused the overall market to shrink.

Additionally, the expert systems that had proven AI's viability at the beginning of the decade began showing a fatal flaw. The longer a system stayed in use, the more rules it accumulated and the larger the knowledge base it needed. Eventually, the amount of human staff needed to maintain and update the system's knowledge base would grow until it became financially untenable. The combination of these factors and others resulted in the Second AI Winter.

Also see: Top Digital Transformation Companies

Into the New Millennium and the Modern World of AI

The late 1990s and early 2000s showed signs of the coming AI springtime. Some of AI's oldest goals were finally realized, such as Deep Blue's 1997 victory over then-world chess champion Garry Kasparov, a landmark moment for AI. More sophisticated mathematical tools and collaboration with fields like electrical engineering resulted in AI's transformation into a more logic-oriented scientific discipline, allowing the aforementioned neats to claim victory over their scruffy counterparts. Marvin Minsky, for his part, declared in 2003 that the field of AI was and had been "brain dead" for the past 30 years.

Meanwhile, AI found use in a variety of new areas of industry: Google's search engine algorithm, data mining, and speech recognition, just to name a few. New supercomputers and programs would find themselves competing with and even winning against top-tier human opponents, such as IBM's Watson winning Jeopardy! in 2011 over Ken Jennings, who'd once won 74 episodes of the game show in a row.

One of the most impactful pieces of AI in recent years has been Facebook's algorithms, which can determine what posts you see and when, in an attempt to curate an online experience for the platform's users. Algorithms with similar functions can be found on websites like YouTube and Netflix, where they predict what content viewers want to watch next based on previous history. The benefits of these algorithms to anyone but these companies' bottom lines are up for debate, as even former employees have testified before Congress about the dangers they can cause to users.

Sometimes, these innovations weren't even recognized as AI. As Nick Bostrom put it in a 2006 CNN interview: "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labelled AI anymore."

The trend of not calling useful artificial intelligence AI did not last into the 2010s. Now, start-ups and tech mainstays alike scramble to claim their latest product is fueled by AI or machine learning. In some cases, this desire has been so powerful that some will declare their product is AI-powered, even when the AI's functionality is questionable.

AI has found its way into many people's homes, whether via the aforementioned social media algorithms or virtual assistants like Amazon's Alexa. Through winters and burst bubbles, the field of artificial intelligence has persevered and become a hugely significant part of modern life, and it is likely to grow exponentially in the years ahead.
Unsupervised and Supervised NLP Approach

Natural Language Processing (NLP) is a branch of Artificial Intelligence (AI) that specializes in natural language interactions between computers and humans. NLP is extensively used by today's AI chatbots and AI virtual assistant technologies to process, analyze, understand and respond to an input user utterance expressed in natural language, either as text (via a chat interface) or voice (via an interactive voice response interface, which converts audio to text). Unsupervised NLP and supervised NLP play key roles in the success and growth of AI.

NLP is extensively used to address a variety of human language challenges for those systems, primarily related to syntax analysis (the arrangement of words in a sentence such that they make grammatical sense), like lemmatization, word segmentation and part-of-speech (PoS) tagging, and semantic analysis (understanding the meaning and interpretation of words and how sentences are structured), like named-entity recognition (NER), word-sense disambiguation, natural language generation (NLG), and more. AI chatbots and AI virtual assistants use either one or a balanced combination of the two families of NLP learning: supervised learning and unsupervised learning.

What is Supervised AI Learning?

AI chatbots and AI virtual assistants using supervised learning are trained using data that is well labeled (or tagged). During training, those systems learn the best mapping function between a known data input and an expected known output. Supervised NLP models then use the best approximating mapping learned during training to analyze unforeseen input data (never seen before) and accurately predict the corresponding output. Usually, supervised learning models require extensive and iterative optimization cycles to adjust the input-output mapping until they converge to an expected and well-accepted level of performance. This type of learning is called "supervised" because its way of learning from training data mimics a teacher supervising the end-to-end learning process.

Supervised learning models are typically capable of achieving excellent levels of performance, but only when enough labeled data is available. Furthermore, building, scaling, deploying, and maintaining accurate supervised learning models takes time and technical expertise from a team of highly skilled data scientists. For example, a typical task delivered by a supervised learning model for AI chatbots / virtual assistants is the classification, via a variety of algorithms (Support Vector Machines, Random Forests, classification trees, etc.), of an input user utterance into a known class of user intents. The precision achieved by those techniques is remarkable, though the shortfall is that coverage is limited to only those intent classes for which labeled data is available for training.

Advancing AI with Unsupervised Learning

To overcome the limitations of supervised learning, academia and industry started pivoting towards the more advanced (but more computationally complex) unsupervised learning, which promises effective learning using unlabeled data (no labeled data is required for training) and no human supervision (no data scientist or deep technical expertise is required). This is an important advantage compared to supervised learning, as unlabeled text in digital form is abundant, but labeled datasets are usually expensive to construct or acquire, especially for common NLP tasks like PoS tagging or syntactic parsing.
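Before going further, here is a side-by-side Python sketch of the two paradigms using scikit-learn. The utterances and intent labels are invented, and the sketch is only meant to show the contrast, not any production pipeline.

```python
# Requires scikit-learn. The utterances and intent labels are invented.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

texts = ["reset my password", "forgot my password",
         "order a new laptop", "request a new monitor"]
labels = ["account_access", "account_access",
          "hardware_request", "hardware_request"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)

# Supervised: learn a mapping from labeled utterances to known intent classes.
classifier = LinearSVC().fit(X, labels)
new_utterance = vectorizer.transform(["cannot log in, password expired"])
print(classifier.predict(new_utterance))  # likely ['account_access']

# Unsupervised: group unlabeled utterances into candidate intent clusters
# that a human (or downstream process) can then name and verify.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(clusters)  # e.g., [0 0 1 1]
```

The classifier can only ever answer with the classes it was trained on; the clustering step is what surfaces candidate new intents from raw logs.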
Unsupervised learning models are equipped with all the needed intelligence and automation to work on their own and automatically discover information, structure, and patterns from the data itself. This is where unsupervised NLP shines. The most popular applications of unsupervised learning in advanced AI chatbots / AI virtual assistants are clustering (K-means, mean-shift, density-based and spectral clustering, etc.) and association rules methods.

Clustering is typically used to automatically group semantically similar user utterances together to accelerate the derivation and verification of an underlying common user intent (note: derivation of a new class, not classification into an existing class). Unsupervised learning is also used for association rules mining, which aims at discovering relationships between features directly from data. This technique is typically used to automatically extract dependencies between named entities in input user utterances, dependencies of intents across a set of user utterances that are part of the same user/system session, or dependencies of questions and answers from conversational logs capturing the interactions between users and live agents during the problem troubleshooting process.

Even though the benefits and level of automation brought by unsupervised learning are large and technically very intriguing, unsupervised learning in general is less accurate and trustworthy than supervised learning. Indeed, the most advanced AI chatbot / AI virtual assistant technologies in the market succeed by striking the right balance between the two, which when exploited correctly can deliver the accuracy and precision of supervised learning (for tasks where labeled data is available) coupled with the self-automation of unsupervised learning (for tasks where no labeled data is available).

Aisera offers the most feature-comprehensive and technologically advanced AI virtual assistant solution for self-service automation in the market, blending supervised learning and unsupervised learning, Natural Language Understanding (NLU), AI virtual assistant technology, Conversational AI (cognitive search) and Conversational Automation into one SaaS cloud offering for IT service desks and customer service. Aisera's proprietary unsupervised NLP/NLU technology, User Behavioral Intelligence, and Sentiment Analytics are protected by several pending patent applications.
Now more than ever, organisations understand the importance of information security and data governance. The GDPR (General Data Protection Regulation) and similar laws have imposed strict rules on the ways organisations must protect the information they process. Anyone who fails to take appropriate steps could face sizeable penalties and be left dealing with the reputational damage accompanying a data breach. One of the most severe types of data breach is data theft and interception. This blog explains why it poses such a serious problem, and how you can mitigate the risk. What is data interception and theft? Data interception and theft are two ways that an unauthorised actor can access an organisation’s sensitive information. Both terms describe the improper access of information, but there is a slight difference between ‘data interception’ and ‘data theft’. Data theft refers to any way sensitive information is compromised, whereas data interception is a specific type of data theft, referring to information that is captured during transmission. An example of data interception is a MITM (man-in-the-middle) attack. This is a hacking technique that exploits how data is shared between a website and a user’s device – whether that’s their computer, phone or tablet. When an attacker compromises an Internet router, they can intercept and decrypt the victim’s transmitted data, giving them access to anything that the victim accesses online. Meanwhile, data theft can be any way that someone obtains sensitive information. For example, a criminal hacker might break into an organisation’s systems or steal an employee’s USB drive. Data theft isn’t limited to cyber attacks. It can also happen when an unauthorised actor discovers records that have been improperly disposed of or when someone uses social engineering techniques to enter the premises and gain access to classified data. Data theft can also occur unintentionally. Employee error is a leading cause of data theft, and might happen when an employee takes home a file containing sensitive information and misplaces it. How to prevent data interception and theft 1. Create password policies Cyber criminals almost always begin their attacks by trying to capture an employee’s password. There’s no need to spend time searching for vulnerabilities if you can find a leaked password online or trick an employee into handing over their details with a scam email. It’s why organisations must adopt secure password policies. They are simple to produce and ensure that employees understand the importance of creating strong, unique passwords and taking appropriate steps to protect them. 2. Identify and classify sensitive data Information classification is a process in which organisations assess the data that they hold and the level of protection it should be given. Organisations usually classify information in terms of confidentiality – i.e. who is granted access to see it. A typical system will comprise four levels: public, internal, restricted and confidential. Classifying data in this way limits who has access to – and who could potentially compromise – sensitive information. 3. Train your staff to understand the importance of data security The measures we’ve described so far only work if employees understand their information security obligations. Organisations must provide regular staff awareness training that explains information security best practices. 
This training should be conducted whenever a new starter joins and be repeated once or twice a year to ensure that the knowledge remains fresh. 4. Properly dispose of sensitive data Paper records must be shredded when you no longer need them. This ensures that unauthorised personnel cannot view the information once it has left the organisation’s premises. Likewise, organisations must wipe the memories of computers, phones and tablets before throwing them out or recycling them. 5. Seed your data Data seeding is the practice of planting synthetic details in a database. It’s generally done to monitor how information is being used and to identify unauthorised access. This helps organisations detect and address breaches, and can also act as a preventive measure. If employees know that an organisation can identify the source of stolen information, they are less likely to attempt anything untoward. Data seeding can also be used as proof of ownership, ensuring you know when data has – or hasn’t – come from your systems. Additionally, it can be used for process assurance, helping you follow a known user’s journey and the data flow. You can find out more about data seeding with DQM GRC’s dedicated data seeding services. These services have been used successfully for the past 20 years to track the use of valuable data assets on behalf of their owners. Our team will create and share unique seed records with you, which you can insert into your data sets – or we can manage the process for you. Once the seeds are in place, we will monitor any contact made with them. If your data is stolen or misused, we can help you investigate and remediate the breach, protect your data subjects and take action against whoever is responsible. We will also provide you with a detailed monthly report setting out the ways your data has been used, as agreed with you. This typically includes the channels that were used, the time and date on which the use was identified, and evidence of the use, such as an image of a marketing campaign.
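To make the seeding idea concrete, here is a minimal Python sketch that plants synthetic contacts in a dataset and later checks an inbound list against them. The seed names and addresses are fabricated placeholders, and the sketch shows only the matching step; a managed seeding service involves unique seed records, ongoing monitoring and reporting.

```python
import hashlib

# Fabricated seed contacts planted in your dataset. Matching on a hash means
# the checking code never has to carry the seed values around in plain text.
SEED_EMAILS = [
    "alex.seedwell-7f3@example.com",
    "rowan.tracer-2b9@example.com",
]
SEED_KEYS = {hashlib.sha256(e.lower().encode()).hexdigest() for e in SEED_EMAILS}

def flag_seeds(inbound_contacts: list[str]) -> list[str]:
    """Return inbound contacts that match a planted seed."""
    return [email for email in inbound_contacts
            if hashlib.sha256(email.lower().encode()).hexdigest() in SEED_KEYS]

campaign_list = ["customer@example.org", "Rowan.Tracer-2b9@example.com"]
print(flag_seeds(campaign_list))  # a hit means this list came from your data
```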
The Top Six Benefits of Data Modeling – What Is Data Modeling?

Understanding the benefits of data modeling is more important than ever. Data modeling is the process of creating a data model to communicate data requirements, documenting data structures and entity types. It serves as a visual guide for designing and deploying databases with high-quality data sources as part of application development. Data modeling has been used for decades to help organizations define and categorize their data, establishing standards and rules so it can be consumed and then used by information systems. Today, data modeling is a cost-effective and efficient way to manage and govern massive volumes of data, aligning data assets with the business functions they serve. You can automatically generate data models and database designs to increase efficiency and reduce errors, making your data modelers – and other stakeholders – much more productive.

In this post:
- What Is a Data Model?
- Why Is Data Modeling Important?
- What Are the Top Six Benefits of Data Modeling?
- What's the Best Data Modeling Tool?

What Is a Data Model?

A data model is a visual representation of data elements and the relationships between them. Data models help business and technical resources collaborate on the design of information systems and the databases that power them. They show what data is required and how it needs to be structured to support various business processes.

There are three types of data models – conceptual, logical and physical – and each has its own purpose, defined primarily by its level of operational detail. With each stage of data modeling, the model becomes more information- and context-rich. A conceptual data model is a rough draft, containing the relevant concepts or entities and the relationships between them. A logical data model, also referred to as information modeling, is the second stage: a graphical representation of the information requirements for a given business area. A physical data model provides the database-specific context, elaborating on the conceptual and logical models produced before it. Accordingly, physical data models are often treated as the blueprint for a proposed database.

Why Is Data Modeling Important?

Although data modeling isn't new, it is becoming an increasingly important practice because of the sheer volume of data organizations must process and store. A good analogy is that of a house and its architect. The architect designs a house with the end user in mind: it has to offer the right functionality in the right places. Think of a table of data as a room in that house. In the context of data management, though, the house doesn't have just 10 rooms – it has 10,000, each with varying degrees of interconnectivity and importance to the organization. At this scale, an oversight can be catastrophic, so the visual representation provided by a data model gives organizations the confidence to design their proposed systems and take them live.

Data modeling is also a critical component of metadata management, data governance and data intelligence. It provides an integrated view of conceptual, logical and physical data models to help business and IT stakeholders understand data structures and their meaning. Quite simply, you can't manage what you can't see.
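To make the progression from logical to physical concrete, here is a minimal sketch in Python using SQLAlchemy's declarative mapping. This is an open-source ORM used purely as an illustration – it is not how a dedicated modeling tool works – and the Customer and Order entities, with all their attributes, are invented for the example.

```python
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Customer(Base):
    """Logical level: the Customer entity and its attributes.
    Physical level: column types and constraints pin down storage."""
    __tablename__ = "customers"
    id = Column(Integer, primary_key=True)
    name = Column(String(120), nullable=False)
    orders = relationship("Order", back_populates="customer")

class Order(Base):
    """One customer has many orders; the ForeignKey turns the
    conceptual relationship into a physically enforceable one."""
    __tablename__ = "orders"
    id = Column(Integer, primary_key=True)
    customer_id = Column(Integer, ForeignKey("customers.id"), nullable=False)
    status = Column(String(20), nullable=False, default="open")
    customer = relationship("Customer", back_populates="orders")
```

The same two entities could first be drawn as a conceptual diagram; code like this corresponds to the final, physical stage of the model.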
Top Six Benefits of Data Modeling

Data modeling is the first step to ensuring mission-critical information is used, understood and trusted across the enterprise. Here are the top six benefits organizations can realize:

- Improve discovery, standardization and documentation of data sources.
- Successfully design and implement databases.
- Support regulatory compliance now and into the future by governing data modeling teams, processes, portfolios and lifecycles.
- Empower employees by enabling self-service data access, and foster collaboration by improving alignment between departments and between IT and the business.
- Improve business intelligence and make it easier to identify new opportunities by expanding data capability, literacy and accountability across the enterprise.
- Encourage more cohesive integration of existing information systems, as new systems are implemented with a greater perspective on the organization's current state.

What's the Best Data Modeling Tool?

erwin Data Modeler (erwin DM) is an award-winning data modeling tool used by Fortune 500 companies, including some of the world's leading financial services, healthcare, critical infrastructure and technology firms. Its history and proven track record enable users to realize the primary benefits of data modeling. In addition, erwin DM users can:

Visualize any data, from anywhere
erwin DM enables organizations to visualize their data, whether structured or unstructured and regardless of where it's stored – in a relational database, a data warehouse or the cloud – within a single interface.

Automate data model and database schema generation
erwin DM users benefit from greater automation capabilities, saving them time, increasing efficiency and reducing errors.

Centralize model development and management
erwin DM provides an integrated view of conceptual, logical and physical data models to help bridge gaps in understanding between business and technical stakeholders.

Encourage data literacy, collaboration and accountability
Improve data intelligence and decision-making across the enterprise by maximizing stakeholders' ability to use, understand and trust relevant data.

Increase agility in application development
Consolidate and build applications with hybrid architectures, including traditional, big data, cloud and on-premises.

Reduce risks and costs
Automating and standardizing data definitions and structures reduces risks and costs, and you can test changes and new applications before they go into production.

Foster successful cloud adoption
Automated schema engineering and deployment accelerates the adoption of cloud platforms such as Snowflake, including auto-documenting existing schemas as reusable models.

See for yourself why erwin DM has been named DBTA's Readers' Choice for Best Data Modeling Solution for seven years in a row.
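Schema generation – turning a model into executable DDL – is the kind of automation described above. The sketch below shows the idea in miniature, again using SQLAlchemy as an open-source stand-in rather than erwin DM itself; the products table is invented for the example.

```python
from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine
from sqlalchemy.schema import CreateTable

metadata = MetaData()
products = Table(
    "products", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(100), nullable=False),
)

# Render the DDL the model implies, without touching a database:
print(CreateTable(products))  # CREATE TABLE products (id INTEGER NOT NULL, ...)

# ...or apply it directly; an in-memory SQLite database keeps the
# example self-contained.
engine = create_engine("sqlite:///:memory:")
metadata.create_all(engine)
```

Because the schema is derived from the model, changes can be tested against a scratch database before anything reaches production, which is the risk-reduction benefit listed above.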