Though we're not even into the wild west days of quantum computing, one expert argues we should take some risks on a technology that can keep the nation safe and competitive.
One of today's most promising emerging technologies is quantum computing. It’s so new, in fact, most people don't yet understand how it can improve the government’s efficiency and national security. But it’s showing enough promise that the Trump administration’s latest budget request included funding for quantum computing as it relates to the U.S. exascale computing program.
To get a sense of what quantum computing offers, GCN spoke with Bob Sorenson, vice president of research and technology and chief analyst for quantum computing at Hyperion Research. Here’s a lightly edited transcript of that conversation.
Let’s start at the top: What is quantum computing?
There’s so much confusion and mystery about what quantum phenomena are and how they work. The best way to address it is to say that what quantum computing brings to the table is the ability to generate new kinds of algorithms, new kinds of applications and new kinds of use cases to solve problems that heretofore were unsolvable on traditional, binary-digit von Neumann computer architectures.
What’s drawing attention to it now?
The development of the hardware has been going on for a number of years -- I would say certainly more than a decade -- and there was lots of interest as far as what quantum processes could be used for. For a long time, it was in some sense a solution looking for a problem. People were playing around with quantum hardware, but the simple idea, “OK, what do we do with this from a compute standpoint?” was a head-scratcher.
In the last few years we’ve seen an emergence in quantum computing hardware, but also in counterpart development -- in applications research, in algorithm research that asks how we can generate interesting algorithms that could be turned into applications and use cases that change the state of play in attacking some of the more interesting compute problems around the world.
For years, the standard quantum computing algorithm that people rallied around was this idea of Shor’s algorithm, which was fundamentally a process whereby you could take a very large number that was a product of two prime numbers. You didn’t know what the two initial prime numbers were, but you basically could factor it.
For a traditional computer, that’s a really, really hard task. As the numbers get big, the factoring gets really hard, and you could actually have processes whereby it would take hundreds of thousands of years on the fastest computer in the world to factor a number that big. That’s why those kinds of large numbers are the basis for a lot of encryption technology that’s being used out in the world today -- because to break the encryption, you have to do this factoring.
Now, quantum comes along, and Shor’s algorithm offers a way to factor that number orders of magnitude faster than would ever be possible on a traditional computer. That presented an interesting algorithm, an interesting use case for quantum from a cryptographic standpoint, and that’s what drove quantum development for, say, the last 10 years.
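To make the factoring point concrete, here is a toy sketch in Python (an editorial illustration, not Shor's algorithm and not part of the interview). Classical trial division has to grind through candidate divisors, so the work balloons as the number grows, which is exactly why large semiprimes underpin much of today's encryption:

```python
# Toy illustration (not Shor's algorithm): classical trial division.
# The work grows roughly with the square root of n, so factoring the
# enormous semiprimes used in RSA-style encryption is infeasible on
# classical hardware.

def trial_division(n):
    """Return two factors of n by brute force; (n, 1) if n is prime."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return n, 1

# Small numbers are easy; real encryption moduli have hundreds of digits.
print(trial_division(3 * 5))      # (3, 5)
print(trial_division(101 * 113))  # (101, 113)
```

Doubling the number of digits roughly squares the search space, which is the intuition behind the "hundreds of thousands of years" figure quoted above.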
What can government IT managers do to ease into quantum computing?
Get involved in some of the research and development work and perhaps work with the commercial sector to help really define the directions that quantum computing development takes. We’re not looking at, say, the IRS buying a quantum computer next month in order to review tax returns more quickly. Those kinds of end-use applications are years away.
Are any agencies using it now?
Within the Department of Energy, there’s lots of R&D going on. Oak Ridge National Lab has a pretty aggressive quantum computing development program, and the Department of Energy Office of Science writ large is also looking at quantum. There have been some large position papers, if you will, coming out of the executive branch.
So, it’s getting underway.
There’s lots of work going on out there, and as I said, it’s all very nascent. The concern I have is there’s a lot of hype out there right now about what quantum can accomplish, and we’re not even in the wild west stages yet. I think we’re maybe five years away from justifying this as wild west.
I think what we need to do is understand that this is a nascent technology. The hardware field is pretty wide open right now, and it remains to be seen how many interesting algorithms and applications can be gleaned from what’s available. There’s a lot of promise, but it’s still based on a lot of optimism and a lot of hope that interesting things will happen.
What elevator speech could an IT official give to an agency executive to get support for quantum?
You’ve got to get the best tools into the hands of the U.S. industrial base in order for it to remain economically competitive, which, of course, is good in terms of national security concerns as well. To do that, you need to take some risks on some interesting far-reaching technologies, and quantum is right now one of the most promising out there.
Defense-in-depth, sometimes called layered security, is a philosophy that embraces the concept of multiple defenses against threats. Rather than putting all the proverbial eggs in one basket and relying upon a single security strategy, multiple and different technologies, policies and practices all work together to provide as thorough and effective protection as is possible. It sounds good on paper, and it works great in practice, but in far too many customer engagements, IT often chooses to pass over patch management software in the false belief that their antivirus software will protect them against all threats.
This is not only dangerous but it’s completely wrong. Antivirus software is a critical protection, and should be installed on all systems, but the purpose of antivirus software is to protect against malware. Whether that is a piece of code that a user tries to download and run, or a malicious script that is hosted on a website, or a worm that tries to propagate from system to system, malware is code that has a recognizable binary pattern and acts in a recognizable way. It’s designed to work against code specifically written to cause harm.
What antivirus software is NOT built for or capable of doing is protecting against faulty code in otherwise approved applications. Patches are designed to fix bad code, collectively called bugs. That code could be a mistake made by a programmer, or an incompatibility with another piece of software, or perhaps code that is simply not as good as it could be. When that mistake can be exploited by an attacker, patching that code may be the only way to prevent the vulnerability from being exploited.
Antivirus software acts upon malware that is already present on your system. How did it get there? Well, frequently that code can get there through a bug. The problem is that malware may do things thanks to an opening created by the bug, but won’t necessarily result in any code picked up by the antivirus software and blocked. When a piece of buggy code allows an attacker remote access to your system, antivirus software will not detect or prevent that access.
Another way of looking at this is to compare antivirus software to a security guard, and patches to good locks. Sure, the guard can react to the presence of a thief, but the locks could proactively keep the thief completely out of the system. If the thief gets in, how much damage could be caused before the guard finds him?
Just as you need antivirus software on all your systems, you want the necessary patches installed on all the systems that require them. The best way to accomplish that is by using patch management software. Patch management software, like GFI LanGuard, for example, provides you with a centralized application that can deploy patches to every system on the network. It can also assess those systems so that you know exactly what each needs. In essence, it does the heavy lifting for you, upgrades the locks and secures the latches. Patching is an on-going task, with both monthly releases from the major operating system vendors and unpredictable releases from software vendors as new vulnerabilities are discovered.
Automatic updates can take care of the operating system, but only if you trust all those patches to work on all your systems without testing. Patch management software will not only make it far easier to test all those patches, but you can use it to roll back any that turn out to have their own set of issues – and it won’t be the first time a patch has caused new problems.
So while antivirus software is absolutely critical and has its proper place in your network, it’s no substitute for patch management software. Using both will help to bolster your defenses and is a good start towards that layered security approach.
While watching the world’s best push the limits of humanity and represent the pride of their respective countries, it’s easy to forget about all the behind-the-scenes infrastructure that makes the Olympic Games possible.
However, the largely unreported cybersecurity hack that almost halted the 2018 Pyeongchang Olympics before it began should serve as a stark warning against complacency. But is the 2020 iteration of the Games at risk of the same fate? What are the threats? And what can be done to ensure a safe and smooth Olympic Games where all the focus falls on the athletes?
Japan seems to be taking the threat seriously with emergency cybersecurity measures being put in place.
The fallout from the 2018 Pyeongchang attack has also left us with some valuable lessons, including the importance of well-orchestrated incident response.
What Happened in Pyeongchang?
Setting up an IT infrastructure on this scale is no mean feat, with over 10,000 PCs, 20,000 mobile devices, 6,300 Wi-Fi routers, and 300 servers in two Seoul data centers forming the backbone of the technology behind the 2018 Winter Olympics.
Just before the opening ceremony was about to start, organizers received word that some “bug in the system” was systematically shutting down every domain controller in the Seoul data centers. The outage was already affecting the ability to issue tickets, knocking out Wi-Fi connectivity, disconnecting internet-linked TVs, and disabling some of the facilities’ RFID-based security gates.
Luckily, the cybersecurity team for the Olympics’ organizing committee was well prepared by months of drills and security meetings. Working all through the night, they were able to take down and isolate all servers, identify the malicious service (a nondescript file called winlogon.exe), and, finally, restore servers and systems from backups.
By the next morning, all was good, and athletes, organizers, and attendees had almost no idea what had occurred.
Why the 2020 Tokyo Olympics Might Be at Risk
While you might think that the organizers of the 2020 Olympics will have learned from history and beefed up cybersecurity for the competition, there are several reasons why the Games might be at higher risk than ever:
- Cybersecurity threats have continued to diversify and grow more sophisticated
- We are still in the midst of a drought when it comes to cybersecurity talent and professionals
- Far from being negatively impacted like everyone else, cybercriminals have thrived during the pandemic and cyber-attacks have surged in volume
- We are becoming increasingly dependent on technology in all fields, and the Tokyo Olympics is set to be one of the most technologically advanced and innovative Games yet
Finally, while all countries represented at an Olympics hope to put their best foot forward, the host country falls under unmatched scrutiny. This makes the Olympics a prime target for state-sponsored actors with the resources and motive to undermine the event.
So, what can be done to secure the 2020 Olympics?
Using common-sense cybersecurity principles and learning from the successful response of the 2018 Olympics cybersecurity teams yields some insights into how to prepare for the eventuality:
- Preparation, training, and readiness: Cybersecurity teams, and organizations, must be battle-tested to ensure their processes hold up when cyber incidents occur. Red-teaming, tabletop, and other scenario-based training has proven effective at preparing teams.
- Effective incident response plans: Part of the reason the IT infrastructure of the 2018 Olympics could be restored efficiently was the effective incident response. As soon as troubling signs were detected, the cybersecurity team immediately sprang into action to investigate the cause. Solid collaboration, communication, and decisiveness were key in the recovery process. This is only possible with a clear and effective incident response plan in place that describes the processes, lines of communication, and post-incident steps.
- Full accounting of all resources: For an event and organization on the scale of the Olympics, thousands of physical infrastructure devices are in play. Having a comprehensive and organized catalogue of all your IT assets is crucial to being able to isolate, investigate, and restore systems as quickly as possible.
In a previous lesson I covered the standard access-list, now it’s time to take a look at the extended access-list. This is the topology we’ll use:
Using the extended access-list we can create far more complex statements. Let’s say we have the following requirement:
- Traffic from network 192.168.12.0 /24 is allowed to connect to the HTTP server on R2, but only to IP address 192.168.12.2.
- All other traffic has to be denied.
Now we need to translate this to an extended access-list statement. Basically they look like this:
[source] + [source port] to [destination] + [destination port]
Let’s walk through the configuration together:
R2(config)#access-list 100 ?
  deny     Specify packets to reject
  dynamic  Specify a DYNAMIC list of PERMITs or DENYs
  permit   Specify packets to forward
  remark   Access list entry comment
First of all, we need to select a permit or deny. By the way, you can also use a remark; this adds a comment to your access-list statements. I’ll select the permit…
R2(config)#access-list 100 permit ?
  <0-255>  An IP protocol number
  ahp      Authentication Header Protocol
  eigrp    Cisco's EIGRP routing protocol
  esp      Encapsulation Security Payload
  gre      Cisco's GRE tunneling
  icmp     Internet Control Message Protocol
  igmp     Internet Gateway Message Protocol
  ip       Any Internet Protocol
  ipinip   IP in IP tunneling
  nos      KA9Q NOS compatible IP over IP tunneling
  ospf     OSPF routing protocol
  pcp      Payload Compression Protocol
  pim      Protocol Independent Multicast
  tcp      Transmission Control Protocol
  udp      User Datagram Protocol
Now we have a lot more options. Since I want something that permits HTTP traffic we’ll have to select TCP. Let’s continue:
R2(config)#access-list 100 permit tcp ?
  A.B.C.D  Source address
  any      Any source host
  host     A single source host
Now we have to select a source. I can either type in a network address with a wildcard or I can use the any or host keyword. These two keywords are “shortcuts”; let me explain:
- If you type “0.0.0.0 255.255.255.255” you match all networks. Instead of typing this we can use the any keyword.
- If you type something like “192.168.12.2 0.0.0.0” we are matching a single IP address. Instead of typing the “0.0.0.0” wildcard we can use the keyword host.
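As an aside, the wildcard-mask rule can be sketched in a few lines of Python (an illustration of the matching logic only, not Cisco's implementation): a 0 bit in the wildcard means "must match", a 1 bit means "don't care".

```python
import ipaddress

def wildcard_match(address, pattern, wildcard):
    """True if address matches pattern under a Cisco-style wildcard mask.
    Wildcard bits set to 1 are "don't care"; bits set to 0 must match."""
    a = int(ipaddress.IPv4Address(address))
    p = int(ipaddress.IPv4Address(pattern))
    w = int(ipaddress.IPv4Address(wildcard))
    # Compare only the bits the wildcard does NOT mask out.
    return (a & ~w & 0xFFFFFFFF) == (p & ~w & 0xFFFFFFFF)

# "0.0.0.0 255.255.255.255" matches anything (the "any" keyword):
print(wildcard_match("10.1.2.3", "0.0.0.0", "255.255.255.255"))   # True
# "192.168.12.2 0.0.0.0" matches exactly one host (the "host" keyword):
print(wildcard_match("192.168.12.2", "192.168.12.2", "0.0.0.0"))  # True
print(wildcard_match("192.168.12.3", "192.168.12.2", "0.0.0.0"))  # False
```

The same function also explains why a /24 network takes the wildcard 0.0.0.255: the last octet is entirely "don't care".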
I want to select network 192.168.12.0 /24 as the source, so this is what we will do:
R2(config)#access-list 100 permit tcp 192.168.12.0 0.0.0.255 ?
  A.B.C.D  Destination address
  any      Any destination host
  eq       Match only packets on a given port number
  gt       Match only packets with a greater port number
  host     A single destination host
  lt       Match only packets with a lower port number
  neq      Match only packets not on a given port number
  range    Match only packets in the range of port numbers
Besides selecting the source we can also select the source port number. Keep in mind that when I connect from R1 to R2’s HTTP server that my source port number will be random so I’m not going to specify a source port number here.
R2(config)#access-list 100 permit tcp 192.168.12.0 0.0.0.255 host 192.168.12.2 ?
  ack          Match on the ACK bit
  dscp         Match packets with given dscp value
  eq           Match only packets on a given port number
  established  Match established connections
  fin          Match on the FIN bit
  fragments    Check non-initial fragments
  gt           Match only packets with a greater port number
  log          Log matches against this entry
  log-input    Log matches against this entry, including input interface
  lt           Match only packets with a lower port number
  neq          Match only packets not on a given port number
  precedence   Match packets with given precedence value
  psh          Match on the PSH bit
  range        Match only packets in the range of port numbers
  rst          Match on the RST bit
  syn          Match on the SYN bit
  time-range   Specify a time-range
  tos          Match packets with given TOS value
  urg          Match on the URG bit
  <cr>
We will select the destination, which is IP address 192.168.12.2. I could have typed “192.168.12.2 0.0.0.0” but it’s easier to use the host keyword. Besides the destination IP address we can select a destination port number with the eq keyword:
R2(config)#access-list 100 permit tcp 192.168.12.0 0.0.0.255 host 192.168.12.2 eq 80
This will be the end result. Before we apply it to the interface I will add one useful extra statement:
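As a general aside (not necessarily the extra statement meant above): an extended access-list does nothing until it is applied to an interface with the ip access-group command. A hedged sketch, assuming traffic from R1 enters R2 on a hypothetical interface GigabitEthernet0/1:

```
R2(config)#interface GigabitEthernet0/1
R2(config-if)#ip access-group 100 in
```

Remember that every access-list ends with an implicit deny, so once applied, all traffic not explicitly permitted is dropped, which covers the second requirement above.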
Big data analytics is the pursuit of extracting valuable insights from raw data that is high in volume, variety, and/or velocity.
What do I need to know about big data analytics?
Big data analytics systems transform, organize, and model large and complex data sets to draw conclusions and identify patterns. Skilled big data analytics professionals, who generally have a strong expertise in statistics, are called data scientists.
Is data analytics only for big data?
When used generally, the term data analytics can refer to any process of analyzing data, including the work done in data warehouses or data marts. While data warehousing is certainly a form of data analytics, the term itself is evolving to favor big data-capable systems. However, big data analytics refers specifically to the challenge of analyzing data of massive volume, variety, and velocity.
What is the status of the big data analytics marketplace?
Today, the field of data analytics is growing quickly, driven by intense market demand for systems that tolerate the intense requirements of big data, as well as people who have the skills needed for manipulating data queries and translating results.
A Recent Survey Shows Students Want to Learn. Here are some of the results from that survey.
When it comes to STEM education, high school students across the nation want to see changes made in teaching methods and more access to resources outside of the classroom experience. Students want more tangible learning opportunities. Students find things like learning from textbooks less engaging than hands-on learning methods. Moving away from textbooks would arguably be the most significant shift in education since the dawn of textbooks themselves.
More interesting information the survey revealed about what students want:
- 81 percent of all students are interested in science, with 73 percent of the students that were surveyed showing an interest in biology. This is great news because recent reports have found that science and technology degrees have been decreasing in the U.S.
- Students who are interested in biology classes identified teachers and classes as the most influential factor to their career choices.
- Only 32 percent know an adult that works in a science-based career, and only 22 percent of students know someone that holds a job involving biology.
- The survey found that hands-on learning is the most effective way for students to learn: experiments and field trips were best at engaging kids and, as a result, are also their favorite ways to learn.
The current business scenario is data-driven. Data is the essence of every business empire and of every idea blooming into a viable, profitable business. Data means different things to different people: for some it is scientific; for others it is newsworthy. But data is ultimately about imparting information, and that information is what many top business empires and small enterprises rely on. Since an organization depends on consumers’ data to run its business, it is important to have various measures in place.
Data is managed and used in different ways. Through data management, data governance, and data integration, a business needs to look after the data they disseminate and receive. While they’re all interconnected, they are still distinct. To understand the role of data as part of the business, one needs to be able to differentiate between data management and data integration.
What is data management?
Since business is extremely data-dependent, it is important to keep it collected, organized, and simple to optimize work. This is what data management is all about. Different types of data play an important role in the performance of a business. Data management is all about organizing and securely maintaining relevant data. It has to be secure in a cost-effective manner. When it comes to information, an organization has its policies and regulations on how it should be maintained.
Data management takes this into account as well. It is all about managing data within the bounds of the organization’s rules and regulations. Since it is a vital function for any organization, there is a need to draft a data management strategy.
Through this, a business can maintain analytical information that will help in optimization. For data management to take place, certain procedures, policies, and regulations are established; these are further spelled out in a data management strategy. To serve the best dish at the restaurant, the chef needs to experiment with a combination of different ingredients. The same applies to a data management strategy: one needs to combine different resources and activities for it to succeed.
What is data integration?
Have you ever consolidated facts and figures from different sources and presented on one platform? This is a simple way of understanding the meaning of data integration. To put it in business terms, it is the compilation of data from different sources.
Data integration is a very important practice in the process of decision. Data is first integrated from different sources and then stored securely on the company’s server.
Data is first procured in a raw form. It is surrounded by clutter and other unnecessary information. It needs to be polished, refined, and then integrated with other relevant data. This data can then be ultimately used for decision making, strategy development, and other operative purposes. For successful data integration, a bit of technical knowledge is required.
Since the data procured can be used for analytical purposes, employees often use it for the day to day activities of the firm. This also calls for a security system that warrants the right personnel to have access to the right type of information. If data is integrated in the right manner, then it is cost-effective and time-saving for the company.
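As a toy illustration (with invented records, not data from the article), integrating two departmental sources into one consolidated view might look like this:

```python
# Hypothetical raw feeds from two departments, keyed by customer id.
crm_records = {
    101: {"name": "Avery", "email": "avery@example.com"},
    102: {"name": "Blake", "email": "blake@example.com"},
}
sales_records = {
    101: {"orders": 3},
    103: {"orders": 1},
}

def integrate(*sources):
    """Merge records that share a key into one consolidated view."""
    merged = {}
    for source in sources:
        for key, fields in source.items():
            merged.setdefault(key, {}).update(fields)
    return merged

consolidated = integrate(crm_records, sales_records)
print(consolidated[101])
# {'name': 'Avery', 'email': 'avery@example.com', 'orders': 3}
```

Real integration pipelines add the cleaning, deduplication and access-control steps described above; the point here is only the consolidation itself.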
Difference between Data Integration vs. Data Management?
It’s always difficult to keep up with technological jargon. But once understood, it is hard to get them mixed up. While data management and data integration sound similar, they are as different as night and day. Inaccurate and incomplete data will hamper the decision-making ability and strategies of any company.
This is where data integration plays a pivotal role. Data integration organizes the data in a consolidated manner, in one place, making it easy to view and analyze. Data management, on the other hand, focuses on how well that data is handled and organized. While data integration is the consolidation process, data management takes the broader perspective of looking after how the data is refined, kept and shared.
Data security is also important, as cybercrime is getting worse and small companies often lack the proper environment to protect their data. One common safeguard is an SSL certificate, which must be installed on the website so that data travels in encrypted form.
When it comes to data integration and data management, they are interrelated. Once data is integrated, it needs to be effectively managed for efficient usage. Data is the lifeblood for every business; hence it needs to be handled with care. It can also morph the future of the organization as the data is the building block to all strategic decisions.
A team of researchers from Binghamton University have been working on a new intrusion detection approach based on monitoring the behavior of systems and spotting when it differs from the one that is considered normal.
The project, titled “Intrusion Detection Systems: Object Access Graphs” and funded by Air Force Office of Scientific Research, is conducted by doctoral students Patricia Moat and Zachary Birnbaum, research scientist Andrey Dolgikh, and they are mentored by Victor Skormin, professor of electrical and computer engineering.
They have chosen not to concentrate on detecting malware, as it can change faster than new signatures for it can be created, but on the systems’ behavior.
“What we do is take a picture of what your computer is doing, and then we compare a picture of your computer behaving normally to one of an infected computer. Then, we just look at the differences,” Birnbaum said. “From that, we can see if your computer has an infection, what type of infection, and from there you know you’re under attack and you can take action.”
These pictures are taken by monitoring system calls that go hand in hand with every computer operation performed.
“System calls accumulated under normal network operation are converted to graph components, and used as part of the IDS normalcy profile,” they explained.
They have developed algorithms to find a system normalcy profile, to find anomalous deviations, to recognize previously detected attacks, and a real-time visualization system to present the results.
“Our IDS has the ability to instantly adopt changes in the normalcy definition,” they pointed out. “Our results demonstrate that achieving efficient anomaly detection is possible through the intelligent application of graph processing algorithms to system behavioral profiling.”
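The general idea (learn a normalcy profile from observed behavior, then flag deviations) can be caricatured in a few lines of Python; this is a rough sketch of behavioral profiling, not the Binghamton team's actual graph algorithms. Here the "graph" is just the set of consecutive system-call pairs seen during normal operation:

```python
def normalcy_profile(trace):
    """Build a profile: the set of consecutive system-call pairs (graph edges)."""
    return set(zip(trace, trace[1:]))

def anomalous_edges(profile, trace):
    """Return the call transitions in a new trace that are absent from the profile."""
    return {edge for edge in zip(trace, trace[1:]) if edge not in profile}

# Hypothetical traces: calls accumulated under normal operation vs. a suspect run.
normal = ["open", "read", "write", "close", "open", "read", "close"]
profile = normalcy_profile(normal)

suspect = ["open", "read", "exec", "connect", "write", "close"]
print(sorted(anomalous_edges(profile, suspect)))
# [('connect', 'write'), ('exec', 'connect'), ('read', 'exec')]
```

A real system would profile whole call graphs per process and update the normalcy definition over time, but even this toy version shows how an infection's behavior, rather than its binary signature, gives it away.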
Imagine if you had a time machine and could travel back to an era when people were excited about the future — say, 1955 — and talk to a group of knowledgeable, educated Americans.
You could share the disappointing news that, in the future, there are no flying cars. Meals don’t come in pill form. And, no, we haven’t gotten around to that colony on Mars. Sorry!
The good news, you’d tell them, is that just about everybody’s got a home computer. You can buy robots that clean floors. And TVs of the future are gigantic, high-quality and, yes, in color!
The most amazing part of your conversation would be your description of the Internet. In the future, you’d say, all those home computers are connected to each other, and you can use the computers to instantly access nearly all human knowledge.
You could also tell your incredulous audience another kind of future computer is so small it fits in a pocket. Because this too connects to nearly all knowledge, it’s usable as a kind of always-available Knowledge Machine. And here’s the most amazing part: Everyone’s got one. Even teenagers.
Wait a minute, they’d say. Everyone walks around with a device that can tell them just about anything they’d like to know? Push a couple of buttons, and they can read articles in thousands of newspapers? They can automatically search entire encyclopedias, history books and reference works of every description?
They would see the future as a kind of utopia as the result of this Knowledge Machine.
And finally, you would tell them something they literally could not possibly believe. In the future, students generally don’t use these Knowledge Machines for accessing knowledge, generally don’t know how to, and — worst of all — schools don’t teach kids how to use them. In fact, in the future, Knowledge Machines will be looked at broadly with suspicion, ignorance and hostility by schools and teachers.
How is it possible that we as a civilization have invented an inexpensive, universally available Knowledge Machine, then fail to teach students how to use it?
The root cause of this unacceptable state of affairs is a flaw in our collective logic.
The problem is that cell phones, as well as media players and other electronic devices, are often used by students for distraction and mischief. Teachers want them banned, because kids are using gadgets to tune out their classrooms, pass around porn and cheat.
What teachers and schools don’t seem to understand is that there is no connection between the disruptive and counterproductive use of cell phones by students, and their use for learning.
In fact, schools could make an equally insane argument against paper. Students are passing around notes on paper, viewing porn on paper, and using paper to roll joints and smoke them in the boys’ room. Ban paper in schools!
Whether schools ban objectionable uses for paper is entirely unrelated to the usefulness of paper in classrooms for reading books, jotting down notes and taking tests.
Likewise, banning the disruptive use of cell phones is unrelated to the usefulness of phones for learning.
The reality is that these Knowledge Machines are here to stay. The students in school today will almost never know a moment of their lives when Internet-connected gadgets are unavailable. Wasting time on the memorization of just about anything is now, and will always be, obsolete. The ability to find reliable information via cell phone, and to judge, synthesize and use it, is now among the most centrally important skills anyone can possess — far more important than many of the required areas of study.
It’s time we stop ignoring reality and recognize that a total knowledge-access revolution has taken place. Don’t let any more high school students graduate without demonstrating mastery at using the universal Knowledge Machines they all have in their pockets as they collect their diplomas. | <urn:uuid:a69d1eef-3c92-4d34-b00a-9730c80a2472> | CC-MAIN-2022-40 | https://www.datamation.com/mobile/why-are-knowledge-machines-banned-in-schools/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00499.warc.gz | en | 0.961018 | 813 | 3.046875 | 3 |
When you ask someone to give an example of a cybersecurity threat, phishing is probably their first response. Phishing is a type of social engineering where an attacker sends a fraudulent message designed to trick a person into revealing sensitive information or into deploying malicious software, such as ransomware, on the victim's infrastructure. Even though phishing attacks have been around for decades, their frequency has increased phenomenally in the last two years, and attackers are using far more creative (e.g. smishing, vishing) and advanced ways to deceive people into clicking malicious links.
The mass adoption of online services driven by the switch to remote working has created a very lucrative playing field for attackers. The rise of social media platforms gives attackers additional opportunities to collect enough information about individuals to impersonate them.
The two main motives behind phishing attacks are:
- Data: Stealing credentials can provide attackers with access to various information sets that can compromise businesses (e.g. business trade secrets, customer information). Malicious buyers pay good money (e.g. on the dark web) for such data, which increases the motivation to steal it.
- Access: Malware downloaded via malicious links can enable attackers to gain entry to networks and allow them to disrupt or lock down services such as websites and support channels.
Some of the most common types of phishing attacks are:
- Phishing – typically done by email
- Spearphishing – finely-targeted emails
- Whaling – very targeted email, usually towards executives
- Vishing – by phone calls
- Smishing – by text messages
- Social media phishing – Facebook, LinkedIn and other social media posts
- Pharming – compromising a DNS cache
A recent report on phishing statistics (2020-2022) stated:
- Carriers of phishing/social engineering attacks are email (96%), malicious websites (3%) and phone (1%);
- Common subject lines of phishing emails are Urgent, Request and Important;
- Countries with the highest rates of attack are the US (74%), UK (66%) and Australia (60%);
- The most impersonated brands are Microsoft, ADP and Amazon, with LinkedIn and Google not far behind;
- The most targeted industries are retail, manufacturing, and food and beverage;
- Data types compromised in a phishing attack include user credentials (passwords, PIN numbers), personal data (name, address) and medical data (insurance claims, health conditions).
And according to a Verizon report, phishing ranks second among the most expensive causes of data breaches. Organisations that had been attacked saw a 5% drop in their stock prices in the 6 months following a breach. And this number continues to rise. Therefore, it is imperative for startups and scaleups to ensure they have implemented adequate measures to manage phishing attacks.
How to protect your organisation?
Phishing attacks are unique in the sense they target humans within an organisation rather than technological weaknesses. Therefore, awareness and education are the two most effective ways to prevent a phishing attack. However, merely conducting periodic ‘email phishing campaigns’ isn’t sufficient.
Think about the following scenarios:
- A malicious USB labelled “family pics” is dropped in a company parking lot. There is a high probability that someone will pick it up and plug it into their laptop to try to find out who the USB belongs to. This can lead to the attacker gaining instant access to the company’s network; and
- An attacker disguised as a recruiter on LinkedIn sends your employees malicious links claiming to be details of job opportunities. If an employee clicks one, there is a high chance that malware will be downloaded onto their computer.
Therefore, training programs should also cover simulations of attacks beyond email alone.
The other key tactical ways to further protect your business are:
- Authenticating email senders using DMARC (Domain-Based Message Authentication, Reporting, and Conformance);
- Enabling multi-factor authentication in order to prevent unauthorised access;
- Using anti-malware programs and firewalls for extra layers of protection. Thought leaders such as Gartner periodically report on the best-in-class tools; and
- Conducting a periodic ‘social media audit’ (e.g. LinkedIn) to ensure the legitimacy of all the employees claiming to work for your organisation.
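As a concrete illustration of the DMARC item above: a DMARC policy is published as a DNS TXT record on the "_dmarc" subdomain of the sending domain. The domain and the report mailbox below are placeholders, and the policy values are one reasonable starting point rather than a universal recommendation:

```text
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com; pct=100"
```

Here "p" tells receiving mail servers what to do with messages that fail authentication (none, quarantine or reject), "rua" names the mailbox that receives aggregate reports, and "pct" is the share of mail the policy applies to. Note that DMARC builds on SPF and DKIM, which must also be configured for the policy to be meaningful.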
If you’d like to learn more about how to set up foundational controls to protect your organisation against phishing attacks, we are offering free twenty-minute Ask Me Anything sessions with our security experts.
If you have any security questions you’d like answered, contact us here. | <urn:uuid:399c6821-6e3a-442e-9208-299df0f429da> | CC-MAIN-2022-40 | https://www.avertro.com/blog/what-is-phishing-and-how-to-protect-your-tech-startup-from-them | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00499.warc.gz | en | 0.931373 | 973 | 3.25 | 3 |
18 Jan What Can You Do With GIS?
A GIS, known as a Geographic Information System, lets users view, analyze, and understand data in ways that reveal patterns, relationships, and trends. In essence, a GIS lets users visualize and interpret data in an efficient, yet meaningful way. This technology is used in a variety of industries, with the most prominent applications in the government, transportation, telecommunications, and energy sectors. However, GIS is quickly spreading to other industries as well, and is even being used for more non-traditional applications including:
Elections and Voting: GIS can be used to identify key areas where voter registrations seem to be underperforming, which can then be analyzed and picked up by the political press for debate.
Education: A detailed analysis of population profiles can be recorded in GIS and then modeled to show current educational services and predict where they will be needed in the future.
Urban and Regional Planning: Local governments, city officials, and urban planners use GIS data to make informed decisions for planning new areas. GIS applications allow for greater transparency in planning because spatial data can be plotted at the local, regional, and national level.
Law Enforcement: GIS technology can help law enforcement officials ensure resources at the street level are deployed in the most efficient manner. For example, a technique known as “hot spot analysis” helps officials determine where to allocate police assets in a specific area in order to reduce crimes.
Emergency and Disaster Management: GIS allows layers of physical data to predict the risk of natural disasters in a certain area. GIS applications can create “what if” scenarios to propose disaster avoidance mechanisms.
At GeoTel Communications, our unique business strategy converges telecommunications infrastructure data with GIS technologies to create telecom GIS data sets. By researching, obtaining, and digitizing fiber maps along with other telecom-related infrastructure in a GIS format, we provide various telecom maps and telecom data products that give valuable insight into the competitive fiber optic landscape. If you need to know the particular telecommunications infrastructure of a city or region, call (800) 277-2172 to order custom-made fiber maps. | <urn:uuid:b57eff0b-eabf-4d5d-8ba8-1f9154054ec7> | CC-MAIN-2022-40 | https://www.geo-tel.com/what-can-you-do-with-gis/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00499.warc.gz | en | 0.899956 | 442 | 2.984375 | 3 |
Bias in AI: Why it happens and how to fix it
AI bias is an issue that poses serious concerns for businesses today. At a minimum, bias results in ineffective decision making, lowers productivity, and hurts ROI. However, these performance problems are only the tip of the iceberg.
AI bias—which comes in many forms, such as race or gender bias—can lead to discrimination, negatively impact lives, and create severe legal issues for organizations. Businesses with biased AI have lost millions of dollars in lawsuits and suffered significant reputational damage. The rapid growth of AI means more businesses will face similar struggles if they don’t take action to mitigate AI bias. It is important for businesses to learn about the risks and costs associated with bias, and how to protect against them.
What causes AI bias?
An AI model's training is based on a long series of true/false scenarios: the model is given a scenario and told whether it is positive or negative. A common example is the image verification (CAPTCHA) prompts that ask users to select all images meeting a certain criterion (every picture that contains a fire hydrant, for example). The model relies on that data in the future; by this process, the AI “learns” what a fire hydrant looks like. According to TowardsDataScience, even a simple image classification task requires thousands of images to train.
An AI model’s learning process is primitive compared to a human’s and requires massive amounts of information. Because AI doesn’t involve reason so much as trial and error, the training process is delicate. Incomplete information can result in models being trained incorrectly. The result? Biased and faulty algorithms. For example, if two people in a data set are over six feet tall, and both people have poor credit, the AI may determine that every person who is six feet tall or over has poor credit. This is how discrimination happens.
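The height/credit example can be made concrete. The sketch below uses entirely made-up data and a deliberately naive rule learner; it shows how a spurious feature that happens to separate a tiny training set perfectly gets "learned" as if it were predictive:

```python
# Toy illustration with hypothetical data: height happens to separate this
# tiny training set perfectly, so a naive learner adopts it as a predictor
# of creditworthiness even though payment history is the relevant feature.
train = [
    # ((height_cm, on_time_payments), label) where label 1 = poor credit
    ((185, 2), 1),
    ((190, 1), 1),
    ((165, 9), 0),
    ((170, 8), 0),
]

def naive_rule(sample):
    # Rule "learned" from the skewed sample: tall => poor credit.
    height_cm, _on_time_payments = sample
    return 1 if height_cm >= 180 else 0

# The rule fits every training example...
assert all(naive_rule(x) == y for x, y in train)

# ...yet it wrongly labels a tall person with a strong payment history.
print(naive_rule((188, 12)))  # -> 1 (poor credit), despite 12 on-time payments
```

This is why incomplete or unrepresentative data is so dangerous: the model has no way of knowing which correlations in its training set are accidental.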
Humans are flawed, and our flaws permeate into the design of AI models. Any prejudice a designer has may be reflected within a model’s structure and output, even if unintentional. Every variable may disproportionately impact groups of people with different backgrounds in different ways.
Prejudice is a big part of why credit scores are being deemed inherently racist. Though it might seem strange to think that an objective number that measures spending and payment habits is racist, the unfortunate reality is that credit scores are entangled within a history of social inequalities. Even the most financially diligent people may be faced with lower credit scores due to less financial support from their elders. In this way, the generational wealth gap causes bias against marginalized groups in many automated systems.
A developer of a different background might not recognize bias in credit scores and may weigh credit differently in AI models as a result. This is the issue that must be tackled: Biased data is pulled from a biased society made up of biased people, whose opinions are based on their own personal life experiences. Creating a truly unbiased model requires fighting against the tangled web of human prejudice, a difficult but necessary task.
A great resource for learning more about systemic and unconscious bias is “Blindspots: Hidden Biases of Good People.” Authored by two professors, Anthony Greenwald of the University of Washington and Mahzarin Banaji of Harvard University, the book demonstrates how unconscious preferences manifest themselves. Not only is it influential in social psychology, it is a recommended read for everyone working with AI.
Solving AI bias
While no system is perfect, there are ways to minimize bias in AI through people, processes, technology, and regulations.
Build diverse teams: To account for logical bias (what variables to use, how they impact results) in development, construct teams with individuals from diverse backgrounds. Diversity allows teams to pull from a large pool of unique life experience. Building a diverse data science team will help prevent any single background and associated views from dominating the development process.
Conduct post hoc analysis: After a model produces results, the data and model results should be analyzed to check for bias before the model reaches the market. Typically, this is done by training multiple copies of the same AI with different data sets to isolate variables and check for any problematic patterns.
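One simple form of post hoc analysis is to compare a model's positive-outcome rate across groups before release. The sketch below is a minimal illustration; the group labels, the data and the 10% alert threshold are assumptions for the example, not an industry standard:

```python
# Minimal post hoc fairness check (illustrative): compare the rate of
# positive outcomes across demographic groups before a model ships.
from collections import defaultdict

def demographic_parity_gap(predictions):
    """predictions: iterable of (group, outcome), outcome 1 = approved."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, outcome in predictions:
        totals[group] += 1
        approved[group] += outcome
    rates = {g: approved[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

preds = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
gap, rates = demographic_parity_gap(preds)
print(rates)  # approval rate per group
if gap > 0.10:  # threshold is an illustrative choice
    print(f"Potential bias: approval-rate gap of {gap:.0%}")
```

Real post hoc analysis goes much further (retraining copies on varied data sets, isolating variables), but even a check this simple can surface problematic patterns early.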
Create process-based transparency: Transparency is an additional layer of defense against bias between development and market. The best way to ensure transparency is through documentation. Thorough documentation allows for easy review of model logic in a way that’s understandable across various specializations and skill levels. By knowing exactly what a model does and how, business owners can make educated decisions, and properly implement the model.
- Monitor continuously: No data scientist should claim that their AI model is 100% accurate (if they do, you should be very wary). Every AI model should be continuously monitored throughout its life cycle. AI performance and results should be analyzed on a regular basis to ensure each model functions within risk guidelines. Proactive businesses should take this a step further and ask customers about their personal experience with the AI, seeking points of contention and a better understanding of where models are lacking.
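A minimal sketch of what continuous monitoring can look like in practice: compute accuracy over successive windows of predictions and flag any window that falls materially below the baseline. The window size and the 5-point tolerance are illustrative choices, not prescriptions:

```python
# Track model accuracy over successive, non-overlapping time windows
# and flag windows that fall below an initial baseline.
def windowed_accuracy(outcomes, window):
    """outcomes: chronological list of 1 (prediction correct) / 0 (incorrect)."""
    return [
        sum(outcomes[i:i + window]) / window
        for i in range(0, len(outcomes) - window + 1, window)
    ]

# Simulated history: accuracy degrades from 100% to 50% over three windows.
history = [1] * 10 + [1] * 8 + [0] * 2 + [1] * 5 + [0] * 5
accs = windowed_accuracy(history, window=10)
print(accs)  # -> [1.0, 0.8, 0.5]

baseline = accs[0]
for i, acc in enumerate(accs[1:], start=1):
    if baseline - acc > 0.05:  # tolerance is an illustrative choice
        print(f"Window {i}: accuracy {acc:.0%} vs baseline {baseline:.0%} -- investigate")
```

In production, the same idea is applied to live predictions, often alongside drift checks on the input data itself.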
Encourage and support regulations: To truly help rid AI of bias, it is important to advocate for regulations. Increasing calls for AI regulations as a way to combat AI harms are supported by academia, industry leaders, NGOs, and the general public around the world. In the past three years, over 300 new AI guidelines and principles have been developed. Governing agencies such as the Federal Reserve Board in the U.S., OSFI in Canada, and the PRA in the U.K. have all been soliciting feedback on the use of AI from academia, industry experts, and members of the general public. The European Commission came out with its first draft of AI Regulations on April 21, 2021, by adopting the OECD AI Principles that were established in 2019. These are the same entities that introduced the world to Data Privacy principles which became GDPR—the gold standard for data privacy regulations around the world.
A recent survey found that 81% of technology leaders who use AI think government regulations would be helpful to define and prevent bias. Working together to support and adopt regulations is critical.
In short, AI bias is a real threat. It’s harmful to businesses and the general public. Ranging from being a negative ROI to a plague on society, bias in AI is a problem to be solved, not ignored. Proper due diligence should be expected when creating AI, and this includes proper training, documentation, and monitoring of AI models. When handled correctly, AI is an extremely powerful tool with the power to shape our society. We can strive to move down that path by carefully considering the people, processes, technology, and regulations used to advance AI. | <urn:uuid:c7fcc7ea-def5-496c-997c-7f5985eba346> | CC-MAIN-2022-40 | https://www.kmworld.com/Articles/ReadArticle.aspx?ArticleID=151749 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00499.warc.gz | en | 0.947665 | 1,382 | 3.484375 | 3 |
Hyperconverged architecture is the future of data centers and server providers. Companies want access to simplified, scalable IT services. They increasingly opt for converged and hyperconverged infrastructure solutions that combine storage and processes in a single physical or virtual device.
Hyperconverged systems run on commodity components, reducing deployment and maintenance costs.
They are also easy to manage. Let’s take a detailed look at what makes this approach different and consider the benefits that HCI offers.
HCI in a Nutshell
Traditional server architecture includes three separate tiers with different components used for data storage, computing, and networking needs.
To run a server, you needed to install and connect at least three devices, which made server management time-consuming and complicated.
These heterogeneous environments were constantly creating compatibility issues that required extensive troubleshooting. Updates were another problem.
The next step in the evolution of datacenter hardware was the creation of converged infrastructure.
Separate physical devices were combined into a single “box” to reduce compatibility issues, electricity consumption, and cooling needs, and use available floor space more efficiently.
Hyperconverged infrastructure takes this approach even further, offering a single virtualized device instead of a physical “box”.
HCI runs on standard x86 servers, is controlled by a software layer, and uses software-defined storage.
HCI removes complexity and improves performance by getting rid of compatibility issues and the constant need for hardware optimization.
A homogeneous HCI environment speeds up the deployment and integration processes, allowing your IT team to spend time on more meaningful tasks.
Another name for HCI is ultra-converged infrastructure. HCI solutions can be deployed in two ways: directly on hardware or on a hypervisor. Integration relies on APIs and virtual machine translation. With dedicated convergence software, you can access locations running different hypervisors and translate VMs between them. APIs, on the other hand, allow companies to use distributed public cloud components in a single virtual space, merge private and public architecture, and integrate with different hypervisors if they need to scale.
Main Benefits of Hyperconverged Systems:
- Better reliability, performance, and security.
- Effortless scalability allows you to expand by adding more virtual nodes.
- Operations and upgrades are centralized and automated, removing needless complexities.
- You don’t have to depend on multiple IT vendors to perform daily operations.
- HCI is affordable and cost-effective because it doesn’t use specialized hardware.
- HCI requires less power, space, and cooling than other solutions.
- Reduced hiring costs due to simple management.
- Rapid application development is possible due to streamlined productivity which can lead to better profitability.
- The absence of discrete hardware components reduces the risk of hardware issues.
- You can manage even a large HCI system through a single interface, greatly reducing the amount of maintenance needed.
- Better mobility due to using virtual machines instead of physical servers.
- No need for additional third-party backup. HCI uses virtual machines that have a built-in backup feature and can replicate backup data between instances.
According to experts, the global hyperconvergence market will continue to grow at least until 2023. Large companies are already expanding their HCI use.
So, if you are looking for an extremely scalable IT solution for your business that doesn’t require extensive technical knowledge of different systems and hardware units and can be controlled through a single interface, then HCI is your choice.
From the business point of view, HCI is a worthy investment, as it reduces operating costs, minimizes your reliance on multiple IT vendors and dedicated experts.
Typical HCI can be deployed in a few hours and then configured to better suit your business and technical needs.
You don’t have to overcommit with HCI. Starting with a single node is an affordable approach for smaller companies.
You can pay only for resources you need and scale as your business continues to grow.
When Should a Company Switch to HCI?
There are many areas where a company can benefit from using an HCI solution. General-purpose workloads like infrastructure servers, databases, file servers, and so on can perform better in an HCI environment.
If you want your databases to operate smoothly, increasing employee productivity or reducing customer waiting time (if it’s a database for your e-commerce project), switching to HCI is recommended.
If you need to increase storage or performance, you can simply add new nodes.
Write- and read-intensive applications like analytics and logging can also benefit from using HCI.
Hyperconverged systems offer better data protection and availability, allowing for extremely fast disaster recovery. If you process a lot of valuable or sensitive data, you might consider switching to HCI.
HCI is the best option if you have physical space constraints or have to manage remote infrastructure. Testing infrastructure and migrating data across different sites can also be performed in an HCI environment.
Hyperconverged infrastructure is affordable and cost-effective. More and more companies are opting for HCI for their daily IT needs.
Hyperconvergence will continue to be one of the most important trends in server visualization in the next decade. Thanks to its efficiency, simplicity, and improved performance, hyperconverged infrastructure is well worth the initial investment costs. | <urn:uuid:95bd48d5-cd4b-4a8d-a726-171412c9b596> | CC-MAIN-2022-40 | https://gbhackers.com/do-you-know-about-hyper-convergence/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00499.warc.gz | en | 0.921927 | 1,122 | 2.78125 | 3 |
What is the Impact of Data Science?
Fremont, CA: Given the massive amounts of data produced, data science is widely regarded as one of the most important parts of any industry in today's marketplace.
Data Science in Healthcare
Providing Virtual Assistance
Data scientists have created a comprehensive virtual platform that assists patients using disease predictive modeling. A patient can use these platforms to enter his or her symptoms and receive insights about the various possible diseases based on the confidence rate. Furthermore, patients suffering from psychological issues such as depression and anxiety and neurodegenerative diseases such as Alzheimer's can use virtual applications to assist them in their daily tasks.
Patient Health Monitoring
Data science plays an important role in the Internet of Things (IoT). Many IoT devices are wearables that monitor the user's temperature, heartbeat and other medical parameters, and data science is used to analyze the data they collect. With the help of analytical tools, doctors can keep track of a patient's circadian cycle, blood pressure, and calorie intake.
Doctors can monitor a patient's health using home devices in addition to wearable monitoring sensors. There are many systems for chronically ill patients that track their movements, monitor their physical parameters, and analyze the patterns in the data.
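As an illustration of the kind of analysis described above, the sketch below flags heart-rate readings that deviate sharply from a rolling baseline. The sample values, the window size and the 30-bpm threshold are arbitrary choices for the example, not clinical guidance:

```python
# Flag readings that deviate sharply from the rolling mean of the
# previous `window` samples. All thresholds here are illustrative only.
def flag_anomalies(readings, window=5, threshold=30):
    flags = []
    for i in range(window, len(readings)):
        baseline = sum(readings[i - window:i]) / window
        if abs(readings[i] - baseline) > threshold:
            flags.append((i, readings[i], round(baseline, 1)))
    return flags

heart_rate = [72, 75, 71, 74, 73, 76, 74, 130, 75, 72]  # bpm samples
print(flag_anomalies(heart_rate))  # -> [(7, 130, 73.6)]
```

In a real system, a flagged reading would trigger an alert to the patient or care team rather than a print statement.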
Tracking and Preventing Diseases
Data science is important in monitoring a patient's overall health and in flagging the steps needed to prevent potential diseases. Data scientists are employing powerful predictive analytics tools to detect chronic diseases at an early stage.
In many cases, diseases are not detected at an early stage because their early symptoms seem negligible. This harms both the patient's health and the economics of care: as a disease spreads, so does the cost of curing it. As a result, data science plays a significant role in optimizing economic spending on healthcare.
Fintech encompasses a broad range of firms that develop technology-focused products to enhance financial services' functionality, typically offered by incumbent financial institutions.
The financial industry's information security standards account for the specific risks and threats that financial organizations face, making them central to ensuring business and operational integrity.
In India, fintech is primarily regulated by the Reserve Bank of India (RBI), in terms of both business and the regulations governing this sector, with the focus on digital payments as the most advanced fintech sub-category in India.
However, not all fintech organizations fall under RBI jurisdiction; the Insurance Regulatory and Development Authority and the Securities and Exchange Board of India also have powers to govern entities specific to their sectors.
Table of Contents
Security and data privacy challenges faced by Fintech companies
The financial service sector handles sensitive information about individuals and enterprises. With the exponential growth of Fintech, more data is now accessible in digital formats, which ensures that data is readily available to analyze and generate insights.
And this makes the data more vulnerable to security breaches. Plus, the increase in mobile banking services has resulted in a tremendous amount of data - such as personally identifiable information, financial and health information - being exposed to third parties.
Need for Security standards in Fintech technology sectors in India
Fintech has led to start-ups' growth and it has also disrupted how traditional banks offer their services to customers. Banks have integrated multiple digital channels to create an omnichannel customer experience.
Improved methods of identifying and quantifying risks, algorithm-based investments, and defined platforms for users to optimize their portfolios have revolutionized the wealth and asset management industry.
In addition, blockchain technology has evolved remarkably, disrupting fintech through the development of cryptocurrencies, including Bitcoin. This has considerably transformed payments and money transfers by eliminating middlemen and enabling the development of 'smart' contracts. Global trends, such as increasing globalization, the decreasing average age of the workforce, digitization, and rising disposable incomes, have significantly contributed to this industry's evolution and growth.
All this has put pressure to re-evaluate the security standards currently applicable to the fintech sector.
Fintech Security guidelines in India
There is a misconception among fintech entities that the RBI governs all security guidelines. In reality, only those fintechs covered by the RBI Act, 1934 come under its jurisdiction.
Fintech entities that do not fall under the RBI are instead governed by the Information Technology Act, 2000 ("IT Act"), with more specific rules issued under section 43A of the IT Act.
Every qualified Fintech should ensure they possess Information security procedures as mentioned below:
IT governance should be overseen by a senior member of management, typically the CEO or CIO. But to build a robust security infrastructure, decisions should not rest solely with the C-suite, as security is a shared responsibility across all levels and departments of the enterprise.
Security efforts should be backed by stringent IT policies and procedures developed by the CISO and their team.
The policies and procedures should include:
1. Detail operational procedures for IT infrastructure, such as data center operations
2. Consider inter-dependencies between risk elements in the risk assessment process
3. Lay down standards for hardware or software prescribed by the proposed architecture
4. Implement appropriate measures to ensure adherence to customer privacy requirements applicable to relevant jurisdictions
Information security governance
Financial technology companies are a vital source to accelerate innovation-driven improvements. And this, in turn, presses on the need to prioritize information security governance for all Fintech firms.
A comprehensive security governance program needs to include:
1. Development and ongoing maintenance of security policies
2. Sharing of roles, responsibilities, and accountability for information security across the organization
3. Generation of meaningful metrics of security performance
4. Effective identity and access management processes
Critical Components of Information Security in Fintech
Policies and Procedures
The policies and procedures that a company adopts define its overall corporate culture and security framework. The guidelines should be well-defined after proper analysis of existing infrastructure loopholes and requirements.
The most preliminary requirement to maintain enterprise security is to comprehend the business's need to analyze risk efficiently and plan accordingly to combat the risks.
Security depends on controlling the answers to – who, where, what, and when. There has been a boom in the demand for access control solutions across enterprises. Irrespective of the company size, all access control measures' top objective is to protect the physical, IP, and human assets.
To ensure cybersecurity compliances, enterprises need to maintain design controls based on international security mandates and compliances.
Personnel and Physical Security
The productivity of every enterprise depends on its effectiveness in putting the required framework and responsible people in place. Personnel and physical security infrastructure play the most critical role in securing a firm's critical data.
Training and Awareness
To ensure that each employee realizes their contribution to maintaining the company's cyber resilience, firms need to continuously invest in employee cybersecurity training and awareness programs.
The enterprise incident management process should be well designed to promptly restore normal service operations in case of a breach. A robust incident management program mitigates the adverse impact of outages and maintains an optimal level of service quality.
Data encryption plays an integral part in ensuring cybersecurity. It helps companies maintain a proactive defense against cyber attacks and breaches, and allows businesses to meet industry regulatory requirements such as HIPAA, PCI DSS, and EI3PA.
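To illustrate the idea of encrypting data before it is stored or transmitted, here is a minimal Python sketch using a one-time pad (XOR with a random key as long as the message). This is for illustration only; production systems should use a vetted authenticated cipher such as AES-GCM from a maintained cryptography library:

```python
import secrets

def encrypt_otp(plaintext: bytes) -> tuple[bytes, bytes]:
    """One-time pad: XOR each byte with a fresh random key of equal length."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def decrypt_otp(key: bytes, ciphertext: bytes) -> bytes:
    """XOR is its own inverse, so decryption reuses the same operation."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))

key, ct = encrypt_otp(b"card=4111-1111")
assert decrypt_otp(key, ct) == b"card=4111-1111"
```

The practical lesson carries over to real ciphers: keys must be generated from a cryptographically secure source (here, the `secrets` module) and must be stored separately from the ciphertext.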
When it comes to data security, enterprises need to focus on striking an optimum balance between security and productivity.
The prerequisite for achieving the required level of enterprise security is effectively assessing the firm's underlying vulnerabilities in business operations.
Ongoing security monitoring process
Developing stringent security policies and training employees will never suffice on their own. The key to successful security maintenance is a robust, ongoing cybersecurity monitoring process that checks whether the set requirements are actually being adhered to.
Patch management is the most commonly ignored security topic, yet it plays an immensely important role in any enterprise security plan. It is the process of handling all updates to a company's information systems, including routers, servers, operating systems, firewalls, and antivirus software.
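As a small sketch of what a patch-management check might automate, the following compares a hypothetical inventory of installed versions against the latest available releases (the component names and version numbers are made up):

```python
def parse_version(v: str) -> tuple:
    """Turn '3.0.13' into (3, 0, 13) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

# Hypothetical inventory vs. latest published releases.
installed = {"openssl": "3.0.1", "nginx": "1.24.0", "postgresql": "15.2"}
latest    = {"openssl": "3.0.13", "nginx": "1.24.0", "postgresql": "15.6"}

outdated = sorted(name for name in installed
                  if parse_version(installed[name]) < parse_version(latest[name]))
print(outdated)  # ['openssl', 'postgresql']
```

Real patch management adds severity data, maintenance windows, and rollback plans on top of this basic "what is behind?" question.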
With the continuously evolving market dynamics, change management is one of the most current topics that enterprises need to prioritize and maintain.
Regular auditing also has a pivotal part to play when it comes to enforcing the cybersecurity mandates across enterprises.
Network security is crucial to prevent accidental damage or malicious use to the network's private data, users, and devices, focusing on keeping the system up and running safely for all legitimate users.
With business moving to the new normal of remote working, remote access control forms the backbone of seamless business operations.
Wireless networks have forced firms to completely reimagine network and device security to prevent attacks or misuse that could expose critical assets and confidential data.
As the fintech industry continues to grow by leaps and bounds, cybersecurity has become a pivotal concern for the C-suite.
It is not hard to foresee the damage a single cybersecurity breach can cause to a business.
Security architectures at fintech organizations need to consider new trends, cybersecurity laws in India, and the exponential growth of the industry which has brought about larger security threats. Looking at the future, security and data privacy will play a key role in winning over consumer confidence and catalyzing innovations and the development of fintech.
1. Which laws govern Fintech?
Fintechs that come under RBI jurisdiction are governed by the RBI Act, 1934.
Fintechs that do not fall under the RBI come under the Information Technology Act, 2000 ("IT Act"), with more specific rules issued under section 43A of the IT Act.
2. What is the biggest risk of data being breached in Fintech?
Currently, the biggest challenge in protecting data lies in mobile banking services. Personally identifiable, financial, and health information is heavily exposed to third parties.
3. What is the most easily missed security parameter for Fintech?
Patch management plays a crucial role in maintaining the enterprise security plan but often goes ignored. It involves handling all updates to a company's information systems, including servers, routers, operating systems, firewalls, and antivirus software.
4. Who is responsible for enterprise IT governance?
IT governance should be handled by a senior member of the management, particularly the CEO or CIO. However, security is a shared responsibility across all enterprise levels and departments. | <urn:uuid:31c4e4e5-3b7c-40d0-9ec1-d0f24fda450b> | CC-MAIN-2022-40 | https://www.appknox.com/blog/security-standards-for-the-evolving-fintech-landscape-in-india | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00499.warc.gz | en | 0.923157 | 1,853 | 2.5625 | 3 |
Functional Programming Or Microservices: What Suits Your Business App Development
The world of programming has come very far, and it continues to evolve. What this gives you is options. Of the many ways to build applications, two of the most common approaches are functional programming and microservices. This blog gives you an in-depth view of both.
To understand the similarities and differences between these two, we must first understand how they work individually.
What Is Functional Programming?
Functional Programming (also called FP) is a programming paradigm in which everything is bound together by using pure mathematical functions. Its programming style is declarative because it focuses on “what to solve” instead of “how to solve.”
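The declarative, pure-function style can be illustrated in a few lines of Python: the functions below depend only on their inputs, and the composition states what to compute rather than how to loop:

```python
from functools import reduce

# Pure functions: output depends only on input, no side effects.
def square(x: int) -> int:
    return x * x

def add(a: int, b: int) -> int:
    return a + b

numbers = [1, 2, 3, 4]
# Declarative composition: describe the result, not the iteration.
sum_of_squares = reduce(add, map(square, numbers), 0)
print(sum_of_squares)  # 30
```

Because `square` and `add` are pure, the expression can be reasoned about, tested, and parallelized without worrying about hidden state.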
What Are Microservices?
Microservice architecture, or simply microservices, is a modern approach to software in which application code is delivered in small, manageable pieces. It focuses on building small, self-contained, ready-to-run applications that bring great flexibility and added resilience to a codebase.
Each service runs independently in its own process, making microservices a service-oriented architecture in which applications are built as a collection of distinct smaller services rather than one monolithic app. Programming languages that support microservices development include Java, Go, Python, .NET, and Node.js.
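A microservice can be as small as a single endpoint. The following self-contained sketch, using only the Python standard library (the `/health` endpoint and its payload are illustrative), runs a tiny service in its own process space and queries it:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class HealthHandler(BaseHTTPRequestHandler):
    """A single-purpose service exposing one JSON endpoint."""
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

# Bind to port 0 so the OS picks a free port, then serve in the background.
server = HTTPServer(("127.0.0.1", 0), HealthHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

reply = json.loads(urlopen(f"http://127.0.0.1:{port}/health").read())
print(reply)  # {'status': 'ok'}
server.shutdown()
```

In a real deployment each such service would run as its own process or container, discovered and called over the network by its peers.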
How Are Functional Programming And Microservices Similar?
Although Functional Programming and Microservices are very different, a few similarities can be noted between them.
- They share the goal of deployment elasticity
- Both functional programming and microservices are formulated to scale and load-balance simultaneously with any increase in demands
- They are also designed to replace faulty instances
- They both share a concept of encapsulation and clear separation
- Both functional programs and microservices are small, which means they both carry a minimal amount of logic
- When using microservices or functional programs, discretion is advised, since major quality-of-experience problems can develop when they are misused
- The most significant similarity is that neither functional programming nor microservices should store data within a component.
With the help of transport independence and pattern matching, services can be added to or removed from your system, letting you reshape the topology with each deployment. If you look closer, this benefit emerges from the similarities between microservices and functions from functional programming.
How Is Functional Programming Different From Microservices?
Although often associated together, microservices and functional programming are fundamentally quite different, especially regarding software design and development. A few of these differences are listed and explained below:
- Reasons for small size – Microservices are small to facilitate better generalization, which enables developers to compose applications from a collection of small functional units. Functional programs, by contrast, are small in order to perform limited tasks. Functional programs cannot be composed into applications; instead, they are linked with events.
- Functional programming aggressively exercises stateless behavior, emphasizing that the same inputs always generate the same outputs. Microservices, in contrast, usually exhibit stateful behavior because they preserve state through a database, also called back-end state control.
- Functional programming runs on asynchronous outside triggers referred to as event flows. Microservices, alternatively, are called by other software when they are required as part of a workflow.
- If you want to call an individual component at a specific point during processing, you are probably calling a microservice. If you want to respond to something external, that is generally the territory of functional programming.
- Organized functions have sequences in place of workflows, and every step in the processing of an event is likely set on by the step before it. On the contrary, all activations of your software associated with microservices are explicit calls, so you don’t have to worry about figuring out how to keep the steps in sequence.
- Exploring how they will sequence the steps in their scheduled functions is imperative for functional programming, but concurrency is the key with microservices.
- Since microservices are called by other software, you can control how they are called, so you can control concurrency. Alternatively, you can expect event-driven functions to be concurrent.
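The stateless-versus-stateful contrast in the list above can be shown in a few lines of Python: the `tax` function is referentially transparent, while the counter (standing in for a database-backed microservice) returns different results for identical calls:

```python
# Stateless (functional style): same input always yields the same output.
def tax(amount: float, rate: float) -> float:
    return round(amount * rate, 2)

# Stateful (typical of a microservice backed by a store): output depends on history.
class OrderCounter:
    def __init__(self):
        self._count = 0  # stands in for back-end state, e.g. a database row

    def record(self) -> int:
        self._count += 1
        return self._count

assert tax(100.0, 0.2) == tax(100.0, 0.2) == 20.0  # referentially transparent

counter = OrderCounter()
print(counter.record())  # 1
print(counter.record())  # 2 -- same call, different result: state matters
```

This is why functional components are trivially safe to retry and replicate, while stateful services need careful coordination with their backing store.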
Under What Circumstances Are Functional Programming And Microservices Best Employed?
Functional programming is typically used for mathematical operations, AI, and pattern matching; generically, anything that can be broken down into a set of rules that must be applied to reach a solution.
Some of FP's advantages that make it well suited to such work are –
- It helps solve problems simply and effectively.
- It increases modularity.
- It allows the implementation of lambda calculus in programs to solve complex problems.
- Some programming languages support nested functions, which improve the maintainability of the code.
- It improves developer productivity.
- It helps debug code quickly.
Microservices have fundamentally changed the way server-side engines are built. Making the most of microservices is a science of its own and requires discipline.
A few of their advantages that make them better employed are –
- Each microservice, as needed, can evolve independently, enabling constant improvement and faster app updates.
- Resources can individually be increased to the most needed microservices rather than scaling an entire app. Therefore scaling is faster and usually more cost-efficient as well.
- If a particular microservice fails, you can prevent cascading failures that would cause the app to crash by simply isolating the failure to that single service.
- Its smaller codebase allows teams to understand the code more easily, making it simpler to maintain.
- Microservices boost agility by reducing the time to market and speeding up your CI/CD pipeline.
Make the Best of These Programming languages for Your Business
Deciding which programming language is best for you is a critical decision.
Speak to our tech experts at Fingent to see which one is right for you and how you can implement it for your business. | <urn:uuid:76dcb371-53e3-40eb-8a70-9d79763cb14e> | CC-MAIN-2022-40 | https://www.fingent.com/blog/functional-programming-or-microservices-what-suits-your-business-app-development/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00499.warc.gz | en | 0.936821 | 1,311 | 2.59375 | 3 |
SQL injection is a vulnerability that allows an attacker to alter backend SQL statements by manipulating the user input. An SQL injection occurs when web applications accept user input that is directly placed into a SQL statement and doesn’t properly filter out dangerous characters.
This is one of the most common application-layer attacks currently used on the Internet. Despite the fact that it is relatively easy to protect against, a large number of web applications remain vulnerable.
An attacker may execute arbitrary SQL statements on the vulnerable system. This may compromise the integrity of your database and/or expose sensitive information.
Depending on the back-end database in use, SQL injection vulnerabilities lead to varying levels of data/system access for the attacker. It may be possible to not only manipulate existing queries, but to UNION in arbitrary data, use subselects, or append additional queries. In some cases, it may be possible to read in or write out to files, or to execute shell commands on the underlying operating system.
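The difference between concatenating user input into a query and passing it as a parameter can be demonstrated with Python's built-in `sqlite3` module (the table and data below are made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 1), ('bob', 0)")

malicious = "nobody' OR '1'='1"

# VULNERABLE: user input concatenated straight into the SQL statement.
unsafe_sql = "SELECT name FROM users WHERE name = '%s'" % malicious
leaked = conn.execute(unsafe_sql).fetchall()
print(leaked)  # every row leaks, because OR '1'='1' is always true

# SAFE: a parameterized query treats the input as data, not as SQL.
safe = conn.execute("SELECT name FROM users WHERE name = ?", (malicious,)).fetchall()
print(safe)  # [] -- no user is literally named "nobody' OR '1'='1"
```

The parameterized form is the standard defense: the driver sends the query structure and the values separately, so attacker-controlled quotes can never change the statement's logic.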
Certain SQL Servers such as Microsoft SQL Server contain stored and extended procedures (database server functions). If an attacker can obtain access to these procedures it may be possible to compromise the entire machine.
In a recent BGP hijacking incident, internet traffic meant for 200 major networks, content delivery networks (CDNs), and cloud providers was redirected through the Russian state-owned telecommunications provider Rostelecom. The hijack redirected over 8,800 major internet traffic routes through its servers for about an hour. Major companies affected included Google, Amazon, Facebook, Akamai, Cloudflare, GoDaddy, Digital Ocean, Joyent, LeaseWeb, Hetzner, and Linode.
How BGP hijacking compromises internet traffic
BGP stands for the Border Gateway Protocol, the system used to route internet traffic between networks across the globe. Using BGP, routers advertise the network routes they have access to, which allows peers to find the shortest route to a destination. The drawback of this system is that it lets malicious actors trick other peers into believing that the destination servers are on their network. The tricked peers then send all data to the hijacker's server using the routing information provided, and the attacker can store that data for later analysis and attempted decryption. Before advances in the use of cryptography, BGP hijacks allowed malicious entities to perform man-in-the-middle (MitM) attacks. However, many incidents of BGP hijacking are the result of operator error rather than intentional hijacking; such errors occur when an operator mistypes an ASN (autonomous system number).
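The reason a hijacker's announcement can win is BGP's longest-prefix-match rule: a more specific prefix beats a legitimate but broader one. The sketch below illustrates this with Python's `ipaddress` module; the prefixes and ASNs are purely illustrative:

```python
import ipaddress

# Hypothetical routing table: the prefixes and origin ASNs are illustrative only.
announcements = {
    "203.0.113.0/24": "AS64500 (legitimate origin)",
    "203.0.113.0/25": "AS64511 (hijacker's more-specific announcement)",
}

def best_route(ip: str, table: dict) -> str:
    """Pick the origin of the most specific prefix covering the address."""
    addr = ipaddress.ip_address(ip)
    matching = [ipaddress.ip_network(p) for p in table
                if addr in ipaddress.ip_network(p)]
    # BGP prefers the longest (most specific) matching prefix.
    winner = max(matching, key=lambda net: net.prefixlen)
    return table[str(winner)]

print(best_route("203.0.113.10", announcements))
# AS64511 (hijacker's more-specific announcement)
```

Any address inside the hijacker's /25 is drawn away from the legitimate /24, which is why route origin validation (checking who may announce a prefix) matters so much.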
Reported BGP hijackings
Incidents of BGP hijacking have been reported multiple times. In 2018, a small Nigerian ISP hijacked Google's internet traffic. A year later, China's state-owned telecom hijacked European internet traffic. The Russian telco giant Rostelecom has also been involved in additional BGP hijackings over the past years.
BGPMon founder Andree Toonk believes the latest BGP hijacking happened by accident. He said on Twitter that the hijack likely occurred because an internal Rostelecom traffic-shaping system accidentally exposed the company's internal BGP routes to the public internet.
Because of the nature of BGP hijacking, malicious actors can make an attack appear to be an accident. Although it is impossible to conclude whether the latest incident was an honest mistake, it is unlikely that the two countries would be making such errors so frequently.
Russian and Chinese BGP hijacking raises eyebrows
Any BGP hijacking involving autocratic countries such as Russia and China raises eyebrows, and the frequency of BGP hijacking incidents emanating from these two countries is suspicious. China Telecom is the most frequent offender of internet traffic hijacking using this method, and Russia's Rostelecom is a habitual offender as well. In 2017, the Russian telecom giant hijacked the internet traffic of major financial institutions exclusively, raising questions.
The geopolitical atmosphere between the two countries and the United States also raises concerns when such incidents happen, and other concurrent events lead experts to think this BGP hijacking was the result of more than just human error. The latest hijacking by the Russian company happened just as the Internet Routing Registry was facing technical difficulties: the IRR operated by RIPE had accidentally deleted 2,669 route origin authorizations (ROAs), the records used to confirm whether routing information published by a peer is correct. The RIPE incident affected servers in Europe, West Asia, and states of the former USSR. It would be a remarkable coincidence for Rostelecom to publish incorrect routing information just when there was no method to verify it. There is the possibility that Russia was aware of the deletion of the verification records and took the opportunity.
“This has happened before with China Telecom, a Chinese state-owned telecommunications company, where sensitive internet traffic from different countries has mysteriously found its way through China Telecom’s systems on the way from point A to point B. It has also happened with Rostelecom before — which is why even though this event could be nothing more than an accident, the redirection of internet traffic from many major tech companies through Rostelecom should at the very least raise eyebrows. That is also the case given the way Moscow has worried about the vulnerability of the internet recently, including with protocols, and the possibility that officials are similarly examining how those vulnerabilities could be exploited against others.”
Sherman notes that internet security professionals rarely consider the safety of the underlying internet protocols, even though protocols such as BGP are vulnerable to hijacking.
“When we think about internet security worldwide, we probably think more about things like strong passwords and don’t think too much about the protocols that actually handle and route internet data around the world — yet many of them are quite vulnerable to hijacking and manipulation.”
Although he accepts that malfunctions could lead to such incidents, he believes actors could maliciously exploit the opportunity: “Sometimes there are malfunctions around BGP that cause traffic to take an unexpected and unintended path from its source to its destination on the internet. But other times, BGP can be maliciously ‘hijacked’ so that internet traffic goes through a particular location — the point being for a malicious actor to potentially access important and sensitive information.”
DLL injection is a technique which allows an attacker to run arbitrary code in the context of the address space of another process. If this process is running with excessive privileges then it could be abused by an attacker in order to execute malicious code in the form of a DLL file in order to elevate privileges.
Specifically this technique follows the steps below:
- A DLL needs to be dropped into the disk
- “CreateRemoteThread” is called with the address of “LoadLibrary” so that the remote process loads the dropped DLL
- The reflective loader function will try to find the Process Environment Block (PEB) of the target process using the appropriate CPU register, and from that will try to find the address in memory of kernel32.dll and any other required libraries.
- Discovery of the memory addresses of required API functions such as LoadLibraryA, GetProcAddress, and VirtualAlloc.
- The functions above will be used to properly load the DLL into memory and call its entry point DllMain which will execute the DLL.
This article will describe the tools and the process of performing DLL injection with PowerSploit, Metasploit and a custom tool.
DLLs can be created from scratch or through Metasploit's msfvenom, which can generate DLL files containing specific payloads. It should be noted that a 64-bit payload must be used if the process into which the DLL will be injected is 64-bit.
The next step is to set up the Metasploit listener in order to accept the connection back once the malicious DLL is injected into the process.
There are various tools that can perform DLL injection, but one of the most reliable is Remote DLL Injector from the SecurityXploded team, which uses the CreateRemoteThread technique and has the ability to inject a DLL into ASLR-enabled processes. The process ID and the path of the DLL are the two parameters that the tool needs:
When RemoteDLLInjector executes, it prints the full list of steps it performs in order to achieve DLL injection.
If the DLL is successfully injected, it will return a Meterpreter session with the privileges of the process. Processes with higher-than-standard privileges can therefore be abused for privilege escalation.
The Metasploit framework has a specific module for performing DLL injection. It only needs to be bound to an existing Meterpreter session and given the PID of the target process and the path of the DLL.
Privilege escalation via DLL injection it is also possible with PowerSploit as well. The msfvenom can be used to generate the malicious DLL and then through the task manager the PID of the target process can be obtained. If the process is running as SYSTEM then the injected DLL will run with the same privileges as well and the elevation will be achieved.
The Invoke-DLLInjection module will perform the DLL injection as the example below:
The payload inside the DLL will be executed and SYSTEM privileges will be obtained. | <urn:uuid:5697c933-0300-489b-90b9-0f37c6032fe3> | CC-MAIN-2022-40 | https://pentestlab.blog/2017/04/04/dll-injection/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00699.warc.gz | en | 0.900997 | 639 | 2.578125 | 3 |
What is Protected Health Information?
Unless you have been living under a rock, you’ll know that HIPAA Compliance is all about ensuring the sanctity, integrity, and security of Protected Health Information or as it is more commonly known, PHI. But what is PHI? Why is it so important that it is kept under lock and key, and only disclosed when it is considered necessary?
HIPAA Protected Health Information, or PHI, is any personal health information that can potentially identify an individual, that was created, used, or disclosed in the course of providing healthcare services, whether it was a diagnosis or treatment. PHI can include:
- The past, present, or future physical health or condition of an individual
- Healthcare services rendered to an individual
- Past, present, or future payment for the healthcare services rendered to an individual, along with any of the identifiers shown below.
To put it simply, PHI is personally identifiable information that appears in medical records as well as conversations between healthcare staff such as Doctors and Nurses regarding patient treatment. PHI also includes billing information and any information that could be used to identify an individual in a health insurance company's records.
Generally, PHI can be found in a wide variety of documents, forms, and communications such as prescriptions, doctor or clinic appointments, MRI or X-Ray results, blood tests, billing information, or records of communication with your doctors or healthcare treatment personnel.
ePHI: PHI in Electronic Format
When PHI is found in electronic form, such as a computer file, it is called electronic Protected Health Information, or ePHI: PHI that is transferred, received, or simply saved electronically. ePHI was first described in the HIPAA Security Rule, which instructed organizations to implement administrative, technical, and physical safeguards to ensure its sanctity and integrity. ePHI can take many forms, whether shared via email or stored on a hard drive, computer, flash drive, disk, cloud hosting platform, or other media.

It is important to point out that the HIPAA Privacy Rule applies to every form of PHI that currently exists or will ever exist, while the Security Rule applies only to ePHI and not to paper or oral versions of the same information. Since technological innovation has led many covered entities and business associates to handle ePHI as part of their operations, dedicated guidance was needed on the topic. This shift in how medical information is stored led HHS to pass the Security Rule, with its physical, technical, and administrative safeguards for ensuring the protection of ePHI.
What Information is considered PHI?
PHI is any information that can be used to identify an individual, even if the link appears to be tenuous. HIPAA has laid out 18 identifiers for PHI. If a record contains any one of those 18 identifiers, it is considered to be PHI. If the record has these identifiers removed, it is no longer considered to be Protected Health Information and it is no longer under the restrictions defined by the HIPAA Privacy Rule. These are the 18 Identifiers for PHI:
- Full names or last name and initial
- All geographical identifiers smaller than a state
- Dates (other than year) directly related to an individual such as birthday or treatment dates
- Phone Numbers including area code
- Fax number/s
- Email address/es
- Social Security number
- Medical record numbers
- Health insurance beneficiary numbers
- Bank Account numbers
- Certificate/driver's license numbers
- Vehicle identifiers (including VIN and license plate information)
- Device identifiers and serial numbers
- Web Uniform Resource Locators (URLs)
- Internet Protocol (IP) address numbers
- Biometric identifiers including fingerprints, retinal, genetic information, and voice prints
- Full face photographs and any comparable images that can identify an individual
- Any other unique identifying number, characteristic, or code except the unique code assigned by the investigator to code the data
The rule of thumb is that if any of the information is personally recognizable to the patient or if it was utilized or discovered during the course of a healthcare service, it is considered to be PHI.
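As a hedged illustration of how a few of these identifiers can be spotted in free text, the sketch below redacts SSNs, phone numbers, and email addresses with deliberately simplified regular expressions. Real de-identification tooling covers all 18 identifier types and handles far more formats:

```python
import re

# Simplified, non-exhaustive patterns for three of the 18 identifiers.
PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\(?\d{3}\)?[-. ]\d{3}[-.]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a bracketed label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Contact John at john.doe@example.com or 555-123-4567, SSN 123-45-6789."
print(redact(note))
# Contact John at [EMAIL] or [PHONE], SSN [SSN].
```

A record that has had all 18 identifiers removed in this fashion is considered de-identified and falls outside the Privacy Rule's restrictions, which is exactly why such tooling exists.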
What is not considered PHI?
Generally speaking, PHI does not include information created or maintained for employment records, such as employee health records. Health data that is not shared with a HIPAA covered entity or can not be used to identify an individual doesn’t qualify as PHI, such as a blood sugar reading, a temperature scan, or readings from a heart rate monitor.
To make laying out a narrow definition of PHI even more complicated, HIPAA was conceived in a time when the internet was in its infancy and devices like smartphones were something that you saw on Star Trek. The law was written for a world in which X-rays were physical copies and safeguarding patient data meant keeping files in locked filing cabinets behind closed doors. In today’s world of genetic information, wearable technology, health apps and perhaps even implantables, it can be challenging to determine whether you are using consumer health information or PHI.
So if you are a startup developing an app, and you are trying to decide whether your software needs to be HIPAA Compliant, the general rule of thumb is this: If the product that you are developing transmits health information that can be used to personally identify an individual and that information will be used by a covered entity (medical staff, hospital, or insurance company), then that information is considered PHI and your organization is subject to HIPAA. If you have no plans on sharing this data with a covered entity, then you do not need to worry about HIPAA compliance - yet.
Why is PHI valuable to criminals?
It is common knowledge that healthcare data is very attractive to hackers and data thieves. Hacking and other IT-related events make up the majority of PHI breaches, but why is healthcare data such an attractive target for cybercriminals?
According to a study by Trustwave, banking and financial data is worth $5.40 per record, whereas PHI records are worth over $250 each due to their longer shelf life. Stolen financial information must be used quickly for a thief to take advantage of it. For example, if your credit card is stolen, you can cancel the card as soon as you become aware of the theft or loss, leaving the thief only a brief window to make fraudulent purchases.
However, if your personal healthcare information is stolen, you cannot change it, and the breach can take a very long time to detect. Information such as your name, date of birth, address, Social Security number, and older medical claims can be used to commit fraud: the thief can receive medical care under the victim's name, purchase prescription drugs, and even commit blackmail.
The results of an unauthorized disclosure of PHI can be far worse than financial fraud, as they can take months or even years before they are detected. Identity theft can take years to recover from. Additionally, the penalties of a HIPAA violation can be quite severe and even crippling to an organization.
Protect Against PHI Breaches. Become HIPAA Compliant
The HIPAA Security Rule requires organizations to take proactive measures against threats to the sanctity of PHI. Organizations must implement administrative, technical, and physical safeguards to ensure the confidentiality and integrity of the PHI under their care. However, beyond saying that safeguards must be implemented, the "how" is left to the discretion of the individual organization. This can be frustrating: when the cost of non-compliance is so high, organizations want to know exactly what they need to do to be compliant.
Related: HIPAA Compliance Checklist
Broadly speaking, these are the actions an organization needs to take in order to become HIPAA-compliant and safeguard the PHI under its care:
- Select someone from your organization to serve as your HIPAA Privacy officer.
- Conduct regular employee HIPAA training as well as awareness programs to ensure that your staff are aware of the tactics deployed by cybercriminals, as well as more mundane and traditional methods of protecting data.
- Investing in data loss prevention controls like encryption and endpoint security solutions.
- Put policies in place to prevent employees from accessing PHI via non-secure methods and to control access to PHI.
- Storing and transmitting PHI via a method that meets HIPAA compliance – if this is managed by a third-party provider, a business associate agreement must be signed ensuring the third party is also complying with HIPAA.
Accountable was founded with the goal of making HIPAA compliance achievable by creating a framework that will make training employees, adopting applicable policies and procedures, and identifying risk in your organization simple so that you can spend your time focusing on your business, not fretting about threats. We’re so confident that we can meet your needs that you can try it for free.
Don’t wait. Get started on your journey to compliance, today. | <urn:uuid:5ba54a78-76a7-4ecc-b6cd-d4ae9d13aba4> | CC-MAIN-2022-40 | https://www.accountablehq.com/post/what-is-phi | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00699.warc.gz | en | 0.948554 | 1,880 | 3.34375 | 3 |
What if we could traverse and solve hard, uncertain enigmas?
What if we could predict a natural disaster before it happens?
What if we could protect endangered species, or trace a disease as it spreads and stamp it out earlier?
All such unanswered problems are essential to address if we are to improve people's lives. But how? Perhaps the answer is Artificial Intelligence.
Put simply, AI is the creation of software that uses computers and algorithms to decipher data and act on its own account. Artificial intelligence is among the fastest-growing industries, with applications ranging from education, investments, business, and trends to news and politics.
This simple blog throw lights on how AI will change the world and making peoples’ lives easier, especially when employs and connects hands with Google, bestowing a voyage to Google AI.
What is Google AI?
Google AI, formerly known as Google Research, is Google's Artificial Intelligence R&D division, responsible for a number of the company's fundamental AI applications. Google unveiled the rebrand to Google AI at Google I/O in 2018.
Google is continuously expanding its AI division across related products, including Google AutoML Vision (image recognition), Google Assistant (Android devices), TensorFlow (machine learning and deep learning), and DeepMind (deep learning and artificial general intelligence research).
Google Research takes on numerous challenging problems in computer science and allied fields. Google believes that AI can offer distinct ways of approaching problems and meaningfully enrich people's lives. (Click here to know how AI changes our lives)
According to Google's definition, AI is:
The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
What are the Tools provided by Google AI?
Google is building tools and resources that are available to everyone, so that anyone can use technology to solve problems. Some of these tools are described below:
1. ML Kit
ML Kit brings Google's machine learning expertise to mobile developers in a powerful and convenient package. With it, you can make your iOS and Android apps more engaging, personalized, and efficient, with optimized solutions that run on the device. It helps with:
- Optimization for mobile devices: ML Kit's processing happens on the device, making it fast and unlocking real-time use cases such as working with the camera. It also works offline and can be used to process images and text.
- Google expertise: you can take advantage of the same ML technologies that power Google's own mobile experiences.
- Simple deployment: by combining strong ML models with efficient processing pipelines, it offers easy-to-use APIs that enable powerful use cases in your mobile apps.
2. Fairness Indicators
Fairness Indicators enables easy computation of commonly recognized fairness metrics for binary and multi-class classifiers. Previously, tools for evaluating fairness concerns did not work well on very large datasets and models, so Google built tools that can operate on systems with billions of users.
With Fairness Indicators, teams can evaluate fairness metrics for data and use cases of any size. It makes it possible to:
- Evaluate the distribution of datasets
- Evaluate the performance of models, sliced across defined groups of users
- Dig into individual slices to explore root causes and opportunities for improvement
Want to learn how Fairness Indicators can be applied to a product to evaluate fairness concerns over time?
A case study demonstrates how Fairness Indicators works, complete with videos and programming exercises.
3. Colab
Colaboratory, or "Colab" for short, enables you to write and run Python in your web browser, with:
- Zero configuration required
- Free access to GPUs
"Colab makes work much easier for students, data scientists, and AI researchers"
It is a free Jupyter notebook environment that requires no setup and runs entirely in the cloud, so writing, running, and sharing code all happen there.
In the official video from Google Colab, Jake VanderPlas explains exactly what you need to get started with Colab.
4. Google Open Source
Google understands that open source is good for everyone. By making code freely and openly available, Google encourages and stimulates collaboration, supporting the development of technology that solves real-world problems.
"It brings the value of open source to Google, and the resources of Google to open source."
(While we're discussing open source, follow the link to learn about other open-source applications)
In short, Google runs numerous open-source projects to build reliable products at scale. Google has also published millions of lines of code under open-source licenses for others to use.
“Google's embrace of open source has been important to me as an engineer in ways I can't express. It's fantastic!”- Rob Pike, Distinguished Engineer
5. TensorFlow.js
Here you can find tutorials on how to use TensorFlow.js, with excellent examples, commonly used models, and live demos built with TensorFlow.js.
How it works:
Retrain existing models: you can retrain pre-existing ML models using your own data.
(Learn how ML models help Uber to optimize its services)
Google's machine learning and artificial intelligence (or machine intelligence, as the company likes to call it) influences many of its signature products. -John Chae
Google has been opening its platform up at a world level, giving everyone equal opportunity to observe how a single company goes about building machine learning systems. Google has put enormous effort into building a massive platform for Artificial Intelligence, and it has now released it worldwide.
You've now been through an overview of Google AI, the ideas behind it, and the various Google AI tools that are freely and openly available to everyone.
This feature first appeared in the Summer 2015 issue of Certification Magazine. Click here to get your own print or digital copy.
Tech skills will be required in 80 percent of all jobs in the next decade, yet women in technology have been declining since 1991. At the current rate of decline, fewer than 1 percent of the global tech workforce will be female by 2043.
STEM (science, technology, engineering, and math) jobs are twice as likely to be held by men, even in a randomized, double-blind study. Monica Eaton-Cardone, founder and CIO of Global Risk Technologies, says that in order for today's women to have a chance tomorrow, gender must become a non-factor.
"The opportunity to add a valuable contribution to society through technology is a benefit that should be promoted more — especially to women," says Eaton-Cardone. She believes that women are interested in STEM opportunities, but don't get many chances to develop or pursue that interest. Eaton-Cardone points to the 66 percent of fourth-grade girls who are interested in math and science — yet only 18 percent of college engineering majors are female. Currently, only 1 in 4 STEM jobs is held by a woman. Eaton-Cardone says that in order to change this, women need to be encouraged and women need to be educated on the growing potential of STEM careers.
Eaton-Cardone is a unique case for women in STEM, excelling in a field of men despite having no formal IT background. Her creative solutions for payment processing in Chargebacks911, eConsumerServices, and Global Risk Technologies have enabled merchants, consumers and banks to find solutions for their online businesses. Even taking the rare success stories into account, however, there are many questions when it comes to women in STEM:
- Why do young girls lose their enthusiasm for math and science?
- How can society offer encouragement to women in STEM?
- Why are STEM industries averse to women?
- What programs/opportunities exist to encourage girls in STEM?
- How can companies learn about potential bias in hiring practices? What can they do to change this?
- What are your recommendations to women who would like to get into technology or IT?
Monica Eaton-Cardone made a career out of discovering where there is a problem and then solving it herself. She then develops solutions for others who are experiencing the same problem. Eaton-Cardone has a 20-year background in developing retention campaigns, which entails developing technologies around monitoring key performance indicators to help track advertising performance, customer acquisition trends, and other factors that help to secure customers.
Her career expanded to the online arena, which she claims is a totally different ballgame because it is constantly evolving. As with brick-and-mortar businesses, principles such as "The customer is always right" are important. But the online arena serves a faceless customer, with less accountability for both consumer and merchant and a reduced barrier to entry for merchants on a global scale. That introduces some unique problems in business, problems that are hard to quantify because online business is a moving target.
Take chargeback and credit card processing, for instance: "Technology solutions follow the technology criminals," Eaton-Cardone says. "It is the genius criminals who are actually responsible for creating a revolution in this industry because we're all trying to stay ahead of them — and keep up with the loopholes that they expose — to make our systems even more secure. However, the minute you think that you have the best, most secure system, you're dead in the water because just a few months go by and someone else has figured out some other way to expose a weakness in your online presence.
"You have to continually re-invent and recognize that the most relevant data to analyze when it comes to the economy today is the present, not necessarily historical. I developed a software program strictly for my own use because I found there was such confusion in handling chargebacks and being able to analyze risk in the online environment. Lo and behold, there were a number of other online merchants who needed that solution as well, which gave birth to Global Risk Technologies to serve these online merchants."
Eaton-Cardone's first project was developing a VOIP (Voice Over IP) technology that connected call centers in four countries so that she could analyze the results. Her education lies in architecture, so she was not exactly interested in IT. What she was interested in, though, was building things and solving problems. She enjoys math, and she likes organizing ways to solve a problem. Technology was something she fell into as a result of solving a problem with well-defined requirements.
Can any of us get away from technology? Eaton-Cardone says that every woman on the planet has a natural interest in technology and would expose that talent if she tried it. She believes that women, in general, have an aptitude for design and creativity, tapping into their talents in structure and organization. Most mothers are proficient multi-taskers, she says. This is what technology is! Women just use a different set of tools.
Eaton-Cardone has a daughter who is 8. Her daughter is as good with Legos as any boy of the same age, and Eaton-Cardone is certain that her daughter would enjoy a robotics class. At the same time, she's certain that her daughter would not consider taking such a class without ample encouragement from her mother. By their teenage years, most girls have not been afforded much opportunity to be exposed to what technology is.
Girls would enjoy developing a computer program on their iPhones because it's creative. They would be designing something — it's not just math; they'd be applying their talents. "Boys are probably more likely to learn math at a faster pace because they take courses like wood shop, and guess what wood shop is?" Eaton-Cardone says. "A bunch of angles that allow them to apply math to their creativity with the wood; they're learning a skill (wood shop) in tandem with math."
How do we provide the same opportunities to girls? The top-down approach — putting pressure on corporations to hire more women — is not only unworkable but is actually damaging. What ends up happening is that people are interviewed to become computer programmers or coders, and there aren't any women to interview. Eaton-Cardone says she may have one woman out of 100 applicants, and she must fight the impulse to hire that one woman, because to do so would make that woman a charity case.
Suppose, for example, that the lone female applicant is not as qualified as the male applicants. So now there is a woman who is setting an example for every other woman, and she's not very good. To hire her simply to make a statement would not be fair to her, to women in general, to the corporation, or to the 99 men who applied. Trying to incentivize female applicants with money or scholarships doesn't work because men and women go to college to pursue their passions. Oftentimes money is not enough to get them to change their passions. One needs to start at a younger age.
Eaton-Cardone says she overheard a man at a seminar, who said, "We're giving everyone the same opportunity." But Eaton-Cardone claims it's not an opportunity if we're telling a 13-year-old to choose between a sewing class and learning robotics. It would be an opportunity to actually require students to take a robotics class to allow that exposure to technology to happen.
"If the only piano players we had on the planet were very young children who expressed a desire to learn piano, we'd have no pianists," Eaton-Cardone says. "You expose them to something with parental stewardship, and the children learn whether or not they have a talent for the skill and become engaged. Boys are drawn to STEM because they are more naturally interested in computer games, and girls are drawn to creativity and design work. Women aren't given an opportunity early in their lives to put their creativity and design talents together with technology."
Technology is a genderless field. One cannot expect a girl at the age of 13 to decide everything in which she's going to be interested. In Asia, girls are required to take trigonometry, which allows them to consider a career in STEM. Many IT pros are given the flexibility to work from home or telecommute. If one has the talent, one has terrific flexibility and opportunity if a STEM field is chosen. Women do not recognize the freedom that is afforded by a STEM field. One is hired because of one's talent and interest in STEM, just as men are.
It would be interesting to see how many girls would pursue a STEM career if they took wood shop. Expose them to areas outside a traditional girl's comfort zone, and more girls will go into STEM. The best learning comes when there is an application method for the theory being taught. Girls can't be expected to naturally excel in math when they are only given theory without any application for it. Boys are naturally choosing things that apply the math theory.
Eaton-Cardone: "We like to say that a person either has the STEM gene or he/she doesn't. At Global Risk Technologies, we hire a woman as an executive assistant, and we will reveal her hidden abilities in numbers. We hear women saying, 'I like to help people,' and women don't realize that majoring in STEM will allow them to help many more people than studying non-STEM subjects. But they've never seen how they can apply their natural abilities. You can learn how to do anything online. Opportunities are boundless. It's back to having confidence in trying new things; thinking outside the box. It is damaging to women to tell them they are the underdog. The men out there are producing. Don't be afraid to start at the very bottom, and if you perform, the company will recognize this. If you feel sorry for yourself, you will become a liability to that company."
Some final advice to women in technology from Monica Eaton-Cardone: Find a subject about which you can be passionate. Find a mentor in that subject area and become excellent at it. This takes work and many hours. Turn your interest into a passion. Invest in yourself. Pick it and stick with it.
Actor Marc Anthony told reporter Meredith Vieira what his father said to him many years ago. "My dad told me early on, he said, "Son, we're both ugly." I swear, he says it to this day. And he goes, "You work on your personality. It builds character." We would have a better planet if both men and women put 100 percent of their efforts into their passion for something. Why can't that something be STEM? | <urn:uuid:39591d5f-7ded-41c5-a903-eb0bca2ec81f> | CC-MAIN-2022-40 | https://www.certmag.com/articles/tackling-problem-getting-women-stem-careers | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00699.warc.gz | en | 0.974576 | 2,235 | 2.875 | 3 |
In this NetApp training tutorial, we will cover NetApp SAN implementation on clustered ONTAP. Scroll down for the video and also text tutorial.
NetApp SAN Implementation Video Tutorial
What is a Storage Area Network (SAN)?
Storage Area Network (SAN) provides block-level access to storage resources. The data is stored and accessed in blocks that are stored in LUNs. It allows the clients to manage the allocated space on the storage system similar to a local hard drive.
What is SAN Storage Used For?
SANs are usually used to support databases and performance-sensitive applications due to its speed, availability, and resiliency.
Since SAN also offers a high throughput and low latency, it is commonly used for business-critical applications and other high-performance applications such as:
- Virtualization Deployments
- Virtual Desktop Infrastructure (VDI) Environments
- SAP, ERP, and CRM Environments
NetApp SAN Implementation – LUNs
Let’s start with some terminology. For SAN protocols, the storage system is referred to as the target and the client is known as the initiator. A LUN (Logical Unit Number) is a logical representation of a disk which is presented by the target to the initiator.
SAN protocols provide low-level block access to the storage. The LUN is a logical representation of a disk and the initiator can use it exactly the same way it would use a local hard disk installed in its chassis.
LUNs are specific to the SAN protocols (FC, FCoE, and iSCSI), they are not used for NAS.
In NetApp storage, the LUN is located in either a volume or a qtree. Best practice is to have a dedicated volume or qtree for each LUN. Do not put multiple LUNs in the same qtree or volume (unless each LUN is in a separate dedicated qtree in a volume.)
The steps written in green font on the slide above are to be configured on the iSCSI client, while the rest are configured on the NetApp storage system.
iSCSI CHAP Authentication
Configuring CHAP authentication for iSCSI is optional. For Fibre Channel, initiator access to the SAN is controlled through zoning on our switches, so it is inherently secure.
Ethernet switches, however, do not support zoning, which is why it is recommended to use CHAP authentication between the initiator and the target. With one-way CHAP, the target authenticates the initiator; with mutual CHAP, authentication is two-way and the initiator also authenticates the target.
Fibre Channel and FCoE (Fibre Channel over Ethernet)
Both Fibre Channel and FCoE use the Fibre Channel Protocol (FCP) and work in exactly the same way. FCoE is Fibre Channel, just encapsulated in an Ethernet header so that it can run over Ethernet networks.
Both are configured as ‘Fibre Channel’ on NetApp storage, there is not a separate page on the System Manager GUI for FCoE.
They both use World Wide Port Name WWPNs for the addressing on the initiators and the targets.
The WWPNs are assigned to Fibre Channel LIFs on the NetApp storage system. Fibre Channel LIFs can be homed on UTA2 (Unified Target Adapter) ports, which can be configured as either native Fibre Channel, or as 10Gb CNA (Converged Network Adapter) ports with FCoE support.
To set up FC or FCoE, first, configure the UTA2 adapters as either type Fibre Channel or type CNA. This step isn’t required for iSCSI since it always uses Ethernet adapters. Other than that, the configuration steps are very similar for iSCSI and FC/FCoE.
The next task is to enable the SAN protocol on the SVM (Storage Virtual Machine). SVMs are how secure multi-tenancy is enabled in ONTAP. Different SVMs appear as separate storage systems to clients.
The World Wide Node Name (WWNN) for FC/FCoE, or the iSCSI Qualified Name (IQN) for iSCSI, identifies the storage system as a whole. For the SAN protocols, different SVMs will have different WWNNs or IQNs so that they appear as separate systems.
After we tie the LUN to the initiator group with a mapping, we don’t need to discover the target from the initiator because this is done automatically within the Fibre Channel Protocol using the fabric login process.
Once the NetApp side is done, go to the client and configure its LUN access using its multipath software.
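As a rough illustration, the storage-side steps from the ONTAP command line might look something like the following for iSCSI. The SVM, volume, LUN, and igroup names here are invented placeholders, and exact command options vary by ONTAP version, so treat this as a sketch rather than a copy-paste recipe:

```
# Enable the iSCSI protocol on the SVM
vserver iscsi create -vserver svm1

# Create the LUN in its own dedicated volume (best practice)
lun create -vserver svm1 -path /vol/vol_lun1/lun1 -size 100GB -ostype linux

# Create an initiator group containing the client's IQN
lun igroup create -vserver svm1 -igroup host1_ig -protocol iscsi -ostype linux \
    -initiator iqn.1994-05.com.redhat:host1

# Map the LUN to the initiator group so the client can see it
lun map -vserver svm1 -path /vol/vol_lun1/lun1 -igroup host1_ig
```

For FC/FCoE the flow is the same, with WWPNs in the igroup instead of IQNs.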
SAN Logical Interfaces (LIFs)
The system as a whole is identified by its WWNN (FC or FCoE) or IQN (iSCSI). Individual ports on the storage system in the SVM are identified by a WWPN if it's Fibre Channel or FCoE, or by an IP address if it's iSCSI.
The WWPNs and IP addresses are not applied directly to physical ports, they’re applied to logical interfaces. This is the same as for our NAS protocols.
Separation of the physical port and logical interface provides flexibility which allows multiple LIFs to share the same underlying physical port. SAN LIFs support either Fibre Channel, FCoE, or iSCSI. FC LIFs have WWPNs, iSCSI LIFs have IP addresses.
Clients using NAS protocols cannot use SAN LIFs to access their data. SAN LIFs can, however, share underlying physical interfaces with NAS LIFs. So on the same physical port, we could have a NAS LIF that’s being used for NFS, and a SAN LIF that’s being used for iSCSI for example. The different LIFs on the same physical port would have different IP addresses.
SAN LIFs are assigned a home node and port just like NAS LIFs. The only difference is that SAN LIFs don't fail over like NAS LIFs do. SAN protocols leverage multipathing intelligence on the clients. If a SAN LIF fails, the client knows the other paths it can take to get to its LUN, so there is no need for the LIF to fail over to another port.
SAN LIFs can be migrated from their home port over to different ports or nodes within an SVM. So they’re not going to automatically failover but as an administrator, we can manually move a LIF to a different port.
Best practice is to create at least one LIF per node, per network, per SVM. The example below shows eight paths from the hosts to the storage, with eight LIFs on the storage, two on each node. | <urn:uuid:fdfbdbab-7617-48fc-b35c-88e2c0f0c65d> | CC-MAIN-2022-40 | https://www.flackbox.com/netapp-san-implementation | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00699.warc.gz | en | 0.898735 | 1,462 | 2.625 | 3 |
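Following that best practice, creating one iSCSI LIF per node for an SVM might look roughly like this from the ONTAP CLI. The node, port, and address values are placeholders, and the exact syntax should be checked against your ONTAP release:

```
network interface create -vserver svm1 -lif iscsi_lif_n1 -role data \
    -data-protocol iscsi -home-node cluster1-01 -home-port e0c \
    -address 192.168.10.11 -netmask 255.255.255.0

network interface create -vserver svm1 -lif iscsi_lif_n2 -role data \
    -data-protocol iscsi -home-node cluster1-02 -home-port e0c \
    -address 192.168.10.12 -netmask 255.255.255.0
```

With a LIF on each node, the client's multipath software always has a path to the LUN even if one node or port goes down.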
Data Privacy Day is an international holiday that occurs every January 28. The purpose of Data Privacy Day is to raise awareness and promote data privacy education. It is currently celebrated in the U.S., Canada and 27 European countries. In Europe, the holiday is referred to as Data Protection Day. (source: Wikipedia)
According to the official press release from the National Cyber Security Alliance:
“Data Privacy Day strives to educate the world about data privacy protection through its universal theme: Respecting Privacy, Safeguarding Data and Enabling Trust. The global effort is made successful through a flood of activities from more than 140 supporting organizations and individuals across the globe. Plans to celebrate Data Privacy Day have been made in numerous countries including Australia, Japan, India, Belgium and Canada. While here in the U.S., many colleges and universities, businesses and community organizations will celebrate Data Privacy Day in their own unique ways.”
Top 10 Strategies for Protecting Data At Your Company
To celebrate Data Privacy Day, Jay Livens, director of product and solutions marketing at Iron Mountain has compiled a checklist of top 10 strategies for businesses who want to keep their information safe and from getting into the wrong hands. Take a look:
1. Encryption is key. Make sure all of your data is encrypted – whether it’s information you keep in digital storage, tape, or on your employees’ mobile devices. Wherever there is sensitive information, there should also be encryption.
2. Manage Mobile Devices. The ever-mobile employee of today can have a lot of sensitive information on their phones and tablets. Make sure you have a mobile device management solution or policy in place to protect those devices, whether corporate or employee-owned.
3. Out with the Old. Ensure that comprehensive corporate policy accounts for the secure destruction of old and sensitive company, employee and customer information.
4. Store Smart. You should always know how your information is secured – whether it’s in the cloud, in a data center or housed locally.
5. Plan Ahead. Make sure you have an end-of-life plan in place for assets you no longer need or that will be destroyed. People tend to hold on to information for longer than they need. Make sure you dispose of IT assets in a safe and consistent manner to protect from a potential data breach.
7. Password Protect. Use complex passwords, change them frequently, and use two-factor authentication whenever possible.
7. Virus Protection. It seems like a no-brainer, but keeping up-to-date with virus protection is a great way to keep data safe.
8. Don’t Forget Firewalls. Firewalls and intrusion detection are also a key piece of the data privacy puzzle.
9. Privacy is the Best Policy. Create an enterprise-wide policy to protect private information from unauthorized access or inadvertent disclosure.
10. Education Nation. Properly train your employees to treat information appropriately, and make sure everyone is up to speed on the latest policies and procedures.
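On the developer side, tip 7 has a counterpart: services should never store passwords directly. A minimal sketch of the standard approach (a salted, deliberately slow hash), using only Python's standard library; the function names here are our own illustration, not any particular product's API:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, rounds=200_000):
    """Derive a slow, salted hash to store in place of the password itself."""
    salt = os.urandom(16) if salt is None else salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    return salt, digest

def verify_password(password, salt, stored, rounds=200_000):
    """Recompute the hash and compare in constant time to resist timing attacks."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    return hmac.compare_digest(candidate, stored)
```

The random salt ensures two users with the same password get different hashes, and the high iteration count makes brute-forcing stolen hashes expensive.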
Data from Iron Mountain Survey
Recently, we conducted a survey of IT professionals on how organizations will protect data in 2014 and beyond. You can read the official release here, but here are the top-level highlights:
- Data loss ranks as number one concern of IT leaders: With 68 percent probability, the report shows that data loss and privacy breaches are the most prevalent concern for IT leaders over the next 12-18 months.
- Managing increased data volumes will continue to overwhelm organizations: There is a 77 percent likelihood that the rising tide of data will remain the greatest challenge facing IT organizations. Contributing to the issue is that many enterprises have data stored on various technologies, making access to this data a concern as these organizations work to meet growing archiving requirements.
- Backup tape is still an attractive storage option: Respondents indicated with a 62 percent confidence level that IT organizations are grappling with limited funding for aligning data growth and data protection. At the same time, tape’s low total cost of ownership (TCO) makes it an attractive factor for its role in a hybrid backup strategy.
About Iron Mountain
Iron Mountain Incorporated (NYSE: IRM) is a leading provider of storage and information management services. The company’s real estate network of over 64 million square feet across more than 1,000 facilities in 36 countries allows it to serve customers with speed and accuracy. And its solutions for records management, data management, document management, and secure shredding help organizations to lower storage costs, comply with regulations, recover from disaster, and better use their information for business advantage. Founded in 1951, Iron Mountain stores and protects billions of information assets, including business documents, backup tapes, electronic files and medical data. Visit www.ironmountain.com for more information. | <urn:uuid:242bfd48-f35c-4b5e-b62a-2a1a2ff9059a> | CC-MAIN-2022-40 | https://www.missioncriticalmagazine.com/blogs/14-the-mission-critical-blog/post/86175-data-center-privacy-day | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00699.warc.gz | en | 0.922051 | 985 | 2.828125 | 3 |
Special characters on your Windows device is possible!
There are pros and cons to all devices, and whether you line up with Apple or PC, you've probably noticed some things that your device can't do that the other can (and vice-versa). In this case, typing special characters can take a while unless you open up Word and then copy and paste what you want into your text. You know there must be a way to avoid this lengthy process, so let's cut to the chase: learn how to create a keyboard shortcut for your Windows device.
- Download AutoHotKey from autohotkey.com
- Go to your Desktop > right-click on desktop > choose “New” > “AutoHotKey Script”
- Give it any name with “.ahk” at the end. (For example if you want to create an em or en dash, name it “em and en dash.ahk” – without the quotation marks.) The Script will now appear on your desktop.
- Right-click the new Script > choose “Edit Script”
- Notepad will open up, and you'll see some default text at the top of the document. On a new line, insert "!-::Send –" [hit Enter], then insert "+!-::Send —" [hit Enter]. (Again, without the quotation marks. The first line makes Alt + hyphen type an en dash; the second makes Shift + Alt + hyphen type an em dash.)
- Save the file, then double-click the script to run it.
- Then try it out. It should work no problem!
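For reference, the complete script body (below Notepad's default header text) looks like this in AutoHotkey v1 syntax. The Alt and Shift + Alt combinations are just a common convention you can change, and note that sending text from a hotkey requires the Send command:

```
; Alt + hyphen types an en dash
!-::Send –

; Shift + Alt + hyphen types an em dash
+!-::Send —
```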
One catch: the script only works while it is running, which means you have to start it again every time you start up your system. To avoid this, do the following:
- Right-click your script > choose “Create Shortcut”
- Type “Win-R”
- In the dialog, type “shell:startup” to open your Startup items folder.
- Drag the shortcut for your script to the Startup folder.
That's all there is to it. You can now easily add any other keyboard shortcuts to your script as well. Visit AutoHotKey's list for help creating other shortcuts.
There is both an enormous amount of convenience and speed in using digital payment systems. For the general public, online payments through credit cards or even cryptocurrency enable customers to buy products worldwide that they might otherwise not have local access to. For businesses, online payments negate any possibility of fraudulent payment through counterfeit cash. Every transaction is a legitimate one that uses real funds.
However, just because the actual transactions are legal and legitimate doesn’t mean the person making the purchase is. This is where payment fraud through online transactions is on the rise due to breaches in cyber security.
Identity Theft and Account Takeovers Lead To Payment Fraud
The way payment fraud occurs today has shifted away from traditional strategies like printing counterfeit money and using that false cash to make purchases. Today’s digital criminals look for vulnerable accounts, seize control of them, and then use the funds or payment system associated with those accounts to make purchases. In other words, the money is real, and the account is legitimate, but the person using the account has stolen that access from the rightful owner or is using the account without the owner knowing it.
This results in the victims eventually receiving receipts and other proofs of payment for purchases they never made. It is the cyber security equivalent of someone having their wallet stolen and then the thief using that cash to make purchases.
Payment fraud is on the rise in a few key areas, most notably:
- Digital Wallets
- Payment Service Provider Transactions
- Cryptocurrency Transactions
- Buy Now, Pay Later Transactions
- Loyalty Reward Points
Billions of dollars in payment fraud occur every year, and one of the reasons is the inadequacy of legacy security systems. A password-only security system is incredibly vulnerable, especially if a careless customer uses an easily deciphered password. Even when a strong password is used, criminals’ use of “keylogging” can defeat it.
Multi-factor authentication is one additional way to “harden” a system against this type of identity theft. It adds cyber security components, such as physical-digital “keys” or biometric authentication, to create stronger layers of security. Even if a password is stolen, the password alone won’t grant access to or control of an account without the additional authentication factors.
The challenge, however, lies in ensuring that cyber security measures provide protection without obstruction. If a security feature makes it too difficult to make a purchase, it does more harm than good. This is where initiatives like Nok Nok’s use of FIDO protocols play an important role.
Fractal wood burning (also known as Lichtenberg) has gained popularity in recent years and has been featured on television shows, social media, and in magazines. It is also becoming more popular as a hobby, with people attending workshops and classes to learn how to do it.
However, the process brings on some dangers you should consider before starting. In this blog post, we will discuss the process and outline the techniques involved and the dangers and precautions you need to take. We’ll also provide some tips for you to do it safely.
What Is Fractal Wood Burning?
This technique is used to create beautiful lightning-like or tree-like patterns on wood. The process is simple but highly dangerous. It involves conducting high-voltage electricity through chemically-treated wood where the electricity burns in the patterns.
What makes it so appealing is that the designs look natural. The process is also known as Lichtenberg, and it is named after a German scientist who first started studying shapes made by electric discharges in the 18th century.
Is Fractal Wood Burning Safe?
The process can be extremely dangerous if not done properly. It has even caused deaths due to electrical shocks and fires. Some incidents involve fractal wood burners left unattended and catching fire; others were electrocutions when people touched the metal probes.
According to the American Association of Woodturners, 30 people had been killed using a fractal burner by July 2021. That’s why the association has banned the use of this process at its events. Furthermore, articles promoting this technique are also banned from its publications.
Fractal Wood Burning Dangers
With high-voltage electric current and chemicals around, this process involves plenty of dangers:
1. Electrical Shock
Many people decide to make burning devices, mostly from microwave transformers. Improvised devices like these are often poorly constructed and insulated, and they frequently cause short circuits. Each year, 17 people die because of household appliances, and most of them weren’t even trying to tamper with them.
Devices you can buy on the market are not much better, either. Namely, most of them are not certified and pose a considerable safety hazard.
Even if the Lichtenberg machine works fine, there is a danger of electrocution. A transformer is connected by wires to two electrodes used to burn patterns. They conduct high-power electrical current.
If the two electrodes touch or even come too close, they will cause a massive electrical shock that will probably be fatal to the user. In addition, the process creates fire, smoke, and noise that can confuse you and bring the electrodes too close.
2. Inhalation of Harmful Chemicals
People use different chemical solutions to treat the wood before burning it with a Lichtenberg wood burning transformer. The most “harmless” one is a baking soda and water solution, but even that can produce harmful fumes when it burns. The most dangerous mix is caustic soda and water; fumes from it can damage your health whether you burn it or not. That’s why you should wear a good-quality mask with a filter.
3. Chemical Burns
We’ve mentioned a caustic soda and water mix before. This Lichtenberg wood burning solution is very dangerous at every stage. When you mix it with water, it generates very high temperatures and starts to boil, spraying the solution everywhere (often on your hands and eyes).
Caustic soda is dangerous even when in a solid state. It can cause burns on your skin because it reacts with the moisture in it.
4. Fire

When you combine high temperatures and wood, you usually get fire. So, you shouldn’t be surprised when you see one here. Namely, the Lichtenberg wood burning machine really burns the wood to create patterns. And it’s up to the user to control that burning.
An inexperienced hobbyist can easily go overboard and lose control, causing a large fire. You may think it is not a big deal, but keep in mind that 17% of fatal residential fires are caused by carelessness.
It is clear that this process should be attempted only by experienced individuals familiar with the risks involved and good knowledge of ways to protect themselves.
Techniques Used in Fractal Wood Burning
You create patterns on the wood by moving the electrified probes. There are two main techniques: the zigzag and the spiral method. Zigzag is the most common and creates patterns that look like lightning bolts.
The spiral method creates designs that look like spirals or whirlpools. The most important thing to remember is not to let the two electrified probes touch each other.
The Equipment Needed for Lichtenberg Wood Burning
The equipment includes a fractal wood burner, gloves, a respirator, chemical solution, and eye protection. Of course, you will also need a piece of wood to decorate.
1. Fractal Wood Burning Device

Fractal devices have two metal probes connected to a power source. They use high voltage and low amperage. Due to a relatively low supply of purpose-built devices on the market, people often choose to make their own fractal wood burning transformer by taking parts from household appliances.
However, this is the most dangerous option for reasons we’ve described in the “Dangers” section.
2. Chemicals Used in the Process
The process uses chemicals to create patterns in the wood. The most common chemical used is sodium hydroxide (caustic soda). Other chemicals you can use include potassium hydroxide and magnesium sulfate.
You can make the solution by mixing the chemicals with water. The purpose is to conduct electricity better.
3. What Wood Works Best for Fractal Burning?
The best wood for this process is hardwood, as it has a tight grain that burns well. People most commonly use oak, maple, walnut, and olive. You can also use softwoods like pine and cedar, but they burn more quickly and don’t produce patterns that are as intricate.
It is best to choose wood that’s not too dark because patterns will stand out better. People also use engineered wood like OSB boards and plywood.
Fractal Burning Process
You create patterns by passing a very high-voltage electrical current between two probes while they are touching a piece of wood. It is vital not to let the electrodes touch one another, because that will cause an electric shock.
The first thing to do is apply the chemical solution. Apply it only to the part of the wood you want to decorate, and only in a thin layer. When the probes touch the chemically-treated wood, the electrical current starts to travel between them.
By moving and tilting the electrodes, you can create different designs. Keep in mind that there will be fire and fumes coming out of the wood.
Tips for Safe Fractal Wood Burning
As we’ve already mentioned, this is a dangerous procedure, and you should avoid it. However, if you still haven’t given up, you should do it carefully. Here are a few tips and precautions:
Get Proper Equipment
First, make sure you have the proper equipment and tools. Safety is the biggest concern, so you shouldn’t save on safety equipment. Instead, you should buy insulating rubber gloves and shoes, safety goggles, and the best mask you can find. In addition, you should be prepared if something goes wrong, so get a good first aid kit to deal with burns from fire and chemicals.
Lichtenberg wood burning uses high voltage and low amperage. Therefore, you need a device with a transformer that can generate at least 2,000 volts. Be careful where you acquire it. Making your own out of a microwave transformer is dangerous and has caused several deaths.
The electricity can be dangerous if you’re not careful, and the electrodes can be difficult to control. In the best-case scenario, you will get unattractive patterns (you don’t want to hear about the worst-case scenario).
If you don’t have experience with wood burning with electricity, it’s best to start with a small piece of wood and practice on scrap pieces before moving on to anything larger. Create small patterns on the opposite sides of the wood because it will keep the electrodes at a safe distance.
Work in a well-ventilated area because there will be smoke and fumes. It would be best if you practiced extreme caution during the process. Please put on the complete safety equipment before starting the Lichtenberg wood burning, and take it off only after you finish it.
By this, we mean that you shouldn’t take off your insulating rubber boots and gloves before unplugging the device from the socket. This is a common mistake and has caused quite a few shocks.
This may seem like a very straightforward process to many people, and that’s what makes it so appealing. It creates beautiful patterns, which makes it irresistible. But the electricity and chemicals involved make it very dangerous.
So, we advise you to think again and avoid this technique. However, if you still insist on using it, do it properly and with all the safety precautions.
People Also Ask
Can fractal burning be done safely?
In theory, you can do the process safely, but you only need to get it wrong once. Fractal burning is a dangerous procedure because it involves high-voltage electricity and potentially hazardous chemicals.
Some people make it even more dangerous by constructing so-called Lichtenberg burners out of microwave transformers and similar items. This dramatically increases the potential for catastrophe. Even devices sold on the market can be dangerous. Unfortunately, most don’t comply with safety regulations or falsely indicate that certification bodies have approved them.
How do you do fractal burns on wood?
This is a hazardous procedure that can put your life at risk. That’s why it is essential to use caution and proper safety equipment. First, you will put on safety glasses, facemask, and rubber boots and gloves for insulation.
The next step is applying a thin layer of a chemical solution (most often, it is a baking soda and water mix). The final step is placing the electric probes on a piece of wood you want to decorate.
The high-power electricity will burn the patterns into the wood. However, the probes should never touch as it will cause electric shock. Moreover, you should expect flames and fumes, so it should be done in a ventilated room.
Is fractal wood burning legal?
Yes, it is legal. However, it is highly disapproved of by woodworking professionals around the world, as it involves electricity and potentially hazardous chemicals. By July 2021, it had caused the deaths of 30 Americans, and these are only confirmed cases. The most common cause of death was electrocution by improvised burning devices. As a result, the American Association of Woodturners has banned the use of fractal wood burning at all its events.
Serverless computing is the talk of the town these days as new server models reshape technical architecture. As the name suggests, serverless architecture is all about enabling developers to write code without having to bother about server configurations and platforms.
Serverless computing is driven by a single aim: to help developers write code without too many distractions from server configurations and platform variations. Additionally, the user pays only for the compute time consumed, so idle time is not billed. At the same time, code processing can easily be scaled during high-load periods.
Today, many companies are venturing into the world of serverless computing, and AWS, Microsoft Azure, and Google Cloud have become the top names in the industry.
Does Serverless Mean “No Servers”?
Technically speaking, no: the servers still exist, but developers no longer manage them. Serverless computing is a segment of the Cloud wherein applications run statelessly in computing containers that are triggered by events. One such AWS service, Lambda, is designed to run snippets of code that carry out single, short tasks.
More often than not, these functions are self-contained, independent blocks of code that can be deployed and executed from any platform. Without such facilities, a developer’s life would be tedious and cumbersome: in a world without serverless offerings, developers would have to set up a web server before uploading code.
With AWS Lambda, by contrast, the developer needs only to upload a piece of code, which will then be run on demand. Furthermore, developers don’t need to worry about provisioning capacity, scaling resources, maintaining security patches, or updating the underlying servers. As discussed above, the user is only charged for the compute time they consume.
Managing an Environment Where Serverless is the New Keyword
There are benefits worth noting in a serverless environment. Here’s a list which can help some users decide in favor of AWS Lambda:
Save Time and Ride the Wave with Efficiency: Companies like to save their employees’ time and bring efficiency to their daily processes. A serverless environment promotes that efficiency, as developers can produce apps without bothering about server configurations.
Function as a Service (FaaS): With the launch of the AWS Lambda, Amazon introduced a whole new level of architecture for different applications within the Cloud. With the use of AWS Lambda, developers can run any application without provisioning servers. All you need to do is upload the code; Lambda would perform the rest of the functionality.
The World of Nanoservices: The serverless environment paves the way for Nanoservices, which help to structure serverless computing applications. Through the use of nanoservices, serverless computing can assist immensely in auto-scaling and load balancing.
The Workload Decides the Scaling Experience: How does the idea of auto-scaling according to your workload sound? What if your application was enabled to scale up automatically? With AWS Lambda, all these features are no longer a myth, but actual reality. As the workload increases, a serverless environment will always allow auto-scaling for best results.
Third Parties Manage the Servers: What can be better than having a third-party provider handle server management? No matter the situation, everything is taken care of by the provider, so developers don’t have to worry about these things while developing their applications.
Upload the code, and you are set to go: Bid farewell to complex server configurations and coding issues. Everything else will be taken care of by Lambda services. Such is the working of the serverless space, which benefits small, medium, and large-sized organizations.
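To make the “upload the code, and you are set to go” idea concrete, here is a minimal sketch of a Lambda-style handler in Python. The function name, event shape, and greeting logic are illustrative assumptions for this example, not part of any particular deployment:

```python
import json

def handler(event, context):
    # AWS Lambda-style entry point: the platform invokes this function
    # with an event payload; no server setup or scaling code is needed.
    # The "name" field is a hypothetical input used only for illustration.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Locally, you can exercise the handler by calling it with a plain dict; in a real deployment, the runtime supplies the event and context objects.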
Social media algorithms are sets of rules and calculations applied based on the way a person interacts with the site. The purpose of these algorithms is to maximize the user’s engagement with the site by showing them content that is most relevant to them.
How Do They Work?
Social media algorithms consider a variety of factors, such as the user’s location, age, gender, and interests, along with the time of day and the user’s activity on the site. These algorithms are constantly evolving and becoming more sophisticated. As they become more refined, they can better customize the user experience and show people content they are more likely to find interesting.
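As a rough illustration of how such factors might be combined, here is a toy scoring function. The weights, field names, and scoring formula are arbitrary assumptions; real platform algorithms are proprietary and far more complex:

```python
from datetime import datetime, timezone

def relevance_score(post, user_interests, now=None):
    """Toy feed-ranking heuristic combining topical overlap,
    engagement, and recency. All weights are made up for illustration."""
    now = now or datetime.now(timezone.utc)
    # How many of the post's topics match the user's stated interests.
    interest_overlap = len(set(post["topics"]) & set(user_interests))
    # Newer posts score higher: recency decays as the post ages.
    age_hours = (now - post["posted_at"]).total_seconds() / 3600
    recency = 1 / (1 + age_hours)
    # Comments weighted more heavily than likes as a stronger signal.
    engagement = post["likes"] + 2 * post["comments"]
    return 3 * interest_overlap + 0.01 * engagement + 5 * recency
```

A feed built on something like this would sort candidate posts by score and show the highest-ranked ones first.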
What is Facebook’s Algorithm?
Facebook’s algorithm is designed to prioritize posts from family and friends over those from businesses. The reasoning behind this is that users are more likely to be interested in content posted by people they know than by businesses. Facebook also gives priority to posts that are considered important, such as those about upcoming events or news stories.
How Does LinkedIn’s Algorithm Work?
LinkedIn is a site used primarily by employees and businesses. Its algorithm is based on connection and engagement, and it aims to prioritize relevant content. LinkedIn has been shown to be an effective way to connect with potential employers and business partners, and it allows users to post articles, videos, and other content that can be seen by their connections. The algorithm ensures that users see the most relevant and engaging content from their connections, which helps create a more efficient and effective user experience.
How to Protect Your Online Privacy
It’s no secret that social media companies use algorithms to track and analyze our behavior. What many people don’t realize, however, is that these algorithms go beyond just looking at how we interact with the social media site itself. They also consider our web history and cookies.
This means that social media companies have a very detailed picture of our online activity, even if we’re not actively using their site. While this data can be used to provide us with better content recommendations, it can also be used for more sinister purposes, such as targeted advertising or manipulation. It’s important to be aware of how these algorithms work, and what data they’re collecting, to maintain our privacy and autonomy online.
For your business’s network to thrive, you have to really know its details and intricacies. One of the best ways to understand your network is to make a network diagram, which is a visual representation of your network’s devices and how they connect to one another.
You can manually create your own network diagram, which can help visualize complex networks. However, it can be difficult and time-consuming to build a coherent, accurate network diagram on your own. The best way to create a clear and detailed network diagram is to invest in network diagramming software. Some network diagram tools are designed to create automated network diagrams, which update as your network’s architecture changes.
One of these automated network diagram systems is SolarWinds® Network Topology Mapper (NTM), an all-encompassing network diagram creator. NTM is designed to create multiple network diagrams from a single scan, keep them updated in real time, and let you edit these diagrams to fit your network’s specifications. NTM can also enable you to export these network diagrams to multiple external formats. A 14-day free trial of NTM is available.
- What Is a Network Diagram?
- Types of Network Diagramming
- Best Tools for Network Diagrams
- Summing Up Network Diagrams + Network Diagramming
What Is a Network Diagram?
A network diagram provides a visual rendering of your network, outlining devices, connections, pathways, and data flows in your network. Network diagramming is also known as network mapping and network topology mapping.
Network diagramming is crucial for learning about your network, checking up on your network’s health, and keeping track of your network’s devices (which are called “nodes” on a network diagram). By discovering the specifics of your network’s nodes and their connections, you can better understand how your network works and what it needs to succeed.
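As a small illustration of nodes and connections, a basic map can even be generated programmatically. The sketch below emits Graphviz DOT text from a hypothetical adjacency list; the device names are made up for the example:

```python
def to_dot(adjacency, name="network"):
    """Render a node-and-connection map as Graphviz DOT text.
    `adjacency` maps each device name to the devices it links to."""
    lines = [f"graph {name} {{"]
    seen = set()
    for node, neighbors in adjacency.items():
        for peer in neighbors:
            # Undirected network: emit each link once regardless of direction.
            edge = tuple(sorted((node, peer)))
            if edge not in seen:
                seen.add(edge)
                lines.append(f'    "{edge[0]}" -- "{edge[1]}";')
    lines.append("}")
    return "\n".join(lines)
```

The resulting text can be fed to a DOT renderer to draw the diagram, which is essentially what automated mapping tools do at much larger scale after discovering the devices for you.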
A network diagram can help you observe the logical aspects of your network as well as its physical elements.
Logical Network Diagrams
Logical network diagrams display the inner behaviors of your network, such as routing protocols and subnets, and reveal how information flows through a network. This kind of network mapping is common and offers several use cases.
Physical Network Diagrams
Physical network diagrams show the concrete aspects of your network, such as cables and switches, and how they’re arranged. You can think of a physical network diagram as your network’s “floor plan,” and these kinds of network maps are most useful for network engineers.
To form a complete picture of your network, you’ll want to create both a logical network diagram and a physical network diagram. There are also hyper-specialized kinds of network diagrams, which can help you focus on specific aspects of your network. These can include network security diagrams, which focus on security-related devices, computer network diagrams focused on computers, and a network switch diagram showing the pieces of hardware that connect devices together over the network.
You can combine different types of network diagrams together, designing network diagrams to best fit your needs. Some examples could be building a physical network switch diagram or creating a network diagram focusing on the security of your network’s computers.
The ability to customize your network diagram is part of why network diagramming is so important—and also why using a network diagram creator, like SolarWinds NTM, is helpful when it comes to building clear yet accurate network diagrams.
Types of Network Diagramming
As I’ve discussed above, there are several kinds of network diagrams. Your network diagram can focus on the security of your network, specific devices, or the physical or logical aspects of your network. Now that you know what network diagrams are, you can learn about the different ways to create your own network diagrams—manually, automatically, or semi-automatically.
Manual Network Diagramming Software
When you manually draw a network diagram, you’re in charge of understanding and visualizing the connections between your nodes. Manual network diagrams offer you total customization in your map’s design, which can be useful for unique, intricate networks. However, you’ll also have to manually change your network diagram as your network grows and updates, which makes maintaining a manual network diagram difficult.
Semi-Automatic Network Diagramming Software
Semi-automatic network diagrams are, as the name implies, a mix of both manual and automated network maps. Your network’s devices will be automatically discovered while using a semi-automatic network diagram tool, but you’ll have to position and connect your network’s nodes manually. While this offers customization options, building a network diagram without templates or drag-and-drop features can be hard to execute.
Automated Network Diagramming Software
These network diagramming creators are designed to discover your network’s nodes and their connections and automatically generate a network diagram. An automated network diagram will update along with your network as it changes, making network mapping maintenance easy. Many automated network diagramming tools will give you templates to choose from, so you can still customize your network diagram to match your network’s environment.
Regardless of which network diagram creator you choose, make sure your network diagram software allows for movement within your network. Static network diagrams—maps that don’t change with a network’s changes—become outdated almost immediately. The best network diagram tools will adjust with your network, constantly accounting for changes and keeping your network diagram up to date.
Best Tools for Network Diagrams
With so many options to choose from, it can be daunting to pick a network diagram tool. Different network diagramming creators offer different benefits and possibilities, as well as unique challenges. Below, I’ve shared with you my personal favorite network diagram software, starting with my top recommendation.
1. SolarWinds NTM (Free Trial)
SolarWinds Network Topology Mapper (NTM) is an all-in-one automated network diagram creator designed to build multiple kinds of detailed network diagrams from a single network scan. NTM can enable you to schedule periodic re-scans of your network, automatically updating your network diagram so you always stay up to date. You can edit existing nodes on your network diagram using NTM’s drag-and-drop feature or by manually maneuvering your nodes.
NTM is built to give you lots of ways to share network topology diagrams through Cisco, Microsoft Visio, and other formats like PDF and PNG. NTM is a wonderful choice for creating user-friendly automated network diagrams, and you can try it free for 14 days.
Intermapper is an excellent network diagram generator, enabling you to create physical and logical network diagrams that operate in real-time. This network diagramming software is designed to support customization, so you can draw your network diagram exactly as you see it in your head. Intermapper offers a pay-by-device plan, increasing as you add devices, but there’s also a free trial available.
Creately is a detailed network diagrammer with in-system collaboration and chat functions, making it easy to illustrate and share your network diagrams. Features like smart connectors help your network diagram stay legible, while styling options and a full library of node shapes allow you to personalize your network diagrams. Creately integrates well with other platforms such as Microsoft Office, Slack, and Google Workspace; however, it doesn’t offer real-time features for your network diagrams. You can register here for their free version.
Lucidchart network diagram software is a great program for creating flowcharts, process maps, and other kinds of network diagrams. Like SolarWinds NTM, Lucidchart integrates well with other programs and backs up data automatically. However, Lucidchart has no auto-discovery features, so it can’t automatically protect and maintain your network’s devices. There are three professional options as well as a free, single-user option available for download.
SmartDraw is a web-based network diagram tool designed to run anywhere with a stable internet connection. Their goal is to be as easy to use as possible, which results in limited customization options but very neat, professional-looking network diagrams. The tool’s limited customizability, as well as its pay-by-user plan, makes it best for smaller networks. You can sign up for a free online version.
LanFlow offers simple network diagramming tools but powerful design capabilities, including libraries of 2D and 3D icons. Some other helpful features include an optional snap grid, full zoom-in capability, and figure labels that automatically move as you edit. You can also choose from predefined templates or completely create your own and add hyperlinks and websites to your network diagram. LanFlow is a diagramming tool only—there are no autodiscovery features or alarm programming options. Many Cisco professionals use LanFlow for network topology diagrams, and you can download a 30-day free trial.
Microsoft Visio is a reliable network diagram creator designed to easily communicate with other software and applications while helping keep your data secure. Microsoft Visio offers semi-customizable templates and themes but cannot enable automated network diagramming. There’s a Standard version available, plus a more expansive and expensive Pro version, and you can try it free for 30 days.
Summing Up Network Diagrams + Network Diagramming
Network diagrams act as a map of your network, helping you visualize how information is passed between your network’s devices. Creating and using a network diagram is crucial to maintaining your network and helps ensure data is safely getting from where it is to where it needs to go.
Using the help of network diagramming software, you can build your own network diagrams in the best way for you and your team members. Automated network diagram creators, like SolarWinds Network Topology Mapper, are designed to save you time, energy, and resources by automatically discovering and mapping your network’s nodes and connections. NTM automatically generates network diagrams using information gathered, giving you an accurate and clear yet visually striking network topology diagram.
*As of March 2021
Prominent Privacy Research Findings
New research identifies users from limited data points
A new study published in Scientific Reports demonstrates that only four data points unique to a particular time and place are enough to uniquely identify almost any individual. Data from over 1.5 million people were gathered from mobile devices to support these conclusions. The BBC reports the findings reveal that even if mobile numbers and other personal details were removed from data sets, the mobility information alone may be enough to trace back to a particular individual. This could pose a privacy risk if “anonymized” data sets were shared with third parties. Other recent social media research findings similarly show that such a small number of data points may identify a user. Another report found that Facebook ‘likes’ can form surprisingly accurate personal portraits. Among the researchers’ findings were that male sexuality can be identified with 88 percent accuracy, and U.S. political affiliation (whether Democrat or Republican) with 85 percent accuracy.
Research sheds light on why people don’t act according to their privacy wishes
A recently-published longitudinal study of privacy practices demonstrates that a sample of Facebook users had gradually become less likely to share their personal information publicly. This persisted until policy and interface changes by Facebook partially arrested the trend. Other findings from the same research team argues that the idea of treating privacy as a matter of understanding and control over one’s personal data may be a false comfort. Indeed, people often do not act in their stated best interest when making transactions involving their personal information. Furthermore, the researchers found that more detailed user control over how one’s personal information is used encourages people to share more sensitive information with larger audiences.
Legislative Updates & Responses
Proposed CFAA revision sparks controversy
A recently proposed revision to the U.S. Computer Fraud and Abuse Act (CFAA) that would broaden its scope has met with broad criticism from academics, advocacy groups, and the popular press, many of whom already criticize the current state of the law as overbroad. The 1986 Act criminalizes gaining unauthorized access to computer systems. A Los Angeles Times editorial argues that the act's ambiguity as to what constitutes authorization makes it susceptible to abuse. For example, the prosecution of activist Aaron Swartz equated a violation of Terms of Service agreements with unauthorized access. The EFF notes that the proposed revision to the act would quadruple the maximum jail sentences for the crimes Swartz was accused of. Meanwhile, law professor Eric Goldman argues the law has evolved from one meant to prevent malicious hacking into one that restricts general unauthorized access to intangible assets such as intellectual property. He proposes that the CFAA and similar laws be amended to retain only restrictions on defeating security measures and denial-of-service attacks.
Service providers distance themselves from CISPA as petition campaigns gain traction
The revived Cyber Intelligence Sharing and Protection Act (CISPA) has faced criticism for its broad, ambiguous language, which critics argue creates exemptions to privacy laws in the name of cybersecurity. A Wired editorial argues the law would facilitate the use of personal information collected under the act for prosecutions of crimes unrelated to cybersecurity. In response to the revised act, a campaign to stop the bill organized by advocacy groups and activists seeks petition signatures to send to the U.S. Congress. Similarly, a petition on the White House website to stop the bill has reached over 100,000 signatures, enough to mandate a response from the Obama administration. Shortly thereafter, Facebook joined Microsoft in dropping its support for the bill, with Facebook citing privacy concerns. Both companies have stated they favour a more "balanced" approach to security and privacy.
Service Provider Landscape
Google shutters another “quasi-public” service
Many users of Google Reader petitioned for it to be saved after the company announced it would be shutting down the service later this year. This is just the latest in a series of high-profile service discontinuations by the tech giant. The demise of Reader particularly frustrated those who use the service to bypass Internet censorship systems. The service has been used to evade many filtering systems because the Reader software transmits websites securely via Google's own servers (located in the U.S.), rather than directly from third-party servers which may be blocked by censors. While other RSS services operate in a similar technical manner to Google Reader, they will face a challenge in replicating Reader's success as a censorship-circumvention tool, because a large part of Reader's power arguably came from people's trust in Google's brand.
Microsoft releases its first transparency report
Earlier this month, Microsoft released its first Law Enforcement Requests Report, similar to the "transparency reports" released by Google and Twitter. The report reveals that Microsoft complied with 79 percent of U.S. government requests for subscriber data and 83 percent of requests from non-U.S. governments in 2012. The report's release follows the January publication of an open letter signed by many advocacy groups requesting Microsoft to clarify what information is stored when users communicate via Skype, and to make public any government requests for that data. Microsoft's report treats Skype as a separate category, explaining in a blog post that Skype data was collected differently because the service was only acquired by Microsoft in late 2011. Interestingly, the report claims that Skype did not provide any customer communications content in response to 4,713 total government requests for users' data, although an undisclosed amount of transactional data (such as usernames, email accounts and billing information) was provided. Furthermore, the report does not directly respond to the demand raised in the open letter about Microsoft's relationship with TOM Online, a Chinese company that distributes modified Skype software for the Chinese market that has been found to censor and surveil its users.
Facebook expands ad targeting to include offline purchases
Facebook recently announced a partnership with several data brokers to incorporate their consumer data into the Facebook ad-targeting platform. The social media platform is now working with Datalogix, Epsilon, Acxiom, and BlueKai, companies that gather information about users through online cookies as well as through offline sources such as supermarket loyalty cards. Profiles assembled by brokers typically start with a name, address, and contact information, then add demographic information, hobbies, life events, salary and more. The EFF has posted a guide on how to opt out of these data brokers to "suppress" your information from certain uses, which may or may not include sharing the information with Facebook.
Cisco Express Forwarding (CEF) is a packet-switching technique used within Cisco routers. The main purpose of CEF is to optimize the forwarding of packets and increase the packet switching speed.
Prior to CEF there were two methods for packet switching – process-switching and fast-switching.
The first method, process-switching, is the oldest and slowest: the CPU is involved in every forwarding decision.

With fast-switching, the CPU is still used to determine the destination, but only for the initial packet of a flow. This information is stored in a fast-switching cache, and subsequent packets are switched using the cache rather than the CPU.

However, the problem with fast-switching is that the cache is built on demand and the first packet is always process-switched. This means that if the router receives a high volume of traffic to destinations not yet in the cache, the CPU will still be heavily used and switching performance will suffer.

To overcome the problems with process-switching and fast-switching, CEF was created.
CEF is built around 2 main components – the Forwarding Information Base (FIB) and the Adjacency Table.
The FIB is an optimized version of the routing table (RIB).
The FIB contains destination reachability information as well as next-hop information. This information is then used by the router to make forwarding decisions. The FIB is organized as a multiway trie (Figure 1), which allows for very efficient lookups.
Figure 1 – source: http://www.ciscopress.com/articles/article.asp?p=2244117&seqNum=2
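The longest-prefix lookup that the trie structure enables can be sketched with a toy per-bit trie in Python. Real CEF uses multiway (multi-bit stride) tries and hardware assists; the prefixes and next hops below are made up for illustration.

```python
class FibTrie:
    """Toy per-bit prefix trie supporting longest-prefix-match lookups."""

    def __init__(self):
        self.root = {}

    @staticmethod
    def _bits(addr, length=32):
        """Top `length` bits of a dotted-quad IPv4 address."""
        a, b, c, d = (int(o) for o in addr.split("."))
        value = (a << 24) | (b << 16) | (c << 8) | d
        return [(value >> (31 - i)) & 1 for i in range(length)]

    def add(self, prefix, next_hop):
        addr, plen = prefix.split("/")
        node = self.root
        for bit in self._bits(addr, int(plen)):
            node = node.setdefault(bit, {})
        node["nh"] = next_hop  # mark the end of a stored prefix

    def lookup(self, addr):
        """Walk the trie, remembering the most specific match seen."""
        node, best = self.root, None
        for bit in self._bits(addr):
            if "nh" in node:
                best = node["nh"]
            if bit not in node:
                break
            node = node[bit]
        else:
            if "nh" in node:
                best = node["nh"]
        return best

fib = FibTrie()
fib.add("10.0.0.0/8", "GigabitEthernet0/0")   # hypothetical FIB entries
fib.add("10.1.0.0/16", "GigabitEthernet0/1")
print(fib.lookup("10.1.2.3"))   # → GigabitEthernet0/1 (most specific wins)
print(fib.lookup("10.9.9.9"))   # → GigabitEthernet0/0
```

Adding a more specific prefix never slows the lookup: the walk simply records the best match seen so far and keeps descending until the trie runs out of matching bits.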
The adjacency table maintains layer-2 switching information linked to a particular FIB entry, avoiding the need for an ARP request at each table lookup.
CEF provides two methods for load-balancing traffic over multiple links. They are:
- Per packet – As the name suggests, successive packets are distributed across the available links in turn. Additionally, weights can be assigned to interfaces, allowing you to send more packets over one link than another. This is useful for unequal links.
- Per destination – Also known as per session. Packets are load-balanced based on the source and destination addresses.
Polarization is the term used when traffic is sent over a single link even though multiple links are available. An example would be traffic from multiple sources being proxied and then load-balanced per destination: because the proxied flows share the same addresses, they all hash onto the same link.

To avoid this you can include additional attributes in your CEF hashing options. Here are the command options:
- mls ip cef load-sharing full – Layer 4 only (src/dest ports)
- mls ip cef load-sharing simple – Layer 3 only (src/dst ip)
- mls ip cef load-sharing full simple – Layer 3 and 4
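The effect of these hashing options can be sketched with a stand-in hash (not Cisco's actual algorithm): flows that differ only in their layer-4 ports all land on one link under a layer-3-only hash, while including the ports spreads them across both links.

```python
import hashlib

LINKS = ["Gig0/1", "Gig0/2"]

def pick_link(*fields):
    """Stand-in for the CEF load-sharing hash: map flow fields to a link."""
    digest = hashlib.sha256("|".join(map(str, fields)).encode()).digest()
    return LINKS[digest[0] % len(LINKS)]

# 100 proxied flows: same source/destination IPs, differing only in port.
flows = [("10.0.0.1", "203.0.113.5", sport, 443)
         for sport in range(20000, 20100)]

l3_only = {pick_link(src, dst) for src, dst, sp, dp in flows}
l3_l4 = {pick_link(src, dst, sp, dp) for src, dst, sp, dp in flows}

print("layer 3 only:", sorted(l3_only))   # polarized: one link carries all
print("layer 3 and 4:", sorted(l3_l4))    # both links share the load
```

This is exactly the proxy scenario above: with only source and destination IPs in the hash, every flow looks identical, so one link is saturated while the other sits idle.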
Here are some useful commands for verifying CEF:
- show ip cef – Show the CEF table
- show ip cef [address] [detail] – Show the CEF entry for a given address
- show ip cef exact-route [source] [destination] – Show the CEF path for a given source and destination address
- show cef interface – Show the CEF options enabled on each interface
In the Middle Ages, the Knights Templar invented a system to enable pilgrims to travel to the Holy Land without having to carry 'real' money with them. This system was perhaps the medieval equivalent of the bank cards we carry with us today.
Obviously we're not saying they invented magnetic strips or synthetic polymers, but rather a document that enabled the pilgrim to withdraw money in a different location from where it had been deposited. This was a major innovation at the time.
Today, the philosophy behind credit cards is very much the same. We can move from place to place without having to carry cash, even if we're only talking about going down to the local store. This document, the card, certifies that the vendor can charge the bearer, knowing there is a guarantee (from the bank, for example) up to a certain amount.
And as was the case with the Templar system, the bearer of the card needs to prove their identity. Today, such identification is a complex task (unlike the simple ring used by the Knights Templar) and this represents the main problem for bank card users: there is insufficient awareness of the importance of ID verification when using these types of cards.
There are several security systems for credit cards which users are often unaware of. The most widely used are three sets of numbers that need to be kept secret (in particular the PIN).
One hundred percent security is, as always, impossible to achieve. Regardless of the security system used, there is always a possibility of somebody cloning your card using a magnetic strip reader, or of other, even more complex dangers. Of these, threats related to the massive use of credit cards over the Internet are now the most costly to users.
Every time you enter your identification code to buy something on the Internet, this code travels across the Net and could be intercepted by malicious users. There are several techniques that enable them to do this:
Man-in-the-middle. This technique allows data thieves to intercept the communication between the user and the real website, acting as if they were a proxy, and potentially listening to all communication between the two. In order for such an attack to be successful, the victim must be redirected to the attacker’s proxy instead of the real server. There are several techniques for doing this, such as using transparent proxies, DNS cache poisoning and URL obscuring.
Exploiting cross-site scripting vulnerabilities on a website, enabling the spoofing of the bank's secure web page in such a way that users cannot detect anomalies in the address or in the security certificate that appears in the browser.
Exploiting browser vulnerabilities that allow the address that appears in the browser to be spoofed. This means the browser can be redirected to a spoofed website while the address bar shows the URL of the trusted site. This technique also allows spoofing of pop-ups opened from an authentic website.
Ransomware is a form of malicious cyberattack that uses malware to encrypt the files and data on your computer or mobile devices. As the name suggests, the cyber-criminals behind the malware then make demands for a ransom in order to release your data or access to your data.
Typically, you will be given instructions for payment and will in return be given a decryption key. The ransom amount may range from a couple of hundred to thousands of dollars, though you will most likely have to pay the cyber-fraudsters in cryptocurrency such as Bitcoin.
3 Types of Ransomware
Ransomware attacks range from mild to very serious. Here are the three types most often encountered:
Scareware

Contrary to its name, scareware may be the least scary of the three. It involves a tech-support cyber-scam via rogue security software. In this scenario, you may see a pop-up message on your screen claiming the security software has detected malware on your device and that you can only get rid of it if you pay a fee.
If you do not pay, they will continue to bombard you with the same pop-up. But annoyance is the extent of the threat. Your files and device are absolutely safe and unaffected.
Note that legitimate cyber-security software will never solicit its users in this way. Real security products don't charge you on a per-threat basis, and they would never ask for payment to remove a ransomware infection; after all, you already paid when you purchased the software. And logically speaking, if you never bought the software, how could it detect an infection on your device?
Screen Lockers

More of a real threat than scareware, screen lockers can lock you out of your PC entirely. If you restart your computer, you may see a bogus, full-screen United States Department of Justice or FBI seal with a message.
The message states that “they” have detected some sort of illegal activity on your computer, and you must pay the penalty fine. It should go without saying that neither the FBI nor any other government entity will lock you out of your device and/or demand money to compensate for illegal activity. Real suspects of crimes, whether perpetrated online or not, will always be prosecuted through legal channels.
Encrypting Ransomware

Unlike the other two, encrypting ransomware mimics real, offline ransoms. In this scenario, cybercriminals snatch your files, encrypt them, and demand you pay a ransom if you ever wish to see your data again.
What makes this variation of ransomware attack so dangerous is that once the cyber-fraudsters encrypt your files, no security tool or system restore can really bring them back. In theory, abiding by the ransom demands will return your data, but there are no guarantees. Once they have your data and your money, you don't have much leverage over them anymore. You can only hope these criminals are true to their word. It's not a good bet.
How does Ransomware Work?
One of the most common methods of delivering ransomware is a phishing scam: the ransomware arrives as an email attachment masquerading as a file from a trusted source. Once you download the attachment, the ransomware takes over your computer. Not all ransomware needs a lure, however; NotPetya, for example, exploited security loopholes to infect systems directly. Some ransomware attachments also come with social engineering tools to trick you into granting them administrative access.
There are several actions ransomware might perform, but the most common is to encrypt some or all of your files. You will get a message on your screen that your files have been encrypted and that you can only decrypt them once you send an untraceable payment via cryptocurrency, usually Bitcoin. The only thing that can decrypt the data is a mathematical key, and that key is in the possession of the cyber-criminal.
Another variation of ransomware attack is known as doxware or leakware. In this form of attack, the hacker will threaten to publically release sensitive data found on your hard drive unless you pay the ransom money. This is less common purely because finding sensitive data is often difficult and labor-intensive for cybercriminals.
Who Can Be a Ransomware Target?
Ransomware attackers choose the victims (individuals or companies) they target in several ways. The combination of a tempting victim and loose security often drives a criminal's decision.
For example, cyber-hackers may target universities or colleges because these institutions tend to run lighter security. Additionally, they have many disparate users relying on a lot of file sharing, making it easier for attackers to penetrate the defenses.
In other instances, some large corporations or organizations are tempting targets as they might be more likely to pay a ransom. For example, medical facilities and government agencies often require immediate access to their systems and files and cannot operate without their data.
Law firms and other agencies dealing with sensitive data may be more willing to pay to keep news of a ransomware attack on their network and database quiet. These organizations are also more prone to leakware attacks due to the sensitivity of the information they carry.
However, even if you don’t fit any of the above categories, do not delude yourself. Some ransomware attacks are automatic and spread randomly without discrimination.
In Case You Are under Ransomware Attack
If you ever fall prey to a ransomware attack, the number one rule to remember is “Never Pay the Ransom.” This is also endorsed advice by the FBI. If you pay, all it is going to do is encourage these cyber-fraudsters to launch further attacks against you or others.
Does that mean you are stuck? Yes and no. You may still be able to decrypt or retrieve some of your infected files using free decryptors, such as those published by Kaspersky. However, many ransomware attacks use sophisticated and advanced encryption algorithms that fall outside the reach of available decryptors. Even worse, using the wrong decryption script may further encrypt your files. Pay close attention to the ransomware message and seek an IT/security expert's advice on what your next step or course of action should be.
An alternate method is to download security software known for remediation and scan your computer to remove the ransomware threat. This is only a partial solution: it will clean your system of all infections, but you may not be able to recover your locked or lost files.
A screen-locking ransomware attack often leaves little choice other than full system restoration. If this happens, you can always try scanning your computer using a USB drive or a bootable CD.
To thwart a ransomware attack in action, stay extremely vigilant. If your computer is slowing down for no apparent reason, disconnect it from the Internet and shut it down. Once you re-boot your computer (still offline), the malware will not be able to send commands to or receive them from its control server. Without a channel to extract payment or exchange encryption keys, the ransomware infection may stay idle. At this point, download and install security software and run a full computer scan to quarantine the threat.
Specific Steps for Ransomware Removal
If your computer comes under a ransomware attack, you must regain access to and control of your device. Here are some simple steps to follow, depending on whether you use Windows, macOS, or a mobile device.
Windows 10 Users
- Reboot your PC in safe mode
- Install anti-malware software
- Scan your system to detect the ransomware file
- Restore your system to a previous state
Mac Users

In the past, the rumor was that Macs were "unhackable" due to their architecture. Sadly, this is not the case. Cyber-criminals dropped the first ransomware bomb on macOS in 2016, known as KeRanger. This ransomware infected an app known as Transmission; once launched, it copied malicious files that kept running covertly in the background. After three days of this stealth operation, it encrypted the user's files.
Apple did come up with a solution to this issue known as XProtect. The lesson learned was that Mac ransomware is not theoretical anymore. However, Mac users are reliant on Apple to come up with solutions if problems occur.
Mobile Users

It was not until the popularity of CryptoLocker in 2014 that ransomware became a common threat for mobile devices. Apps are the most common delivery method for malware on phones. A typical mobile ransomware attack displays a message that your smartphone has been locked due to illegal activity and that you must pay to unlock the device. If you fall prey to such malware, boot your smartphone in safe mode and delete the malicious app to regain control.
How to Prevent Ransomware?
You can take several defensive measures that not only help prevent ransomware attacks but other social engineering attacks as well.
- Keep your operating system and its security software up to date and patched. This simple practice will resolve many vulnerabilities and close off common exploits.
- Do not install any software or grant it any administrative rights unless you are sure about what it does with those privileges.
- Install an antivirus program to detect malicious programs (and apps) in real-time. A good antivirus may also offer you a whitelist feature (where you can allow rights to certain trusted software for automatic execution) to prevent unauthorized software from auto-execution in the first place.
- Last but not least, back up your files automatically and frequently (preferably in a cloud). It won’t prevent a ransomware attack but it can control the damage and prevent permanent loss of your files.
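The last point above can be sketched as a minimal incremental backup: hash each file and copy only the ones that changed since the previous run. The paths are placeholders; a real setup would push copies off-site or to cloud storage and be scheduled, for example via cron or Task Scheduler.

```python
import hashlib
import shutil
from pathlib import Path

def _sha256(path):
    """Hash a file's contents so unchanged files can be skipped."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def backup(src_dir, dest_dir):
    """Copy files from src_dir to dest_dir, skipping unchanged ones."""
    src, dest = Path(src_dir), Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        target = dest / f.relative_to(src)
        if target.exists() and _sha256(target) == _sha256(f):
            continue  # unchanged since the last run
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(f, target)
        copied.append(target)
    return copied

# Example (paths are placeholders):
# backup("/home/alice/documents", "/mnt/offsite/documents")
```

Keeping the backup destination on separate media or in the cloud matters: ransomware that encrypts a mounted backup drive defeats the purpose.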
In any event, if you are not a tech-savvy individual or company, seek advice from IT and cyber-security experts in your area. The best experts stay up to date with currently active ransomware as well as the security software you should be using.
In its simplest terms, transcoding refers to the process of converting already compressed files, usually audio or video, into a different file format.
One of the most common uses for transcoding is streaming video content. In this context, files are converted from one format to another with the aim of maximizing the number of compatible devices on which a media file can be played. Converting the file to a different format also ensures that consumers can stream files at the highest possible quality and with less buffering time.
This article will explain how transcoding works, why it matters for live streaming, and how it enhances the experience of streamers and viewers alike.
Why is Transcoding Important To Video Delivery?
For content that relies heavily on video delivery, such as live streaming and video on demand services, transcoding is an integral part of the process. It’s especially important when using an adaptive bitrate workflow, the most common and effective way of reaching a large number of end-users without compromising on video quality.
Video transcoding also encompasses transsizing, the process of scaling an image when the output resolution differs from that of the source media, and transrating, where files are re-coded at a lower bitrate without changing the format. Given the range of streaming devices and capabilities these days, both of these are key to successful video delivery.
How Does Transcoding Work?
The transcoding process has two stages. First, the original file is decoded into an uncompressed format. This intermediate is then re-encoded into a target format that is compatible with a host of different devices.
For example, a video file encoded and compressed with one codec can be re-encoded into an MP4 file, making it possible to reach a wide audience.
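The two stages can be illustrated with general-purpose compressors standing in for media codecs: decode a zlib "source format" back to raw bytes, then re-encode the result as bz2, the "target format". With lossless codecs the content survives bit-for-bit; real video work would use a dedicated tool such as FFmpeg.

```python
import bz2
import zlib

def transcode(source_blob):
    """Toy transcoder: zlib 'source format' in, bz2 'target format' out."""
    raw = zlib.decompress(source_blob)   # stage 1: decode to uncompressed
    return bz2.compress(raw)             # stage 2: re-encode to the target

original = b"frame data " * 1000         # stand-in for raw media
source = zlib.compress(original)         # the file as first delivered
target = transcode(source)               # the converted file

# Lossless-to-lossless: the content survives the round trip unchanged.
assert bz2.decompress(target) == original
print(len(source), "->", len(target), "bytes")
```

The intermediate `raw` buffer is why transcoding is memory- and CPU-hungry: the whole stream has to pass through its uncompressed form before the target encoder sees it.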
The process of transcoding is often said to be ‘computationally intensive.’ While it can be carried out using built-in software, transcoding often benefits from dedicated, specialized hardware as it can take a desktop machine or laptop a long time to carry out the required functions.
If you are looking to transcode high-quality, large files then computer features such as a high-end CPU or large amounts of RAM would be extremely beneficial and are likely to decrease the time required to transcode.
Encoding VS Transcoding
Transcoding differs from encoding, despite the terms often being used interchangeably.
While transcoding is the process of de-encoding a file into an uncompressed format and converting it into another format, encoding is the compression of raw data to create a smaller, more transferable file.
Transcoding should also not be mistaken for transmuxing, which changes the delivery format of audio and video files without changing the actual content.
Types of Transcoding in Video Delivery
Lossy to Lossy
Lossy-to-lossy transcoding produces the lowest quality of video or audio file and is the least desirable type of transcoding. Lossy compression is when expendable data is removed from the file for the purpose of making it smaller. Essentially, quality is sacrificed in favor of having a smaller file. Lossy-to-lossy transcoding takes a file with already decreased quality and reduces it even further. While the resulting quality is poor, these files require little storage space making consumption on a portable device, such as tablets and mobile phones, more manageable.
Lossless to Lossless
Lossless compression is when a file does not lose any data as it’s compressed. As a file is transcoded into a different format, its quality remains the same. However, the resulting files require large storage space and are often unsuitable for smaller, portable devices. Lossless-to-lossless transcoding often requires good quality hardware to carry out the necessary functions.
Lossless to Lossy
While files transcoded using lossless-to-lossy are not as good quality as lossless-to-lossless, they are better than lossy-to-lossy. They are also small enough to be stored and consumed on portable devices which makes it a popular form of transcoding. It should be noted that once data is lost through encoding and compressing, it’s discarded and cannot be retrieved. As such, there is no such thing as lossy-to-lossless transcoding.
Benefits of Transcoding in Video Delivery (Streamer’s Side)
Supports Multiple Formats
Transcoding allows your stream to be received by a large audience of end-users who may all be using different kinds of devices and require different video formats. Encoding video into different resolutions and bitrates for a diverse audience can be a complex task. However, transcoding is able to resolve this issue, delivering suitable video files to your viewership at relatively low time and cost for the streamer.
Optimized Video Quality
Achieving optimal video quality is a priority for streamers and can be the difference between losing or gaining regular viewers. Transcoding helps to optimize quality by delivering the best possible file for each user.
Significant Increase in Content Reach
While internet access has improved dramatically in the last few years, speeds differ widely between countries and regions. A viewer in Hong Kong can access content far more easily than someone in parts of South America, where internet speeds are low. Transcoding aids content delivery and prevents large blocks of your audience from being unable to view content due to poor download capabilities.
Benefits of Transcoding in Video Delivery (Viewer’s Side)
Transcoding can negate many of the issues associated with streaming that are experienced by viewers.
Reduces Buffering

If viewers have to make do with poor bandwidth, transcoding can resolve the issue of buffering by delivering a file size that the viewer's internet connection can handle.
Supports More Formats
Given the huge array of different devices and hardware now available to users, incompatibility can be a real barrier to successful streaming. Transcoding increases the chances of files being compatible with a given device by delivering an appropriately compressed file to each user.
Lessens Playback Failure
Streaming on a device with poor resolution and low streaming capabilities often leads to playback failure. Transcoding helps to sidestep this issue by delivering files that each device can manage, heightening user experience.
Better Resolution / Video Quality
Conversely, a frustrating issue faced by some viewers is low-quality video delivery even when they have good connectivity and streaming capabilities. This happens because streamers want to reach as large an audience as possible and cater to people with a bad connection. Transcoding resolves this issue, letting viewers watch video in as high a quality as their own bandwidth and device will allow.
Streaming Solutions That Offer Transcoding
Despite its uses, not all streaming services offer and support transcoding. Rather, many require files to match their own output format.
CDNetworks’ own Media Delivery service provides ultra-low latency streaming for both live broadcasting and video on demand.
This solution provides a transcoding service, allowing it to pull content directly from CDNetworks Cloud Storage or an origin server and deliver high-quality video to a range of end users, no matter what device they're streaming on.
The UK government has drafted three proposals to help both businesses and consumers stay safe as they use Internet of Things (IoT) devices. While it isn't exactly law yet, the government did say that it would look to make it so sometime in the near future.
The proposals, drafted by the Department for Culture, Media and Sport (DCMS), together with the National Cyber Security Centre (NCSC), came after consultations with security experts, retailers and product builders.
Here’s what the government suggests:
- Passwords for IoT devices need to be unique and must not be resettable to a factory setting
- IoT builders need to have a point of contact where consumers can report a flaw and where manufacturers can react quickly
- Consumer IoT makers must be clear about the minimum length of time for which the device would be supported with updates and patches
"Our new law will hold firms manufacturing and selling internet-connected devices to account and stop hackers threatening people's privacy and safety," said Matt Warman, minister for digital and broadband at DCMS.
A DCMS spokesperson told ZDNet that they'd continue cooperating with retailers and manufacturers as they look to transform these proposals into law.
Cybersecurity is one of the main concerns with consumer IoT devices. Many ship with default passwords that are identical across all of a manufacturer's devices. As many consumers can't be bothered to change the factory settings, they end up vulnerable to numerous threats.
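The first proposal, unique non-default passwords, can be sketched with Python's secrets module: mint a distinct, unguessable credential for each device at provisioning time instead of shipping one shared factory default. The default-password list below is illustrative, not exhaustive.

```python
import secrets

# Illustrative list of weak factory defaults to reject.
COMMON_DEFAULTS = {"admin", "password", "12345", "root"}

def provision_password(n_chars=16):
    """Mint a unique, unguessable password for one device."""
    while True:
        candidate = secrets.token_urlsafe(n_chars)[:n_chars]
        if candidate.lower() not in COMMON_DEFAULTS:
            return candidate

# Every manufactured device gets its own credential.
passwords = [provision_password() for _ in range(1000)]
print("all unique:", len(set(passwords)) == len(passwords))  # → all unique: True
```

Because each password is drawn from a cryptographically secure source, compromising one device reveals nothing about any other, which is the point of the proposal.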
FIDO (Fast Identity Online) Authentication is a set of open technical specifications that define user authentication mechanisms that reduce the reliance on passwords.
FIDO protocols are designed from the ground up to protect user privacy. The protocols do not disclose sensitive user data that can be used by different online services to collaborate and track a user across the services. Other sensitive data like biometric fingerprints and PINs never leaves the user’s device to ensure it cannot be intercepted or compromised by an attacker.
To authenticate a user, an application – often referred to as the relying party – uses FIDO-specified client-side APIs to interact with a user's registered authenticator. For web applications, client-side APIs include WebAuthn implemented by the web browser, which in turn calls on FIDO CTAP to access the authenticator.
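The relying-party flow starts with the server issuing a random, single-use challenge that the authenticator later signs via `navigator.credentials.create()` or `.get()`. A minimal server-side sketch of challenge generation (the function name is illustrative, not from any FIDO library):

```python
import base64
import secrets

def new_registration_challenge() -> str:
    """Generate a random WebAuthn challenge for the relying party.

    The server stores the raw challenge against the user's session and
    later checks that the authenticator's signed response echoes it,
    which prevents replay of an old authentication.
    """
    challenge = secrets.token_bytes(32)  # unguessable, single-use
    # WebAuthn transports binary values as base64url without padding.
    return base64.urlsafe_b64encode(challenge).rstrip(b"=").decode("ascii")
```

Note the privacy property the article describes: nothing user-identifying crosses the wire here; biometric checks and PIN entry happen entirely on the authenticator.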
Young Smokers Increase Risk for Multiple Sclerosis
People who start smoking before age 17 may increase their risk for developing multiple sclerosis, according to a study released February 20 that will be presented at the American Academy of Neurology's 61st Annual Meeting in Seattle, April 25 to May 2, 2009.
The study involved 87 people with multiple sclerosis (MS) who were among more than 30,000 people in a larger study. The people with MS were divided into three groups: non-smokers, early smokers (those who began before age 17), and late smokers (those who started smoking at 17 or older), and were matched by age, gender, and race to 435 people without MS.
Early smokers were 2.7 times more likely to develop MS than nonsmokers. Late smokers did not have an increased risk for the disease. More than 32 percent of the MS patients were early smokers, compared to 19 percent of the people without MS.
"Studies show that environmental factors play a prominent role in multiple sclerosis," said study author Joseph Finkelstein, MD, Ph.D, of Johns Hopkins University School of Medicine, in Baltimore, MD, which conducted the study in collaboration with Veterans Affairs MS Center for Excellence. "Early smoking is an environmental factor that can be avoided."
The sad reality with technology is that with new functionalities come fresh new threats. New breeds of cyber threats and ransomware have emerged in the past couple of years in hopes of inflicting harmful actions on your network's security and protection. While ransomware is still a focal point in cybersecurity, a 2018 report released by Check Point revealed that cybercriminals are moving in a different direction.
The report shows that although hackers are still looking for easy and quick targets, remote access trojan (RAT) malware has entered the Global Threat Index's Top 10 list for the first time. Although this type of malware has been around for almost two decades now, a lot of modern businesses are still unprepared to face such a powerful security threat. In this guide, we define what a RAT is and analyze the business risks it can pose.
What Is a Remote Access Trojan (RAT)?
A remote access trojan, or RAT, is a type of malware that gives attackers the ability to control a computer or device via an established remote connection. One of the goals of this malware is to steal information and spy on your system or network. Typically, a RAT enters your system disguised as legitimate software. Once it has entered your network, it gives attackers unwanted access by creating a backdoor in your system.
One of the reasons why remote access trojan is harmful to your business is because it is quite difficult to detect. Once you open a file that contains remote access trojan, it can already invite cybercriminals and attackers to enter your system. From there, they can start stealing information and disrupting your network’s security and protection. The attackers can also use your identity and your internet address to attack and infect other systems and networks.
Different Types of Remote Access Trojans (RATs)
Remote access trojans come in different types and serve different uses. Below are some of the most commonly known RAT programs:
Back Orifice – This remote access trojan originated in the US and has been around for almost 20 years now. It is one of the oldest remote access trojans and has been refined by other cybercriminals to produce new variants. One of its earliest targets was Windows 98, and it can hide within the operating system.
ZeroAccess – This type of RAT malware is used to steal financial and banking information. ZeroAccess can be difficult to identify due to its advanced rootkit. According to Recorded Future, it also has the ability to use domain generation algorithms (DGA) and peer-to-peer connectivity for C2 communications.
Mirage – Mirage is a type of remote access trojan that is widely used by a group of Chinese hackers known as APT15. This remote access trojan was first used in 2012, and although APT15 went silent after a spying campaign was launched in 2015, a new variant of Mirage was detected in 2018. It was found that this new breed of Mirage, known as MirageFox, was used to secretly investigate UK government contractors.
Beast – Just like Back Orifice, this remote access trojan uses the same technology that gets the malware installed secretively on a computer or an operating system. Beast, which was created in 2002, typically attacks Microsoft systems from Windows 95 up to Windows 10.
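To make the DGA mention above concrete: a domain generation algorithm derives a fresh list of rendezvous domains from a shared seed (often the current date), so defenders cannot simply block one hard-coded C2 address. Below is a deliberately simplified toy for illustration; it bears no resemblance to ZeroAccess's actual algorithm:

```python
import hashlib

def toy_dga(seed: str, day: str, count: int = 5) -> list[str]:
    """Derive deterministic pseudo-random .com domains from seed + date.

    Malware and its operator run the same computation, so both sides
    agree on the day's candidate domains without any prior contact.
    """
    domains = []
    for i in range(count):
        digest = hashlib.sha256(f"{seed}-{day}-{i}".encode()).hexdigest()
        domains.append(digest[:12] + ".com")
    return domains
```

Defenders exploit the same determinism: given a recovered seed, they can pre-compute and sinkhole tomorrow's domains.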
How to Detect and Address Remote Access Trojan (RAT) Malware
The good news is that you can defend your system and network against remote access trojans. However, addressing such malware can sometimes be difficult, due to the technical complexity cybercriminals and attackers employ in creating these threats. Your security approach should be based on how much your enterprise knows about RATs.
According to a report released by the International Association of Privacy Professionals, 90% of security issues are due to human error. That’s why businesses need to conduct security awareness training to ensure that all employees are knowledgeable enough about the consequences this malware can give to their respective organizations. Your company should also come up with a defense strategy against remote access trojan. This will ensure that your company is ready to prevent and mitigate the risk of a potential security breach should you ever fall vulnerable to such remote access malware attacks.
Remote access trojan infections usually begin when a remote network connection is made. Once you limit your employees' access to your network, you'll be less likely to be infected by RAT malware. You can also use strong passwords and two-factor authentication to ensure that all access to your network is authorized and authenticated.
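The two-factor authentication mentioned above is often of the time-based one-time-password variety (TOTP, RFC 6238), which can be implemented with nothing but the standard library. A minimal sketch of the core derivation:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, step=30, digits=6):
    """Compute an RFC 6238 time-based one-time password (SHA-1 variant)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

This reproduces the RFC 6238 test vectors, e.g. with the standard test secret (ASCII `12345678901234567890`, base32-encoded) at time 59 the 6-digit code is `287082`.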
Fighting remote access trojans is something that most businesses should always strive to master. Do not let RATs cause your business a major security breach and data theft. As much as possible, take all the necessary steps that can protect you and your network from this harmful malware.
Phishing email is a type of online scam in which a cybercriminal sends an email that appears to come from a legitimate company and asks the user to provide sensitive data. Phishing emails also deliver malware via malicious attachments or links. According to Phishing and Email Fraud Statistics 2019, 76% of businesses reported being a victim of a phishing email and 30% of phishing emails were opened by targeted users. Below are 10 ways to detect phishing emails, to protect users and companies from falling for them.
- A mismatched URL
Sometimes the URL in a phishing email looks valid and unsuspicious. However, users should hover over the link to see the actual hyperlinked address (at least in Outlook). The message is probably fraudulent or malicious if the hyperlinked address differs from the address that is displayed.
- URLs with a misleading domain name
Scammers trick users by sending emails that look like they come from companies such as Microsoft or Apple. However, users should not fall for this type of scam and have to stay vigilant. Most users do not know how the DNS naming structure for domains works. For example, the domain name info.cyberhome.com would be a child domain of cyberhome.com, because cyberhome.com appears at the end of the full domain name. Conversely, cyberhome.com.maliciousdomain.com would not have originated from cyberhome.com, because the reference to cyberhome.com is on the left side of the domain name. A spoofed Microsoft domain might look like microsoft.maliciousdomainname.com.
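The parent-domain rule described above is easy to check mechanically: a hostname belongs to a domain only if that domain is the whole hostname or appears at its right-hand end, preceded by a dot. A small sketch:

```python
def belongs_to(hostname: str, legit_domain: str) -> bool:
    """True only if hostname is the domain itself or a true subdomain of it.

    A plain substring check is not enough: the legitimate name must be
    the rightmost label sequence, or an attacker can embed it elsewhere.
    """
    host = hostname.lower().rstrip(".")
    legit = legit_domain.lower()
    return host == legit or host.endswith("." + legit)
```

Applied to the article's examples, `info.cyberhome.com` passes for `cyberhome.com`, while `cyberhome.com.maliciousdomain.com` fails.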
- Poor spelling and grammar errors
Legitimate companies have trained staff, and messages large or small are double-checked before they are sent out. Thus, if a message has poor spelling and grammar errors, it's always better to cross-check first; if the company did not send out the message, users should immediately report it.
- Asking for sensitive information
Sending sensitive information through email is never safe. If your bank emails asking you to send your account number, username or password, do not send it. A reputable company will never send an email asking for sensitive information, including passwords and credit card numbers.
- Too good to be true
When a user is looking for an apartment to rent, an offer may exceed all expectations and the rent may be too good to be true. The "landlord" then emails asking for a deposit up front, promising to send the key by mail because they are away from the city. This is fraud, and many innocent people have fallen for this type of scam.
- Surprise lottery!
You get a message saying you have won a lottery, gift cards, or the latest gadget, but you never bought a lottery ticket. This is just another way of getting people to open the message and click on the link.
- Asking to send money to cover medical expenses
People tend to fall easily for emails from a "friend" seeking financial help with medical expenses. Scammers also impersonate the government, emailing victims to demand taxes and fees. Before making any transaction, confirm the request through the organization's legitimate website or phone line.
- Unrealistic threats
Suppose a user leases a car from Wells Fargo but has no savings or checking accounts with the bank, yet receives an email saying their account has been compromised. The message says the user must send a photo ID within two days, or all assets and the bank account will be seized. When we get this type of email, we tend to focus on the word "seized" without analyzing the whole email.
- From a government agency
In the United States, government agencies don't contact people this way, and they do not engage in email-based extortion. Scammers send emails pretending to be the IRS or FBI and ask for the victim's personal details. The IRS sends official letters to your home address; it does not email or call you before you receive the official letter.
- Something just doesn’t look right – Suspicious message
If you see any email that looks suspicious, report it immediately on the US-CERT website, and make sure not to click on anything in the message. In Las Vegas, casino security teams are taught to look for JDLR: "just doesn't look right."
Phishing email remains the most common attack vector for spreading ransomware and causing disruption in a network. Email scanning, user awareness and education programs, DNS lookups, and endpoint-based antivirus are some preventive measures against phishing emails. If a company or user detects a phishing email, they should report it immediately and take preventive measures.
In the worst-case scenario, if the company suffers a data breach or ransomware attack, disconnect the network and immediately call LIFARS, LLC for data recovery and help dealing with the ransomware attack.
What Is a Security Key for WiFi?
Previously, in our article on “What Is a WiFi Adapter & How Does It Work?”, we vowed to later resume our rant on wireless security protocols. Since we at the IAG strive to keep our word, today’s outing delves into a security key for WiFi. You’ll learn how to keep hackers from encroaching on your home WLAN and safeguard precious WiFi bandwidth for your home devices.
Security and Cryptography
For centuries, people have secured their possessions with locks and keys. Today, in addition to mechanical devices unfastened with metal blades and bows, we also have electronic devices opened by keycards, RFID cards and biometric recognition.
Back in the "daze" of yore, information was secured by encryption—the transformation of comprehensible data into apparent babble. In the 21st century, cryptography methods using sophisticated algorithms now safeguard sensitive information.
“WiFi” (i.e., IEEE 802.11 standards) uses radio waves. If wireless-capable, your computing device (PC, laptop, tablet, phone) both transmits and receives data across either (or both) 2.4 GHz and 5 GHz frequencies. IEEE 802.11 is an “open” broadcast technology; anyone with compatible gear can access these frequencies.
Hence, when you send or receive info between a device (“client”) and a wireless router (“access point”) using WiFi radio frequencies, your personal data and computer files can surreptitiously be intercepted by rogue devices. So, if using WiFi in hotspots, sensible users employ VPN encryption for their data, protecting it from exploitation by black hats.
When using home WiFi, cybersecurity is just as important. Hotspots are meant for public use; your home WiFi is strictly for your family's devices. While VPNs are typically deployed when using public WiFi, most folks use wireless network keys between a device and wireless router to prevent unauthorized access on their home WLANs.
For sure, the following admonition is a case of “the blinding obvious,” but if your home WLAN is “unsecured,” SECURE IT! An unsecured WLAN exposes your family’s sensitive and private data (banking, SSNs, etc.) to hackers.
Moreover, you also potentially allow unauthorized users to carve out part of your WiFi data pipe (which causes network interference). You probably wouldn’t pay for your neighbors’ utilities such as electric, gas or water, so why would you give them free WiFi?
The WiFi Data Pipe
The point made above merits elaboration. Twenty years ago, when 802.11b and 2.4 GHz devices ruled the wireless roost, WiFi was not the ubiquitous presence that it is today.
Now, with the advent of IoT and the proliferation of home wireless devices, WiFi congestion, interference and intermittent connectivity are endemic to 2.4 GHz frequencies. It’s one reason why the Wi-Fi Alliance developed 802.11n and 802.11ac standards; to take advantage of comparatively unencumbered 5 GHz airwaves.
Even with 5 GHz, WiFi data pipes are still congested—and getting more crowded. Engineer and Ignition Design Labs CEO Terry Ngo explains, “Homes inhabited by more than 80% of the U.S. and 50% of the worldwide population are in urban areas where WiFi connections are steadily getting worse.”
Ngo adds, “Not only is every house on your block likely to have a router but quite a few will have more than one, and many communities are also served by public WiFi networks. Second, increased demand for speed demands wider virtual lanes on the WiFi highway, which means there will be fewer of them. And finally, cellular operators are dumping traffic into the WiFi spectrum.”
When two separate WiFi transmissions collide, “all devices go quiet and then try to ‘talk’ again later,” Ngo stated, using language tech laity can readily grasp. “(This) time delay (is) known as ‘backoff.’ With more collisions, the backoff increases and WiFi becomes slower and less reliable.”
Ngo concludes, “WiFi has become a victim of its own success.” But you can provide some measure of protection to your home’s data pipe by establishing network security keys for your WLAN. While you can’t segregate WiFi frequencies for your own exclusive use, you can at least make your neighbors buy their own Internet access and wireless routers.
WiFi Security Keys
As long as there’s been WiFi, the Wi-Fi Alliance—the shepherd of IEEE standards—has secured it with cryptographic algorithms. Below are the various encryption standards the Wi-Fi Alliance has released to protect WiFi data transmissions.
Wired Equivalent Privacy (WEP)
The first encryption protocol, used for 802.11a/b/g, was Wired Equivalent Privacy (WEP). A data encryption protocol using a combination of system- and user-generated key values, the original incarnation of WEP supported key values of 64 bits in total length. Later versions added longer key combinations to bolster security; these variations had 128, 152 and 256 total bit values.
Note that these keys were not transmitted across a WiFi network; values were stored in the device’s OS registry or wireless network adapter.
Two types of WEP are (were) available: shared key authentication (SKA) and open system authentication (OSA). While both are (were) greatly flawed, the former is (was) the less secure. WEP's fatal defect was the addition of system-generated 24 bits to the key-value, called an "initialization vector" (IV). It's easily deciphered using readily available WiFi network analysis tools.
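How weak is a 24-bit IV in practice? A back-of-envelope birthday-bound estimate shows that a busy network repeats an IV within a few thousand frames, after which keystream reuse lets an eavesdropper begin recovering plaintext:

```python
import math

IV_BITS = 24
IV_SPACE = 2 ** IV_BITS  # 16,777,216 possible initialization vectors

# Frames after which the probability of at least one repeated IV
# reaches roughly 50% (birthday bound: sqrt(2 * N * ln 2)).
frames_for_collision = math.ceil(math.sqrt(2 * IV_SPACE * math.log(2)))
# About 4,823 frames - seconds of traffic on a busy WLAN.
```

With thousands of frames per second on an active network, IV collisions are effectively guaranteed almost immediately, which is one reason the FBI demonstration mentioned below took minutes, not hours.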
In 2004, the IEEE “deprecated” WEP as a wireless security protocol following the deployment of its successor, WiFi Protected Access (WPA). Demonstrating the porosity of WEP, in 2005 the FBI spent all of 3 minutes to decipher a 128-bit WEP key at an Information Systems Security Association (ISSA) confab in Los Angeles.
In short, don’t use it.
WiFi Protected Access (WPA)
WPA was introduced by the Wi-Fi Alliance in 2003 as a stopgap to the security issues WEP posed. WPA2, a more robust version of WPA, was released in 2004. WPA3 followed in January 2018.
Officially known as IEEE 802.11i-2004, WPA2 differs from WPA by using the Advanced Encryption Standard (AES). This block cipher algorithm, a subset of the Rijndael cipher (created by Belgian cryptographers), was adopted in 2001 by the U.S. National Institute of Standards and Technology (NIST) as a federal government specification for encrypting electronic data.
WPA2 is a Robust Security Network (RSN) with two handshakes: four-way and group key. Other security enhancements include a Message Integrity Check to prevent attackers from substituting data packets and Counter Mode Cipher Block Chaining Message Authentication Code Protocol (CCMP). AES key lengths can be 128, 192 or 256 bits.
Developed for home WiFi, WPA2-PSK (pre-shared key) allows for the encryption of data without the use of an enterprise-level authentication server. Instead of providing a router with an encryption key, the user enters a “plain-language” password between 8 and 63 characters. (More on this below).
The password, along with the network SSID, is used to derive unique encryption keys—which constantly change—for each wireless user. The original per-packet key-mixing scheme, Temporal Key Integrity Protocol (TKIP), was deprecated in 2012, yet TKIP and WPA2-PSK are still widely used.
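The passphrase-plus-SSID derivation described above is standardized: the WPA2-PSK pairwise master key (PMK) is PBKDF2-HMAC-SHA1 over the passphrase, salted with the SSID, at 4,096 iterations with a 32-byte output. With the Python standard library:

```python
import hashlib

def wpa2_pmk(passphrase: str, ssid: str) -> bytes:
    """Derive the WPA2-PSK pairwise master key from passphrase + SSID.

    The SSID acts as a salt, so the same passphrase on two differently
    named networks yields two different master keys.
    """
    return hashlib.pbkdf2_hmac(
        "sha1", passphrase.encode(), ssid.encode(), iterations=4096, dklen=32
    )
```

The per-session keys are then negotiated from this PMK during the four-way handshake, which is why a weak passphrase undermines everything downstream. The IEEE 802.11i test vector (passphrase "password", SSID "IEEE") reproduces the well-known PMK beginning `f42c6fc5…`.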
WPA3 improves on its predecessors by introducing Opportunistic Wireless Encryption (OWE). The Wi-Fi Alliance claims that the 802.11-2016 revised standard alleviates security flaws presented by weak passcodes.
Yet, WPA3 has already been compromised. In 2019, researchers found that its “Simultaneous Authentication of Equals” (SAE) handshake contains several vulnerabilities open to hacker exploitation. Manufacturers have already begun releasing patches to address these security flaws.
Hashcat

Depending upon who uses it, Hashcat is either a password recovery tool or a means to hack into a wireless router. The PMKID technique it implements was discovered by a researcher seeking to crack the latest WPA3 security standard; it lets Hashcat capture and solve WPA and WPA2 passwords.
The Hashcat deciphering process is straightforward. A hacker captures one Extensible Authentication Protocol over LAN (EAPOL) frame after requesting it from a WiFi access point (AP). The hacker then extracts the Pairwise Master Key Identifier (PMKID) from the EAPOL frame, formats the data so Hashcat recognizes it and then runs the process to decipher the password.
Depending upon password complexity, the process may take seconds, hours or days. Hashcat doesn't require the hacker to be in range of both the user and the AP at the precise moment the network handshake occurs (i.e., when the user's password authenticates the connection to the WiFi network).
The easiest and cheapest way to protect WPA and WPA2 WiFi networks against Hashcat is to create a complex, random and lengthy password, not one created by the router. A suggested secure password is at least 16 characters long, randomly mixing upper- and lower-case letters, numerals and special characters.
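Generating such a password is a few lines with Python's `secrets` module; the length and character mix below follow the suggestion above:

```python
import secrets
import string

# Letters, digits, plus a handful of common special characters.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def strong_password(length: int = 16) -> str:
    """Random password drawn with a CSPRNG, one uniform choice per character."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

At 16 characters over a 70-symbol alphabet this gives roughly 98 bits of entropy, far beyond what dictionary-driven tools like Hashcat can brute-force.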
If you’re adamant about securing data when online, eschew WiFi totally. An Ethernet local area network (LAN) is hardwired, with access limited to an RJ-45 outlet inside a controlled-access environment. Walls and buildings, of course, don’t necessarily confine WiFi to the environment within.
If you’re truly concerned about data security, here’s a thought to keep you awake at night. Wired keyboards emit electromagnetic frequencies, which can potentially reveal data from keystrokes. That Logitech USB keyboard you bought for your laptop? It’s a security risk.
It's also been documented that computers emit electromagnetic radiation that leaks data. These "compromising emanations" can even leak plaintext. So, should you see someone wearing a tinfoil hat, look again. The person may be trying to protect data leaking from their Google glasses.
Everybody wants sustainable products, and retailers are putting pressure on manufacturers to deliver goods that satisfy consumers’ new preference for green and clean options.
This means manufacturers need to implement technology to track where their products come from and what they are made of. They will also need to plan around shorter shelf lives that result from fewer additives and preservatives. The upside will be a revitalized industry with increased revenues and the opportunity for insurgent companies to challenge the huge consumer packaged goods (CPG) giants.
Prediction 1: Practical sustainability
In the past, it may have been enough for manufacturers to pay lip service to sustainability; now, consumers are reading labels and considering environmental impact in their buying decisions. They are willing to pay more for clean-label and sustainable products. The food and beverage industry is no exception, as people shopping at supermarkets and noting the disappearance of single-use plastic will attest. However, there are trade-offs to be made.
Lumina Intelligence, for example, has published a report suggesting that the industry has made 905 sustainability promises over the last 12 months. Setting sustainability targets and making promises is, of course, commendable; however, there is still work to be done by the industry to turn these promises into deliverables.
One example is single-use plastics, which have in recent years become deeply unpopular, and for good reason; they end up in rivers, streams and oceans, strangle wildlife or break up into microplastics that get even further into the food chain. However, take the plastic wrapping away from food and it is more likely to spoil before it even gets to market, and its shelf life will be significantly compromised. So, without plastics or a bio-friendly alternative, food waste will increase.
Nonetheless, retailers are listening to their customers and reducing packaging. British retail chain Marks and Spencer has reduced its own packaging by 37 percent since 2017. In various laboratories, scientists are developing starch and vegetable-based packaging that works well with some food products but not all. In order to satisfy environmentally-conscious consumers, a number of supermarkets such as Waitrose in the UK, have stated their aim of following the examples set by grocery stores such as GRAM in Sweden and Original Unverpackt in Germany in reducing waste of some products to zero. Greenpeace has started ranking United States-based supermarket chains on their plastic reduction efforts, with value-oriented Aldi U.S. taking the number one spot in 2019.
Retail buyers also want to buy clean-label foods—products sustainably made of natural, simple ingredients with minimal or no preservatives or artificial colors or flavors. Once a differentiator, clean labels are now the cost of entry for many retail categories, according to a 2019 clean label study by Kerry. The study also finds that some label elements, like non-GMO, have become less important.
One of the factors essential to consumer perception of the sustainability of foods is accurate and clear labeling, which can be a lifesaver for products that otherwise may be banned as unsustainable, like palm oil. The Roundtable on Sustainable Palm Oil, for instance, has established a proprietary label that indicates that the palm oil was grown sustainably on existing, cultivated acreages using best practices instead of through clear cutting of the rainforest. While industry organization FoodDrinkEurope published a report that urges transparency, there is a notable lack of focus on comprehensibility. For example, a package that says it is compostable may not be suitable for a garden compost heap.
Despite isolated inconsistencies, however, the general trend is clear. The consumer is becoming more aware by the minute and the industry is reacting. I predict that in 2020, clean labels, eco-friendly packaging and proof of sustainability will be demanded by consumers and retailers alike, which will help insurgent manufacturers challenge CPG giants and eat away at their market share. This will also mean that manufacturers will need enterprise solutions that can support the tracking, tracing and mass-balance reporting that will be required for compliance.
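The mass-balance reporting mentioned above boils down to bookkeeping: certified claims on outputs may never exceed certified volumes received as inputs. A toy ledger illustrating that constraint (not modeled on any specific ERP product):

```python
class MassBalanceLedger:
    """Track certified input volume and cap certified output claims."""

    def __init__(self) -> None:
        self.certified_balance = 0.0

    def receive(self, qty: float, certified: bool) -> None:
        """Record an incoming lot; only certified lots grow the balance."""
        if certified:
            self.certified_balance += qty

    def claim_certified(self, qty: float) -> None:
        """Label qty of output as certified, drawing down the balance."""
        if qty > self.certified_balance:
            raise ValueError("certified claims exceed certified inputs")
        self.certified_balance -= qty
```

A real system would also timestamp lots and retain an audit trail per reporting period, but the invariant enforced here is the heart of a mass-balance scheme.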
Prediction 2: Packaging and smart labels in focus as the industry realizes that small is beautiful
One thing that makes an item or set of items sustainable is clear provenance, which is an area that has undergone significant developments since sharing my predictions last year. It’s not yet commonplace to have a piece of nanotechnology in a bottle of wine telling you where it came from and how it arrived at the retailer’s shelf. However, it is possible, and it will happen in part because consumers demand it. A 2018 study from the Food Marketing Institute said that 87 percent of shoppers appreciated the ability to learn more about the product they are buying using their smartphone, which they can do with something as simple as a QR code. We’ll return to the role of nanotechnology in packaging, but first let’s consider the packages themselves.
If you live in a family of four or six people, you buy and consume differently than if you have an odd number of family members or live by yourself. Many of us need to buy three yogurts rather than four. However, most food products are packaged primarily for two or four people, not for odd numbers, which leads to unnecessary waste.
This is changing slowly, and the most visible signs so far are the virtual disappearance of the buy-one-get-one free (BOGOF) approach to discounting in retail and the rise of smaller and more varied pack sizes. This latter increase in pack variety increases pressure on manufacturers who now have more SKUs (stock keeping units) to plan and are forced to produce in less profitable, smaller batch runs.
A lot of food is discarded because of a date printed on the label. These dates are based on tests carried out in laboratory conditions, indicating the food will be unsafe to eat or past its prime after a certain number of days. The dates include safety margins to reduce risk to health, which unfortunately leads to “good” food being wasted. The development of smart labels using nanotechnology, with sensors that detect the gases given off by food as it ages/decays, will eliminate a lot of this waste. Smart labelling on food items could detect whether a steak is edible and change color to indicate its freshness. In practice it would mean a complete revamp of stock control and storage, which will be disruptive to the industry but would be for the common good and reduce unnecessary waste.
I predict that in 2020 there will be an increase in the use of nanotechnology to inform the consumer of provenance. The variety of pack sizes will continue to expand allowing the consumer to buy in the quantity they want. We will also continue to see enhancements in smart labelling and technologies like antimicrobial packaging, and paper-based electrical gas sensors will see initial market adoption on retail shelves in 2020.
Within and even beyond the food and beverage industry, there are other attempts to change the way people shop that have immediate implications for process manufacturing. Just as the software industry started moving to a pay-as-you-go, subscription-based, software-as-a-service (SaaS) model 25 years ago, many industries are trying to move in the same direction.
Prediction 3: Servitization of process manufacturing
Discrete manufacturers have been adopting servitization for many years, charging for the outcomes their products provide rather than through a product sale. Delivering the products from process manufacturers as a service rather than as a one-off purchase may be more challenging. However, there are a couple of ways this can be achieved:
- Literally offer a service as well as product. Upmarket paint supplier Farrow and Ball sells cans of paint through retail channels of distribution. The company will also send an advisor to a consumer’s home to look at the walls and offer counsel on colors, decoration techniques, and where feature walls might work well. Obviously, all the recommended products come from Farrow and Ball. There is then potential to upsell the consumer to a return visit a few years later.
- The other technique for selling goods as a service is to aim for an outcomes-based fee rather than a per-unit price. Pharma giant Novartis demonstrated this approach when, in addition to selling tablets by weight or per unit, it offered hospitals a rebate on the cost of its congestive heart failure drug if patients returned to hospital within a given time. This was a solid contractual clause: if the patient returned to hospital or died prematurely, the hospital received a refund.
#Servitization is more commonly associated with discrete manufacturers, but process manufacturers are also embracing this change in business strategy. Learn how process manufacturers can servitize here. https://t.co/IGfV9UXAkn
— IFS (@ifs) November 18, 2019
There is no particular reason why food companies shouldn’t pursue a servitization model if they can do so profitably. Will we soon be interacting with Heinz staff selling baked beans in the supermarket? Already, we see special promotions at grocery stores staffed with factory representatives—sometimes for local artisanal brands.
The ultimate servitization for food and beverages would be when retailers no longer buy products at all; they only sell shelf space, which producers and manufacturers keep stocked. The mechanics and logistics of doing this in the food and beverage space are obviously daunting, but the advantage to the retailer (or premises-holder, facilitator, or whatever we might call them after such a radical change) would be that their supplier would be responsible for disposing of any waste and handling other logistics, similar to how vendor-managed inventory works in a manufacturing setting.
In 2020, I predict we will see a huge increase in the number of process manufacturing companies adopting a servitization model, both in CPG and direct channels. This will mean them having to adapt their business solutions to provide more varied pricing, planning, and distribution models.
Servitization is building revenue streams through services rather than products alone. Find out what stops companies from embracing servitization in this ebook. #ForTheChallengers https://t.co/RGmnBnP3Yy pic.twitter.com/UR1pza3As7
— IFS (@ifs) December 30, 2019
It’s all related
All three of these predictions, when we examine them, are interrelated. Sustainability requires a great deal of clarity, so that consumers have confidence that the product they are buying meets their clean-label and social-responsibility goals. Packaging must change to help reduce waste, again in large part to align with the wishes of the socially conscious consumer. Similarly, portion sizes need to change so smaller families don’t have to buy larger amounts and discard items. And the move to a service model will make businesses less inclined simply to “sell stuff” whether it gets used or not, again reducing waste even as the retailer offloads some tasks onto its suppliers.
Assuming I am right in my forecasts, this will benefit the customer, the business, and the planet.
If you enjoyed this blog please look out for others on IFSBlogs.
I welcome comments on this or any other topic concerning process manufacturing.
Connect, discuss, and explore using any of the following means:
- Twitter: @ElkinsColin
- Blog: http://blog.ifs.com/author/colin-elkins
- LinkedIn: https://www.linkedin.com/in/colinelkins | <urn:uuid:18451953-ccea-464f-bf4a-975b55327450> | CC-MAIN-2022-40 | https://blog.ifs.com/2020/01/2020-process-manufacturing-predictions/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00300.warc.gz | en | 0.959552 | 2,390 | 2.71875 | 3 |
While hitting Russia with cyberattacks helps ease the pressures Moscow has put on Ukraine, what will happen if hackers use the cyberwar as a pretext to focus on another, unrelated, target?
Setting a precedent can have far-reaching and unforeseen consequences, Kim Zetter, investigative cybersecurity journalist and author, argued at the Nordic-Baltic Security Summit 2022 in Tallinn last week.
One example is the use of Stuxnet, a malicious computer worm developed by the US and Israel. The digital weapon was launched in 2007 and helped attackers to destroy close to one-fifth of Iran’s nuclear centrifuges.
According to Zetter, Stuxnet marked the first time a digital weapon caused actual physical destruction in the real world. However, the successful and arguably well-intentioned attack had unforeseen consequences.
“But what we saw after Stuxnet was discovered was that it was the impetus for a lot of other countries to start launching their own offensive cyber operations. […] We’ve seen probably a couple of dozen other countries shortly after announcing their offensive operations,” Zetter said.
She argued that the attack against Iran showed everyone around the globe that there’s legitimacy in using digital weapons to resolve diplomatic and cultural disputes. It’s highly unlikely that the cyberwar sparked by Russia’s invasion of Ukraine will not leave a similar imprint on the future.
Flames of cyber war
Cyberwarfare has been plaguing Europe since Russia invaded Ukraine on 24 February. Groups supporting Ukraine started targeting organizations in Russia to help the country defend against the invasion.
Kyiv rallied an international IT army to help it fight the digital war, while Anonymous, Hacker Forces, and many other hacktivist groups started targeting Russia’s private and state-owned enterprises.
However, Zetter argues that the way Ukraine mobilized the global hacker community has long-term legal implications, especially given previous international agreements on the waging of cyberwar.
In 2015, major cyber powers, including the US, Russia, the UK, China, and others, agreed to a set of norms for cyber conduct.
“One of the things that they agreed on was that states should not intentionally damage another state and other states’ critical infrastructure […]. And they also agreed that states shouldn’t allow their territory to be used for cyberattacks against other states,” Zetter said.
However, at least some of the cyber exchanges originated from within NATO countries: even though hacker groups and governments keep each other at arm's length, legal questions on who is responsible for what could arise soon.
The floodgates are open
Zetter stressed that there should be no doubt that Russia is the obvious aggressor in the war against Ukraine, and called for Kyiv to be provided with aid to repel the Kremlin’s military advances. However, it would be naïve to expect that volunteer-based cyber warfare will not affect how the wider digital conflict is prosecuted.
“But the problem here is that when you are opening this up in this way to activists and volunteers, you’re creating some legal issues. There’s also the idea of how do you rein that in after you’ve unleashed this – it’s setting a precedent for other actors going forward,” Zetter said.
What makes matters more confusing is that governments in the US and EU aren’t entirely clear about their position towards hacking groups targeting mutual enemies. On the one hand, governments distance themselves from prominent hacking groups, while at the same time endorsing the help that Ukraine receives from them.
Zetter argues that while the situation can seem clear-cut when solely viewed within the confines of Russia’s war against Ukraine, avoiding questions posed by the cyberwar could create future situations that will be difficult to handle.
For example, Zetter asked, what if a Russia-owned company in Germany were to organize an offensive bug-bounty program that targeted Ukrainian critical infrastructure, and shared the discovered vulnerabilities with the Russian intelligence community – would Berlin, Brussels, and Washington deem this acceptable private-sector behavior?
“Once you tell Anonymous that it’s OK to be attacking governments, critical infrastructure in Russia, what’s going to stop them attacking another country whose policies they disagree with?” Zetter said.
The same goes for hacker groups with different allegiances. For example, Russia launched a crippling cyberattack against Estonia in retaliation for its decision to relocate a Soviet-era war statue.
“To ignore the essence of the IT army will wreak havoc on the future stability of cyberspace and, with it, the national security landscape in Europe and beyond,” Zetter said.
Cyberattacks are constantly getting more sophisticated. Barely a day goes by without news of an elite hacking team creating ever-stealthier exploits: malware, elaborate spear-phishing attacks, trojans, and an array of ransomware that can take factories and other organizations offline, or even hobble entire cities.
What is typosquatting?
First, do not visit the following examples unless you want to get hacked.
Typosquatting is when third parties buy variants of domain names based on simple and common spelling errors, e.g. “gooogle.com,” or “gooogl.com” instead of Google.com.
Most of these typo-domains are either purchased for resale, redirect to a real offer in a shady way, or take you to a minefield of advertising, but there are enough sites with more pernicious goals to merit attention. A recent study by cybersecurity company Sophos Labs found that roughly 2.7 percent of 15,000 domain names probed directed users to websites associated with some form of cybercrime, including hacking, phishing, online fraud, or spamming.
If 2.7 percent seems like a small number, consider that there are currently at least 360 million registered domain names.
Examples of typosquatting are easy to come by. In 2018, security researchers discovered a perfect copy of Reddit.com, one of the five most-visited sites online, under the domain name Reddit.co (.co is the domain name suffix for Colombia). In this instance, the hackers had even acquired an SSL certificate for the domain, meaning that the majority of web browsers displayed a green lock symbol indicating the spoofed site was legit and secure.
A similar campaign in 2016 was used to spread malware to anyone who had the bad luck of typing Netflix.om and Citibank.om (.om is the domain suffix for Oman). Cybersecurity researcher Brian Krebs reported a network of over a thousand domains using the country suffix for Cameroon, .cm, for major brands, such as Hulu and Netflix, that generated nearly 12 million visits over a three-month period. The opportunities for scams are numerous when a single missing letter can take a would-be victim to a completely separate site.
When you consider how easy it is to buy a domain name, the threat begins to seem a little more real and a lot more present. A spoofed website for a major service, as in the case of Reddit.com, can provide hackers with a fresh and current set of login credentials in a cyber space where 50 percent of respondents in a recent study admitted they use the same passwords for personal and work accounts, and that 65 percent of respondents use the same credentials for most or all their accounts. A compromised login and password combination provides an easy point of entry into business networks and emails if two-factor authentication is not in place, creating the potential for larger scale spear-phishing or ransomware attacks, and, of course, financial account attacks of every stripe.
The risk posed by this sort of hack to a business’s reputation is also worth noting. When it comes to “brandjacking,” typosquatters aren’t trying to hack anyone; instead, the goal is damage, most often via a redirect to offensive content. Whitehouse.org is the most famous example; it has been parodying the official Whitehouse.gov website since the early 2000s.
Lego has reportedly spent a fortune trying either to reclaim or to take down domain names that damage its brands. It shouldn’t be necessary to say that needless embarrassment can be an impediment to success.
What can be done?
Businesses should consider a proactive approach. The best foil to typosquatting is the acquisition of as many similar or related domain names as possible. While it’s extremely unlikely that a business can acquire every possible variation, and it would be inefficient for all but the largest companies to even try, buying the most obvious domain squats is a minor investment for the mitigation of a major risk.
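To see how cheaply these look-alike candidates can be enumerated (whether for defensive registration or for monitoring), the sketch below generates common typo variants of a domain. The three error classes covered are illustrative, not exhaustive, and real tooling would also handle keyboard-adjacency and homoglyph swaps:

```python
def typo_variants(domain):
    """Generate common typosquatting candidates for a domain.

    Covers three simple error classes: character omission ("gogle.com"),
    character doubling ("gooogle.com"), and adjacent transposition
    ("googel.com").
    """
    name, _, tld = domain.rpartition(".")
    variants = set()
    for i in range(len(name)):
        variants.add(name[:i] + name[i + 1:] + "." + tld)               # omission
        variants.add(name[:i] + name[i] * 2 + name[i + 1:] + "." + tld) # doubling
        if i < len(name) - 1:                                           # transposition
            swapped = name[:i] + name[i + 1] + name[i] + name[i + 2:]
            variants.add(swapped + "." + tld)
    variants.discard(domain)  # the correct spelling is not a squat
    return sorted(variants)

print(typo_variants("google.com"))
```

Even this toy generator produces a dozen-plus candidates for a six-letter name, which is why defensively registering only the most obvious squats is the practical middle ground.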
As in virtually every cyber risk, one path to risk mitigation here is education and training. Typosquatting relies on an attention deficit. Train employees to pay attention, to be on the lookout for indications of a spoofed site, and to double-check links with an eye to making absolutely certain that domain names are properly spelled.
Domain names are a sizable part of a company’s attackable surface, and companies or individuals who ignore their own presence on the internet, as well as how it’s represented, do so at their peril. | <urn:uuid:70402108-7b7d-4319-83d1-f75336092d32> | CC-MAIN-2022-40 | https://adamlevin.com/2020/07/23/this-simple-hack-could-tank-your-business/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00300.warc.gz | en | 0.941621 | 955 | 2.8125 | 3 |
Over the last decade, the number of internet-connected devices in homes has been growing exponentially. A typical American home in 2020 had 10.37 devices connected to the internet. IoT devices make up a little over half of those and the rest are PCs and mobile devices.
IoT stands for Internet of Things, which means any type of “smart device” that connects online. IoT devices can be anything from your smartwatch to your refrigerator. Baby monitors and voice assistants like Alexa are also IoT.
Working remotely has become the norm for many businesses around the world. Home IoT devices now share a Wi-Fi network with business devices and data, and this puts every device on that network in danger.
Here are two alarming statistics on that issue:
Smart devices can be a risk to any other device on the same network. They are typically easier to breach. That’s why hackers use them as a gateway into more sensitive devices.
When a criminal targets your IoT network they don’t care about the shopping list stored in your smart refrigerator or your jogging route recorded on your smartwatch. They breach IoT devices to see what other devices are connected to the same network.
The hackers can then use the device’s sharing permissions that are often present on home networks. Through them, hackers can gain access to your mobile device or work computer.
Why are IoT devices less secure? Here are a few reasons:
All modern routers have the ability to set up a secondary Wi-Fi network, called a “guest network.”
By putting all your IoT devices on a separate network you cut the bridge that hackers use to go from an IoT device to another device on the same network.
In fact, when you separate those two a hacker can’t even see if you have a PC or a smartphone connected to that network. This is because they’re on the other network.
This is an important additional layer of security you can use. Whether you’re a full-time remote worker or occasionally take your work laptop home for some extra work, it can help.
Here are the steps to separate your IoT devices. (Note, you can also have this done by us, we’ll be happy to help you out with your IT headaches.)
As you add new IoT devices to your home, make sure to connect them to the guest network. This will keep the layer of security effective.
One quick tip: When naming your Wi-Fi networks, don’t use descriptive names like name, address, or router model name. It’s best to avoid names that give the hackers valuable information they can use in attacks. Also, don’t name your networks “IoT network” and “Sensitive devices”, you are just pointing out to hackers which network is valuable and which one can be ignored.
With remote work becoming more popular, hackers have begun targeting home networks. They fish for personal and sensitive business data. Don’t leave yourself open to a breach, schedule a quick 10-15 minute talk about improving your cybersecurity!
You can trust EB Solution to do a great job, no matter how hard the task is! Our track record speaks for itself. We’ve been in this business since 2011, protecting our clients’ networks and databases 24/7, and never had a problem. We offer a wide range of cybersecurity services, including network security, password management, employee training, business continuity and data recovery. | <urn:uuid:ad569ac5-d3fc-4031-8247-95460313f858> | CC-MAIN-2022-40 | https://ebsolution.ca/why-you-should-put-iot-devices-on-a-guest-wi-fi-network/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00500.warc.gz | en | 0.942699 | 724 | 2.953125 | 3 |
Ransomware dominated the healthcare industry in 2017, with six of the top ten breaches reported to the U.S. Department of Health and Human Services a direct result of the malicious software. An article on Security Current looks at some ransomware attacks from 2017 as well as steps you can take to help avoid becoming a victim.
What is ransomware?
In a ransomware attack, access to your computer systems or files is blocked by the attacker using encryption. These important files are kept locked and held for ransom until the victim pays the requested amount, at which time the attacker may or may not give the victim the decryption key to recover their data. Even if the individual or organization recovers their data in a ransomware attack, there is no guarantee that the cybercriminal did not steal the data prior to encrypting it.
Why is the healthcare industry a target for cybercriminals?
It is safe to say that the healthcare industry has become a prime target for cybercriminals, but why? One reason may be that organizations holding health data tend to lack a mature security posture compared to other industries, such as finance. Another reason cybercriminals target the healthcare industry is simply due to the value of medical records, which are often more valuable than transient data such as credit card numbers.
In addition, medical facilities rely on access to their patient data around the clock as part of their everyday workflow. When access to critical data is unavailable, patient lives can be at stake, so restoring data in a medical facility is vital following an attack.
Ransomware in 2017
Airway Oxygen, Inc. knows firsthand the trouble that ransomware can cause. In 2017, the organization fell victim to a ransomware attack that affected 500,000 individuals when its technical infrastructure was compromised by unidentified cybercriminals. Purity Cylinder and Airway Oxygen, two affiliated companies, were denied access to their data as a result of the attack. PHI involved in the breach included customers’ payment information, names, addresses, phone numbers, dates of birth, diagnoses, health insurance information, and the type of service each individual was receiving.
Another notable ransomware attack in 2017 struck Urology Austin, affecting 279,663 individuals. In this attack, data stored on the organization’s servers was encrypted, with the investigation indicating that compromised PHI may have included names, addresses, dates of birth, Social Security numbers, and medical information.
How can you minimize the likelihood and impact of a malware/ransomware breach?
- Keep anti-virus and anti-malware installed and up to date across systems
- Keep systems patched and current
- Backup your data off your network as frequently as possible and periodically test your backup process to ensure you can recover all data using backups
- Utilize Group Policy Objects (GPO) restrictions
- Restrict administrative rights across all systems
- Utilize a Secure Internet Gateway on and off the network
- Block users from installing anything on their own
- Utilize a Data Loss Prevention solution and actively monitor it
- Utilize Endpoint Protection and actively monitor it
- Invest in your Information Security program
- Establish routine security awareness training and campaigns
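The backup bullet above is only useful if restores are actually tested. A minimal verification sketch is shown below: hash every file in the source tree, restore the backup to a scratch directory, and compare. The directory layout and function names are illustrative assumptions, not a particular backup product’s API:

```python
import hashlib
from pathlib import Path

def checksum_tree(root):
    """Map each file's path (relative to root) to its SHA-256 digest."""
    root = Path(root)
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*")
        if p.is_file()
    }

def verify_restore(source_dir, restored_dir):
    """Return the set of files that are missing or differ in a test restore."""
    source = checksum_tree(source_dir)
    restored = checksum_tree(restored_dir)
    return {path for path, digest in source.items() if restored.get(path) != digest}
```

Running `verify_restore` against a scratch restore on a schedule turns “we have backups” into “we have backups we know we can use,” which is the property that actually defeats ransomware extortion.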
With ransomware growing rapidly, it is important to take the proper steps to ensure your organization does not fall victim and become another statistic. | <urn:uuid:bf53bac7-dd0d-4d50-a473-e9148fa64b1e> | CC-MAIN-2022-40 | https://www.hipaasecurenow.com/ransomware-wreaks-havoc-in-2017/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00500.warc.gz | en | 0.949438 | 671 | 2.734375 | 3 |
What is Static DNS?
Static DNS refers to Domain Name System (DNS) records that are configured manually and stay fixed, rather than being updated automatically whenever a device’s IP address changes.
A static DNS record stores a fixed mapping between a domain name and an IP address, so the name always resolves to the same address until an administrator changes the record by hand. Static records are used for servers and other computers with a stable connection to the Internet. They also allow internal networks, like those in schools and companies where all devices connect through a router, to reach resources outside their network without requiring updates every time a device’s address or name changes.
What is Static DNS?
Static DNS is the older counterpart of Dynamic DNS. It is a long-established system for resolving hostnames to IP addresses, and it is still used alongside Dynamic DNS when records cannot be updated dynamically or when a Dynamic DNS service has problems.
Static DNS keeps a table of which hostnames map to which IP addresses. When a lookup arrives, the resolver checks this table for the requested hostname; if no record is found, or the resolver doesn’t know about the domain name, it forwards the query to the local network’s DNS server.
Static DNS was valued because it could still answer queries for IP addresses or hostnames even when Dynamic DNS servers were not working. It is sometimes still used when there are issues with Dynamic DNS, as well as for other reasons, such as when an Internet Service Provider (ISP) does not allow the use of Dynamic DNS for privacy reasons.
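The lookup order just described can be sketched as a tiny resolver: consult a static table first, and fall back to the normal resolver only when no record exists. The table contents below are illustrative assumptions:

```python
import socket

# Illustrative static table: these mappings never change unless an
# administrator edits them by hand.
STATIC_RECORDS = {
    "fileserver.local": "192.168.1.10",
    "printer.local": "192.168.1.11",
}

def resolve(hostname):
    """Check the static table first; otherwise forward to the system resolver."""
    if hostname in STATIC_RECORDS:
        return STATIC_RECORDS[hostname]
    return socket.gethostbyname(hostname)  # query the network's DNS

print(resolve("fileserver.local"))  # → 192.168.1.10
print(resolve("localhost"))         # falls through to the system resolver
```

This is exactly the behavior of a hosts-file entry on most operating systems: the static answer wins, and everything else goes out to DNS.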
Why would I need static DNS?
You would need static DNS if you want a device to be reachable at an address that never changes. The idea is to assign fixed addresses to devices that are always on, so each device keeps the same address even when the ISP would otherwise assign it a new one. This makes it easier for people to reach you: they type in one address instead of chasing a new one every time your ISP changes it.
All you need to do is change the DNS settings on your device or router. You can’t have more than one static DNS address on a device, so if you have multiple devices that need static DNS, you will need to get a separate service for each of them.
What are the benefits of using static DNS?
There are numerous benefits to using static DNS. Often your ISP will assign your devices a dynamic IP address that can change between sessions. This means that when someone tries to reach a service you host, their requests may go to an address you no longer hold. With static DNS, you can configure servers at your company to point towards a fixed internal IP address (like 192.168.1.1) and avoid this problem altogether.
Another benefit of using static DNS is the help with security. When your ISP assigns your device a dynamic IP address, you have very little control over what happens to that address once it has been assigned to you. This means that if someone guesses your address before you have had a chance to change it, they will have access to your network without any extra work on their part. With static DNS, you are in complete control over what happens with that IP address and can automate the entire process for yourself.
One of the most important benefits of using static DNS is that it offers a simple way for you to change your IP addresses on all of your devices with just one set of instructions. This means that if you get a new computer, laptop, phone, or tablet, you can update the DNS settings and have it automatically use your new device.
What are the disadvantages of using static DNS?
The most significant disadvantage of using static DNS is that it can be difficult to troubleshoot or resolve hostname issues. Many network administrators who use static DNS configuration often face the problem of not knowing what has changed on their DNS server. Users are also required to stay at their computers for a longer period of time. Furthermore, it is difficult to ascertain if the DNS settings are taking effect.
Static DNS is a hassle for network administrators since it has to be set up inside the router. Unlike dynamic DNS service, where the details are managed from a central location, users have to configure each device separately. In addition to that, there is no simple way of updating records if the hostname changes.
Static DNS is a way of ensuring that the IP address you assign to your domain name will be accessible even if it changes, which can happen if you’re assigned an IP address by another company. It’s useful for people who want their website or another online service available at all times without having to worry about losing access when they switch providers. Static DNS also has some disadvantages, like costing money and requiring more technical knowledge than dynamic DNS services do. But overall, static DNS offers benefits in terms of reliability and flexibility- especially for small business owners with one location where internet connectivity is provided by only one provider. | <urn:uuid:aa453f75-67b1-4a75-8e37-2029c7f3db18> | CC-MAIN-2022-40 | https://gigmocha.com/what-is-static-dns/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00500.warc.gz | en | 0.95404 | 1,065 | 3.421875 | 3 |
Cybersecurity researchers have discovered a critical vulnerability in widely used SQLite database software that exposes billions of deployments to hackers.
Dubbed ‘Magellan’ by Tencent’s Blade security team, the newly discovered SQLite flaw could allow remote attackers to execute arbitrary or malicious code on affected devices, leak program memory, or crash applications.
SQLite is a lightweight, widely used disk-based relational database management system that requires minimal support from operating systems or external libraries, and hence compatible with almost every device, platform, and programming language.
SQLite is the most widely deployed database engine in the world today, which is being used by millions of applications with literally billions of deployments, including IoT devices, macOS and Windows apps, including major web browsers, such as Adobe software, Skype and more.
Since Chromium-based web browsers—including Google Chrome, Opera, Vivaldi, and Brave—also support SQLite through the deprecated Web SQL database API, a remote attacker can easily target users of affected browsers just by convincing them into visiting a specially crafted web-page.
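Magellan was patched in SQLite 3.26.0 (December 2018), so one quick check is to confirm which SQLite version an application’s runtime actually bundles, since the bundled copy can lag behind the system library. A sketch from Python:

```python
import sqlite3

# The standard-library sqlite3 module bundles its own copy of SQLite,
# which may be older than the system-wide library.
print("Bundled SQLite version:", sqlite3.sqlite_version)

# Magellan was fixed in SQLite 3.26.0.
if sqlite3.sqlite_version_info >= (3, 26, 0):
    print("Bundled library includes the Magellan fix")
else:
    print("Bundled library predates the fix - upgrade it")
```

The same “check the embedded copy, not just the OS package” principle applies to browsers, mobile apps, and IoT firmware, which is what made this flaw so widely distributed.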
Read more: The Hacker News | <urn:uuid:350f7911-7daf-443f-a591-510f41a436cb> | CC-MAIN-2022-40 | https://www.globaldots.com/resources/blog/critical-sqlite-flaw-leaves-millions-of-apps-vulnerable-to-hackers/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00500.warc.gz | en | 0.917945 | 228 | 2.640625 | 3 |
LDAP (Lightweight Directory Access Protocol) is basically a way of organizing information and providing access to it. It’s commonly used for user, service and machine information, and it’s incredibly useful. The Linux version is OpenLDAP. It will handle authentication as well as information (so the password aspect of login as well as looking up who the user is, where his home directory is, and so on). However, it’s not as secure as it could be — by default connections are unencrypted, so your password is being sent out in the open.
Previously, you could deal with this using SSL tunneling, but that has been deprecated since the retirement of LDAPv2 (in 2003). TLS is another option, but even using that, LDAP still has security problems.
For security, your best option is Kerberos. Kerberos is specifically designed to handle authentication. It will not do the information lookup that LDAP does. It avoids password security problems and enables single sign-on (a very useful feature!). So the ideal situation is to use both Kerberos and LDAP: one for authentication and one for information organization and access. In fact, they do play quite nicely together, but information on how to achieve this is limited. If you do want to set them up, check out this two-part article I wrote (requires registration but is free) a while back, which is still relevant. You can also get more information from the LDAP and Kerberos web pages.
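The division of labor can be pictured with a toy stand-in for the LDAP half: a directory of attributes keyed by distinguished name (DN). The entries and attribute names below mimic the common posixAccount schema, but the lookup logic is purely illustrative, not a real OpenLDAP client:

```python
# Toy stand-in for an LDAP directory: attribute sets keyed by DN.
DIRECTORY = {
    "uid=alice,ou=people,dc=example,dc=com": {
        "uid": "alice",
        "cn": "Alice Example",
        "homeDirectory": "/home/alice",
        "loginShell": "/bin/bash",
    },
}

def search(filter_uid):
    """Mimic an LDAP search with the filter (uid=<filter_uid>)."""
    return [
        (dn, attrs) for dn, attrs in DIRECTORY.items()
        if attrs.get("uid") == filter_uid
    ]

dn, attrs = search("alice")[0]
print(attrs["homeDirectory"])  # → /home/alice
```

In the combined setup, Kerberos proves the user is really alice; LDAP then answers questions like the home directory and login shell above, with no password ever crossing the wire in the clear.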
This article was first published on ServerWatch.com. | <urn:uuid:e6ec64e5-8b94-4310-bd7a-7f465dceaf01> | CC-MAIN-2022-40 | https://www.datamation.com/networks/ldap-and-kerberos-so-happy-together/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00500.warc.gz | en | 0.945402 | 345 | 3.03125 | 3 |
What is Cybersecurity?
Cybersecurity is a process that enables organizations to protect their applications, data, programs, networks, and systems from cyberattacks and unauthorized access. Cybersecurity threats are rapidly increasing in sophistication as attackers use new techniques and social engineering to extort money from organizations and users, disrupt business processes, and steal or destroy sensitive information.
To protect against these activities, organizations require technology cybersecurity solutions and a robust process to detect and prevent threats and remediate a cybersecurity breach.
Evolution of Cybersecurity
Like many technologies, cybersecurity has evolved, but the evolution is often driven more by changing threats than by technological advances. For example, because hackers found ways of getting malware past traditional firewalls, engineers have come up with cybersecurity technology that can detect threats based on their behavior instead of their signatures.
The internet of things is also playing a guiding role in the evolution of cybersecurity. This is primarily because so many personal, home, and factory devices have been compromised by hackers looking for an easy entry into a network.
Why Is Cybersecurity Important For Enterprises?
Cybersecurity is crucial for enterprises because, according to a recent IBM report, the average cost of a data breach in the United States is $9.44 million. Worldwide, the price tag of an enterprise breach is $4.35 million. Enterprises need cybersecurity to protect themselves from the hordes of opportunistic hackers and thieves looking to steal data, sabotage systems, and extort funds. If they successfully penetrate an enterprise system, the payout can be significant. For example, attackers can earn, on average, $9,640 from selling access to a hacked network.
In the event of an attack, the damage can expand to include:
- Monetary losses
- Sullied business relationships
- A poor reputation among customers and across your industry
What Are the Different Categories of Cybersecurity?
Various types of cybersecurity enable organizations to defend their various systems. Tools for cybersecurity include:
Network Security
Network security is the use of devices, processes, and technologies to secure corporate networks. Organizations’ increasingly complex networks introduce new vulnerabilities across various areas, including applications, data, devices, locations, and users. Network security tools can prevent threats, close potential vulnerabilities, prevent downtime, and avoid regulatory noncompliance.
Application Security
Application security is the process of enhancing the security of mobile and web applications. This typically occurs during development to ensure apps are safe and protected when deployed, which is crucial as attackers increasingly target attacks against apps. Application security tools enable organizations to test apps, detect threats, and cover them with encryption.
Information Security
Information security, also known as InfoSec, secures data from unauthorized access, deletion, destruction, modification, or misuse. It involves using practices and processes to protect data when stored on devices and in transit.
Operational Security
Operational security (OPSEC) is a process that protects sensitive information and prevents unauthorized access. OPSEC encourages organizations to look at their infrastructure and operations from the perspective of an attacker. It allows them to detect unusual actions or behavior, as well as discover potential vulnerabilities and poor operation processes.
Addressing these threats and weaknesses enables companies to implement security best practices and monitor communication channels for suspicious behavior.
Disaster Recovery and Business Continuity
Disaster recovery and business continuity enable organizations to regain full access and functionality of their IT infrastructure. Disaster recovery relies on data being backed up, allowing the organization to recover and restore original data and systems.
End-User Education
Employees are organizations’ first line of defense against cyberattacks. It’s therefore crucial that users understand the importance of cybersecurity and the types of threats they face. Organizations also need to ensure employees follow cybersecurity best practices and policies.
How Does Cybersecurity Work?
What is cybersecurity in the context of your enterprise? An effective cybersecurity plan needs to be built on multiple layers of protection. Cybersecurity companies provide solutions that integrate seamlessly and ensure a strong defense against cyberattacks.
Employees need to understand data security and the risks they face, as well as how to report cyber incidents for critical infrastructure. This includes the importance of using secure passwords, avoiding clicking links or opening unusual attachments in emails, and backing up their data. In addition, employees should know exactly what to do when faced with a ransomware attack or if their computer detects ransomware malware. In this way, each employee can help stop attacks before they impact critical systems.
Organizations need a solid framework that helps them define their cybersecurity approach and mitigate a potential attack. It needs to focus on how the organization protects critical systems, detects and responds to a threat, and recovers from an attack. As part of cybersecurity awareness, your infrastructure should also include concrete steps each employee needs to take in the event of an attack. By having this kind of emergency response manual, you can limit the degree to which attacks impact your business.
A cybersecurity solution needs to prevent the risk of vulnerabilities being exploited. This includes protecting all devices, cloud systems, and corporate networks. When thinking about vulnerabilities, it’s also important to include those introduced by remote and hybrid employees. Consider vulnerabilities in the devices they use to work, as well as the networks they may connect to as they log into your system.
Technology is crucial to protecting organizations' devices, networks, and systems. Critical cybersecurity technologies include antivirus software, email security solutions, and next-generation firewalls (NGFWs). It’s important to keep in mind that your technology portfolio is only as good as the frequency and quality of its updates. Frequent updates from reputable manufacturers and developers provide you with the most recent patches, which can mitigate newer attack methods.
What Are the Types of Cybersecurity Threats?
Recent cybersecurity statistics show that organizations face a growing range of threats, including:
Malware is a term that describes malicious software, which attackers use to gain access to networks, infect devices and systems, and steal data. Types of malware include:
Viruses are one of the most common forms of malware. They quickly spread through computer systems to affect performance, corrupt files, and prevent users from accessing the device. Attackers embed malicious code within clean code, often inside an executable file, and wait for users to execute it.
To prevent viruses from spreading, it’s important to educate employees about which kinds of files they should and should not download on their computers while connected to your network. For example, some companies choose to discourage employees from downloading files with .exe extensions.
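A policy like the .exe restriction mentioned above can be enforced in code as a simple blocklist check. The sketch below is illustrative only; the extension list is a hypothetical example, not a complete inventory of risky file types:

```python
# Sketch of a download-policy check: block file types commonly used
# to deliver malware. The blocklist here is illustrative, not exhaustive.
BLOCKED_EXTENSIONS = {".exe", ".scr", ".bat", ".cmd", ".js", ".vbs"}

def is_download_allowed(filename: str) -> bool:
    """Return False if the file's extension is on the blocklist."""
    # Lowercase and strip trailing dots, a common trick to evade naive filters.
    name = filename.lower().rstrip(".")
    for ext in BLOCKED_EXTENSIONS:
        if name.endswith(ext):
            return False
    return True

print(is_download_allowed("report.pdf"))   # True
print(is_download_allowed("invoice.exe"))  # False
```

A real mail or web gateway would combine a check like this with content inspection, since extensions alone are easy to spoof.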
Trojan horses appear as legitimate software, which ensures they are frequently accepted onto users’ devices. Trojans create backdoors that allow other malware to access the device. Because Trojans can be very hard to distinguish from legitimate software, it’s sometimes best to prevent employees from installing any kind of software on their computers without guidance.
Spyware hides on a computer to track user activity and collect information without their knowledge. This allows attackers to collect sensitive data, such as credit card information, login credentials, and passwords. Spyware can also be used to identify the kinds of files that hackers hunt for while committing corporate espionage. By using automation to pinpoint their cyber bounty, attackers can streamline the process of breaching your network, only targeting the segments where they've located valuable information.
Ransomware involves attackers blocking or locking access to data then demanding a fee to restore access. Hackers typically take control of users’ devices and threaten to corrupt, delete, or publish their information unless they pay the ransom fee.
Each ransom attack has to be handled differently. For example, while it’s always a good idea to contact authorities, in some cases, you may be able to find a decryption key on your own, or your cybersecurity insurance policy may provide you with a financial parachute.
Adware results in unwanted adverts appearing on the user’s screen, typically when they attempt to use a web browser. Adware is often attached to other applications or software, enabling it to install onto a device when users install the legitimate program. Adware is especially insidious because many employees don’t realize how serious it is, seeing it as a mere annoyance rather than a real threat. But clicking on the wrong adware can introduce damaging malware to your system.
A botnet is a network of devices that have been hijacked by a cyber criminal, who uses it to launch mass attacks, commit data theft, spread malware, and crash servers. One of the most common uses of botnets is to execute a distributed denial-of-service (DDoS) attack, where each computer in the botnet makes false requests to a server, overwhelming it and preventing legitimate requests from going through.
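The request-flooding behavior described above is commonly mitigated server-side with rate limiting. Below is a minimal sliding-window rate limiter sketch; the limits are hypothetical, and real deployments typically layer this behind dedicated DDoS-mitigation services:

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Allow at most `limit` requests per `window` seconds per client."""
    def __init__(self, limit=100, window=1.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # client_id -> recent request timestamps

    def allow(self, client_id, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[client_id]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) < self.limit:
            q.append(now)
            return True
        return False

# A flood of 5 requests at the same instant with a limit of 3/second:
limiter = RateLimiter(limit=3, window=1.0)
results = [limiter.allow("bot", now=0.0) for _ in range(5)]
print(results)  # [True, True, True, False, False]
```

Per-client limits like this blunt a single abusive source; a distributed attack from thousands of bots additionally requires upstream filtering.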
Phishing is an attack vector that directly targets users through email, text, and social messages. Attackers use phishing to pose as a legitimate sender and dupe victims into clicking malicious links and attachments or sending them to spoofed websites. This enables them to steal user data, passwords, credit card data, and account numbers.
Structured Query Language (SQL) injection is used to exploit vulnerabilities in an application’s database. An attack requires a form field that passes user-supplied input directly into a SQL query. Cyber criminals launch an attack by inserting malicious code into form fields to exploit vulnerabilities in code patterns. If the vulnerable code is shared across the application, the flaw can affect every website that uses the same code.
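The mechanics are easy to demonstrate with Python's built-in sqlite3 module. The table and injected input below are hypothetical, but the contrast between string concatenation and a parameterized query is exactly the vulnerability class described above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

malicious = "nobody' OR '1'='1"

# UNSAFE: user input is concatenated straight into the SQL string,
# so the injected OR clause makes the WHERE filter match every row.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + malicious + "'"
).fetchall()
print(unsafe)  # both rows returned -- the filter was bypassed

# SAFE: a parameterized query treats the input as data, not SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()
print(safe)  # [] -- no user is literally named "nobody' OR '1'='1"
```

The fix is the same in every language and database driver: never build queries by string concatenation; always bind user input through placeholders.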
Man-in-the-Middle (MITM) Attacks
A MITM attack happens when attackers exploit weak web-based protocols to steal data. It enables them to snoop on conversations, steal data being shared between people, impersonate employees, launch bots that generate messages, and even spoof entire communications systems.
A denial-of-service (DoS) attack involves attackers flooding a server with internet traffic to prevent access to websites and services. Some attacks are financially motivated, while others are launched by disgruntled employees.
What Are the Major Forms of Threats to Global Cybersecurity?
Global cybersecurity efforts aim to counter three major forms of threats:
A cyber crime occurs when an individual or group targets organizations to cause disruption or for financial gain.
In a cyberattack, cyber criminals target a computer or corporate system. They aim to destroy or steal data, do damage to a network, or gather information for politically motivated reasons.
Cyber terrorism involves attackers undermining electronic systems to cause mass panic and fear.
Five Cybersecurity Best Practices to Prevent Cyber Attacks
How does cybersecurity work? Here are some of the best practices you can implement to prevent cyber attacks:
- Use frequent, periodic data backups. In the event a system gets destroyed or held for ransom, you can use your backup to maintain business continuity. Also, by frequently backing up, you provide yourself access to the most relevant data and settings. You also get a snapshot of a previous state you can use to diagnose the cause of a breach.
- Use multi-factor authentication. With multi-factor authentication, you give hackers at least one extra step they must go through to fraudulently misrepresent themselves. And if one of the measures involves a biometric scan, such as a fingerprint or facial scan, you hoist the hacker hurdle even higher.
- Educate employees about cyber attacks. Once your employees understand what the most common cyber attacks look like and what to do, they become far more effective members of your cyber defense team. They should be taught how to handle malware, phishing, ransomware, and other common assaults.
- Encourage or mandate proper password hygiene. Leaving passwords unprotected or choosing ones that are easy to guess is essentially opening the door for attackers. Employees should be encouraged or forced to choose passwords that are hard to guess and keep them safe from thieves.
- Use encryption software. By encrypting the data you hold, you make it virtually impossible for a thief to read because they don’t have the decryption key. Also, with encryption, you make it easier for remote employees to safely use public networks, such as those at coffee shops, because a snooping hacker won't be able to read the data they send or receive from your network.
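Multi-factor authentication, the second practice above, is commonly implemented with time-based one-time passwords (TOTP, RFC 6238), the rotating six-digit codes produced by authenticator apps. A minimal standard-library sketch:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, for_time=None, step=30, digits=6) -> str:
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59s.
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, for_time=59, digits=8))  # 94287082
```

Because the code depends on a shared secret plus the current time, a stolen password alone is no longer enough to log in.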
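Password hygiene, the fourth practice above, has a server-side counterpart: passwords should be stored salted and stretched, never in plaintext. The sketch below uses Python's built-in PBKDF2; the iteration count is an illustrative choice, not a mandated value:

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt=None, iterations: int = 600_000):
    """Return (salt, derived_key) for storage; never store the raw password."""
    salt = os.urandom(16) if salt is None else salt
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, key

def verify_password(password: str, salt: bytes, stored_key: bytes,
                    iterations: int = 600_000) -> bool:
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(key, stored_key)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("password123", salt, stored))                   # False
```

The random salt defeats precomputed rainbow tables, and the high iteration count makes brute-forcing each guess expensive for an attacker who steals the database.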
What Will Cybersecurity Look Like in the Next 10 Years?
Over the next 10 years, cybersecurity will continue to evolve, with the future of cybersecurity adjusting to deal with several threats.
One major concern is ransomware. This continues to be a big moneymaker for attackers, and cybersecurity will have to evolve to prevent a wider variety of ransomware campaigns. Attacks on large enterprises, particularly those using USB devices, are also likely to escalate over the next 10 years. These will force companies to integrate cybersecurity with enterprise risk management (ERM).
To meet these challenges, as well as the growing volume of attacks, cybersecurity teams will have to incorporate more automation in their defense strategies, which can save security teams time and improve the accuracy of detection and mitigation.
How Fortinet Can Help
Fortinet Antivirus detects and prevents potential cyber threats. FortiMail protects organizations from email-borne threats like malware, phishing, and spam. FortiWeb web application firewalls (WAFs) protect critical web applications from known and unknown vulnerabilities and evolve in line with changes to an organization's attack surface. FortiDDoS provides dynamic, multi-layered protection from known and zero-day attacks, helping organizations fight ever-evolving distributed DoS threats.
Fortinet also provides a range of virtual private network (VPN) solutions that enable users to browse the web securely via encrypted connections regardless of where they log on from.
What is Cybersecurity?
Cybersecurity is a process that enables organizations to protect their applications, data, programs, networks, and systems from cyberattacks and unauthorized access.
How does cybersecurity work?
An effective cybersecurity plan needs to be built on multiple layers of protection. Cybersecurity companies provide solutions that integrate seamlessly and ensure a strong defense against cyberattacks.
What are the major forms of threats to cybersecurity?
Global cybersecurity efforts aim to counter three major forms of threats: cyber crime, cyberattack, and cyber terrorism.
The World Cup, the biggest sporting spectacle in the world, is bound to be a bonanza for fraudsters, spies, and data thieves.
While the superstars of football excite and delight on the field, professionals of a different kind — thieves engaged in deceptive, hard-to-detect data collection — will lurk in the shadows. These opportunists will use every means at their disposal, including fake Wi-Fi hotspots, cell signal spoofing, and theft of ID cards, to profit from identity theft. These nefarious attendees could potentially gain information valuable to international espionage, whether it be blackmail material, national security secrets, or sensitive corporate information.
Well-attended events and highly populated areas have always been havens for criminals and spy agencies, but in recent years, the threat has shifted to less-intrusive collection exercises. At the 2018 FIFA World Cup, some of the things that offer customer value and an enhanced experience — such as FIFA's FAN ID program — are targets.
FIFA's FAN ID document is required by the Russian authorities for all attendees of the World Cup. Ticket holders must have a FAN ID with a valid match ticket in order to enter any of the stadiums hosting matches at the World Cup.
Conveniences like FAN ID offer easier access to stadiums during 2018 FIFA World Cup matches and free access to public transportation. But these also lead to data harvesting and malicious behavior on mobile and personal devices — of both officials and fans.
The FAN ID information collected by Russian authorities includes personal information such as name, photo, nationality, and passport number. Russia has said the FAN ID is designed to crack down on unrest and keep away potential threats, but blacklisted fans have found ways to bypass the system and gain entry. Russian officials received nearly a million applications for the FAN ID program.
The Russian Threat
In light of recent events in international data theft, it's notable that the World Cup is being held in Russia, a hotbed of international espionage that has drawn hundreds of thousands of visitors within its borders, all of whom have had their personal information collected by the host country. And it all comes just as the country is ramping up efforts to destabilize democracies and interfere with elections around the world. Consider:
- In February, the US Department of Homeland Security warned Americans attending the Winter Olympics in Pyeongchang that they would be targeted by cybercriminals. Before the games occurred, McAfee found that more than 300 Olympic computer systems were attacked, and many were compromised.
- Once the opening ceremonies began, Russian military spies were found to have hacked computers in South Korea in a "false flag" operation, designed to make it look like the attacks were perpetrated by North Korea.
- In March, DHS confirmed that unauthorized cell-site simulators, known as "stingrays," have been set up throughout Washington, DC. These devices, also known as IMSI (international mobile subscriber identity) catchers, can be used to spoof cell towers and intercept communications. The availability of this technology is so wide that agents can now have it planted in our nation's capital and go undetected for some time while collecting information.
- Russia has shown a key interest in collecting data on citizens in foreign countries, using that targeted information to stir up unrest and influence elections. National security experts believe that after working to influence the 2016 presidential election, Russia is once again ramping up to interfere with the 2018 midterm elections in the US.
Piecing it all together — increased Russian espionage, wide availability of Wi-Fi and cellular spoofing tools, cyberattacks on the rise, and the games being hosted in Russia — anyone can see how the 2018 FIFA World Cup is prime territory for cyber theft.
Still, Russia has been a popular destination for tourists for many years, and the vast majority of those who attend will not likely be targeted. The greater threat for most could be communications concerns, particularly with respect to cell spoofing and public Wi-Fi hotspots. Here again, the fears are justified.
Mobile data, particularly with international roaming charges, doesn't come cheap, which means many visitors will be inclined to utilize free public Wi-Fi hotspots they might encounter during their stay. These can be a gold mine for fraudsters, intercepting all communications coming from mobile devices, including sensitive personal information. A recent study found that more than 7,000 public Wi-Fi hotspots in World Cup host cities are insecure.
The threat of public Wi-Fi is not new — Apple's iPhone warns users before they connect to an unsecured network that it provides "no security" and exposes "all network traffic." But thieves know that human nature is the biggest threat to security, and the desire by fans to be connected while in Russia will drive many to make poor decisions.
How to Stay Safe
- Don't participate in Internet banking or use any apps that might share personal data. The UK's National Cyber Security Centre advises that match goers bring pay-as-you-go mobile devices rather than their regular smartphone. And when possible, use secure mobile data, such as an end-to-end encrypted connection through a VPN, to maximize security.
- In terms of spending, credit cards are preferred over debit cards, due to the protections offered by credit card companies.
- Those in Russia should also be wary of phishing attempts and email spam. World Cup attendees should also let their friends and family know they will be at the games, as fraudsters will frequently reach out to known family members via email, falsely claiming that the person traveling abroad is in trouble, in what is known as the "stranded traveler" phishing attack.
Nowadays, managed PoE switches are becoming more and more popular among network users, and many people choose a managed switch with PoE function rather than an unmanaged one. Why is this the case? Read this post to learn why you should use a managed switch with PoE, as well as the difference between an unmanaged PoE switch and a managed PoE switch.
What Is A Managed Switch?
You may know that network switch can be divided into two types in management level, namely managed switch and unmanaged switch. Then, what is a managed switch? What’s the difference between unmanaged vs. managed switch?
Actually, a managed switch is a switch that allows access to one or more interfaces for the purpose of configuration or management of features such as Spanning Tree Protocol (STP), port speed, VLANs, etc. It can give you more control over your LAN traffic and offer advanced features to control that traffic. For example, the FS S5800-48F4S 10GbE switch, supporting MLAG, VxLAN, SNMP, etc.
On the contrary, an unmanaged switch simply allows Ethernet devices, such as a PC or network printer, to communicate with one another. It is shipped with a fixed configuration and does not allow any changes to this configuration.
Advantages of A Managed Switch
Normally, a managed switch is preferable to an unmanaged one, since it provides all the features of an unmanaged switch and more. Compared with an unmanaged switch, a managed one offers advantages such as administrative controls, network monitoring, and limited communication for unauthorized devices.
What Is PoE? Why Should You Use A Managed Switch With PoE?
From the introduction above, you may be aware of the importance of a managed switch. Then, why should you use a managed switch with PoE? Do you know what a managed PoE switch is?
Actually, PoE means power over Ethernet. The main advantage or feature of PoE is delivery of data and power at the same time over one Cat5e or Cat6 Ethernet cable. It ends the need for AC or DC power supplies and outlets. What’s more, a remote installation costs less than fiber as no electrician is required.
PoE is not recommended for sending network data over long distances, or for extreme temperatures unless industrial designation is present. It is often seen to be used in a Gigabit Ethernet switch, and it is mainly used with IP cameras, VoIP phones and WAP (wireless access points). These are the reasons why you should use a managed switch with PoE. Here, let’s take FS 8-port Gigabit PoE+ managed switch as an example.
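Sizing a PoE deployment comes down to the switch's total power budget versus the per-port draw of each device. The sketch below is a toy budget check; the device wattages and the 130 W budget are hypothetical, and real figures come from device datasheets and the switch specification:

```python
# IEEE PoE standards cap per-port power delivered by the switch (PSE):
# 802.3af (PoE) is about 15.4 W, 802.3at (PoE+) about 30 W per port.
PORT_LIMIT_W = {"802.3af": 15.4, "802.3at": 30.0}

def check_poe_budget(devices, budget_w, standard="802.3at"):
    """Return (fits, total_draw_w) for a list of (name, watts) devices."""
    limit = PORT_LIMIT_W[standard]
    for name, watts in devices:
        if watts > limit:
            raise ValueError(
                f"{name} needs {watts} W, over the {standard} per-port limit")
    total = sum(w for _, w in devices)
    return total <= budget_w, total

# Hypothetical 8-port PoE+ switch with a 130 W total power budget,
# feeding six IP cameras and one wireless access point.
devices = [(f"ip-cam-{i}", 12.5) for i in range(6)] + [("wap-1", 20.0)]
fits, total = check_poe_budget(devices, budget_w=130.0)
print(fits, total)  # True 95.0
```

Checking the budget up front avoids the situation where the last camera plugged in silently fails to power on because the switch's shared budget is exhausted.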
The FS 8-port Gigabit PoE+ managed switch can offer you cost-effective and efficient PoE solution for business. As you can see from the following picture and video, if you need to connect to NVR for better surveillance network building or for IP camera consideration, such a managed PoE switch is an ideal choice.
With all the illustration above, you may have a general understanding of what a managed PoE switch is and why you should use it in certain circumstances. A managed switch with PoE not only includes all the functions that a managed switch has, but also enables you to transfer data and power at the same time over one Cat5e or Cat6 Ethernet cable.
Original Source: http://www.cables-solutions.com/why-should-you-use-managed-switch-with-poe.html
Using photos, video imagery and orthomosaics provided by drones, the National Transportation Safety Board can analyze and recreate accidents across all transportation industries.
Federal agencies plan to continue incorporating drone technology into their business operations, with government pilots currently focusing on threading the needle between automation and safety with the surveillance aircraft.
One agency looking to capitalize on improved drone technology is the National Transportation Safety Board, which has been employing drones to capture footage from accidents among maritime, aircraft and auto transportation systems since 2016.
“We're looking to expand to have more pilots in the agency and more drones, so that we can respond to more of our accident scenes,” Catherine Gagne, an unmanned aircraft system operator within the National Transportation Safety Board’s Office of Aviation Safety, said Tuesday.
Speaking during a panel discussion via the Advanced Technology Academic Research Center, Gagne discussed the agency’s core mission of investigating transit accidents and how drones have been part of site documentation for the last six years.
Using photogrammetry software to process photos, video imagery and orthomosaics, drone footage has been a reliable way for investigators to probe an unsafe or treacherous accident site. In other use cases, drones have been able to use existing data from transit methods like helicopters to replicate conditions that led to an aircraft accident and better understand what caused the crash.
Gagne said that her office continues to find new uses for drones and currently has five pilots for its seven drones.
Since the drones are technically classified as aircraft, Gagne also noted that NTSB has been working to comply with the same privacy laws that other flight operators must follow.
“Anything we gather with the drone is in our public docket anyway and available,” she said. “But you know, we're very careful about what we shoot due to privacy concerns.”
Amid security precautions, the NTSB has been testing its automated app for programming and controlling the drones, and is additionally working to ensure pilots can easily regain control of an automated drone.
Given the success of the program so far, Gagne said that the NTSB is looking for more drone pilots.
“The program has proven its worth and we're looking to expand it,” she said.
The NTSB is one of the agencies that has employed drones across various work areas. Aside from common use cases seen with the Department of Defense, government bodies, including The U.S. Postal Service, have toyed with innovative ideas on how to expedite package deliveries with drone technology.
Amid the broad interest, the General Services Administration recently restricted the procurement of drones by federal agencies aside from a group of previously approved devices, citing national security concerns.
Wavelength division multiplexing (WDM) is a commonly used technology in optical communications. It combines multiple wavelengths to transmit signals on a single fiber. To realize this process, CWDM and DWDM mux/demux are the essential part. As we all know, there are several different ports on the WDM mux and demux. This article will give a clear explanation to these ports and their applications in WDM network.
Line port, sometimes also called as common port, is the one of the must-have ports on CWDM and DWDM Mux/Demux. The outside fibers are connected to the Mux/Demux unit through this port, and they are often marked as Tx and Rx. All the WDM channels are multiplexed and demultiplexed over this port.
Like the line port, channel ports are another must-have. They transmit and receive signals on specific WDM wavelengths. A CWDM Mux/Demux supports up to 18 channels from 1270nm to 1610nm with a channel spacing of 20nm, while a DWDM Mux/Demux typically operates in the C-band and L-band (roughly 1530nm to 1625nm) with a channel spacing of 0.8nm (100GHz) or 0.4nm (50GHz). Services or circuits can be added in any order to the Mux/Demux unit.
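Both channel plans can be generated directly from their definitions: the CWDM grid runs from 1270nm to 1610nm in 20nm steps, while the DWDM 100GHz grid is defined in frequency around 193.1 THz (ITU-T G.694.1). A short sketch:

```python
# CWDM grid (ITU-T G.694.2): 18 channels, 1270-1610 nm, 20 nm spacing.
cwdm_channels = list(range(1270, 1611, 20))
print(len(cwdm_channels), cwdm_channels[0], cwdm_channels[-1])  # 18 1270 1610

# DWDM 100 GHz grid (ITU-T G.694.1) is defined in frequency:
# f = 193.1 THz + n * 0.1 THz, where n may be negative or positive.
C = 299_792_458  # speed of light in vacuum, m/s

def dwdm_wavelength_nm(n: int) -> float:
    """Center wavelength in nm for grid slot n on the 100 GHz grid."""
    f_hz = (193.1 + n * 0.1) * 1e12
    return C / f_hz * 1e9

print(round(dwdm_wavelength_nm(0), 2))  # 1552.52
```

This is why DWDM channels are usually quoted by frequency or ITU channel number rather than wavelength: the grid is uniform in frequency, so the wavelength spacing (about 0.8 nm at 100 GHz) is only approximately constant.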
The monitor port on a CWDM or DWDM Mux/Demux offers a way to test the dB level of the signal without service interruption, giving users the ability to monitor and troubleshoot networks. If the Mux/Demux is a single-fiber unit, the monitor port should also be a simplex one, and vice versa.
Expansion port on WDM Mux/Demux is used to add or expand more wavelengths or channels to the network. By using this port, network managers can increase the network capacity easily by connecting the expansion port with the line port of another Mux/Demux supporting different wavelengths. However, not every WDM Mux/Demux has an expansion port.
1310nm and 1550nm are two of the most commonly used transmission wavelengths. Many optical transceivers, especially CWDM and DWDM SFP/SFP+ transceivers, support long runs over these two wavelengths. By connecting to optical transceivers of the matching wavelength, these two ports can be used to add 1310nm or 1550nm wavelengths into existing WDM networks.
Although there are several different ports on WDM Mux/Demux, not all of them are used at the same time. Here are some examples of these functioning ports in different connections.
This example is a typical point-to-point network where two switches/routers are connected over CWDM wavelength 1511nm. The CWDM Mux/Demux used has a monitor port and 1310nm port, but the 1310nm does not put into use. In addition, an optical power meter is used to monitor the power on fibers connecting the site A and B.
In this example, two 40-channel DWDM Mux/Demux units with monitor port and 1310nm port are used to achieve 500Gbps of total services. How is this achieved? First, plug a 1310nm 40G or 100G optical transceiver into the terminal equipment, then use a patch cable to connect it to the existing DWDM network via the 1310nm port on the DWDM Mux/Demux. Since the 1310nm port is combined into a 40-channel DWDM Mux, this set-up allows the transport of up to 40x10Gbps plus 100Gbps over one fiber pair, for a total of 500Gbps. If the 1550nm port is used instead, the transceiver should operate on the 1550nm wavelength.
The connection in this example is similar to the last one. The difference is that this connection is achieved with the expansion port rather than the 1310nm port. On the left side of this case, an 8-channel CWDM Mux/Demux and a 4-channel CWDM Mux/Demux are stacked via the expansion port on the latter. The two 4-channel CWDM Mux/Demux units are combined with the line port. If needed, more Mux/Demux modules can be added to increase the wavelengths and expand network capacity.
Different ports on a CWDM or DWDM Mux/Demux have different functions, and knowing those functions is helpful in WDM network deployment. FS.COM supplies various types of CWDM and DWDM Mux/Demux for your preference, and customer services are also available. If you have any needs, welcome to visit our website www.fs.com.
Context is crucial in communicating effectively. Every language has homophones, words that sound alike, and deciphering their meaning requires you to know either the exact spelling or the precise context of the word or term discussed. For example, a common homophone pair in English is "see" and "sea."
Now, imagine the confusion introduced when we use the same vocabulary in communicating with customers across various industries. Unless those customers have spent time in those industries or have spent significant time researching specific words and terms, it’s easy for them to misunderstand the context and connotations of that vocabulary.
Let’s take a couple of examples from the finance industry; what is the difference between a “fee” and a “charge,” a “credit” and a “concession,” or “approval” and “qualification”? This isn’t a play on words. Those terms could mean entirely different things with and without proper context. I can cite dozens — if not hundreds — of such examples. However, few will help prove this point as well as the examples below from the mortgage-finance industry will.
Readers (especially homeowners) might be surprised to learn the differences between the following terms:
Origination Fee/Origination Charge
Discount pts./Origination pts.
Lender Concession/Lender Credit
Apartments/Condominiums [Credit Policy Specific]
We need a system that has contextual knowledge of the items being compared with regard to these varying terms. In the example above of “see/sea,” knowing that the user is steering a sailing vessel can help us better predict which word is being referred to. Subsequently, we can add an element of time to the comparison to make it even more meaningful. For example, have you seen a parent bargaining with their child through conversations like, “if you eat your vegetables, you will get mac-n-cheese later”? Here, the element of time is what gives the comparison its meaning.
Keeping with examples from mortgage finance, it would be useful to know how to choose two deals to compare against each other; one where the rate is higher but can close sooner versus another mortgage offer with a slightly lower rate but will close a month later. Time is money, so the answer could vary if borrowers are refinancing or purchasing, and if it is an existing home or new construction builder home. This is what makes the notion of Compare-as-a-Service (CfaaS) a valuable resource for borrowers in this regard.
What is Compare-as-a-Service (CfaaS)?
CfaaS is a cloud-based, industry-specific contextual comparison service that introduces the concept of ‘time’ into the comparison. It can help compare financial services ranging from mortgages to ancillary services required to fulfill a transaction using container-based abstraction. As a cloud service, CfaaS is highly scalable, available, and configurable with minimal-to-no cost in maintaining protocol across the industry.
Through this orchestration, essential IT functions specific to financial services — such as comparing costs of specific services — can become automated. Through CfaaS, users can centralize their data into a single repository where they can manage it, categorize it, make it available for comparison, search for it, or do whatever they wish with it to facilitate comparisons for borrowers as clients. Because the CfaaS framework works on principles of Web 3.0, it is imperative that borrowers retain ownership and privacy of their data while comparing financial services.
The most significant benefit of CfaaS is that users can apportion their processing/computing power between several virtual environments without having to worry about scalability and integration to new aggregators while allowing the CfaaS provider to do the same for their users. Additionally, users won’t have to worry about more recent releases on standards across the industry, such as moving from MISMO 3.6 over to MISMO 4.0 when it is released.
To put it simply, CfaaS offers the financial services industry reduced time-to-market, minimal development cost, and a plug-and-play framework for advancing existing services (and/or building newer services) to meet the demands of Web 3.0.
MISMO has long made strides to address electronic commerce issues in the mortgage industry. Through broad industry collaboration, MISMO has created — and continues to develop — new standards that support solutions to the industry’s toughest business issues, reduce costs, and improve transparency and communications in housing finance. Now, with CfaaS building upon the progress MISMO has made, smaller players, newer players, and trailblazers all have a real shot at leapfrogging the industry’s gorillas: big banks.
Throughout our history, human beings have shown that they became smarter by learning, but especially learning through comparing. Inventions are inventions because they do something different from what already existed, be it a process or a device. If we compare two items and both are identical, does the second item still qualify as an invention? We have to compare to find out.
Now, imagine the importance of comparison in financial services. There is a wealth of use cases for providers of financial services (e.g., appraisal services, title & closing services, inspection services, insurance services, etc.), but let’s focus on borrower use cases. Borrowers can quickly compare insurance providers (auto, life, title), closing services (closing, escrow, attorney, survey/search), inspection services (pest, roof, structural), and more. One excellent use case of CfaaS on behalf of borrowers is to view all of the above services side by side and, with the help of tools like automation and AI, find cheaper options than those lenders offer them, using machine learning to create predictive models that craft a custom mortgage for their benefit.
As complex as it might be, comparing financial instruments today is possible, but is widely done by experts using different systems and services available to them. By ensuring comparisons are contextually relevant and can adhere to multiple goals within the same CfaaS search, the industry can help take the comparison of financial products and services to the next level.
The author, Yatin Karnik is the founder and CEO of Confer, Inc. After 18 years in the Home Lending industry with Wells Fargo Home Mortgage as a Senior Vice President, Yatin founded Confer, Inc. and developed a revolutionary and disruptive application that will allow consumers to compare loan estimates. | <urn:uuid:aa13abd6-12be-4e1c-8a29-ae18a189bd85> | CC-MAIN-2022-40 | https://financialtechnologytoday.com/cfaas-compare-as-a-service/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00700.warc.gz | en | 0.940728 | 1,533 | 3.4375 | 3 |
Dutch computer science researchers have warned that viruses embedded in radio frequency identification (RFID) tags used in supply chains to track and trace goods are around the corner.
Although no viruses targeting RFID technology have been released live yet, according to the researchers at Vrije Universiteit Amsterdam in the Netherlands, the tags have several characteristics that could be engineered to exploit vulnerabilities in middleware and back-end databases.
They described RFID malware as a “Pandora's box that’s gathering dust in the corner of 'smart' warehouses and homes." The attacks can come in the form of an SQL injection or a buffer overflow attack, even though the tags themselves may only store a small bit of information, the paper said. For demonstration purposes, the researchers even created a proof-of-concept, self-replicating RFID virus.
One of the university students needed only four hours to write a virus small enough to fit on an RFID tag. The demonstration then used homegrown middleware connected to back-end databases from suppliers such as Oracle and Microsoft, along with open source databases such as MySQL and Postgres.
RFID offers significant potential for tracking and tracing in a number of sectors, especially in preventing counterfeiting in the pharmaceutical industry. It was probably only a matter of time before someone seriously queried its security. | <urn:uuid:fa2790fe-8c00-4bb5-916e-2c20c39b76b5> | CC-MAIN-2022-40 | https://www.computerweekly.com/news/2240062755/RFID-faces-virus-security-risk | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00700.warc.gz | en | 0.936066 | 271 | 2.734375 | 3 |
Virtual Private Networks (VPNs) and Internet Protocol Virtual Private Networks (IP VPNs) sound similar but have some key differences when it comes to performance and security. Understanding the difference and how it affects your service will ensure your business gets the most out of its connectivity.
First of all, what are a VPN and an IP VPN?
VPN is a networking technology that allows users to connect to their main network remotely via the public internet. A VPN allows employees to work from home and connect to the company’s intranet, giving them access to the shared files of their office computers.
An IP VPN works in much the same way, establishing seamless connectivity to the main network across an ISP. However, it also uses multiprotocol label switching (MPLS) technology to prioritize internet traffic and avoid public gateways, increasing security.
What’s the Difference?
Both are similar, but the most important difference is the layer of the OSI Model on which they’re classed.
Typical VPNs fall under layers 3 and 4, meaning they establish a connection through the public internet. They also frequently use a public gateway to connect. By using a public gateway, VPNs are exposed to DDoS (Distributed Denial of Service) attacks that decrease speeds and consume valuable bandwidth.
An IP VPN is considered layer 2, meaning it avoids the public internet by travelling on a private connection to each remote site, so your vital company data remains secure. As a layer 2 service, IP VPN uses MPLS capabilities that prioritize your company’s internet traffic. This guarantees that mission-critical applications get the bandwidth they need while less important traffic waits in line.
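As a rough illustration of that “waits in line” behaviour (this is a toy priority queue, not actual MPLS machinery, and the traffic classes are invented for the example):

```python
import heapq

# Lower priority number = more important; important packets dequeue first.
PRIORITY = {"voip": 0, "video": 1, "email": 2, "backup": 3}

queue = []
for seq, (kind, payload) in enumerate([
    ("backup", "nightly-archive"),
    ("voip",   "call-frame"),
    ("email",  "message"),
    ("video",  "conference-frame"),
]):
    # seq breaks ties so packets of equal priority keep arrival order
    heapq.heappush(queue, (PRIORITY[kind], seq, kind, payload))

order = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(order)  # ['voip', 'video', 'email', 'backup']
```

Even though the backup packet arrived first, the voice and video frames are sent ahead of it, which is the effect MPLS traffic prioritization aims for.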
So which option is best for your company?
VPNs often work best for small businesses or sole proprietorships, where employees do not often need remote access. If security is not a major concern, a standard VPN will be fine. IP VPNs are ideal for medium businesses to large enterprises, where multiple employees and branches need the ability to connect to the company intranet remotely and securely while handling sensitive corporate information. IP VPN is also useful for internet traffic that needs prioritization to better serve VoIP, video conferencing, and cloud services. As more employees work from home, an IP VPN will make more sense for businesses of all kinds.
That said, it’s hard to apply a one-size-fits-all solution to your operations. At iTel, our team of experts can work with you to build out the ideal solution for all your needs. Let’s Connect. | <urn:uuid:324dd9f7-6f51-48df-a256-dbb60b705ecc> | CC-MAIN-2022-40 | https://itel.com/difference-vpn-and-ip-vpn/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00700.warc.gz | en | 0.927641 | 529 | 2.578125 | 3 |
Password protection is one of those things that is so important, and yet people need to be reminded of it constantly. Year after year, people create easy-to-remember passwords that can be cracked within seconds. Worse still, they use these passwords to protect their most sensitive information.
At this point, changing your password to something harder is good, but it would be better to go a step further and create a password that’s totally unhackable – or at the very least a pain for hackers to crack. To reach that point, consider the following tips.
Avoid The Common Passwords
I’m talking about the passwords that are “password”, “12345”, “qwerty” and others of that nature. But I’m also talking about the other simple passwords like:
- Your birthday.
- Your pet or child’s name.
- Your street address.
- Your phone number.
Using passwords that are personally identifiable isn’t clever, as these can be easily cracked by hackers running a Google search on you or looking at your social media posts. Failing that, the information could be phished with ease by a hacker posing as a neighbour, an acquaintance, a work colleague, or an official.
Avoid Spelled Words
Hackers also have access to tools that perform what are called “brute force dictionary attacks”. As the name suggests, these attacks crack passwords by trying words you’d find in the dictionary. Through these attacks, hackers can run through hundreds of thousands of passwords in a matter of minutes.
In short, if your password is something that’s in the dictionary, then your password is not safe.
This also applies if you add numbers to that password or replace letters with numbers (like replacing the letter s with a 5 or e with a 3). These tools have thought of everything and will try those iterations too.
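To see why those substitutions don’t help, here is a sketch of how a cracking tool enumerates them automatically (the word list and substitution table are tiny stand-ins; real tools use wordlists with hundreds of thousands of entries):

```python
from itertools import product

# Common single-character substitutions a cracking tool will try automatically.
SUBS = {"a": ["a", "@", "4"], "e": ["e", "3"], "s": ["s", "5", "$"], "o": ["o", "0"]}

def variants(word):
    """Yield every substitution variant of a dictionary word."""
    choices = [SUBS.get(ch, [ch]) for ch in word.lower()]
    for combo in product(*choices):
        yield "".join(combo)

wordlist = ["sunshine", "password"]  # a tiny stand-in "dictionary"
candidates = {v for w in wordlist for v in variants(w)}

print("pa55w0rd" in candidates)  # True: the "clever" substitution is still guessed
```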
Avoid Letter-By-Letter Passwords
A stronger password is using a combination of letters and numbers. With that in mind, people gravitate towards a sequence of letters that are easy for them to memorize. This results in people’s passwords being things like “aab”, “aac”, “aa1” and so on.
The problem with this strategy is that the same attack mentioned above makes these attempts too. So stringing together a 12-character letter-by-letter password isn’t going to cut it either.
Do Use Different Passwords
In order to make passwords harder to crack, it’s important to use many different passwords to keep hackers guessing. Of course, it’s going to be tough to memorize every single password you come up with, which is why getting a password storage tool is effective.

Password storage tools store passwords for you and keep them locked in a vault that can only be accessed by a Master Password that’s encrypted.

Hackers will have a tough time breaking into that Master Password, and a nice added layer is ensuring you’re using different passwords for everything you log into. This ensures that if a hacker ever somehow cracks a password for one site, they don’t have access to everything else.
Do Use Advanced Leetspeak
Leetspeak is a form of writing in which letters in a word are replaced with numbers or symbols of similar appearance to create identical or similar-looking text. I mentioned this briefly with people replacing e’s with 3’s or s’s with 5’s.
While there are other examples of this, I would encourage you to try something I call advanced leetspeak.
The idea is to incorporate 2 characters to represent a single letter. It can also entail the use of uncommon symbols to replace letters. Some examples of this are:
- F can be I= or even !=
- H can be || or |-|
- B can be lo
The possibilities are endless with what you can do with this.
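A small sketch of the transformation, using the mappings above plus the earlier e-to-3 and s-to-5 substitutions (which characters you map, and to what, is entirely your choice):

```python
# One illustrative "advanced leetspeak" mapping: some letters expand to
# TWO characters or uncommon symbols, mirroring the examples in the text.
ADVANCED = {"f": "!=", "h": "|-|", "b": "lo", "e": "3", "s": "5"}

def advanced_leet(text):
    """Replace each mapped letter; pass everything else through unchanged."""
    return "".join(ADVANCED.get(ch.lower(), ch) for ch in text)

print(advanced_leet("flash"))  # !=la5|-|
```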
Make Unique Passwords Through Phrases
Paired with leetspeak, you can use that technique to create a sentence that is memorable to you. For example, if you really like flowers, a password sentence can be “I plant lots of flowers on…” plus the name of the platform you’re logging into.
For example, if you post flower pics on Instagram the sentence would be “I plant lots of flowers on Instagram”
Run that sentence through your leetspeak substitutions and it looks like a complex jumbled mess, right? Well, you could simplify it a bit here and there, but you get the idea of how a phrase can become a jumble of letters, numbers, and symbols.
Or Look To Password Managers
Of course, unique passwords using leetspeak are one method, but not everyone is going to remember these or be bothered to create them. Again, what makes our lives so much easier is using password managers.

Not only do password managers require you to memorize just one password at all times, these tools often generate their own passwords for you.
These passwords are often very strong and are an even more jumbled mess than the leetspeak password I mentioned above.
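For illustration, here is roughly what such a generator does under the hood, sketched with Python’s standard `secrets` module (the length and character set are arbitrary choices, not any particular manager’s policy):

```python
import secrets
import string

def generate_password(length=16):
    """Generate a random password the way a password manager might."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = generate_password()
print(pw)       # a different jumble of characters every run
print(len(pw))  # 16
```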
Overall, there are all kinds of methods for generating better passwords, but if you’re someone who wants to set it and forget it, having a password generator do the heavy lifting is better. Just make sure that the Master Password you use isn’t easy to crack.
Security researchers have revealed a new attack to steal passwords, encryption keys and other sensitive information stored on most modern computers, even those with full disk encryption.
The attack is a new variation of a traditional Cold Boot Attack, which is around since 2008 and lets attackers steal information that briefly remains in the memory (RAM) after the computer is shut down.
However, to make the cold boot attacks less effective, most modern computers come bundled with a safeguard, created by the Trusted Computing Group (TCG), that overwrites the contents of the RAM when the power on the device is restored, preventing the data from being read.
Now, researchers from the Finnish cyber-security firm F-Secure have figured out a new way to disable this overwrite security measure by physically manipulating the computer’s firmware, potentially allowing attackers to recover sensitive data stored on the computer after a cold reboot in a matter of a few minutes.
Like the traditional cold boot attack, the new attack also requires physical access to the target device, as well as the right tools to recover data remaining in the computer’s memory.
According to researcher Olle Segerdahl and his colleague Pasi Saarinen, their new attack technique is believed to be effective against nearly all modern computers, including Apple Macs, and can’t be patched easily or quickly.
Read more: The Hacker News | <urn:uuid:efd62d61-3b27-4f5d-91b0-186b5fb6feaa> | CC-MAIN-2022-40 | https://www.globaldots.com/resources/blog/new-cold-boot-attack-unlocks-disk-encryption-on-nearly-all-modern-pcs/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00100.warc.gz | en | 0.945515 | 269 | 2.859375 | 3 |
The Department of Defense (DOD) defines cyberspace as a global domain within the information environment consisting of the interdependent network of information technology infrastructures and resident data, including the internet, telecommunications networks, computer systems, and embedded processors and controllers. The DOD Information Network (DODIN) is a global infrastructure carrying DOD, national security, and related intelligence community information and intelligence.
Cyberspace operations are composed of the military, intelligence, and ordinary business operations of the DOD in and through cyberspace. Military cyberspace operations use cyberspace capabilities to create effects that support operations across the physical domains and cyberspace.
Cyberspace operations differ from information operations (IO), which are specifically concerned with the use of information-related capabilities during military operations to affect the decision making of adversaries while protecting our own. IO may use cyberspace as a medium, but it may also employ capabilities from the physical domains. | <urn:uuid:f3bc1992-fc8c-4b28-b7b5-91d6f4804127> | CC-MAIN-2022-40 | https://cybermaterial.com/defense-primer-cyberspace-operations/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00100.warc.gz | en | 0.905735 | 194 | 3.078125 | 3 |
There’s no doubt that the coronavirus pandemic has already transformed the world and the way we use technology. In particular, big data and predictive analytics models hold the power to generate information that will be leveraged by everyone — from world leaders, the scientific community and industries such as manufacturing, to the general public and the healthcare sector.
The impact of COVID-19 has prompted many to examine how we can use big data for social good, to help small businesses and to bolster the economy as a whole. In fact, President Trump recently left many scratching their heads when he stated publicly that he intended to loosen social distancing measures in an attempt to re-open the American economy, despite contrary advice from medical professionals and advisors. This has prompted many economists and White House advisors to begin looking for new strategies to “open” the nation for business without risking a medical catastrophe. You can be certain that experts are now frantically poring over big data, analytics and economic simulations as they attempt to come up with a game plan.
How Different Industries are Leveraging Big Data for Economic Growth
Businesses are having to make critical operational decisions in a time of mass uncertainty. In order to do so, they are turning to data-driven insights. Thanks to big data, they can gauge the spread of the virus, coordinate an appropriate response to the pandemic and drive economic growth (or simply limit economic contraction). Let’s look at how a number of different industries may be leveraging big data analytics tools in the coming weeks, in response to the coronavirus pandemic.
- Manufacturers may use predictive analytics tools to guide their production levels and modify their distribution strategies to meet the varying demand in different regions.
- Predictive healthcare analytics will be used to guide the distribution of medical equipment, medications, personnel and even a coronavirus vaccine once it becomes available. Big data can also be used to run modeling to evaluate the effectiveness of different drugs and treatments.
- Predictive modeling will be used to generate updated coronavirus spread maps, so the public can see which regions are expected to see a rise in COVID-19 infection cases. These predictive mapping technologies may also be leveraged to pre-position healthcare workers, supplies, field hospitals and other resources so they’re in the locations with the greatest need.
- Epidemiologists will almost certainly use predictive analytics modeling to develop a COVID-19 vaccination strategy that will maximize the positive impact and speed the establishment of “herd immunity.”
- Food and grocery delivery companies may use predictive analytics to identify which regions will require an increase in the number of shoppers and delivery drivers.
- Stores and e-commerce companies will likely use PA technology to predict demand for various products. This would allow shops and online retailers to order and pre-position stock so as to minimize shortages and instances where shoppers are met with bare shelves. Predictive modeling can also help with the development of new policies, such as imposing purchase limits on in-demand products.
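For instance, the demand prediction mentioned in the list above could start from something as simple as a trailing moving average (a hypothetical sketch; the sales figures are made up):

```python
# Naive demand forecast a retailer might start from: a trailing moving
# average of weekly unit sales per region.

def moving_average_forecast(history, window=3):
    """Forecast next period's demand as the mean of the last `window` periods."""
    recent = history[-window:]
    return sum(recent) / len(recent)

weekly_sales = [120, 150, 400, 380, 390]      # made-up units sold per week
print(moving_average_forecast(weekly_sales))  # (400 + 380 + 390) / 3 = 390.0
```

Real predictive analytics models are far richer than this, but the principle is the same: use recent data to pre-position stock before demand arrives.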
These are just some of the ways in which industries across the globe will leverage big data analytics tools to handle the COVID-19 pandemic and its fallout. It’s fair to presume that we will see a tremendous move toward big data for economic growth and recovery, since this process must be guided by facts and data if we’re going to see rapid improvement.
While questions still swirl around the spread and ultimate impact of this coronavirus pandemic, one thing is clear: citizens are changing their behaviors and companies are altering their business practices and business strategies in response to COVID-19. This is a prime example of where we can leverage big data for social good and economic recovery, using data to drive decision-making in an uncertain environment.
As a result of this pandemic, some businesses may require data governance tools such as a data lake, new custom software or mobile app development services. Others may require help with cloud integration or establishing a new, secure enterprise software platform to facilitate remote work. This is where the team at 7T can assist, as our team continues to work full-time in a remote capacity amid the coronavirus crisis.
7T is a software development company, dealing in a wide range of custom development projects. Our team of top Dallas software developers specialize in a range of different technologies, including ERP and CRM development, one-of-a-kind mobile app and custom software development projects, tools for data lake creation, data governance, data visualization, cloud integrations and system integrations. In fact, our work speaks volumes! | <urn:uuid:05411c98-27a0-48bb-a47c-24d2ba7f2423> | CC-MAIN-2022-40 | https://7t.co/blog/big-data-for-economic-growth-data-analytics-tools-in-industries-impacted-by-coronavirus-pandemic/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00100.warc.gz | en | 0.926174 | 998 | 2.734375 | 3 |
Hybrid? When we think hybrid, we think hybrid cars. What does hybrid mean? Mixing old and new technology… Or having the choice of how you want to drive your car? If we translate this to hybrid learning, we see similarities.
But let’s start at the beginning. Why are we even talking about hybrid learning? Hybrid learning is positioned as the next default learning style. It can be seen as Blended Learning 2.0. According to a survey, 49% of the respondents define blended learning as a training program that consists of a mix of face-to-face and eLearning (source: Jane Hart C4LPT Blog). The survey shows that people have different views on blended learning. At ITpreneurs, our blended programs align with the definition agreed upon by the majority of the respondents. We have designed our blended courses such that the theoretical aspects are learned through self-study (eLearning) and practical aspects in a classroom (or virtual classroom) environment. This format enables learners to come together to bring the theory to practice, do case studies, have discussions and prepare for the exam.
Example: Agenda TOGAF Blended Learning
| 10h of Self-Study | Classroom Day 1 | Classroom Day 2 |
|---|---|---|
So is Blended and Hybrid One of the Same?
If so, why would we need a new term? Well, most training these days have a digital component. This can be an online mock exam, using the web to solve in-class exercises or even downloading and reading a complimentary white paper. Even using eBooks in class can be considered as blended learning. So all modern-day learning can technically be considered blended learning. But what if there are learning programs, like the TOGAF® example above, designed for combining learning strategies? How do we differentiate those types of courses from courses with a digital component? That’s where the term hybrid comes to play.
With hybrid learning, we are entering a new era. Learners have access to devices such as desktops, laptops, smartphones, tablets and phablets. They spend a lot of time on these devices, and we want to bring learning closer to them by offering content on them. That requires a different set-up for training courses. Providing content that is relevant just in time is the new key to success. Learning programs should therefore support micro-learning (content consumed in 5 to 20 minutes) and, if required, social learning (interacting with peers and experts on the subject). Hence the rise of this new style of online learning programs: they are not slide-based and audio-guided, but full of short expert videos, animations, web readings, articles, and white papers.
While blended courses combine classroom learning with a digital component, we expect from hybrid courses that the didactics are adjusted based on the new style of learning (refer to my earlier blog posts Video-Based Learning is Hot and Mobile Learning: What You Need to Know). Just like cars, we mix old “technologies” (classroom and learning management systems) with new technologies (mobile and video) and the learner can choose what he/she wants to learn, where and when.
When you Google ‘Hybrid Learning’, we see many authors using blended and hybrid interchangeably. In the same sentence, they mention “flipping the classroom”. The term hybrid is gaining popularity and is replacing blended in many cases. Sometimes it is correct, as the learning didactics deserve the new name, in other cases, it is more or less the same.
Need more information? Please contact ITpreneurs.
About the author
Ellen studies the art of learning. Being a cognitive psychologist, she has the background and knowledge of the science of learning. Having the theoretical knowledge on how the brain works, how we store information and how we retrieve information helps her design and conceptualize innovative learning products. With her creative mind and feel for technology, she is able to turn concepts into products.
As an independent consultant, she advises and supports organizations in developing and implementing learning experiences.
What does it take to connect tens of billions of devices?
Among other things, transmit power control is a lynchpin feature of IoT-scale connectivity.
Blogs don’t start much nerdier than that kind of claim. But, good reader, fear not, for you shall not be disappointed for we shall explain how it is that not every network can connect billions of IoT devices.
First, some definitions and explanation are in order. Transmit power control is the ability of an endpoint to adjust how “loudly” it broadcasts its signal, so that only the closest access point (AP) or two can hear it. Perhaps you didn’t realize that when an endpoint broadcasts its signal, it isn’t just a single AP that can hear it; other APs hear it too. This has a few very important implications. Each AP can only handle so much capacity; that is, it can only take in so many signals at once. So the more signals from distant endpoints a given AP hears, the less capacity it has to serve the endpoints it really should be concerned with. Transmit power control drastically reduces this wasted capacity.
Transmit power control solves all of this. As more endpoints are added and APs are added to serve them, the endpoints with transmit power control adjust their “volume” so that only those APs closest to them can hear them. Suddenly, adding new APs actually adds more capacity. YES. This kind of network can connect billions of IoT devices. | <urn:uuid:fa8b519f-79cf-470b-aae2-307924f2baff> | CC-MAIN-2022-40 | https://www.ingenu.com/2015/11/not-every-network-can-connect-billions-of-iot-devices/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00100.warc.gz | en | 0.960141 | 319 | 2.890625 | 3 |
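To make the idea concrete, here is a toy simulation (all numbers are illustrative, not parameters of any real radio system). The endpoint raises its transmit power only until the nearest AP can decode it; at full power, every AP in range hears it and wastes capacity on it.

```python
import math

SENSITIVITY_DBM = -100  # weakest signal an AP can still decode (illustrative)

def received_power(tx_dbm, distance_m, exponent=3.0):
    """Simplified log-distance path loss (reference loss of 40 dB at 1 m)."""
    return tx_dbm - (40 + 10 * exponent * math.log10(distance_m))

def hearers(tx_dbm, distances_m):
    """Which APs receive the signal above their sensitivity threshold."""
    return [d for d in distances_m if received_power(tx_dbm, d) >= SENSITIVITY_DBM]

def min_tx_power(distances_m):
    """Lowest whole-dBm power at which the NEAREST AP still hears the endpoint."""
    nearest = min(distances_m)
    for tx in range(0, 31):  # candidate powers: 0..30 dBm
        if received_power(tx, nearest) >= SENSITIVITY_DBM:
            return tx
    return 30

ap_distances = [120, 450, 900]  # metres to three access points

print(hearers(30, ap_distances))                          # [120, 450, 900]
print(hearers(min_tx_power(ap_distances), ap_distances))  # [120]
```

At full power all three APs burn capacity on the same endpoint; with power control, only the nearest one does, so adding APs genuinely adds capacity.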
Engage Patients with Educational Messaging
Educating patients about their upcoming medical procedures is an important part of building trust and increasing retention. Using digital technologies to educate patients is just one way to reinforce messaging and improve patient compliance. This article provides some examples of how to digitally engage with patients both before and after undergoing a medical procedure.
Pre-Procedure Patient Education
Let’s say a patient needs to have surgery to correct carpal tunnel. Surgery can be overwhelming and anxiety-inducing, even for a simple outpatient procedure. Even though the patient may have an appointment to discuss the surgery with a doctor or nurse before the procedure, it’s easy to forget the information or have attention lapses because of anxiety.
Instead of relying on the healthcare provider to thoroughly explain everything and the patient writing down or remembering all the instructions, email and text reminders can be used to reinforce the doctor’s instructions. Some healthcare providers already hand out paper instructions, but they are often generalized and are easy to lose in the shuffle.
Sending the detailed instructions in an email makes them easier for patients to review and reference. The information can also be sent exactly when it is needed. Sending notifications a few days before the surgery can help increase compliance and avoid the costs of rescheduling. Some examples of appropriate notifications include when to start fasting, when to arrive at the surgery facility, and other pre-surgery instructions.
Post-Procedure Patient Education
In an outpatient setting, engaging patients after a procedure is especially important to make sure they heal properly and do not need additional treatments. A patient recovering from anesthesia is not particularly prepared to listen to discharge instructions. Instead of relying on memories and handouts to convey important information, secure email and texting can be used to help patients manage their care at home.
Some topics to cover include:
- Medication instructions
- Care management or techniques to meet clinical needs
- Individual patient needs or circumstances
- Potential symptoms or side effects patients can expect
After a carpal tunnel procedure, emails can remind patients when to take medication, how to shower with their bandages and keep incisions clean, and some light stretching they can do to increase mobility.
By educating patients on what they can expect and empowering them to manage their own health care, providers can reduce patient readmissions and improve health outcomes.
Patient Education Methods
Many patients prefer to receive digital communications. However, we must mention the HIPAA requirements for sending sensitive information electronically. The email examples above constitute ePHI and must comply with HIPAA. Briefly, this means using encryption, archiving messages, and ensuring data integrity through access controls. You can read more about the HIPAA requirements for email and texting in other blog articles on our website.
Once the proper protections are in place, it’s easy to set up automated messages for common procedures. As another example, let’s take an organization that provides colonoscopy procedures. The fasting and medication regimen is well-defined for this procedure. Staff can create a series of automated reminders based on the date and time of the patient’s procedure to remind them when to start fasting, take the medication, and stop drinking liquids.
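As a sketch of what that automation might look like, the snippet below builds a reminder schedule from a procedure date. The message texts and timing offsets are purely illustrative, not clinical guidance, and any real system sending these messages would also need the HIPAA protections described above.

```python
from datetime import datetime, timedelta

# Hypothetical reminder schedule, expressed as offsets from the procedure
# time. Messages and offsets are illustrative only; real instructions and
# timing must come from the clinical team.
REMINDER_SCHEDULE = [
    (timedelta(days=3), "Reminder: begin your pre-procedure diet today."),
    (timedelta(days=1), "Start the prep medication this evening."),
    (timedelta(hours=4), "Stop drinking all liquids now."),
]

def build_reminders(procedure_time):
    """Return (send_time, message) pairs, earliest first."""
    return sorted(
        (procedure_time - offset, message)
        for offset, message in REMINDER_SCHEDULE
    )

procedure = datetime(2024, 5, 20, 9, 0)
for send_at, msg in build_reminders(procedure):
    print(send_at.strftime("%Y-%m-%d %H:%M"), "->", msg)
```

Because the schedule is keyed off the procedure time, staff only need to enter one date and the whole message series falls into place.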
In fact, a study conducted by the Perelman School of Medicine at the University of Pennsylvania found that a series of text messages was about as effective as having nurses call patients with the educational material before their colonoscopies. Reducing the amount of time nurses spend making these phone calls can streamline operations and reduce costs.
Although digital communication is popular, it should not replace those personal interactions with healthcare providers. These engagement and education tips are most successful when they are used to reinforce messaging from trusted health care providers. If you would like to learn more about how LuxSci can help, please reach out to us. | <urn:uuid:f744be8e-49f7-4153-8c8d-3419c3e42d50> | CC-MAIN-2022-40 | https://luxsci.com/blog/patient-engagement-education.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00300.warc.gz | en | 0.931151 | 787 | 2.78125 | 3 |
According to cybersecurity leader McAfee, there's a big gap in website security at the county level. Many websites that provide pertinent election information, such as details on candidates and voting locations, were found to be lacking basic security features that would assure visitors they are in fact visiting the official website and receiving accurate information.
McAfee’s new report found that nearly 45% of county election board websites lack basic HTTPS encryption, and 80.2% lack a .gov domain name.
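The two properties McAfee audited are simple to express in code. The sketch below (with invented hostnames) checks whether a URL uses a .gov domain and whether it is served over HTTPS; a real survey would also need to connect to each site and validate its TLS certificate rather than just inspecting the URL string.

```python
from urllib.parse import urlparse

# Two basic checks on URL strings: .gov domain and HTTPS scheme.
# Hostnames below are invented for illustration.

def uses_gov_domain(url):
    host = urlparse(url).hostname or ""
    return host.endswith(".gov")

def uses_https(url):
    return urlparse(url).scheme == "https"

sites = [
    "https://vote.example-county.gov",
    "http://example-county-elections.com",
]
for site in sites:
    print(f"{site}  .gov={uses_gov_domain(site)}  https={uses_https(site)}")
```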
Hackers could take advantage of this lapse in security and attempt to launch disinformation campaigns to cause confusion on election day.
Voters already need to be careful about which websites they trust during this time, and these security lapses on real government-associated websites aren’t giving security experts any peace of mind.
Steve Grobman, SVP and CTO at McAfee, weighed in with additional insights on the importance of these basic security features via the McAfee blog:
"Using a .GOV web domain reinforces the legitimacy of the site. Government entities that purchase .GOV web domains have submitted evidence to the U.S. government that they truly are the legitimate local, county, or state governments they claimed to be. Websites using .COM, .NET, .ORG, and .US domain names can be purchased without such validation, meaning that there is no governing authority preventing malicious parties from using these names to set up and promote any number of fraudulent web domains mimicking legitimate county government domains.
An adversary could use fake election websites for disinformation and voter suppression by targeting specific citizens in swing states with misleading information on candidates or inaccurate information on the voting process such as poll location and times. In this way, a malicious actor could impact election results without ever physically or digitally interacting with voting machines or systems.
The HTTPS encryption measure assures citizens that any voter registration information shared with the site is encrypted, providing greater confidence in the entity with which they are sharing that information. Websites lacking the combination of .GOV and HTTPS cannot provide 100% assurance that voters seeking election information are visiting legitimate county and county election websites. This leaves an opening for malicious actors to steal information or set up disinformation schemes.
Malicious actors can pass off fake election websites and mislead large numbers of voters before detection by government organizations. A campaign close to election day could confuse voters and prevent votes from being cast, resulting in missing votes or overall loss of confidence in the democratic system."
Chris Howell, CTO, Wickr -- an expert on encryption -- shared his insights about why this is such an alarming discovery by McAfee:
“Verifiable domains and HTTPS encryption are table stakes features for all serious websites built in the past decade. Beyond the threat of exposing voter registration/PII or the potential of disinformation around polling hours and locations, sites that lack these protections today also pose a significant threat to visitors related to the proliferation of malware. With the right tools, attackers can drop malicious payloads into web traffic to siphon sensitive data from visitor computers or execute damaging ransomware or related attacks without having to compromise the website hosting infrastructure.
Election sites are attractive targets for more organized threat actors and nation-states as well, which increases the likelihood that weak sites will be exploited.
Perhaps even more concerning is if so many election sites haven’t done the most basic of things necessary to secure themselves, what else aren’t they doing? Verifiable domains and HTTPS are only the first steps; they don’t keep your servers patched or your application software free of security vulnerabilities, which is how most sites are compromised today. Our banking and e-commerce services know this. Our election system requires no less security; perhaps more.
The scary thing is if we add it all up - a low average security score across the board, a target that attracts the most highly motivated and capable attackers, and the likelihood that even a single attacker “victory” would produce broad chaos amongst the electorate - it’s a recipe for disaster. Those with the power to act should do so quickly and effectively.” | <urn:uuid:5b531cd5-68f8-4c29-af27-907813c74f77> | CC-MAIN-2022-40 | https://www.enterprisesecuritytech.com/post/report-county-election-websites-lack-basic-security-pose-election-disinformation-risk | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00300.warc.gz | en | 0.932061 | 838 | 2.5625 | 3 |
Passwords are widely recognized as one of the weakest links in an organization’s security. In fact, 81 percent of hacking-related data breaches last year were the result of weak, default, or stolen passwords, according to Verizon’s 2017 Data Breach Investigations report.
In spite of this statistic, many companies are still using outdated and just plain bad password policies that actually leave them more vulnerable to attacks. In this blog post, we will explore how these policies put you at risk and what steps you can take to better protect yourself.
Common Password Misconceptions, Revealed
More Complicated Passwords Aren’t Always More Secure
One prevalent misconception is that passwords with mixed cases, numbers, and special characters are always going to be more secure than passwords without those features.
While it’s true that all-lowercase passwords are less secure—a large botnet can crack them in as little as 1.8 seconds—not all complex passwords are created equal. For example, substituting numbers for letters within words (such as 4Ex@mp1e) doesn’t make your password stronger. Hackers have learned to include such things in their password tables.
Moreover, if a password is mixed-case and contains letters and numbers, it still won’t do you much good if that combination isn’t random. Hackers are well aware of common keyboard patterns. In fact, the password “zaq1zaq1” made SplashData’s list of worst passwords in 2016 because its letters and numbers follow an easily recognizable pattern.
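Some back-of-envelope arithmetic shows why randomness and length matter more than clever substitutions. Assuming a hypothetical attacker making 100 billion guesses per second (a made-up round number, in line with the botnet figure above), the worst-case time to try every password of a given shape looks like this:

```python
# Back-of-envelope search-space arithmetic. The guess rate is a made-up
# round number; real attack speeds vary enormously.
GUESSES_PER_SECOND = 100e9

def seconds_to_exhaust(alphabet_size, length):
    """Worst-case time to try every password of the given shape."""
    return alphabet_size ** length / GUESSES_PER_SECOND

print(f"8 lowercase letters:       {seconds_to_exhaust(26, 8):>12,.0f} s")
print(f"8 chars, all 95 printable: {seconds_to_exhaust(95, 8):>12,.0f} s")
print(f"12 lowercase letters:      {seconds_to_exhaust(26, 12):>12,.0f} s")
```

Note that twelve random lowercase letters take longer to exhaust than eight characters drawn from the full printable set: length beats complexity, but only when the characters are actually random.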
Frequent Password Changes Weaken Security
Another popular tactic is to mandate frequent password changes. Unfortunately, this often leads to end-users creating workarounds that cripple security, such as choosing weak passwords, reusing passwords, or transforming them in ways that are highly predictable to hackers. For instance, Cowboysfan#1 becomes cOwboy$sfan#21, then coWboysf@n#E1, and so on. This makes it easy for hackers to utilize social engineering techniques to learn users’ passwords, then hack into the system.
A study carried out by UNC supports this theory: Accounts were subject to mandatory password changes every three months, and it was found that 41 percent of the passwords could be broken offline in a matter of seconds by referencing previous passwords for the same accounts.
Password Policies That are Too Strict or Complex Actually Decrease Usability AND Security
You might think that implementing very strict password policies or demanding complex passwords will increase security, but that simply isn’t the case—such policies actually decrease security and usability.
Even the federal government has come to this conclusion: The latest NIST guidelines represent a drastic departure from previous password standards. They no longer require long, complex passwords, routine password changes, or password hints.
While lengthy passwords are more difficult for hackers to crack, they’re also more challenging for users to remember—after all, the average user accesses 40 accounts. As a result, users are more likely to take shortcuts, such as writing down their passwords or storing them in unencrypted spreadsheets.
In addition, enforcing strict and complex password policies forces employees to spend longer accessing the systems they need to do their jobs or to turn more frequently to the IT department for help, which wastes everyone’s time.
Even the Best Password Policies Aren’t Enough on Their Own
If you absolutely insist on using passwords, make sure you’ve put good policies into place that don’t fall into any of the misconceptions outlined above. But be aware that even the best passwords will not protect your organization against today’s threats. In fact, the most successful attacks simply trick users into giving up their passwords. In 2016 alone, over 3 billion credentials were stolen.
Most attackers these days don’t bother with brute-force methods, and why should they? There are other equally effective tactics, such as keylogging, phishing, ransomware, and other social engineering techniques.
Phishing attacks are particularly effective—Verizon’s 2017 DBIR shows that 43 percent of data breaches stem from this type of attack. You need to protect your organization from those threats, as well.
Enhance Your Security with Multi-Factor Authentication
Multi-factor authentication (MFA) keeps your company safe from unauthorized access due to stolen credentials. It adds a second or third verification method in addition to passwords, rendering attacks harmless. Even better, MFA can replace passwords altogether.
There are a number of benefits to implementing MFA. Today’s MFA options are flexible: You can tailor authentication methods based on risk level using risk-based authentication. This enables your organization to increase security without negatively affecting usability.
Here are just a few of the benefits of MFA:
- When used as a password replacement, MFA does away with countless password resets, saving time and effort for your IT department.
- You can leverage existing security investments, such as ID cards, so MFA becomes a smart choice for your firm.
- MFA helps you comply with regulations, such as SOX and PCI-DSS.
Bad password policies and practices weaken your organization’s security and leave you vulnerable to hackers. The reality is that if you’re going to continue using passwords to combat today’s threats, you need to have good password policies in combination with flexible, multi-factor authentication. | <urn:uuid:89522013-f52f-4ef7-8745-5662776e207d> | CC-MAIN-2022-40 | https://blog.identityautomation.com/identity-management-best-practices-addressing-password-policy-misconceptions | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00300.warc.gz | en | 0.93518 | 1,133 | 2.859375 | 3 |
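To make the "second verification method" concrete: the most common second factor is a time-based one-time password (TOTP), the six-digit code generated by authenticator apps. A minimal sketch of the standard algorithm (RFC 6238), using a throwaway example secret rather than a real credential, looks like this:

```python
import base64
import hashlib
import hmac
import struct
import time

# Minimal TOTP sketch (RFC 6238): HMAC the current 30-second time step
# with a shared secret, then dynamically truncate to a short code.

def totp(secret_b32, at=None, step=30, digits=6):
    key = base64.b32decode(secret_b32)
    counter = int((at if at is not None else time.time()) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = "JBSWY3DPEHPK3PXP"  # throwaway example secret, not a real one
print("current code:", totp(secret))
```

Because the code changes every 30 seconds and is derived from a secret that never travels over the network, a phished password alone is no longer enough to log in.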
Research Team’s Breakthrough for Room Temperature Quantum Optomechanics
(InterestingEngineering) Researchers from the Delft University of Technology in the Netherlands and the University of Vienna in Austria have developed a new way of both controlling and measuring nanoparticles which are trapped in a laser beam, achieving the results in conditions of high sensitivity. The team concluded that it offered “a promising route for room temperature quantum optomechanics”.
According to the researchers, the current study is only the beginning; they plan to continue to refine the results over time. “This could lead to new ways of tailoring materials by exploiting their nanoscale features. We are working to improve the device to increase our current sensitivity by four orders of magnitude,” he continued.
The way in which data is transferred between two devices is known as the Transmission Mode or Communication Mode in computer networks. The transmission mode defines the direction in which signals flow between the connected devices.
TYPES OF TRANSMISSION MODES
The different types of transmission mode are:
Simplex, Half duplex and Full duplex
- Simplex: As the name signifies, it is the simplest mode of transmission. Information/data is sent in one direction only, i.e. from the sender to the receiver, and the receiver cannot reply to the sender. For example, a radio broadcast is a simplex channel, as it sends signals to the audience but never receives signals back from them.
- Half Duplex: In half duplex mode, data/information can be sent in both directions, but only one direction at a time, not simultaneously. For example, a walkie-talkie is a device that can be used to send messages in both directions, but both persons cannot exchange messages simultaneously; while one speaks, the other can only listen.
- Full Duplex: In full duplex mode, information can be transmitted between the sender and the receiver simultaneously. It is used when communication in both directions is required all the time. For example, a telephone provides two-way communication in which both persons can talk and listen to each other at the same time.
COMPARISON OF TRANSMISSION MODES
The differences between the three modes of transmission can be summarized as below:

| | Simplex | Half Duplex | Full Duplex |
|---|---|---|---|
| Direction of data flow | One direction only | Both directions, one at a time | Both directions simultaneously |
| Sender and receiver | One device always sends, the other always receives | Devices take turns sending and receiving | Both devices send and receive at the same time |
| Example | Radio broadcast | Walkie-talkie | Telephone |
The world’s economy is primarily geared towards driving sales and maximising profits. Manufacturing industries play their part in the linear economy, where natural resources are made into objects which are subsequently used and thrown away when broken or worn out. While this may satisfy a business’s bottom line, it has become apparent that the linear economy we live by is incredibly wasteful and contributes to a number of environmental concerns, such as depletion of natural resources, pollution, rising CO2 emissions, and excessive waste.
There is, however, an alternative. A circular economy is defined by the Ellen MacArthur Foundation as a systemic approach to economic development designed to benefit businesses, society, and the environment. In contrast to the ‘take-make-waste’ linear model, a circular economy is regenerative by design and aims to gradually decouple growth from the consumption of finite resources. A whole-systems approach to product design is employed, taking into account the entirety of a product’s life cycle – from design and manufacture to end-of-life.
While data centres have made great strides to increase energy efficiency in order to decrease CO2 emissions, more work is needed in terms of considering the life cycle of the equipment they use. The Circular Economy for the Data Centre Industry (CEDaCI) project, piloted by London South Bank University (LSBU), was set up to support the data centre industry’s transition to the circular economy, with an emphasis on reducing sectoral waste, preventing supply chain problems, and securing uninterrupted data centre operation.
Professor Deborah Andrews, professor of design for sustainability and circularity at LSBU, and academic lead on CEDaCI, says that their research began with trying to identify where data centres have the heaviest environmental impact.
“We have achieved a lot of ground-breaking research because we are using primary source data rather than other people’s data and assumptions. We can make sure all of our life cycle assessment (LCA) work and models are robust enough to provide us with the most accurate analyses.”
Based on an average refresh rate of 1.5 years, servers were found to have a huge environmental impact, with 3.3 new servers manufactured and the same amount of equipment going to waste over a five-year period. The impact is even more dramatic when you consider that hyperscale facilities have an average refresh rate of between nine months and one year.
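The replacement arithmetic behind those figures is simple: dividing the period by the refresh rate gives the number of servers that pass through a single slot.

```python
# Servers consumed per slot over a given period at a given refresh rate.
def servers_over(period_years, refresh_years):
    return period_years / refresh_years

print(f"1.5-year refresh over 5 years: {servers_over(5, 1.5):.1f} servers")
print(f"9-month refresh over 5 years:  {servers_over(5, 0.75):.1f} servers")
```

At a nine-month hyperscale refresh rate, the same slot consumes nearly seven servers over five years, double the figure CEDaCI cites for the 1.5-year average.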
The Circular Data Centre Compass
One of the ways that CEDaCI is able to make an accurate analysis of the life cycle of particular components is via their Circular Data Centre Compass (CDCC), an evaluator tool that covers the key stages of all LCAs, from initial design, midlife reuse, refurbishment, and second-life, and end-of-life recycling and reclamation.
“The second life and recycling aspects are very reactive, and we have to use these stages to manage what is already in the market,” explains Andrews. “The second life aspect, in particular, is essential at present because the infrastructure needed to recycle or reclaim raw materials is just not available at an adequate scale to cope with the sheer volume of e-waste created.”
The design phase, in contrast, is proactive, and if the design or service of a product can be improved, then it will be possible to reduce the negative impacts seen further down the supply chain.
“There is a statistic that gets banded about the design world which says that up to 80 per cent of a product’s life, or a product’s environmental impact, is determined during the design phase,” notes Andrews. “Whether that is accurate or not is up for debate, but even if it is 50 per cent, the design phase is the phase that really determines what happens to products throughout the rest of their lives, so it is critical to get that right.”
Andrews describes a ‘waste management hierarchy’ to explain how society currently deals with waste. Much of what is considered waste currently ends up in landfill, with some incinerated for energy recovery purposes, and even less recycled or reused. She and her colleagues at CEDaCi want to flip that hierarchy on its head and promote an ‘increase in reduction’ – that is to say, an increase in resource efficiency.
“Reduction, or dematerialisation, is using fewer materials in products without compromising on performance. We constantly see reports in the press about ocean plastics which are virtually impossible to recycle. Take a toothbrush, for example. Expensive toothbrushes tend to have handles made from two or three different types of plastic which you cannot separate, so they are an absolute nightmare to recycle.
“We would like to see much more design intelligence employed in the first phase of a product’s lifecycle in order to reduce the number of unnecessary components or materials used in manufacturing.”
Designing a server with circularity in mind
To demonstrate what they mean in the context of circularity for data centre equipment, CEDaCI analysed 16 different server chassis across different generations and brands, and found a number of issues with current server design. The results showed a lack of standardisation between different brand models and even generations, meaning the majority of parts cannot be interchanged. Additionally, three or four different alloys and three different polymers, alongside textiles, paper, aluminium, zinc, and copper were used within each machine.
Based on recommendations for improvement, such as standardising and simplifying chassis design, avoiding excessive material use and overengineering, and allowing second-hand parts from different brands to be reused across the board, CEDaCI has designed a circular-economy-ready server. The model is currently in digital format and is intended to be used as a benchmark for server design in the future.
“This design exhibits various improvements which we see as absolutely essential, without compromising on its efficiency and durability,” says Andrews. “The chassis mass is reduced from 22 kilograms (kg) to 14kg, and the number of components has been reduced from 117 to 65.
“Crucially, however, the mass of plastic has been reduced to just under ten per cent compared to standard server design (down from 889.45 grams to 85.69 grams). Plastics are notoriously difficult to recycle because there is not a large market for recycled plastic. It is manufactured to perform one function, which it does very well, but when it goes through the recycling process, it loses some of those useful inherent properties. So, for example, Coca Cola bottles often advertise they are made from 50 or 70 per cent recycled material because they have to add some virgin material to the mix to ensure it performs the way the company requires it to. That is why reducing plastic is a really good thing in terms of circularity.”
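A quick check of the reductions quoted above confirms the figures:

```python
# Reductions quoted for the redesigned circular-economy-ready server,
# expressed as (standard design, circular design).
chassis_kg = (22, 14)
components = (117, 65)
plastic_g = (889.45, 85.69)

for name, (before, after) in [
    ("chassis mass", chassis_kg),
    ("component count", components),
    ("plastic mass", plastic_g),
]:
    print(f"{name}: {after / before:.1%} of the original")
```

The plastic works out to about 9.6 per cent of the original mass, matching the "just under ten per cent" quoted, while chassis mass and component count fall to roughly 64 and 56 per cent respectively.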
Of course, server chassis design is scratching the surface when considering circularity in the wider digital infrastructure industry. But, with the EU’s release of the Circular Economy Action Plan in 2020, the emphasis is certainly shifting in the right direction.
As well as continuing to offer free workshops on recycling, product life extension and Ecodesign, alongside training for small- and medium-sized enterprises, CEDaCI’s research continues. They are currently working on a comprehensive LCA model to compare their server design against other servers that they have assessed, including some of the most circular models on the market and liquid-cooled machines. The hope is that a working prototype can be built in the future, and tested for performance and recyclability.
Andrews comments that CEDaCI began their research with servers because they have the highest embodied impact, due to the types of electronics, printed circuit boards (PCB), and other components within the server. The plan is to then progress to less complex products such as switches.
“The next huge milestone, however, is figuring out how to increase circularity in motherboard design,” surmises Andrews. “We are working with some partners in France which are developing different mechanical, chemical, and thermal processes to reclaim the various metals, but the board itself is a composite which is incredibly difficult to recycle. They are made with a fibreglass epoxy resin that then has layers of copper etched into it to create the circuit patterns, alongside all the other individual components which are soldered onto the PCBs. Those components are comprised of lots of very small sub-components made from a wide range of materials, none of which are designed to be disassembled, which, in conjunction with the fact that they are soldered onto the boards, means that reuse and recycling is virtually impossible at the moment.
“That is the holy grail of circular electronics design.” | <urn:uuid:68e1fa92-e23a-4176-806b-6567d6387a8c> | CC-MAIN-2022-40 | https://digitalinfranetwork.com/accelerating-supply-chain-decarbonisation-for-servers-and-electronics/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00300.warc.gz | en | 0.957367 | 1,824 | 3.015625 | 3 |
Bug bounties can increase the breadth – and the effectiveness – of white hat communities.
White hat hackers have been making significant contributions to cybersecurity by detecting vulnerabilities in companies’ software systems and websites and communicating their findings, according to a recent research project at Penn State.
Long used by agencies for penetration and vulnerability testing, white hat or ethical hacking helps organizations find holes and bugs in software, digital devices and networks, thereby better securing the online world.
Researchers at Penn State’s College of Information Sciences and Technology (IST) studying white hat behaviors suggest that organizations that reward hackers who uncover vulnerabilities in their systems could improve the bug discovery process by expanding and adding diversity to their white hat communities.
The research to understand how the white hat "market" functions was undertaken by Jens Grossklags, an assistant professor at Penn State's College of Information Sciences and Technology; Mingyi Zhao, a doctoral student at the college; and Kai Chen, a postdoctoral scholar currently at the Chinese Academy of Sciences.
Their paper, "An Exploratory Study of White Behaviors in a Web Vulnerability Disclosure Program" used a dataset from WooYun.org, the "predominant" Web vulnerability disclosure program in China. The data encompassed contributions filed since the site's launch in 2010 from 3,254 white hat hackers from around the world and the 16,446 vulnerability reports they filed about 4,269 web sites.
WooYun follows a process similar to other such community sites. Those who submit reports to WooYun receive no compensation. Once WooYun checks out the severity of a report, it informs the administrators of the affected site and gives them two months to fix it. Only after the fix is made will WooYun disclose the vulnerability.
The researchers found that the top contributors to Wooyun posted only a fraction of all vulnerability reports to the site and that less active hackers also contributed high-quality vulnerability reports. Their conclusion: that the community as a whole, rather than a few expert white hats, plays a key role for vulnerability discovery.
This finding could influence how major Web companies — Google, Facebook and others — handle their own white hat operations. Currently, these operators often use a "vulnerability award program" (VRP) or "bug bounties" to encourage the white hat community to uncover potential problems in their software. They also use the services of crowdsourcing companies such as HackerOne and BugCrowd, which act as liaisons between white hats and software companies.
Based on preliminary results of the WooYun research, Grossklags, Zhao and Chen suggest that managers of company vulnerability programs "should not only focus on the top contributors, but also try to attract as many white hats as possible as contributors. More participation would likely translate [into] more diversity during the search process and more discoveries."
Undertaking that, however, may require new mechanisms for working with white hats, such as a program with a search tool that could help reduce the likelihood that contributors would report on the same vulnerability and the disclosure of more technical details of past vulnerabilities, "so that white hats can learn from others' findings."
As the researchers pointed out in their report, "Wooyun's full disclosure model, which allows the reading of the white hats' comments in the vulnerability reports, likely helps new and even experienced white hats to learn." | <urn:uuid:f560488b-e178-4576-a82f-79e586b7375f> | CC-MAIN-2022-40 | https://gcn.com/cybersecurity/2015/01/more-white-hats-improve-security-researchers-demonstrate/287111/?oref=gcn-next-story | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00300.warc.gz | en | 0.935819 | 687 | 2.625 | 3 |
When trying to cool a data center, consideration is often given to outside temperatures, to the cooling capacity of chillers, and the airflow of fans. Less thought is given to the coldness of space, the vast bleak void that envelops us all, just begging for warmth.
That may change, with a small startup planning to take advantage of radiative sky cooling, a fascinating natural phenomenon that allows one to beam heat into space. SkyCool has begun to develop panels that emit heat at infrared wavelengths between 8 and 13 micrometers - a band that the Earth's atmosphere does not absorb.
The ultimate heatsink
“This essentially allows us to take advantage of the sky, which it turns out is very cold,” SkyCool’s co-founder and CEO Eli Goldstein told DCD. “And more broadly, space is the ultimate heatsink and is around three kelvin - it’s extremely cold.”
While the phenomenon has been known and studied for centuries, the company’s panels, made out of a thin layer of silver covered by layers of silicon dioxide and hafnium oxide, can remove heat during the day - a first.
A prototype of the system was installed on a two-story office building in Las Vegas in 2014. Under direct sunlight, the panel remained 4.9˚C (8.8˚F) below ambient air temperatures, delivering “cooling power of 40.1 watts per square meter.”
“What our surfaces do, is they’re able to not absorb heat from the Sun, but at the same time they’re able to simultaneously radiate heat to the sky in the form of infrared light. And that combination of properties has really never been present in any natural material, and it’s been very difficult to make up until recently by design,” Goldstein said.
The startup was formed by three researchers from Stanford University: Goldstein, his post-doctorate advisor, Aaswath Raman, and his professor, Shanhui Fan. SkyCool was started last year to commercialize the research first carried out at Stanford.
“At the end of this summer, we hope to have a couple of installations in California,” Goldstein said. “After that the plan would be to deploy panels in more locations and scale up.”
The company hopes its panels, which circulate water-glycol, will find success in sectors that require high cooling loads - data centers, the refrigeration sector and commercial cooling.
“Early on we’re more focused on edge data centers,” Goldstein said. “I think there’s absolutely interest in the larger ones: provided that you wanted to cover the entire load, you would need to have adjacent space for the panels, not just the roof. You could also use the panels in conjunction with traditional cooling systems to reduce water use from cooling towers, or electricity use from cooling in more traditional ways.”
He added: “We’ve had a number of conversations with data center companies. I think the biggest challenge for us right now is, because we’re such a small company, to think about a 5MW data center or more is a lot, it’s a big installation for us.”
Another challenge, he said, was that “unfortunately, no one wants to be the first person to try a technology, especially at facilities like data centers. We know the technology itself works - I can heat up water and pump it through the panels and show that we can cool the water. The next thing we need to demonstrate is not on the technology side, but on our ability to execute deployments in a cost-effective way, and tie it into actual systems.
“The energy side of things is pretty straightforward, we know how much heat needs to be removed, and how much heat can be removed by a panel, now it’s about how we do that at scale.”
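Goldstein's point above is essentially an area calculation. A hedged back-of-envelope sketch, using the 40.1 W/m² figure measured in the Las Vegas prototype (real output varies with climate, humidity, and panel design, and the 50 kW load below is a made-up example):

```python
# Back-of-envelope sizing: how much panel area would a given cooling load need?
PANEL_COOLING_W_PER_M2 = 40.1  # cooling power per square meter, from the prototype

def panel_area_m2(cooling_load_kw: float) -> float:
    """Panel area needed to reject a steady cooling load purely radiatively."""
    return cooling_load_kw * 1000 / PANEL_COOLING_W_PER_M2

# A hypothetical 50 kW edge data center would need roughly 1,250 m^2 of panels,
# which illustrates why the company targets edge sites and hybrid setups first.
print(round(panel_area_m2(50)))  # → 1247
```

The same arithmetic explains the hybrid pitch: covering only part of the load, or pre-cooling water for conventional chillers, needs far less roof area.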
This article appeared in the June/July issue of DCD magazine. Subscribe for free today: | <urn:uuid:8312c752-f82b-4035-9989-ed7f0edb805d> | CC-MAIN-2022-40 | https://direct.datacenterdynamics.com/en/analysis/a-new-cooling-frontier/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00500.warc.gz | en | 0.958815 | 848 | 3.09375 | 3 |
As more and more companies and governments make data available online, the demand to access and process Internet-based data is growing fast. Problematically, much of that data is available only through a front-end web app designed for ad-hoc queries. If the need is to get bulk amounts of online data, automation becomes essential. Real-world automation requires that any number of methods for extracting data from the Internet be available to support the desired business process, all the more so given the variance in web systems design, e.g. some apps expose web services and many do not.
Automate supports the ability to integrate with web services or to navigate through webpages, interact with web data, initiate searches and logins, enter data, click through links, and extract tables and entire webpage source information. It does this through the action library, with actions for web browsers, HTTP, XML and web services, all of which can be invoked without writing code. For this example, we use Excel automation methods.
This article teaches you to automate basic web scraping and table extraction. To start, we use two sets of actions, the web browser actions and the Excel actions, to create the task.
In the first step of the task, the “Open” web browser activity opens the website, specified in the Page URL field, to mimic manually opening the Sample Task webpage.
Notice in the image, there is a browser dropdown arrow indicating multiple browser support (such as Internet Explorer, Firefox, Safari and Chrome).
In the second step of the task, we use the “Extract Table” web browser action. This requires selecting a browser, locating the HTML element and creating a dataset to populate the table information for importing into Excel.
After selecting your preferred web browser, use the Magnifying Glass icon to drag it to the opened webpage. You can also manually enter the webpage link into the URL area under "Select browser."
Locating the HTML Elements requires using the Hand icon to point to the website containing the table for extraction. Drag the hand to the table to select the HTML components. The controls and components collected from the HTML elements always identify the correct table. Notice how the “Locate by HTML tag” and “Locate by attributes (case sensitive, all must match)” identifies the HTML location and discovery as shown in the image below.
When locating HTML elements, Automate matches the locator element results with the criteria specified.
Creating and populating the dataset under the Interaction area allows us to place a value on each of the columns in the table for integration with other applications. In this example, we will be using Excel.
In the third step of the task, we use a “Create an Excel Workbook” action to create an Excel spreadsheet and establish an Excel session for interaction. The only requirement with this action is to specify the path and a new spreadsheet name, such as C:\test\sampletask.xlsx.
The last step in the task is the “Dataset to Cells” action. This is used to set the text of the selected cell range in the established Excel session with the values contained in the dataset created in the Web Browser Extract Table (Step 2). Starting at Row 1, Column 1 (cell A1), the entire table is placed into the newly created Excel spreadsheet. And that’s it!
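Automate performs all of the above without code. For readers who want to see the same pipeline in code form, here is a hedged, simplified sketch in Python using only the standard library. It is not how Automate works internally; the inline HTML string stands in for the sample webpage (a real task would fetch the page first), and the CSV output stands in for the Excel "Dataset to Cells" step:

```python
import csv
import io
from html.parser import HTMLParser

class TableExtractor(HTMLParser):
    """Collect rows from <table> markup (simplified: assumes no nested tables)."""
    def __init__(self):
        super().__init__()
        self.rows = []          # the extracted dataset
        self._row = None        # row currently being built
        self._in_cell = False
    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._in_cell = True
    def handle_endtag(self, tag):
        if tag == "tr" and self._row is not None:
            self.rows.append(self._row)
            self._row = None
        elif tag in ("td", "th"):
            self._in_cell = False
    def handle_data(self, data):
        if self._in_cell:
            self._row.append(data.strip())

# Stand-in for the sample webpage's table.
html = ("<table><tr><th>Item</th><th>Qty</th></tr>"
        "<tr><td>Widget</td><td>3</td></tr></table>")

parser = TableExtractor()
parser.feed(html)

# Equivalent of the "Dataset to Cells" step: write the dataset starting at A1.
out = io.StringIO()
csv.writer(out).writerows(parser.rows)
print(out.getvalue())
```

The no-code tool and the code sketch follow the same shape: locate the table, build a dataset row by row, then hand the dataset to a spreadsheet.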
Looking to subscribe to Children Learning Reading Program by Jim Yang? Find out the important features of this program from our detailed Children Learning Reading review 2022.
Jim Yang created the Children Learning Reading Digital Course, and it offers a step-by-step teaching method. If you look at it very well, you’ll find that it is designed for young children. It can help them read and learn effectively.
It is not surprising that the Children Learning Reading program comes from Yang: he has a lot of experience with teaching and learning, and he is a father himself. He says it works because it is based on a proven teaching technique. The Children Learning Reading creator notes that small children are full of intelligence, and he wants to educate parents on how best to take advantage of that intelligence to teach their children.
Jim Yang noted that the first seven years of a child’s mental development are the most important. Parents must build their children’s future during this stage and cultivate healthy brain development. This period is unique, and every parent must take advantage of it, which is what the program helps parents achieve.
The way parents help develop their children’s brains at this stage is crucial because it will determine how bright the child will be in the future. Parents must pass on essential skills that will help their children later in life.
This eBook-based learning program is quite different from several others out there because of three essential features. These features make it more effective than the traditional learning techniques most people know.
What Is Children Learning Reading? Features Explained.
The Children Learning Reading PDF book has three important features that distinguish it from several others, as well as from the traditional method of learning to read.
First, Children Learning Reading is based on phonics and phonemic awareness. A phoneme is simply a unit of sound, especially one people make when they speak. This aspect is crucial when you are teaching children to read.
The second significant feature has to do with attention span. The Children Learning Reading author points out that, when teaching children to read, parents and teachers should not forget that children’s attention span is very short. Lessons can be kept within three to five minutes to ensure that children stay concentrated and coordinated. He reflects this in the book.
Thirdly, the Children Learning Reading creator noted that such things as television and computer programs are used in teaching. The Children Learning Reading eBook is split into two major stages, sub-divided into at least fifty smaller lessons to make the material easier for children to comprehend. The lessons come with different supports such as stories, audio files, sight cards, and books, and, most importantly, they are supported by nursery rhymes and so on.
Most importantly, you discover that the Children Learning Reading author provided some supporting materials and bonuses. That is why you will see such items as MP3 audio clips, different stages of lesson stories, and so on. More than that, the author has made available three months of free counseling through email and other means.
Furthermore, the Children Learning Reading PDF book is available in two packages: the standard package, often regarded as the standard edition, and the premium package, also called the premium edition. It means you can get one of the packages or all the packages. You can always get more information about the packages.
Children Learning Reading formulates an effective teaching strategy suitable for parents and teachers. It will help children master how to read successfully. Before introducing the book to the public, Jim Yang tested it and found it helpful. It helped him in teaching his children.
If you apply the teaching, you will notice the progress within the first weeks of using it. Many parents have used it to educate their children. The Children Learning Reading book is simple and very easy to apply. That is why children would not find it hard to use it. It was programmed so that your children will be engaged and become proficient readers within a short time.
What We Liked In Children’s Learning Reading Program
There are lots of benefits you are going to derive from using this program.
1) Children Learning Reading Is Easy to use
The greatest benefit of using the Children Learning Reading eBook is that it is easy to practice, and that is because it is easy to understand. It removes all the difficulties and all guesswork that are involved in teaching. Because of the easy and simple way it is formulated, children will find it fun using the method.
The information will be helpful because children will learn how to pronounce different phonemes correctly. If you get the premium package, you get good videos to help your children learn more about reading, which is the foundation of learning.
2) Children Learning Reading Has Real Feedbacks And Reviews
If you check, you will discover that the Children Learning Reading product is supported by honest feedback and reviews from parents. The testimonials and reviews are real, from people who have used it and are satisfied with it. This indicates that Children Learning Reading by Jim Yang works.
You can check various forums and online discussion groups and observe many positive sentiments from people who have used it. Judging from feedback on the internet, many parents who have used the book report positive results within the first month.
3) Engaging Lessons
Moreover, the Children Learning Reading course lessons are short, and because of that, they are engaging. It shows that the creator recognizes that children have a short attention span. It is designed so that the children grab the best and most helpful information within the short time the lesson is available to them. Once it has caught their attention, it will remain in their brain for a long time.
4) Children Learning Reading Program is Budget-friendly
There are at least two options on the market today to ensure that anybody who wants a Children Learning Reading book gets it. This includes standard and premium options. The standard option is budget-friendly. An investment you make in this is a worthwhile investment; even if you choose the premium option, you will get the result within a short time.
Most importantly, Children Learning Reading Downloadable System is backed up by a sixty-day money-back guarantee. It shows that the creator is happy with the product and is sure of the quality he is releasing to the market. Because of that, he offers a sixty-day money-back guarantee. The book is risk-free; your children will learn how to read fast. That is a great advantage.
What We Do Not Like About Children Learning Reading
Though many benefits are associated with the Children Learning Reading program, our review found a few shortcomings.
1) Learning Period May Vary
It is reported that parents can begin to see results within a short time, but this is not always the case. Children learn at different speeds, so parents should not set unrealistic expectations for their children when using the system.
2) Not The Best For Busy Parents
The eBook requires patience, and because of that, many busy parents would not find this helpful. Some parents would not have sufficient time to dedicate to the product to teach their children about reading.
3) No Physical Copy – Children Learning Reading Is Download Only
Perhaps the worst shortcoming is that this product is available in digital formats only; it does not have a physical copy. You cannot buy a copy in any bookstore. Before you can use it, you have to download it online, which is not ideal for those who lack access to the internet.
How Children’s Learning Reading Course Helps In Children’s Learning
As you can see, there are lots of advantages to using Children Learning Reading Digital course. The major advantage is that it would facilitate your child’s learning process. You can see that your children would be proficient in pronouncing many words and reading within a short time.
Secondly, it is going to help develop the child’s brain. It is most useful for children under the age of seven. The system would assist children in developing their brains at a younger age.
Thirdly, you have seen that it is formulated in a way that makes it simple and easy to understand. Moreover, the lessons are presented in short videos. This helps children to master it very well.
The issue of cost is equally important. There are different formats, such as the Children Learning Reading standard edition. If you do not have the money to get the premium version, you can get the standard version and derive value for your money.
A Brief About The Children Learning Reading Creator Jim Yang
Jim Yang, the creator, was a father himself. He knew the difficulties parents encounter in educating their children about learning. He decided to delve into the problem to find a permanent solution. Moreover, the fact that the creator is a reading teacher is remarkable. He knows the challenges and is better positioned to produce such a book.
He formulated it out of experiences he has about teaching reading. Before he made the product available for everybody, he first tested it on his daughters and was happy that the book worked before he made it open to the public.
Why Children Learning Reading Is Useful
Children Learning Reading is beneficial for several reasons. The most important thing is that it helps educate your children about reading. If you follow it as recommended, you will observe the results within the first three to four weeks. It is a beneficial learning method for children aged seven years and below.
Moreover, it is formulated so that children would find it easy to grab information contained in it, which makes it user-friendly. Furthermore, it is highly affordable.
Is Children Learning Reading A Scam? Are The Reviews Legit?
The Children Learning Reading electronic book is not a scam. It is one hundred percent legit, and the positive reviews about the program reflect this. Everything about the system is verifiable, its creator is well known, and, most importantly, it works. You will get real value for your money.
The Children Learning Reading program is helpful. It is meant to help parents facilitate their children’s reading. It is presented in short lessons, so parents and children will not find it hard to understand. It is effective for most parents, and feedback suggests that parents begin to see results within the first few weeks of using the system.
Since its release on the market, many people have found it useful. There are two children Learning Reading packages on the market, and this is to make it easy for every user to make a choice. You can opt for the standard product if you do not have money for the premium product. Though there are many benefits associated with this fantastic Children Learning Reading eBook, a few shortcomings are also associated with it. It is available digitally, meaning you must be online before you can lay your hands on it.
The few shortcomings notwithstanding, this is one of the best learning methods ever released. Both teachers and parents are finding it helpful. If you are looking for a system that can help educate your kids, then you should opt for Children Learning Reading because it is meant to benefit children. | <urn:uuid:eaf6ac41-1b4f-480d-86da-8eadcf52a866> | CC-MAIN-2022-40 | https://www.infostor.com/courses/children-learning-reading-reviews-features-to-know/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00500.warc.gz | en | 0.96611 | 2,335 | 2.5625 | 3 |
The Internet of Things (IoT) is everywhere. These connected devices now work in medical settings, automotive vehicles, industrial and manufacturing control systems, and more.
Unfortunately, security in IoT devices has lagged behind their production, and the time has come for manufacturers to face this challenge head-on. Specifically, manufacturers need a scalable solution to address concerns like authentication, data encryption, and the integrity of firmware on connected devices. Together, these efforts will help ensure manufacturers not only deliver secure products to market but that those devices stay secure throughout their lifetime.
Realizing these security goals requires introducing a public key infrastructure (PKI) strategy for strong authentication in a cost-effective, lightweight, and scalable way.
Risks Abound in the IoT: Why We Need Better Security, Now
IoT security has not matched the pace of innovation for these connected devices. For example, manufacturers often ship devices with default passwords or shared cryptographic keys, which puts every single device at risk based on a single hacking event.
The most common security risks in the IoT today center around weak identity and authentication. These risks include:
- Weak authentication: The use of weak default credentials allows hackers to access devices at scale and deploy malware.
- Hard-coded credentials: Hardcoding passwords or keys into software or firmware and then embedding those credentials in plain text into source code simplifies access for developers, but it does the same for everyone else.
- Shared and unprotected keys: Instances of symmetric (rather than asymmetric) encryption, as well as issues properly storing and provisioning encryption keys (even asymmetric ones), makes it easy to compromise devices. Secure storage becomes especially challenging at scale.
- Weak encryption: The use of weak algorithms results in encryption that’s easily penetrated. This challenge stems from the fact that lightweight IoT devices with limited power struggle to generate enough entropy for random number generation.
- Unsigned firmware: Code signing verifies the authenticity and integrity of code, but many IoT devices don’t check for a signature, making them vulnerable to malicious firmware updates.
PKI to the Rescue: 6 Ways PKI Supports IoT Device Security
Despite all these challenges, achieving security in the IoT is well within reach. It all starts with a sound strategy centered around PKI. Digging deeper, PKI can support IoT device security in six critical ways:
- Unique identities: Using certificates introduces a cryptographically verifiable identity into each device for secure network access and code execution. Importantly, these certificates can be updated or revoked at the individual level.
- Open and flexible standard: PKI is an open standard with flexible options for roots of trust, certificate enrollment, and certificate revocation, supporting a variety of use cases.
- High scalability: Issuing unique certificates from a single trusted Certificate Authority (CA) makes it easy for devices to authenticate one another without a centralized server.
- Robust security: A well-managed PKI program offers the strongest authentication, as cryptographic keys are far more secure than methods like passwords and tokens.
- Minimal footprint: Asymmetric keys have a small footprint that makes them ideal for IoT devices with low computational power and memory.
- Proven and tested: PKI is a proven and time-tested approach that experts recognize as a practical and scalable solution for strong security against a variety of attacks.
Are They Different? PKI for Product Security vs. Enterprise PKI
Importantly, while the approach required to fit within complex hardware supply chains and IoT device lifecycles may look different from the enterprise PKI companies have used for years, the fundamentals of PKI remain consistent and best-in-class PKI software solutions can handle both use cases:
- Scalability and availability: The volume and velocity of certificate issuance is much higher in the IoT than it is in enterprise deployments (think thousands per hour). As a result, PKI components like issuing CAs and validation infrastructure must meet higher availability and performance levels, which in practice comes down to the speed and network bandwidth of the machines running the PKI software.
- Private key generation and storage: Since IoT devices typically live in places that are publicly accessible (unlike enterprise web servers), manufacturers need a way to protect the private keys stored on these devices for authentication. This typically requires generating private keys within a secure hardware component so that they never get exposed outside the device. The same PKI software handles Certificate Signing Requests wherever they come from, whether IoT devices or enterprise IT.
- Certificate policy: Certificate policy is always important and adhering to the policy can be as critical in the IoT as in a complex enterprise network given the disparate ecosystem. Fortunately, these policies are always documented in standards and made available as templates in the best PKI solutions.
- Lifecycle management: Much like in enterprise or factory networks, lifecycle planning is also important in the IoT given how long devices will live in the field. Manufacturers must understand how identities will be provisioned and updated over time as devices live in the world, and they need response plans for algorithm degradation or a compromised root of trust.
- PKI infrastructure: Both enterprise and IoT PKI may be hosted on-premise, in the cloud, or in any variation of the former and latter depending on budget, corporate policies, interfacing to factories, and already available IT infrastructure. Both follow the same recommendations: securely hosted HSMs serving an offline root CA with issuing CAs securely hosted and protected by proxies when they need to interface with distant machines over the internet.
- Access management: Both enterprise and IoT PKI must be administered by a very small team, with strong rules in place to log and track any action and any change in certificate policies. Best-in-class PKI solutions offer role-based interfaces capable of clustering usage and isolating different business units and projects, allowing organizations to share the costs of the PKI software license and hardware infrastructure if needed.
Moving Forward: Top Recommendations for IoT Security
Ultimately, the manufacturers that can provide strong, unique identities for their IoT devices – and do so at scale – will deliver better protected, differentiated products to market. Achieving that goal starts with these top recommendations for IoT device security:
- Incorporate trusted device identities and provisioning: Use unique digital certificates for every device for more granular management and tracking. Importantly, each of these certificates should tie to an established root of trust with a strong CA hierarchy.
- Manage on-device key generation: Document use cases for on-device key generation to define Certificate Policy and Certificate Practice Statements (CP/CPS). Drafting CP/CPS is optional, but it offers high assurance through greater policy enforcement.
- Consider offline/limited connectivity devices: Enable devices within the same trust chain to authenticate even without an internet connection so that they can continue to receive regular maintenance and updates.
- Support mutual authentication: Require authentication to restrict access to trusted users and systems. Digital certificates allow for mutual authentication between two entities that share a root of trust for more secure data exchanges on open networks.
- Ensure secure boot and code signing: Program devices to only permit execution of code with a verified signature. Adding a secure boot also protects devices by ensuring they only start or install updates once they’ve been signed by a known, trusted authority.
- Incorporate crypto-agility and lifecycle management: Introduce the ability to quickly re-issue or revoke certificates from active devices, as static systems are inherently insecure. Consider cases like certificate expiration, ownership changes, and algorithm degradation, all of which require devices to receive cryptographic updates to remain secure.
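To make the secure boot and code signing recommendation concrete, here is a minimal, hedged sketch of the integrity-check half in Python. Real secure boot verifies an asymmetric signature over the firmware digest, chained to a hardware root of trust; this simplified illustration pins a known-good SHA-256 digest inline (the firmware bytes and digest below are made up for illustration):

```python
import hashlib
import hmac

# In a real system the trusted digest would come from a signed manifest;
# here it is pinned inline purely for illustration.
TRUSTED_SHA256 = hashlib.sha256(b"firmware-v1.2.3").hexdigest()

def firmware_is_trusted(image: bytes, expected_hex: str) -> bool:
    """Reject any firmware image whose digest does not match the expected value.

    This shows only the integrity check; production secure boot additionally
    verifies an asymmetric signature over this digest before executing code.
    """
    digest = hashlib.sha256(image).hexdigest()
    # Constant-time comparison avoids leaking the match position via timing.
    return hmac.compare_digest(digest, expected_hex)

assert firmware_is_trusted(b"firmware-v1.2.3", TRUSTED_SHA256)
assert not firmware_is_trusted(b"firmware-v1.2.3-tampered", TRUSTED_SHA256)
```

A device that performs this check at every boot, and refuses unsigned updates, closes the "unsigned firmware" risk described earlier.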
Need Help Delivering Secure IoT Products to Market?
If you’re ready to face the IoT security challenge head-on, you’ve come to the right place. Download the complete white paper on the value of PKI for IoT to discover what it takes and learn how Keyfactor Control and EJBCA Enterprise can help manufacturers bring secure IoT devices to market at scale. | <urn:uuid:da650077-808b-49c2-a430-5ce2221992c5> | CC-MAIN-2022-40 | https://www.keyfactor.com/blog/securing-the-iot-at-scale-how-pki-can-help/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00500.warc.gz | en | 0.906679 | 1,644 | 2.796875 | 3 |
REST (Representational State Transfer) is an architectural style for designing decentralized systems. It originated from an architectural analysis of the Web and combines a client/server architecture with additional constraints that define a uniform interface.
Since an API (Application Programming Interface) is a set of rules that lets programs interact (it defines the rules of communication between applications), REST as a style puts some constraints on what an API will look like.
With REST APIs, you have guidelines that developers adhere to when they create an API. The end result is that API users can get a piece of the data by linking to a specific URL. The specific URL lets us make a request, and the data that is sent back is the response.
- What are the possible technologies for REST?
- How do REST APIs work?
- REST API Example
- REST APIs Uses
- REST API vs API
- Why are REST APIs Important?
- What was used before REST APIs?
- REST vs SOAP
- What are the True RESTful APIs?
What are the possible technologies for REST?
Technology is the foundation of any REST API, yet REST is just an architectural style and therefore does not require any specific technology. Of course, the underlying technology of a REST API must match the general constraints of REST, such as being a client/server protocol. Using well-known protocols makes an API easier for developers to use, and simple designs work best, meaning that such APIs can work easily with most programming languages.
The basic technologies used for REST APIs are HTTP, OAuth, JSON/XML, JWT, and Webhooks. These are the go-to and favored technologies for API consumers, as well as providers, due to their simplicity, security, and ability to make resources and data available. This makes them much easier to incorporate into web and mobile applications.
How do REST APIs work?
A REST API works by breaking down an activity into a sequence of small components, each of which tackles a specific part of the activity. This componentization provides flexibility, but challenges can certainly arise.
REST APIs use standard HTTP methods: GET to retrieve a resource, PUT to update or change the state of a resource, POST to create one, and DELETE to remove one. All API calls are stateless, which means that nothing is held by the RESTful service between requests.
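The verb-to-resource mapping can be sketched in miniature. This hedged, in-process Python sketch dispatches on (method, URI) the way a RESTful service does over HTTP; the URIs and data are made up, and a real service would of course run behind an HTTP server:

```python
# Minimal sketch of REST's uniform interface: resources are addressed by URI,
# and the same small set of verbs applies to every one of them.
resources = {"/users/42": {"name": "Ada"}}

def handle(method: str, uri: str, body=None):
    """Return an (HTTP-style status, payload) pair for a request."""
    if method == "GET":
        return (200, resources[uri]) if uri in resources else (404, None)
    if method == "PUT":  # create or fully replace the resource at this URI
        resources[uri] = body
        return (200, body)
    return (405, None)   # method not allowed

status, data = handle("GET", "/users/42")
print(status, data)                              # 200 {'name': 'Ada'}
handle("PUT", "/users/43", {"name": "Grace"})    # now GET /users/43 succeeds
```

Note that `handle` keeps no per-client session: each call carries everything the server needs, which is the statelessness constraint in action.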
What is a REST API example?
The Twitter API comes to mind. It’s a sound example because it implements a REST API. The Stripe API is also used quite frequently due to advanced features like expansion and pagination; it’s a well-used model of how REST APIs work. The Slack API and GitHub API v3 are also RESTful, and their stress-free utilization makes them quick to implement.
Other popular examples to consider are Google APIs. Google has many uses for its APIs, including the YouTube API, Google Analytics API, Google Font API, Blogger API, plus many more. They all use REST APIs as part of their technology. Other social media platforms, such as Facebook and LinkedIn, also use REST APIs.
What are the uses of REST APIs?
REST APIs have many uses because the API calls are stateless. They can also be beneficial in cloud applications. Cloud computing and microservices are on the road to making RESTful API design the directive for the future.
Stateless components can easily be redeployed if there is a failure, and they can scale to adjust for load changes. The reason is that any request can be routed to any instance of a component; nothing has to be held in memory for the next transaction. This makes REST APIs the go-to choice for web use and cloud computing.
REST API vs API
A REST API and an API may sound similar, but they are a different ball of wax: two different categories, one broad and one specific. An API is a general term, while a REST API is one specific kind of API.
APIs are the glue that connects one application to another, giving access to services and data. The difference is that a REST API is a web-service API style which uses the HTTP protocol and JSON as its data format. REST APIs bring simplicity to development, consume limited resources, and require less protection.
Why is REST API important?
REST API is the mainstay of service app development. It’s simple to use and has become a global standard for creating APIs for Internet services. REST is an interface between systems that uses HTTP to obtain data and perform operations on that data in all possible formats.
What was used before REST API?
Before REST API, which was the brainchild of Roy Fielding, SOAP (Simple Object Access Protocol) was used to integrate APIs. Before REST arrived on the API scene, APIs were quite difficult to build and clumsy to use. There was another technology, CORBA, but it was not much better.
The early days of APIs didn’t have all the technology that is available today and there was no real gold standard for how these APIs should be developed and used. But thanks to Roy Fielding who came up with the concept of REST APIs, the level playing field had become a whole new world.
REST vs SOAP
REST (Representational State Transfer) is an architecture style while SOAP (Simple Object Access Protocol) is a protocol.
Because REST is an architectural style rather than a protocol, a REST API has the flexibility to make use of SOAP; the reverse is not true. The two are therefore positioned differently and serve different roles. SOAP is also known for its tighter built-in security features, which is a plus.
Other differences: REST accesses a resource for data through its URI, while SOAP exposes operations. REST also handles multiple data formats, including HTML, XML and JSON, where SOAP only uses XML. SOAP needs more bandwidth to operate, while REST needs fewer resources; because REST is mainly used for web services, being lightweight is a plus. Note too that REST responses can be cached, while SOAP offers no such functionality. In the end, each has its own niche, and there is no one-size-fits-all winner.
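To illustrate the data-format difference, the snippet below pulls the same field out of a JSON body and an XML body using only Python's standard library (the payloads are made up):

```python
import json
import xml.etree.ElementTree as ET

# The same made-up resource as a REST-style JSON body and an XML body.
json_body = '{"user": {"id": 7, "name": "Ada"}}'
xml_body = "<user><id>7</id><name>Ada</name></user>"

name_from_json = json.loads(json_body)["user"]["name"]
name_from_xml = ET.fromstring(xml_body).findtext("name")

print(name_from_json, name_from_xml)  # Ada Ada
```

The JSON version is both shorter on the wire and simpler to parse, which is part of why lightweight REST services favor it.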
What are the true RESTful APIs? (Six architectural constraints)
REST defines six architectural constraints; any web service that satisfies them is a true RESTful API:
- Uniform interface
- Client-server
- Stateless
- Cacheable
- Layered system
- Code on demand (optional)
Uniform interface
The uniform interface constraint is fundamental to the design of any REST service: it keeps the service easy to use and decouples the architecture, letting each part evolve on its own.
Client-server
The client-server constraint allows the client application and the server to evolve independently. The client interacts with the server only through resource URIs, which keeps the separation simple and is common practice.
Stateless
Stateless means the server keeps no record of the interaction: there is no memory of prior communications, and each request is handled on its own.
The outcome is that the server stores no client state between calls. Each request stands alone and is treated as such: no session, no history.
Cacheable
In the world of REST APIs, being cacheable improves performance for the client and scalability for the server, because the server's load is reduced. The bottom line: well-managed caching eliminates repeated client-server interactions that would otherwise impede performance.
Layered system
A layered system composes the architecture from layers, each providing a different unit of functionality. As the name states, a client communicates only with the layer directly in front of it, and each layer performs its functions to its own specification.
Code on demand (optional)
Code on demand is the one optional constraint. It allows a server in a true REST API to extend a client's functionality at runtime by sending it executable code, for example JavaScript delivered to a browser.
Access control is often the first line of defense for organizations. Deployed to control the movement of people based on specified permissions, it is the difference between risk mitigation and susceptibility.
Access control systems form the backbone of any successful physical security posture. But choosing a system is not always straightforward.
Electromechanical or electromagnetic locks? Fail-safe or fail-secure locks? Standalone systems, integrated systems, role-based access control, mandatory access control…
Albeit terms that signify the continued advancement of access control technologies, they are jargon to most. Learning the landscape and researching solutions can be a time consuming and frustrating process. Here, we will try to simplify things and answer some vitally important questions: what are access control credentials, how do they work, and which one is right for you?
What Are Access Control Credentials?
Access control credentials are the physical or digital keys a system checks before granting entry: cards, fobs, smartphones, or the biometric traits of a person. But with so many possible credentials on the market, which one fits your environment?
Proximity Cards
A proximity card looks a lot like your typical debit or credit card. Earlier iterations of access control cards had to be swiped in order to be read, yet proximity cards can trigger access remotely thanks to contactless verification technologies.
From a convenience perspective, this is incredibly useful. They can be scanned through wallets, or, depending on the system’s configurations and capabilities, may be detected from greater distances, allowing access even if a proximity card is left in a pocket, for example.
Smart Cards
Smart cards are different from proximity cards. As low frequency (LF) contactless cards, proximity cards are read-only devices that do not contain multiple types of data.
Smart cards are an evolution of proximity cards. They are designed to provide greater flexibility in card application and can be programmed with multiple credentials. While a proximity card will solely be used to define access privileges, a smart card may also store cash values, a key reason why they are often utilized as pre-paid membership cards.
MIFARE is a common technology used in smart cards. Originally deployed to handle payment transactions for public transportation systems throughout Europe, it is now a common feature in security access control credentials, able to provide identification, authentication, and the storage of other information thanks to embedded microchipping.
Key Fobs
Much like smart and proximity cards, key fobs are physical access control system (PACS) credentials designed to be both small and convenient. They are programmable security devices with built-in authentication, usually attached to a key chain, and are available as proximity or smart, high-security credentials.
They are a core part of keyless entry systems. Leveraging radio frequency identification (RFID) technology, they open and unlock doors electronically, and are one of the most common ways in which businesses manage entry to their facilities.
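The permission check behind a badge or fob reader can be sketched in a few lines. This is only an illustration: the card IDs, door names, and lookup structure are invented, and a real PACS adds encryption, anti-cloning measures, and audit logging on top of such a check.

```python
# Sketch of the permission lookup behind a badge or fob reader.
# Card IDs and door names below are invented for illustration.
PERMISSIONS = {
    "04:A2:19:F0": {"lobby", "lab"},   # staff fob
    "04:7B:C3:11": {"lobby"},          # visitor fob
}

def grant_access(card_id, door):
    """Grant entry only if this credential is enrolled for this door."""
    return door in PERMISSIONS.get(card_id, set())

print(grant_access("04:A2:19:F0", "lab"))    # True
print(grant_access("04:7B:C3:11", "lab"))    # False
print(grant_access("DE:AD:BE:EF", "lobby"))  # False: unknown card
```

In practice the reader transmits the credential's ID over RFID, and a door controller performs a lookup of this kind before releasing the lock.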
Mobile Access Credentials
Mobile access credentials are digital credentials integrated within an Android or Apple iOS device. Essentially, they turn a smartphone into a digital key.
People today will almost always have a smartphone on them, making mobile access credentials a logical and convenient way of facilitating access control. The technology operates like Apple Pay or Google Pay, where a phone is held up to a digital reader and the credentials are authenticated.
They can be effective at reducing costs, removing the need to have additional physical objects manufactured in order to uphold access control systems.
Biometric Readers
Biometric readers require no additional physical device, and they do not rely on mobile devices either. Instead, a person's identity is authenticated by the individual themselves.
Biometric technologies have been applied in numerous ways: fingerprint and palm scanners are more typical variations, but facial recognition, eye scanners, and voice recognition are all cutting-edge technologies available on the market.
Biometrics are extremely sophisticated and often more expensive than alternative access control credentials. But because they need no separate device and are extraordinarily difficult to imitate, they are among the most convenient and secure access control methods.
http://www.siliconvalley.com/mld/siliconvalley/5147205.htm
By Miguel Helft
February 10, 2003

Just two weeks ago, a nasty little piece of software known to security experts as an Internet ``worm'' wreaked havoc in parts of cyberspace. ``Slammer,'' as the worm was dubbed, went beyond the usual disruptions to e-mail and Web sites: It crippled 911 systems near Seattle, disabled Bank of America ATMs, and gummed up ticketing systems at Continental Airlines.

The attack exploited a flaw in a Microsoft database program that was well known. Microsoft had issued a software ``patch'' to fix the flaw six months earlier. Computers that had the patch were not directly affected. Ironically, some of Microsoft's own computer servers did not have the patch and were overwhelmed by Slammer.

That such an attack took place was no surprise to most security experts. But that the relatively simple worm -- Richard Clarke, who just retired as head of the National Strategy to Secure Cyberspace, called it ``dumb'' and ``cheaply made'' -- could take its toll on electronic systems that many believed to be beyond the Internet, should worry everyone. ``More sophisticated attacks against known vulnerabilities in cyberspace could be devastating,'' Clarke said. ``We cannot assume that the past level of damage is in any way indicative of what could happen in the future.''

Making cyberspace more secure is not easy. The global network grew up as a freewheeling medium that allowed anyone and anything to connect. Without profound changes in how computer products are built, and how networks are maintained, it will remain vulnerable. ``We are making systems so complex that when they fail, they fail in a major way,'' says Eugene H. Spafford, a computer science professor at Purdue University specializing in security.

But better technology alone will not suffice. Making cyberspace safer will require policy changes that affect the economic forces driving technology decisions.
Companies view security as just another business risk and make security decisions to minimize costs, says Bruce Schneier, chief technology officer of Counterpane Internet Security. As long as the costs of adding security outweigh the costs of ignoring it, little will change.

Schneier makes a compelling argument that enforcing liability, both for making shoddy software and for not protecting networks, could be the single most important step to improve security. ``Liability changes everything,'' Schneier wrote in an essay. It will force companies to rethink their priorities in product development, which currently emphasize new features over security and robustness. And it will force companies to be better guardians of their own networks and their customers' data.

Liability enforcement will also spawn an insurance industry to help businesses manage liability risk. That industry, in turn, will demand better security. ``A company doesn't buy security for its warehouse because it makes it feel safe,'' Schneier wrote. ``It buys that security because its insurance rates go down.''

Finally, liability will help establish minimally accepted standards and processes for developing products and securing networks. Spafford recommends a similar approach. ``You could set up a tax credit for companies that invest in certain kinds of security technology,'' he says. Or we could hold liable a company that uses software known to be insecure, just as we hold liable a contractor who ``builds a new factory out of flammable wood, not steel.''

After all, Internet security is a bit like public health: if you don't protect yourself against disease, you are putting not only yourself at risk, but others as well. Unfortunately, Clarke's recommendations, expected to be unveiled soon, are not likely to endorse any such approaches. Following pressure from the tech industry, which recoils at the idea of mandates, the plan has been watered down.
``We've gone from mandates, to recommendations, to suggestions,'' Spafford says. Policymakers have been right to be cautious. In a field so complex, the wrong approach could easily make things worse, not better. And no one likes to increase the cost of doing business -- at least not until they realize that the costs of a crippling attack could prove to be far greater.

Miguel Helft is a Mercury News editorial writer.

- ISN is currently hosted by Attrition.org
To unsubscribe email majordomoat_private with 'unsubscribe isn' in the BODY of the mail.
This archive was generated by hypermail 2b30 : Tue Feb 11 2003 - 10:47:19 PST | <urn:uuid:aeb35b50-10dc-4919-bbdd-e4a329ba208e> | CC-MAIN-2022-40 | http://lists.jammed.com/ISN/2003/02/0043.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00500.warc.gz | en | 0.962732 | 914 | 2.625 | 3 |
December 10, 2020
Source: AI Trends’ Staff
The Ford Motor Co. has made a substantial investment in AI, from investing $1 billion in Argo AI in 2017 to advance its self-driving car efforts, to developing centers of excellence to focus on machine learning and AI, where engineers determine the AI tools and methods that can be dispersed throughout the company.
The use of AI for predictive maintenance, anticipating when a part may fail before it does, is proving productive for manufacturing at Ford, according to Jeanne Magoulick, Advanced Manufacturing Manager at Ford Motor Co. She spoke as a member of a panel on Shaping Technology Innovation at MIT’s recent AI and the Work of the Future Congress 2020, held virtually.
“We are excited about predictive maintenance,” Magoulick said. “It will make us more efficient. We can identify when a machine is trending out of control and may need maintenance, so we can schedule at the next available window. It’s the next level of predictive maintenance from what we do today.”
It also helps in the ordering of needed replacement parts. “If we know the part is going bad, rather than holding the cash in our inventory, we can order it on demand,” she said.
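One simple way to detect a machine that is “trending out of control” is to compare recent sensor readings against a known-healthy baseline. The sketch below is illustrative only: the readings are invented, and Ford’s actual models are far more sophisticated.

```python
from statistics import mean, stdev

# Flag a machine whose recent sensor average drifts more than k standard
# deviations from a known-healthy baseline. All readings are invented.
baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9]  # healthy vibration
recent = [10.9, 11.2, 11.0, 11.4]                         # latest readings

def needs_maintenance(baseline, recent, k=3.0):
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(mean(recent) - mu) > k * sigma

print(needs_maintenance(baseline, recent))  # True -> schedule at next window
```

A flagged machine would then be scheduled for service at the next available window, and the replacement part ordered on demand, as described above.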
AI is also being applied to vision systems, making for more powerful abilities to conduct inspections during manufacturing. ”We can find defects anywhere, including seeing paint scratches,” Magoulick said.
In addition, AI is being applied to further automate the auto manufacturing process, with research into where to apply the innovation ongoing. “We are using machine learning to try to reduce our cycle times,” she said. “We recently reviewed a use case for transmission assembly, which reduced the cycle time slightly the first time through.”
In addition, Ford is experimenting with the use of natural language for voice commands to communicate with machines on the shop floor. “It’s Siri for manufacturing,” she said.
Additional areas of research include studying audio to detect quality defects, “using AI to assess what is a good and what is a bad digital audio signature,” she said. Also, Ford is experimenting with collaborative robots on the shop floor, she said.
Domain Expertise Comes from Those Doing the Work Today
Asked by session moderator David Mindell, Co-chair, MIT Task Force on the Work of the Future, and MIT Professor of Aeronautics, where the domain expertise comes from, Julie Shah, MIT Associate Professor, Department of Aeronautics and Astronautics, said it is primarily from the people doing the work today. “The domain expertise is with people on the shop floor doing the job today, learned through years of apprenticeship in some cases,” said Shah. “It might look easy in some cases on first look but it can be challenging to program.”
She added, “Being able to learn from observation and demonstration is best done directly from someone doing the task on the shop floor, to see the key factors in doing the job successfully.”
Panelist Daron Acemoglu, MIT Professor of Economics, in response to a question from Dr. Mindell on whether AI will make better engineers, stressed the need for AI engineers to have a “concrete understanding” of the social implications of decisions they will make. He also stressed the importance of government policy.
“Government priorities are signals,” he said. “If the government gives up on the agenda of creating better technology, it’s natural for researchers to do that too.”
He is concerned that AI researchers maintain autonomy from the corporate world, and that big tech companies fund much of AI research in their own AI labs. “They have their own agendas,” he said. “If those companies set the tone for leading AI labs, how can we expect the AI research to do anything but parrot the priorities of those companies. It’s a difficult lesson. We are not really establishing our autonomy. We are saying good research means we are more integrated with Google, Amazon and IBM. Autonomy is critical in this area.”
Rus Sees “Problem-Driven” Research With Industry as Productive
He was challenged on this point a bit by panelist Daniela Rus, Director MIT CSAIL, and MIT Professor of Electrical Engineering and Computer Science, who said she has had some good experiences collaborating with researchers in private industry.
“I think there is a fair bit of autonomy and a number of programs that support problem-driven research,” she said. “Maybe there is not enough funding, and in some sense, where the government is lacking, the companies are stepping in.”
She added, “I find working with companies can be enriching and empowering,” mentioning a collaboration with Toyota Research Institute about five years ago to advance the science and AI and robotics research. Outlining her thoughts, she said, “When I think about how the university and industry research labs connect together, I have a mental model where the industry development lab works on products for today, the industry research lab works on problems of tomorrow, and the university research role is to think about the day after tomorrow, connecting to how those advances matter. The applications allow us to root our ideas into things the world cares about.”
Mindell asked if we should worry about AI taking over too many functions of humans. Prof. Acemoglu said, “There is a choice. There is no iron-clad rule on what humans can do and what technology can do. They are both fluid. It depends on what we value.”
Prof. Shah agreed with the sentiment. “The machines are still performing very narrowly-defined tasks,” she said. “Deep learning is a functional approximator, like algebra and calculus. It’s how we take those tools and use them for a purpose, and how we define success for those systems that matters. We might be trying to replace some aspect of what a human is doing today, but none of these systems operate truly independently. So asking what is the way we can have these technologies achieve our larger goals is the critical question.”
Rus ended the session on an optimistic note. “In the scientific community, we advance the science and engineering or intelligence and in doing so, we accomplish many things. We get a better handle on life, and we develop a broader range of machine capabilities. I am excited about using the latest advances in AI, machine learning and robotics to make certain jobs easier, to make life easier,” she said.
Technology has allowed people to come closer together during the pandemic, she said, “Despite the fact that the world is in the middle of a pandemic. And technology has allowed us to develop a vaccine more quickly, and that is helping us address the disease.” | <urn:uuid:e2c7a50c-7bf0-4efb-92b2-16e5341f1413> | CC-MAIN-2022-40 | https://internetofbusiness.com/fords-use-of-ai-an-example-of-shaping-innovation-in-mit-future-of-work-session/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00500.warc.gz | en | 0.95905 | 1,442 | 2.515625 | 3 |
As the agency's next-gen satellites come online, it expects more accurate forecasts and increased lead time for severe weather.
Hurricane Harvey has already claimed the lives of more than two dozen people, displaced tens of thousands across Texas and could eventually cost as much as $75 billion in damages.
But those figures may be lower than what they otherwise might have been given the accuracy of early prediction models from the National Hurricane Center. The NHC, part of the National Oceanic and Atmospheric Administration, used computer models parsing data from weather satellites and other instruments to correctly predict Harvey’s Friday landfall in Southeast Texas three days before the storm did just that.
The NHC’s model called for potential flooding and storm surges 72 hours before the storm crashed into the Texas coastline.
In the near future, NOAA’s computer models, which are based on data points like wind speed, cloud movement and moisture levels, are expected to improve further when NOAA’s next-generation geostationary satellite system comes online, according to Steve Goodman, the chief scientist for NOAA’s GOES-R fleet.
GOES-16, which stands for Geostationary Operational Environmental Satellite, is the first of three satellites in the new $11 billion constellation. Launched in November, GOES-16 is expected to be fully operational in the coming months.
The GOES-R fleet brings an exponential upgrade to the amount of data NOAA’s computer models use, producing up to nine terabytes of weather information per day—a total that will double in 2018 when the second GOES-R satellite is launched. That’s the equivalent of 30,000 CD-ROMs full of weather data.
“We’re an enabling capacity for the forecasters, and the ultimate hope is we’ll have more accurate forecasts and warnings and get an increase of lead time that we all expect will help save lives and property,” Goodman told Nextgov.
The volume of raw weather data is important, Goodman said, and required extensive modernization of the ground systems and algorithms NOAA uses to receive and analyze it. But GOES-R will also improve the kinds of data forecasters can access and the speed at which they can make important decisions, such as issuing warnings.
“It’s been 22 years since we last had new imaging technology,” Goodman said. NOAA began budgeting and developing the current aging GOES fleet of satellites in 1999 using imaging technology that was already a little old. GOES-R is a “monumental upgrade” over existing weather satellites, Goodman said, and part of the reason it takes scientists nearly a year to validate the satellite is because of its sophisticated instruments and capabilities.
For example, GOES-R will have four times better spatial resolution than today’s crop of aging GOES satellites. From its position 20,000-plus miles above sea level, GOES-R will allow forecasters far better imaging of cloud tops and storm formations that could alert forecasters to a potential weather event before a storm eye forms, Goodman said.
GOES-R will also scan the Earth through 16 spectral bands as opposed to today’s crop, which only scans through five bands. One key GOES-R spectral band will allow forecasters to see where moisture in a storm system is piling up—and conversely, where dry air is located—which can be used to predict whether and how much a storm will intensify.
GOES-R is also equipped with a lightning mapper, a new capability that allows the satellite to “see” every lightning strike in the air or toward the ground.
“With these different bands, we can see different things about what is happening in the atmosphere, put it together and have a visual impact to see what is really going on,” said Michael Stringer, acting systems program director for GOES-R.
The current GOES fleet can observe specific weather systems through a spectral band once every minute, but “at one-quarter the resolution of GOES-R and at the cost of doing everything else,” Stringer said.
When current GOES satellites observe a system through a spectral band, they aren’t able to simultaneously image the remaining continental United States. In other words, it can’t do those two important things at once. GOES-R, however, will.
Moreover, the latency getting data from scans drops from several minutes down to approximately 23 seconds, meaning the images of storms forecasters see are essentially a real-time look at the storm.
“The benefit of GOES-16 and the entire series is that it can do up to 30-second [refreshes] while also doing five-minute images of the whole [continental U.S.] and 15-minute images of the full disk of the hemisphere,” Stringer said. “With the legacy system, if you wanted to go to one-minute imagery, you’d have to give up everything else.”
While GOES-R is not officially cleared as operational, the satellite is producing experimental data forecasters can access and have used for decision-making purposes, including regarding Hurricane Harvey, Goodman said. The satellite is operated by NOAA out of its Satellite Operations Center in Suitland, Maryland, with input from the National Weather Service. | <urn:uuid:63dd4b51-72ab-4723-b523-5406cc508d3f> | CC-MAIN-2022-40 | https://www.nextgov.com/analytics-data/2017/08/how-noaa-plans-improve-tracking-harvey-storms/140650/?oref=ng-next-story | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00500.warc.gz | en | 0.934939 | 1,135 | 2.625 | 3 |
Machine learning is the foundation for today’s insights on customers, products, costs and revenues; its algorithms learn from the data they are given.
Some of the most common examples of machine learning are Netflix’s algorithms, which suggest movies based on films you have watched in the past, or Amazon’s algorithms, which recommend products based on what other customers have bought before.
The choice of algorithm can broadly be decided by answering the following questions:
- How much data do you have, and is it continuous?
- Is it a classification or a regression problem?
- Are the variables predefined (labeled), unlabeled, or a mix?
- Are the data classes skewed?
- What is the goal: to predict or to rank?
- Does the result need to be easy to interpret, or can it be hard?
Most used algorithms for various business problems:
Decision Trees: Decision tree output is very easy to understand, even for people from a non-analytical background, and reading or interpreting it requires no statistical knowledge. Decision trees are among the fastest ways to identify the most significant variables and the relationships between two or more variables, and they are excellent tools for helping you choose between several courses of action. The most popular decision tree algorithms are CART, CHAID, and C4.5.
- Investment decisions
- Customer churn
- Banks loan defaulters
- Build vs Buy decisions
- Company mergers decisions
- Sales lead qualifications
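To show the core idea, the sketch below finds the best single yes/no split, a one-level decision tree, on a made-up churn dataset, using the Gini impurity that algorithms such as CART minimize:

```python
# One-level decision tree (a "stump") chosen with Gini impurity, the
# criterion CART uses. The churn-style rows below are invented.
rows = [  # ((monthly_spend, support_calls), churned?)
    ((20, 5), 1), ((25, 4), 1), ((30, 6), 1), ((22, 3), 1),
    ((80, 1), 0), ((90, 0), 0), ((70, 2), 0), ((85, 1), 0),
]

def gini(labels):
    """Impurity of a binary label set: 0 is pure, 0.5 is a 50/50 mix."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_split(rows):
    best = None  # (weighted_gini, feature_index, threshold)
    for f in range(2):
        for t in sorted({x[f] for x, _ in rows}):
            left = [y for x, y in rows if x[f] <= t]
            right = [y for x, y in rows if x[f] > t]
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(rows)
            if best is None or score < best[0]:
                best = (score, f, t)
    return best

score, feature, threshold = best_split(rows)
print(feature, threshold, score)  # 0 30 0.0 -> split on monthly_spend <= 30
```

A full tree algorithm such as CART simply applies this same search recursively to each resulting subset.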
Logistic Regression: Logistic regression is a powerful statistical way of modeling a binomial outcome with one or more explanatory variables. It measures the relationship between the categorical dependent variable and one or more independent variables by estimating probabilities using a logistic function, which is the cumulative logistic distribution.
- Predicting the Customer Churn
- Credit Scoring & Fraud Detection
- Measuring the effectiveness of marketing campaigns
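A minimal logistic regression can be trained from scratch with gradient descent; no ML library is required. The churn-style numbers below are invented for illustration:

```python
import math

# Toy logistic regression trained with plain stochastic gradient descent.
# Feature: hours of product use per week; label: churned (1) or stayed (0).
xs = [1.0, 1.5, 2.0, 2.5, 6.0, 7.0, 8.0, 9.0]
ys = [1, 1, 1, 1, 0, 0, 0, 0]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    for x, y in zip(xs, ys):
        p = sigmoid(w * x + b)   # predicted churn probability
        w -= lr * (p - y) * x    # gradient of the log-loss w.r.t. w
        b -= lr * (p - y)        # ... and w.r.t. b

# Low-usage customers should score near 1 (churn), heavy users near 0.
print(round(sigmoid(w * 1.0 + b), 3), round(sigmoid(w * 9.0 + b), 3))
```

The sigmoid squashes the linear score into a probability between 0 and 1, which is what makes the model suitable for binomial outcomes such as churn or fraud.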
Support Vector Machines: Support Vector Machine (SVM) is a supervised machine learning technique that is widely used in pattern recognition and classification problems – when your data has exactly two classes.
In general, SVM can be used in real-world applications such as:
- detecting persons with common diseases such as diabetes
- hand-written character recognition
- text categorization – news articles by topics
- stock market price prediction
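A linear SVM can be trained with a simple subgradient method on the hinge loss plus an L2 penalty (the idea behind the Pegasos algorithm). The two-class points below are invented and linearly separable:

```python
# Toy linear SVM: minimize hinge loss plus an L2 penalty by subgradient
# descent. Points and labels (+1/-1) are made up.
points = [(1.0, 2.0), (2.0, 1.5), (1.5, 1.0), (6.0, 6.5), (7.0, 6.0), (6.5, 7.5)]
labels = [-1, -1, -1, 1, 1, 1]

w = [0.0, 0.0]
b = 0.0
lr, lam = 0.01, 0.01  # learning rate and regularization strength
for _ in range(3000):
    for (x1, x2), y in zip(points, labels):
        # L2 shrinkage applies every step; the hinge term fires only when
        # the point is inside the margin (y * f(x) < 1).
        w[0] -= lr * lam * w[0]
        w[1] -= lr * lam * w[1]
        if y * (w[0] * x1 + w[1] * x2 + b) < 1:
            w[0] += lr * y * x1
            w[1] += lr * y * x2
            b += lr * y

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b >= 0 else -1

print(predict(1.0, 1.0), predict(7.0, 7.0))
```

Real-world SVM libraries add kernels to handle non-linear boundaries; the toy above is purely linear, matching the two-class case described in the text.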
Naive Bayes: It is a classification technique based on Bayes’ theorem and very easy to build and particularly useful for very large data sets. Along with simplicity, Naive Bayes is known to outperform even highly sophisticated classification methods. Naive Bayes is also a good choice when CPU and memory resources are a limiting factor.
In general, Naive Bayes can be used in real-world applications such as:
- Sentiment analysis and text classification
- Recommendation systems like Netflix, Amazon
- To mark an email as spam or not spam
- Facebook like face recognition
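A toy Naive Bayes spam filter can be built from scratch on a made-up six-document corpus. Laplace smoothing keeps unseen words from zeroing out a class, and the class priors are equal here (three documents each), so they cancel and are omitted:

```python
import math
from collections import Counter

# Toy word-count Naive Bayes spam filter. The six "documents" are invented.
spam_docs = ["win money now", "free money win", "claim free prize"]
ham_docs = ["meeting at noon", "project status update", "lunch at noon"]

def word_counts(docs):
    return Counter(w for d in docs for w in d.split())

spam_counts = word_counts(spam_docs)
ham_counts = word_counts(ham_docs)
vocab = set(spam_counts) | set(ham_counts)

def log_score(words, counts):
    # Laplace (+1) smoothing keeps unseen words from zeroing out a class.
    total = sum(counts.values())
    return sum(math.log((counts[w] + 1) / (total + len(vocab))) for w in words)

def classify(text):
    words = text.split()
    s = log_score(words, spam_counts)
    h = log_score(words, ham_counts)
    return "spam" if s > h else "ham"

print(classify("free money"), classify("status meeting"))  # spam ham
```

Summing log-probabilities instead of multiplying raw probabilities is the standard trick to avoid numeric underflow on long documents.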
Apriori: This algorithm generates association rules from a given data set. An association rule implies that if an item A occurs, then item B also occurs with a certain probability.
In general, Apriori can be used in real-world applications such as:
- Market basket analysis like amazon – products purchased together
- Autocomplete functionality, as in Google search, which suggests words that frequently occur together
- Identify Drugs and their effects on patients
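The Apriori idea (only extend itemsets whose every subset is already frequent) fits in a short sketch; the market baskets below are invented:

```python
# Toy Apriori pass over market baskets: keep itemsets appearing in at
# least min_support baskets, and only extend itemsets that survived the
# previous round (every subset of a frequent itemset must be frequent).
baskets = [
    {"bread", "milk"},
    {"bread", "diapers", "beer", "eggs"},
    {"milk", "diapers", "beer", "cola"},
    {"bread", "milk", "diapers", "beer"},
    {"bread", "milk", "diapers", "cola"},
]
min_support = 3

def support(itemset):
    return sum(1 for b in baskets if itemset <= b)

items = sorted({i for b in baskets for i in b})
frequent = [frozenset([i]) for i in items if support(frozenset([i])) >= min_support]
level, k = frequent, 2
while level:
    candidates = {a | b for a in level for b in level if len(a | b) == k}
    level = [c for c in sorted(candidates, key=sorted) if support(c) >= min_support]
    frequent += level
    k += 1

print([sorted(s) for s in frequent if len(s) > 1])
# [['beer', 'diapers'], ['bread', 'diapers'], ['bread', 'milk'], ['diapers', 'milk']]
```

Association rules such as "diapers -> beer" are then read off the frequent itemsets by applying a confidence threshold.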
Random Forest: Random Forest is an ensemble of decision trees. It can solve both regression and classification problems with large data sets, and it helps identify the most significant variables from thousands of input variables.
In general, Random Forest suits the same real-world applications as single decision trees (for example churn prediction, credit scoring, and disease risk modeling), typically with better accuracy at the cost of interpretability.
The most powerful form of machine learning in use today is called “deep learning”.
In today’s Digital Transformation age, most businesses will tap into machine learning algorithms for their operational and customer-facing functions. | <urn:uuid:f5cd7ec2-ae7d-44bb-8c90-5f16292c3c86> | CC-MAIN-2022-40 | https://www.crayondata.com/want-to-know-how-to-choose-a-machine-learning-algorithm/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00500.warc.gz | en | 0.919809 | 765 | 2.90625 | 3 |
Cloud hyperscalers are using power purchase agreements (PPAs) and renewable energy credits (RECs) to effectively reduce data center carbon emissions.
They are also obsessed with efficiency. Having sought and secured sustainable energy sources they are now thinking differently about power ride through and energy storage inside the data center.
The UPS system sitting between the grid and the IT load is the health insurance of the data center. While historically the UPS was generally discussed in the context of its capacity and backup capability in an unplanned power outage (the U in uninterruptible), today its ability to provide conditioned, efficient power in normal operations has grown in importance. As cloud data centers scale to 100MW and above, economic and electrical efficiency become ever more important.
Putting sustainability first
The sustainability, environmental and economic efficiency of the data center is based on the available energy mix, the UPS efficiency, and the back-up energy source.
The UPS should provide the flexibility of configuration and operation to work efficiently with alternative grid power energy sources and different backup energy stores.
In July 2020 the IEA (International Energy Agency) reported that “hyperscale data center operators in particular are leaders in corporate renewables procurement, particularly through power purchase agreements (PPAs).”
The top four corporate users of renewables in 2019 were all ICT companies, led by Google. In 2018, Google (10 TWh) and Apple (1.3 TWh) each purchased or generated enough renewable electricity to match 100 percent of their data center energy consumption. Equinix consumed 5.2 TWh in 2018 (92 percent renewables), while Facebook data centers consumed 3.2 TWh (75 percent renewables). Amazon and Microsoft sourced about half of their data center electricity from renewables.
These efforts are to be commended. Yet Cloud providers are not just being good citizens. The broader context for operating as efficient a data center as possible is economic, reputational, and in the near future as likely as not to be driven by regulatory compliance through the global efforts to de-carbonize the economy and stiffer regulatory environments such as the EU Green deal.
Fundamentally a cloud data center is the physical embodiment of an always-on service where costs must be kept as low as possible. Power must be available 24x7, 365 days a year. The quality of the service is a direct product of the cost, quality, and reliability of the power provision. The SLA of the cloud platform availability is directly tied to the power back up reliability.
Cloud providers think long term about data center technology, operations, and capital assets. Their success is built on cost efficiency in capital deployment and data center operations.
Inside the data center, cloud providers are increasingly seeking the greenest possible infrastructure options such as those which work well with renewable power sources, provide efficient feeding of power back to the grid and that can use clean energy storage such as kinetic sources for back up.
Data centers are estimated to use around 200 TWh of electricity annually, approximately one percent of global electricity generation.
The big cloud providers are increasingly turning to alternative energy sources and different backup technologies to power their critical infrastructure.
Sustainability and environmental protection mean using systems that take care of the environment and have a small carbon footprint, while at the same time addressing long-term operational and investment cost concerns.
There are odd times when you may wish to use the stdout of a program as the stdin to more than one follow-on. bash lets you do this by redirecting to a command instead of just a file. The idea is that you put the command in >( ) and it works. One up to bash, but what about something that will work on standard shells?
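For the record, here is a sketch of the bash version. It is bash-only, and the temp file is just so we can collect the second branch's output for display; the variable names are illustrative:

```shell
#!/usr/bin/env bash
# bash-only: >(command) is "process substitution". tee sees it as a file
# name, but anything written to it goes to the command's stdin.
hello=$(mktemp)
out=$(echo greeting | tee >(sed s/greeting/hello/ > "$hello") | sed s/greeting/world/)
sleep 1                    # crude: bash doesn't wait for >(...) to finish
branch=$(cat "$hello")
rm -f "$hello"
echo "$branch $out"        # hello world
```

That sleep is the giveaway that this is less tidy than it looks: bash fires the substituted process off asynchronously, so you have no guarantee of when it finishes.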
The answer is to use a named pipe or fifo, and it’s a bit more hassle – but not too much more.
As an example, let's stick to the “hello world” theme. The command
sed s/greeting/hello/ will replace the word “greeting” on stdin with “hello” on stdout – everything else will pass through unchanged. Are you okay with that? Try it if you’re not comfortable with sed.
Now I’m going to send one stdout to two different sed instances at once:

sed s/greeting/hello/
sed s/greeting/world/
To get my stdout for test purposes I’ll just use “echo greeting”. To pipe it to a single process we would use:
echo greeting | sed s/greeting/hi/
Our friend for the next part is the
tee command (as in a T in plumbing). It copies stdin to two different places, stdout and (unfortunately for us) a file. Actually it can copy it to as many files as you specify, so it should probably have been called “manifold”, but this is too much to ask for an OS design that spells create without the trailing ‘e’.
Tee won’t send output to another process’s stdin directly, but the files can be fifos (named pipes). In older versions of UNIX you created a pipe with the
mknod command, but since FreeBSD moved to character devices only this is deprecated and you should use
mkfifo instead. Solaris also uses
mkfifo, and it came in as far back as 4.4BSD, but if you’re using something old or weird check the documentation. It’ll probably be something like
mknod <pipename> p .
Here’s an example of it in action, solving our problem:
mkfifo mypipe
sed s/greeting/hello/ < mypipe &
echo greeting | tee mypipe | sed s/greeting/world/
rm mypipe
It works like this: First off we create a named pipe called mypipe. Next (and this is the trick), we run the first version of sed, specifying its input to come from “mypipe”. The trailing ‘&’ is very important. In case it had passed you by until now, it means run this command asynchronously – or in background mode. If we omitted it,
sed would sit there waiting for input it would never receive, and we wouldn’t get the command prompt back to enter the further commands.
The third line has the tee command added to send a copy of stdout to the pipe (where the first
sed is still waiting). The first copy is piped in the normal way to the second sed.
Finally we remove the pipe. It’s good to be tidy, but it doesn’t matter if you want to leave it there and use it again.
As a refinement, pipes with names like “mypipe” in the working directory could lead to trouble if you forgot to delete it or if another job picks the same name. Therefore it’s better to create them in the /tmp directory and add the current process ID to the name in order to avoid a clash. e.g.:

mkfifo /tmp/mypipe.1.$$
$$ expands to the process-ID, and I added a .1. in the example so I can expand the scheme to have multiple processes receiving the output – not just one. You can use tee to send to as many pipes and files as you wish.
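As a sketch of that fan-out (pipe and file names again illustrative), here are three sed processes fed from one echo: two via named pipes and one via the ordinary pipeline:

```shell
# One stdout fanned out to three processes: two via named pipes, one via
# the normal pipeline. Works in plain /bin/sh; names are illustrative.
mkfifo /tmp/pipe.1.$$ /tmp/pipe.2.$$
sed s/greeting/hello/ < /tmp/pipe.1.$$ > /tmp/out.1.$$ &
sed s/greeting/there/ < /tmp/pipe.2.$$ > /tmp/out.2.$$ &
echo greeting | tee /tmp/pipe.1.$$ /tmp/pipe.2.$$ | sed s/greeting/world/ > /tmp/out.3.$$
wait                       # both background seds finish before we read
result=$(cat /tmp/out.1.$$ /tmp/out.2.$$ /tmp/out.3.$$)
echo "$result"             # hello / there / world, one per line
rm /tmp/pipe.1.$$ /tmp/pipe.2.$$ /tmp/out.1.$$ /tmp/out.2.$$ /tmp/out.3.$$
```

Writing each branch to its own file sidesteps the output-ordering question entirely, which is often the tidiest answer anyway.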
If you run the example, you’ll probably get “hello world” output on two lines, but you might get “world hello”. The jobs have equal status, so there’s no way of knowing which one will complete first, unless you decided to put
sleep at the start of one to force the issue.
When I get around to it a more elaborate example might appear here. | <urn:uuid:73580e16-a785-44f6-af0c-275c45666051> | CC-MAIN-2022-40 | https://blog.frankleonhardt.com/2013/pipe-stdout-to-more-than-one-process-on-freebsd/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00700.warc.gz | en | 0.918633 | 967 | 2.84375 | 3 |
A new high altitude IoT research project is set to launch that will begin sending data to the cloud from beyond the clouds.
A helium balloon fitted with seven radios, 38 sensors, and six cameras will be sent 100,000 feet into the air so that it can stream real-time telemetry and live flight video to Microsoft’s Azure IoT platform.
The project has been named Pegasus II and is scheduled for a test flight over the next few days, following a cancelled launch back in February due to a malfunctioning air pressure sensor. The upcoming test will prove critical to ensure that communications with the payload, ground station and field gateways are all working correctly, as well as any associated software. If successful, a new launch window will be declared for later this month.
“The Pegasus Mission is all about experimentation and the search for new ideas to achieve something that is not currently possible,” explains the project website. “This is not our day job, it’s our passion for experimentation. High Altitude Science provides an interesting proving ground for this, where it takes literally a mission to get a craft into the upper atmosphere, 20 miles above the surface of the Earth.”
IoT in space
The Pegasus II project is not the first effort that has looked to take IoT solutions into the stratosphere. Airbus Defence and Space began work on the MUSTANG project last year, aiming to create a global IoT network that relies on both terrestrial and satellite terminals. The vast quantities of data provided by extra-terrestrial sources – NASA collects hundreds of terabytes every hour – also presents possible IoT opportunities.
The team at Pegasus II will be hoping that the tests scheduled to take place this weekend are without issue. Anyone that wants to follow the team’s progress can access live feeds from the project website or download the Pegasus Mission smartphone app.
Want to stay up-to-date with the latest IoT news and analysis? Sign up for our weekly newsletter here! | <urn:uuid:0b6318a2-dc85-4114-ae96-53889423e5fd> | CC-MAIN-2022-40 | https://internetofbusiness.com/microsofts-azure-sends-iot-space/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00700.warc.gz | en | 0.913449 | 406 | 2.578125 | 3 |
On some level, if you ask a person if their smartphone data is being collected, they’ll say “yes.” Most people know their data is highly sought after by companies and governments. Nobody likes this, yet an increasing number of people use smartphone apps every day despite these known privacy risks.
However, people probably underestimate just how much data their government possesses and how this data is used. News reports about the US Customs and Border Protection (CBP) agency shed light on just how far the government can reach.
Vice News reports that US CBP agents seize tens of thousands of devices every year from travellers, even when they aren’t near a border or haven’t been charged with a crime. Data from these devices is then uploaded into a searchable database and remains there for up to 75 years.
It’s known that governments collect data on citizens in the name of national security. However, recent changes have decentralized the process, creating potential scenarios where border agents can extract large quantities of sensitive information that has no bearing on their specific investigation.
While this agency claims that it limits access to trained forensic analysts, privacy advocates warn that the scope of the data obtained remains large and that the agency has a history of overstepping. So long as the CBP can demonstrate the lowest burden of proof, “reasonable suspicion” of a crime, the types of data they can access include:
- GPS history
- Text messages
- Social media posts
- Financial accounts
- Transaction records
People uncomfortable with the idea of governments tracking their metadata should object to this even more. What could be a more flagrant privacy breach than government agents having full access to your private pictures and personal communications?
Activists worry that the CBP is stockpiling a centralized collection of data, which the government may then repurpose for unrelated matters down the road. The nightmare scenario where an initial government privacy invasion unfairly provides the basis for a second government overreach may seem far-fetched, but that it’s even plausible is a major red flag.
People need resources to help keep their information secure.
Buying Commercial Data
It’s possible that US border agents already have a cache of data on you, even if you have never interacted with one in person. The agency has location data on Americans from across the country, including those who don’t live near the border.
The US government doesn’t need to seize your device to obtain massive troves of your data. They also claim they don’t need a warrant, either.
CBP buys app location information from middlemen providers, who specialize in harvesting this data and selling it to law enforcement agencies so they can track individuals or groups. Venntel, one of these companies, sources its data from innocuous online apps that countless people use every day.
Most people use these types of apps without giving them much thought. At worst, they may wonder what the app does with their data.
It’s bad enough that private companies obtain your data under one pretext and sell it to third-party advertisers without you knowing, but quietly selling it to law enforcement agencies is a problem of a higher magnitude.
While it’s likely that most people would object to this type of privacy invasion, it’s not clear the government is doing anything illegal. A group of Democratic senators are calling on the Department of Homeland Security to investigate.
But in an important sense, legality here is moot. It shouldn’t be comforting if these practices are ultimately illegal because the government has already participated in them for years. On the other hand, it’s even worse if the CBP hasn’t broken the law, and the government permits this type of invasive data gathering.
Whatever the investigation finds, people need a way to ensure their communication remains confidential.
Real-World Examples of Breaches
If you don’t even know who has your sensitive data, how can you trust how it’s being used or misused? Most people have never had the government breach their privacy, so fears about potential overreach may seem overblown to them.
Many activists and journalists know that the danger is all too real because they have experienced it first-hand.
In 2019, the American Civil Liberties Union (ACLU) claimed the US government surveilled three not-for-profit organizers, who were on a list of more than 50 activists and journalists. The ACLU’s complaint alleges the activists’ relief efforts were hampered on both sides of the border, derailing their lives and work.
The CBP had defended the list, claiming the people on it were linked to the 2018 migrant caravan. However, there were activists on the list who had no prior experience working with anyone related to the caravan.
It’s unclear precisely what these activists did to draw the US government’s attention, but their privacy would have remained intact had they been using only encrypted communication solutions rather than third-party apps and web browsers.
Breaching Privacy is Non-Partisan
With a new Democratic administration set to take office in January 2021, it’s worth recalling that both political parties in the US have presided over enormous privacy breaches. The National Security Agency (NSA) has grown enormously since the 9/11 terror attacks, including under President Barack Obama.
In fact, one of Obama’s final acts as president was to allow the NSA to share its vast information-gathering network with 16 other agencies in the US intelligence community. It doesn’t matter who is in office: the government can access your data unless you take steps to prevent them from doing so.
Usually, the people interested in accessing your private communications are one step ahead of you. By the time you find out how far their reach is, it’s too late, and they’ve already violated your sensitive data. Check out these Myntex encryption success stories to learn more about how the market’s best encryption, vital security measures that patch up any remaining susceptibilities, and safe and reliable data storage have made an impact. Keeping data private is something you will like. | <urn:uuid:0a49970d-3b58-4b7b-81b6-1f1eca7b00fb> | CC-MAIN-2022-40 | https://myntex.com/blog/index.php/2020/12/03/if-you-knew-how-data-was-really-collected-you-wouldnt-like-it/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00700.warc.gz | en | 0.956858 | 1,298 | 2.625 | 3 |
OCR: Optical Character Recognition for Text Recognition
Bots & People
Innovative technologies such as Robotic Process Automation (RPA) and Artificial Intelligence (AI) are continuously pushing the digital boundaries for companies. They are being held back primarily by the large amount of information that is still being generated and stored in paper-based, analog formats. The solution to the problem of turning analog documents into digital file formats is Optical Character Recognition (OCR).
The use of OCR systems is one of the first steps towards automation. Yet OCR is by no means new. Anyone who works in an office equipped with a modern printer has most likely had to deal with OCR at some point. But what does OCR actually stand for? What can it be used for? And how does it work at all? The answer to these questions is so important because those who know how OCR works can take full advantage of its capabilities in the context of automation and process optimization.
OCR - From Analog To Digital Document
Optical character recognition is a widely used technology for automating text extraction from documents or an image-based PDF, TIFF or JPG file and converting the extracted text into machine-readable text forms. Optical character recognition software processes a digital image by searching and recognizing characters such as letters, numbers and symbols and transforming them into editable text.
OCR is a technology that turns analog into digital documents. When OCR scans a word, the algorithm recognizes certain parts or shapes of a digitized image, such as the letters, but it does not understand the meaning of the word. Advanced OCR software can also extract and export the size and formatting of the text, as well as the layout of the text. Once a document has been processed with OCR technology, the text data can be easily edited, searched, indexed, and retrieved. The digitized documents can also be compressed into ZIP files, keywords can be highlighted or embedded into a website.
How Does Optical Character Recognition Work?
The basic steps are image acquisition, pre-processing, segmentation, feature extraction, classification and post-processing. In the first step, the physical texts are scanned and copied and converted to binary by the OCR software. In the next step, the software analyzes the scanned images for light and dark areas. Light areas are recognized as background and dark areas as written characters. Next, the program processes the dark areas to find alphabetic letters, numeric digits and symbols. There are different techniques for OCR software, but most of them refer to a character, word or text block.
Two Methods - One Goal
Before OCR software can work smoothly, it must go through a pattern recognition process. This method feeds the program with text samples in different fonts and formats, which are then used to recognize and compare characters in the scanned text.
Another method is feature recognition. This uses certain features of letters, numbers, or symbols to recognize characters in the scanned image. Features could be the number of angled lines, cross lines or curves in a written character. For the uppercase letter "A", this could be two diagonal lines meeting a horizontal line in the middle. Then, once numbers and characters have been identified, they can be converted to an ASCII code (American Standard Code for Information Interchange) - the most common format for text files in computers and on the Internet.
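To make the pattern-matching idea concrete, here is a deliberately tiny sketch in Python. The 3x3 bitmap "font" of two letters is invented purely for illustration; real engines match against full-resolution glyph images with far more sophisticated scoring:

```python
# Toy pattern-matching "OCR": each known character is stored as a tiny bitmap
# template; an input glyph is classified by finding the closest template.
TEMPLATES = {
    "I": (0b010, 0b010, 0b010),   # a vertical stroke
    "L": (0b100, 0b100, 0b111),   # a vertical stroke with a foot
}

def match_char(glyph):
    """Return the template character whose pixels differ least from glyph."""
    def distance(a, b):
        # count differing pixels row by row (Hamming distance on the bits)
        return sum(bin(ra ^ rb).count("1") for ra, rb in zip(a, b))
    return min(TEMPLATES, key=lambda ch: distance(TEMPLATES[ch], glyph))

# A slightly noisy "I": one stray pixel in the bottom row still matches.
noisy_i = (0b010, 0b010, 0b011)
print(match_char(noisy_i))        # I
print(ord(match_char(noisy_i)))   # 73 -- its ASCII code, as described above
```

The closest-match step is what makes the method tolerant of small scanning defects; production systems score against thousands of font samples rather than two hand-drawn templates.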
Trust Is Good, Control Is Better
Once the text has been processed by OCR, however, it should then be checked again to ensure that the process was successful and that the text was correctly and completely extracted and converted. Recognition accuracy is typically around 99 percent, but the remaining one percent can still contain a serious error, for example if a comma in a price quote is not recognized in the original document. Poor contrast or blurred characters in the original significantly affect recognition accuracy. Nevertheless, accuracy can be improved if OCR is coupled with a lexicon so that the algorithm can refer to a list of words that occur in the scanned text.
Advantages Of OCR
OCR solutions improve the accessibility of information for users. Before OCR software was available, the only way to digitize printed paper documents was to retype the text manually. This was not only enormously time-consuming, but also associated with inaccuracies and typos.
The first successful steps with an optical character recognition software were taken by the financial sector. The characteristic font used for the account number and bank code on checks - called OCR-A - can still be admired on bank checks today. It was designed to make each letter and number distinguishable from the others. OCR technology became popular in the early 1990s when an attempt was made to digitize historical newspapers.
OCR Saves Time & Resources
Since then, the technology has undergone several improvements. Today, solutions deliver near-perfect results. Advanced methods, such as zonal OCR, are used to automate complex document-based workflows. Organizations that use OCR capabilities to convert images and PDFs save time and resources that would be required to manually process non-scannable data.
Once transferred, OCR-processed text information can be more easily and quickly used by organizations through machines. This means a reduction in data transfer errors, huge resource savings, and improved productivity. Thanks to OCR software, companies can not only digitally store and better organize analog documents, but also prepare document-based workflows, which often rely heavily on PDF formats, for data extraction and subsequent automation. But more on that later!
From Printed Paper To Machine-Readable Document
Optical character recognition is a technology behind many familiar systems and services in our daily lives. Lesser known use cases include automating data entry, indexing documents for search engines, automatic license plate recognition, and assisting the blind and visually impaired. Probably the best-known use case for OCR is converting printed paper documents into machine-readable text documents. Once a scanned document has passed through the OCR software, the text of the document can be processed using word processing programs such as Microsoft Word or Google Docs.
More Transaction Security For Banks
Optical character recognition is most commonly used by banks to improve transaction security and risk management. OCR can be used to scan important handwritten guarantee documents from customers, such as loan documents. The International Bank Account Number (IBAN) is used to identify bank accounts across borders. The IBAN can vary in length and can consist of both numbers and letters. To facilitate cross-border transactions, banking apps with built-in OCR software can scan the IBAN for further transaction processing instead of laboriously typing it in. Various providers offer special application-oriented OCR systems that make use of business rules, standard expressions, or extensive industry information, for example.
Simplified Data Entry And Data Categorization
OCR can be used for a variety of data entry and data categorization tasks. For example, data entry of business documents can be automated by converting hard copies of legal or historical documents into PDF files that can then be edited, formatted, and searched. But OCR can also be used for data categorization, for example, to automate the sorting of letters for mail delivery or to deposit checks electronically without the need for a bank teller.
Data Indexing And Pattern Recognition
Other use cases include adding certified legal documents to an electronic database and indexing printed material for search engines or using it in security cameras to recognize license plates. From capturing business cards to extracting incoming invoices from vendor emails, optical character recognition systems specialize in converting printouts into pixels through pattern recognition and electronic capture of visual information. OCR has long been used in invoice processing to free employees from the tedious re-keying of invoice data and is a key component of broader automation solutions.
OCR and RPA For Process Optimization
Optical character recognition is also a key element for any good RPA solution. It involves converting unstructured data from scanned or sent text templates into structured, digitized data that can in turn be incorporated into digital business processes without the need for manual intervention. As a result, OCR combined with RPA enables companies to automate operational business processes that are still heavily dominated by completed forms to a much greater extent. The data obtained with OCR can then be routed to the various enterprise applications such as CRM, ERP or legacy system. An OCR engine fully embedded in the workflow of complex business process automations can automate the time-consuming tasks associated with manually processing invoices into readable data, for example.
What Does NLP Have To Do With OCR?
For unstructured documents, a combination of an optical character recognition tool and Natural Language Processing (NLP) has proven successful. It improves the readability of documents without knowing the context, format, or regional slang, and takes into account abbreviated words, short texts, or even hashtags. These solutions are quick to engineer and assimilate data well. In a nutshell, NLP helps improve word accuracy by replacing wrong words with correct ones.
This is because NLP is a component of artificial intelligence (AI) and enables computers to record, process and understand human language as it is spoken and written. To do this, NLP uses two techniques: syntax analysis and semantic analysis. In syntax analysis, NLP evaluates the meaning of a language based on grammatical rules. Semantic analysis works with algorithms to understand the meaning and structure of sentences.
ICR recognizes Even Spidery Handwriting
Many businesses struggle with large volumes of consumer-filled handwritten forms, such as registration forms and credit applications, that need to be scanned, digitized and transcribed. But even handwritten scribbles and different handwriting styles or fonts are now not a particularly big problem for optical character recognition. Intelligent Character Recognition (ICR), the logical evolution of OCR, uses neural networks, a machine learning (ML) technology, to learn and self-correct over time.
To do this, neural networks use vast amounts of handwritten training data with a variety of different styles and formats, and then compare each character to the training data to find the best match and most accurate transcription. In the process, ICR also analyzes and evaluates the scan result in terms of semantic context. ICR checks within the text whether it makes sense in terms of content to use a particular letter. In this way, ICR can even recognize handwritten notes that no human being can read anymore.
End-To-End Automation Of The Transcription Process
By using ICR to digitize handwritten forms and documents, companies can automate the transcription process end-to-end, significantly accelerating and simplifying it. ICR and OCR can now also be used to protect existing paper archives and important content of historical documents in fractured script that are at risk of decay and make them accessible in a legally secure manner. Companies such as Ancestry, a genealogy portal, are taking advantage of this to make historical documents available for members' personal research without requiring them to spend hours searching documents for information. OCR/ICR is also suitable for use in sorting processes in the inbox. Even handwritten notes on envelopes or other mail items can be recognized and forwarded accordingly.
Optical Character Recognition Tools You Should Know
The most significant optical character recognition solutions are Adobe Acrobat Pro DC, OmniPage Ultimate, Abbyy FineReader, Readiris and Rossum. Whereas in the past the amount of documents still to be scanned stood in the way of the paperless office, modern OCR tools can scan documents both individually and in batches, making the process much more efficient.
Adobe Acrobat Pro DC
Adobe Acrobat Pro DC offers an extensive list of options. The DC stands for Document Cloud. So users can access their files from any computer. Beyond the basic OCR features, the Pro version also offers the ability to annotate documents and provides special tools to scan spreadsheets and compare documents. Within seconds of being scanned, the documents can be edited directly on the screen as PDF files.
OmniPage Ultimate

OmniPage Ultimate offers a wide range of input, output and workflow options that go far beyond what one would normally expect. One can quickly and easily convert individual paper documents or even batches of paper into any digital file format. OmniPage Ultimate impresses with its high conversion accuracy. Custom workflows can be set up so that documents are automatically delivered to the right place in the right format, as needed.
Abbyy FineReader

Over the past few years, Abbyy has developed a comprehensive text file management toolbox for scanning, organizing and creating digitized paper documents. In addition to text conversion to all common formats, text files can also be compared and annotated in the enterprise version.
Readiris

Readiris relies on a sophisticated user interface and offers many useful functions. Readiris supports a variety of file formats and offers the option to have text read aloud. In addition, Readiris can be used to add signatures to scanned documents and security protection to finished digital documents, as well as watermarking, commenting and annotation functions.
Rossum

Rossum specializes in scanning and digitizing invoices, and its OCR solution is aimed primarily at companies that still work with a large number of paper invoices and mainly need to extract figures quickly and easily. Rossum's OCR solution does not use a template format, but relies on the use of artificial intelligence to scan important information.
Organizations looking to break free of paper-based documentation and its associated costs, environmental impact, and inefficiencies are using OCR to digitize existing information and create new workflows that automatically capture and store new information. AI and ML are expected to transform scanning and character recognition. This combination will make it possible to analyze data and teach systems to detect discrepancies in large data sets. AI-driven OCR technologies can not only help digitize full texts, but also digest and understand the context of such texts to save valuable resources for the organization.
"We got exactly what we wanted. It was strongly practice-oriented and that is exactly what I appreciate so much about Bots & People. For me, that's what sets it apart from other providers."
Project Manager Process Automation in Finance | Internal Control System | FRAPORT AG
Automation Pioneer Program: jointly organized by T-Systems International, RWTH Business School and Bots & People. The aim was to train technology consultants and sales staff in the field of process automation in order to build up in-house expertise.
We particularly liked the comprehensive content coverage of the topics and technologies relevant to us as well as the inspiring lecturers in the virtual classroom as well as in the video. Our colleagues were provided with a holistic view of the topic of hyperautomation, giving them the opportunity to discuss their challenges together with the experts and work out possible solutions. | <urn:uuid:e1314df8-63fd-4d5a-a20e-fa60a66832ee> | CC-MAIN-2022-40 | https://www.botsandpeople.com/blog/ocr-optical-character-recognition-for-textrecognition | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00100.warc.gz | en | 0.922404 | 3,137 | 3.375 | 3 |
More than 77 million Americans volunteer a total of 6.9 billion hours a year doing everything from fighting fires to raising funds for cancer research. These efforts help others and support communities. But volunteering also tends to benefit the volunteers themselves in at least four different ways, explains nonprofit management scholar Jennifer A. Jones.
1. Boosting your health, especially if you assist others
Volunteering has long been associated with good mental and physical health, particularly for older people. In a long-term study, researchers at the University of Wisconsin found that volunteering was linked to psychological well-being, and the volunteers themselves said it was good for their own health.
While anyone can benefit from volunteering, people who are the least connected to others tend to benefit the most. In fact, the benefits are so strong that researchers have suggested public health officials educate the public to consider volunteering as part of a healthy lifestyle.
One study in particular looked into which kind of volunteering may be best for your health. When a team of social scientists combed through data collected in Texas, they found that people who volunteered in ways that benefited others tended to get a bigger physical health boost than volunteers who were pitching in for their own sake. They also benefited in terms of their mental health, such as by experiencing fewer symptoms of depression and becoming more satisfied with their lives.
That is, serving meals at a soup kitchen might be better for your health than doing unpaid shifts as an usher in exchange for free theater tickets.
2. Making more connections
Volunteering, especially when it’s done on a regular basis, can help you make new acquaintances. Whether you volunteer for an organization on a daily, weekly or monthly basis, over time you are bound to develop strong relationships, typically with other volunteers and staff members.
Regular volunteers may get these benefits to a greater degree than people who volunteer sporadically, known as episodic volunteers. Consider this: Handing out water at a fundraising run in April and then helping bag groceries to give away in November is surely easier to squeeze into a busy schedule than volunteering regularly in an office. But those more convenient activities aren’t as likely to help you build relationships over time. In other words, consistency matters.
There are benefits and drawbacks to every type of volunteering. For example, volunteering once in a while is often easy to schedule and is something families or friends can do together. However, volunteers who pitch in occasionally may not feel very connected to the mission of the nonprofits they support or get to know many other volunteers.
Regularly volunteering, on the other hand, makes it more likely that you will develop a deep relationship to the cause and to other staff and volunteers. However, this kind of volunteering requires a longer-term and bigger time commitment. It can also become frustrating if the volunteer’s duties aren’t a good fit for them.
Still, if people are willing to work toward finding the right fit and making time in their schedules, volunteering on a regular basis can help them get more out of their efforts, including new friends and acquaintances.
3. Preparing for career moves
When volunteers gain and strengthen skills and meet more people, it can help them find new paid work by honing their social and job skills and expanding their professional contacts.
Especially if you’re unemployed or eager to get a new job, you may want to volunteer in ways that are more likely to fill gaps in your resume or help you network with people who can help advance your career. For example, you can learn leadership and governance skills by volunteering on a board of directors at your local food pantry and, at the same time, network with other board members.
Alternatively, you can volunteer for an organization in your field, whether it’s health care, child care or accounting, as a way of staying current and active while looking for work.
Including volunteer work on your resume can also signal to a prospective employer that you’re community-minded, self-motivated and willing to go above and beyond. As I often see with my students who volunteer, close relationships with nonprofit staff can lead to job referrals and glowing letters of recommendation.
4. Reducing some risks associated with aging
Older people who engage in mentally stimulating leisure activities on a regular basis may have better memory and executive function than those who don’t, according to an analysis of related studies.
And because volunteers may need to tackle new problems, interact with clients and staff or drive to a new location, volunteering can be a highly stimulating leisure activity.
Volunteering can also help older people feel valued. For example, nonprofits can encourage older volunteers to become mentors – giving them a chance to impart what they’ve learned from their life and career experiences.
Jennifer A. Jones is an assistant professor of nonprofit management and leadership at the University of Florida. | <urn:uuid:476e4ea9-8065-4922-9e43-fd29ba0e4cd1> | CC-MAIN-2022-40 | https://www.nextgov.com/ideas/2021/08/4-ways-volunteering-can-be-good-you/184187/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00100.warc.gz | en | 0.96189 | 1,018 | 2.59375 | 3 |
October is National Cybersecurity Awareness Month (NCSAM) in the United States, and as in every year, each week focuses on a single theme, all tied together by an overarching theme for the month that usually changes from year to year. This year that theme is “Do Your Part. #BeCyberSmart.” It would have been an appropriate theme for each of the prior 17 years of NCSAM, and it could be the theme for every NCSAM in the future. Personal accountability and awareness play a significant role in the fight against cybercrime.
It’s no secret that cybercrime exploded over the past couple of years. It was already growing and trending toward new strategies when the pandemic changed the entire landscape. The sudden expansion of remote work provided unprecedented opportunities for cybercriminals, and they quickly learned how to use these opportunities to their advantage.
The FBI’s 2020 Internet Crime Report includes information from 791,790 complaints of suspected internet crime — an increase of more than 300,000 complaints from 2019 — and reported losses exceeding $4.2 billion.
Steps you can take to prevent cybercrime
This week the NCSAM theme is “Be Cyber Smart,” which introduces the monthly theme and underscores the fact that cybersecurity starts with the human component of computer networks. Each person plays a role in preventing cybercrime. The most basic actions start with those that can be employed in the home:
- Protect endpoints — Maintain updated desktop security such as anti-malware, antivirus, and personal firewalls. Apply updates to smartphones and IoT devices as soon as possible to secure them against new exploits.
- Secure network appliances and smart devices — Change the default password on each of these devices and check for software updates regularly. IoT botnet attacks have risen approximately 500% over the past couple of years. These botnets harness millions of routers, CCTV cameras, and other vulnerable devices for use as attack vehicles against a target. Once these devices are compromised, the threat actor can use them for DDoS attacks, remote command execution, and other attacks. Your smart things and internet service may be participating in cybercrime if you do not keep your devices updated and change your passwords to something other than the default.
- Protect online accounts with strong passwords — The average internet user has about 100 passwords to manage, which is roughly 20% to 25% more per user than prior to the pandemic. A password manager will help you avoid unsafe passwords and prevent password fatigue/carelessness. There are several free and low-cost options available, and many allow you to pay a higher price so that family members can securely share access to the program. If you are new to password managers, articles like this can help you select one that meets your needs.
- Defend your inbox — Install and maintain a solution that protects you from malicious email attacks. Many email systems have some built-in protection, but the best defense is always going to be a security-conscious email user. Learn how to identify and defend against the various email threat types and always report the threats you find in your mailbox.
- Back up your data — Anything you value that resides on your desktop, home network storage, or in a cloud application like OneDrive should be backed up on a regular basis. Your family photos may be as important to you as your health records and financial information, so make sure you know what you need to protect. When done correctly, your backup system will ensure that you do not lose access to your data.
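To make the strong-password advice above concrete, here is a minimal sketch using Python's standard `secrets` module. The 16-character default length and the character-class check are illustrative choices, not a recommendation from NCSAM or any particular password manager.

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password from letters, digits, and punctuation.

    Retries until at least one character of each class is present.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in candidate)
                and any(c.isupper() for c in candidate)
                and any(c.isdigit() for c in candidate)
                and any(c in string.punctuation for c in candidate)):
            return candidate

print(generate_password())
```

A password manager does the equivalent of this for you, and also stores the result so you never have to memorize it.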
Protecting your business
Business users have a greater responsibility for cybersecurity because they’re protecting their organization, colleagues, and possibly the public. Business users have to manage more passwords, they are subject to sophisticated email attacks, and they stand between threat actors and a high-value target. Companies usually leverage an IT team to manage cybersecurity for employees and the organization. Business security is like home security but significantly augmented for greater protection.
- Login information is often protected with multi-factor authentication, or the company employs one-use passwords, passphrases, or physical security keys.
- IoT security is a greater concern because smart devices control critical infrastructure and supply chains. Industrial Control System firewalls and other defenses are used to protect these devices from botnets, automatic scanning, and manual big-game attacks.
- Online applications are usually protected with a web application firewall. This stops credential stuffing and other password attacks, as well as other threats like DDoS and the OWASP Top 10. We’ve already mentioned that protecting online accounts with strong passwords should be a priority for individuals. This is especially true for users who do business with companies that do not have strong web application protection to stop these password-guessing attacks.
- Businesses normally maintain email protection for all users, but it may be native protection provided with a system such as Microsoft 365 or basic spam, virus, and malware protection. Security-conscious companies will deploy AI-driven protection along with employee security awareness training.
- Data protection is a requirement for every business. It protects customer data, sales records, and anything else of value to the company. Proper data protection for a business requires the following:
  - Identifying the type and location of valued data, which may include overlooked items like ICS configurations. This helps companies avoid surprises during a data restore procedure.
  - Understanding the risk tolerance level of the business. How long is the company willing to wait for data to be restored, and how much data can be lost or re-entered? This information is used to select and configure the data protection system.
  - Encrypting or otherwise protecting the backup from ransomware or other attacks. Ransomware is designed to identify and encrypt backup systems, which is why it is so important to protect the backup like any other resource on the network.
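As a toy illustration of the "back up your data" advice, the sketch below archives a folder with Python's standard `shutil` module. The folder names are placeholders, and a real backup system adds scheduling, versioning, encryption, and off-site copies on top of this basic step.

```python
import shutil
import tempfile
from pathlib import Path

def back_up(source_dir: str, dest_base: str) -> str:
    """Create a zip archive of source_dir and return the archive's path."""
    return shutil.make_archive(dest_base, "zip", root_dir=source_dir)

# Demo with a throwaway directory standing in for, say, a photos folder.
with tempfile.TemporaryDirectory() as workspace:
    source = Path(workspace) / "photos"
    source.mkdir()
    (source / "holiday.txt").write_text("stand-in for a photo")
    archive = back_up(str(source), str(Path(workspace) / "photos-backup"))
    print(archive.endswith(".zip"))  # True
```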
Hopefully you have good cybersecurity practices in place already. If not, Barracuda and the NCSAM program offer resources here to help individuals and business teams.
- Barracuda Ransomware Protection
- Barracuda Email Threat Scan
- Cybersecurity & Infrastructure Security Agency (CISA) – CYBER ESSENTIALS TOOLKITS
- CISA – STOP. THINK. CONNECT. ™
- European Cybersecurity Month (ECSM)
Barracuda offers businesses a complete solution for data protection and multi-layered security. Visit www.barracuda.com for more information.
Christine Barry is Senior Chief Blogger and Social Media Manager at Barracuda. Prior to joining Barracuda, Christine was a field engineer and project manager for K12 and SMB clients for over 15 years. She holds several technology and project management credentials, a Bachelor of Arts, and a Master of Business Administration. She is a graduate of the University of Michigan.
Artificial intelligence in the production processes
The new challenge is to have a deeper knowledge of how and why artificial intelligence generates its results, restoring the necessary centrality of decision-making processes to man.
With the recent developments in artificial intelligence, AI systems have become part of even high-risk decision-making processes, and the nature of these decisions is driving the creation of algorithms, methods, and techniques able to integrate explanations of the results and of the processes performed into AI outputs.
This push is motivated, in most cases, both by laws and regulations establishing that certain decisions, including those produced by automated systems, must be accompanied by information on the logic behind them, and by the goal of creating machine-learning systems that are trustworthy.
The suspicion that an AI may be unfair or biased can only diminish users' trust in these systems, also triggering concerns about possible harmful effects on themselves and on society in general, with a consequent slowdown in the adoption of the technology.
“Explainability” is one of the properties on which trust in AI systems is based, as well as resilience, reliability, absence of bias and accountability, all terms that together represent the foundation of the rules that AI systems are expected to follow.
Fields of application
A multidisciplinary team has analyzed and defined what can be considered the fundamental principles of Explainable AI (XAI), taking into account the multiplicity of both levels and fields of application of artificial intelligence. The starting assumption is that AI must be explainable to society in all its forms to allow understanding, trust, and above all acceptance of the decisions or indications generated by artificial intelligence systems.
The work produced a document presenting four principles that summarize the fundamental properties of an XAI, with the caveat that these principles are strongly influenced by the interaction between the AI system and the human being who receives the information. In other words, the requirements of a given application, the specific task performed, and the user of the explanation all influence which type of explanation is most appropriate, so the four principles aim to cover the widest possible set of situations, applications, and perspectives.
In summary, the four principles are:
- Explanation: systems provide evidence or accompanying reasons for all outputs.
- Meaningful: systems provide explanations that are understandable to individual users.
- Explanation Accuracy: the explanation correctly reflects the process the system followed to generate the output.
- Knowledge Limits: the AI system only operates under the conditions for which it was designed, or when it achieves sufficient confidence in its output.
Who This Course Is For
Students and professionals with a good knowledge of network protocols and a basic knowledge of operating systems, file systems, and the fundamental principles of networking. Experience programming in any script-based language (Python, Bash, PowerShell, etc.) is highly desirable.
What you will learn
- In the course, students will learn modern tactics, methods and procedures of attacks, as well as protection methods. During classes, students will gain practical skills on attacks detection and investigation.
Head of Kaspersky Security Operations Center
Head of SOC Consulting Services
- Cybersecurity is a complex connection of processes, people and technologies. Their mutual effectiveness determines the effectiveness of information security at a company. Security monitoring is the most important bridge between these three components, and SOC is its practical implementation. | <urn:uuid:769970f9-bdf6-441f-b16f-b29e1c5b09f4> | CC-MAIN-2022-40 | https://academy.kaspersky.com/courses/security-monitoring-soc/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00100.warc.gz | en | 0.901847 | 190 | 2.515625 | 3 |
When you look at the world around you, you'll notice that everything you see looks crisp, sharp, and clear, no matter how close you get to it (assuming you have good vision). But when you look at generated representations of the world, like photographs or movies on a screen, you'll notice that the image is rarely that clear. If you get close to a standard-definition (SD) television or computer screen, you will see the thousands of individual pixels that make up the image. If you're observant enough, you might even notice that those pixels are flickering in and out of existence very rapidly. Those issues are precisely the reason high-definition television is so popular on the consumer market today. Let's take a closer look at what HDTV really is and why DISH Network is so committed to providing quality HD service to all of its customers.
The human eye has 120 million cells, called cones and rods, that interpret the light we see and create pictures in our minds. Essentially, this means our eyes are capable of seeing a much better picture than older, standard-definition screens present us with. High-definition screens capitalize on the potential of human vision by using a vertical resolution of 720 to 1080 lines (720p to 1080i), creating an image that is significantly more detailed than older SD displays.
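To put the resolution jump in rough numbers, here is a quick back-of-the-envelope comparison. The 720×480 SD and 1920×1080 HD frame sizes are common assumptions on my part, since the article doesn't give exact figures.

```python
# Pixel counts for a common SD frame vs. a full-HD frame.
sd_width, sd_height = 720, 480      # typical NTSC SD frame
hd_width, hd_height = 1920, 1080    # 1080-line HD frame

sd_pixels = sd_width * sd_height
hd_pixels = hd_width * hd_height

print(sd_pixels)                 # 345600
print(hd_pixels)                 # 2073600
print(hd_pixels / sd_pixels)     # 6.0
```

Under these assumptions, a full-HD frame carries six times as many pixels as an SD frame.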
Now, I'm sure many of you are wondering what the i and the p stand for. The p stands for “progressive scanning,” which means that each scan contains the complete set of lines needed to create a picture. The i stands for “interlaced scanning,” which means that each scan contains alternate lines for half a picture. What these values translate into is known as frame rate.
So just how much better is the frame rate for high-definition programming? A lot better. To really understand why an HDTV is preferable, you need to understand the concept of “persistence of vision”: an optical illusion that occurs when the brain interprets a series of rapidly shown still images as a continuous moving picture. A traditional TV displays programs at about 30 frames per second, which is why you can often detect a flicker on older televisions. A new HDTV is capable of displaying programming at up to 60 frames per second, twice that of a standard-definition TV. A higher frame rate creates a more powerful persistence of vision, which means that not only is your image less fuzzy, but you can also show an HDTV picture on a much larger screen with no deterioration of image quality.
One of the more immediate differences you’ll notice when switching to HDTV programming is the width of the screen. A decade ago, televisions were much more square than they are today, the switch to HD is why you see much more rectangular screens on the market. Most HDTV’s operate on a 16:9 aspect ratio, which allows you to watch all those widescreen movies without any part of the video image getting cut off.
DISH Network’s Commitment to HDTV Programming
There is one major caveat to remember when purchasing a high-definition television–you won’t receive a high-definition picture unless you have access to a full-resolution, HD-quality television signal.
DISH Network is committed to providing everyone with the highest quality television service possible. At DISH, they believe that HD is a right, not a privilege. Which is why DISH Network provides HD Free for Life with all of their television packages. You’ll see the sweat glisten on your favorite athletes as they score the game-winning goal, hear a bat hit a baseball with Dolby 5.1 surround sound, and if you close your eyes you’ll actually feel like you’re at the game.
If sports aren’t your passion, DISH provides over 200 HD channels with a rotation of 3D movies every month. You’re sure to find something you like, and with the support of DISH and HDTV technology, you’ll have a theatrical experience right from the comfort of your own home.
So settle into your favorite chair, break out the popcorn, and revolutionize your television. High-Definition Television is the future of visual entertainment, and DISH Network is ready to welcome you into it. | <urn:uuid:10098fa3-d56f-4064-8be8-664ee809c29d> | CC-MAIN-2022-40 | https://godish.com/high-definition-television/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00100.warc.gz | en | 0.934344 | 960 | 2.921875 | 3 |
August 18, 2020
Over 7 billion malware attacks were reported in 2019. Every minute, four companies fall victim to ransomware attacks, a form of malware. Learning how to detect malware and get rid of malware is becoming increasingly important for anyone who accesses the Internet and works online. In this article, we’ll cover how to detect malware on your computer, including the many warning signs that you’ve been infected with malware.
WARNING SIGNS OF MALWARE
Malware, the delivery vehicle for cybercriminals to steal and hold your information or data for ransom, is built to avoid detection. Ransomware and other malware imbed themselves in your computer to steal passwords, financial information, and anything personal that can be sold for profit.
But not all malware can go undetected. There are several typical signs that malware has made its way to your computer and you are infected.
Redirects: You intend to visit a specific URL, but the page doesn't load properly, or the browser directs you to a completely different web page altogether. If you're experiencing several redirects, you might be infected with malware. It's time to run a scan and check your computer.
Warning Messages: Malware can disguise itself as a local computer warning. Typically the messages will alert the user that their computer has a problem and they need to follow specific steps to fix it. By following the directions in this fake message, the user may inadvertently execute the malware.
Pop-up Ads: If you're seeing an unusual number of pop-up ads and they're increasingly frequent, you need to run a scan for malware on your computer. If they're harder and harder to get rid of and often include offensive or suggestive content, your computer is likely infected with malware.
Slow Performance: If you notice that your computer is generally running slower or not as smoothly as it was before, it's time to run a malware scan. If applications don't open as quickly or you can't seem to do simple tasks within each application, malware may be slowing your system down.
Problems at Startup or Shutdown: If you run into any problems, delays, or suspicious activity as you start your computer or shut it down, your computer may be infected with malware. These are common signs that something is running on your computer that you likely didn't approve or install yourself.
Plug-ins or Tools You Didn't Install: If you notice a plug-in or a tool in your browser that you didn't install yourself, you likely have a malware problem that is installing these applications without your knowledge or permission. Run a scan and follow instructions for getting rid of the malware.
HOW TO DETECT MALWARE
Once you’ve noticed initial warning signs of malware, it’s time to investigate further and gather evidence that the malware is present.
Run a Malware Scan: If you already have anti-malware software installed on your device (many computers come with it pre-installed, or IT adds it when approving devices for employees), it's time to run a malware scan. These scans can help you better detect a problem on your computer and will deliver exact next steps to get rid of the malware.
Run an On-demand Antivirus Scanner: If you can't seem to find the malware with a traditional anti-malware software scan, you can use a free online option like Malwarebytes Free. This second line of attack may catch stealthy malware that the initial scan missed.
Get Professional Help: If neither your initial scan nor your on-demand scan can find the malware you believe is on your computer or system, it's time to get professional help. If your device is company-owned, contact your IT department. If you're using a personal device, seek professional, third-party help. They may need to wipe your system or reformat your hard drive.
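Under the hood, one basic technique scanners rely on is signature matching: hashing files and comparing the digests against a database of known-bad hashes. The sketch below illustrates the idea with Python's standard `hashlib`; the files and "signature set" are made up for the demo, and real scanners combine signatures with heuristics and behavioral analysis.

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(path: Path, known_bad: set) -> bool:
    """Flag the file if its hash appears in the signature set."""
    return sha256_of(path) in known_bad

# Demo: pretend one file's hash is a known malware signature.
with tempfile.TemporaryDirectory() as ws:
    bad = Path(ws) / "dropper.exe"
    bad.write_bytes(b"pretend malware body")
    clean = Path(ws) / "notes.txt"
    clean.write_bytes(b"harmless text")
    signatures = {sha256_of(bad)}
    print(scan(bad, signatures), scan(clean, signatures))  # True False
```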
Understanding the signs of malware and the early warnings can help prevent a larger malware attack on your computer or system. Protect yourself and your data by staying up-to-date on cyber security best practices and good hygiene.
Written by Jack Vines
Protocol Independent Multicast (PIM) is a collection of multicast routing protocols, each optimized for a different environment. There are two main PIM protocols, PIM Sparse Mode and PIM Dense Mode. A third PIM protocol, Bi-directional PIM, is less widely used.
Typically, either PIM Sparse Mode or PIM Dense Mode will be used throughout a multicast domain. However, they may also be used together within a single domain, using Sparse Mode for some groups and Dense Mode for others. This mixed-mode configuration is known as Sparse-Dense Mode. Similarly, Bi-directional PIM may be used on its own, or it may be used in conjunction with one or both of PIM Sparse Mode and PIM Dense Mode.
All PIM protocols share a common control message format. PIM control messages are sent as raw IP datagrams (protocol number 103), either multicast to the link-local ALL-PIM-ROUTERS multicast group (224.0.0.13 for IPv4) or unicast to a specific destination.
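To make this common format concrete, the sketch below parses the fixed 4-byte header (version and type nibbles, a reserved byte, and a 16-bit checksum) that begins every PIM control message. The type-name table covers only a few well-known message types.

```python
import struct

# A few well-known PIM message types shared across the PIM variants.
PIM_TYPES = {0: "Hello", 1: "Register", 2: "Register-Stop",
             3: "Join/Prune", 4: "Bootstrap", 5: "Assert"}

def parse_pim_header(data: bytes) -> dict:
    """Parse the 4-byte header common to all PIM control messages."""
    if len(data) < 4:
        raise ValueError("PIM header is 4 bytes")
    ver_type, _reserved, checksum = struct.unpack("!BBH", data[:4])
    return {
        "version": ver_type >> 4,        # high nibble: PIM version
        "type": ver_type & 0x0F,         # low nibble: message type
        "type_name": PIM_TYPES.get(ver_type & 0x0F, "other"),
        "checksum": checksum,
    }

# 0x20 = version 2, type 0 (a Hello message).
print(parse_pim_header(bytes([0x20, 0x00, 0xAB, 0xCD])))
```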
PIM Sparse Mode
PIM Sparse Mode (PIM-SM) is a multicast routing protocol designed on the assumption that recipients for any particular multicast group will be sparsely distributed throughout the network. In other words, it is assumed that most subnets in the network will not want any given multicast packet. In order to receive multicast data, routers must explicitly tell their upstream neighbors about their interest in particular groups and sources. Routers use PIM Join and Prune messages to join and leave multicast distribution trees.
PIM-SM by default uses shared trees, which are multicast distribution trees rooted at some selected node (in PIM, this router is called the Rendezvous Point, or RP) and used by all sources sending to the multicast group. To send to the RP, sources must encapsulate data in PIM control messages and send it by unicast to the RP. This is done by the source's Designated Router (DR), which is a router on the source's local network. A single DR is elected from all PIM routers on a network, so that unnecessary control messages are not sent.
One of the important requirements of PIM Sparse Mode, and Bi-directional PIM, is the ability to discover the address of a RP for a multicast group using a shared tree. Various RP discovery mechanisms are used, including static configuration, Bootstrap Router, Auto-RP, Anycast RP, and Embedded RP.
PIM-SM also supports the use of source-based trees, in which a separate multicast distribution tree is built for each source sending data to a multicast group. Each tree is rooted at a router adjacent to the source, and sources send data directly to the root of the tree. Source-based trees enable the use of Source-Specific Multicast (SSM), which allows hosts to specify the source from which they wish to receive data, as well as the multicast group they wish to join. With SSM, a host identifies a multicast data stream with a source and group address pair (S,G), rather than by group address alone (*,G).
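One way to picture (S,G) versus (*,G) state is as a two-level lookup. The sketch below is an illustration, not any router's real data structure: an exact source-specific entry, if present, is preferred over the group's shared-tree entry.

```python
# Forwarding state keyed by (source, group); "*" is the wildcard source.
mrib = {
    ("*", "239.1.1.1"): "shared-tree (RP) interface",
    ("10.0.0.5", "239.1.1.1"): "source-tree interface",
}

def lookup(source, group):
    """Prefer source-specific (S,G) state, fall back to shared (*,G) state."""
    return mrib.get((source, group)) or mrib.get(("*", group))

print(lookup("10.0.0.5", "239.1.1.1"))  # source-tree interface
print(lookup("10.0.0.9", "239.1.1.1"))  # shared-tree (RP) interface
```

With SSM, only the (S,G) entries exist, which is why a host must name both the source and the group it wants.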
PIM-SM may use source-based trees in the following circumstances.
- For SSM, a last-hop router will join a source-based tree from the outset.
- To avoid data sent to an RP having to be encapsulated, the RP may join a source-based tree.
- To optimize the data path, a last-hop router may choose to switch from the shared tree to a source-based tree.
PIM-SM is a soft-state protocol. That is, all state is timed-out a while after receiving the control message that instantiated it. To keep the state alive, all PIM Join messages are periodically retransmitted.
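The soft-state behavior can be sketched as a table of join entries with expiry timestamps: an entry survives only as long as periodic Join messages keep refreshing it. The hold time and clock values below are illustrative.

```python
JOIN_HOLD_TIME = 210  # seconds; illustrative hold time

class JoinState:
    """Minimal soft-state table: entries vanish unless refreshed."""

    def __init__(self):
        self.expiry = {}  # (source, group) -> absolute expiry time

    def refresh(self, route, now):
        # Receiving a periodic Join resets the entry's timer.
        self.expiry[route] = now + JOIN_HOLD_TIME

    def expire(self, now):
        # Drop any entry whose timer has run out.
        self.expiry = {r: t for r, t in self.expiry.items() if t > now}

    def active(self):
        return set(self.expiry)

state = JoinState()
state.refresh(("*", "239.1.1.1"), now=0)
state.expire(now=100)          # still within the hold time
print(state.active())          # {('*', '239.1.1.1')}
state.expire(now=300)          # no refresh arrived: the state times out
print(state.active())          # set()
```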
Version 1 of PIM-SM was created in 1995, but was never standardized by the IETF. It is now considered obsolete, though it is still supported by Cisco and Juniper routers. Version 2 of PIM-SM was standardized in RFC 2117 (in 1997) and updated by RFC 2362 (in 1998). Version 2 is significantly different from and incompatible with version 1. However, there were a number of problems with RFC 2362, and a new specification of PIM-SM version 2 is currently being produced by the IETF. There have been many implementations of PIM-SM and it is widely used.
PIM Dense Mode
PIM Dense Mode (PIM-DM) is a multicast routing protocol designed with the opposite assumption to PIM-SM, namely that the receivers for any multicast group are distributed densely throughout the network. That is, it is assumed that most (or at least many) subnets in the network will want any given multicast packet. Multicast data is initially sent to all hosts in the network. Routers that do not have any interested hosts then send PIM Prune messages to remove themselves from the tree.
When a source first starts sending data, each router on the source's LAN receives the data and forwards it to all its PIM neighbors and to all links with directly attached receivers for the data. Each router that receives a forwarded packet also forwards it likewise, but only after checking that the packet arrived on its upstream interface. If not, the packet is dropped. This mechanism prevents forwarding loops from occurring. In this way, the data is flooded to all parts of the network.
Some routers will have no need of the data, either for directly connected receivers or for other PIM neighbors. These routers respond to receipt of the data by sending a PIM Prune message upstream, which instantiates Prune state in the upstream router, causing it to stop forwarding the data to its downstream neighbor. In turn, this may cause the upstream router to have no need of the data, triggering it to send a Prune message to its upstream neighbor. This 'broadcast and prune' behavior means that eventually the data is only sent to those parts of the network that require it.
Eventually, the Prune state at each router will time out, and data will begin to flow back into the parts of the network that were previously pruned. This will trigger further Prune messages to be sent, and the Prune state will be instantiated once more.
PIM-DM only uses source-based trees. As a result, it does not use RPs, which makes it simpler than PIM-SM to implement and deploy. It is an efficient protocol when most receivers are interested in the multicast data, but does not scale well across larger domains in which most receivers are not interested in the data.
The development of PIM-DM has paralleled that of PIM-SM. Version 1 was created in 1995, but was never standardized. It is now considered obsolete, though it is still supported by Cisco and Juniper routers. Version 2 of PIM-DM is currently being standardized by the IETF. As with PIM-SM, version 2 of PIM-DM is significantly different from and incompatible with version 1. PIM Dense Mode (PIM DM) is less common than PIM-SM, and is mostly used for individual small domains.
Bi-directional PIM (BIDIR-PIM) is a third PIM protocol, based on PIM-SM. The main way BIDIR-PIM differs from PIM-SM is in the method used to send data from a source to the RP. Whereas in PIM-SM data is sent using either encapsulation or a source-based tree, in BIDIR-PIM the data flows to the RP along the shared tree, which is bi-directional - data flows in both directions along any given branch.
BIDIR-PIM's major differences from PIM-SM are as follows.
- There are no source-based trees, and in fact no (S,G) state at all. Therefore there is no option for routers to switch from a shared tree to a source-based tree, and Source-Specific Multicast is not supported.
- To avoid forwarding loops, for each RP one router on each link is elected the Designated Forwarder (DF). This is done at RP discovery time using the DF election message.
- There is no concept of a Designated Router.
- No encapsulation is used.
- The forwarding rules are very much simpler than in PIM-SM, and there are no data-driven events in the control plane at all.
The main advantage of BIDIR-PIM is that it scales very well when there are many sources for each group. However, the lack of source-based trees means that traffic is forced to remain on the possibly inefficient shared tree.
There have been two proposed specifications for Bi-directional PIM. The first was described in draft-farinacci-bidir-pim, which dates from 1999. The protocol described here is a replacement, simpler than and with some improvements over the first. It is described in draft-ietf-pim-bidir.
Mixed-mode PIM Configurations
Typically, PIM-SM, PIM-DM or BIDIR-PIM would be used alone throughout a multicast domain. However, it is possible to use a combination of the three by distributing multicast groups between the different protocols. Each group must operate in either sparse, dense or bi-directional mode; it is not possible for a single group to use more than one mode at once. Given such a division, the protocols coexist largely independently of one another.
The one way in which the protocols interact is that the same PIM Hello protocol is used by each, and is only run once on each link. The information learned from the Hello message exchange must be shared among the three routing protocols.
The method used to distribute groups between the three protocols is outside the scope of the PIM protocols and is a matter of local configuration. Note that it is important that every router in the domain has the same assignment of groups to protocols. The following techniques are used.
- The Bootstrap Router (BSR) protocol, used for RP discovery, has been extended to add a "Bi-directional" bit for each group range. This method may be used to assign groups between sparse and bi-directional modes if using BSR.
- Routers may be configured to use dense mode if the RP discovery mechanism (whatever that may be) fails to find an available RP for a group, and to use sparse or bi-directional mode otherwise.
- Routers may be manually configured with group ranges for sparse, dense and bi-directional modes.
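As an illustration of the manual-configuration technique, the following Python sketch maps a group address to a mode using locally configured group ranges. The ranges and names are hypothetical; the point is that every router in the domain must hold the identical mapping so that all routers agree on each group's mode.

```python
import ipaddress

# Hypothetical local configuration: group ranges assigned to PIM modes.
# Every router in the domain must share exactly this mapping.
MODE_RANGES = [
    (ipaddress.ip_network("239.0.0.0/16"), "bidir"),
    (ipaddress.ip_network("239.1.0.0/16"), "dense"),
]
DEFAULT_MODE = "sparse"  # groups outside any configured range

def pim_mode(group: str) -> str:
    """Return the PIM mode for a multicast group address.

    Longest-prefix match over the configured ranges; a group operates
    in exactly one mode at a time.
    """
    addr = ipaddress.ip_address(group)
    best = None
    for net, mode in MODE_RANGES:
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, mode)
    return best[1] if best else DEFAULT_MODE
```

A real router would derive this table from BSR group ranges or static configuration rather than a hard-coded list, but the lookup logic is the same.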
THURSDAY, July 28, 2022 (HealthDay News) — Your daily walk, cleaning the house and lunch with friends could together be keys to staving off dementia, according to researchers.
A new study looked at lifestyle habits that could help lower risks, instead of factors that may contribute to the disease.
Researchers in China combed the data of more than a half-million British people in the UK Biobank and found that household chores, social visits and exercise were all associated with reduced risks for multiple forms of dementia.
The study, led by Dr. Huan Song of Sichuan University in Chengdu, China, followed the participants for an average 11 years. By the end of the follow-up, 5,185 participants had developed dementia.
“We can’t just look at this study and say, ‘if you exercise, you’re going to reduce your risk of Alzheimer’s‘ — that is not a conclusion we can come to, but this is one more piece of the evidence that suggests that exercise is really, really good for your brain,” said Dr. Allison Reiss, an associate professor at NYU Long Island School of Medicine who serves on an Alzheimer’s Foundation of America advisory board.
Reiss said the study was well done and exceptional because of its size, though it was also subjective because it was based on questionnaires filled out by participants. They answered questions about their physical activities, including frequency of climbing stairs, participation in strenuous sports and transportation use.
The participants also provided information about their household chores, job-related activities and use of electronic devices, as well as how often they visited with friends and family or went to pubs, social clubs or religious groups. They also reported family history of dementia, which helped researchers gauge genetic risk for Alzheimer’s.
Whether or not they had a family history of dementia, all participants benefited from the protective effect of physical and mental activities, the researchers found.
Specifically, risk dropped by 35% among frequent exercisers, by 21% among those who did household chores, and by 15% among those who had daily visits with family and friends, compared to those who were least engaged in these activities.
Chores could be helpful because they’re a form of exercise, Reiss said. People who are cleaning, cooking and gardening may also be living a more active lifestyle.
“I really do think being physically active as you get older is just a very good thing,” she said. “Don’t drop out of life and sit on the couch. Get out and do stuff with people. And if it’s physical activity, it’s great. And mental activity is good. Keep living life.”
Dr. Zaldy Tan, medical director for the Jona Goldrich Center for Alzheimer’s and Memory Disorders at Cedars-Sinai Medical Center in Los Angeles, also reviewed the findings.
He said consistency may be a benefit of household chores. They’re so intertwined with day-to-day activities, unlike exercise, which people may give up for a time, he pointed out.
“And, of course, the goal here is to supplement that with other activities that are not only physically demanding, but also increase our mental simulation and flex our social muscles,” Tan said. “That’s the other interesting aspect of this study is that it’s not just physical activity, but also mental and social activities.”
Sometimes people end up isolated as they enter retirement if all their social interactions are tied to their work, he said. Social isolation is a risk factor for dementia.
“What people can take away from this is that no matter what age, no matter what our life situation, we have to find a way to stay physically active, mentally active, socially engaged as part of the overall approach to keeping our minds and our brains healthy from middle age to old age,” Tan said. He added that other studies have also shown that regular physical, mental and social activity benefits brain health.
Researchers noted that the findings can't be generalized to a more diverse population, because most of the participants were white adults in the United Kingdom and because activity levels were self-reported.
Percy Griffin, director of scientific engagement for the Alzheimer’s Association, said the findings are exciting and warrant additional investigation.
“This type of research is already happening through the Alzheimer’s Association’s U.S. POINTER study, a two-year clinical trial that aims to uncover the best lifestyle ‘recipe’ for reducing risk of cognitive decline and dementia,” Griffin said in a written statement. “We are confident that the combined efforts of researchers around the globe will one day shed light on the possibility of preventing dementia altogether.”
The findings were published online July 27 in the journal Neurology.
The U.S. Department of Health and Human Services has more on Alzheimer’s disease and related dementias.
SOURCES: Allison Reiss, MD, member, Medical, Scientific and Memory Screening Advisory Board, Alzheimer’s Foundation of America, associate professor, medicine, NYU Long Island School of Medicine, Mineola, N.Y., and head, NYU Langone Hospital-Long Island Inflammation Laboratory, Biomedical Research Institute, Mineola, N.Y.; Zaldy Tan, MD, MPH, medical director, Jona Goldrich Center for Alzheimer’s and Memory Disorders, and director, Bernard and Maxine Platzer Lynn Family Memory and Healthy Aging Program, Cedars-Sinai Medical Center, Los Angeles; Percy Griffin, PhD, director, scientific engagement, Alzheimer’s Association, Chicago; Neurology, July 27, 2022
PLEASANTVILLE — Local teachers have drawn statewide attention for their work to diversify district curricula at Pleasantville Public Schools — and now they have successfully challenged proposed textbooks as part of that effort.
On Tuesday, the Pleasantville Board of Education voted down a resolution to purchase McGraw Hill social studies textbooks. The decision came after teachers and parents said the textbooks would fall short of the state diversity standards for education they were working to introduce into classrooms.
Tamar LaSure-Owens, director of the district’s Amistad, Holocaust and Latino heritage — or AMHOTINO — curriculum, spoke at the school board meeting against the textbooks. LaSure-Owens, who is responsible for implementing state standards for the district, said she did not have confidence the textbooks appropriately taught the histories of marginalized groups and that they would be incongruous with the district’s broader curriculum.
Two other speakers echoed LaSure-Owens’ thoughts. The school board was receptive to their concerns and voted to reject the textbooks without extensive debate.
“Why are we buying books that just don’t meet our standards?” LaSure-Owens said after the meeting. “We do not teach a textbook, we teach a standard.”
McGraw Hill did not immediately respond to a request for comment. The company website does include a section for its “commitment to diversity, equity & inclusion” and highlights the work of its PreK–12 Equity Advisory Board.
“Diverse and inclusive teams are critical to helping us be more creative in our approach, better understand our customers’ needs, and develop programs and materials that address equity issues and reflect the students we serve,” Terri Walker, McGraw Hill’s head of Diversity, Equity & Inclusion, said in a statement posted to the company website.
The Pleasantville school district has received statewide recognition for its AMHOTINO curriculum, which incorporates lessons about history, tolerance and diversity into all school subjects. It places a special focus on the histories of African Americans, Native Americans and Hispanic Americans; the legacy of slavery in North and South America; and the events of the Holocaust and other genocides in world history.
LaSure-Owens said members of the New Jersey Education Association met with representatives from McGraw Hill to discuss its textbooks in 2020. The NJEA members highlighted instances where they believed textbook material was inaccurate or incomplete, such as sections concerning Native American history and the trans-Atlantic slave trade. While there was an understanding between the company and the NJEA about the need to produce more diverse materials, LaSure-Owens said she believes the textbooks currently in use still fall short of state standards.
LaSure-Owens said she particularly wanted to avoid the use of euphemisms when discussing historical atrocities. She cited a curriculum change proposed to the Texas state Board of Education for second graders in which slavery was described as “involuntary relocation.” McGraw Hill in particular attracted controversy in 2015 when it described slaves as “workers from Africa” in a geography-textbook caption. Company leadership apologized for the caption shortly after students called attention to it. She added that she wanted textbooks to reflect other district standards, such as its study of slavery throughout the Western Hemisphere, and was concerned the books could otherwise confuse students.
The school board and New Jersey Department of Education agreed in March to have LaSure-Owens assist the Amistad Commission, which works to incorporate Black history into New Jersey classrooms. The NJEA awarded LaSure-Owens its Urban Education Activist Award in December for her efforts at Pleasantville.
Critical of the process
Pleasantville Education Association President Joe Manetta took issue with how the textbooks were selected. He said the district officials who made the decisions about the curriculum should have consulted with LaSure-Owens first, given her status as AMHOTINO coordinator.
“They should have had a conversation with her because that directly involves what she does,” Manetta said after the meeting Tuesday.
New Jersey recently expanded curriculum mandates designed to promote diversity and tolerance in education. Gov. Phil Murphy signed a bill in January requiring schools to teach about the history of Asian Americans and Pacific Islanders for students in kindergarten through 12th grade. It follows the passage of a 2019 law requiring schools to teach students about the history of the LGBTQ community and that of persons with disabilities.
Recent efforts to promote diversity have precedent in past decades. The state Legislature created Black-history standards and the Amistad Commission in 2002. Eight years earlier, in 1994, the Legislature mandated that students be taught about the Holocaust and other genocides.
Parents both in South Jersey and across the country have taken exception to LGBTQ education mandates as well as the sex-education standards. Much of the national debate has centered on state-government efforts in Florida to regulate classroom discussions about LGBTQ topics. Some conservatives have argued that such efforts help ensure lessons are age appropriate, while liberals and LGBTQ-rights advocates have decried the regulations as bigoted.
Attendees at an Ocean City Board of Education meeting in April said the New Jersey Student Learning Standards for Comprehensive Health and Physical Education encroached upon the role of parents to teach their children about values and morals. Advocates for LGBTQ rights in Cape May County said they were concerned about being harassed for their support of the standards.
Some Republican lawmakers, including state Sen. Michael Testa, Atlantic, Cape May, Cumberland, had asked that the standards be revised. Murphy promised in April to review the standards and said his administration wanted to include parental input into education. The governor did emphasize that he believes New Jersey schools should prioritize academic performance, mental health and making schools inclusive for all, including LGBTQ students.
Public controversy has not stopped the launch of other, similar education projects. The National Education Association awarded the NJEA a Great Public Schools grant to launch a consortium for New Jersey educators. The consortium will partner with more than 25 colleges and universities, museums, historical commissions and advocacy groups to train teachers to make their classrooms more inclusive and help everyone involved in education “to understand, embrace and celebrate New Jersey’s diversity.”
The Associated Press contributed to this report.
There are many different ways to treat, manage and cope with social anxiety disorder.
One of the most common and effective treatments is cognitive behavioral therapy (CBT). Given how much social anxiety is connected to irrational thought patterns and distorted thoughts about a social setting, CBT helps people by breaking down those thought patterns, disputing them and reframing the thoughts into more rational ones, according to Alison Seponara, a licensed therapist based in Pennsylvania.
“As a CBT therapist, I would begin by asking the client to rate the intensity of their anxiety in different social settings and then break down the thoughts connected to that situation,” says Seponara. “It’s important to recognize these thoughts so you can begin to challenge them and rewire the way the irrational brain is perceiving certain situations.”
A psychiatrist may prescribe medication to help manage anxiety, and other psychotherapies besides CBT can also be used to treat social anxiety.
“You never know what type of challenges life can throw at us, but with the right tools we can 100% learn how to manage our anxiety when faced with adversity,” says Seponara. “Healing is a lifelong process.”
Derald Wing Sue is Professor of Psychology and Education in the Department of Counseling and Clinical Psychology at Teachers College and the School of Social Work, Columbia University. He received his Ph.D. from the University of Oregon and has served as a training faculty member with the Institute for Management Studies and the Columbia University Executive Training Programs. He was the co-founder and first president of the Asian American Psychological Association, and past president of the Society for the Psychological Study of Ethnic Minority Issues (Division 45) and the Society of Counseling Psychology (Division 17). Dr. Sue is a member of the American Counseling Association and a Fellow of the American Psychological Association, the American Psychological Society, and the American Association of Applied and Preventive Psychology. He has served as editor of the Personnel and Guidance Journal (now the Journal for Counseling and Development), is an associate editor of the American Psychologist and an editorial board member of the Asian Journal of Counselling, and has been or continues to be a consulting editor for numerous journals and publications.
Derald Wing Sue can truly be described as a pioneer in the field of multicultural psychology, multicultural education, multicultural counseling and therapy, and the psychology of racism/antiracism. He has done extensive multicultural research and writing in psychology and education long before the academic community perceived it favorably, and his theories and concepts have paved the way for a generation of younger scholars interested in issues of minority mental health and multicultural psychology. He is author of over 150 publications, 15 books, and numerous media productions. In all of these endeavors, his commitment to multiculturalism has been obvious and his contributions have forced the field to seriously question the monocultural knowledge base of its theories and practices. As evidence of his professional impact, Dr. Sue's book, COUNSELING THE CULTURALLY DIVERSE: THEORY AND PRACTICE, 2008, 5th Edition (with David Sue - John Wiley & Sons Publishers), has been identified as the most frequently cited publication in the multicultural field; since its first edition, it has been considered a classic and used by nearly 50% of the graduate counseling psychology market. With the help of many colleagues, he chaired committees for the Society of Counseling Psychology and the Association for Multicultural Counseling and Development that resulted in laying the foundations for the cultural competency movement in the delivery of mental health services to an increasingly diverse population.
Because of a personal life-changing experience with racism directed toward his family, Dr. Sue's research direction has progressively turned to the psychology of racism and antiracism. When he was invited to address President Clinton's Race Advisory Board on the National Dialogue on Race and participated in a Congressional Briefing on the "Psychology of Racism and the Myth of the Color-Blind Society", Dr. Sue realized that the invisibility of "whiteness" and ethnocentric monoculturalism were harmful not only to People of Color, but Whites as well. These experiences and activities have resulted in his critically acclaimed book OVERCOMING OUR RACISM: THE JOURNEY TO LIBERATION, 2003 (Jossey Bass Publishers). Written primarily for the general public, it directly confronted White Americans with their White privilege, inherent biases and their unintentional oppression of Persons of Color. As expected, the book aroused intense feelings and generated difficult dialogues on race.
These reactions led Dr. Sue and his research team at Teachers College to undertake a 6-year study on the causes, manifestations and impact of racial microaggressions. Their groundbreaking work resulted in a taxonomy of racial microaggressions that empowers People of Color by making "the invisible, visible," by validating their experiential realities, and by providing them with a language to describe their experiences. Dr. Sue is currently broadening research on microaggressions to include religion, gender, disability, sexual orientation and other marginalized groups. Contrary to the belief of most White Americans that microaggressions create minimal harm, his studies suggest that these daily assaults and insults are responsible for creating inequities in education, employment and health care and for producing emotional distress in People of Color. His most recent book, MICROAGGRESSIONS IN EVERYDAY LIFE: RACE, GENDER AND SEXUAL ORIENTATION (John Wiley and Sons Publishers) has already generated much excitement.
Dr. Sue's services have been widely sought by many groups and organizations. He has also done extensive cultural diversity training for many Fortune 500 companies, institutions of higher education, business, industry, government, public schools, and mental health organizations. In that capacity, Dr. Sue has worked with mental health practitioners, university faculty, teachers, students, community leaders, senior executives, and middle-level managers. His work is recognized not only on a national level, but on an international one as well. Dr. Sue has presented and traveled in Asia (China, Hong Kong, Japan, Taiwan, Macau, the Philippines, and Singapore), New Zealand and Europe. He is frequently sought as a spokesperson on issues of racism, multiculturalism, and diversity by the press and other media outlets. Dr. Sue has been interviewed on many television specials and is frequently quoted in the press.
As recognition of his outstanding contributions, Dr. Sue has been the recipient of numerous awards from professional organizations, educational institutions, and community groups. He has been honored by the Association for Multicultural Counseling and Development with the Professional Development Award and the Research Award; by the Asian American Psychological Association with the Leadership Award, Distinguished Contributions Award and President's Award; by the Third World Counselors Association with the Leadership and Distinguished Contributions to Cross Cultural Theory Award; by The Society for the Psychological Study of Ethnic Minority Issues with the Mentoring and Leadership Award; by the Center for the Study of Teaching and Learning Diversity with the Diversity in Teaching and Learning Lifetime Achievement Award; by the California Psychological Association with the Distinguished Scientific Achievement to Psychology Award; by the American Counseling Association with the Professional Development Award; by the Society of Counseling Psychology, Sage Publications and The Counseling Psychologist for the Outstanding Publication of 2001; by California State University, Hayward, Alliant University and Teachers College, Columbia University for Outstanding Faculty or Teaching Awards; by the American Psychological Association with the Career Contributions to Education and Training Award and a Presidential Citation for Outstanding Service; by the National Multicultural Conference and Summit with the Dalmas A. Taylor Award; by the University of Oregon with the Outstanding Alumnus Award; by the American Psychological Foundation with the Rosalee G. Weiss Outstanding Psychologist Award; by the Society for the Psychological Study of Ethnic Minority Issues with the Lifetime Achievement Award; and by the Los Angeles County Psychological Association for the Distinguished Service to the Profession of Psychology Award. As evidence of Dr. Sue's stature in the field, a national Fordham University study of multicultural publications and scholars concluded that "Impressively, Derald Wing Sue is without doubt the most influential multicultural scholar in the United States."
The group that sets the standards for medical education recently released standards that force students to study and apply ideology typically pushed by the far-left while integrating diversity, equity, and inclusion into formal curricula.
The Association of American Medical Colleges (AAMC) published the New and Emerging Areas in Medicine series to help students benefit from "advancements in medical education over the past 20 years," and the third report from the collection "focuses on competencies for diversity, equity, and inclusion (DEI)."
The report notes that recent medical school graduates must demonstrate "knowledge about the role of explicit and implicit bias in delivering high-quality health care," "describe past and current examples of racism and oppression," identify "systems of power, privilege and oppression and their impacts on health outcomes" including "White privilege, racism, sexism, heterosexism, ableism, religious oppression" and "articulate race as a social construct that is a cause of health and health care inequities."
Dr. Stanley Goldfarb, a board-certified kidney specialist, is the Board Chair of Do No Harm, a group of medical professionals dedicated to eliminating political agendas from healthcare. He feels the AAMC is doing more harm than good with its new standards that he believes will irk the American people.
"The AAMC agenda means that critical race theory will be an integral part of the education of medical students and this can only lead to discrimination against one racial group vs. another. One of the leaders of CRT, Dr. Ibram Kendi, has declared that past discrimination can only be cured by future discrimination. I do not think the American people will like this kind of health care," Dr. Goldfarb told Fox News Digital.
"The AAMC sets the standards for medical education," Dr. Goldfarb continued. "This latest set of expectations for the education of medical students and residents is nothing more than indoctrination in a political ideology and can only detract from achieving a health care system that treats all individual patients optimally."
In May, Legal Insurrection’s CriticalRace.org, which monitors CRT curricula and training in higher education, found that at least 39 of America’s 50 most prestigious medical colleges and universities have some form of mandatory student training or coursework on ideas related to critical race theory.
"The national alarm should be sounding over the racialization of medical school education. The swiftness and depth to which race-focused social justice education has penetrated medical schools reflects the broader disturbing trends in higher education," Legal Insurrection founder William A. Jacobson told Fox News Digital at the time.
Jacobson, a clinical professor of law at Cornell Law School, also found that 39 of the top 50 medical schools "have some form of mandatory student training or coursework" related to CRT and 38 offered materials by authors Robin DiAngelo and Ibram Kendi, whose books he said explicitly call for discrimination.
"Mandatory so-called 'anti-racism' training centers ideology, not patients, as the focus of medical education. This is a drastic change from focusing on the individual, rather than racial or ethnic stereotypes," Jacobson said.
In 2021, the American Medical Association (AMA) committed to utilizing CRT in a variety of ways and criticized the idea that people of different backgrounds should be treated the same. All 50 schools examined by CriticalRace.org are accredited by the Liaison Committee on Medical Education, which is sponsored by the AMA and by the Association of American Medical Colleges, which has also taken steps to support anti-racist initiatives.
Jacobson believes "Diversity, Equity and Inclusion entrenched bureaucracies promote, protect and relentlessly expand their administrative territory in medical schools," but the resources should instead be used "to expand medical knowledge and patient care, not to enforce an ideological viewpoint."
The Association of American Medical Colleges sent Fox News Digital the following statement:
"Our goal, and the goal of every medical school, is to recruit a diverse class of talented medical students and educate them to improve the health of their patients and the communities they serve in an evidence-based manner. Students must learn to consider all factors that affect health. As science advances and we understand more about what impacts health, medical schools will incorporate these discoveries into their curricula.
The AAMC’s responsibility is to work with our member institutions to disseminate effective new curricular approaches based on scientific evidence. With the common goal of achieving better health, we must recognize that changes to medical school curricula based on evolving evidence will ultimately help us achieve that.
The recently released competencies are grounded in the STEM disciplines that are taught in medical school and that future physicians need to care for their patients.
We have evidence that supports that race is a social construct, and there is a growing body of evidence about what race is and isn’t, and its impact on health. These new insights are improving medical practice and allow us to shift our thinking in medical education to better prepare tomorrow’s doctors.
The medical profession is grounded in the human interaction between doctor and patient and the factors that affect a patient’s health. We have an obligation to address and mitigate the factors that drive racism and other biases in health care and prepare physicians who are culturally responsive and trained to address these issues. Ignoring these facts would be detrimental to being able to provide sensitive, individualized, and medically appropriate care to each patient. The next generation of physicians must have the comprehensive skills and knowledge needed to heal all those in their care."
The article was updated to include a statement from the AAMC.
The Vice-Chancellor of Osun State University, Prof Clement Adebooye, talks BOLA BAMIGBOLA about the ongoing faceoff between the Academic Staff Union of Universities and the Federal Government, and how the dispute can be resolved
How will you describe the experience since your appointment as vice-chancellor about seven months ago?
It’s been challenging but it is the responsibility of human beings to face challenges and respond to the challenges of life. We are moving on smoothly and we are at cruising altitude, trying to stabilise the system. So as much as is given to us, is as much as we have delivered, trying to make the system work and to work not as just a university but as a 21st-century university. That is exactly what we have done in the last seven months that I assumed duty as the vice-chancellor of the university.
Some cases were instituted against the process that produced you as VC. How are you managing that area to ensure smooth industrial relations on campus?
It is guaranteed by the laws of the Federal Republic of Nigeria that an aggrieved party or parties have the liberty to approach the court of law to seek redress as my colleagues did before my appointment and the issues were resolved also in the court of law, with the university winning the legal suit. Immediately, after I came in, the first thing I did was to make peace among the different stakeholders in the university. I met the different categories of members of staff, starting with the professors, deans, provosts, heads of departments, management, principal officers, committees, and trade unions in the university, like ASUU, the Senior Staff Association of Nigerian Universities, National Association of Academic Technologists and the Non-Academic Staff Union of Educational and Allied Institutions. I even went as far as meeting the students union group, so I was able to tour the six campuses of the university and the Isale-Osun (Osogbo) annex of the university.
I also met traditional rulers where our campuses are located and different stakeholders, like security agencies, including the police, military, Nigeria Security and Civil Defence Corps, and the Federal Road Safety Corps. So, the dust settled within one month of my assumption of duty. It is gratifying to report that I am enjoying close to 100 per cent support from workers and students of this university. The strategy of dialogue, consultation, discussion, focus group meetings, and more is what we have used to make things work for everybody and also to have respect for the different categories of workers within the system and treat them as stakeholders in the system and not just workers within the system.
Your predecessor, Prof Labo Popoola, contended with different issues, especially the unions on the campuses while in office. Have those issues been addressed?
Those issues are no longer there. First, there were six members of staff on interdiction when I came in. They have been interdicted for three and a half years, almost going to four years. They were on half salary and by the grace of God, within my first three weeks in office, I put politics and strategy into play to meet all parties at various levels. I succeeded in approaching the council and doing the necessary things that led to the resolution of the crisis, including the withdrawal of the court cases against the six individuals, and the matters were resolved and council approved and the six people are back in the university now and their full entitlement for the period that they were on interdiction has been paid to them.
Another issue was about trust, which I have tried to build since I came in. You know, as human beings, we differ and there is no complete human being. So, probably little changes in attitude have influenced the peace that we enjoy now in the system.
While most universities in the country are on strike, your school has enjoyed a stable academic calendar for almost six years. What are you doing differently?
Public universities, which include state and federal ones, are run on three models. The Federal Government operates a 100 per cent funding model for personnel costs. Some state universities also run that same model of 100 per cent funding. Some other universities run a 50/50 or 60/40 model of university funding and then the state contributes to the personnel cost. For us in Osun State University, we run the approximately 50/50 model whereby we contribute 50 per cent of the personnel cost and the state also contributes 50 per cent of the cost.
Also, we take care of almost 100 per cent of the running cost of the university, running costs in terms of gardening, maintenance, cleaning services, sanitation, security services, accreditation, laboratory materials, equipment, and other things, except for some provided by the Tertiary Education Trust Fund. So, the major contribution that enables the university to stand on its feet is the 50 per cent personnel cost from the state added to the 50 per cent personnel cost by the university. Therefore, our employees in this university know very well that this system is like a ‘work-and-eat children’s football tournament.’ It is when our students are here that we can get 50 per cent of the personnel cost, which is added to the one from the state, and it is also at that time that we get our running cost. So, when students are not here, we get neither the 50 per cent personnel cost nor the almost 100 per cent running cost. So our employees are conscious of this, and the university council is also conscious of this very important element of our funding and decided on March 18, 2018, that if this university needed to survive, we should run on a ‘who-does-not-work-does-not-deserve-a-pay’ policy, and that has become a tradition here and that is what has been sustaining the system.
Do you think other universities should adopt your strategy?
I am not going to suggest it to the Federal Government because it is an entity and our state is also an entity. This kind of thing can be implemented when a university has students contributing to personnel costs. If you have gone on strike for three or four months or even more and you are paying salaries when students are not around and you have to continue running the semester when the students are back, where do you get the money to pay? Osun State University has done very good arithmetic of how much we need per semester. We have a template, we have the way we spend our money. So if there is a shift either by strike or something, the university equilibrium is shifted and the university will lose balance. That’s why our workers have understood over time that it is better to keep this system as it is. It is an understanding between the management and the workers.
Is your kind of model not anti-labour and will it not be impossible for unions on campus to protest against any issue they are uncomfortable with?
I met them (the unions) almost every month and we discussed. We meet on the road and stop and talk. They don’t book an appointment to see me. We discuss in the corridor and that is my style. When they have issues, they do call me at midnight. We are very free. We discuss freely. Honestly, they are good and understanding people. The way we resolve our issues is very easy. They (unions) know we can meet anywhere and they know that for everything we do, there is an explanation and we do listen to them too and in a way, we incorporate their suggestions into what we do because they are stakeholders, not just workers.
A VC will go, but they will remain here until retirement. I am not going to be here until retirement. They own this system, so their stake here is far more than mine and that is why we must try to sit down, hear and crystalise their side of the story so that we are guided.
Other Nigerian universities have been on strike for several months. How does that make you feel?
I have worked all over the world. I have taught at eight universities abroad, including some of the best universities, like the University of Manitoba, the University of Bonn, and a university in Stuttgart, Germany. You can’t find anywhere else in the world where universities are closed for this long a period, and that is the bitter truth. At the risk of being called names, we have to find a way around this at the level of the government and the level of the unions. A nation cannot grow the way we are going.
You were, at a point, a union secretary at Obafemi Awolowo University, Ile-Ife, and you must have fought against some of the issues causing unrest on campuses presently. Are ASUU’s demands not justified?
You are right. We did and we fought them and that is what I am saying on the side of ASUU and the side of the government. Given the quantum of money that federal universities get from the Federal Government, members of staff of such universities should be asking their vice-chancellors questions on how they spend the money. They should ask them. It (funds) may not be enough but what they are getting should provide some basic minimum. However, what happens in this country is what I don’t understand. I am a vice-chancellor of a state university. Go to our research laboratory, and see our infrastructure. We don’t have all our roads tarred on campus, but when you go into our campuses, you will see features of university development. What I am saying is that the little we get should be used judiciously.
You are insinuating mismanagement?
There are issues about Nigeria that are very difficult to discuss and I don’t want to discuss them. But unions should start asking vice-chancellors questions.
A piece was circulated on social media recently that your university may cancel its Arabic/Islamic course. Is that true?
It is all a tissue of lies. In fact, there was no meeting, no contemplation of such. If anything, we are trying to grow our Department of Islamic and Arabic Studies. Do you know that we have a Department of Common and Islamic Law in this university and it is one of the most subscribed departments in this institution? It is on the Ifetedo campus. Recently, we met to give scholarships to students in this university, and Arabic and Islamic Studies got 35 per cent tuition support for students who want to study Arabic and Islamic Studies, approved by the council of this university. So, where this news came out from, I don’t know. Our university is a religion-tolerant institution.
My registrar is a Muslim, and the chairman of the council is a Muslim. Actually, the majority of members of the council are Muslims. Most officers who work in the office of the vice-chancellor are Muslims appointed by me.
My deputy vice-chancellor, appointed by me, is a Muslim. He is the Chief Imam of the university. Please help us to dispel this rumor quickly and to tell them that our university is tolerant. We are a mixture of different religions in South-West, Nigeria, and we are not known for this kind of rubbish.
What is your university doing to provide solutions to societal needs through research?
I have said a million times that we should try to justify the reasons that universities should continue to exist. One of the reasons is for universities to make inventions and discoveries, and that is what we are poised to do in the next five years here. We are consolidating our centre for renewable resources where technologies will be rolled out. I tell you, I have just selected a professor of high standing to head that unit in the university to do things that will improve the quality of life of our people, not only in Nigeria but all over the world. So, this university will increase its global visibility through research. Many things are coming on board. Just this morning, I received a mail from the Bill and Melinda Gates Foundation that one of our professors was awarded a grant to study how to mitigate the effect of malaria and COVID-19 in this part of the world. Several things are ongoing. Our colleagues in Ikire are trying to investigate issues that are related to religion and culture, and our living together. In Osun State, we have four dialect groups. We have the Oyos, Ifes, Ijesas and Igbominas. Those are the four categories of people that we have in the state. Our researchers are looking at those things that join us together. Issues are being investigated all across our campuses. So, I tell you, by the grace of God, within the next one to one and a half years, UNIOSUN will be in the news in the area of discovery and inventions. New things are coming up to improve the quality of life of the people.
What is the management doing about its internally generated revenue?
We are trying to do our best. For example, at the moment, we have a farm on our Ejigbo campus that produces eggs and broilers. We market that and we make some money. We have our established palm plantation that is yielding already and selling. We also have a garri processing plant on the Ejigbo campus. We have a water factory plant here on the Osogbo campus. Our water is a household name in Osogbo. Everybody wants to buy UNIOSUN Water. We have a block-making firm. You can see the Navy trying to erect a fence on their land in Osogbo. It buys all the blocks from UNIOSUN.
Is your stable academic calendar attracting more students to apply to the university?
It is bringing pressure. How we are going to manage the admission this year is my problem now because I am sure that our quota this year will not be more than 6,000 or so, even with the expansion of infrastructure. I do not know yet how we are going to accommodate the number of students that may want to come to this university but whatever it is, we have more opportunities because we have three streams of admissions approved by the Joint Admissions and Matriculation Board. We have full-time, part-time programmes and sandwich (programmes) which is the weekend programme. So, any student that could not make it into the normal full-time undergraduate programme will make it into either the part-time programme, which will be 150 per cent of the time or sandwich. So, that is what we are going to do this year and we are strategising for that already.
What is your recommendation to ASUU and the Federal Government concerning the current industrial action?
They should resolve this matter. One thing I have to say is that it is bad that you are paying someone the same salary since 2009 and this is 2022. The government should look at that. It is not fair to the professors and not even fair to the civil service. The government should have looked at that a long time ago. Inflation has eaten up all the money. The prices of rice, garri, elubo yam, bread, and other things have gone up. I am pleading with the Federal Government to look into the salaries of the academic staff members and that of all other workers in this country and give people a bit of a life of ease. The pressure on people is much. Secondly, ASUU itself should be asking the vice-chancellors questions about the management of the university.
During its annual summer retreat at Gateway Canyons Resort & Spa in Gateway, the University of Colorado Board of Regents on Thursday took its first look at a new study showing that a college degree can positively impact someone’s life.
“There’s a very strong association with scoring higher on pretty much all of these dimensions as you move through your educational attainment,” said Phyllis Resnick, a Colorado economist who led the study.
This is the board’s first time hosting a retreat in Gateway. In recent years it was hosted at Devil’s Thumb Ranch in Tabernash, and before that it was held at former CU President Bruce Benson’s mountain ranch near Kremmling.
CU Regent Heidi Ganahl was not present for Thursday’s session.
CU acting Chief Financial Officer Chad Marturano said creating a “valuemetric” to study the different ways higher education can positively affect someone’s life is a concept the four-campus system started working on about 18 months ago.
“We’ve always talked about the economic earnings relative to educational attainment, and we kept going back to the board and saying, ‘What are other ways we can think about this but not just talk about them in a piecemeal, anecdotal way but actually put something to that (and) have data to back it up?’” Marturano said.
Resnick’s study pulled data from various sources and examined five metrics that impact people’s lives: economic; health; home; civic and social; and professional. Using this data, Resnick compared how the variables changed based on a person’s level of education. The levels of education she used included no degree, an associate’s or vocational degree, a bachelor’s degree and an advanced degree.
When Resnick compared the five metrics across the four education cohorts, she found that people with a bachelor’s degree scored about twice as high on the valuemetric as people with no degree.
“We got this really nice kind of result that as you move forward on your educational pursuits or attainments, you score higher and higher on that data,” she said.
Resnick said the study is just a snapshot over time using information from four sources with respondents varying in age.
CU President Todd Saliman said this initial study is just another tool the university has for its outreach, marketing and legislative work.
“It’s another tool in our toolbox to talk about the value of higher education depending on the audience that you’re talking to and to demonstrate analytically that there is this connection or association,” he said.
During Thursday’s retreat, board members also discussed their role in leadership and governance after listening to Peg Portscheller, president of education consulting company Portscheller & Associates.
Portscheller helped the board think about working collectively as it transitions regents Lesley Smith and Ken Montera into their new roles as chair and vice chair, prepares to lose three regents while gaining three new ones and begins working with CU’s new President Todd Saliman.
Portscheller told the board it needs to think of the retreat as a way to reflect on how it’s doing, the way it’s moving the university’s strategic plan forward with a new president and the chancellors and what it still has to accomplish.
“We are coming together and thinking about how we all work together,” she said.
The board began reviewing old goals that it formed during a 2020 winter retreat to discuss which it wants to keep, which it wants to start and which it wants to stop.
Regent Glen Gallegos said he believes the board has been doing well given the challenges it has faced in recent years with the coronavirus pandemic, new board members and a new president.
“Are we functioning as a board? In a lot of ways, but I think we can certainly do better,” he said.
The Board of Regents’ retreat continues until about 1 p.m. Friday.
Jul. 30—The Daily Item
A Susquehanna University student who is a Selinsgrove High graduate is one of the latest members of the Pennsylvania State Board of Education.
On Friday, the Department of Education announced that soon-to-be SU junior Natalie Imhoof will serve as the board's Junior Postsecondary Student member. She is joined on the board by Claire Chi, a junior at State College Area High School.
"Student voices are critically important as we work to develop policies and best practices in schools across the commonwealth, and we welcome the unique perspectives that Natalie and Claire will bring to the State Board of Education," said Acting Secretary of Education Eric Hagarty. "Through their participation in a number of important conversations, these two bright young leaders will help shape the vision of education for decades to come."
Imhoof is a biology and management major at Susquehanna University, where she serves as a student ambassador, and is a summer intern at the Mid-Atlantic Society for Healthcare Strategy and Market Development. Imhoof also serves as a Vertical Garden Project leader for Partners for a Healthy Community, where she creates and coordinates the installation of indoor gardens at senior living facilities and has researched the effects of gardening on social isolation and loneliness, a study she will provide to the National Gardening Association.
She also serves as treasurer of the American Sign Language (ASL) Club.
"The board values the input and perspectives that our student members provide on issues that are important to Pennsylvania's students," said Chair of the Board Karen Farmer White. "By including student leaders on the Board, we seek to encourage civic participation among Pennsylvania's youth and to empower student voices in education policymaking."
The State Board changed its bylaws in May 2008 to incorporate student representation in a non-voting capacity on the Council of Basic Education and the Council of Higher Education. Student members must attend and participate in board meetings, advise and consult with the board, and adhere to board regulations. They will also establish an ongoing relationship with other students throughout the commonwealth to more effectively represent students in Pennsylvania's educational policymaking. The State Board of Education's members — and four student representatives — convene every other month throughout the year to discuss and vote on education policies and procedures.
Clever rhetoric has been utilized to advance causes since at least the 5th century B.C., but even the ancient Greeks couldn’t match the level of sophistry being practiced in Florida these days, to win support for restrictions on reproductive and transgender rights.
The ballroom drama event of the week came Thursday when Gov. Ron DeSantis announced the suspension of a duly elected state prosecutor for what he called “neglect of duty.” Florida state attorney Andrew Warren “thinks he has the authority to defy the Florida Legislature and nullify... criminal laws with which he disagrees,” said DeSantis at a news conference in Tampa that his press secretary promoted on Twitter as the “liberal media meltdown of the year.”
“State Attorneys have a duty to prosecute crimes as defined in Florida law, not to pick and choose which laws to enforce based on his personal agenda,” the governor said.
Since then, politicians, pundits and protesters have sounded off about Warren’s removal, condemning the move. Some prominent members of Florida law enforcement hailed DeSantis’s decision at his news conference, but there have been crickets and “no comments” from Hillsborough County police chiefs and other officials.
The governor has found support from his followers on social media and in an interview with Fox News anchor Tucker Carlson, who blamed billionaire George Soros for “funding soft-on-crime prosecutors across the country.” It should be noted Carlson made no mention of the hefty financial support conservative attorneys general and judges have received in their races from the likes of the Koch family and Federalist Society executive vice president Leonard Leo, an advisor to former President Donald Trump.
The Republican, who is running for re-election this fall and reported to be considering a run for president in 2024, suspended Warren in an executive order he signed Thursday, dramatically citing evidence labeled “Exhibit A” and “Exhibit B.” The first is a June 2021 letter Warren signed along with other prosecutors, vowing not to use their “limited resources” on enforcing any laws that criminalize trans people or physicians that provide gender-affirming care.
“Bills that criminalize safe and critical medical treatments or the mere public existence of trans people do not promote public safety, community trust or fiscal responsibility,” the letter said. “They serve no legitimate purpose.” The letter goes on to vow prosecutors will use their discretion when it comes to enforcement of those anti-trans laws. At present, Florida is one of 18 states that bans trans student-athletes but has not thus far enacted any laws barring gender-affirming healthcare.
However, there is a new development: when the Florida Board of Medicine met on Friday to consider the DeSantis administration’s proposal to deny Medicaid coverage for all gender transition-related care for both trans youth and adults, the panel voted to adopt new state guidelines that would ban and restrict gender dysphoria treatments, but only for children and adolescents, according to WKMG-TV. Next, the Florida Board of Medicine will follow the rule-making process to determine a standard of care for state physicians and medical experts.
“Exhibit B” is a letter that was signed on June 24 of this year by local prosecutors nationwide, opposing enforcing abortion-related crimes. Warren was the only Florida-based state attorney to sign on.
Within days of the Supreme Court decision that overturned Roe vs. Wade in June, Warren—a Democrat—also made his feelings clear in a tweet that labeled state restrictions on abortion “unconstitutional.”
On Thursday, Warren tweeted again, this time issuing a statement in response to his suspension and replacement, calling it a “political stunt” and accusing DeSantis of “using his office to further his own political ambition,” going against voters who “elected me to serve them, not Ron DeSantis.”
As the Tampa Bay Times reported, DeSantis’ order said those two letters Warren signed amounted to a blanket refusal to enforce, or a “functional veto,” of the law, which amounts to neglect of the state attorney’s responsibilities.
If Warren fights his suspension, it’ll be up to the Florida Senate to decide whether to reinstate him or remove him from office after a hearing in which both sides will be able to make their case by presenting evidence, calling witnesses and having them give sworn testimony. According to State Senate rules, a notice of an initial hearing has to be announced within 90 days of the suspension, and the Senate’s final decision must be made by the end of the next legislative session, which does not start until next March and ends in May 2023.
That timeline may change should Warren take the governor to court, according to the Tampa Bay Times.
At that same news conference introducing Hillsborough County Court Judge Susan Lopez as the judicial district’s new prosecutor, DeSantis referred to gender-affirming healthcare as "disfiguring young kids.” He repeated an analogy he’s made before: “A 12-year-old boy can't go in and get a tattoo, but yet somehow, if the legislature were to say, that you can't get a sex change operation...” DeSantis paused for applause before continuing.
“They are literally chopping off the private parts of young kids,” he claimed, and hinted at future legislation on "protecting child welfare," which he said is currently being done "administratively and with medical licenses," as WPEC-TV reported.
The main problem with these statements by Gov. DeSantis is that they are not true.
As The Washington Post has reported, no surgeons are “chopping off the private parts of young kids.”
It should be noted that there are always exceptions, such as South Florida reality TV star and author Jazz Jennings, who underwent a vaginoplasty—commonly referred to as “bottom surgery”—at age 17. That operation involves or is preceded by an orchiectomy, the surgical removal of the testes, which are often repurposed in the creation of a neovagina in gender confirmation surgery. In his remarks on Wednesday, in which he chose to misgender trans girls who are actually too young to get this operation, Gov. DeSantis called it “castration.”
DeSantis also used a phrase that fell out of favor during the era of bell bottom jeans, “sex change,” and called for doctors who perform such operations to be sued, but he did not specify by whom.
According to the latest Standards of Care published by the World Professional Association for Transgender Health (WPATH), mastectomies, often called “top surgery” in relation to trans boys, trans men and nonbinary individuals, are not considered “genital surgeries.” It’s actually plastic surgery and there’s no minimum age. According to statistics from the American Society of Plastic Surgeons, 64,470 cosmetic surgical procedures were performed on people age 13 through 19 in 2015.
Read “What You Need to Know About Transgender Children” from the Washington Post by clicking here.
While spoken rhetoric is especially useful in the era of TikTok and YouTube, that doesn’t mean the era of written propaganda is over, especially when it comes to transgender youth, and especially when there’s Twitter to spread that propaganda.
The day before DeSantis suspended Warren, Florida officials pumped up their spin machine in response to two bombshell reports. One was a study published in the peer-reviewed journal Pediatrics, which poked a ginormous hole in the claims that “social contagion” is to blame for a suspected spike in the number of youth identifying as trans, and that they do so to “flee LGB-related stigma.” As Stanford child psychiatry fellow Dr. Jack Turban tweeted, neither claim was supported by their research.
The spokesperson for Florida’s department of health, retired chemist Jeremy Redfern, didn’t challenge Dr. Turban’s study directly. He instead retweeted a thread of questions from writer/filmmaker Nathan Williams, and added a GIF from the opening scene in the film, Ace Ventura: Pet Detective. It’s apparently meant to convey, “Oops!”
Turban didn’t respond to a request for comment as of press time.
The other blockbuster report this week revealed that state health officials working to put an end to gender-affirming healthcare in Florida reportedly misrepresented what 10 researchers found in order to bolster their claims.
As Vice News podcast producer Sam Greenspan tweeted on Wednesday, their investigation started with their own curiosity. They’re a former Floridian and transgender, and not a scientist.
Greenspan tweeted, “My reporting revealed a concerted attempt by a very small group of people working to seed evidence for lawmakers, muddying the waters of real science and providing confirmation bias for anti-trans agendas.”
The ten scientists told them they were shocked to learn their research was being used by Florida’s Department of Health to justify denying gender-affirming care to all minors in the Sunshine State.
“NO, this is NOT an appropriate use of our work,” one researcher cited in Florida’s memo reportedly told Vice News. “This does not mean denying transgender youth and certainly not gender-affirming care!”
The department issued its memo in April, following the enactment of what opponents have called the “Don’t Say Trans or Gay” law.
That memo goes so far as to recommend against the practice called “social transition,” usually marked by a change in gendered clothing and adoption of pronouns that differ from those a child used previously.
In addition to reporting that the researchers were unaware of how Florida was misusing their findings, Greenspan noted that all 12 of the citations in the health department’s memo were distorted or relied upon anti-trans activists who oppose the use of gender-affirming care to treat trans youth. They wrote that officials “reverse-engineered” their rationale for a policy completely counter to research-based medical best practices.
While Redfern dismissed Greenspan, telling them the data supported the state’s policy and suggesting they do some memorizing about “how the ‘burden of proof’ works in the sciences,” it fell to this reporter to seek further comment from the state government.
Brock Juarez, communications director for the Agency for Health Care Administration (AHCA), responded to an introduction from DeSantis press secretary Christina Pushaw. Juarez is in his fifth month on the job following his 18 months as the director of corporate and strategic initiatives at Florida Healthy Kids Corporation, and before that working for Chief Financial Officer Jimmy Patronis.
“The Agency for Health Care Administration released a report that found several services for the treatment of gender dysphoria promoted by the Federal Government—i.e., sex reassignment surgery, cross-sex hormones, and puberty blockers—are not consistent with widely accepted professional medical standards and are experimental and investigational with the potential for harmful long term affects [sic],” Juarez wrote in an email, apparently meaning to type “effects.”
“The Biden Administration was found to be using low-quality studies to affirm the guidance they released,” he said. “The Agency’s report is an evidence-based report that speaks for itself.”
If it did speak for itself, it would be lying. Juarez’s claim that the treatment of gender dysphoria he mentions “are not consistent with widely accepted professional medical standards” is entirely false.
The Transgender Legal Defense and Education Fund (TLDEF) has on its website links to almost 30 respected medical associations that endorse such treatments.
As to Juarez’s claim about gender-affirming treatments being “experimental,” Dr. A.J. Eckert and Dr. David Gorski write in sciencebasedmedicine.org that this is a red herring.
“The intent behind this disparaging and dismissive use is to promote the view that the safety and efficacy of GA [meaning “gender-affirming”] care is so much in doubt that it cannot be generally used and requires more clinical trials to evaluate its efficacy and safety before it can be recommended,” they wrote. “Of course, there is also a not-so-subtle implication behind labeling GA care ‘experimental,’ namely that it is also dangerous, which then often leads to further claims that clinical trials of GA care would be unethical. In essence, those promoting these bills seem to be deceptively conflating experimental medicine (i.e., unproven treatments) with ‘disproven’ treatments, safe in the knowledge that most people outside of medicine will not know the difference.”
What about that claim gender-affirming care like puberty blockers could have “harmful long-term” effects?
“This assertion is, quite simply, untrue,” they write. “A systematic literature review of research from 1991 to 2017 noted 52 studies (listed here, along with links to the individual studies) showing overall improvement in the well-being of trans people following GA medical and/or surgical interventions, four studies showing mixed or null findings, and zero studies that GA interventions cause overall harm. More recent studies confirm that gender-affirming care can be immensely beneficial in properly selected candidates.” Eckert and Gorski provided the hyperlinks to research that supports their statement.
Earlier this month, nearly every conservative news media outlet and even some television stations based in red states posted bulletins with incendiary headlines like “FDA Officials Warn Of Brain Swelling, Vision Loss In Minors Using Puberty Blockers.”
The real story, of course, is a whole lot less alarming. On July 1, the FDA did indeed add a new warning to the label for gonadotropin-releasing hormone (GnRH) agonists, which are mostly used to treat precocious puberty in children, as well as in gender-affirming care as puberty blockers. The medication hasn’t been recalled and no treatments have been banned; it’s just an update for prescribers to note and be aware of a new risk related to this treatment.
The warning is about something called pseudotumor cerebri, aka “idiopathic intracranial hypertension,” a condition that occurs when pressure inside the skull increases for no obvious reason, according to the Mayo Clinic. It can happen to any child or adult, but it’s most often found in women of childbearing age who are obese.
It becomes evident with several symptoms: headache, papilledema, which means swelling around the optic nerve, blurred or loss of vision, diplopia, which means double vision, pain behind the eye or pain with eye movement, tinnitus, dizziness and nausea.
You would think from the headlines that hospitals coast to coast were going to be inundated by an epidemic of brain-swelled, suddenly blind trans kids. While no one wants to see even one child fall ill because of a reaction to a medication, the truth is there have so far only been six cases of pseudotumor cerebri linked to GnRH: five cisgender girls and one trans boy, ranging in age from 5 to 12 years. Only the trans boy was prescribed GnRH for transgender care; the rest were being treated for precocious puberty.
Already, three of the six patients have recovered and are experiencing no symptoms at all, according to the FDA, and one patient was on their way to a full recovery. No information was available about the other two. Three of the six patients discontinued use of the medication. And again, GnRH is still being prescribed and used, mostly to treat cisgender children, as it has been for generations.
Juarez’s other claim, that “The Biden Administration was found to be using low-quality studies” is unsubstantiated and appears to be a matter of opinion. This one gets even deeper into the weeds, but what I learned is that his claim relies upon a concept called “quality of evidence” that Redfern, in a separate email, explained with a quip: “It appears that you need to check out the evidence-based pyramid.” Here’s what he sent me:
I shared Redfern’s response with Dr. Lynne Kelly, an author and award-winning professor of research communication—among other things—at the University of Hartford. Although she said this pyramid isn’t something she uses to teach graduate students, she agreed to help me help readers understand what “quality of evidence” is, and what Redfern and Brock are claiming when they call peer-reviewed studies “low quality.”
“The figure that you were sent illustrates different forms of research evidence that can be evaluated in terms of their quality,” Kelly wrote in an email. “The figure suggests that as you go from the bottom (bright green) to the top (pink), the evidence increases in quality but that’s debatable. Depending on how it is done, someone might find a randomized control trial to be superior (i.e., higher quality evidence) than a critically appraised individual synopsis, but it also depends on how those terms are being used in the figure. I would agree that these are all forms of evidence that can vary in quality.”
So, it appears that the determination that a study offers “low quality evidence” is merely a matter of opinion, not a scientific fact.
That didn’t stop Juarez from continuing to make his claim when pressed for a comment on Greenspan’s investigation for this report.
“I am sure as a journalist focused on writing a factual non-biased story you will be sure to note many of the ‘experts’ refuting our report are relying on studies and surveys deemed low or very low quality and insufficient to meet medical necessity criteria. Many of these ‘experts’ have also shown a remarkable blindness to what is happening in Northern Europe on this very topic. I am still waiting for the critics to provide a logical and well-reasoned case based on quality evidence. However, rather than cite quality evidence, they have simply ignored the AHCA report’s main arguments while disseminating misinformation and making false claims. Our main focus has always been on the real evidence, rather than the eminence of a medical society or association.”
“If Mr. Juarez had read our article, he might have been surprised to learn that the experts refuting Florida's ‘report’ are the same experts he himself is quoting,” responded Greenspan. “That was the crux of our entire endeavor. We did reach out to Juarez for this story, but did not receive a response.”
Read more about Vice News’s investigation, “How Florida Twisted Science to Deny Healthcare to Trans Kids.”
In his report, Greenspan noted Redfern “often uses his personal Twitter account to tweet disinformation about gender-affirming care and troll members of the press when they seek clarity on the state’s policies.”
“I would just note that ‘gender affirming healthcare’ is a political term, not a scientific or factual one,” wrote Christina Pushaw, press secretary for DeSantis. Together with Redfern, these two spokespersons spend a rather sizable amount of their time tweeting.
And lately, those tweets have been to troll journalists—including Vice News managing editor Leah Feiger—about the phrase, “gender-affirming healthcare.”
It’s unknown why Pushaw has no use for the hyphen in “gender-affirming,” an adjective modifying the word “healthcare.”
“As Governor DeSantis noted at the press conference,” Pushaw wrote in her Thursday email to me, “the reality of ‘gender affirming care’ is life-altering experimental pharmaceutical and surgical interventions on children, such as double mastectomies and puberty blockers that can cause blindness and sterility. Reporters should strive to be accurate in describing procedures and medical interventions, rather than cloaking the reality in misleading euphemisms like ‘gender affirming care’. The public, especially parents of kids with gender dysphoria or confusion, need to understand what that actually comprises. So if Vice wants to discuss ‘twisted science,’ perhaps they should scrutinize those who promote these radical procedures on kids and teens, based on low quality evidence.”
“‘Gender-affirming healthcare’ is, in fact, a medical term,” said Greenspan in response to Pushaw’s statement. “But given its history and usage, especially in light of anti-trans legislation, the term has been weaponized.”
Last word on the “gender-affirming healthcare” debate goes to Joanna Harper, a doctoral researcher at Loughborough University in the U.K. She’s a medical physicist by profession, an avid distance runner by choice and the only person in history to publish a peer-reviewed article on the performance of transgender athletes. In fact, she is the first author on four peer-reviewed papers on the subject of transgender athletes. She was the first transgender person ever to advise the International Olympic Committee on matters of gender and sport.
“There is plenty of peer-reviewed research showing the positive benefits of gender-affirming care,” Harper told me. “It is unfortunate that the politicians running Florida refuse to accept it.”
Last month, Axios reported a majority of Floridians disapprove of the Supreme Court's decision to overturn Roe vs. Wade, with 44% saying that they "strongly disapprove." Those are the findings of a statewide poll from the University of South Florida and Florida International University. Roughly a third said the state should pass a law protecting the right to an abortion.
According to GLAAD, an estimated 4.6% of Floridians identify themselves as LGBTQ+ and 24% of LGBTQ+ people in Florida are raising children. The state ranks third in the nation for the most same-sex couples, behind New York and California.
So given that GLAAD released a poll of LGBTQ+ Floridians on Aug. 1, its findings bear relevance here, even though the survey was conducted a few weeks prior to this week’s news.
As GLAAD noted, the Florida governor’s race of 2018 was decided by only 32,463 votes out of more than 8 million. If there’s a similar result this November, this poll suggests LGBTQ+ and ally voters could be decisive: 77% of LGBTQ+ and ally voters have an unfavorable opinion of DeSantis.
The pollsters say voters want their political candidates to address these top issues: restoring abortion rights (47%), reforms for gun safety (31%), high housing costs and inflation (22% each), and protecting LGBTQ+ equality (19%).
With 70% of respondents saying Florida’s anti-LGBTQ+ laws and policies are designed to attack LGBTQ+ people and are emotionally damaging to the population, leaders for equality hope to see a big turnout on Election Day in November.
“Florida’s LGBTQ voters and ally voters have grave concerns about their basic human rights, including access to abortion, freedom of speech, and evidence-based healthcare for LGBTQ youth,” said GLAAD President and CEO Sarah Kate Ellis. “They’re motivated to make a difference in this crucial election.”
“The stakes are as high as ever: our civil liberties, the progress we’ve won, and our very democracy are on the line,” said Equality Florida press secretary, Brandon Wolf.
“It is imperative that Floridians use the power of their votes to hold Governor DeSantis and his right-wing allies accountable for the hate and bigotry they have unleashed on our state.” | <urn:uuid:7c7c4d80-5302-4bd0-b339-8f80b26c9253> | CC-MAIN-2022-40 | http://babelouedstory.com/bibliographies/accueil_bibliographie/training-pdf-guide.php?pdf=ASWB-Association-of-Social-Work-Boards | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00100.warc.gz | en | 0.962975 | 15,252 | 2.671875 | 3 |
Most organizations use databases to store sensitive information. This includes passwords, usernames, document scans, health records, bank account and credit card details, as well as other essential data, all easily searchable and conveniently stored in one place.
Unsurprisingly, this makes databases a prime target for malicious actors eager to exploit unprotected systems and get their hands on profitable information. In fact, attackers often don’t even need to hack them to steal all that precious data: one of the most common causes of a breach is a database that has simply been left unsecured, allowing anyone to access the data without providing a username or password.
Such lapses in database security can (and often do) lead to hundreds of millions of people having their personal information exposed on the internet, allowing threat actors to use that data for a variety of malicious purposes, including phishing and other types of social engineering attacks, as well as identity theft.
While there’s been a noticeable decline in data leaks from open databases in the last year, many database managers are still struggling to keep their data protected from unauthorized access.
But just how many unsecured databases are still out there? That’s what we at CyberNews wanted to find out.
What we discovered was eye-opening: tens of thousands of database servers are still left out in the open for anyone to access, with more than 29,000 instances of unprotected databases leaving nearly 19 petabytes of data exposed to theft, tampering, deletion, and worse.
The fact that thousands of open databases are exposing data is not new. Indeed, cybercriminals are so well-aware of this that it can take mere hours for an unprotected database to be detected and attacked by threat actors.
One would suppose that after years of massive leaks, ransom demands, and even devastating data wipeouts by feline hackers (meow) making the headlines, database owners would by now be aware enough of the problem to, at the very least, ask for a username and password before letting anyone in.
Unfortunately, as our investigation shows, this does not seem to be the case.
To conduct this investigation, we used a specialized search engine to scan for open databases of three of the most popular database types: Hadoop, MongoDB, and Elasticsearch.
While performing the search, we made sure that the open databases we found required no authentication whatsoever and were open for anyone to access, as opposed to those that had default credentials enabled. We excluded the latter because it would require us to log in to those databases without authorization, which would be unethical. As a result, the actual number of unprotected databases and the amount of exposed data is likely even higher than what we were able to find.
With the initial search completed, we then ran a custom script to measure the size of each unprotected database, without gaining access to the data stored within.
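The custom measurement script is not published with the article, but the general approach can be sketched: read the server-reported size metadata rather than the data itself. Elasticsearch, for example, reports document counts and on-disk size at its `/_cluster/stats` endpoint; the sample response body below is invented for illustration, and a real scan would fetch it over HTTP from each open host.

```python
# Sketch: estimate how much data an open Elasticsearch node exposes by
# reading size metadata only, never the documents themselves. The actual
# script used in the investigation is not public; this shows the idea.
import json

def exposed_bytes(cluster_stats: dict) -> tuple[int, int]:
    """Return (document_count, store_size_in_bytes) from a /_cluster/stats
    response body. Both figures are reported by the server itself."""
    indices = cluster_stats["indices"]
    return indices["docs"]["count"], indices["store"]["size_in_bytes"]

# Illustrative response body; a real scan would GET
# http://<host>:9200/_cluster/stats from each unauthenticated node.
sample = json.loads("""
{"indices": {"docs": {"count": 1200000},
             "store": {"size_in_bytes": 734003200}}}
""")

docs, size = exposed_bytes(sample)
print(f"{docs:,} documents, {size / 1024**3:.2f} GiB exposed")
```

Summing these per-host figures across all discovered instances is enough to arrive at aggregate numbers like the petabyte totals cited below, without ever opening a single record.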
Here’s what we found.
Our findings show that at least 29,219 unprotected Elasticsearch, Hadoop, and MongoDB databases are left out in the open.
Hadoop clusters dwarf the competition in terms of exposed data with nearly 19 petabytes easily accessible to threat actors who can potentially endanger millions, if not billions of users with a click of a button.
When it comes to the number of exposed databases, Elasticsearch leads the pack with 19,814 instances without any kind of authentication in place, putting more than 14 terabytes of data at risk of being stolen or taken hostage by ransomware gangs.
MongoDB seems to fare much better than others terabyte-wise, but the 8,946 unprotected instances show that there’s still a long way to go in terms of basic database security for thousands of organizations and individuals who use MongoDB to store and manage their data.
As we can see, China tops the list with 12,943 exposed instances overall, beating the other countries in each category by a large margin.
The United States comes second, with over 4,512 databases left out in the open, while Germany, where we found 1,479 unprotected instances, takes the third place.
India and France close the top five, with 1,018 and 746 publicly accessible databases, respectively.
Back in 2020, unknown cybercriminals launched a series of so-called ‘Meow’ attacks that wiped all the data stored on thousands of unsecured databases – without any explanation or even a ransom demand – leaving shocked owners with only an empty folder with files named ‘meow’ as the signature of the attacker.
Interestingly enough, we found that 59 databases hit by the ‘Meow’ attacks a year ago are still unprotected and collectively leaving 12.5GB of data exposed.
According to CyberNews security researcher Mantas Sasnauskas, this only goes to show that raising awareness about exposed and publicly accessible databases is as important as ever.
“Anyone can look for these unprotected clusters by using IoT search engines to effortlessly identify those that don’t have authentication enabled and exploit them by stealing the data, holding them ransom, or, as was the case with the ‘Meow’ attack, simply destroy valuable information for fun, wiping billions of records and crippling both business and personal projects in the process,” says Sasnauskas.
Organizations of all shapes and sizes use databases to store customer and employee records, financial data, and other kinds of sensitive information. Unfortunately, databases are often managed by administrators with no security training, which makes them an easy target for threat actors.
If you’re a database owner, here are a few simple steps you can take to secure your database against unwelcome guests:
First, enable authentication, and make sure your database is protected by a unique and complex password that a potential intruder wouldn’t be able to guess.
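As a minimal sketch of the “unique and complex password” advice, Python’s standard `secrets` module can generate one. The function name and character set here are my own choices for illustration, not part of any database tooling.

```python
# Sketch: generate a strong, random database password using only the
# standard library. `secrets` draws from the OS's CSPRNG, unlike `random`.
import secrets
import string

def make_password(length: int = 24) -> str:
    alphabet = string.ascii_letters + string.digits + "-_!@#%^&*"
    # Keep drawing until at least one letter, digit, and symbol are present.
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.isalpha() for c in pw) and any(c.isdigit() for c in pw)
                and any(not c.isalnum() for c in pw)):
            return pw

print(make_password())  # different on every run
```

A 24-character random password from this alphabet is far beyond brute-force range; the weak point then becomes storing it safely, which is what password managers are for.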
About the author: Edvardas Mikalauskas
October is National Cybersecurity Awareness Month, an initiative launched by the National Cyber Security Alliance and the United States Department of Homeland Security in 2004 which is now in its 18th year. Throughout October, cybersecurity advice will be issued, and resources made available to improve awareness of cyber threats, teach cybersecurity best practices, and raise awareness of the threats everyone faces in an increasingly digital world.
A lot has changed since 2004 when Cybersecurity Awareness Month was conceived. Initially, advice included updating anti-virus software twice a year, and making this a routine practice like changing the time on watches and clocks to reflect daylight saving practices. Today, National Cybersecurity Awareness Month is more important than ever as so much personal and sensitive data are stored on computers and online accounts and mobile phone apps. Cyberattacks targeting personal data and accounts have grown at an enormous rate over the past 18 years, with attacks now posing a major threat to individuals, businesses, and national security.
The goal of National Cybersecurity Awareness Month has remained the same over the past 18 years, and that is to improve awareness of cybersecurity with the public to make it harder for threat actors to achieve their goals. Over the years, National Cybersecurity Awareness Month has grown considerably in scope and its reach has greatly improved. Now, in addition to raising awareness of cybersecurity with the public and encouraging the adoption of cybersecurity best practices, a major focus is to raise awareness of the importance of cybersecurity with business leaders.
Businesses must ensure that industry-standard security practices are followed, and ongoing security awareness training is provided to employees. This is especially important in regulated industries such as healthcare. The healthcare industry in the United States has been extensively targeted by hackers seeking access to sensitive patient data, with ransomware attacks on the industry now at record levels. Security awareness training for healthcare employees, which is a requirement of the Health Insurance Portability and Accountability Act (HIPAA), plays a vital part in improving security and preventing phishing, malware, and ransomware attacks.
Each year, National Cybersecurity Awareness Month has had a different overall theme, with this year’s theme being “Do Your Part. #BeCyberSmart.” Everyone has a role to play in cybersecurity and protecting their own personal privacy and sensitive data, as well as protecting their employer’s systems from attacks and must do their bit to improve security.
Each week in October has a different theme, focusing on a specific aspect of cybersecurity with the theme of Week 1 being “Be Cyber Smart.”
Be Cyber Smart is concerned with protecting personal and business data on Internet-connected platforms. These systems are targeted by cyber actors, but attacks can be prevented by adopting cybersecurity best practices and practicing good cyber hygiene. Individuals must own their role in cybersecurity, which means following cybersecurity best practices such as creating strong, unique passwords for accounts, updating software promptly, turning on multi-factor authentication, and regularly backing up data. Employers should be teaching these best practices in security awareness training sessions and should enforce these best practices as far as possible.
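One of those best practices, multi-factor authentication, usually means a time-based one-time password (TOTP) from an authenticator app. The entire scheme fits in a few lines of standard-library Python; this is an illustrative sketch of the RFC 6238/RFC 4226 algorithm, not production authentication code.

```python
# Sketch: the time-based one-time password (TOTP, RFC 6238) scheme behind
# most authenticator apps, built on HOTP (RFC 4226), pure standard library.
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

def totp(secret: bytes, step: int = 30) -> str:
    # The counter is just the number of 30-second windows since the epoch.
    return hotp(secret, int(time.time()) // step)

# RFC 4226's published test secret; counter 0 must yield '755224'.
print(hotp(b"12345678901234567890", 0))
```

Because the server and the phone both derive the code from a shared secret and the current time, a stolen password alone is no longer enough to log in.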
The focus of Week 2 is “Fight the Phish!” and seeks to raise awareness of the risk of phishing attacks. Phishing is the leading cause of security incidents, accounting for 80% of all reported incidents according to the Verizon Data Breach Investigations Report. Employees need to be trained on phishing email identification, taught how to identify email threats, and told how to react when a suspicious email is received.
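Some of the indicators such training covers are mechanical enough to sketch in code. The keyword list and checks below are invented for illustration; real mail filters are far more sophisticated, and no simple heuristic replaces a trained eye.

```python
# Toy sketch: two mechanical red flags that phishing-awareness training
# teaches people to spot. The keyword list and scoring are illustrative
# inventions, not a production filter.
import re

URGENCY = {"urgent", "immediately", "suspended", "verify", "password"}

def red_flags(sender: str, subject: str, links: list[str]) -> list[str]:
    flags = []
    if any(word in subject.lower() for word in URGENCY):
        flags.append("urgent or credential-related language in subject")
    domain = sender.rsplit("@", 1)[-1].lower()
    for url in links:
        host = re.sub(r"^https?://", "", url).split("/")[0].lower()
        if not host.endswith(domain):
            flags.append(f"link host {host!r} does not match sender domain")
    return flags

print(red_flags("it@example.com",
                "URGENT: verify your password",
                ["http://example.com.evil.io/login"]))
```

The example deliberately shows a classic trick: `example.com.evil.io` looks like the sender’s domain at a glance, but the real host is `evil.io`.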
The focus of Week 3 is to raise awareness of career opportunities in cybersecurity and to encourage more people to consider taking up positions in the industry. The focus is on businesses in Week 4. Business leaders will be encouraged to implement cybersecurity safeguards into products and processes at the design stage, rather than adding security measures as an afterthought. Businesses will also be encouraged to provide cybersecurity training to employees during onboarding and for training to be regularly reinforced through refresher training sessions throughout the year. | <urn:uuid:64d7a9c1-9cd3-40a5-a6b7-2e4576d51f76> | CC-MAIN-2022-40 | https://www.compliancejunction.com/2021-national-cybersecurity-awareness-month-do-your-part-becybersmart/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00100.warc.gz | en | 0.963531 | 798 | 2.875 | 3 |
The biggest change in data privacy regulation to date has been ratified by the European Commission. How might the new law affect you?
The urgent need for a revamp of the laws that govern the handling of personal data was highlighted once again in a special report published in June 2015 by the European Commission. The Eurobarometer survey indicates that trust in digital is low, with more than two-thirds (67%) of Europeans worried that they have no control over their online information, while six out of ten don’t trust online businesses (63%).
High-profile data breaches that leave customers’ details exposed – such as the TalkTalk breach in November 2015, when hackers broke into systems to steal the personal and bank account data of 157,000 customers – have made individuals acutely aware of the potential risks of sharing their information. They are also more conscious of the extent to which websites, online services and social media are ‘invisibly’ collecting personal data. If the full economic and social benefits of developments such as cloud services and the Internet of Things are to be realised, consumers’ perceptions that doing business digitally is inherently risky needs to change.
The new European General Data Protection Regulation (GDPR) will replace the current 1995 EU Data Protection Directive, aiming to plug the trust gap by modernising legislation that safeguards personal data within the EU. It will make protection levels more stringent and consistent across member states, superseding fragmented national laws and standardising the way regulations are implemented, audited and enforced. The GDPR is causing great concern for businesses, with 50% of global companies saying they will struggle to meet the rules set out by Europe unless they make significant changes to how they operate.
- ‘Personal data’: Increasing in scope
Under the new regulations, the definition of ‘personal data’ is expected to broaden, bringing additional in-scope information into the regulated perimeter. Before adequate governance and security controls can be put in place, organisations first need to identify what data they have, where it is stored and how it is being used. Data classification is critical to achieving a greater knowledge of the data that you hold, and is the first step towards a truly data-centric approach to protecting personal information.
- You’ve been breached: Disclosure and reporting
Organisations will be required to notify their national data protection authority and affected individuals within 72 hours of a breach occurring. The breaches that must be reported are those which are “likely to result in a high risk for the rights and freedoms of individuals,” including identity theft or fraud, and financial loss – risks present in most data breaches. Currently, many breaches are not identified until long after the event: according to the Ponemon Cost of Data Breach Study (2015), malicious attacks take an average of 256 days to identify, while data breaches caused by human error go unnoticed for an average of 158 days. Organisations need greater visibility of the data they gather and hold, and how it is accessed and used, so they can quickly discover and retrieve sensitive personal information. This means setting up processes and systems for communicating with customers and remediating problems. If data has been classified, the discovery of affected data is much quicker and easier; furthermore, monitoring and reporting tools may enable the organisation to identify potentially risky user behaviour before a breach occurs – mitigating the insider threat.
- The ‘right to be forgotten’ is back
If there are no legitimate grounds for retaining an individual’s data it must be deleted. This means it must be accurately classified, indexed and stored, making it easy to retrieve so that its status can be updated.
- A bigger stick: Greater penalties and fines
The current maximum fine for misuse or mishandling of personal data is €0.5 million. This is rarely levied, and has not proved enough of a ‘stick’ to motivate organisations to comply with the regulations; it represents a drop in the ocean for most major multinationals, which has made cutting corners on compliance to save time or money a risk worth taking. Under the GDPR, businesses can be fined up to 4% of annual global turnover, or €20 million (whichever is higher), for violating the new regulations. This is a significant change that will escalate data protection to a regular boardroom issue. The cost of non-compliance will also be assessed in terms of reputation loss and damage to the brand, while the regular and periodic data protection audits recommended in the regulations will make it more likely that incidences of non-compliance are picked up.
While this may sound like a significant burden to businesses, organisations maintaining PCI DSS compliance have already invested in security technologies and solutions such as modern encryption, audit and logging, and rights management, which can be leveraged and extended once the ‘personal data’ to be protected has been correctly identified. Taking the initiative now to review the way your company collects, classifies, stores and shares data in a controlled and secure way, with these new data protection regulations in mind, will enable you to ensure ongoing compliance and avoid devastating fines and reputational damage in the future. You can ensure that you are PCI DSS and EU GDPR compliant using Boldon James Classifier.
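The classification step described in the first point above can be partially mechanised: scan records and tag any field whose value looks like personal data. The patterns below are deliberately simplified illustrations of the idea, not a substitute for a commercial classification product.

```python
# Sketch: a first pass at data classification -- flag which fields in a
# record look like GDPR 'personal data'. The regexes are deliberately
# simplified illustrations, not production-grade detectors.
import re

PATTERNS = {
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "phone": re.compile(r"^\+?[\d\s\-()]{7,15}$"),
    "iban":  re.compile(r"^[A-Z]{2}\d{2}[A-Z0-9]{11,30}$"),
}

def classify(record: dict) -> dict:
    """Map each field name to the personal-data categories its value matches."""
    tags = {}
    for field, value in record.items():
        hits = [name for name, rx in PATTERNS.items() if rx.match(str(value))]
        if hits:
            tags[field] = hits
    return tags

print(classify({"contact": "alice@example.org",
                "note": "renewal due",
                "mobile": "+44 20 7946 0958"}))
```

Once fields carry tags like these, downstream controls (encryption, retention, erasure on request) can be applied by category rather than rediscovered for every dataset.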
With the introduction of classroom management tools, we know classroom technology has made life easier for teachers. However, let’s not forget how much easier it’s made life for students.
Here are three ways:
1) You no longer need to travel to school to contact your peers. Students and teachers are able to communicate through online messaging and group forums so that they can get their questions answered from home, and even arrange study groups.
2) Technology allows students to apply learned information to real life situations, and deters them from mindlessly memorizing. Technology helps students grasp information by providing pictures, narratives, and texts that allow students to envision the concept. As well, games and video turn learning into an active, rather than passive activity. The result of these new technology based teaching methods is that students not only retain information better, but they also see how it applies to daily life.
3) Many universities have provided students with online databases. These databases allow students to research information online quickly, without having to check books out of the library. That leaves more time to study or write papers.
Technology improves the efficiency and productivity of both students and teachers. It removes many obstacles that existed before the Internet, and it will no doubt continue to do so! | <urn:uuid:dab98854-8244-4560-b286-fae60282ed09> | CC-MAIN-2022-40 | https://www.faronics.com/news/blog/3-ways-classroom-technology-makes-life-easier | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00301.warc.gz | en | 0.949081 | 258 | 3.390625 | 3 |
Student Data Protection Legislation in the State of Wyoming
Wyoming’s HB 08 is a student data protection law that was passed in 2017. As many states around the country have drafted and implemented new legislation in the past few years to protect the personal information of students attending schools within those states, Wyoming’s HB 08 was enacted for the purpose of amending and updating the requirements that educators, school administrators, and other related personnel must abide by when collecting, using, or disclosing personal data relating to students. More specifically, the law prohibits educators within the state from using personal information obtained from students for any purpose other than the fulfillment of their educational obligations.
What are the duties of educators under the law?
The duties that school districts and educators have under Wyoming’s HB 08 in regard to safeguarding the personal information of the various students they serve on a daily basis include:
- School districts are responsible for developing and implementing security measures and protocols that will protect the information of students from unauthorized access.
- In conjunction with the state’s Department of Enterprise Technology Services, school districts are responsible for establishing criteria that can be used to regulate the collection, maintenance, storage, and dissemination of personal information.
- The state superintendent, the Department of Enterprise Technology Services, and school districts within Wyoming are responsible for developing and implementing data privacy and breach security plans that can be used to protect the personal information of students.
- School districts are prohibited from selling or transferring personal data obtained from students to private businesses or organizations.
Authorization and authentication mechanisms
In addition to the requirements stated above, school districts that serve children enrolled in K-12 institutions around the state are also responsible for creating and maintaining authorization and authentication mechanisms that can be used to verify student data. These mechanisms include the following:
- “Administrative, physical, and logical security safeguards, including employee training and data.”
- “Privacy and security compliance standards.”
- “Processes for identification of and response to data security incidents, including breach notification and mitigation procedures.”
- “Standards for the retention and verified destruction of student data.”
Additionally, the data elements concerning K-12 students within the state of Wyoming that are protected under the law, in conjunction with the Family Educational Rights and Privacy Act (FERPA), include but are not limited to:
- Date and place of birth.
- Email addresses.
- Physical addresses.
- Mother’s maiden name.
- Telephone numbers.
- Social Security numbers.
- First and last names.
- The names of a student’s family members.
- Biometric records.
- Transcript data.
- Participation information.
- Credits attempted and earned.
- Financial account numbers.
- Socioeconomic information.
Working to comply with the law
Because the obligations that school districts have under Wyoming’s HB 08 are numerous, safeguarding students’ personal data from misuse while also using that information to help students achieve their educational goals can prove a challenging task. One way educators within Wyoming can tackle these issues is to invest in an automatic redaction software program. Such programs allow users to render specific data elements, such as Social Security numbers, inaccessible, so that they cannot be used for nefarious purposes. Moreover, the automatic functionality of these programs saves educators valuable time and resources when working to protect student data.
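The core idea behind pattern-based redaction can be sketched in a few lines. The two regular expressions and placeholder labels below are illustrative assumptions, not features of any particular product; real redaction tools use far more robust detection:

```python
import re

# Illustrative patterns only: a hyphenated SSN and a US-style phone number.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")      # e.g. 123-45-6789
PHONE = re.compile(r"\b\d{3}-\d{3}-\d{4}\b")    # e.g. 307-555-0142

def redact(text: str) -> str:
    """Replace sensitive data elements with fixed placeholders."""
    text = SSN.sub("[REDACTED-SSN]", text)
    text = PHONE.sub("[REDACTED-PHONE]", text)
    return text

record = "Student SSN: 123-45-6789, home phone: 307-555-0142"
print(redact(record))
# → Student SSN: [REDACTED-SSN], home phone: [REDACTED-PHONE]
```

Once redacted, the record can be shared with third parties without exposing the underlying data elements.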
Because students within the U.S. must disclose personal information concerning not only themselves but also their family members as they advance their academic careers, legislation such as Wyoming’s HB 08 serves to regulate and govern this massive flow of information. In the absence of a comprehensive federal data privacy law akin to the EU’s General Data Protection Regulation (GDPR), laws such as Wyoming’s HB 08 play a significant role in protecting the personal privacy of citizens across the many jurisdictions that comprise the U.S. As such, the provisions of the law give parents and guardians within Wyoming the peace of mind that their children’s personal data is legally protected, even when they are away from school.
What is Black Box Fuzzing?
Black box fuzzing and dynamic application security testing (DAST) share many features, but there are some differentiators. Black box fuzzers are a type of DAST and an important part of the cybersecurity testing continuum. Along with static application security testing (SAST) at the beginning of development and dynamic application security testing in the middle, black box fuzzing fits in at the end to ensure there are no code weaknesses before the application’s deployment.
Fuzzing is a code testing technique that uses the automated injection of malformed or partial code data into an application to find implementation bugs. What sets apart black box fuzzers? For one, they don’t have access to the original program’s source code, so the automatic code injections have to be done from outside the application, the same way a malicious actor would attempt to break in.
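A minimal black-box fuzz loop can be sketched as follows. The tiny target program is a stand-in written only for this demonstration; the fuzzer itself never inspects the target’s source, it only feeds it bytes and watches the exit status, exactly as it would with a closed binary:

```python
import random
import subprocess
import sys
import tempfile

# Stand-in target written only for this demo: it crashes (non-zero exit)
# whenever its input contains a NUL byte. A real black-box fuzzer would be
# pointed at an existing binary instead.
TARGET_SOURCE = """
import sys
data = sys.stdin.buffer.read()
if b"\\x00" in data:
    raise RuntimeError("simulated parser bug")
"""

def fuzz(target_path, rounds=50, seed=1):
    """Feed random byte strings to the target; collect inputs that crash it."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(rounds):
        payload = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 32)))
        proc = subprocess.run([sys.executable, target_path],
                              input=payload, capture_output=True)
        if proc.returncode != 0:        # non-zero exit means a crash
            crashes.append(payload)
    return crashes

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(TARGET_SOURCE)
crashes = fuzz(f.name)
print(f"found {len(crashes)} crashing inputs out of 50")
```

Each crashing input is saved so a developer can later reproduce the failure; production fuzzers add coverage feedback, input mutation strategies, and crash triage on top of this basic loop.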
Who Needs Black Box Fuzzing?
Since black box fuzzing emulates how a cybercriminal will bombard your application or program to force a crash and find weaknesses, you could argue that any software application will benefit from black box fuzzing.
There are many industry use cases for black box fuzzing. Critical infrastructure, like energy, water, transportation, food distribution, and communication, as well as healthcare, automotive, and more, are all attack targets with devastating consequences should they be hijacked. The aviation and automotive manufacturing industries are under strict compliance requirements, especially since more vehicles ship with internet-connected applications, making it essential to have a black box fuzzer to prevent application takeover on those vehicles.
Wireless, internet-connected medical devices must be protected as well. Connected healthcare devices, especially those that use Bluetooth, need black box fuzzing to help prevent breaches and takeovers.
The Internet of Things (IoT) or any device that connects to the internet — whether that be a home thermostat, home or office networks, or any personal or professional use item with internet capabilities — needs to be tested to make sure it can’t be co-opted. If an industry produces or uses internet-connected devices, black box fuzzing is a necessity. Security teams must be empowered to use tests that mimic a cyber attacker’s methods so they can ensure the strength of their software security.
How is Black Box Fuzzing Related to Dynamic Application Security Testing (DAST)?
Dynamic application security testing scans applications as they’re operating to find exploitable, existing vulnerabilities. DAST monitors this running code and how the application and client interact in order to find these vulnerabilities.
Black box fuzzing isn’t used to find specific known vulnerabilities; it’s used to identify conditions that create exceptions within the code and crash the application or system being targeted. In other words, it is used to find unknown and undiscovered vulnerabilities. This goes beyond the monitoring and reporting aspects of DAST by actively trying to break into the product and exploit unknown triggers within it.
When to Use a Black Box Fuzzer?
Black box fuzzing is most valuable late in the development life cycle: the most important time to use a black box fuzzer is after the product is developed but before it is deployed. This step ensures that the product is secure for customers to use, and if any security weaknesses are detected, there’s still time to return to the development phase and remediate them before the product is released. This step can be repeated until the product meets security compliance standards. After the product is released, black box fuzzing can still be utilized to continually check for any additional security issues.
What Makes a Black Box Fuzzer Special?
Several industries already require DAST to achieve compliance and other verticals will soon follow. Using black box fuzzing DAST for IoT, Automotive, Medical, Aviation, and Infrastructure scanning helps your organization adhere to tightly regulated compliance standards. Fuzzers that generate in-depth reporting of repeatable findings can create the information required by auditors to show compliance and meet regulatory standards.
Comprehensive QA Before Release
If a company releases a product that is easily exploited, it probably won’t stay in business very long. The damage to customer trust can be insurmountable and the cost to fix a vulnerability in an application that is already deployed is extensive. It is essential for your code to be properly secured before it goes out the door.
Efficiently Check Numerous Protocols
With a myriad of uses for black box fuzzing from assessing web applications to testing custom devices, a fuzzer needs to be flexible enough to communicate across numerous protocols. Having the right black box fuzzer that can assess the needs of your specific protocol or use prebuilt protocol testing modules will simplify the code changes needed during the Software Development Life Cycle (SDLC) testing phase. It will also systematically validate the application’s secure development.
Fast Automated Testing
Time constraints can cause traditional security testing to be rushed, making efficient testing tools necessary. When testing is too time-consuming, it will likely be cut short, and incomplete testing leaves threats behind, ready to be exploited. Automated testing shortens testing time without requiring manual intervention. You can automate scans during development and continue monitoring after deployment.
What is a Protocol Fuzzer and How Does it Relate to Black Box Fuzzing?
Protocol fuzz testing targets low-level network application protocols and file formats. The fuzzer mutates valid protocol communication to try to find bugs in the implementation. For example, if a field has a character limit, a protocol fuzzer will input too many or too few characters to see how the application reacts.
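The character-limit case can be sketched as follows. The LOGIN command framing and the 16-character limit are made-up stand-ins for whatever the real protocol under test specifies:

```python
# Made-up protocol details for illustration: a LOGIN command whose username
# field is assumed to allow at most 16 characters.
MAX_LEN = 16

def boundary_cases(limit):
    """Field values at, just under, just over, and far past the limit."""
    return [
        "",                    # empty (too few characters)
        "A" * (limit - 1),     # just under the limit
        "A" * limit,           # exactly at the limit
        "A" * (limit + 1),     # one character too many
        "A" * (limit * 100),   # wildly oversized
    ]

def build_messages(limit):
    """Wrap each mutated field in the protocol's command framing."""
    return [f"LOGIN {value}\r\n".encode() for value in boundary_cases(limit)]

for msg in build_messages(MAX_LEN):
    print(len(msg), msg[:24])
```

Each generated message would then be sent to the running service, with the fuzzer watching for crashes, hangs, or protocol violations in the response.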
Black box fuzzers automatically inject millions of different, random coding types into applications, mimicking the overwhelming attacks a cybercriminal would use to try to break the application. These attacks go beyond protocol attempts and use more of a code bombardment strategy.
What Do I Need to Know to Evaluate Fuzzing Tools?
First, you need to understand whether the black box fuzzer will work with your current protocol testing modules and whether it can be customized to your proprietary ones. This matters because if the fuzzing tool can’t work with your product, it can’t scan it for weaknesses.
Black box fuzzers can be cloud-based for ease of use or on-site for your staff to monitor. Cloud-based tools have the advantage that testing can be done from anywhere rather than from a dedicated testing center.
Another important feature is self-learning and intelligence. Black box fuzz testing shouldn’t be confined to a fixed regimen; it needs to adapt as an attacker would and continually vary its attack combinations, especially when the application is updated.
Also, scalability and customization are crucial, as companies, their products, and their infrastructure are constantly changing. A black box fuzzer should have the capability to adjust as a company and its products evolve and grow.
A newly discovered attack vector can expose developers: nearly a third of the packages in PyPI, the Python Package Index, can trigger automatic code execution when they are merely downloaded.
“A concerning feature in pip/PyPI allows code to be executed automatically when developers simply download a package.”
wrote Yehuda Gelb, a researcher at Checkmarx, in a white paper published this week.
One of the ways to install Python packages is by running the “pip install” command, which calls the “setup.py” script included with the module. This script specifies the metadata associated with a package, including its dependencies.
While attackers have resorted to embedding malicious code in the “setup.py” script of packages meant to be installed, Checkmarx found that they could accomplish the same goal through the “pip download” command. This command performs the same download as “pip install” but does not install the package or its dependencies; it only downloads them. Crucially, the download command still executes the “setup.py” script, thus running any malicious code it contains.
Note that the problem only occurs when the package being installed or downloaded is distributed as a source archive (tar.gz) rather than a built wheel (.whl); wheel files do not execute “setup.py” at all.
This feature is alarming because most of the malicious packages found use this code execution technique at install time, thus achieving higher infection rates. Although pip prefers .whl files over .tar.gz when both are available, an attacker could exploit this behavior by intentionally publishing Python packages without a .whl, leading to malicious code execution.
Information security specialist, currently working as risk infrastructure specialist & investigator.
15 years of experience in risk and control process, security audit support, business continuity design and support, workgroup management and information security standards. | <urn:uuid:b81f16c9-7178-487c-a7fb-94b8d7cefa7f> | CC-MAIN-2022-40 | https://www.exploitone.com/vulnerabilities/code-execution-vulnerability-upon-downloading-package-from-python-package-manager/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00301.warc.gz | en | 0.915779 | 382 | 2.515625 | 3 |