An Overview of Conversational AI
What is Conversational AI?
Hundreds of millions of people use Facebook Messenger, Kik, WhatsApp, and other messaging platforms to communicate with their friends and family every day. Millions more are experimenting with speech-based assistants like Amazon Alexa and Google Home. As a result, messaging and speech-based platforms are quickly displacing traditional web and mobile apps to become the new medium for interactive conversations. This overview of Conversational AI will detail how this advanced technology works and how it is a driver for digital transformation for businesses.
As users worldwide become more dependent on and accustomed to these platforms, it’s no surprise that enterprises are rapidly adopting Conversational AI solutions to keep up with user interests and demands. While the ultimate goal of deploying these solutions is to revolutionize service experiences for customers and employees, it is important to know what Conversational AI and chatbots are, how they help brands differentiate themselves within the market, and how best to leverage them.
How It Works
At its core, Conversational AI refers to technologies that leverage supervised and unsupervised natural language processing (NLP), understanding (NLU), and generation (NLG) to interpret user input and interact with users in a natural, human-like manner. While these terms are often used interchangeably, they differ in function.
Natural language processing converts unstructured data into a structured format that enables Conversational AI solutions to understand and derive meaning from natural human language in any given context. It does this by identifying entities, word patterns, and ambiguities within a text, enabling Conversational AI solutions to process language accurately at high speed. By making sense of human language, NLP helps Conversational AI solutions perform various tasks, such as assigning support tickets to the right categories or extracting specific information from long documents.
NLP also helps Conversational AI solutions grow more intelligent over time. Instead of relying on manual updates from support agents, Conversational AI solutions autonomously learn in real time from every interaction, enabling them to identify trends and patterns across support requests. By recognizing these patterns, Conversational AI solutions can differentiate or make associations between pieces of text, allowing them to expedite issue resolution.
While NLP enables Conversational AI solutions to process human language, natural language understanding (NLU), a subset of NLP, helps them understand the intent behind a user’s query regardless of how the question is phrased. It does this by auto-classifying topics, analyzing sentiment, and extracting key terms within a text to define the function and meaning of words. By recognizing user intent, Conversational AI solutions can analyze the customer’s needs and provide accurate responses to their support requests.
Conversational AI solutions can respond to these questions because of natural language generation, another integral subset of NLP. Leveraging NLG, Conversational AI solutions automatically produce appropriate textual responses to users based on structured data accumulated over time.
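To make the roles of NLP, NLU, and NLG more concrete, here is a deliberately simplified sketch of an intent-and-sentiment step. It is illustrative only: the intents, keywords, and labels are invented for the example, and production platforms rely on trained statistical models rather than keyword rules.

```python
from dataclasses import dataclass, field

@dataclass
class NLUResult:
    intent: str                                     # what the user wants, e.g. "order_status"
    entities: dict = field(default_factory=dict)    # key terms pulled from the text
    sentiment: str = "neutral"                      # rough tone of the message

INTENT_KEYWORDS = {
    "reset_password": ["password", "locked out", "log in"],
    "order_status": ["order", "shipping", "tracking"],
}

def understand(utterance: str) -> NLUResult:
    text = utterance.lower()
    # NLU: pick the first intent whose keywords appear in the text
    intent = next(
        (name for name, words in INTENT_KEYWORDS.items()
         if any(w in text for w in words)),
        "unknown",
    )
    # Entity extraction: treat any standalone number as an order ID
    entities = {"order_id": token for token in text.split() if token.isdigit()}
    # Sentiment: crude keyword check standing in for a real sentiment model
    sentiment = "negative" if any(w in text for w in ("frustrated", "angry", "still")) else "neutral"
    return NLUResult(intent, entities, sentiment)

# An NLG step would then turn this structured result back into a reply.
result = understand("I'm frustrated - my order 81234 still hasn't shipped")
print(result.intent, result.entities, result.sentiment)
# order_status {'order_id': '81234'} negative
```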
Why is Conversational AI Important?
Looking further into this overview of Conversational AI, it’s worth asking why the technology matters. The advanced capabilities of NLP, NLU, and NLG all sound fascinating, but how will they benefit business processes? What makes Conversational AI unique compared to traditional tools and solutions?
More often than not, service agents are bogged down by high ticket volumes, leaving them burnt out and unmotivated. Ticket resolution times are then delayed, causing employees and customers to grow impatient and disgruntled. Leveraging NLP, NLU, and NLG, Conversational AI solutions automate repetitive tasks and workflows that service agents traditionally perform. By relieving agents of monotonous tasks, these solutions not only let agents become more productive and attentive to higher priorities but also spare users from waiting hours or even days to get their issues resolved. By eliminating long wait times, Conversational AI solutions ensure customers and employees receive the answers they need in a matter of seconds, leaving them happy and eager for future interactions.
Conversational AI solutions can also detect user emotion through sentiment analysis. With traditional solutions such as rule-based chatbots, users were stuck interacting with cold machines that failed to recognize urgency or emotion. In contrast, Conversational AI solutions can identify the customer’s tone to modify their behaviors accordingly, making their responses more natural, personalized, and human-like. For example, when a customer is frustrated or upset, Conversational AI solutions can recognize this and work to improve the customer’s mood. They can achieve this by becoming more sympathetic towards the customer or offering additional suggestions to resolve their issues. By facilitating back-and-forth conversations that mirror human interaction, Conversational AI solutions ensure every user receives a positive and engaging service experience.
Understanding the intent behind user queries also makes Conversational AI solutions beneficial to businesses. By recognizing the purpose behind a user’s question, Conversational AI solutions can provide customers and employees with fast and accurate resolutions. Ticket volumes will then decrease, enabling employees to have more time to do their work while empowering customers to solve issues on their own.
The implementation of Conversational AI solutions is also quick, easy, and hassle-free. While building a custom Conversational AI platform requires a significant amount of time and prep work, purchased Conversational AI solutions integrate immediately with existing knowledge bases and systems, eliminating the need for additional training or data cleansing. By deploying with minimal effort, Conversational AI solutions demonstrate value on day one for both customers and employees.
The Future of Conversational AI at a Glance
Today, instant availability and accessibility matter more than ever. Digital businesses are no exception to this. As more and more users now expect, prefer, and demand conversational self-service experiences, it is crucial for businesses to leverage Conversational AI to survive and thrive within the market.
Following are expert predictions from Gartner and IDC about how AI will transform digital businesses over the next five years:
“By 2023, 30% of customer service organizations will deliver proactive customer services by using AI-enabled process orchestration and continuous intelligence” (Gartner).
“By 2024, AI will become the new user interface by redefining user experiences where over 50% of user touches will be augmented by computer vision, speech, natural language, and AR/VR” (IDC).
“By 2025, customer service organizations that embed AI in their multichannel customer engagement platform will elevate operational efficiency by 25%” (Gartner).
Aisera offers the most feature-comprehensive and technologically advanced self-service automation solution on the market, blending AI Virtual Assistant technology, Conversational AI (cognitive search), and Conversational Automation into one SaaS cloud offering for IT Service Desks and Customer Service. Aisera’s proprietary unsupervised NLP/NLU technology, User Behavioral Intelligence, and Sentiment Analytics are protected by several patent-pending applications.
For centuries, electricity was thought to be the domain of sorcerers – magicians who left audiences puzzled about where it came from and how it was generated. And although Benjamin Franklin proved the connection between electricity and lightning in 1752, he and his contemporaries had difficulty envisioning a practical use for the phenomenon. In fact, his most prized invention had more to do with avoiding electricity – the lightning rod. All new innovations go through a similar evolution: dismissal, avoidance, fear, and perhaps finally acceptance.
Almost two hundred years after Franklin’s lightning experiment, man was routinely harnessing electricity, even though we still lacked a deep understanding of its origins. The Lineman’s Handbook of 1928 begins with the line: “What is electricity? – No one knows.” But according to this field guide for early electrical linemen, understanding the make-up of electricity wasn’t important. The more significant aspect was knowing how electricity could be generated and safely used for light, heat and power.
Today, too many people view artificial intelligence (AI) as another magical technology that’s being put to work with little understanding of how it works. They view AI as something special, reserved for experts who have mastered it and dazzled us with it. In this environment, AI has taken on an air of mysticism, full of promises of grandeur and seemingly out of the reach of mere mortals.
The truth, of course, is there is no magic to AI. The term Artificial Intelligence was first coined in 1956 and since then the technology has progressed, disappointed, and re-emerged. As it was with electricity, the path to AI breakthroughs will come with mass experimentation. While many of those experiments will fail, the successful ones will have substantial impact.
That’s where we find ourselves today. As others, like Andrew Ng, have suggested, AI is the new electricity. In addition to becoming ubiquitous and increasingly accessible, AI is enhancing and altering the way business is conducted around the world. It is enabling predictions with remarkable accuracy and automating business processes and decision-making. The impact is vast, ranging from better customer experiences to intelligent products and more efficient services. And in the end, the result will be economic impact for companies, countries, and society.
To be sure, organizations that drive mass experimentation in AI will win the next decade of market opportunity. To break down and help demystify AI, one needs to consider two key elements of the category: the componentry and the process. In other words, what’s behind it and how it can be adopted.
Much like electricity was driven by basic components such as resistors, capacitors, diodes, etc., AI is being driven by modern software componentry:
- A unified, modern data fabric. AI feeds on data, and therefore data must be prepared for AI. A data fabric acts as a logical representation of all data assets, on any cloud. It pre-organizes and labels data across the enterprise. Seamless access to all data is available through virtualization from the firewall to the edge.
- A development environment and engine. A place to build, train, and run AI models. This enables end-to-end deep learning, from input to output. Machine learning models help find patterns and structures in data that are inferred, rather than explicit. This is when it starts to feel like magic.
- Human features. A mechanism to bring models to life, by connecting models and applications to human features like voice, language, vision, and reasoning.
- AI management and exploitation. This enables you to insert AI into any application or business process, while understanding versions, how to improve impact, what has changed, bias, and variance. This is where your models live in production, and it enables lifecycle management of all AI. Lastly, it offers proof and explainability for decisions made by AI.
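As a rough, vendor-neutral illustration of what the "development environment and engine" component in the list above does, the few lines below build, train, and run a model. The data and the churn-prediction framing are invented purely for the example; any real platform wraps far more around this step (data access, deployment, monitoring, explainability).

```python
from sklearn.linear_model import LogisticRegression

# Toy training data: [monthly_logins, support_tickets] -> churned (1) or retained (0)
X = [[2, 5], [30, 0], [1, 7], [25, 1], [3, 4], [28, 0]]
y = [1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X, y)   # "build and train"
new_account = [[4, 6]]
print(model.predict(new_account))        # "run": score a new account for churn risk
print(model.predict_proba(new_account))  # the probability behind that decision
```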
With these components in hand, more organizations are unlocking the value of data. But to fully leverage AI, we must also understand how to adopt and implement the technology. For those planning the move, consider these fundamental steps first:
- Identify the Right Business Opportunities for AI. The potential areas for adoption are vast: customer service, employee/company productivity, manufacturing defects, supply chain spending, and many more. Anything that can be easily described can be programmed. Once it’s programmed, AI will make it better. The opportunities are endless.
- Prepare the Organization for AI. Organizations will require greater capacity and expertise in data science. Many of today’s repetitive and manual tasks will be automated, which will evolve the role of many employees. It’s rare that an entire role can be done by AI. But it’s also rare that none of the role could be enhanced by AI. All technology is useless without the talent to put it to use, so build a team of experts that will inspire and train others.
- Select Technology & Partners. While it’s unlikely that the CEO will personally select the technology, the implication here is more of a cultural one. An organization should adopt many technologies, comparing, contrasting, and learning through that process. An organization should also choose a handful of partners that have both the skills and technology to deliver AI.
- Accept Failures. If you try 100 AI projects, 50 will probably fail. But the 50 that work will more than compensate for the failures. The culture you create must be ready and willing to accept failures, learn from them, and move on to the next. Fail fast, as they say.
AI is becoming as fundamental as electricity, the internet, and mobile were when they entered the mainstream. Not having an AI strategy in 2019 will be like not having a mobile strategy in 2010, or an Internet strategy in 2000.
Let’s hope that when you look back at this moment in history, you can do so fondly, as someone who embraced data as the new resource and AI as the utility to harness it.
A version of this story first appeared on InformationWeek.
There’s been a lot of comment about the coming Internet of Things as the marketing hype gears up from the companies that would provide the relevant equipment. The predictions about its potential impact, made largely by academics, seem pretty accurate.
But perhaps we should look at what it’s likely to mean to you, especially when you combine the IoT with the vast resources of big data and companies that profit from knowing what you’re up to at any given time.
Let’s say, for example, that you routinely have a couple of bottles of beer when you get home from work. There’s the convenience of making sure your favorite brew is being tracked by your refrigerator and that it’s communicating with your beer purveyor to make sure your shopping list includes the beer for restocking so you don’t run out.
But by collecting the data from the IoT and by some creative use of big data, your health insurance company probably knows about your beer consumption, too. If you’re lucky, that may mean you start getting calls about counseling, but it might also mean you lose your health insurance.
There are lots of scenarios making the rounds, such as getting alerts from your car that you’re 15 minutes from home, triggering a change to the temperature setting on your home air conditioner, or receiving a warning when temperatures go up in one room, alerting you to anything from a fire to a window accidentally being left open during the summer. So far, most of the possibilities seem pretty benign.
But the impact could get much more pervasive. AT&T is already well along with its plans for a connected car that will communicate on its own with a variety of services. Not only will such a car connect with the Internet for everything from maintenance requirements to restaurant menus, but it’s entirely capable of letting those same services know where you’re going and when. It may also communicate enough information that it will make the information about why you’re driving somewhere available.
So far, it looks like a real opportunity to make your quality of life better. You won’t run out of beer, your car will get the maintenance it needs, and you will save money on air conditioning. But the question that has to go along with these benefits is, what are you giving up in return for that convenience? What details of your private life will become public, potentially ending up in the wrong hands with access to too much data?
You’ll notice that I haven’t mentioned the government or the National Security Agency in this discussion. The reason is that this is not about spying on your activity for real or imagined national security purposes. This is about increasing the visibility of your personal life voluntarily—by default—which means that privacy protections are no longer an issue. You are, after all, allowed to tell people, even indirectly, where you are and what you’re doing.
Let’s take my recent activity on Foursquare, the location-sharing service, where I announced that a friend and I were in line at a place in Chicago called Hot Doug’s. Just this mention would tell you where I am, of course, and it would also tell you that I’m a sausage aficionado and that I’m going to the world’s top emporium of encased meats. My health insurer would also know that I could be raising my cholesterol through my dietary choices. Someone might even notice that I grabbed a ride on Uber to get back to my hotel.
Right now, this sharing of what might have been considered private information in an earlier day is generally accepted. After all, I’m the one who is putting the info into Foursquare, and I’m the one who said it could go to Twitter and Facebook for my friends and colleagues to see. But suppose you didn’t make that decision consciously?
Suppose that instead of intentionally tweeting out your activities, your activities and plans were being shared on a global network that’s designed to make life easier for you? If you and your activities are being monitored by a network of sensors that communicate through the IoT, is there even a means by which you can control what is shared and what is not?
Right now, many of the sensors that are becoming part of the IoT already exist in some form. The Internet-connected coffee pot and the connected soda machine were some of the very earliest applications on the Internet. Since that time, this level of connection has grown larger. It reaches farther, and it has already sunk into the background.
My car, for example, already has the ability to connect to its manufacturer’s servers to schedule maintenance visits automatically and to warn me of impending problems that may otherwise go unnoticed. In this case, I had to give my car permission to do this. But when the car has its own connection, how long will such permission be required?
You’ll notice that I’m not decrying the loss of privacy here. There really are important things that could come out of the IoT. Perhaps health monitors could summon help in case of a heart attack, provided you consent to the monitoring. Or perhaps a beer monitor could help me lose weight by reminding me that more than two bottles of beer is too many.
But what seems to be overlooked here is the need for specific and informed consent. When the sensors and monitors are put in place, you should be able to know what they do and you should be able to turn them off—such as when the day comes that your team makes the playoffs and more than two beers are warranted.
But for the IoT to deliver its promise, it also needs to come with the ability to consent to share the data and control what and when it's shared. Then the IoT will really have the chance to be the boon it could be.
Downers Grove, Ill. (April 30, 2020) — Information technology (IT) professionals facing an increasing number of Distributed Denial of Service (DDoS) attacks can get help in their battle from CompTIA, the leading trade association for the global IT industry.
CompTIA's "DDoS Guide for IT Pros" is a free resource that provides comprehensive information on the nature and types of DDoS threats; explains how to identify organizational vulnerabilities and recognize the warning signs of a potential attack; identifies the tools, best practices and response procedures that can prevent attacks or minimize their damage; and more.
"DDoS attacks have grown in scope, size and sophistication," said Dr. James Stanger, chief technology evangelist at CompTIA. "An organization that is unprepared for an attack can face devastating consequences – hours of downtime and millions of dollars in lost business and productivity."
A DDoS attack typically occurs when a network, server or website is flooded with traffic by a malicious actor until the target cannot respond properly or simply crashes. Sometimes, even a few malformed packets can destabilize a system. This prevents legitimate users from accessing email, websites, online accounts, or other services.
A recent report estimated that there were more than 175,000 DDoS attacks in the United States in March. But the threat is not limited to the U.S. South Korea experienced nearly 74,000 incidents during the month; Brazil, more than 51,000; China, 45,000; and the United Kingdom, almost 44,000.
"No one is immune, but organizations can minimize their risk by investing in both technologies and personnel," Stanger noted. "Our guide identifies the steps that any business can take to strengthen their defense against an attack."
CompTIA's "DDoS Guide for IT Pros" includes information on attack types, warning signs, prevention tools and best practices, and response procedures.
"Just as technology advances, so do the cyber-threats we must deal with," Stanger said. "That's why it's essential for IT professionals to continue to educate themselves through ongoing training and professional certification."
The standards and practices taught in the industry can help IT pros and their employers respond to DDoS attacks. One way to stay current with the standards and best practices covered by IT certifications is to visit the CompTIA Career Pathway.
To download a free copy of CompTIA's "DDoS Guide for IT Pros" visit https://www.comptia.org/content/guides/what-is-ddos-protection-tools-stopping.
The Computing Technology Industry Association (CompTIA) is a leading voice and advocate for the $5.2 trillion global information technology ecosystem; and the estimated 75 million industry and tech professionals who design, implement, manage, and safeguard the technology that powers the world's economy. Through education, training, certifications, advocacy, philanthropy, and market research, CompTIA is the hub for advancing the tech industry and its workforce. Visit www.comptia.org to learn more.
Windows has built-in tools that will let you write zeros to a drive, securely erasing its contents. This ensures deleted files on the drive can’t be recovered. Whether you want to wipe an internal drive or an external USB drive, here’s how to do it.
It’s often possible to recover deleted files from a drive. Whether this is possible depends on a number of factors.
If the drive is a traditional magnetic drive with a spinning platter, deleted files are simply “marked” as deleted and only overwritten at some point in the future, making recovery of deleted data easy. This should not be the case on modern solid-state drives, which use TRIM by default, ensuring that the data behind deleted files is erased promptly. (This helps with speed.)
However, it’s not so simple as mechanical vs. solid-state storage: External storage devices like USB flash drives don’t support TRIM, which means that deleted files could be recovered from a USB flash drive.
To prevent this from happening, you can “wipe” a drive. This is actually a pretty simple process: Windows will write zeroes or other junk data to every sector of the drive, forcibly overwriting any data already there with junk data. This is a particularly important step to take when you’re selling or otherwise disposing of a computer, drive, or USB stick that had sensitive private data on it.
By the way, if a drive is encrypted, this provides a lot of additional protection. Assuming an attacker can’t get your encryption key, they wouldn’t be able to recover deleted files from a drive—they wouldn’t even be able to access files that aren’t yet deleted.
To write zeros over the contents of any drive, all you have to do is perform a full format of the drive. Before you do this, bear in mind that this will completely erase all files on the drive. Also, you can’t perform a full format of your Windows system drive while you’re running Windows from it.
This method is ideal for internal drives that don’t have your operating system installed, USB flash drives, other external storage devices, and any entire partitions you want to erase.
To get started, open File Explorer and locate the drive you want to wipe. Right-click it and select “Format.”
Uncheck “Quick Format” under Format Options. This will ensure Windows 10 or Windows 11 performs a full format instead. According to Microsoft’s documentation, ever since Windows Vista, Windows always writes zeros to the whole disk when performing a full format.
You can change any other formatting options you like here; just ensure “Quick Format” isn’t checked. (If you’re not sure what to choose, just leave the options here on their default settings.)
When you’re ready, click “Start” to format the drive. The process may take some time depending on the size and speed of the disk.
Warning: The format process will erase everything on the drive. Be sure you have a backup of any important files before continuing.
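If you prefer the command line, you can run a full format from an elevated Command Prompt instead; omitting the /Q (quick) switch is what forces the full, zero-writing pass. For example, to reformat and wipe the D: drive as NTFS:

format D: /FS:NTFS

You'll be asked for the drive's current volume label and for a final confirmation before anything is erased.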
If you’ve deleted some files from a mechanical hard drive or an external storage device, you might want to wipe only the free space, overwriting it with zeros. This will ensure those deleted files can’t easily be recovered without wiping the entire drive.
Windows 10 and Windows 11 have a way to do this, but you’ll have to visit the command line. The cipher command built into Windows has an option that will wipe a drive’s free space, overwriting it with data. The command actually runs three passes: first writing zeros, then another pass of different data, then random data. (However, just one pass should be enough.)
To get started, launch a command-line environment like the Command Prompt or Windows Terminal with administrator permissions. On either Windows 10 or Windows 11, you can right-click the Start button or press Windows+X, then click “Windows PowerShell (Admin)”, “Command Prompt (Admin)”, or “Windows Terminal (Admin)”. Choose whichever appears in the menu—any will work.
Run the following command, replacing X with the drive letter of the drive you want to wipe free space for:
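cipher /w:X: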
For example, if you want to wipe free space on your D: drive, you’d run the following:
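cipher /w:D: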
The command will show its progress at the command line. Wait for it to finish—depending on the speed of your drive and the amount of free space to be overwritten, it may take some time.
If you want to wipe your entire Windows operating system drive, there’s an easy way to do it. This option is built into the Reset This PC feature on Windows 10 and Windows 11, although it isn’t enabled by default.
While Windows is restoring itself to factory default settings—in other words, reinstalling Windows—you can have it wipe your system drive. You should use this option to protect your private data when you’re selling your PC or giving it to someone else.
To do this on Windows 10, head to Settings > Update & Security > Recovery. Click “Get Started” under Reset This PC. (You can press Windows+i to quickly open the Settings app.)
On Windows 11, head to Settings > System > Recovery. Click the “Reset PC” button under Recovery Options.
Select “Remove Everything” to have Windows remove all your files during the Reset process.
Select “Local Reinstall” or “Cloud Download,” either will work for this process. If you’re not sure which to pick, we recommend selecting “Local Reinstall” to avoid the big download.
“Cloud Download” is useful if your local Windows operating system files are corrupted and the Reset This PC process won’t work otherwise. Also, believe it or not, Cloud Download can be faster than Local Reinstall as Windows just has to download installation files rather than reassembling them from the files on your computer’s hard drive—it depends on the speed of your internet connection.
Under Additional Settings, select “Change Settings.”
Click the switch under “Clean data?” to set it to “Yes.” With this option enabled, Windows will “clean the drive” and make it much harder (theoretically, practically impossible) to recover your files.
Windows warns you that this process may take hours—as always, it depends on the speed and size of the drive in your computer.
You can now click “Confirm” and continue through the process to reset your Windows 10 or Windows 11 PC and wipe your drive during this process.
Warning: This process will erase all the files, applications, and settings on your drive, leaving you with a fresh Windows installation without any of your files. Be sure to back up everything important first.
By the way, Windows refers to this process as “cleaning the drive” instead of wiping it. This is different from the traditional meaning of “cleaning” a drive in Windows, which actually refers to removing all of its partition information rather than wiping it.
Why edge computing is essential for the IIoT
Edge computing seems to be what everyone in IoT is talking about these days.
Well, currently in most big data solutions, information is collected from a sensor, sent to the cloud, processed in the cloud and then sent back to the device the sensor is on. For example, let’s say I own 10 parking lots across the Midwest. I’ve installed sensors on each of the overhead lights that illuminate the lots. The sensors will tell the lights when to turn on or turn off based on how much sunlight is available. So, on a rainy day with little sunlight, the sensors will make sure the lights are on in the parking lot until the rain passes and the sunlight illuminates the lot. However, the information from the sensors is not being processed on site, but rather in the cloud. This can lead to a delay in when the lights turn on or off. This is where the advantages of edge computing come into play.
What is IIoT edge computing?
Edge computing is done on site, at the “edge” of the network: a small processing unit located with the device runs the mechanism that turns the lights on and off. This allows actions to be taken in real time rather than after a round trip to the cloud.
What is an example of edge computing in IIoT?
Edge computing is also very powerful in situations where it’s difficult to connect to the cloud. In agriculture, for example, most farms are in rural areas where Wi-Fi may not be readily available. So even though tractors or sensors in the ground are collecting information, decisions such as when to turn on a water system or when an area needs to be fertilized may be delayed until data can be uploaded to the cloud to be processed. If a small processor were located on the farm and connected to the sensors via Bluetooth, then these insights could be generated much faster. By having processing on the edge in this situation, data turns instantaneously into action.
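As a rough sketch of what that looks like in practice, the loop below makes the irrigation decision locally and only queues readings for upload when connectivity allows. The sensor interface, threshold, and readings are hypothetical and would depend entirely on the hardware actually deployed.

```python
import time

MOISTURE_THRESHOLD = 30     # percent; illustrative value only
pending_uploads = []

def read_soil_moisture() -> float:
    """Stand-in for a real Bluetooth or serial sensor read."""
    return 27.5

def set_irrigation(on: bool) -> None:
    """Stand-in for switching a relay or irrigation controller."""
    print("irrigation", "ON" if on else "OFF")

while True:
    moisture = read_soil_moisture()
    set_irrigation(moisture < MOISTURE_THRESHOLD)   # decide locally, no cloud round-trip
    pending_uploads.append({"ts": time.time(), "moisture": moisture})
    if len(pending_uploads) >= 100:
        pending_uploads.clear()   # placeholder for a batched upload once a connection is available
    time.sleep(60)
```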
Why is edge computing important for IIoT?
Edge computing also matters in situations where enormous amounts of data are generated every second. For example, on planes, sensors are connected to almost every part – they’re on the wings, the engine, the landing gear, etc. According to Forbes, between 5 and 8 terabytes of data are collected on an average flight. Could you imagine the cost of uploading this information to the cloud in real time while the flight was in the air? In situations like this, it’s integral to have a processor located on the device (the plane, in this case) to make in-the-moment decisions without having to worry about the additional cost of sending all of this data to the cloud to be processed.
How is edge computing used in IIoT applications?
Edge computing has become more common place with the adoption of small inexpensive processors, such as the Raspberry Pi. Raspberry Pis and similar devices allow companies to place processors on devices and make decisions in the moment. In the future, edge computing devices like these will power the majority of IoT solutions. However, we shouldn’t expect the cloud to be abandoned. Cloud and fog computing will still play a major role when it comes to storing data and making prescriptive decisions based on that data. However, for most decisions that need to happen in the moment we’ll need to go over the cloud, through the fog and onto the edge!
The article was written by Rebecca Camus from Entrigna and originally it was posted here.
Malware, an abbreviation of "malicious software," is a type of computer program whose purpose is to infect a user's PC. Malware is usually installed by accident, often by inadvertently downloading software (such as browser toolbars, download assistants, or bogus antivirus software) that comes bundled with an otherwise innocent-looking program.
It's vital that all users know how to recognize and protect themselves from all of its forms. Some malware can get on your computer by taking advantage of security vulnerabilities in your operating system and software programs.
- Virus: Computer viruses attach themselves to clean files and then infect other clean files. They can spread quickly, and often damage a system’s core functionality by deleting or corrupting files.
- Trojans: This kind of malware disguises itself as legitimate software, or is included in legitimate software that has been altered. Trojans often stay under the radar, and exist to create backdoors in your computer security to allow for further infection.
- Spyware: Spyware is malware built to spy on you. It hides in the background and takes notes on what you do online, and uses this to gather sensitive information about you.
- Worms: Worms take over entire networks, whether local or across the internet, by using network interfaces. The worm uses the network to travel from device to device, infecting as it goes.
- Ransomware: This kind of malware can lock down your computer (or, in the case of so-called scareware, only appear to) and threatens to destroy your data unless a ransom is paid.
- Adware: While not always malicious, aggressive advertising software can diminish PC security in order to serve ads. Also, even if not directly dangerous, pop-ups destroy the quality of a user experience.
- Botnets: These are networks of computers already infected, made to work together by a remote attacker, often without users being aware of the hijacking.
New types of malware are constantly updated to include new evasion and backdoor techniques designed to fool users and security services as well.
Some of these evasion techniques rely on simple tactics, such as using web proxies to hide malicious traffic or source IP addresses. More sophisticated evasion techniques include polymorphic malware, which constantly changes its code to side-step detection from most anti-malware tools. Anti-sandboxing means the malware can detect when it's being analyzed, allowing it to hold off on executing until out of sight; and “fileless malware” resides only in the system's RAM in order to avoid being discovered.
Malware commonly reaches systems in a few ways:
- Software downloads that at first seem to be something safe like a simple image, video, or audio file, but in reality are harmful executable files that install malicious programs. So-called “drive-by downloads” automatically download malicious programs to users' systems without their approval or knowledge.
- Local storage devices, such as USB drives or other external storage, are plugged into a computer and spread infection.
- Phishing attacks, where emails disguised as legitimate messages contain malicious links or attachments.
While it is possible to remove malware from a system, and return to an uninfected state, it will always be more beneficial to prevent contamination in the first place. The most effective methods for avoiding infection are:
- Install antivirus / anti-malware programs: These programs should be configured to automatically scan downloads and active files for signs of malicious activity. Many programs can also monitor suspicious websites or harmful email messages.
- Adjust behavior: Start by avoiding untrustworthy emails and attachments from suspicious accounts. Malware sometimes spreads by sending copies of itself to everyone found in a contact list.
- Regularly update software: Not only anti-virus software, but also key programs on your computer, especially your web browser and local email client. This way, your computer is more likely to recognize newer threats.
- Practice safe browsing: Consider the websites you visit, and avoid clicking on links or downloading files that seem suspicious or disingenuous.
- Use strong passwords and a password manager: An effective password is complex, non personal, changed often, and unique to each website. This will greatly increase the security of your various web accounts.
- Check the strength of your secure connection: Look for the padlock icon to the left of the URL and check that the URL reads ‘https’ instead of ‘http’. If it’s there, then that means the information passed is secure.
- Set up a reliable firewall: This is extremely important. A firewall protects computers from a huge number of exploits and vulnerabilities. On its own, a software-based firewall isn't enough to protect systems from the constant automated attacks prevalent across all Internet-connected systems. Because of this, it is important that all high value PCs connected to the Internet should be protected by a hardware-based firewall.
Malware today is almost entirely designed by criminals as a means of personal gain. Cyber criminals use a plethora of ever evolving tactics to evade detection while acquiring stolen digital property.
The main risk that cyber criminals pose to PC users and companies is the theft of banking and credit card credentials and passwords, sensitive information related to business practices, or the personal information of users stored by a company. The people who acquire this information illegally often use it to empty bank accounts or max out credit cards. Often they’ll even sell the information to other criminals, such as underground criminal organizations looking for money or false personal information. Even some governments use these techniques in order to gather intelligence.
How Barracuda Can Help
The Malware Protection built into Barracuda CloudGen Firewalls shields the internal network from malicious content by scanning web content (HTTP and HTTPS), email (SMTP, POP3), and file transfers (FTP) via two fully integrated antivirus engines.
The Barracuda Web Security Gateway is a comprehensive solution for web security and management. It unites award-winning spyware, malware, and virus protection with a powerful policy and reporting engine.
Barracuda Email Protection is a cloud-based security solution designed to protect against spam, phishing, malware, ransomware, and other targeted email threats. It combines heuristic, behavioral, and sandboxing technologies to detect advanced, zero-day attacks.
Barracuda Advanced Threat Protection is a cloud-hosted service available as an add-on subscription for multiple Barracuda security products and services. It uses signature matching, heuristic and behavioral analysis, and static code analysis to pre-filter traffic and identify the vast majority of threats. Finally, it feeds remaining suspicious files to a CPU-emulation sandbox to definitively identify zero-day threats and block them from reaching your network.
Do you have more questions about malware? Contact us now.
The number of smart digital innovations being introduced in cities has increased over the last ten years. Many of the innovations have been introduced as one-off projects as technology develops, awareness grows, and city needs and budgets permit. These IoT innovations are varied, including intelligent traffic control, smart parking, smart lighting, remote infrastructure monitoring, waste management, smart sensors, and even shark monitoring. The opportunities grow by the day.
Each of these initiatives is a huge step in the right direction and can help improve city efficiency and sustainability, but in isolation, the benefits are often diminished. Many IoT initiatives involve proprietary systems that rely on different frameworks and use different data architectures or sensor hardware that speak different “languages.” Managing one of these projects in isolation is fine, but as the number grows, the management of these smart systems can quickly become an IT nightmare.
IoT & Smart City Benefits
There are significant benefits and efficiencies to be gained by integrating disparate IoT systems and data. Let’s take a look at some of the possible improvements.
- Operations: By integrating systems, operations teams no longer need to switch between different operating platforms and systems, forgetting logins and functions. The ability to monitor and control all city operations, be it municipal buildings, traffic management, lighting, and waste management all from one platform can greatly improve operational efficiencies.
- Maintenance: Improve the prioritization and scheduling of predictive maintenance work as teams access one cloud-based platform where they can obtain crucial information to assist with the correct ordering of spares, insights into the root cause of faults, and improved coordination of personnel. Improving the flow of data and communication to maintenance teams ensures that work is carried out correctly the first time, every time.
- Advanced analytics and AI: As systems and software are integrated, cities can benefit from the application of advanced analytics and artificial intelligence across the available data. Predictive analytics can provide critical insights to city teams to improve day-to-day services, as well as help inform strategic planning. Having the data on one platform greatly reduces, if not removes, time-consuming data wrangling and data manipulation.
- City stakeholders: By connecting disparate IoT systems, a larger number of stakeholders can benefit. For example, community facilities can be integrated to allow for booking management, access control, maintenance alerts, and accounts for automated accurate billing (i.e., power consumption). Events teams can be granted access to city decorative lighting installations to tie in lighting sequences with city and community events.
- Automation: Many cities rely on a large workforce as their jurisdictions are widespread. The integration of IoT systems can allow for the automation of many simple tasks, such as the switching on/off of lights or irrigation, as well as the monitoring and reporting across a wide range of critical infrastructure. Some cities are even automating their traffic infringement systems, connecting to existing cameras on their fleet of cars.
- IT and OT: Reducing the number of systems, software, and platforms can reduce operating costs for cities and councils, eliminate IT and OT headaches, and help to improve communication and data usage between departments. Data security can be improved as data won’t reside in legacy and proprietary systems which can increase the risk of security breaches.
Integrate From the Beginning
Many cities run small pilot projects to test an innovation. While these pilots are often successful, many cities still struggle to scale the innovation and roll it out. This can often be attributed to issues with system compatibility, which may require customization, retrofitting, and sometimes rebuilding, all of which add time and can blow out a project’s costs.
When planning a smart city innovation there are three things to focus on to ensure you avoid IoT silos:
#1: Interoperability
The linking of legacy IT systems, IoT sensors, and data architectures needs to be at the forefront when considering future solutions. When looking for a solution, cities should prioritize innovations that use open standards and are committed to interoperability.
#2: Operations Transformation Project
Many IoT projects are still considered technology projects rather than operational transformation projects. An operational transformation project can benefit a wide range of stakeholders, even if the project wasn’t designed for them. As innovation is used and embraced by a greater number of stakeholders, the inherent value will be more easily recognized, which will help embed and scale the solution.
#3: Data
The beauty of digital innovation and IoT is the vast amounts of data that can be produced. Ensure you have a plan for how you are going to store and access the data. Even if you are not using the data immediately for advanced analytics or artificial intelligence, you very soon may choose to do so. The more data available the better, so the integration of all available data, even from different systems, will prove to be invaluable. Roughly two years of historical data is an ideal starting point for artificial intelligence, so we recommend you start storing your data from day one.
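As a small illustration of what integrating data from disparate systems can look like, the sketch below maps readings from two hypothetical vendor formats onto one common record before storage; every field name and payload here is invented for the example.

```python
from datetime import datetime, timezone

def normalize_vendor_a(msg: dict) -> dict:
    # Vendor A already reports Celsius and an ISO 8601 timestamp
    return {"sensor_id": msg["device"], "metric": "temperature_c",
            "value": msg["temp_c"], "ts": msg["timestamp"]}

def normalize_vendor_b(msg: dict) -> dict:
    # Vendor B reports Fahrenheit and a Unix epoch, so convert both
    return {"sensor_id": msg["id"], "metric": "temperature_c",
            "value": round((msg["tempF"] - 32) * 5 / 9, 2),
            "ts": datetime.fromtimestamp(msg["epoch"], tz=timezone.utc).isoformat()}

records = [
    normalize_vendor_a({"device": "lamp-12", "temp_c": 21.4,
                        "timestamp": "2024-05-01T10:00:00+00:00"}),
    normalize_vendor_b({"id": "bin-7", "tempF": 70.5, "epoch": 1714557600}),
]
print(records)   # one consistent schema, ready to store from day one
```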
Built to Last
Finally, teams and personnel come and go, but an interconnected, interoperable smart system is built to last. Future city and council employees will one day be grateful for the vast amounts of data stored, and city services and resources can continue to be optimized and improved to increase efficiency and effectiveness. This is the beauty of integrated IoT for smart cities.
5G is finally here. This year, we’ve seen the first 5G networks going live for consumers in the US, UK and South Korea. It’s the first generation of cellular network technology designed with IoT applications in mind and will make a big difference in the medium to long term. But, for anyone planning an IoT project right now, what are 5G’s benefits – and is it truly a viable option?
The Advantages of 5G for IoT
It’s worth noting that some of the big advantages of 5G for consumer applications – the higher connection speeds and greater capacity – aren’t of much relevance for IoT, which typically uses a large number of devices, each sending a small amount of data. The extra capacity could theoretically be a boon to IoT because it allows for a much greater density of devices within a region, but in truth, it’s very rare to find an application today where the density is too high for existing networks.
The biggest gain lies in its lower power usage. When we say 5G was built with the IoT in mind, this is what we mean. Previous cellular technology was designed on the assumption that it would be used predominantly by mobile phones with batteries which are charged once a day, and each generation has consumed more power than the last. But 5G is much better optimized for devices sending small amounts of data, reducing the overhead of signaling and payload for any particular bit of data.
The benefits of lower power consumption – either smaller batteries or longer battery life – are huge for IoT and connected devices. Smaller devices allow for greater flexibility in how an IoT solution is deployed, while extended life means devices can be left in the field for longer without requiring costly maintenance.
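As a back-of-the-envelope illustration with purely hypothetical numbers, battery life is roughly capacity divided by average current draw, so halving the average draw doubles the time a device can stay in the field:

```python
battery_mah = 2400                               # assumed battery capacity
avg_draw_ma = {"older cellular profile": 0.10,   # assumed average draws, not
               "more IoT-friendly profile": 0.05}  # measurements of any real radio

for profile, ma in avg_draw_ma.items():
    years = battery_mah / ma / 24 / 365
    print(f"{profile}: ~{years:.1f} years")
```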
Alternatives to 5G
However, while 5G has now started to roll out, it’s unlikely to be ready for most IoT applications today. The networks are still geographically limited, and you also have to consider the availability of 5G-compatible devices and all of the systems integration steps that come along with that.
In truth, waiting for 5G could mean waiting for a significant time. And there are other solutions available right now that can solve most of the same IoT problems.
The most notable is LoRa, a low-power wide-area network (LPWAN) technology that runs on unlicensed public spectrum. It was first developed in 2009 but has started to see widespread adoption in the past 12-18 months, and it’s specifically designed to send very small amounts of data with very low overhead.
There are limitations, of course. A LoRa module gives around 10 kilometers of coverage in open space, which confines it to applications with static devices operating in a fixed area. Public LoRa networks also exist, but these only cover major urban areas, unlike cellular networks, where a device can reliably connect in almost any city on earth.
Whether it’s the right technology depends on the specific requirements of your application, including location, bandwidth, and security. Alternatively, your needs may be better served by using current cellular technology and accepting larger batteries in the short term, or by some combination of technologies.
The vast majority of IoT applications are viable on currently available technology. Implement it correctly, and you’ll be ready to transition to 5G once its coverage is more ubiquitous and hardware is more affordable.
To learn more about 5G and other IoT connectivity options that can interconnect everyone, everything, everywhere, read “Managing IoT Connectivity for Tomorrow: Growing your connected device business” and join the SAP Digital Interconnect Community.
Written by Mirko Benetti, VP, Head of Sales EMEA and APJ, SAP Digital Interconnect and John Candish, Head of IoT Products, SAP Digital Interconnect.
The phone rings and it's a number I don’t recognize. That’s enough to bring my mood down a few degrees. It shouldn’t, but unfortunately experience has taught me that at least 95 percent of the calls from numbers that are “private” or that I don’t have an account name stored for on my phone are so-called cold calls.
A cold call is an unsolicited visit or telephone call made by someone trying to sell goods or services. These goods or services don't necessarily need to have any real value, so cold calling can also include tech support scammers trying to convince you they work for Microsoft and happen to know that your computer is having problems.
Recently, phone numbers that look vaguely familiar have been showing up on my phone. This is because scammers have found out how to spoof telephone numbers that appear to be from the same area code as the victim. This phenomenon is called neighbor spoofing, and it’s the latest strategy being used by scam artists in an attempt to get people to answer the phone.
So what can you do about these annoying and sometimes dangerous calls? Here are a few tips on how to handle and protect yourself from cold calls.
Most important rules when dealing with cold calls
Do not call back on missed cold calls, especially if the telephone number starts with a foreign country code or strange area code. There are scams that try to trick you into calling premium-rate telephone numbers that generate cash for the threat actors.
If you happen to pick up a cold call, cut the conversation short by stating that you are “not interested” and hang up. Do not use the word "yes." While this may seem rude to you, know that some scammers record the phone conversations and use them to construct a falsified version that makes it appear you have given consent. Consent to what? Anything from sending worthless rubbish to your house to paying hundreds of dollars for a service you don't need.
Prevention is better than a cure
There are many ways to keep the number of cold calls you receive to a minimum. Here are a few tips on keeping the vultures away.
1. Sign up with your local “Do Not Call Registry.”
- USA: National Do Not Call Registry
- Canada: National Do Not Call List
- Australia: Do Not Call Register
- New Zealand: Do Not Call register
- UK: Telephone Preference Service
- Ireland: National Directory Database
2. If organizations keep calling you despite being listed, you have the option to file a complaint with most of the "Do Not Call" registries listed above.
3. Some companies feel they have a right to call their existing customers, even if these customers are listed in a "Do Not Call" register. Make it clear to them that you will become a former customer very quickly if they keep cold calling you. Tell them, “Do not call us, we’ll call you.”
4. Do not give away your telephone number online, even if you just “won a new iPhone.”
5. Most phone operators can bar international calls to make sure that you do not receive any, so consider blocking international numbers if you don’t need to receive any calls from abroad.
6. Ask for your number to be removed from directories (such as the Yellow Pages in your local phone book) to stop cold callers from finding it there.
7. When doing business with companies that need your telephone number, make sure to ask them not to call you for marketing purposes or pass your number on to third parties.
8. Find and use the opt-out option for telemarketing whenever it’s offered.
Block lists
Some phone operators allow you to create a personal block list, but these block lists are usually of a very limited size. Still, it’s a good option to block the most persistent pests.
Your mobile phone may also have the option to create a personal block list, but the number of options and where to find them depend on the manufacturer of the phone. For example, not every Android phone offers the same built-in blocking options, and the steps differ between brands. This article on Android Authority explains how to do it for some major phone brands. And this support article by Apple explains how to block numbers on iOS.
Creating block lists is useful for blocking cold callers after their first try. But ideally, we'd like to avoid that first call as well. There are some apps that can do that for you. Some will block, or automatically send to voicemail, any number that is on their list of known cold callers or comes from a certain range (0800 for example). Having the blocked calls on voicemail can be handy in case you are afraid to miss an important call. And it enables you to listen to the messages at a more convenient time.
Some home phones are also equipped with one or more call blocking features. And for existing landline phones, there are separate call blockers that you can plug in to your device.
Whitelisting and blacklisting
There are two ways in which block lists can be created.
Whitelisting: Only the numbers or ranges that you allow can get through. While this helps to block calls from any number you don't recognize, it could potentially block calls that you'd want to receive, such as from doctor's offices or job interviews. I would recommend routing the rest to voicemail so as not to miss any calls that you would have wanted to take.
Blacklisting: Whether it’s based on individual numbers or ranges of numbers, this method blocks any number on the list from getting through. While this ensures that known cold callers don't reach you, it potentially misses a whole lot of unknowns. Depending on how the list is maintained, there is a lot less urgency to route them to voicemail as we assume the numbers were blacklisted for a reason.
Be aware that neither of these options will be foolproof when it comes to neighbor spoofing. Spoofers can make a call appear to come from any given number—even one from your whitelist. In fact, you could even get an angry call from someone telling you to stop calling them, because the number that was spoofed happened to be your number!
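To make the difference between the two approaches concrete, here is a small, hypothetical sketch of call screening in Python. The numbers, prefixes, and routing actions are invented for illustration and do not correspond to any particular phone or app:

```python
# Hypothetical illustration of whitelist vs. blacklist call screening.
WHITELIST = {"+15551234567", "+15557654321"}   # numbers you explicitly allow
BLACKLIST_PREFIXES = ("0800", "+1900")         # ranges known for cold calls

def screen_call_whitelist(caller_id: str) -> str:
    # Whitelisting: only allowed numbers ring through; everything else
    # goes to voicemail so a wanted call is never lost outright.
    return "ring" if caller_id in WHITELIST else "voicemail"

def screen_call_blacklist(caller_id: str) -> str:
    # Blacklisting: known offenders are blocked; every other number rings.
    if caller_id.startswith(BLACKLIST_PREFIXES):
        return "block"
    return "ring"

print(screen_call_whitelist("+15550000000"))  # -> voicemail
print(screen_call_blacklist("0800123456"))    # -> block
```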
Recording calls
To counter scammers that may falsify the phone conversation you had with them, there are several options to record telephone conversations. Some apps will even save them to a cloud service for you, so you can play them back at any given time. Please note that in some countries you must let the other party know that you are recording the conversation. As a bonus, some scammers will hang up themselves as soon as you let them know.
Summary
Cold calls can be persistent, time-consuming, and incredibly annoying. Unfortunately, there's no single solution that can prevent 100 percent of them from getting through to you—unless you decide to ditch phones altogether. Hopefully, our tips will help keep the number of cold calls to a minimum and your phone number off the radar. And when in doubt...
Keep calm and hang up. 😉
Separation of Duties involves dividing roles and responsibilities to minimize the risk of a single individual subverting a system or critical process without detection.
The classic example used to illustrate Separation of Duties is the United States government, which is broken up into three branches (Legislative, Executive, and Judicial). This was the wisdom of the founding fathers: no single branch, not even the US president, holds all the power to govern the people. This is called "separation of powers." Similarly, in business, Accounts Payable is often separated from Accounts Receivable to "separate duties" so that misappropriating company funds would require collusion.
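As a concrete illustration, here is a minimal, hypothetical sketch of how separation of duties might be enforced in code for a payment workflow. The class, roles, and the rule that the approver must differ from the requester are invented for illustration only:

```python
# Hypothetical sketch: a payment may not be approved by the person who created it.
class SeparationOfDutiesError(Exception):
    pass

class Payment:
    def __init__(self, amount: float, requested_by: str):
        self.amount = amount
        self.requested_by = requested_by
        self.approved_by = None

    def approve(self, approver: str) -> None:
        # Enforce separation of duties: requester and approver must be different
        # people, so misusing funds would require collusion between two employees.
        if approver == self.requested_by:
            raise SeparationOfDutiesError("Requester cannot approve their own payment")
        self.approved_by = approver

invoice = Payment(12_500.00, requested_by="alice")
invoice.approve("bob")          # OK: a second person signs off
# invoice.approve("alice")      # would raise SeparationOfDutiesError
```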
What Should My SMB Do?
If you own a business, you need to be doing these things to protect your sensitive information:
- Govern employees with policies and procedures. You need a password policy, an acceptable use policy, an information handling policy, and a written information security program (WISP) at a minimum.
- Train employees on how to spot and avoid phishing attacks. Adopt a Learning Management system like CyberHoot to teach employees the skills they need to be more confident, productive, and secure.
- Test employees with Phishing attacks to practice. CyberHoot’s Phish testing allows businesses to test employees with believable phishing attacks and put those that fail into remedial phish training.
- Deploy critical cybersecurity technology including two-factor authentication on all critical accounts. Enable email SPAM filtering, validate backups, deploy DNS protection, antivirus, and anti-malware on all your endpoints.
- In the modern Work-from-Home era, make sure you’re managing personal devices connecting to your network by validating their security (patching, antivirus, DNS protections, etc) or prohibiting their use entirely.
- If you haven’t had a risk assessment by a 3rd party in the last 2 years, you should have one now. Establishing a risk management framework in your organization is critical to addressing your most egregious risks with your finite time and money.
- Buy Cyber-Insurance to protect you in a catastrophic failure situation. Cyber-Insurance is no different than Car, Fire, Flood, or Life insurance. It’s there when you need it most.
Most of these recommendations are built into CyberHoot. With CyberHoot you can govern, train, assess, and test your employees. Visit CyberHoot.com and sign up for our services today. At the very least continue to learn by enrolling in our monthly Cybersecurity newsletters to stay on top of current cybersecurity updates.
Published Thursday, Jan 28, 2021, by Mounir Jamil
Just when we thought we were on the losing side of the COVID-19 pandemic, a snowball effect was set in motion when the world first got wind of what the Internet has dubbed the COVID-19 wonder drug – Ivermectin.
While we still await a recognized effective treatment or vaccine for the pandemic, Ivermectin – a drug with a wide bioactivity range that has been in use for more than 30 years, primarily to treat parasitic infections in humans – is now being considered as a potential drug against the virus and is currently undergoing extensive research in clinical trials.
The snowball first started to gain traction back in December 2020, when a team of highly published ICU critical care physicians and scholars conducted a comprehensive analysis of scientific data curated from centers around the world that supports the use of the oral medication as a solution for the pandemic.
The team held a webinar in which they called for immediate action from several worldwide national health authorities to conduct a prompt review of their findings with the goal of ultimately greenlighting Ivermectin as a solution in the fight against the disease.
One of the team members has even testified at a U.S. Senate hearing on the early treatments of COVID-19 where he went on record to say that “I am here to report that our group, led by Professor Paul E. Marik, has developed a highly effective protocol for preventing and early treatment of COVID-19.”
The group maintains that the drug in question has a potent combination of anti-viral and anti-inflammatory properties that make it useful preventively as well as for treating the virus at its early and late stages.
Further adding to the appeal of Ivermectin is that the drug is off-patent and extremely cheap – so cheap it's already popping up on black markets around the world, where even there it's being sold at relatively acceptable prices.
With the snowball effect reaching full momentum, Oxford University has announced that their researchers have gotten the go-ahead and are planning a large-scale trial of the inexpensive drug that could aid in dramatically reducing virus deaths globally.
As for the reality behind all the ambiguity surrounding the Ivermectin remedy, the data is hopeful yet incomplete: no large-scale randomized controlled trials proving its efficacy have been conducted yet, so bets are on Oxford University.
Penetration testing, or pen testing, is a proactive way for organizations to improve their security hygiene and assure their clients that the products and services they provide are as secure as possible. While many enterprises rely on internal audit teams to test the security of their networks, applications, and devices, undergoing third-party penetration testing is a surefire way to identify overlooked or unknown vulnerabilities, find remediation strategies and guidance, and gain peace of mind. But because pen tests are oftentimes merely a suggestion – as with HIPAA – or are only required annually – as with the PCI DSS – organizations overlook the value of undergoing pen testing after a significant change is made.
What Constitutes a Significant Change?
Think about the many components of your organization’s security infrastructure: software, hardware, networks, and even your personnel. How often are updates made to your software? How frequently do you replace hardware? What does your organization’s turnover rate look like? The goal of pen testing is to identify vulnerabilities in your IT infrastructure, which is constantly changing. When a significant change occurs, like developing a new web application, implementing a new smart security system, or having a senior-level executive retire, penetration tests are needed to account for any new risks or vulnerabilities that may be introduced.
Examples of Significant Change
Example 1: Updating Code in a Web Application
According to Verizon’s 2019 Data Breach Investigations Report, “Web application breaches made up nearly 30% of all breaches in 2018.” This should come as no surprise – nearly every organization uses web applications to provide or conduct business, and no matter if they are public-facing or exist on an intranet, they’re susceptible to many cyber threats like SQL injection, DoS, brute-force attacks, or malware. Let’s say that a director of IT has instructed her team to implement and deploy new code. While this code may be developed with security in mind and may go through ample security testing, there could still be undiscovered vulnerabilities. By undergoing pen testing and code review after developing the new code, organizations can rest assured that they performed their due diligence to make sure that the improved web application is secure.
Example 2: Introducing New IoT Devices
IoT devices have made daily tasks easier – from making coffee in the morning to securing your office building. But how might these devices compromise your organization’s security hygiene? Even the smallest, seemingly non-threatening IoT device could cause the demise of your organization if a malicious hacker used it to gain unauthorized access to your network. For instance, let’s say that your coworker brought in a smart picture frame – one that connects to your organization’s WiFi network to display images from your coworker’s phone. Seems pretty harmless, right? Now, if everyone in your organization did something similar, there would be multiple, seemingly non-threatening attack vectors that a malicious hacker could exploit. In scenarios like this, having a robust information security program, thorough internal auditing, and third-party continuous pen testing would be useful to discover new vulnerabilities the IoT devices may introduce.
Example 3: Accounting for Personnel Changes
Major changes to personnel can greatly impact your organization’s security hygiene. If a CISO or CTO leaves, how would that impact the entire IT department? If a developer or network administrator resigns, how would their responsibilities be covered or reassigned? Does the culture of compliance stay intact? Personnel changes are just as likely to introduce new risks into your environment and undergoing continuous pen testing can help account for those changes.
How Can Continuous Pen Testing Help?
Undergoing annual penetration testing is a great first step for improving your organization’s security hygiene, but to really get the most out of your investment in pen testing, you should consider partnering with a third-party firm like KirkpatrickPrice to conduct continuous pen tests. Why? Because changes happen every day, and malicious hackers won’t give you an opportunity to fix the vulnerabilities those changes introduce before they exploit them. By investing in third-party continuous pen testing, your organization will not only gain objective insight into the security of its IT infrastructure on a regular basis, receive actionable remediation steps to mitigate vulnerabilities, and maintain compliance, but you’ll also be able to leverage your commitment to security and give your customers peace of mind that your organization is doing everything it can to remain secure.
Businesses today are rapidly adopting new technologies, but are they staying ahead of the latest threats? Ask yourself if your organization is doing everything you can to prevent a data breach or security incident when the next significant change occurs. Not sure if you are? Contact us today to find out how KirkpatrickPrice’s penetration testing services can help.
Data Governance Best Practices
Data governance has become essential to enable and protect data in today’s data-driven organizations. As the three Vs of data – volume, variety, and velocity – continue to grow, data governance has become complex. Here are some of the factors that can help you launch and run a successful data governance program.
Privacy, Security, and Compliance
The three most essential pillars of data governance are privacy, security, and compliance. You must implement measures that facilitate appropriate control of internal access while ensuring security from outside threats.
Data governance is crucial for maintaining compliance with all applicable laws and regulations that oversee collecting, using, and storing personal data. This is all the more important in the present landscape of data privacy, especially considering the consequences of data breaches and the advent of GDPR.
Data architecture involves establishing the rules, standards, and models that govern and define the type of data collected. It also encompasses how the data is used, stored, and integrated into your organization and its databases.
Establishing the correct data architecture is a crucial data governance best practice. Properly optimizing your data architecture can result in many operational advantages when it comes to data governance.
In data governance, it is good to never let the focus stray from the goal—ensuring consistently high data quality. The quality of data indicates its usefulness and appropriateness in particular contexts, observed in terms of accessibility, timeliness, and relevance, among other parameters.
To maintain high data quality, you need to implement a process of scrubbing or cleaning the data, preferably during ingestion. This helps remove incorrect, incomplete, and duplicate data. This is followed by data profiling—a process that removes data that is inaccurate or irrelevant to your specific use case.
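As a rough illustration of what scrubbing at ingestion might look like, here is a minimal pandas sketch. The column names, sample values, and validity rule are hypothetical and would differ for every organization:

```python
import pandas as pd

raw = pd.DataFrame({
    "customer_id": [101, 101, 102, None],
    "email": ["a@example.com", "a@example.com", "b@example", None],
})

cleaned = (
    raw
    .drop_duplicates()                   # remove duplicate records
    .dropna(subset=["customer_id"])      # drop rows missing a required key
)
# A simple profiling-style check: flag values that fail a validity rule.
cleaned["email_valid"] = cleaned["email"].str.contains(r"@.+\..+", na=False)

print(cleaned)
```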
Master Data Management (MDM)
To implement sound data governance, you need a concrete plan for creating data organization and data strategies. All these organizational and strategic elements combine to constitute the MDM or Master Data Management plan. This plan covers all relevant policies, standards, tools, and processes that your organization needs to process and manage data with effectiveness and control.
Data governance, like many other important aspects of an organization, can benefit greatly from having the right people in the right roles. You need to create roles in your structure that have specific duties and responsibilities in establishing and maintaining data governance. These roles would require different skills and come with different levels of accountability. It is crucial to clearly establish, from the very start, the responsibilities and expectations associated with each role.
Make sure that you carry out a skills audit and provide every person involved with the requisite training. Many data governance programs produce disappointing outcomes because the organization never understood the importance of, and the rationale behind, the controls and processes. Training can help build a culture of data governance inside an organization.
Ensuring the proper accuracy, reliability, availability, security, and legality of your data can form the foundation of a comprehensive data governance program. By implementing these data governance best practices, you could achieve a data governance model that works particularly well for your organization.
You’re likely thinking to yourself, “OK, I see there is some potential in Quantum Computers, and some theoretically important use cases, but nobody has created a robust working Quantum Computer…existing qubits only stay coherent for milliseconds at best, so isn’t this all just hype?”
While no one can say for sure, my suggestion, paraphrasing Deep Throat’s instructions to Bob Woodward, is to “follow the money.”
The amount of funding being dedicated to Quantum Computing on a global basis is staggering. Governments, private companies, venture firms and academic institutions are all committing huge sums of money and resources to this field. While investment flows are no guarantee of future value, there is a broad common theme to push the development of Quantum Computers, and the equivalent of the modern “space race” is garnering growing attention in the media. Given the awesome power, potential and disruption that Quantum Computers can deliver, these trends should not be surprising.
The industry is at an interesting crossroad, where it has evolved from being an esoteric theoretical construct, to having many dozens of firms and academic institutions creating actual working (albeit still not very powerful) Quantum Computers. The challenge now is an engineering one, not a theoretical one. And with the growing pull of resources, it should be expected that engineering challenges will be overcome and developments will accelerate. When integrated circuits were still being created in the 1950’s, very few people could have imagined the boon it would create. Things like personal computers, cellular phones or the Internet were not yet contemplated. Even when PC’s were made available in the early 80’s, many were skeptical that there was an actual market for such an esoteric device. In fact, here is a reprint of an editorial by William F. Buckley Jr. as printed in the Lancaster New Era on July 19, 1982, where he is mulling that he cannot fathom any possible way a personal computer could be useful in the home:
Not surprisingly, his point-of-view was strictly in the context of the written word, since he was a writer, so his myopia makes contextual sense. Given that Quantum Computers are based on a completely different set of physics, logic gates and architecture, I am confident that the use cases will expand well beyond any currently contemplated uses and that current skeptics should try to maintain an open mind.
Government Directed Quantum Computing Investments
As can be seen in the chart below, the top ten countries focused on Quantum Computing technology have recently invested or committed over $21 billion towards this field:
The breadth and depth of these commitments are catalyzing the industry and I expect these trends to continue, so even excluding private company investment, there will be significant advancements achieved at the national level.
Major Current Players
Some of the largest players in the technology space have already dedicated large departments or divisions to Quantum Computing, and lead the push to broad adoption, as highlighted below:
Many are already offering their own quantum software platforms and providing early access to prototype machines over the web. For example, anyone can download the IBM Qiskit open-source Quantum Software Development Kit (SDK), create programs and run them on an IBM quantum emulator. Similarly, you can download and run Google’s Cirq, Microsoft’s Azure, Alibaba’s Aliyum, etc. among others. These firms are leveraging their broad infrastructure, technological resources and established web-based platforms to advance the access to, and utilization of, evolving Quantum Computing resources. In addition, in June Honeywell agreed to invest $300 million into its Quantum Computing unit after it merged with Cambridge Quantum Computing.
Venture Investment in Quantum Computing
In addition to the large government programs and major push by leading technology firms, there is a growing and accelerating focus on Quantum Computing among venture investors. According to the Quantum Computing Report, there have been more than 450 venture investments in Quantum Computing companies made by more than 300 different venture investment firms. Echoing the growth of Silicon Valley companies funded by legendary Sand Hill Road venture investors, current venture investors are making increasingly large and diverse bets on many parts of the Quantum Computing ecosystem, and that activity has grown in each of the past three years (with more than a month still left in 2021).
A few venture firms have focused on Quantum Computing investments, with 17 firms making 3 or more such investments and with two (Quantonation and DCVC) making 10 or more each.
Not only has the playing field for Quantum Computing investments been growing, but there have been some very significant investments made, including large rounds for companies such as IonQ, PsiQuantum, and Rigetti Computing.
Of these companies, IonQ became the first-ever pure-play Quantum Computing company to go public, debuting on the NYSE on October 1, 2021 and as of Nov. 23rd had a market capitalization of $4.8 BILLION. Rigetti Computing also recently announced it would be going public in an expected $1.5 billion reverse merger with a SPAC. The latest PsiQuantum investment was announced this past summer and included a $450 million investment at a valuation exceeding $3 billion, with ambitious plans to build a commercially viable Quantum Computer by 2025.
University Focus on Quantum Computing
Quantum computing and quantum information theory has gone from being a fringe subject to a full complement of classes in well-funded programs at quantum centers and institutes at leading universities. Some world-class universities offering dedicated Quantum Computing classes and research efforts include:
- University of Waterloo – Institute for Quantum Computing
- University of Oxford
- Harvard University – Harvard Quantum Initiative
- MIT – Center for Theoretical Physics
- National University of Singapore and Nanyang Technological University – Centre for Quantum Technologies
- University of California Berkeley – Berkeley Center for Quantum Information and Computation
- University of Maryland – Joint Quantum Institute
- University of Science and Technology of China – Division of Quantum Physics and Quantum Information
- University of Chicago – Chicago Quantum Exchange
- University of Sydney, Australia
- Ludwig Maximilian University of Munich – Quantum Applications and Research Lab
- University of Innsbruck – Quantum Information & Computation
These Colleges and Universities, as well as many others, continue to add courses and departments dedicated to Quantum Computing.
We are witnessing an unprecedented concentration of money and resources focused on Quantum Computing, including substantial government initiatives, major industrial player commitment, accelerating venture investment and evolving university programs. While not all investments will be positive, and the landscape continues to evolve, serious, smart money is backing this trend. The clear message is that resource focus will lead to engineering breakthroughs and immense value creation. There are now hundreds of companies jockeying for position in this evolving field. Stay tuned to this blog as we watch for the winners and losers.
Jean-Francois Bobier, Matt Langione, Edward Tao and Antoine Gourevitch, “What Happens When ‘If’ Turns to ‘When’ in Quantum Computing”, Boston Consulting Group, July 2021.
Hajjar, Alamira Jouman, 33+ Public & Private Quantum Computing Stocks, AI Multiple, May 2, 2021
Inside Quantum Technology News, Government Investments in Quantum Computing Around the Globe, May 31, 2021.
Pitchbook Database, Retrieved November 2021
Universities With Research Groups — Quantum Computing Report, Retrieved November 2021
Venture Capital Organizations — Quantum Computing Report, Retrieved November 2021
What Exactly is Zero Trust Security Model?
What is Zero Trust?
Zero trust is a perimeter-less security model focused on designing effective and efficient security architectures. In enterprise security, the zero trust principle states that no one inside or outside the network can be trusted unless they are identified appropriately. The zero trust philosophy assumes that threats are omnipresent both outside and within the network. In addition, the zero trust model also assumes that any attempt to access a network or application is a security risk. It is these assumptions that drive network administrators to design stringent security measures. A zero trust model is built on a “deny by default” approach, in which any access request is denied until it is verified.
John Kindervag of Forrester Research first coined the term zero trust. An article named “Build Security Into Your Network’s DNA: The Zero Trust Network Architecture,” published in 2010 by Kindervag, explained how traditional network security models do not provide adequate protection because they depend on trust. Administrators must trust people and devices at various points on the network, and if that trust is violated, the entire network could be put at risk.
Because of the evolving global threat landscape, zero trust has gained popularity due to its challenge to long-held assumptions about the trustworthiness of network communications. Well-organized cybercriminals have found clever ways to get beyond traditional security architectures, for example by recruiting insiders. Cyber terrorists and financially motivated criminals can also operate more efficiently with more sophisticated hacking tools and ransomware-as-a-service products. Threats like these can penetrate business and commerce, cause disruptions to human life, and steal valuable data.
Why Do Companies Need Zero Trust Security Model?
Enterprises can achieve the following benefits by implementing a Zero Trust Security Model or Zero Trust Security Architecture.
- Defend attack surface
- Prevent data breaches
- Data protection
- Reduced redundancy
- Reduced complexity of the security stack
- Reduced need to hire and train security professionals
Components of the Zero Trust Security Architecture
Following are the fundamental components of building a zero-trust network security model.
- De-perimeterization: Not binding the network inside a fixed perimeter
- The Protect Surface: Comprises the data, applications, assets, services, and users that you want to protect.
- Multi-factor Authentication: Involves the security mechanisms users must pass to access applications.
- Authorization: Involves verifying whether the user is allowed to access the application.
- Endpoint Verification: Verifying and recording all the endpoints related to the organization.
- Micro-Segmentation: Creating zones within the network to isolate and secure elements of the network.
- Least-Privilege Access: Allowing users to access only the applications essential for performing their operations.
- Zero Trust Network Access: Defining security policy based on not trusting anyone from inside or outside the network.
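To make the "deny by default" and least-privilege ideas from this list concrete, here is a small, hypothetical policy-evaluation sketch. The users, resources, and checks are invented, and real zero trust products implement far richer logic (device posture, risk scoring, continuous re-evaluation):

```python
# Hypothetical "deny by default" access check combining identity, MFA,
# device verification, and least-privilege entitlements.
ENTITLEMENTS = {
    ("alice", "payroll-app"): {"read"},          # least privilege: only what is essential
    ("bob", "payroll-app"): {"read", "write"},
}

def is_access_allowed(user, resource, action, mfa_passed, device_verified):
    if not (mfa_passed and device_verified):     # authenticate the user and endpoint first
        return False
    allowed_actions = ENTITLEMENTS.get((user, resource), set())
    return action in allowed_actions             # anything not explicitly granted is denied

print(is_access_allowed("alice", "payroll-app", "write", True, True))   # False
print(is_access_allowed("bob", "payroll-app", "write", True, True))     # True
print(is_access_allowed("bob", "payroll-app", "write", False, True))    # False
```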
How to Implement Zero Trust Security Architecture?
Zero Trust architectures do not require massive technology modifications or a comprehensive replacement of existing networks. Instead, the framework strengthens existing security practices. Following are some simple steps for implementing a zero trust security model.
- Step1: Define the Attack Surface
– Identify Sensitive Data
– Identify Critical Applications
– Identify Physical Assets
– Identify Corporate Services
- Step 2: Implement Access Controls Around Network Traffic
- Step 3: Architect a Zero Trust network
– Intrusion Prevention Systems
– Packet Filtering
– Content Filtering
– Email Filtering
– Access Controls
– Multi-factor Authentication
- Step 4: Define a Zero Trust Policy
- Step 5: Perform Active and Continuous Threat Monitoring
Challenges for Implementing Zero Trust Security Model
Though the zero trust model has a large number of potential benefits, it comes with certain challenges:
- Complex Infrastructure
- Cost and Effort
- Complex Understanding
- Lack of Skilled Professionals
How Can NetSecurity ThreatResponder Help You?
Cyber security threats and ransomware attacks are increasing at a tremendous pace. It is extremely difficult for cyber security analysts and incident responders to investigate and detect cyber security threats using conventional tools and techniques. NetSecurity’s ThreatResponder, with its diverse capabilities, can help your team detect the most advanced cyber threats, including APTs, zero-day attacks, and ransomware attacks. It can also help automate incident response actions across millions of endpoints, making it easy, fast, and hassle-free.
Want to try ThreatResponder, our cutting-edge Endpoint Detection & Response (EDR) security solution, in action? Request a free demo of NetSecurity’s ThreatResponder platform.
Last week, news broke that famous UK-based artificial intelligence research lab DeepMind was stacking up huge losses for its parent company Alphabet Inc. According to documents filed with the UK’s Companies House registry, DeepMind incurred $570 million in losses in 2018, up from $341 million in 2017.
DeepMind is the AI outfit behind some of the most remarkable feats of recent years, including the AI that beat the human champion at Go and a deep learning model that beat human champions at StarCraft 2. Alphabet, which also owns tech giant Google, acquired DeepMind for $650 million in 2014. Since then, it has been pouring money into the AI research lab without significant returns. DeepMind has 1.04 billion pounds in debts due this year, which includes an 883 million-pound loan from Alphabet.
DeepMind’s huge costs bring to light some of the most serious challenges the AI industry is grappling with. Here are some of the key takeaways.
AI talent scarcity is concentrating research in a powerful few
According to the released information, DeepMind paid $483 million to approx. 700 employees, which means an average of around $700,000 per employee. Of course, the pay is not evenly distributed and some of DeepMind’s AI engineers earn seven-digit salaries.
Currently, the AI talent that can lead the kind of innovative projects research labs like DeepMind work on is very scarce. This has created a race between tech giants to offer bigger salaries to AI engineers in hopes of attracting them to their research teams. Paying more than $1 million to AI researchers has become common in large tech companies like Google and well-funded AI research labs such as OpenAI.
The stellar cost of hiring AI researchers is problematic in several ways. The AI arms race between Big Tech companies is making it harder for smaller companies and organizations to contribute their share to AI research. After all, not every company can afford to pay its AI researchers seven-digit salaries.
But perhaps the more damaging effect is the brain drain in academic AI. The growing interest and the deep pockets of large tech companies is attracting AI talent toward commercial entities. Universities are finding it harder and harder to hold on to their AI researchers as they can’t match the lucrative incentives Big Tech offers.
There are a few AI researchers who prefer to spend their time in less paid academic projects, but their numbers are shrinking.
With AI talent being concentrated in a few powerful organizations, AI research and innovation can become focused on serving the interests of those companies and less the public good. In some cases, commercial and public interests are aligned, but that is not the rule. The disastrous state of social media and addictive tech shows what happens when tech companies decide to give priority to their own bottom line. The impact of Big Tech monopolizing the AI industry can be even more severe.
In my experience examining dozens of commercial and academic AI projects and speaking to their engineers and executives, there needs to be a balance between the two.
Academic projects provide infrastructural, open-source, general-purpose AI tools that are publicly accessible and can solve the problems of all sorts of organizations and achieve long-term goals. They solve fundamental problems but are not ready to be used out of the box. They usually need to be integrated into other products and software and require technical expertise to be finetuned for specific purposes.
Commercial AI projects, on the other hand, provide end-to-end, ready-to-use solutions that organizations and individuals can purchase and immediately employ to solve problems. They’re easy to use and accessible to people and organizations that don’t have AI expertise. But often, they’re not open to modifications and are hidden behind the walled garden of the commercial entity that develops them. The developers usually don’t share details on how the AI technology works and consider it IP and business secrets. Some entities take ownership of the data you generate when you use their AI system, and the service comes at hefty costs (they do have to pay those expensive AI researchers, after all).
Usually running on government grants, academic AI research is not constrained by return on investment and can run long-term projects without worrying about revenue. But commercial AI is constantly under the pressure of investors who want to see return on investments. That’s why they aim for goals that can be achieved in the short term.
With big tech companies recruiting more and more AI researchers into their ranks, there’s concern that there will be too much commercial AI and too little academic work.
Fortunately, there are some initiatives that might help bridge this gap, such as the MIT-IBM Watson AI Lab, which brings together the resources and talents of commercial and academic AI to develop projects that can benefit everyone, such as a technique that makes AI models more robust against adversarial attacks and another that helps understand the inner-workings of neural networks.
Other developments that might help alleviate the gap created by the cost of AI talent are the many online education programs such as Fast.ai, a free course that teaches deep learning to anyone who has basic coding skills and decent understanding of high-school math. These courses will help expand the pool of AI talent and make it more affordable and accessible to organizations that don’t have the resources and money of Big Tech.
Operating costs pose limits on AI research
Another important factor DeepMind’s losses highlight is operating and infrastructure costs. The general belief is that because of the nature of artificial neural networks, the current focus of the AI industry, developing deep learning models requires vast amounts of data and compute resources.
As AI researcher Jeremy Howard explains, however, this does not necessarily hold true. There are plenty of scenarios and use cases where you can develop deep learning models with minimal training data and by spending a few bucks to rent GPUs in the cloud.
There are also plenty of pretrained neural networks that can be finetuned for new purposes with minimal efforts and resources through transfer learning.
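For instance, a common transfer-learning pattern is to take a network pretrained on a large dataset, freeze its weights, and train only a small new head on your own data. The Keras sketch below illustrates the idea; the five-class head and hyperparameters are placeholders, and exact API details vary by framework version:

```python
import tensorflow as tf

# Reuse an ImageNet-pretrained backbone instead of training from scratch.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False                      # freeze the expensive-to-train layers

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),   # small task-specific head
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=5)     # only the head is trained
```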
But many AI research projects that require reinforcement learning are still very resource intensive. Reinforcement learning is a training technique in which the AI model is given the basic rules and reward functions for a problem and is left on its own to explore the environment and find solutions. Reinforcement learning is used for domains such as robotics and teaching AI bots to play games.
For instance, according to figures released by DeepMind, its StarCraft-playing AI model consisted of 18 agents. Each AI agent was trained with 16 Google TPUs v3 for 14 days. This means that at current pricing rates ($8.00 / TPU hour), the company spent $774,000 for the 18 AI agents. Other reinforcement learning projects can have similar costs.
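The arithmetic behind that figure is straightforward; a quick back-of-the-envelope check using the $8.00-per-TPU-hour rate cited above:

```python
tpus_per_agent = 16
training_days = 14
hourly_rate = 8.00          # USD per TPU hour, as cited above
agents = 18

tpu_hours_per_agent = tpus_per_agent * training_days * 24     # 5,376 TPU hours
cost_per_agent = tpu_hours_per_agent * hourly_rate            # about $43,008
total_cost = cost_per_agent * agents                          # about $774,144
print(f"${total_cost:,.0f}")
```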
Overcoming this hurdle will probably be much harder than reducing the costs of AI talent. But there are already interesting efforts in the work. One possible solution is the development of hybrid AI systems that combine neural networks and rule-based programs. According to initial results, hybrid AI systems trained with reinforcement learning can achieve their goals with much less data and compute resources. These types of AI models might make it possible for more resource- and cash-constrained organizations to run their own research programs.
Whether any of these projects and efforts will help reduce the costs of AI remains to be seen. But DeepMind’s growing losses remind us of the current challenges of AI and the need to steer the industry in the right direction.
A Brief Guide to HIPAA-Compliant SMTP Relaying
Simple Mail Transfer Protocol (SMTP) is a way in which email travels across the internet. An SMTP relay is a mail server that passes on your email message to another server that can transfer your message to the intended recipient. Email providers like Gmail own and manage SMTP servers; some allow you to connect to their servers directly while others require you to send email via their webmail applications. In the latter case, providers are also safeguarding against the risk of companies sending several emails in a short period of time and engaging in spamming.
Providers that allow direct access to their SMTP servers may or may not support SMTP relaying. ‘Support’ means that you can connect to their SMTP server to send outbound email to recipients whose email is not managed by the provider (e.g., they handle email for luxsci.net addresses but not yahoo.com).
SMTP authentication versus Secure SMTP
To avoid the risk of hackers spamming users, many email providers require authentication (e.g., via a username and password) to use their SMTP servers. Some providers may go beyond SMTP authentication and offer Secure SMTP, encrypting the communication between your computer and their server using SSL/TLS protocols. This way, the contents of your email message cannot be read along the transmission channel to the SMTP relay server.
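As a rough sketch of what authenticated, TLS-protected SMTP submission looks like from a client's point of view, here is a minimal Python example using the standard smtplib module. The host name, port, addresses, and credentials are placeholders, and transport encryption alone does not make an email workflow HIPAA compliant:

```python
import smtplib
import ssl
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "Test message"
msg.set_content("Hello over an encrypted, authenticated SMTP session.")

context = ssl.create_default_context()
with smtplib.SMTP("smtp.example.com", 587) as server:   # placeholder relay host
    server.starttls(context=context)       # upgrade the connection to TLS
    server.login("username", "password")   # SMTP authentication
    server.send_message(msg)
```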
As far as sending protected health information (PHI) via email is concerned, do HIPAA Security and Privacy Rules require encryption?
Email encryption or mutual consent?
Covered entities can use unencrypted email to communicate sensitive information to patients as long as they meet mutual consent criteria, as follows:
- patients have to be informed of and understand the security compromises arising from a lack of encryption;
- patients should state in writing that they are fine receiving ePHI via unencrypted email; and
- covered entities need to maintain records of mutual consent statements, including risk warnings and written acceptance from patients.
As mutual consent email is still subject to HIPAA guidelines, you cannot send ePHI through the same email host or provider you use for unencrypted business email.
And even if you plan on encrypting emails containing PHI, you should bear certain other factors in mind, as discussed below. They also apply if you send ePHI insecurely.
Six key points to note on SMTP relaying
- Business associate agreement: You must have a Business Associate Agreement with the email provider. BAAs serve two purposes – creating liability between parties and satisfying HIPAA regulatory requirements. So, if one of the parties fails to comply, the other party may have a remedy. But, if the agreement isn’t in place or violated, then both parties will be held liable and suffer consequences.
- Audit trails and activity logging: Auditing controls should also be established to satisfy the administrative safeguards of HIPAA Security Rule. Audit trails and activity logging allows you to see where sensitive content is shared, and when necessary, revoke access at any point.
- Correct recipient: Ensure that the right individual gets the email! The right message to the wrong email is a breach, even under mutual consent. Avoid sending PHI via email unless you have verified the recipient’s address and checked that you have entered the address correctly. Use auto-fill lists and automatic forwarding with care.
- Unique user authentication: Unique user IDs and authentication are essential. You must implement procedures to verify that the person seeing the ePHI is the one claimed.
- Backups/Archives of messages: HIPAA requires you to maintain an email archival system where copies of all sent and received emails are kept in a location separate from your offices and email servers; archived email cannot be deleted or edited; the archived email cannot be downloaded, searched or read by administrators or users, and archived email is secured and kept immutable for long periods of time.
- Proper ePHI protection by you and provider: Covered entities and email providers have to adhere to the same HIPAA Security Rule requirements. Both services should have access, integrity, ID authentication and audit controls in place.
If you plan to send marketing emails to patients, they must first indicate their approval to receive marketing communications. This consent can be obtained electronically, which is more helpful than paper consent forms as it can be managed and audited conveniently. There are other requirements under the Privacy Rule with regard to email marketing. In fact, relaying any email in a HIPAA-compliant manner requires careful consideration and planning. Engaging an email service provider well-versed in HIPAA compliance is the easiest step towards establishing compliance.
A new imaging method could put super-resolution microscopy within reach of most biologists
(BOSTON) — Cell biologists traditionally use fluorescent dyes to label and visualize cells and the molecules within them under a microscope.
With different super-resolution microscopy methods, they can even light up single molecules and see their complex interactions with one another.
However, the microscopy hardware required to do this is highly specialized, expensive, and requires operators to have unique skills; hence, such microscopes are relatively rare in laboratories around the world.
Ralf Jungmann, Ph.D., an alumnus of the Wyss Institute and currently a Professor of Biochemistry at the Ludwig Maximilian University (LMU) and the Max Planck Institute (MPI) in Germany, and Wyss Institute Core Faculty member Peng Yin, Ph.D., have been developing DNA-PAINT, a powerful molecular imaging technology that involves transient DNA-DNA interactions to accurately localize fluorescent dyes with super-resolution.
However, although the researchers demonstrated DNA-PAINT’s potential by visualizing single biomolecules such as proteins in fixed cells at a fixed close distance, the technology could not yet investigate molecules deep inside of cells.
Now, Jungmann’s and Yin’s teams jointly report a solution to overcome this limitation. In their new study, they adapted DNA-PAINT technology to confocal microscopes, which are widely used by researchers in cell biology laboratories to image whole cells and thicker tissues at lower resolution.
The MPI/Wyss Institute team demonstrates that the method can visualize a variety of different molecules, including combinations of different proteins, RNAs, and DNA throughout the entire depth of whole cells at super-resolution.
Published in Nature Communications, the approach could open the door for detailed single-molecule localization studies in many areas of cell research.
The DNA-PAINT approach attaches a DNA “anchor strand” to the molecule of interest. Then a dye-labeled DNA “imager strand” with a complementary sequence transiently attaches to the anchor and produces a fluorescent signal, which occurs as a defined blinking event at single molecular sites. Because this “blinking frequency” is precisely tunable, molecules that are only nanometers apart from each other can be distinguished — at the higher resolution end of super-resolution.
“Our new approach, SDC-PAINT, integrates the versatile super-resolution capabilities of DNA-PAINT with the optical sectioning features of confocal microscopes. We thus created the means to explore the entire depth of a cell, and to visualize the molecules within it at the nanometer scale,” said Jungmann.
The team mapped out the presence of different combinations of proteins within whole cells, and then went beyond that.
“By diversifying our labeling approaches, we also visualized different types of individual biomolecules in the chromosome-containing nucleus, including sequences in the DNA, proteins bound to DNA or the membrane that encloses the nucleus, as well as nuclear RNAs,” adds Yin, who is also co-leader of the Wyss Institute’s Molecular Robotics Initiative, and Professor of Systems Biology at Harvard Medical School.
In principle, confocal microscopes use so-called pinholes to eliminate unwanted out-of-focus fluorescence from image planes above and below the focal plane.
By scanning through the sample, plane after plane, researchers can gather the desired fluorescence signals emitted from molecule-bound dyes over the entire depth.
Specifically, the MPI/Wyss Institute team developed the technique for “Spinning Disk Confocal” (SDC) microscopes that detect fluorescence signals from an entire plane all at once by sensing them through a rotating disc with multiple pinholes. Moreover, “to achieve 3D super-resolution, we placed an additional lens in the detection path, which allows us to achieve sub-diffraction-limited resolution in the third dimension,” said first author Florian Schueder, a Graduate Student working with Jungmann who also worked with Yin’s Wyss Institute team as part of his master’s thesis.
“This addition can be easily customized by manufacturers of SDC microscopes; so we basically implement super-resolution microscopy without complex hardware changes to microscopes that are generally available to cell biologists from all venues of biomedical research.
The approach thus has the potential to democratize super-resolution imaging of whole cells and tissues,” said Jungmann.
“With this important advance, super-resolution microscopy and DNA-PAINT could become more accessible to biomedical researchers, accelerating our insights into the function of individual molecules and the processes they control within cells,” said Wyss Institute Founding Director Donald Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at HMS and the Vascular Biology Program at Boston Children’s Hospital, as well as Professor of Bioengineering at Harvard’s John A. Paulson School of Engineering and Applied Sciences (SEAS).
Other authors on the study are past and present members of Yin’s group including Juanita Lara-Gutiérrez; Brian Beliveau, Ph.D.; Sinem Saka, Ph.D.; and Hiroshi Sasaki, Ph.D.; and Johannes Woehrstein, Maximilian Strauss, and Heinrich Grabmayr, Ph.D., who are working with Jungmann. The study was funded by grants from the Wyss Institute for Biologically Inspired Engineering at Harvard University, the German Research Foundation’s Emmy Noether Program, the European Research Council, LMU’s Center for Nanoscience, the Max Planck Society and Max Planck Foundation, the National Institutes of Health and the Office of Naval Research.
The older bits of the world's accumulated knowledge, bound together in volumes of printed books and magazines, are slowly disappearing. Out-of-print renditions often disappear forever. Libraries with limited shelf space often replace seldom-used titles with newer tomes. A far smaller portion of printed matter makes it to page-scanning processes for preservation in digital form.
In the race to build a universal digital library, many important books and documents are being left behind: special edition books, religious books, historical documents, and books found in small local libraries or in private collections. Left undigitized, the information inside them will fade as the paper deteriorates.
Despite the best efforts of organizations intent on creating exhaustive digital libraries of all human knowledge, their projects are still too fragmented to produce a reliable, universal, digital repository of all printed goods. Often, corporate decisions and budgetary considerations mean books are left behind.
Google's Book Project and Project Gutenberg are two of the more well-known efforts to convert the printed page to a digitally viewable form. Usually, large libraries and university research directors form alliances to take on the challenge of digitizing their own collections. Their 'leave no book behind' mentality is filtering down to smaller businesses with limited revenue, driven by improvements in scanning and storage technologies. This is creating a balance of power, so to speak, that allows those without the reach and capital of Google to join in the digitization movement.
“People are doing this with scanners of all kinds. Hardware is gettingcheaper and better. Nowadays, a lot of it is done with digital cameras.They have high enough resolution today to give very good results. It’salmost like going back to the microfiche days,” John Sarnowski,director of The ResCarta Foundation and director ofImaging Products for Northern Micrographics, told TechNewsWorld.
Northern Micrographics is a service bureau that converts paper andfilm into electronic format. The company has been digitizing printedpages since the early 1980s. In that time span, Sarnowski has observed a bigmisconception about how the process works. Contrary to popular belief, the job of converting from physical page to digital screen does not end with the scanning or camera image.
“Shooting the pages is only one-fifth of the job,” he explained. “There is a lot oftechnology built around getting the page numbers and the physicallayout to match the original printing. The rest of thejob involves getting the metadata right.”
That involves a detailed process of making the pages match. Forinstance, you cannot have all the pages in digital form listed in theobscure number that the scanned file or camera image usually generates, such as00001.scn. Page inserts, titles, author, and otherdata have to be coordinated in the finished digital product.
That’s been the problem since day one, and it’s what the technology hasto overcome, according to Sarnowski.
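A toy example of the kind of metadata coordination Sarnowski describes, mapping raw scan file names onto the logical structure of the book, might look like the following. The file names and fields are invented for illustration:

```python
import json

# Raw scanner output: opaque, sequential file names.
scan_files = ["00001.scn", "00002.scn", "00003.scn"]

# Logical structure a reader actually needs: title, author, real page labels.
volume = {
    "title": "Example Title",
    "author": "Example Author",
    "pages": [
        {"file": "00001.scn", "label": "cover"},
        {"file": "00002.scn", "label": "i"},      # front matter uses roman numerals
        {"file": "00003.scn", "label": "1"},      # body pages start here
    ],
}
print(json.dumps(volume, indent=2))
```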
Your Way or Mine
Another part of the problem in digitizing printed books into electronic media is the end-user format. There is no standard protocol for viewing digitized conversions so that anybody with access can read them.
For example, in 1994, Sarnowski's company got involved with Cornell University and the University of Michigan on one of the earliest digital conversion projects in the United States. Optical Character Recognition (OCR) initially cost US$14 per page, but as the technology got better, the cost dropped. At the end of the three-and-a-half-year project, it was down to a few cents per page.
When the company asked school officials how they wanted the data back, the officials responded, "How would you like to send it back?" said Sarnowski.
"There were no standards then. There still aren't. The library people at Cornell didn't know how to extract the data out of their database system so we could integrate the digital pages. We had to work out all of those details," he explained.
Web No Solution
The Internet is not a true solution to providing universal access to a digital book library, either. Standardization does not always exist on the Web.
To see the problem, think of the digitizing process in terms of other technology. For instance, you can put a sound file in an MP3 player anywhere and it works, as there is only one standard. Not so with DVDs. Different parts of the world have regional codecs with their own file formats for video.
The same lack of universal standards plagues those working to create a universal digital book library.
"The big problem at every major research center, including Google, is there is no standard for dealing with digital pages. To this day, we still do not know how Google is storing the book data and what their format is," Sarnowski said.
Starting From Digital
Some publishers start out in the digital form, so printed books do nothave to be converted. While this approach does not solve the problemof saving books left behind, it at least does not add to that problem.
In the case of publishers such as Springer Science + Business Media, authors must now submit their manuscripts in Microsoft Word or a similar software file format. The company publishes all of itscollections in both PDF (Portable Document Format) and XML (Extensible Markup Language).
“We did digitize all of our journal collections all the way back tothe 1840s. We sent the physical pages to a vendor who made themavailable digitally through a scanning process. Somebody was insertingthe metadata during that process,” George Scotti, global marketingdirector at Springer, told TechNewsWorld.
Springer does not worry about intellectual property theft involving its easy-to-get digital library offerings. The collection is not mainstream reading. Still, it is available on Amazon’s Kindle e-book reader and other such devices.
Springer specializes in publishing scientific research. Since researchers already do most of their work online, the company’s customers are usually familiar with the electronic format, according to Scotti.
“We have a very liberal DRM (digital rights management) policy. Once you buy the content, you can do whatever you want with it. We’ve only had a few cases where it was a problem putting it on a Web site. But it’s not causing us a great deal of concern,” Scotti said.
Another solution in the digital mix, offered by Atiz.com, could be ideal for small companies and individual authors who want to preserve their printed pages digitally.
As long as the user owns the copyright, there is no legal entanglement, according to Atiz President Nick Warmock. The company’s biggest customers include academic libraries around the world, municipalities for deed registries, students and service bureaus.
Three of Atiz’s products give consumers and small organizations inexpensive devices to make their own decisions on what to preserve digitally rather than going through outside services like Gutenberg and Google. In 2006, Warmock partnered with an associate who invented a way to have a mechanical arm turn the pages of books being scanned. The resulting BookDrive DIY (Do It Yourself) includes the cameras, mechanical setup and proprietary software. A related product released in 2007, BookSnap, targets students and others who want to digitize reams of notes. Atiz released BookDrive Pro in January of this year. The product prices range from $1,595 to $15,000.
“We envision one day having a searchable repository for all digitized content. But that hasn’t been worked out yet. The power of such a universal library would be incredible. We’d like to get involved in that project, but too many things would have to be worked out,” Warmock told TechNewsWorld.
The encumbrances blocking a single set of standards — and the financial costs associated with forming a universal digital library — may be solvable, according to Sarnowski. He heads the ResCarta Foundation, a nonprofit organization established to encourage the development and adoption of a single set of open community standards for digital document warehousing.
Northern Micrographics, partially in conjunction with the foundation, promotes an open source raster format. The company offers open source tools free to download in an effort to encourage the use of a standardized data format. The strategy includes working with metadata standards and the same standards the Library of Congress uses.
“We’re fighting for the long-term preservation of data. We’re fighting to stop the loss of original data. It’s been an uphill battle for five years to convince people at large institutions to adopt our system. We’re waging a guerrilla war. We’re saying, do it this way,” said Sarnowski.
The digital divide problem may not go away. In fact, Sarnowski worries, it could become worse. “Twenty years from now, when the next generation of storage comes along, we’re going to have to move all this stuff. If you only had a handful of standards, you could run them through a converter to make the move — but that isn’t the case.”
In the realm of computer science, the term data mining is often used interchangeably with knowledge discovery in databases (KDD), and for good reason. There’s a tremendous amount of valuable, actionable information hidden in the mountains of data that almost all businesses collect and warehouse. This guide explains how this raw data can be accessed and analyzed so that it can be turned into actionable information.
What Is Data Mining?
Modern data mining is the computerized, algorithmic process of searching through those massive piles of raw information and “discovering” or revealing the profitable, efficiency-enhancing knowledge that’s buried there. The process of data mining is the bedrock of any data analytics program. This already crucial aspect of information science is only going to grow more prominent as the emerging technologies of data collection and data storage improve.
What Is Data Analytics?
If data mining is the process of bringing hidden gems of information to the surface, data analytics is the method of refining that raw digital material into profitable ideas and beneficial actions.
In simple terms, data analytics is the sorting and organizing of information into related sets and subsets. Once the large amounts of bulk data are configured into manageable arrangements, the data analytics process can get underway in earnest.
Data analytics best practices involve combing over data sets to discern beneficial patterns and relationships that can be exploited to improve organizational efficiency and, ultimately, boost profits.
The data analytics process has money-making — as well as money-saving — potential. This process applies to almost all areas of operations, including the following:
- Accounting, especially risks and audits
- Organic growth
- Growth through mergers, acquisitions and partnerships
- Human resources
- Inventory management
- Legal, compliance and liability
Why Is a Data Analytics Program Important?
Data science is a rapidly growing, increasingly important aspect of modern business. Organizations that neglect data analytics best practices risk being left behind by more agile competitors that know how to collect, evaluate, and utilize all the information available to them.
A professional plan of data analytics integration can yield significant benefits quickly. Quality analysis of data has an almost predictive element. As if by magic, management will know exactly which resources to deploy and where to deploy them.
Here are some key areas where data science can play an important role.
Big problems that cause business slowdowns and inefficiencies don’t happen out of the blue. There are almost always warning signs. But those signs are often hidden in obscure collections of confusing data.
A proper course of data analytics can spot potential problems long before any issues become actual problems, allowing management to be proactive rather than reactive. By the same measure, data mining can identify rewarding opportunities as easily as it can spot trouble on the horizon.
The proper attitude toward information science is bound to increase efficiency across the board.
Every time a customer or employee opens and reads a piece of digital content, hundreds of data points are created and stored. Thanks to modern technology, managers in sales, marketing and human resources can now know what content is effective and what content goes straight to the trash. Data mining can tell a manager which emails are being opened and which are deleted, while data analysis can let them know who read these messages and what action (if any) was taken.
In short, data analytics allows management to customize and optimize content for maximum results.
Expansion Opportunities and New Product Development
For a manager, staying on top of data means staying on top of the industry they're working in. Being first to market with an innovative product or expanding into new geographic areas can make huge differences to the bottom line.
Sales data by product and geographic area, website traffic data, and other trends that can be revealed through data reviews help businesses make informed decisions on expansion plans, as well as product development.
Data Analytics Tools
An excellent place to start is with KnowledgeLeader’s Data Analytics and Mining Guide, a fully customizable, 24-slide Microsoft PowerPoint presentation. KnowledgeLeader subscribers can enhance and personalize the guide by adding their logos, photos, charts and illustrations and can also add, remove and edit the text as they see fit. This comprehensive guide is designed to broaden an entire organization’s understanding of data analytics and help companies develop and maintain a robust data analytics and mining process.
Here is a look at some of the highlights of this valuable tool.
Definition of Data Analytics Terms
When dealing with complex subjects like data analytics and mining, a critical first step is defining terms. The Data Analytics and Mining Guide offers straightforward, working definitions of:
- Data. What the word means to businesses.
- Data Analysis. What constitutes analysis as opposed to collection.
- Data Mining. What to look for in the information.
Reasons for Robust Data Analytics
People are generally more productive when they know why they’ve been assigned a task. Four slides in the guide are dedicated to reminding key personnel of the value of data analytics. We highlight several of the most important reasons for employing data analytics best practices, including:
- Transforming “data” into action
- Identifying risk and mitigation
- Conducting accurate, comprehensive auditing and testing
- Determining error rates and success rates
- Increasing productivity
- Finding cost savings
Methodology (Process) Overview
When it comes to optimizing organizational data, designing and implementing a process that works for a specific organization is essential for success. KnowledgeLeader's guide is a methodology template covering three broad areas:
- Planning and data request
  - Assessment (testing and auditing)
- Acquisition and validation
  - Acquire data
  - Load data
  - Profile and validate data (see the sketch after this list)
- Analysis and testing
  - Basic analysis
  - Sharing results
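To make the “profile and validate data” step above concrete, here is a minimal illustrative sketch in Python using pandas; the column names and validation rules are hypothetical examples, not part of the KnowledgeLeader guide:

```python
# Illustrative sketch of basic data profiling and validation with pandas.
# Column names and validation rules are hypothetical examples.
import pandas as pd

def profile(df: pd.DataFrame) -> pd.DataFrame:
    """Summarize each column: type, missing values, distinct count."""
    return pd.DataFrame({
        "dtype": df.dtypes.astype(str),
        "missing": df.isna().sum(),
        "distinct": df.nunique(),
    })

def validate(df: pd.DataFrame) -> list:
    """Run simple integrity checks and return a list of findings."""
    findings = []
    if df["invoice_id"].duplicated().any():
        findings.append("Duplicate invoice IDs found")
    if (df["amount"] < 0).any():
        findings.append("Negative amounts found")
    return findings

df = pd.DataFrame({
    "invoice_id": [101, 102, 102],
    "amount": [250.0, -40.0, 99.9],
})
print(profile(df))
print(validate(df))
```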
The overarching goal is to uncover opportunities in a high-volume data environment and ensure the integrity of information through testing and controls. This is facilitated by making some key determinations, which is done by identifying:
- The objectives of the analysis
- The correct analysis tool or tools
- The method of data acquisition, taking transfer constraints and file formats into account
A vigorous program of data analytics is bound to uncover opportunities and generate money-making ideas. It may also uncover problems that need to be addressed. The final sections of KnowledgeLeader's Data Analytics and Mining Guide are devoted to the proper and appropriate presentation of findings to management, officers and other interested parties. Recommendations include:
- Confirming both positive and negative findings
- Identifying root causes and the basis of findings
- Summarizing findings in plain terms
- Preparing a report that can be backed up by facts
- Incorporating data findings into all appropriate operations
- Cleaning up the presentation by deleting unnecessary information
Every company wants to reduce costs, increase earnings, grow the business and generally make better decisions. Data mining and data analysis, when done correctly, can achieve that.
The tools and training available at KnowledgeLeader make employing a customized data analytics best practices policy as easy as possible.
A new study conducted by the Mitre-led independent advisory group JASON for the Department of Health and Human Services has listed several challenges to the adoption of AI platforms in the health care field.
Those include issues associated with the use of the technology in clinical practice; the confluence of AI and smart devices for disease and health monitoring; and the creation of health databases for use in the development of AI tools, according to the study published in December 2017.
JASON found that training data is needed to advance the development of AI tools and that potential misinformation could hamper the use of AI in health.
The study offered recommendations to help address such challenges.
Those include building a data infrastructure to gather and integrate information from smart devices; supporting studies that seek to identify how to incentivize health data sharing; developing information technology capabilities to support the collection of diverse data; and encouraging the development of transparent policies to “ensure reproducibility for large scale computational models.”
The document also cited three factors that could drive the adoption of artificial intelligence in health care: the prevalence of networked smart devices, frustration with legacy systems and acclimation to at-home services.
Ethernet was originally conceived for networks consisting of limited numbers of devices, each painstakingly connected and configured. When mainstream business adopted Ethernet, only relatively small numbers of devices, nearly all of them computers of one sort or another, were connected utilising the original IPv4 addressing scheme. This scheme was designed to handle what at the time was considered to be a “vast” number of connected devices, but today it is clear that such optimism was misplaced.
With the numbers of PCs, laptops and now mobile devices exploding, the pressure on Ethernet networks to provide each device with connectivity has the potential to seriously strain the addressing scheme. Solutions such as one-to-many Network Address Translation (NAT) allow large numbers of private IP addresses to be hidden behind one public IP address or a small range of them, but at the cost of added management complexity and security risk. So how will networks develop to allow the number of connected devices to continue growing rapidly?
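As a rough illustration of the one-to-many NAT idea (a toy model with hypothetical addresses, not any particular vendor's implementation), the sketch below shows how many private address/port pairs can hide behind a single public address by rewriting source ports:

```python
# Toy illustration of one-to-many NAT (port address translation).
# All addresses and ports are hypothetical documentation examples.
class NatTable:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = 40000
        self.mappings = {}  # (private_ip, private_port) -> public port

    def translate_outbound(self, private_ip, private_port):
        """Map a private source address/port to the shared public address."""
        key = (private_ip, private_port)
        if key not in self.mappings:
            self.mappings[key] = self.next_port  # allocate a fresh port
            self.next_port += 1
        return self.public_ip, self.mappings[key]

nat = NatTable("203.0.113.10")
print(nat.translate_outbound("192.168.1.20", 51000))  # ('203.0.113.10', 40000)
print(nat.translate_outbound("192.168.1.21", 51000))  # ('203.0.113.10', 40001)
```

The translation table itself is part of the management and security overhead referred to above: it must be maintained, sized and protected for every flow traversing the gateway.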
Of equal importance is the question of how the bandwidth required by a central Ethernet network can be calculated, managed and, if need be, “rationed” under hugely escalating pressures on usage. To help cater for these demands it is clear that major network architectural modifications may need to be made. Some of these, such as network ‘flattening’, where aggregation layers of the network are removed, offer the potential for major benefits in terms of quality of service, predictability and manageability. Other changes, most notably the migration from IPv4 to IPv6, provide the means to meet the demand for ever more device addresses and new ways to manage service quality.
The remaining pools of IPv4 addresses are diminishing day by day as the rush of device connectivity (servers, storage, desktops, laptops and mobile systems) continues at a frantic pace. The transition from IPv4 to IPv6 is likely to prove taxing. There is little doubt that IPv6 will grow in popularity, as it is already doing in certain geographies, most notably Japan; the only question is when. IPv4 and IPv6 are likely to be utilised side by side for many years, adding another layer of complexity to network management, an area hardly free of such challenges.
Meanwhile the rapid adoption of “virtualisation” is adding further complexity to the mix. The absence of good practices and established processes to help migration projects is inhibiting progress and propagating the deployment of a mix of solutions to extend the usage of the existing address range.
Security and management will also need to be reappraised as mobile connectivity grows in enterprises, especially as the range of devices allowed to connect to corporate systems expands. Many organisations already recognise that their network monitoring and management tools need to be upgraded and this recognition will grow further as networks become more stressed.
The monitoring and management of networks is once again, after a lull of a decade or more, growing in visibility as a major factor in service quality. As device connectivity grows, as flexible IT systems take off and as organisations expand their use of external systems and devices linking to the core, the management of resource demand becomes vital to ensure network resources are utilised according to business goals.
Tony is an IT operations guru. As an ex-IT manager with an insatiable thirst for knowledge, his extensive vendor briefing agenda makes him one of the most well informed analysts in the industry, particularly on the diversity of solutions and approaches available to tackle key operational requirements. If you are a vendor talking about a new offering, be very careful about describing it to Tony as ‘unique’, because if it isn’t, he’ll probably know.
At one time, complex computing tasks — from designing safer and more efficient automobiles to forecasting weather and seismic activity to researching drugs and gene sequencing — required a mainframe computer. Now dubbed supercomputers, these machines are made by the likes of IBM and Seattle’s Cray Inc., which shipped its first Cray MTA-2 supercomputer system in late December.
Now a different method of performing those complex calculations is beginning to gain clout in the commercial world: grid computing.
“Grid computing is a method of harnessing the power of many computers in a network to solve problems requiring a large number of processing cycles and involving huge amounts of data,” said Alan Meckler, chairman and chief executive officer of InternetNews.com parent INT Media Group, which Thursday launched GridComputingPlanet.com, a Web site dedicated to coverage of the grid computing industry. “Rather than using a network of computers simply to communicate and transfer data, grid computing taps the unused processor cycles of numerous — sometimes thousands of — computers.”
Traditional supercomputers are single systems with large numbers of processors, enormous amounts of memory and performance that is measured in gigaFLOPS or even teraFLOPS. Needless to say, these machines are expensive and require top-notch technical expertise to maintain. For instance, IBM’s ASCI White supercomputer is rated at 12 teraFLOPS and costs $110 million.
Grid computing, in contrast, is a type of networking that harnesses the unused processor cycles of computers in a network (including lowly PCs) for supercomputing tasks.
One of the most well-known grid computing projects is SETI@Home, in which PC users worldwide donate their unused processor cycles to analyze radio signals from outer space for signs of extraterrestrial life. Volunteers simply download a screen saver from the project, and their processing power is used to analyze information when the screen saver is active. SETI@Home says that by harnessing volunteers’ unused processor cycles it has achieved about 15 teraFLOPS with about 3 million volunteers. It says the cost has been about $500,000 to date.
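To illustrate the underlying pattern, in which a coordinator splits a large job into independent work units and farms them out to spare processor cycles, here is a minimal, hypothetical Python sketch. Real grids such as SETI@Home add network distribution, scheduling, result verification and fault tolerance on top of this idea:

```python
# Toy sketch of the grid computing pattern: split a big job into work
# units and process them in parallel on spare capacity. The "workers"
# here are local processes; a real grid ships units to remote machines.
from multiprocessing import Pool

def analyze(unit):
    """Stand-in for real analysis, e.g. scanning one chunk of radio data."""
    lo, hi = unit
    return sum(i * i for i in range(lo, hi))

def main():
    work_units = [(i, i + 100_000) for i in range(0, 2_000_000, 100_000)]
    with Pool() as pool:              # one worker per available CPU core
        results = pool.map(analyze, work_units)
    print("combined result:", sum(results))

if __name__ == "__main__":
    main()
```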
Commercial Uses Growing
While SETI@Home is a non-profit project, commercial interests have also begun to take an interest. Juno Online, now a part of United Online, latched onto the idea last year, dubbing it the Juno Virtual Supercomputer Project. The company viewed the virtual supercomputer as a way of monetizing its free subscriber base by selling supercomputing services to research firms. In May of last year, the company secured its first contract when bioinformatics incubator LaunchCyte LLC signed a letter of intent for use by it and its portfolio of companies.
Other firms are also getting into the act, including supercomputing mainstay IBM. In December, Big Blue sealed a deal to provide a traditional parallel processing system to the University of Texas at Austin’s advanced computing center (TACC). TACC will use the system to test computing grids, and IBM has long maintained that grid computing will drastically change computing by enabling heterogeneous systems to share resources over the Web.
INT Media Group launched GridComputingPlanet.com to help the technical community stay abreast of developments brought about by the emergence of grid computing. As part of that effort, the company also announced the launch of Grid Computing Planet Spring 2002 Conference & Expo, which is slated for June 17-18 at the DoubleTree Hotel in San Jose, Calif.
“Grid Computing Planet will become the gateway to grid computing and help solve problems that are beyond the processing limits of individual computers, as well as being a resource center for the technical community, online and offline,” Meckler said.
This story was first published on InternetNews, an internet.com site.
It’s almost inconceivable to think of life without the Internet. As if out of nowhere, this remarkable technology quietly emerged from modest beginnings and proceeded to explode, revolutionizing the world in countless ways – as well as in countless ways we have yet to imagine. But, given how unexpectedly this remarkable phenomenon arose, not to mention how it has come to so completely dominate many aspects of our lives, are we fully aware of its current influence and potential future impact? Those ideas are among the many raised in director Werner Herzog’s thoughtful new documentary, “Lo and Behold, Reveries of the Connected World.”
Making a film about a subject as overarching as the Internet is no small task. The sheer volume of material available for possible inclusion is itself overwhelming. There’s so much to cover that it’s impossible to believe any one picture could do it justice.
Given that, Herzog wisely chose not to try and incorporate all facets of this potentially unwieldy subject into this project. Rather, he selected a handful of relevant topics, illustrating them with specific examples. This material is effectively complemented by interview segments with such experts as entrepreneur Elon Musk, Internet pioneer Dr. Leon Kleinrock, computer scientist Danny Hillis, former hacker Kevin Mitnick, educator and robotics expert Sebastian Thrun, visionary physicist Lawrence Krauss, and astronomer Lucianne Walkowicz.
Among the topics covered in the film are the Internet’s origins and its future, as well as a number of key questions about its current and evolving character. Through the picture’s various examples, Herzog examines such net-inspired innovations as virtually instantaneous universal connectivity, technological wonders in areas as diverse as robotics, smart phones and self-driving cars, and the capacity for unfettered global participatory problem-solving. By contrast, the film then depicts the darker side of the electronic world, such as its potential to inflict devastating emotional harm, the debilitating impact of some forms of wireless technology on physical and psychological health, and the implementation of web-based platforms and hacking programs for intentionally wreaking economic, social and political chaos. The film also explores a number of pending developments, such as Internet-based applications for use in space exploration and interplanetary colonization, brain mapping, and even the creation of highly personalized residential units incorporating “the Internet of Me.”
But, as becomes apparent in the film, much of what ultimately happens with the Internet and related technologies depends not so much on the hardware and software but on what we do with it, considerations driven as much by human nature as by microchips and algorithmic protocols. Of course, as the technology evolves, so does human nature, especially when we look at what electronics now make possible: capacities for creativity and productivity not previously envisioned, let alone capable of being deployed. And none of this takes into account the impact of what’s in the pipeline, developments whose influence can hardly be predicted at this point (if you doubt that, consider the fact that virtually every 20th Century futurist who speculated about the nature of life today never saw the Internet coming).
So what does all this mean? Essentially it’s a rallying cry that we choose carefully what we do with this amazing new technology. That may be easier said than done, though, given the volume of new information that is being added to the Internet on a daily basis. As Herzog astutely points out, if we were to copy to CD all of the data that is being added to the web every day, the stack of disks containing it would extend from Earth to Mars. That’s 365 new interplanetary CD piles every year. That’s some serious food for thought – and a lot to digest.
Though occasionally uneven, “Lo and Behold” is a fascinating documentary about what is arguably one of the most transformative inventions in human history. The bulk of the segments make their cases clearly and succinctly, though there’s a slight tendency to succumb to inadequately explained computer jargon, leaving the technically uninitiated in the dark. Nevertheless, there’s much to like here. Geeks will assuredly adore the film, but even casual users of this technology will likely come away with an enlightened perspective.
Source: THE GOOD RADIO NETWORK
Environmental sustainability in business has always been a concern. But with current affairs in the macro environment, from noticeable changes in weather patterns to high-profile protests and heightened public discussion, it’s safe to say that its urgency has never been more present.
With these issues more pressing than ever, research shows that 80% of businesses globally now report on sustainability, with this figure rising to 96% among the world’s 250 largest companies.
As well as developing a sustainability policy and making achievable pledges and goals to become greener, what can businesses of all sizes do to sincerely reduce their environmental impact?
One answer may be less obvious than others and comes in the form of the technologies you use. And it’s not just about the green credentials of specific products. But rather, affiliating yourself and purchasing from other businesses dedicated to sustainability can help with reaching your own targets.
Environmental sustainability in business – firmly on the Microsoft agenda
The majority of businesses use Microsoft products in one way or another. The good news for those looking to become more sustainable is that the environment is a top priority for the tech giant.
What is Microsoft really doing about the environment?
Microsoft has made a set of ambitious and detailed commitments to help it fulfil its environmental aims. Microsoft sustainability goals are split into four main areas of focus – carbon, ecosystems, water and waste.
Addressing the carbon problem
Carbon emissions have been flagged by many environmental groups and governments around the world as a particular concern for humankind and the planet. And the severity of this appears to be high on the environmental sustainability in business agenda.
According to KPMG’s 2020 Survey of Sustainability Reporting, around two-thirds of the world’s top 100 companies by revenue now have targets in place to reduce their carbon emissions.
Microsoft has been carbon neutral since 2012, meaning that the sum of the greenhouse gas emissions it produces is offset by natural carbon sinks and/or credits.
The severity of the problems brought by carbon emissions is not lost on Microsoft, with the company making several pledges in this area to:
- Become carbon negative by 2030
- Remove all the carbon the company has emitted either directly or by electrical consumption since it was founded in 1975 by 2050
- Invest $1 billion (£737,365,000) into a climate innovation fund to speed the development of carbon reduction and removal technologies
Impact on ecosystems
The impact wide-scale manufacturing and, by extension through the supply chain, the businesses who buy from them, can have on the natural landscape is significant. Microsoft is taking steps to take responsibility for its land footprint by committing to protect and restore more land than it uses by 2025.
Additionally, it has a goal to achieve net zero deforestation from construction work related to its facilities. It also shares data with the conservation community, having made 10 petabytes of environmental and Earth observation data available through Azure.
AI for Earth
Part of Microsoft’s efforts to encourage its customers and peers to become more sustainable includes AI for Earth. This initiative, spanning five years, will see Microsoft pouring $50 million (£36.8 million) into accelerating innovation in green technologies.
The fund will support projects around the world focused on using cutting-edge technologies to solve problems related to biodiversity, climate change, water and agriculture.
Water – a not-so-abundant resource
Water scarcity is something many of us take for granted in our personal lives and this is no different when it comes to businesses.
The Microsoft sustainability programme includes a commitment to becoming “water positive” within a set timeline, namely by reducing its water consumption and replenishing supplies in the regions in which it operates. It plans to do this by:
- Creating and implementing tools to help address the world’s water challenges, including pollution, water scarcity and the health of the world’s oceans
- Replenishing more water than it consumes and achieve zero-waste certification by 2030
- Implementing practical steps in the areas in which it operates to reduce its impact as a business. This includes focusing replenishment efforts on 40 basins deemed ‘highly stressed’, introducing rainwater re-use systems into a selection of its offices and exploring the use of adiabatic cooling, i.e., using air instead of water to cool equipment
Reducing waste in the world of tech
Waste is a major concern for many businesses, especially those involved in manufacturing. Microsoft sustainability efforts to reduce waste output include:
- Setting a goal to achieve zero waste for its direct operations, products and packaging
- Diverting at least 90% of the solid waste from its campuses and datacentres away from landfills and incineration
- Manufacturing 100% recyclable Surface devices
- Using 100% recyclable packaging in Organization for Economic Cooperation and Development (OECD) countries
- Diverting a minimum of 75% of its construction and demolition waste across all projects
The timeline for achieving all of the above has been set to 2030. And Microsoft appears to already be making positive headway in this area, with over 60,000 metric tons of waste diverted from landfills in the last fiscal year.
Products to encourage greener businesses
Aside from these four areas of focus, Microsoft is also highlighting how its apps and products can be used by businesses on a practical level to reduce their environmental impact.
Some of the technologies Microsoft is encouraging people, project owners, customers and businesses to use in their bid to become more sustainable includes AI, Azure and the Azure IoT Hub. Here’s how.
Microsoft Azure sustainability credentials
There are many ways the cloud and specifically Azure can make your business practices greener. Looking at Azure sustainability specifically, Microsoft plans to run its Azure data centres on 100% renewable energy sources by 2025.
And when it comes to accountability and monitoring progress, it announced in late 2020 that its new Azure region in Sweden would monitor on an hourly basis how much of its energy consumption is based on renewable sources.
Cloud for Sustainability
Microsoft’s Inspire event in July 2021 saw the unveiling of the Cloud for Sustainability package. This cloud offering, currently only available in preview, was launched to help businesses record the progress of their environmental goals.
Using the vast potential of the Microsoft cloud, this product offers automated, integrated insights through Software as a Service (SaaS). Designed to help businesses automate reporting as much as possible for greater speed, efficiency and flexibility, Cloud for Sustainability enables a seamless flow of data using real-time sources.
It then processes this data to produce an accurate real-time picture of progress. This can then be shared with a wide range of stakeholders – from data processing managers to senior business teams and CEOs. Importantly, these data-driven insights can then be converted into actionable tasks.
Planetary Computer

Planetary Computer is essentially a vast catalogue of data with the potential to contribute positively towards projects and initiatives aimed at protecting the environment.
Pulling together a multi-petabyte database of environmental monitoring data with APIs to access it, Planetary Computer aims to offer an environment that allows users to explore this data and how it can be used to positive effect.
Also only currently available in preview mode, Microsoft is inviting partners to apply for access to develop apps through the platform and use the cloud to scale their environmental sustainability work.
Further to the above actions and goals, Microsoft is also a signatory of The Climate Pledge: https://www.theclimatepledge.com/
So, for those companies considering how their suppliers and their supply chain can contribute positively to their environmental credentials, or those who are simply looking for positive examples to follow, a cue can be taken from Microsoft’s efforts to prioritise sustainability.
All images used in this blog are courtesy of Microsoft
Join the discussion on environmental sustainability for business
If sustainability is on your business’s agenda, join us on Thursday 11th November when our Azure Product Director Paul Collins will host a short live discussion on hitting your environmental targets and how Microsoft products can help you do this. Sign up here.
In this guest post, Boris Manev, head of sustainability and government affairs at Epson Europe, sets out the role that energy-efficient peripherals can play in helping enterprises achieve their net-zero goals.
What do places such as Alaska and Italy have in common this summer? While in completely opposite parts of the world, they have both seen extreme and deadly events brought about by climate change. Alaska has been suffering from weeks of dry weather and wildfires, while Italy has seen extreme drought and the collapse of a large glacier which crushed ten people.
These events are not isolated but indicative of a wide and accelerating trend. The need to address climate change with far-reaching actions by individuals and businesses across every industry and activity is becoming urgent.
In fact, the latest analysis from the International Energy Agency (IEA) says failure to meet its net zero decarbonisation scenario risks a 100% increase in the frequency of extreme heatwaves and a 40% increase in ecological droughts. Without action, devastating climate change scenarios such as displacement of human communities and animal extinction will be likely.
While the contribution of every individual or organisation may feel insignificant compared to the scale of the climate crisis, incremental changes in behaviour and energy use can make a big difference.
Let’s explore how changing the technology you use, stronger international cooperation and changes in behaviour can reverse the current trend and keep global warming below the key threshold of 1.5˚C.
Tackling appliance emissions
As the world becomes technologically more advanced and populations in developing regions continue to grow, appliance ownership will continue to increase. Therefore, improving the energy efficiency of appliances – and reducing the energy required to produce and run appliances – is crucial to reaching net zero emissions by 2050.
The use of appliances accounts for a very significant share of any building’s carbon footprint. In fact, the electricity consumed by appliances for uses including cooking, cleaning, lighting, information technology and entertainment represents roughly 15% of global electricity demand.
Selecting energy-efficient office appliances and consuming less energy are important actions for businesses and individuals seeking to reduce carbon emissions that drive climate change.
For example, research by Dr Tim Forman of the University of Cambridge, summarised in the Epson “Lower the heat” report, analysed the environmental footprint of different printing technologies, comparing inkjet and laser printer carbon emissions over a typical four-year period. The findings show significantly lower carbon emissions associated with inkjet printers compared to laser printers.
In fact, a worldwide switch to inkjet from laser printing technology by 2025 could reduce energy emissions to 52% of current levels. This reduction is equivalent to taking about 280,000 cars off the road for a year.
Call for international cooperation
To keep the world on track for a net-zero carbon future, the energy consumed globally by appliances must fall on average by approximately 25% from 2020 levels by 2030 and 40% by 2050.
This is no mean feat and requires greater international cooperation and wide support by key political institutions and decision makers.
The progress made in the lighting sector serves as a great example of best practice to be replicated in other areas of household and office appliances, including printing technology.
Technological advancements in artificial lighting, such as improved uptake of LED light fittings, have reduced energy consumption in this subcategory substantially, reinforcing how incremental changes can mitigate total carbon emissions.
We call on decision-makers to encourage the uptake of more efficient appliances. As we saw with lighting regulations, this has the potential to accelerate action and drive down the costs of energy efficient appliances. Another important aspect is product labelling. Energy efficiency labels are proving to be an important tool in promoting environmentally-friendly products and helping consumers make informed choices.
Every single choice is important
Reducing the energy required to power our appliances in our homes and workplaces, including printers, is critical to minimising the devastating impacts of climate change.
Every single individual and organisation has a responsibility and a choice to act now. If everyone on the planet makes one positive change, it can have a huge overall positive impact.
For businesses, the responsibility for change extends into their partner networks and supply chains. They must question the environmental commitments of suppliers and partners to check if they align with their own environmental standards. Influencing the activities of other businesses spreads awareness of effective practices and extends the beneficial impact for the company and the planet.
The future is in our hands. The one thing we have control over is our choice of technology and how we consume energy – and we can make the world a better place one appliance at a time.
Today’s cars are expected to do so much more than just take you from point A to point B. With in-vehicle infotainment systems, on-board diagnostics, advanced driver assistance, and other safety systems, cars need to transmit incredible amounts of data simultaneously.
To accommodate this data load, new vehicles require much faster, more reliable networking than ever before, which has inspired advancements in automotive ethernet.
What is automotive Ethernet?
Automotive Ethernet is a physical, wired network that connects various components within a car. Traditional Ethernet, however, could not keep up with the demands of emerging car technologies. To make Ethernet viable for modern vehicles, BroadR-Reach technology was introduced to reduce latency, eliminate “noise” from physical sources in the car, and control bandwidth allocation.
Unlike standard ethernet that uses a dedicated transmit and receive path, automotive Ethernet employs a single twisted pair that can transmit and receive at the same time. This enhancement not only improves bandwidth and latency performance, but it also reduces the amount of cabling needed which lowers the cost of implementation as well as the weight burden on the vehicle.
What is Automotive Ethernet Used For?
Many cars are already equipped with surround-view parking assistance, collision avoidance systems, lane departure warnings, and other safety features that rely on cameras and sensors. These cameras and sensors must be able to communicate efficiently to guarantee safety, and newer BroadR-Reach Ethernet systems help accommodate the higher computing and bandwidth requirements this demands.
Cars are also being outfitted with increasingly complex infotainment systems. From smartphone connectivity and Bluetooth to interactive video screens, cars are loaded with more applications and connections than ever before. Automotive ethernet is designed to be flexible so as even as new technologies arise, the network can be easily reconfigured to successfully connect each element.
As we move closer to self-driving, autonomous vehicles, cars will be expected to connect to the internet, other vehicles, and even surrounding infrastructure simultaneously. This concept is known as vehicle-to-everything (V2X) communication. All of this must be done using the same network, so it is essential that the network can meet bandwidth and latency requirements, as well as have the intelligence to differentiate and direct high-priority communication over lower-priority traffic, prioritizing safety-critical information over entertainment, for example.
What are the Test Requirements for Automotive Ethernet?
To operate safely on the road, the network itself must be tested and each device’s performance needs to be validated individually and as a complete system. Proper testing of automotive ethernet should include the following:
- Stress testing devices to determine their breaking points
- Verifying resilience by testing worst case scenarios
- Understanding performance under different impairment conditions
- Validating security features under attack conditions
Each of the elements of the RFC 2544 test methodology can apply to automotive ethernet testing. For example:
Throughput – Testing throughput helps determine whether there is enough bandwidth to accommodate the large amount of data that needs to be sent at the same time. What happens when the load is too great? Are the correct applications and protocols being prioritized? Did the failover successfully kick in?
Latency – While some applications can still perform well with higher latencies others will fail. Testing can pinpoint where latency starts to critically impact performance which is especially important for optimizing safety features.
Frame Loss – Understanding how frame loss affects performance gives insight into the quality of the user’s experience. Which of the car’s features are more negatively impacted by frame loss and how much frame loss will cause complete failure?
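As a rough illustration of how these three measurements might be reduced to pass/fail numbers in a test harness (the thresholds and sample figures below are hypothetical, and not taken from RFC 2544 itself):

```python
# Hypothetical sketch: summarizing throughput, latency and frame loss
# from a single test run and checking them against acceptance criteria.
def summarize(sent, received, latencies_ms, bits_received, duration_s):
    return {
        "throughput_mbps": bits_received / duration_s / 1e6,
        "avg_latency_ms": sum(latencies_ms) / len(latencies_ms),
        "max_latency_ms": max(latencies_ms),
        "frame_loss_pct": 100.0 * (sent - received) / sent,
    }

stats = summarize(sent=1_000_000, received=999_200,
                  latencies_ms=[0.8, 1.1, 0.9, 4.2],
                  bits_received=9.6e9, duration_s=10.0)
print(stats)

# Safety-critical traffic would typically demand tighter bounds than
# infotainment; these limits are examples only.
ok = stats["max_latency_ms"] < 5.0 and stats["frame_loss_pct"] < 0.1
print("PASS" if ok else "FAIL")
```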
Each new piece of technology, system, or protocol will require proper testing to validate proof of concept, verify consistent quality & performance, and ensure the safety of the consumer. But there is one other factor that must be considered when testing automotive ethernet – security.
Introducing Ethernet and IP to automobiles exposes their systems to the same threat of attack as any other network. In fact, many vulnerabilities are published publicly and hacking manuals already exist for many cars. It is imperative that car manufacturers build in systems to prevent intrusions and that those systems are thoroughly tested.
How does Traffic Generation Help Test Automotive Ethernet?
Traffic generators can send a large scale of various application traffic through an automotive ethernet network to benchmark the performance of both safety-critical features and infotainment systems. Test unicast, multicast, learning caching, and more by generating a mix of multi-media streams from 1000’s of simulated clients simultaneously. Verify that QoS policies are effectively directing traffic, routing high-priority traffic from cameras and sensors through the best performing links over less critical traffic.
Understand the impact that bandwidth, latency, and loss have on application performance, discover performance bottlenecks on the network, and test against heavy traffic loads to ensure every component that relies on automotive ethernet is delivering optimal user experience.
Traffic generators also allow car manufacturers to carry out security and vulnerability testing using an extensive library of malicious attacks. Validate firewalls are detecting and blocking unauthorized traffic and perform DDoS mitigation by sending a mix of authorized application traffic and malicious attacks at a very high scale.
Advancements in automotive ethernet are opening the door for innovative technologies, and traffic generators can help reduce testing and remediation costs, speed time to market, and protect brand reputation by ensuring the best possible performance.
New Training: The Windows OneNote App
In this 9-video skill, CBT Nuggets trainer Simona Millham teaches you how to use the Windows OneNote app to organize your notes. Gain an understanding of OneNote content types, such as pictures, online videos, audio, and more. Learn how to create separate notebooks and divide them into sections and pages, how to add and format text on OneNote pages, how to search through your notebook content, and how to take notes with inking. Watch this new Microsoft Windows 10 training.
This training includes:
48 minutes of training
You’ll learn these topics in this skill:
Structuring Your Notebooks
Working with Text
Pictures, Video, Audio and More!
Organizing and Searching
What is Microsoft OneNote?
Microsoft OneNote is a note-taking application that is part of Microsoft Office. You can look at it as a hub for your notes. With it, you can organize them into notebooks, which you can then organize into sections and pages. OneNote not only supports text but also a whole range of multimedia formats that you can include within your notes. It further has a familiar Office-like interface, and it allows you to share your notes in real-time with others.
Some of OneNote's other features include:
Compatibility with multiple operating systems and devices, including mobile
Ability to scan and convert handwritten text
OneNote differentiates itself from Word in that, while Word is intended for the creation of documents, OneNote is meant as a place where people can collaborate on ideas. It is not uncommon to start a project in OneNote during the planning stage and then finish it in Word.
The National Institute of Standards and Technology (NIST) is a non-regulatory government agency in the United States that produces standards and guidelines to help federal agencies meet the requirements of the Federal Information Security Management Act (FISMA).
The NIST password guidelines, as you might expect, provide recommendations for how passwords are created, verified, and handled. The guidelines are not enforced, although many companies choose to follow them in order to strengthen their security posture and comply with the relevant data privacy regulations.
Revision 3, the current revision of the NIST password guidelines, was released in 2017 and updated in 2019. Revision 3 introduced a number of changes relating to the strict complexity requirements that were detailed in previous revisions.
To put it simply, when passwords become too complex, users adopt coping strategies that inadvertently compromise password security, which is counter-productive. For example, they might start writing their passwords down on post-it notes, or reusing them with only minor alterations.
NIST Password Guidelines
Following NIST password guidelines will help organizations protect themselves against brute force attacks, dictionary attacks, credential stuffing, and more. Below are some of the most notable changes made in the 3rd revision of the NIST password guidelines:
1. Password Length
As mentioned above, the strict password complexity requirements were removed in revision 3, as they were seen as counter-productive. Under the new revision, user-created passwords should be at least 8 characters in length, and machine-generated passwords should be at least 6 characters. Organizations should also allow passwords of up to at least 64 characters.
2. Password Processing
Organizations should stop truncating passwords, and all passwords should be hashed and salted, with the full password hash stored. Users should be allowed to enter their password at least 10 times before getting locked out.
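For illustration, here is one way to satisfy the hashing-and-salting recommendation using the scrypt key-derivation function from Python's standard library. This is a sketch only; the cost parameters shown are assumptions and should be tuned to current guidance and your hardware:

```python
# Sketch: salted password hashing with the standard library's scrypt KDF.
# Cost parameters (n, r, p) are illustrative; tune them for production use.
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple:
    salt = os.urandom(16)  # unique random salt per user
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest    # store both; keep the full hash, untruncated

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```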
3. Accepted Characters
All ASCII characters are permissible, including the space character. Unicode characters, such as emojis, are also acceptable. Users should be prevented from using obvious patterns, such as sequential numbers or repeated characters.
4. Commonly Used Words
Users should not use commonly used words in their passwords. Likewise, they should be discouraged from using words and phrases that are context-specific.
5. Breached passwords
Organizations should check passwords against a list of previously breached passwords. There is a service called Have I Been Pwned? which contains a list of 570+ million passwords, which have been used in real-life breaches. When users try to create a password that is on the list, they should be prompted to enter a different password.
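The Pwned Passwords service behind Have I Been Pwned? exposes a range endpoint built on a k-anonymity model: the client sends only the first five characters of the password's SHA-1 hash and matches the returned suffixes locally, so the password itself never leaves the machine. A minimal sketch of that check in Python:

```python
# Sketch: screen a candidate password against the Pwned Passwords range API.
# Only the first five characters of the SHA-1 hash are sent over the network.
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many times the password appears in known breaches."""
    sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "password-screening-example"},
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read().decode()
    for line in body.splitlines():
        candidate_suffix, _, count = line.strip().partition(":")
        if candidate_suffix == suffix:
            return int(count)
    return 0

if pwned_count("password123") > 0:
    print("This password has appeared in a breach; prompt for a different one.")
```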
6. Password Expiration
According to both NIST and Microsoft, password expiration policies are no longer necessary. It has been suggested that forcing users to periodically change their passwords may actually do more harm than good, as users become more likely to choose predictable passwords as they are easier to remember.
7. Password Hints
Password hints, or what some refer to as Knowledge-based Authentication (KBA), are now discouraged by the NIST guidelines. For example, a password hint such as “What was the name of your first pet?”, could be fairly easy for an attacker to guess, especially if they did some research beforehand.
8. Password Managers
It’s often the case that users rely on password managers to help them remember their passwords. However, some password fields don’t allow users to paste their passwords. Under the new NIST guidelines, login forms should allow users to paste passwords.
9. Two Factor Authentication (2FA)
When using 2FA, organizations should use an authenticator app, such as Google Authenticator or Okta Verify, as opposed to SMS, as it is no longer seen as a secure method of verification.
Wireless data connections that exploit millimetre wave radio spectrum (30GHz to 300GHz) are expected to be used in worldwide 5G networks from 2020. Bristol start-up Blu Wireless Technology (BWT) has partnered with The University of Bristol’s Communication Systems and Networks research group to develop this technology and they will demonstrate their innovative work at the Small Cells World Summit in London this week [10-12 June].
Millimetre wave radios use much higher carrier frequencies than those in current systems, such as 4G and Wi-Fi. The University and BWT radios transmit data approximately 50 times faster than the 2.4GHz Wi-Fi standard. At 60GHz there is significantly more unallocated spectrum, and this opens up the possibility of multi-Gigabit data rates to future mobile terminals.
The challenge at 60GHz is how to overcome the additional signal losses. If transmit powers and antenna gains were equal, at 60GHz the received signal would be 1000x weaker than a Wi-Fi signal. To address this challenge, millimetre wave systems need electronically steered high gain antennas to track users as they move within the network.
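The rough scale of that figure can be sanity-checked with the free-space path loss formula (a back-of-the-envelope calculation, not taken from the release itself):

```latex
\mathrm{FSPL}(f,d)=\left(\frac{4\pi d f}{c}\right)^{2},\qquad
\frac{\mathrm{FSPL}_{60\,\mathrm{GHz}}}{\mathrm{FSPL}_{2.4\,\mathrm{GHz}}}
=\left(\frac{60}{2.4}\right)^{2}=625\approx 28\,\mathrm{dB}.
```

The frequency term alone therefore accounts for a factor of roughly 625 (about 28 dB); atmospheric oxygen absorption, which peaks near 60GHz, plausibly makes up the remainder of the 1000x (30 dB) figure quoted above.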
A demonstration of results from the first phase of work, supported through the West of England Local Enterprise Partnership Regional Growth Fund, will be showcased for the first time at the summit to be held at ExCel London.
Using a newly developed virtual network simulator the team will show how antenna beam steering supports robust point-to-point connections up to 400 metres. For 5G mobile access, the team will demonstrate multi-gigabit beamforming and mobile tracking up to 100 metres from the base station. In both cases beam forming is shown to overcome the harmful effects of blocking trees and buses.
Henry Nurser, CEO at Blu Wireless Technology, explained: “BWT has developed the Gigabit Digital Baseband necessary for millimetre wave communications to enter the mass market. At the Small Cells World Summit we’re presenting some of the details behind our innovative system-level solutions, how this can be applied to solve the TCO problems associated with backhaul for small cells and why Europe needs to re-think regulations for outdoor 60GHz networks.”
Mark Beach, Professor of Radio Systems Engineering, Department of Electrical and Electronic Engineering, said: “This technology builds on a wealth of knowledge and expertise over the last 25 years in Smart Antenna systems and an in-depth understanding of radiowave propagation. Our rich mix of fundamental research and practical validation at Bristol makes us an ideal partner for industrially relevant projects such as this.”
Andrew Nix, Professor of Wireless Communication Systems and Head of the Department of Electrical and Electronic Engineering, added: “Our sophisticated ray tracing tools have been combined with the University’s high performance computing facilities to enable the rapid analysis of complex millimetre wave systems. In particular, our simulators combine detailed channel models with antenna arrays and beam tracking algorithms to dynamically determine user performance in a virtual network.”
For more than 20 years, academics from the University of Bristol have played a key role in the development of wireless communications and, in particular, have contributed to the development of today’s Wi-Fi and cellular standards.
While it may sound unintuitive at first, software has a supply chain just like any physical good.
Though it is perceived as intangible lines of code, software has a long, complex supply chain that starts with the writing and documentation of code and runs through licensing, distribution, and delivery before finally reaching the customer. The term reached mainstream attention with the massive hacking attacks on SolarWinds and Kaseya that exploited vulnerabilities in the software supply chain to cause enormous security, privacy, and business headaches. It was a reckoning for the software industry to pay closer attention to the software supply chain.
The genesis of software comes from elements like the code, external libraries, third-party tools, and open source repositories. From there, it is assembled into a program by software engineers or programmers, tested, documented, disseminated, and used by customers. Similar to a physical supply chain, this process involves multiple parties, each of which may participate in only one aspect of the software supply chain. Multiple teams of developers may be involved in the assembly portion, and they must be managed effectively to keep to schedule and ensure the rest of the product ships on time.
Though software may not need to be assembled and moved physically with trucks and boats anymore to reach its final destination, it can still be bottlenecked and delayed if the software supply chain is not smooth.
Because it is a complicated chain of logistics, development, and distribution, often involving many parties, a software supply chain also creates many attack surfaces, leaving it susceptible to hacking and source code injection. Furthermore, the use of application programming interfaces (APIs) to make it easier for partners and customers to access code also means it is easier to attack flawed software via those same APIs.
Because the vast majority of software today uses open source code, significant portions of ‘new’ applications might not be written directly by the developer. This can lead to unnoticed security holes if the developer is not aware of the software supply chain. The software industry appears aware of the problem: a survey commissioned by ReversingLabs found that 98 percent of respondents agree that use of third-party software such as open source increases security risks, yet only 37 percent said they can detect software tampering in their software supply chain. A mere seven percent said they detect software tampering across their entire software development lifecycle.
To resolve software supply chain vulnerabilities, solutions include constantly patching open source code, securing the continuous integration and continuous delivery pipeline, constantly testing and monitoring deployed applications, and providing customers with a software bill of materials (SBOM) that lists all the components of the program or application for the sake of transparency.
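To make the SBOM idea concrete, here is a minimal sketch (our own illustration, not a prescribed tool) that lists the components declared in a CycloneDX-style JSON SBOM; the file name and exact field names are assumptions for demonstration, and real SBOMs may instead use the SPDX format.

import json

with open("sbom.cyclonedx.json") as f:            # assumed file name
    sbom = json.load(f)

# CycloneDX-style documents carry a top-level "components" array.
for component in sbom.get("components", []):
    name = component.get("name", "<unnamed>")
    version = component.get("version", "<no version>")
    purl = component.get("purl", "")               # package URL, if declared
    print(f"{name} {version} {purl}")

A customer receiving the SBOM (or the vendor's own CI pipeline) can cross-reference this component list against vulnerability databases and against the binaries actually shipped.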
The edge computing angle
Organizations such as the National Institute of Standards and Technology (NIST) are working on reducing the attack surface at the hardware layer by creating a blueprint for hardware-based security techniques and technologies to reinforce server platform security and data protection for cloud data centers and edge computing.
Edge computing and IoT adoption will mean the enterprise IT environment attack surface potentially gets larger. More operating systems, more applications, and ephemeral compute environments will make software supply chain security an important issue for edge computing vendors and developers to address.
API | DevOps | NIST | security | software supply chain | <urn:uuid:a9a67480-e077-47d7-9592-e3711890705e> | CC-MAIN-2022-40 | https://www.edgeir.com/what-is-a-software-supply-chain-and-what-does-it-mean-for-edge-computing-20220610 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00744.warc.gz | en | 0.932969 | 661 | 3.03125 | 3 |
Railway operators rely on different radio technologies to support rail operations. To meet this need, they count on a technology-neutral radio planning tool to design and manage their communication networks. Most rail operators operate both analogue and digital technologies, including GSM-R, LTE-R, TETRA and PMR. These networks support services like centralised traffic control for rolling stock and GSM-R for high-speed rail communications.
A key network requirement is to provide adequate coverage and capacity. This can be achieved by planning with propagation models that attain a high level of accuracy. Automatic model tuning can calibrate these models against drive-test data and improve the overall frequency plan.
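As an illustration of what such tuning involves (a simplified sketch using made-up drive-test numbers, not ATDI's actual algorithm), one common approach is to fit the exponent of a log-distance path-loss model to the measured samples:

import numpy as np

d = np.array([50, 100, 200, 400, 800])                     # distance from the site, metres
measured_pl = np.array([74.0, 85.0, 96.0, 105.0, 117.0])   # measured path loss, dB

d0, pl_d0 = 50.0, 74.0                                      # reference distance and loss
x = 10 * np.log10(d / d0)
n = np.polyfit(x, measured_pl - pl_d0, 1)[0]                # fitted path-loss exponent

predicted = pl_d0 + n * 10 * np.log10(d / d0)
print(f"fitted path-loss exponent n = {n:.2f}")
print("residual error (dB):", np.round(measured_pl - predicted, 1))

The fitted exponent and the residual spread then feed back into the planning tool so that coverage predictions match what was actually driven and measured.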
HTZ Communications supports all radio technologies ranging from 1kHz to 350 GHz and has been used extensively by rail operators around the world, enabling them to manage their radio spectrum and networks efficiently.
ATDI supports a comprehensive library of cartographic data for use with radio network designs.
This tutorial looks at how to model a leaky feeder in a tunnel environment. The tutorial walks through the process of building a tunnel from scratch using a .shp file to replicate the tunnel environment.
Check out how ATDI support accurate planning for railway communications. | <urn:uuid:cf797dd8-f916-4bef-8a9e-dc3a6ced97c8> | CC-MAIN-2022-40 | https://atdi.com/technologies/railways/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00744.warc.gz | en | 0.92793 | 251 | 2.875 | 3 |
The NHS in England has started to flesh out its plans for data sharing with the publication of new policy guidelines for ‘secure data environments’, which will underpin how people and organizations access data for research and analysis.
The plans are the NHS’s latest attempt to build trust in letting third parties analyze England’s health data at scale, after two previous schemes were ultimately scrapped due to privacy concerns and citizens opting out. Both care.data and the more recent GP data scrape proved to be highly controversial and failed to win over the support of experts and citizens.
The Department of Health and Social Care is hoping that this latest endeavour, which is being underpinned by key learnings during the COVID-19 pandemic, will be more successful.
diginomica has written in the past about how the NHS is thinking about building and maintaining public trust in sharing healthcare data and how the NHS App could be used by citizens to control how their health data is shared.
Secure data environments were a central idea in the government’s recent Data Saves Lives strategy, which followed on from the Goldacre Review that stated:
Data can drive research. It can be used to discover which treatments work best, in which patients, and which have side effects. It can be used to help monitor and improve the quality, safety and efficiency of health services. It can be used to drive innovation across the life sciences sector.
If we are to unlock the full potential of data, we must make sure that the public has confidence in how their data is used and protected. We believe this will only be possible by moving from the current system that relies on data sharing, to one that is built on data access. Secure data environments will be key to achieving this ambition.
The Department of Health and Social Care defines secure data environments as data storage and access platforms, which uphold the highest standards of privacy and security of NHS health and social care data when used for research and analysis. They allow approved users to access and analyze data without the data leaving the environment.
Secure data environments allow the NHS to control:
who can become a user to access the data
the data that users can access
what users can do with the data in the environment
the information users can remove
The hope is that these environments can be used for planning and population health management, internal planning, and broader research and analysis.
At the moment, the Department of Health and Social Care has three different approaches for these environments - NHS Digital’s National Secure Data Environment, four sub-national secure data environments that will work at a regional level, and a federated data platform that will be implemented across the NHS in England.
With this in mind, the Department has published new guidelines - the Five Safes Framework - that aim to not only provide additional information regarding the environments’ purpose, but to build confidence in the NHS’s approach.
The framework, which has been developed by the Office for National Statistics (ONS), aims to follow ‘best practice’ principles for data protection. These include:
safe settings - the environment prevents inappropriate access, or misuse
safe data - information is protected and is treated to protect confidentiality
safe people - individuals accessing the data are trained, and authorized, to use it appropriately
safe projects - research projects are approved by data owners for the public good
safe outputs - summarized data taken away is checked to make sure it protects privacy
It’s worth reading the guidance in full to understand how the NHS is approaching this, but there are some key standouts that are worth highlighting.
For example, these environments will be the default way to access NHS Health and Social Care data. It states:
Secure data environments must be adopted by organisations hosting NHS health and social care data for research and analysis. These environments have features that improve data privacy and security, which will help build public trust in the use of their data.
Instances of analysing or disseminating data outside of a secure data environment will be extremely limited. Any exceptions will require significant justification, such as where explicit consent from clinical trial participants has been obtained.
Transparency is going to be at the core. It adds:
Owners of secure data environments must be open about the way data is used within their secure data environment. They must be able to detail who is accessing the data and for what purpose. This may be achieved, for example, by organisations ensuring that clear and accessible reporting is in place for their secure data environment.
The public will also be included in how these environments are used. The guidance notes:
Owners of secure data environments must make sure that the public are properly informed and meaningfully involved in ongoing decisions about who can access their data and how their data is used. For example, by ensuring that relevant technical information is presented in an accessible way (that is, through publishing privacy notices and data protection impact assessments).
Secure data environment owners must also be able to demonstrate that they have, or plan to, undertake active patient and public involvement activities.
Patient confidentiality is also highlighted as a priority, as the guidance notes:
Data must be treated in a secure data environment to protect confidentiality using techniques such as data minimisation and de-identification. De-identification practices mean that personal identifiers are removed from datasets to protect patient confidentiality. This includes techniques such as aggregation, anonymisation, and pseudonymisation. The level of de-identification applied to data may vary based on user roles and requirements for accessing the data.
Data minimization practices help make sure that access to data is relevant and limited to what is necessary in relation to the purposes for which they are processed.
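To make the pseudonymisation idea concrete, here is an illustrative sketch (our own, not part of the NHS guidance; the identifier, key and fields are invented). A keyed hash lets the key holder link a patient's records across datasets while the raw identifier never appears in the analysis extract:

import hmac, hashlib

SECRET_KEY = b"replace-with-a-managed-secret"   # assumption: held securely by the data controller

def pseudonymise(identifier: str) -> str:
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"nhs_number": "9434765919", "age_band": "40-44", "condition": "asthma"}
safe_record = {k: v for k, v in record.items() if k != "nhs_number"}
safe_record["pseudonym"] = pseudonymise(record["nhs_number"])
print(safe_record)   # the extract carries a pseudonym, not the NHS number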
And the key priority for the environments is that the data use within them must be for the public good. The guidance adds:
The use of NHS health and social care data must be ethical, for the public good, and comply with all existing law. It must also be intended for health purposes or the promotion of health. Data access must never be provided for marketing or insurance purposes.
Owners of secure data environments must make sure there are processes in place to assess the reasons for accessing NHS health and social care data in a secure data environment. These processes must fulfil minimum national standards, which we will set out.
This will make sure that appropriate access is given to NHS health and care data, which will support the delivery of improved outcomes across the health and care system. It will also help build public confidence in why their data is accessed and how it is used.
The Department says that it has started to engage with patients and the public on its plans for data sharing, but that engagement will scale up from Autumn 2022. By the end of 2022 it will also publish technical guidance for secure data environments and an outline of the accreditation process that all NHS secure data environments will need.
I’ve said it before, but there is a huge opportunity for the NHS in terms of how it uses and shares data - in fact, how it does this may be critical to its future success and survival. However, past endeavours have come close to diminishing any trust the public would want to place in letting the NHS do this. They were done behind closed doors and without much public consultation. For this to work, the government needs to bring people with them. And that’s only possible with transparency, seeking input and acknowledging this is a sensitive area. So far, this latest attempt appears to be taking a much better approach. | <urn:uuid:e7df29a0-a11b-4596-b9bb-eddfd4ee0efb> | CC-MAIN-2022-40 | https://diginomica.com/nhs-progresses-its-data-sharing-plans-secure-data-environment-guidelines | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00744.warc.gz | en | 0.94884 | 1,553 | 2.734375 | 3 |
MFA stands for multi-factor authentication. It refers to the use of more than one means of identification to access a secure software system. Usually, MFA security uses a combination of traditional security measures, like keycards and passwords, and biometric measures, like retinal scans.
2FA security is a subset of MFA: where MFA can use two, three, or more authentication measures, 2FA uses exactly two.
The vast majority of security breaches come from weak passwords or insecure devices. By relying on more than one verification mechanism, at least one of which is unique to the individual user, the software system and all it contains remains secure and safe.
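One common second factor is a time-based one-time password (TOTP), the six-digit code generated by an authenticator app. The sketch below follows the RFC 6238 recipe; the Base32 secret is a placeholder, and a real deployment would use a vetted library rather than hand-rolled code.

import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    # HMAC-SHA1 over the current 30-second counter, then dynamic truncation.
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // period)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # placeholder secret; prints the current 6-digit code

Because the code depends on both a shared secret (something you have) and the current time window, a stolen password alone is no longer enough to log in.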
Companies require secure access to data for their employees and for their customers. Their security system, therefore, must store employee and customer biometric data.
Companies should use other security safeguards, as well, such as tiered access that enables only senior employees to, for example, transfer funds above a certain amount. Setting up security alerts when the system is accessed by a new device or from a novel location also strengthens security.
Many companies also provide employees with randomly-generated passwords to at least ensure that this traditional authentication method is as strong as possible.
MFA security systems must comply with up-to-date privacy and security regulations—particularly if they work in the financial or healthcare industries. Developers should also keep abreast of security news to prepare for new potential threats.
Some companies may consider cross-device identity and geolocation data more useful than essential, particularly if they don’t deal in highly sensitive information or don’t have need for remote employee access.
In addition, companies are increasingly moving toward password-free security systems, so expect that soon even these traditional security measures will be redundant.
Security and privacy concerns are some of the main challenges of MFA security, particularly for customers. Their data cannot be held in a database accessible by company employees, so cloud-based and block-chain solutions have become effectively mandatory for many companies.
Additionally, security threats are consistently evolving. Hackers can now fool physical biometric security scans, prompting companies to move to behavioral biometric measures. And with this encroachment on safety, companies must consistently upgrade their security systems, which can be a costly endeavor.
[Security industry leaders from across the world] reported that more than 77% of their employees have been working remotely this year and they expect this to continue and not ask employees to return to the office at all. […] An overwhelming majority are relying on multi-factor authentication (84.3%) and SSL VPNs (81.9%) for secure remote access.
Trulioo GlobalGateway Identity Verification is where its clients can request to verify a person’s identity.
Audiens Resolve Identities identifies customers across multiple touch points and devices.
B2BSignals Cybersecurity Review is designed to help users to conduct research and comparison among cybersecurity solutions.
Data Security and Interoperability services provided by EcoSteer work to provide shareable data streams for businesses.
CBI Information Inc Cloud Security can deliver powerful threat detection, incident response, and compliance management services | <urn:uuid:a16f9216-7ca3-4398-a2d7-c17f07766f60> | CC-MAIN-2022-40 | https://www.data-hunters.com/use_case/mfa-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00744.warc.gz | en | 0.938246 | 645 | 2.859375 | 3 |
What is Impact Investing?
Impact Investing is a new trend in corporate philanthropy that blends investment strategies from the business sector with social and environmental impact strategies from the public sector. It allows social and environmental causes to tap into money and other resources that may otherwise be lacking. Simply put, impact investments are made into companies, organizations, and funds in order to generate positive social and environmental outcomes. Investors not only expect to create social value from their investments, but also to get a financial return on capital. Impact investments provide capital to support solutions that create positive impacts, rather than solutions meant to avoid harmful socio-environmental impacts, making them unique from other forms of "socially responsible investment." They also serve as unique catalysts to emerging and developed markets in social and environmental change, enabling recipients to harness the power of enterprise to achieve positive outcomes on a much larger scale than before.
More on Measurement
An important part of impact investing is measuring the social impact created by the organization, project, etc. of choice. There are a variety of approaches taken when measuring impact. Generally, measurement will closely relate to an investor's goals and intentions for their investments. The Global Impact Investing Network gives a general guide for impact measurement best practices (under "Core Characteristics of Impact Investing"):
- Establishing and stating social and environmental objectives to relevant stakeholders
- Setting performance metrics/targets related to these objectives using standardized metrics wherever possible
- Monitoring and managing the performance of investments against these targets
- Reporting on social and environmental performance to relevant stakeholders
Investments are evaluated to determine if the capital provided has been used as originally intended, has achieved the impact intended, and how any generated impact compares to earlier performance prior to the investment. Investors want to see reports that show hard evidence of the social or environmental value of the programs in which they invested. They also want to see an outlook on the future with consideration given to areas of growth and improvement, where future investments may be allocated.
More about Returns on Investment
For something to be deemed an impact investment, the investment must have a financial return that is at least equal to the principal invested, if not more. In the growing world of impact investing, there are three main providers of capital, or investors: professional investors, specialized funds, and governments.
Impact investors are socially motivated; they value the social or environmental outputs generated from the enterprises in which they invest, as well receiving a financial return from their investment. Investors vary in their expectations for financial returns. In their article, Unpacking the Impact in Impact Investing, Paul Brest and Kelly Born explain that "Non-concessionary investors are not willing to make any financial sacrifice to achieve their social goals," and "Concessionary investors are willing to make some financial sacrifice—by taking greater risks or accepting lower returns—to achieve their social goals."
Impact Investing is an innovative strategy for addressing some of the world's most pressing social and environmental issues. As organizations work to address causes and generate positive social and environmental impacts, socially responsible investing provides much-needed capital for advancing these causes and driving stronger and farther-reaching impacts. It is becoming widely acknowledged that the private and public sectors will need to cooperate to generate the tools, funding and intellectual resources necessary to truly address issues such as poverty, disease, and other issues affecting society and the world. Impact investing is one rapidly expanding approach to achieving widespread social and environmental impacts.
To learn more about impact investing, please visit the following resources:
|The Rockefeller Foundation Innovative Finance||The Rockefeller Foundation founded the concept of impact investing in 2007 and has since been expanding their reach and research in this area. Their website offers a number of publications about impact investing, social impact bonds, and an event page with information on upcoming forums as well as videos and summaries from past events.|
|Global Impact Investing Network||The Global Impact Investing Network "is dedicated to increasing the scale and effectiveness of impact investing." Their community of impact investors is the largest in the world, including asset owners, managers and service providers engaged in impact investing. They provide access to a number of resources, from research publications, news articles, and event calendars, to investor tools and training information.|
|The U.S. National Advisory Board on Impact Investing||"The group's purpose is to highlight key areas of focus for US policymakers in order to support the growth of impact investing and to provide counsel to the global policy discussion." Review their 2014 report: Private Capital, Public Good: How Smart Federal Policy Can Galvanize Impact Investing - and Why It's Urgent|
|Impact Investor UK||Impact Investor UK is a portal providing users with in-depth information and content on: impact investing, measurement strategies, types of impact investments, opportunities for impact investing, and a wealth of related news articles and other media publications.| | <urn:uuid:e80dfed4-663e-4970-b783-92c50bdf8045> | CC-MAIN-2022-40 | https://www.givainc.com/blog/index.cfm/2015/4/9/impact-investing | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00744.warc.gz | en | 0.93428 | 999 | 2.84375 | 3 |
The IT profession is constantly changing and as technology evolves professionals are forced to learn new skills…
Before you can start a career in IT you must become adequately skilled for the type of job you want. For example, if you want to become a website designer, you should be fluent in web technologies such as HTML, CSS, and JavaScript.
You can gain IT skills by checking textbooks out at the library and studying them, or by taking classes or even getting a degree in IT. Some employers will train their newly hired employees, but it’s good to figure out if you are capable of doing the work. Try choosing a skill that you enjoy, or a skill that is a pastime of yours.
Some employers only hire people with college degrees, but you don’t have to go to college to improve your job prospects in the IT field. Look for programs in order to get properly certified. A certification like MCSE or A+ is very impressive and will show potential employers that you have the skills necessary to perform on the job. If you’re having a hard time getting your foot in the door, you should look for an internship. An internship will have you working for free or for lower wages, but you will gain invaluable work experience. If you impress your employer enough they may just hire you; hard work and perseverance pay off.
Even if the company doesn’t hire you, the experience you gain will make it much easier to get a job in the IT field. However, since IT is a field that changes very quickly, you must constantly hone your skills in order to remain competitive. If you’re good at handling change and can appreciate the fact that your skills will always be in demand, getting a job in the IT field would be a good move. As long as you’re skilled at what you do and are willing to work hard, you will eventually get a job in the IT field.
Evolving Marketplace for skilled workers
There is always going to be a need for people who truly understand how technology works. Companies need someone who can manage the Internet network, and someone who can keep the website updated and modernized.
Technology is constantly evolving and keeps the IT profession on its toes. Another upside to IT is that it’s quite difficult to get bored; it isn’t a strict discipline like medicine or law.
About the author: This is a guest post written by Jayvee on behalf of theceomagazine.com website in Norway. If you’d like to know more about this program, visit the website. | <urn:uuid:d2f0f520-e146-4e69-b37c-3e4777eba4ca> | CC-MAIN-2022-40 | https://www.colocationamerica.com/blog/how-to-improve-it-job-prospects | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00744.warc.gz | en | 0.966612 | 531 | 2.5625 | 3 |
If you’re visiting this page, then it’s safe to assume you have some appreciation for how complex the technology sector is. IT is a complicated beast in part due to the fact that it’s constantly evolving and each new advancement is an added layer of complexity on top of an already impossible number of pre-existing layers.
We employ BMC Blogs as a means of peeling back some of these layers to offer deep insights and brief overviews alike—ideally exposing the IT world in a way that allows newcomers and veterans to understand it. Today, we thought we’d talk about a more basic concept in a way that clears up at least some of the confusion surrounding software development.
The topic for today, as I’m sure you guessed by reading the title of this post, is systems programming. But before we talk about what systems programming is, we should first address what a system even is within this context.
What is a System?
The dictionary definition of a system is “a set of things working together as parts of a mechanism or an interconnecting network.” This is a pretty apt way of thinking about systems as they pertain specifically to the IT world. A computer system is a collection of components (both hardware and software) that function as a part of a whole.
A system is comprised of five primary elements: architecture, modules, components, interfaces, and data:
Architecture is the conceptual model that defines the system structure and behavior. This is often represented graphically through the use of flowcharts that illustrate how the processes work and how each component is related to one another.
Modules are pieces (hardware or software) of a system that handle specific tasks within it. Each module has a defined role that details exactly what its purpose is.
Components are groups of modules that perform related functions. Components are like micro-systems within the system at large. Using components and modules in this way is called modular design, and it’s what allows systems to reuse certain pieces or have them removed and replaced without crippling the system. Each component can function on its own and can be interchanged or placed into new systems.
Interfaces encompass two separate entities: user interfaces and system interfaces. User interface (UI) design defines the way information is presented to users and how they interact with the system. System interface design deals with how the components interact with one another and with other systems.
Data is the information used and outputted by the system. System designers dictate what data is pertinent for each component within the system and decide how it will be handled.
Each component complements the system in its own way to keep everything functioning properly. If one piece of the puzzle becomes askew, the entire system can be impacted. Because technology is constantly evolving, components are modified, added, or removed on a constant basis. To make sure these modifications have the desired effect, systems design is used to orchestrate the whole affair.
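To make the modular-design idea concrete, here is a toy sketch (ours, not BMC's) in which the wider system depends only on a small interface, so a module can be swapped or replaced without touching anything else:

from abc import ABC, abstractmethod

class Storage(ABC):                       # the interface the rest of the system relies on
    @abstractmethod
    def save(self, key: str, value: str) -> None: ...
    @abstractmethod
    def load(self, key: str) -> str: ...

class InMemoryStorage(Storage):           # one interchangeable module
    def __init__(self): self._data = {}
    def save(self, key, value): self._data[key] = value
    def load(self, key): return self._data[key]

class UpperCaseStorage(Storage):          # a drop-in replacement with different behaviour
    def __init__(self): self._data = {}
    def save(self, key, value): self._data[key] = value.upper()
    def load(self, key): return self._data[key]

def run_system(storage: Storage) -> str:  # the rest of the system never changes
    storage.save("greeting", "hello")
    return storage.load("greeting")

print(run_system(InMemoryStorage()))      # hello
print(run_system(UpperCaseStorage()))     # HELLO

Because run_system only knows about the Storage interface, either module can be removed, replaced, or reused in another system, which is exactly the interchangeability described above.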
What is Systems Design?
Systems design involves defining each element of a system and how each component fits into the system architecture. System designers focus on top-level concepts for how each component should be incorporated into the final product. They accomplish this primarily through the use of Unified Modeling Language (UML) and flow charts that give a graphical overview of how each component is linked within the system.
Systems design has three main areas: architectural design, logical design, and physical design. The architectural design deals with the structure of the system and how each component behaves within it. Logical design deals with abstract representations of data flow (inputs and outputs within the system).
Physical design deals with the actual inputs and outputs. The physical design establishes specific requirements on the components of the system, such as input requirements, output requirements, storage requirements, processing requirements, and system backup and recovery protocols. Another way of expressing this is to say that physical design deals with user interface design, data design, and process design.
Systems designers operate within key design principles when they are creating the system architecture. Some of the key tenets of good system design are:
- Be Explicit – All assumptions should be expressed.
- Design for Iteration – Good design takes time, and nothing is ever perfect the first time.
- Keep Digging – Complex systems fail for complex reasons.
- Be Open – Comments and feedback will improve the system.
The systems programmers are the ones responsible for executing on the vision of the system designers.
What is Systems Programming?
Systems programming involves the development of the individual pieces of software that allow the entire system to function as a single unit. Systems programming involves many layers such as the operating system (OS), firmware, and the development environment.
In more recent years, the lines between systems programming and software programming have blurred. One of the core areas that differentiates a systems programmer from a software programmer is that systems programmers deal with the management of system resources. Software programmers operate within the constraints placed upon them by the system programmers.
This distinction holds value because systems programming deals with “low-level” programming. Systems programming works more closely with computer resources and machine languages whereas software programming is primarily interested in user interactions. Both types of programming are ultimately attempting to provide users with the best possible experience, but systems programmers focus on delivering a better experience by reducing load times or improving efficiency of operations.
It’s imperative that everyone working within the system is aligned. The primary goal of any service or product is to deliver value to your customers. Whether you are involved with top-level user interactions or low-level system infrastructure, the end goal remains the same. This is why a company culture that supports teamwork and goal-alignment is so important for technology companies.
Modern customers have increasingly high expectations. As such, organizations must constantly be seeking ways to improve their output to provide customers with an ever-improving product. Achieving this is done through intelligent systems design and an agile approach to development. Bringing everyone together to work towards a singular goal is the main pursuit of the DevOps approach to software development. | <urn:uuid:a9887b8c-47d2-466e-8bc1-5d380e69158f> | CC-MAIN-2022-40 | https://www.bmc.com/blogs/systems-programming/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00744.warc.gz | en | 0.93101 | 1,270 | 3.546875 | 4 |
The ability of babies to differentiate emotional expressions appears to develop during their first six months.
But do they really recognise emotion or do they only distinguish the physical characteristics of faces and voices?
Researchers from the University of Geneva (UNIGE), Switzerland, have just provided an initial answer to this question, measuring the ability of six-month-old babies to make a connection between a voice (expressing happiness or anger) and the emotional expression on a face (again, of happiness or anger).
The findings, published in the journal PLOS ONE, show that babies look at an angry face – especially the mouth – for longer if they have previously heard a happy voice.
This reaction to something new demonstrates for the first time that babies have an early ability to transfer emotional information from the auditory mode to the visual.
Emotions form part of our lives from a young age.
Expressing emotions is the first tool available to babies for communicating with those around them.
Babies express their emotions through their posture, voice and facial expressions from birth.
These attitudes help their carers adapt their behaviour to the baby’s emotional state.
A baby’s tears, for example, may be an expression of his or her distress and primary needs (to be fed or changed or to lie down).
But is the opposite also true, asked UNIGE researchers, led by Professor Edouard Gentaz, president of the Psychology section of the UNIGE’s Faculty of Psychology and Educational Sciences and a member of CISA?
Are babies capable of identifying the emotions expressed by adults? Do they adapt their behaviour to fit in with the emotions they are exposed to?
Early skills for discriminating emotions
The ability of babies to differentiate emotional expressions seems to develop in the first six months of life. During this period, new-borns and babies have a preference for smiling faces and happy voices.
Prior to six months, they can distinguish happiness from other expressions such as fear, sadness or anger.
From seven months onwards, they develop the ability to discriminate between several other facial expressions.
It seems, therefore, that babies possess early skills for differentiating between emotions… but do they really recognise them or only distinguish the physical characteristics of faces or voices?
In an attempt to find an answer, 24 six-month-old babies took part in a study at the Geneva BabyLab.
They were exposed to voices and faces expressing the emotions of happiness and anger.
During a first phase devoted to auditory familiarisation, the babies faced a black screen and listened to a neutral, happy or angry voice for 20 seconds.
In the second stage – based on visual discrimination lasting 10 seconds — the babies were placed in front of two emotional faces, one expressing happiness and the other anger.
The research team used eye-tracking technology to measure the baby’s eye movements with great precision.
They were then able to determine whether the time spent looking at one or other of the emotional faces – or specific areas of the face (the mouth or eyes) – varied according to the voice they listened to.
If the babies looked equally at both faces, it would not be possible to conclude that there was a difference.
«On the other hand, if they clearly looked at one of them much longer, we could state that they are able to spot a difference between the two faces,» explains Amaya Palama, a researcher at the Laboratory of Sensorimotor, Affective and Social Development in UNIGE’s Faculty of Psychology and Educational Sciences.
Babies prefer what is new and surprising
The results of the study revealed that six-month-olds did not have a preference for either of the emotional faces if they had already heard a neutral voice or a voice expressing anger.
On the other hand, they spent longer looking at the face expressing anger — especially its mouth — after hearing a voice expressing happiness.
This visual preference for novelty on the part of six-month-olds testifies of their early ability to transfer emotional information about happiness from the auditory to the visual mode.
Based on this study, we can conclude that six-month-old babies are able to recognise the emotion of happiness regardless of these auditory or visual physical characteristics. This research forms part of a project designed to examine the development of emotional discrimination abilities in childhood funded by the Swiss National Science Foundation (SNSF).
Source: Amaya Palama – University of Geneva
Image Source: UNIGE.
Original Research: Open access research for “Are 6-month-old human infants able to transfer emotional information (happy or angry) from voices to faces? An eye-tracking study” by Amaya Palama, Jennifer Malsert, and Edouard Gentaz in PLOS ONE. Published April 11 2018, | <urn:uuid:6a2b3564-4560-4441-aa3b-f71da925ea7b> | CC-MAIN-2022-40 | https://debuglies.com/2018/04/12/babies-make-the-link-between-vocal-and-facial-emotion/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00144.warc.gz | en | 0.948253 | 1,002 | 3.640625 | 4 |
‘Streams in the Beginning, Graphs in the End’ is a three-part series by Dataconomy contributor and Senior Director of Product Management at Cray, Inc., Venkat Krishnamurthy – focusing on how big changes are afoot in data management, driven by a very different set of use cases around sensor data processing. In this first part, we’ll talk about how the bigger revolution in data management infrastructure is driven more by the increasing ease of data collection than by processing tools.
For those of you that like natural disaster movies, you may recall Twister, a movie about tornado chasers where the star attractions were the twisters themselves. As a quick plot summary, the movie is about tornado chasers risking life and limb to get a bunch of shiny, winged sensors into the heart of an EF5 twister to enable them to understand these monsters better from the inside. In a way, ‘Dorothy’, the machine that digitized the tornado, foretold the arrival of the Big Data age.
It’s no exaggeration that we’re in the golden age of data management and analytics.
To us at Cray, Data has never really been any other scale than Big, and the reason for this has been the scientific method itself. Science begins with observation, and ‘data analytics’ has been fundamental to this endeavor from the beginning. In the past, this led to the invention of specialized instruments to observe the very small (microscopes) or very large (telescopes). Arguably this is really the first application of ‘data analytics’ – in a sense, an (optical) microscope or telescope simply turns a tissue sample or patch of sky into a stream of photons analyzed by sophisticated pattern recognition engines (human brains) attached to extremely high-fidelity sensors (human eyeballs).
However, as science relentlessly advanced into ever smaller and ever larger scales simultaneously, it became humanly impossible to build equally capable instruments.
Scientists instead turned to creating scalable, high fidelity mathematical models of physical phenomena and needed tools to study them, hence giving rise to supercomputing by necessity. They use these models to study the insides of stars, the structure of the universe, or molecular dynamics. So, supercomputers have evolved primarily driven by the need to approximate reality at extreme scales – and are really versatile, multipurpose scientific instruments in disguise.
Meanwhile, major advances in data processing have been driven primarily by the commercial sector, starting with the birth of the database. Big ideas in data management like the relational model, transaction processing and SQL were birthed in this age of relatively scarce data and compute capabilities when it was too expensive to capture anything other than a carefully curated recording of key business events.
When the inevitable need arose to understand a business beyond just recording it, the central ideas of Data Warehousing and Business Intelligence were born, driven by basic business needs like financial reporting and sales analysis. Hence, the major ideas of data processing were driven primarily by a need to understand reality, albeit in a narrow business-oriented sense.
For a long time, the paths of traditional ‘supercomputing’ and data analytics didn’t quite intersect except in specialized domains like finance. This persisted till Google upended the status quo famously with the Map Reduce processing model in 2004. The motivating problem at Google was to index the entire Web – but by focusing on building a set of simple building blocks and principles for data processing at extreme scale, they set the stage for the Big Data revolution.
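For readers who have not met the model, the sketch below is a deliberately tiny, single-machine illustration of the MapReduce idea using the classic word-count example (the input documents are made up); the point is that the map, shuffle and reduce phases are simple, independent building blocks that a framework can then distribute across thousands of machines.

from collections import defaultdict

documents = ["streams in the beginning", "graphs in the end", "the end"]

# Map: emit (word, 1) for every word in every document.
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle: group the emitted pairs by key.
groups = defaultdict(list)
for word, count in mapped:
    groups[word].append(count)

# Reduce: aggregate each group independently (and therefore, at scale, in parallel).
counts = {word: sum(values) for word, values in groups.items()}
print(counts)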
The subsequently rapid, exponential evolution of many open frameworks to process data at scale has meant that Big Data has now become a pervasive cliche applied to every domain and several use cases beyond this original need. Tools like Spark and Hadoop allow the average commercial company to dream really Big about their Data, but have brought to the fore all the problems of building and using distributed computing platforms and applications to the commercial datacenter. In addition, businesses are evolving from simple counting and aggregation of business events, to trying to identify sophisticated patterns in their data, inevitably bringing them closer to the computational techniques used in science.
On the flip side, supercomputers have gotten even better at approximating reality, and generate ever-increasing amounts of data in the process. As the supercomputer has become a telescope, or microscope into the un-observably large or small, the ‘stream of photons’ is now a deluge of bits. Increasingly, scientists need to combine the results of these simulations with data from the real world, and identify patterns in petabyte-sized datasets. Their big data need isn’t gated so much by scale, as by productivity, loosely defined as the quickest time to first result in analyzing the data they have either simulated and/or collected. What is needed is the equivalent of human eyeballs and brains at this scale – this is, in essence why convergence between Supercomputing and Big Data is inevitable.
Fig 1 – The evolution of the microscope – on top, the first ever microscope invented by Anton Leeuwenhoek and some samples. Below, a pictorial representation of mass-spectrometry bio-imaging, which ionizes biological samples into mass spectra
Great, you say – but why is ‘Dorothy’ and a barrel of shiny artificial butterflies relevant to this? Also, the idea of ‘convergence of supercomputing and Big Data’ sounds good, it’s still somewhat abstract. How does this all tie together?
The way we see it, the big changes for data management so far have been the ‘revolution at the center’: Storage facilities (‘Data Warehouses’), distribution facilities (‘Data Hubs’) or aquatic bodies of data (‘Data Lakes’).
In contrast, we believe that the realization of the Big Data revolution will be at the edges of data management. Here is where we see this idea of convergence fundamentally becoming reality, and driving changes in everything from the building blocks for large-scale data processing to the system architecture for platforms at Cray that can deliver on the promise.
Why is this true? We believe it has to do with 2 fundamental problems on either end of the analytical data management lifecycle
- At one end, how to handle data management when data collection is pushing towards the ‘edges’, where a large number of sensors produce data
- At the other end, how to create a scalable model of knowledge to unify the results from any and all types of data processing of all that sensor data
To address the above, we believe that an important organizing principle for data management of Big Data will be about ‘Streams in the beginning, Graphs in the End’.
In subsequent parts, we’ll dive into greater detail on each of the above. Stay tuned! | <urn:uuid:714ad634-3b40-4bc9-8322-60ee58ff6777> | CC-MAIN-2022-40 | https://dataconomy.com/2015/06/streams-in-the-beginning-graphs-in-the-end-part-i-data-management-for-the-internet-of-everything/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00144.warc.gz | en | 0.935401 | 1,424 | 2.71875 | 3 |
I love WhatsApp. I use it daily. That’s why the recent WhatsApp breach got my attention.
The breach reminded me about a question we get asked from time to time – what’s the difference between binary-based obfuscation and source code obfuscation? When I am asked this question, I usually make a small joke and say – what’s the difference between a house and a bundle of wood?
Don’t get me wrong, I’m a security engineer and developer. I know that, in the right hands, a bundle of wood can turn into a miraculous creation. That’s not the challenge. The challenge lies in actually coding obfuscation in the source code of an Android or iOS app manually, which is an extremely complex and error-prone undertaking.
What Went Wrong With WhatsApp To Enable the Breach?
I spent a bit of time analyzing the WhatsApp breach and vulnerability. WhatsApp had implemented Java obfuscation using a popular open-source obfuscation tool. That tool places all the burden on the developer to code the solution into the app, and get it *exactly* right.
The vulnerable code was located in a non-obfuscated C library called libwhatsapp.so. That library held most of the logic for the WhatsApp code. Why was libwhatsapp.so not obfuscated? Was it an oversight? Was it to maintain the performance of the app? These questions and more reveal the underlying problem with all manual obfuscation methods. Someone chose (or forgot) to obfuscate libwhatsapp.so and that made it easier for the attacker to capitalize on the exploit, as documented by Check Point.
Manual Obfuscation Requires Lots of Considerations:
(A) You have to find a developer that knows how to code obfuscation.
It takes a highly specialized mobile developer to code obfuscation. First time mistakes, oversights and errors are costly.
(B) You have to go through trial and error to get manual obfuscation right.
There’s a learning curve. That learning curve takes time. And the learning curve starts over for each development environment and for each new developer that touches the project. As a developer, you also have to be careful not to manually obfuscate too much (or too little), to maintain the performance of your app. Remember, the larger development team is constantly writing new code that is not obfuscated. Maintaining the proper obfuscation across an app’s codebase is a real challenge.
(C) You can’t obfuscate everything in source code.
The pressure to release the app on time, non-native file systems, 3rd party SDKs, and other mechanisms pose very sophisticated problems to developers on the obfuscation project. Some things simply can’t be obfuscated in source code.
(D) Obfuscation, alone, is not enough.
Like all security measures, code obfuscation should be part of a larger protection scheme deployed within your app. App makers who rely only on code obfuscation are taking a big risk with their users, data, and IP.
Binary-Based Obfuscation is Simply Better
At Appdome, we give users the ability to obfuscate an entire app binary in seconds. Using Appdome, developers do not code obfuscation manually. Instead, a machine codes the obfuscation directly to the mobile app binary, automatically, instantly, on-demand.
To complete the app protection, a developer should also protect the app against Dynamic Code Analysis. Dynamic Code Analysis is the process of researching the apk/ipa while it is running either by tampering with the app and inserting code or by remotely debugging the app with an interactive debugger. The ones we mostly use at Appdome are LLDB and GDB. Appdome protects against Dynamic Code Analysis with various features including ONEShield, trusted mobile sessions, Jailbreak/ Root prevention, Data At Rest Encryption and Memory Encryption.
Leveraging technology to implement binary-based obfuscation in a mobile app eliminates the challenges of manual implementation. There are no learning curves, no coding tradeoffs, and users can combine obfuscation with other security methods inside the mobile app to protect users, data and IP. On top of that, you don’t need specialized expertise to implement obfuscation using Appdome. Anyone can implement sophisticated obfuscation and shielding methods to any app, instantly.
Preventing the WhatsApp Breach Without Coding a Thing
Using Appdome, no one has to face what WhatsApp faced. Our users can combine three features of Appdome’s security suite to eliminate vulnerabilities seen in the WhatsApp breach.
- First, ONEShield™, Appdome’s anti-tampering, anti-debugging and protection from reverse engineering system provides automatic protection for all apps built on Appdome.
- Second, TOTALCode™ Obfuscation and TOTALData™ Encryption can be combined to protect all libraries, including the native .so libraries (where the vulnerability was found).
- Finally, using TOTALCode Obfuscation, you can add Flow relocation automatically, and using TOTALData Encryption, a user could have applied Encrypt strings and resources automatically.
These features would have made the app much harder (impossible) to reverse engineer. As you can read in an earlier post, Appdome’s TOTALData Encryption also offers encryption of in-memory data, which protects and obfuscates binary source code at runtime as well. Encrypting in-memory data would have made researching the stack overflow exploit in the WhatsApp breach much harder.
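As a toy illustration of what string encryption means in this context (our own sketch; it is not how Appdome or any particular product implements it), the idea is that sensitive literals are stored in an encoded form and only reconstructed at runtime, so they do not show up in a simple scan of the binary:

KEY = 0x5A

def encode(s: str) -> bytes:
    return bytes(b ^ KEY for b in s.encode())

def decode(blob: bytes) -> str:
    return bytes(b ^ KEY for b in blob).decode()

# In a real build step the encoded bytes, not the literal, would be embedded.
OBFUSCATED_ENDPOINT = encode("https://api.example.com/v1/messages")
print(decode(OBFUSCATED_ENDPOINT))   # reconstructed only when the app needs it

A single XOR key like this is trivially reversible and is shown only to illustrate the concept; production-grade tooling layers real encryption, key hiding and control-flow protections on top.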
Here’s a 5-minute video that shows *exactly* how. Anybody can independently verify this by creating their account on Appdome, with no development work needed.
The bottom line, the technology exists to add app hardening and app shielding to apps automatically. That same technology can be used to combine app hardening and app shielding with other security measures. That technology is called Appdome, it would have prevented the WhatsApp breach, and it can prevent breaches of your app too. Our Developer’s Guide for Mobile App Security goes into detail how developers can use Appdome to protect and secure their apps. I encourage everyone to try Appdome, which you can accomplish in less time than it took you to read this article. | <urn:uuid:7217c83d-154c-41e8-b8ea-23fad9ad3592> | CC-MAIN-2022-40 | https://www.appdome.com/dev-sec-blog/mobile-malware-prevention/whatsapp-breach-why-binary-based-obfuscation-is-better/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00144.warc.gz | en | 0.915341 | 1,560 | 2.578125 | 3 |
A report out by Cybersecurity Ventures predicts global annual cybercrime costs will grow to $6 trillion by 2021.
While a $6 trillion estimate might be a little high, “a trillion dollars plus is a real possibility,” says Larry Ponemon, chairman and founder of the Ponemon Institute. Though this isn’t a number he saw coming down the pipeline. “If you asked me five or six years ago, I’d fall over,” he says.
The predicted cybercrime cost takes into account all damages associated with cybercrime including: damage and destruction of data, stolen money, lost productivity, theft of intellectual property, theft of personal and financial data, embezzlement, fraud, post-attack disruption to the normal course of business, forensic investigation, restoration and deletion of hacked data and systems, and reputational harm. It does not include the cost incurred for unreported crimes.
Other research has shown that the cost of cybercrime increases the longer it takes to detect it, if it’s detected at all. According to the Ponemon Cost of Data Breach report, the longer it takes to find and resolve a breach, the more costly it will be for an organization. Breaches identified in fewer than 100 days cost companies an average of about $1 million less than those that take more than 100 days to be discovered, according to Ponemon. And in the 2016 Dark Reading Security Salary Survey, 9% of IT and infosec pros don’t even know if they’ve been breached. A study by The Office of National Statistics for England and Wales found that most cybercrimes go unreported.
The Cybersecurty Ventures report, which is a compilation of cybercrime statistics from the last year, also predicts that the world’s cyberattack surface will grow an order of magnitude larger between now and 2021. | <urn:uuid:6536bbcb-e0cd-4eee-97e2-6959655d4b5e> | CC-MAIN-2022-40 | https://www.darkreading.com/attacks-breaches/global-cost-of-cybercrime-predicted-to-hit-6-trillion-annually-by-2021-study-says | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00144.warc.gz | en | 0.916616 | 386 | 2.625 | 3 |
Published 2 Years Ago on Saturday, Nov 14 2020 By Inside Telecom Staff
Blockchain is a system of recording information in a way that makes it difficult or impossible to change, hack, or cheat the system.
According to Euromoney.com, A blockchain is essentially a digital ledger of transactions that is duplicated and distributed across the entire network of computer systems on the blockchain. Each block in the chain contains a number of transactions, and every time a new transaction occurs on the blockchain, a record of that transaction is added to every participant’s ledger. The decentralized database managed by multiple participants is known as Distributed Ledger Technology (DLT).
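As a minimal illustration of the hash-chaining idea behind that ledger (a simplified sketch of the concept only; real chains add consensus, signatures and peer-to-peer replication), each block commits to the previous block's hash, so altering an old transaction breaks every link after it:

import hashlib, json, time

def make_block(transactions, previous_hash):
    block = {"timestamp": time.time(),
             "transactions": transactions,
             "previous_hash": previous_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

genesis = make_block(["genesis"], previous_hash="0" * 64)
block_1 = make_block(["alice pays bob 5"], previous_hash=genesis["hash"])

# Tamper with the earlier block and the chain no longer verifies.
genesis["transactions"][0] = "genesis (edited)"
recomputed = hashlib.sha256(json.dumps(
    {k: genesis[k] for k in ("timestamp", "transactions", "previous_hash")},
    sort_keys=True).encode()).hexdigest()
print(recomputed == block_1["previous_hash"])   # False: the tampering is detectable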
Blockchain technology has the potential to fundamentally change the way business is conducted, and to transform the foundations of our economic and social systems. One of the major sectors is the telecoms industry, this sector has the ability to adopt this new technology and open up the doors of a new kind of competition.
Blockchain technology started to emerge in the telecom business in 2018, has been used for many purposes, and is expected to grow in the near future. Telecoms are usually protective of their subscriber information, such as airtime/data balances, age on the network, calls, SMSs, and so on. They do their best to keep this information protected from external parties.
The competitive advantage and the value proposition blockchain brings to the telecom industry rely mainly on the security, transparency, and integrity it provides for subscriber and transaction information.
We all know that a telecom network has many core systems, such as the operations support system (OSS) and business support system (BSS), which largely handle relationships with telco partners, sourcing, supply chains, and service providers, including the legal and financial sides of those relationships. These processes are very sensitive, as confidential information is usually shared in these types of exchanges; hence, the value of blockchain comes in as an added layer of security that protects data while speeding up how it is processed and stored.
We have identified 3 main reasons why telcos should focus more on adopting blockchain technology within their network systems:
The first advantage relies mainly on the automation blockchain can bring and on the elimination of intermediary systems. Given the trust this technology provides, the middleman can, in most cases, be removed.
Telcos can improve multi-party transaction processes by automating them in a trustworthy way, particularly with partners who require monthly verification and reconciliation – a time- and energy-consuming process that can instead be carried out automatically through blockchain technology, with both parties trusting the end result.
In this section, we don’t just mean financial fraud, but we must also consider identity fraud as an important scale in the telecom business; a subscriber’s identity includes, but is not limited to, name, age, gender, location, etc. With blockchain technology, data is protected in a decentralized system where it can neither be deleted nor stolen.
With the special and new advantages/features 5G is bringing into the world including high-speed, low latency, and more capacity, this will undoubtedly open doors for new technologies and services to rise up. Telcos will have major input on this while introducing the new OTT and IOT services to the market targeting different business segments whilst ensuring this is being delivered and spread across their network reach. A major issue we noticed here is the miss of a standardized ecosystem. With blockchain technology implemented, everyone will have access to everything, and the competition between small and big companies will intensify.
Telcos are in real competition not only with companies in the same industry but with other enterprises across industries, that might share the same goals and/or targets; these competitors may include banks (while telcos introduce mobile money), retail, airlines (while introducing the loyalty program), etc.
Quick decisions on the roll out plan in terms of strategy is highly needed at this stage. Telcos are urged to start creating a proof of concept “POC” for a real case study in order to take this opportunity forward.
Article Written by Mr. Hussein Taki, Business Development Director at Mondia Group.
Even during its current winter state, the crypto world is still alive. New buyers are still coming in, maybe not as before, but still, some are committed to buying the dip. The Crypto wallet conversation is one to be had when venturing into the crypto world. Between the crypto physical wallet and its virtual counterpart […]
Stay tuned with our weekly newsletter on all telecom and tech related news.
© Copyright 2022, All Rights Reserved | <urn:uuid:a40942db-c4e4-48cc-9882-0d7489a4b740> | CC-MAIN-2022-40 | https://insidetelecom.com/blockchain-and-the-telecom-operators/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00144.warc.gz | en | 0.947124 | 963 | 2.71875 | 3 |
It’s Not Just Your Health at Risk
As the coronavirus (COVID-19) continues to generate headlines around the world, it’s not just your health at risk – your private data is at risk too.
In more than one sense, the coronavirus is impacting your security. Don’t let this cybersecurity threat spread any further by staying alert to these coronavirus-related risks.
Cybercriminals are racing to capitalize on and exploit the current global surge of panic by targeting innocent individuals through malicious online scams designed to steal your personal information and money.
During a global health crisis where everyone is hungry for information, email is one of the main modes of communication. Service providers, schools, government officials, and more all use it to share information with the public about safety measures, closures, and other changes. In turn, this makes for a hotbed of malicious activity by hackers and scammers with the sole intention of taking whatever they can from those that fall for their schemes.
As of late, cases of phishing attacks designed to harvest usernames and passwords from victims who click on links from seemingly legitimate outlets have been growing. These bogus emails have been utilizing tactics like asking potential victims to click on links in order to “prevent the spread of the virus” by downloading new information, safety measures, and more. These links lead to compromised web pages where the goal is to gain access to personal and sensitive information; some try to infect computers with malware.
The Organizations at Play
The cybercrime at play here is due largely to cybercriminals impersonating the very sources millions have been turning to for advice and answers.
The World Health Organization (WHO), a United Nations unit, has reported that scammers have started to use their name, image, and realistic-looking fake domains as part of phishing attacks and other scams to gain access to victims’ personal email credentials.
The Centers for Disease Control and Prevention (CDC) has also fallen victim to fraudsters. Cybercriminals are sending out phishing emails that contain domain names very similar to the CDC’s actual domain. The emails encourage potential victims to click on a link that supposedly contains details about new cases of coronavirus specifically around where they live. There’s also another scam going around via email where CDC imposters have been reported asking citizens for donations in bitcoin.
These fraud attempts are ultimately leaving potential victims at risk for identity theft, personal fortune loss, and much more. As the global health crisis continues to thrive, hackers will continue to take advantage of the situation relentlessly.
Tips for Safer Data
It’s important to protect yourself from these exploitative attempts. To do so as best as possible, follow these tips:
- Think before you click. Don’t click on links from sources you don’t know, especially from an email you were not expecting. Or check the URL to verify its safety – Google Safe Browsing is a good place to start.
- Look for spelling and grammatical errors. These can be common in phishing emails.
- Never enter data that a website shouldn’t be asking for. The CDC and WHO will never ask you to enter in your personal data in order to get updates about the pandemic.
- Do your homework when it comes to donations, whether through charities or crowdfunding sites. If someone is requesting donations in cash, by gift card, or by wiring money, DON’T do it.
- Don't fall for anyone offering you a cure or vaccine to the virus. There are currently no vaccines, pills, potions, lotions, lozenges, or other over-the-counter product or prescription treatment available for the coronavirus – both online and in stores.
- Never use the same password on more than one site!
- Turn on multi-factor authentication. Sure, it can be an inconvenience at times, but it could also act as a barrier for someone trying to hack into your accounts.
How Software Solutions Can Help
Keep Your Email Free from Cybercriminals
To prevent scammers from cashing in on the current health crisis, Clearswift, a data loss prevention solution from HelpSystems, offers advanced threat protection. Our SECURE Email Gateway provides unprecedented spam detection, redaction, and sanitization – it’s not just for your hands, folks!
Clearswift delivers highly secure email without delay, whether it’s in the cloud or on-premises. Its multi-layer defense leverages signatures, recipient authentication, machine learning engines, and more to provide superior protection. This next generation email security results in 99% spam detection and 0% false positives.
Related Reading: GoAnywhere MFT + Clearswift
Keep Patient Information Secure
Even at times of crisis, patients’ personal/protected health information (PHI) should stay that way – protected. As of now, the number of people getting infected by the coronavirus is growing rapidly, which means more and more people are going to be seeking treatment.
Depending on the location and the amount of people infected, there’s a good chance patients may not have much of a choice with who to get treatment from. Regular doctors or hospitals of choice may not have the room or available resources to take patients in. It’s truly important to know that your data can get to the next hospital or doctor safely.
The less time doctors and hospitals spend receiving and verifying personal data, the quicker patients can receive effective treatment and stop the spread of the virus. GoAnywhere Managed File Transfer (MFT) is a great resource for the healthcare field to have in times of crisis such as this.
How to Protect Personal Health Information with MFT
GoAnywhere MFT is a secure file transfer solution that works for healthcare organizations and business associates to safeguard the transfer and storage of sensitive electronic PHI (ePHI) and electronic health record (EHR) data. The software is easy to implement and requires no programming experience to use, so any team can get up and running quickly. It also complies with HIPAA and HITECH regulations.
GoAnywhere simplifies and secures file transfer operations in the following ways:
- Streamlines the transmission of patient histories and insurance information
- Secures patient data transfers to The U.S. Department of Health and Human Services (HHS) or the CDC
- Authenticates all users so only intended parties can access data
- Secures medication records collection from pharmacies
- Coordinates patient updates with outside physicians
Remote Automation for Working from Home
With the increasingly high numbers of staff from organizations all over the world now working from home (whether temporarily or long-term), MFT agents are a great additional benefit to have.
MFT agents provide real-time, remote file transfer capabilities and can be installed virtually anywhere. Servers and desktops located on-premises, external systems like retail locations or trading partners, or even the cloud are all viable options.
With MFT agents you can take advantage of:
- Remote Monitoring
Right now, many people are likely struggling with keeping their logs centralized as they are using non-standard company hardware while working from home. With agents, individuals can use projects on the agent host to collect remote log files and send them back up to the MFT server or other syslog servers to have them centralized into a single location.
- Syncing Directories
With file listing capabilities, along with the logic you can put into projects, you can keep directories in sync with one another by adding/removing files on an as needed basis.
- Remote Script Execution
Agents have the capability to run native commands ad hoc or based upon schedules or conditions.When you couple this with the ability to transfer files to/from the remote agents servers, this could be a great mechanism that allows for easy mass deployment and execution of remote scripts.
An additional way GoAnywhere MFT can help protect your data during this period of working from home is through its mobile app capabilities.
BYOD (Bring Your Own Device) for secure file management is possible with the GoAnywhere File Transfer mobile app. This app enables users of Apple and Android devices to easily and securely send ad hoc file transfers with GoAnywhere MFT.
Coronavirus and Cybersecurity Go Hand in Hand
Through uncertain times there’s no need to feel uncertain about your personal data – GoAnywhere's got you. | <urn:uuid:fd5e0c39-20b5-4ba5-b1fa-cad70fa8507f> | CC-MAIN-2022-40 | https://www.helpsystems.com/blog/how-coronavirus-impacting-your-data-security | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00144.warc.gz | en | 0.925651 | 1,757 | 2.640625 | 3 |
Overview: How to Use an SFTP Client
In this tutorial, I'm going to show you how to use an SFTP client to connect with an SFTP server and then upload and download files with it. In addition, I will talk about host keys and how they are used to verify the server's identity. Finally, I will also demonstrate how to load up a private key and take advantage of public key authentication.
The SFTP client I'll be using throughout this article is our platform-independent file transfer client, AnyClient. Aside from SFTP, AnyClient also supports several other file transfer protocols, including FTP, FTPS, HTTP, HTTPS, WebDAV, WebDAVS, Amazon S3, and AFTP. It's totally FREE, so I encourage you to try it out.
Ready now? Let's begin.
Requirements for establishing a connection
When you connect to an SFTP server, you will have to submit the following basic information:
✔ Username - This is the username assigned to your user account on the SFTP server.
✔ Password - The password associated with that username. Depending on how your SFTP server's authentication (login) settings is set up, you may need to enter a password, a private key, or both each time you login.
✔ IP address or hostname - The designated IP address/hostname of the server.
✔ Port number - This is the corresponding port number of the SFTP service. Normally, that number would be 22.
✔ Private key - This is a special file used by the SFTP client to generate a digital signature which is uniquely identified with your user account and recognized by the server (by virtue of the private key's corresponding public key stored on the server). That signature will then be used by the SFTP server to confirm your identity.
About private keys and public key authentication
Now, why would you want to employ a private key when a password can already be used to authenticate a person's identity? Actually, a password is only one way of proving a person's identity. It is a piece of information which, ideally, only the person represented by the username should know.
Unfortunately, passwords can be stolen. Crooks can steal passwords through brute force attacks or through a variety of social engineering (psychological manipulation) techniques.
Sometimes, they even steal users' passwords from other software applications. Because many users reuse their passwords across several applications, crooks only need to obtain a user's password from one application and then apply that password to that user's accounts in other applications, including the SFTP service.
What a private key (and public key authentication in general) does is to provide another way of proving a person's identity. When a user submits his digital signature using his private key, he in effect is presenting something only he, as the authorized owner of the key, should possess. Obviously, the private key file must be kept in a secret location known only to the user.
To distinguish the two, a password is something a user knows, while a private key is something a user has. By combining these two methods (password and public key authentication), you will be able to strengthen your user authentication process considerably.
Connecting to an SFTP server using a password
To connect to an SFTP server that only requires a username and password as login credentials, you would only need to enter the server's IP address or hostname (e.g. 10.0.0.2), the port number (22), and of course, the username and password. If you're using a multi-protocol file transfer client like AnyClient, you would also have to select "SFTP" from the list of supported protocols.
Once you're done entering the needed information, click the "Connect" button. Assuming the connection attempt is successful, one of two things can then happen:
1) If it's the first time you've ever connected to the server, you'll first be asked to verify the server's host key as shown below.
2) If it's not your first time to connect and your client recognizes the server's host key (more about host keys below), you'll automatically be granted access into the server.
If it's your first time to connect and you're prompted with the dialog shown earlier, click the Accept and Save button. This will allow your SFTP client to save the SFTP server's host key and use that key to identify the server in future connection attempts.
Understanding Host Keys
The use of host keys is a feature of the SFTP protocol. Basically, a server's host key fingerprint is unique to each particular server. In other words, it can be used to distinguish one SFTP server to another.
Hence, if in the future, your client attempts to connect to a server believed to be one it has already connected to in the past but then receives a host key that doesn't match the one associated with that server, then it's possible that the machine you're trying to connect to isn't really the server you thought it was. Worse, you could actually be falling for a spoofing attack.
Spoofing is a technique used by attackers to divert your connection to a malicious machine in order to obtain your password. Host keys can be used to counter these attacks.
Connecting to an SFTP Server using a private key
Let's now talk about logins that implement public key authentication to authenticate users. In this kind of logins, users are required to submit a digital signature using their private key.
Note: The keys being referred to in this section is different from the host keys discussed earlier.
To submit your digital signature, simply load your private key file unto the SFTP client. In AnyClient, you can do this in the Options tab.
First, tick the checkbox labeled Use public key authentication and then navigate to your SFTP private key file.
After making sure you've entered all other pertinent information (i.e., Host, Port number, username, protocol) found in the General tab, click the Connect button.
If all goes well, you should encounter the Verify Host Key dialog shown earlier. Again, click Accept and Save to proceed.
Uploading and downloading files with an SFTP client
You'll then come face to face with two panes. The left pane will be populated with the files and folders/directories of your local system (where your SFTP client is running), and the right pane with those files and folders/directories on your SFTP server (a.k.a. remote system) that you have access to.
You can navigate into a subdirectory by double-clicking on it. To navigate up to a directory's parent directory, just click the ellipses (..) at the top of the pane. To upload files unto the current remote directory, select the files in your local system that you want to upload and then click the Upload button.
Similarly, to download files unto the current local directory, select the files you want to download and click the Download button.
That's it. For more tips like this, follow us on Twitter! Follow @jscape
How to test an SFTP Server for FREE
To come up with this post, we paired AnyClient with JSCAPE MFT Server - a Managed File Transfer Server that allows you to upload and download files via SFTP, FTPS, FTP, and other file transfer protocols. JSCAPE MFT Server comes with a FREE evaluation edition which you can download now. | <urn:uuid:c3771ac6-2c1e-4565-8e6c-4aa496f6efd7> | CC-MAIN-2022-40 | https://www.jscape.com/blog/how-to-use-an-sftp-client | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00144.warc.gz | en | 0.919149 | 1,557 | 2.578125 | 3 |
Researchers at Paul Scherrer Institute Detail New Blueprint for More Stable Quantum Computer
(EurekaAlert) Researchers at the Paul Scherrer Institute PSI have put forward a detailed plan of how faster and better defined quantum bits – qubits – can be created. The central elements are magnetic atoms from the class of so-called rare-earth metals, which would be selectively implanted into the crystal lattice of a material. Each of these atoms represents one qubit. The researchers have demonstrated how these qubits can be activated, entangled, used as memory bits, and read out.
The authors describe how logical bits and basic computer operations on them can be realised in a magnetic solid: qubits would reside on individual atoms from the class of rare-earth elements, built into the crystal lattice of a host material. On the basis of quantum physics, the authors calculate that the nuclear spin of the rare-earth atoms would be suitable for use as an information carrier, that is, a qubit. They further propose that targeted laser pulses could momentarily transfer the information to the atom’s electrons and thus activate the qubits, whereby their information becomes visible to surrounding atoms. Two such activated qubits communicate with each other and thus can be “entangled.” Entanglement is a special property of quantum systems of multiple particles or qubits that is essential for quantum computers: The result of measuring one qubit directly depends on the measurement results of other qubits, and vice versa.
“Our method of activating and entangling the qubits, however, has a decisive advantage over previous comparable proposals: It is at least ten times faster,” says Grimm. The advantage, though, is not only the speed with which a quantum computer based on this concept could calculate; above all, it addresses the system’s susceptibility to errors. “Qubits are not very stable. If the entanglement processes are too slow, there is a greater probability that some of the qubits will lose their information in the meantime,” Grimm explains. Ultimately, what the PSI researchers have discovered is a way of making this type of quantum computer not only at least ten times as fast as comparable systems, but also less error-prone by the same factor. | <urn:uuid:2119dea9-7b8b-4bd0-807a-bbebd9a4988b> | CC-MAIN-2022-40 | https://www.insidequantumtechnology.com/news-archive/researchers-at-paul-scherrer-institute-detail-new-blueprint-for-more-stable-quantum-computer/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00144.warc.gz | en | 0.924948 | 462 | 3.796875 | 4 |
Machine learning is a computer’s way of learning from examples, and it is one of the most useful tools we have for the construction of artificially intelligent systems.
It starts with the effort of incredibly talented mathematicians and scientists who design algorithms (a fancy word for mathematical recipes) that take in data and improve themselves to better interact with that data. The algorithms effectively “learn” how to be better at their jobs.
Consider the spam filter working in the background to block your junk email. Since it has “studied” a large set of sample spam emails, it can come to mathematically “learn” what spam email looks like and accurately identify new spam before it leaks into your inbox.
An excellent documentary called “The Smartest Machine On Earth” tells the story of Watson, IBM’s famous Jeopardy-winning supercomputer, and delves into how IBM used machine learning to make its creation into a game show champion. | <urn:uuid:9df55d43-51e7-4345-a3b8-e2c8deecd1c7> | CC-MAIN-2022-40 | https://www.crayondata.com/what-the-heck-is-machine-learning/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00144.warc.gz | en | 0.951231 | 200 | 3.375 | 3 |
Crypto security between clients and servers on the Internet often rely on Transport Layer Security (TLS) protocols. Today’s protocols evolved from Secure Sockets Layer (SSL). Many people still use the acronym SSL when referring to TLS.
Steven Levy’s book Crypto provides an entertaining history of public-key cryptography and SSL through 2001.
Transport Layer Security (called both TLS and SSL) combines public key secret sharing with a secret key cipher to protect a connection.
Video notes: cys.me/vid/c04.
Video #5 shows how we detect alterations in messages using hash functions vimeo.com/199836576
The previous video explains secret sharing with public key crypto vimeo.com/197452327
See the entire Cryptosmith series in its album vimeo.com/album/4229550
Last updated: 13 March 2017 | <urn:uuid:4511286e-367c-4446-a74b-4807f8fc08b2> | CC-MAIN-2022-40 | https://cryptosmith.com/vid/c04/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00144.warc.gz | en | 0.811542 | 188 | 3 | 3 |
The Date/Time data type handles years from 1 A.D. to 9999 A.D. in the Gregorian calendar system. Years beyond 9999 A.D. cause an error.
The Date/Time data type supports dates with precision to the nanosecond. The data type has a precision of 29 and a scale of 9. Some native data types have a smaller precision. When you import a source that contains datetime values, the import process imports the correct precision from the source column. For example, the Microsoft SQL Server Datetime data type has a precision of 23 and a scale of 3. When you import a Microsoft SQL Server source that contains Datetime values, the Datetime columns in the mapping source have a precision of 23 and a scale of 3.
The Integration Service reads datetime values from the source to the precision specified in the mapping source. When the Integration Service transforms the datetime values, it supports precision up to 29 digits. For example, if you import a datetime value with precision to the millisecond, you can use the ADD_TO_DATE function in an Expression transformation to add nanoseconds to the date.
If you write a Date/Time value to a target column that supports a smaller precision, the Integration Service truncates the value to the precision of the target column. If you write a Date/Time value to a target column that supports a larger precision, the Integration Service inserts zeroes in the unsupported portion of the datetime value. | <urn:uuid:2d15524b-3a7e-4860-b926-66a6783cebdd> | CC-MAIN-2022-40 | https://docs.informatica.com/data-integration/powercenter/10-4-1/designer-guide/datatype-reference/transformation-data-types/date-time-data-type.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00144.warc.gz | en | 0.653878 | 307 | 3.15625 | 3 |
A new meta-analysis study by researchers from the Department of Clinical Nutrition, School of Nutrition and Food Science, Food Security Research Center, Isfahan University of Medical Sciences-Iran involving various relevant published studies from around the world has shown that antioxidant supplements like Vitamin C, Vitamin D, Selenium and Zinc does help with COVID-19 clinical outcomes and prevents disease severity.
The study findings were published in the peer reviewed journal: Food Science & Nutrition.
Summary of key findings
In this systematic review of primary human studies, we investigated the role of vitamins A, C, D, and E, selenium, zinc, and α-lipoic acid in major clinical outcomes of people with COVID-19. Among the aforementioned seven antioxidants, eligible studies were found only for vitamins C and D, selenium, and zinc. The findings suggest that vitamin C may cause beneficial effects on inflammation status, Horowitz index, and mortality rate of COVID-19 patients. Moreover, vitamin D may have a positive role in the reduction of disease manifestations and severity, inflammatory biomarkers, lung involvement, ventilation requirement, hospitalization, ICU admission, and mortality in individuals with COVID-19. Also, selenium may have the potential to increase and decrease the cure rate and mortality of COVID-19 patients, respectively. Furthermore, zinc may be able to lower hospitalization, ventilation requirement, ICU admission, biomarkers of inflammation and bacterial infection, and disease complications in individuals infected with COVID-19.
Mechanisms of actions
Although none of the included studies examined the role of vitamin A in subjects with COVID-19, bioinformatics findings proposed that this antioxidant may be beneficial for individuals infected with SARS-CoV-2 (Li et al., 2020). Vitamin A has an important role in enhancing the body’s immunity and regulating both cellular and humoral immune responses (Jayawardena et al., 2020). The production of antibodies, also known as immunoglobulins (Ig), is integral to the maintenance of humoral immune responses (Huang et al., 2018). An animal study showed that vitamin A can promote humoral immunity by increasing serum levels of IgG, IgM, and IgA (Ghodratizadeh et al., 2014). Vitamin A also plays a pivotal role in the development of epithelium, which is considered a frontline defense against pathogen invasion (McCullough et al., 1999). As vitamin A enhances mucin secretion in the respiratory tract and intestine, it is able to improve the antigen nonspecific immunity function of these tissues (Huang et al., 2018). Moreover, vitamin A may inhibit inflammatory processes induced by COVID-19 through the regulation of multiple key genes including mitogen-activated protein kinase 1 and 14, interleukin-10, epidermal growth factor receptor, protein kinase C beta type, intercellular adhesion molecule 1, and catalase (Li et al., 2020).
The results of this systematic review indicated that vitamin C may exert favorable effects on clinical outcomes of COVID-19 patients. Vitamin C acts as a powerful antioxidant, especially for epithelial cells of the lungs (Farjana et al., 2020). It appears to scavenge reactive oxygen species (ROS) and inhibit pathways involved in neutrophil extracellular trap formation and cytokine storms (Cerullo et al., 2020). Moreover, vitamin C can suppress lactate production. This can be of great importance because serum and tissue concentrations of lactate are elevated in critically ill patients with COVID-19 (Earar et al., 2020). Lactate weakens the host immune system by decreasing the production of type I interferon and limiting viral clearance (Lottes et al., 2015; Zhang et al., 2019).
The findings of this systematic review showed that vitamin D may play a positive role in improvement of COVID-19 clinical outcomes. It seems that antioxidative, antiinflammatory, and immunomodulatory properties of vitamin D can be involved in this regard (Hajhashemy et al., 2022; Musavi et al., 2020). Besides, some researchers discussed the key role of vitamin D in the RAS (Kumar et al., 2020; Malek Mahdavi, 2020; Musavi et al., 2020). As noted in the introduction, SARS-CoV-2 binds to ACE2, which is expressed on the surface of alveolar epithelial cells (Silvagno et al., 2020). Once the virus is attached, the activity of ACE2 is suppressed, which further enhances the activity of ACE1, that accordingly increases the formation of angiotensin II, leading to intensified pulmonary vasoconstriction and severe COVID-19 reactions (Malek Mahdavi, 2020). In an animal study, the expression of ACE2 in the lungs was significantly elevated by calcitriol, the bioactive form of vitamin D (Xu et al., 2017). Therefore, as a result of vitamin D supplementation, ACE2 may be expressed more, which can decrease lung injury (Imai et al., 2005). Moreover, vitamin D may reduce the production of angiotensin II and result in less pulmonary vasoconstriction through suppressing renin activity (Kumar et al., 2020).
Although none of the included studies investigated the role of vitamin E in individuals with COVID-19, bioinformatics findings suggested that this micronutrient may be beneficial for patients infected with SARS-CoV-2 (Kim et al., 2020). Vitamin E is a lipid-soluble antioxidant with the ability to protect cells from damage caused by ROS, especially in respiratory infections (Lewis et al., 2019). Moreover, vitamin E is involved in various aspects of the immune response, including but not limited to the production of antibodies, phagocytosis, and T cell function (Akhtar et al., 2021). This vitamin modulates T cell function through affecting T cell membrane integrity, cell division, signal transduction, and several inflammatory mediators such as prostaglandin E2 and proinflammatory cytokines (Lewis et al., 2019). Furthermore, it seems that vitamin E can induce signals of gene expression that counteract signals associated with COVID-19 (Kim et al., 2020).
The results of this systematic review revealed that selenium may have a promising role in amelioration of COVID-19 clinical outcomes. As mentioned earlier, COVID-19 increases the production of ROS in host cells, which can cause oxidative stress if not counteracted by the antioxidant defense system (Chernyak et al., 2020). Glutathione peroxidase-1 (GPx1), a cytosolic selenoenzyme with antiviral properties, is considered as a crucial antioxidant defense against ROS (Sajjadi et al., 2022). This selenoprotein catalyzes the detoxification of hydrogen peroxide to water molecules and is particularly involved in protection against viral respiratory infections (Guillin et al., 2019). There is evidence of an interaction between GPx1 and the main protease of SARS-CoV-2, 3-chymotrypsin-like protease, which is essential for viral replication. This interaction depends on host selenium status to combat SARS-CoV-2 virulence (Seale et al., 2020). Accordingly, selenium may improve clinical outcomes of patients with COVID-19.
The findings of this systematic review manifested that zinc may have desirable effects on clinical outcomes of COVID-19 patients. Multiple protective mechanisms of zinc against COVID-19 infection have been proposed in the literature. It seems that SARS-CoV-2 can weaken mucociliary clearance and expose the lungs to further viral and bacterial infections (Koparal et al., 2021). In turn, zinc may enhance mucociliary clearance by improving cilia morphology and increasing cilia beat frequency (Darma et al., 2020). This mineral can also improve the integrity and barrier function of the respiratory epithelium by increasing its antioxidant activity and upregulating its tight junction proteins such as claudin-1 and zonula occludens-1 (Skalny et al., 2020). In addition, zinc may exert antiviral effects through interference with viral replication cycles (Read et al., 2019). Moreover, zinc can be beneficial for bacterial coinfection in viral pneumonia, because it may inhibit the growth of Streptococcus pneumoniae by modulating bacterial manganese homeostasis (Eijkelkamp et al., 2019). Furthermore, zinc can downregulate the production of proinflammatory cytokines through the inhibition of IκB kinase activity and nuclear factor-κB (NF-κB) signaling (Skalny et al., 2020).
Although none of the included studies evaluated the role of α-lipoic acid in patients with COVID-19, some researchers hypothesized that this potent antioxidant may be advantageous for subjects infected with SARS-CoV-2 (Sayıner & Serakıncı, 2021). Α-lipoic acid is able to reduce oxidative stress through the regeneration of other antioxidants and chelation of metal ions. In addition, this quasi-vitamin can inhibit the activation of NF-κB, an inflammatory transcription factor (Tibullo et al., 2017). Furthermore, α-lipoic acid may decrease the activity of a disintegrin and metalloprotease 17 (ADAM17), also known as tumor necrosis factor-α-converting enzyme (Cure & Cure, 2020). The lower activity of ADAM17 can reduce the shedding of ACE2 and severity of COVID-19 infection (Peron & Nakaya, 2020). Moreover, α-lipoic acid may increase intracellular pH by activating Na+/K+-ATPase (Cure & Cure, 2020). It seems that higher intracellular pH can inhibit SARS-CoV-2 cellular entry (Petersen et al., 2020). Also, α-lipoic acid has a potential to activate pyruvate dehydrogenase and reduce serum lactate levels (Konrad et al., 1999). | <urn:uuid:3c338d65-3008-49e1-8297-31b73217f6e3> | CC-MAIN-2022-40 | https://debuglies.com/2022/09/08/meta-analysis-study-shows-that-antioxidant-supplements-help-in-covid-19-clinical-outcomes/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00344.warc.gz | en | 0.878368 | 2,169 | 2.875 | 3 |
A specification for data encryption, created by IBM in the early 1970s. DES (Data Encryption Standard) is a symmetric cipher: the very same key is used to encrypt and decrypt the data. It is also a block cipher: it converts fixed length blocks of plain text to ciphertext blocks of the same length.
Theoretically, data encrypted with DES can be decrypted only using the same key that was used to encrypt it. DES relies on 64-bit keys, though 8 bits are used as parity bits for error detection, so the effective key is just 56 bits long. This key length was never considered secure, and in 1998 EFF (Electronic Frontier Foundation) proved that data encrypted with DES could be decrypted in 56 hours.
NIST has since withdrawn the specification, and DES is no longer considered a standard for encryption. | <urn:uuid:b8a8afb8-90fb-463f-837c-93a17756ea7d> | CC-MAIN-2022-40 | https://encyclopedia.kaspersky.com/glossary/des-data-encryption-standard/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00344.warc.gz | en | 0.955004 | 172 | 3.6875 | 4 |
Keeping Documents Secure While remote
In response to the COVID-19 pandemic, global businesses have restructured operations to accommodate remote work and telecommuting. Humanity has collectively separated and isolated to prevent the virus’s spread. In under a couple weeks, organizations that were once entirely in-person and on-site shifted to off-site experiences.
This event is both impressive and unprecedented, and there’s no question it’s profoundly impacting the current state of business. But while there are many positives to infer, there are negatives, too. One of the most prominent being that remote work, cloud technologies and digital content exchange are all extremely vulnerable. In an age where cyberattacks are commonplace and attackers are wreaking havoc on the business world, that’s a scary thought.
How are the systems and documents people are creating — at home and online — secure? What does this move to remote work mean for email, online materials and media?
Undoubtedly, one solution stems from artificial intelligence (AI) cybersecurity platforms — or more accurately, AI-focused security.
AI Cybersecurity During the Age of Remote and Digital Opportunities
Modern cybersecurity systems are already robust and reliable. The biggest problem is keeping up with hackers by fending off new and more sophisticated attacks. Of course, another element involves social engineering, where people are tricked into bypassing security through phishing attempts and similar actions.
AI can provide immense benefits by filling in the gaps, supplementing traditional security systems in various ways. Big data, especially contextual information, will go a long way toward helping those artificial systems identify and deal with new threats. Like any form of knowledge, this data will reveal trends, patterns and other insights into the world of cybercrime:
1. First Response
One of the simplest ways to fend off phishing and social engineering attacks is to strike the source. By eliminating potential contenders and keeping the content away from vulnerable employees and users, the risk is diminished.
That’s precisely what IRONSCALES does. An AI control system monitors the affected email account, using specific algorithms to detect fraudulent or phishing-related communications. When it discovers something, the system removes the email from the inbox to stop the attack. AI platforms like this enable a first and instant response to potential threats.
2. Perpetual Security
Attacks can come from anywhere, anytime, which is why security solutions must be always-on. Humans, by nature, cannot always be available. We have to eat, sleep and take breaks. AI, on the other hand, runs on computers and can be ever-vigilant.
Barring a hardware or software failure, once you flip the on-switch, AI remains active indefinitely. That’s crucial to preventing attacks and is unprecedented in the cybersecurity world, going beyond real-time monitoring and live updates. AI solutions can respond and react to events near-instantly, but for that to work, the system must be continually on.
Security’s always-on nature helps most in the cloud and collaborative spaces. Many secure and sensitive documents hosted in the cloud are vulnerable, but not in the way one might think. Even something straightforward — like one user accessing another’s account, terminal or online content — can pose major threats. This issue is especially prominent in medical and law fields, where large teams share highly sensitive information.
3. Proactive Prevention
Traditionally, security solutions offer monitoring and firewall tools. They provide detection by identifying potential threats, anomalous users and other security concerns. But they only allow security teams a reactionary opportunity, enabling a response to issues after discovery and damage occur.
These solutions still exist today, enhanced via Machine Learning and AI. AI and real-time monitoring shift power back into the security team’s hands, allowing them to be proactive. These systems are almost more predictive than conventional applications.
To give an example, when the system discovers a user accessing content or information they’re not authorized for, it can limit their account until IT can review with the individual. When unauthorized users are detected altogether, they can be immediately shut down and blacklisted. A system can instantly respond to other suspicious behavior or events to decrease threats.
The result is two-fold — attacks are thwarted, yet the entire system remains available to those who need it with little to no service interruptions.
Examples of the technology include Paladion, Darktrace, Vectra AI, Cylance and many others. Many organizations across the globe are already utilizing these platforms.
4. Embedded Security
Security is often an afterthought. Engineers build the system or network first and layer protection on top to fill in potential issues. It should be the other way around — where security is implemented first and built into the system’s foundations to maximize efficiency and reliability. This method provides a seamless, non-siloed platform for detecting threats and reacting in real-time.
Google’s Chronicle employs AI to discover “embedded threat signals” using “proprietary data sources, public intelligence feeds” and similar data. It allows teams to react to security issues faster. Plus, the system grows more accurate over time as it ingests more information. It’s baked right into the platform and designed to flag anomalies and potential threats based on behavior, known data and trends. That also means the AI is specific to the system and knows what does and doesn’t look right.
5. Analytics-Driven Security
AI cybersecurity solutions show the most significant potential in using big data and existing information to provide better visualizations and reports on security threats. That means analytics-driven machine learning platforms give a complete window into the networks and environments where they are installed. It’s about having the whole picture instead of reacting to one puzzle piece. Security teams and executives can see precisely where security is lacking and why.
They can identify underlying issues, like authentication Vulnerabilities, weak users or applications, and even external problems — like mobile devices. This information collectively aids in building profiles about trends, people and events specific to the system, which substantially boost security if employed adequately.
Businesses could identify what parts of their site or platform attackers are hitting most. Maybe hackers are targeting a login system through brute force attacks? Perhaps the sales team has seen an influx of phishing attempts? Maybe someone in-house has been exhibiting a lot of suspicious behavior while accessing internal systems and applications?
These examples require a complete data profile before a company can react, especially when dealing with in-house events. Businesses want to be sure of what’s happening before they respond. Yet, they still want to respond fast enough to prevent or mitigate more severe damage to the system and organization.
AI Is the Future of Security
Experts expect worldwide cybersecurity spending to reach $133.7 billion by 2022. Furthermore, Capgemini predicts 63% of organizations will deploy AI in 2020 to improve cybersecurity, with the most popular application being network security. These statistics show that organizations understand AI cybersecurity’s value more than ever.
Considering the growing sophistication and ever-increasing prevalence of threats, it’s clear that smarter, more accurate and more capable protections will be necessary going forward. Utilizing AI for cybersecurity is the answer.
By Kayla Matthews
Kayla Matthews is a technology writer dedicated to exploring issues related to the Cloud, Cybersecurity, IoT and the use of tech in daily life.
Her work can be seen on such sites as The Huffington Post, MakeUseOf, and VMBlog. You can read more from Kayla on her personal website. | <urn:uuid:df55ebaa-d335-4ead-8ebe-19a305fa7e17> | CC-MAIN-2022-40 | https://cloudtweaks.com/2020/04/keep-documents-secure-remote-work/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00344.warc.gz | en | 0.930985 | 1,569 | 2.53125 | 3 |
- Phishing attempts to exploit the user and their system via social engineering. These attacks can contain ransomware or other malware.
- Persistent Presence on a compromised account to monitor email. The attack vector can be malware or simple knowledge of the credentials themselves. This is a surveillance vector by a threat actor to monitor email traffic.
- SPAM, Hacktivism, Sabotage, Denial of Service, Blackmail, etc. can involve malicious emails sent from a compromised account, including attachments that may phish other users and contain malware or illegal content.
- Lateral Movement simply by forwarding or sending a malicious email to a group of people or distribution list -- with or without prior system exploitation. Malware can potentially access your address book to perform this function automatically.
Compromise of UK Parliament reveals weak passwordsIf you consider the attack on British members of Parliament, the successful exploitation could have been devastating if it went undetected. When the attack was detected, one of the first publicly disclosed mitigations was to shut down remote email access. The assumption for this step was to stop accounts from being accessed outside of the government’s trusted computing environment (i.e., within a traditional firewalled network). As of the last report, up to 90 accounts were compromised in a brute force attack due to weak passwords. Why? Security professionals had an indication that accounts were compromised from the outside and that the attack was being conducted against remote systems. Strong credentials would have helped in this situation, but it is unknown if the brute force attack used only weak passwords or included personal passwords that were reused as well. While government email systems, including those in the UK, have been considered some of the most protected, no system is perfect (yes, even with multi-factor authentication). We know that the attack was password-based, but the source of the attack and its motivation is still a mystery. Hopefully the details are disclosed soon. If a successful attack can occur at this level, it can happen to all of us.
How to mitigate the risk of a similar attackSo, how can the average company or government agency protect against a similar attack? Even if you don't have sophisticated counter cyber security solutions and an unlimited budget, you can still start with these security basics:
- Enable multi-factor authentication for initial access from new or untrusted systems.
- Enforce complex passwords for every user and administrator account.
- Enforce password rotations on a periodic basis and limit password history reuse.
- For administrator accounts, consider forced rotations after use in addition to periodic password changes (even for service accounts).
- Never let executables be sent via email (very old-school but it still happens today) and disable macros within applications like Microsoft Office that can actually run scripts (i.e., code).
- Ensure email systems, client mail applications, browsers, operating systems, and third-party applications have the latest security patches.
- Restrict access to sensitive email groups; never allow everyone to email an “All” group comprised of the entire company; and limit the number of email addresses a standard user can email at one time. There is no reason an average user should have more than a few dozen people in the To: line.
Morey J. Haber, Chief Security Officer, BeyondTrust
Morey J. Haber is the Chief Security Officer at BeyondTrust. He has more than 25 years of IT industry experience and has authored three books: Privileged Attack Vectors, Asset Attack Vectors, and Identity Attack Vectors. He is a founding member of the industry group Transparency in Cyber, and in 2020 was elected to the Identity Defined Security Alliance (IDSA) Executive Advisory Board. Morey currently oversees BeyondTrust security and governance for corporate and cloud based solutions and regularly consults for global periodicals and media. He originally joined BeyondTrust in 2012 as a part of the eEye Digital Security acquisition where he served as a Product Owner and Solutions Engineer since 2004. Prior to eEye, he was Beta Development Manager for Computer Associates, Inc. He began his career as Reliability and Maintainability Engineer for a government contractor building flight and training simulators. He earned a Bachelor of Science degree in Electrical Engineering from the State University of New York at Stony Brook. | <urn:uuid:e76214f1-f767-44ee-a138-92d5ca787cbb> | CC-MAIN-2022-40 | https://www.beyondtrust.com/blog/entry/uk-parliament-cyber-attack-potential-ramifications | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00344.warc.gz | en | 0.955147 | 880 | 2.703125 | 3 |
You might be frustrated with computers right now. If you're like most people, there is a love-hate relationship between technology and mankind, and the reasons are pretty apparent. Sometimes, the computer just doesn't do what you want it to do, or it's extremely slow. But like it or not, computers, software, and information technology is the way of the future, and people everywhere should learn more about it or be left in the dust.
However, even for younger audiences, IT can be a bit intimidating, especially since it's changing all the time. You could learn about an IT concept today, but tomorrow, there are updates. Newer technologies are invented every day, and if you went to school about to learn about programming a couple of years ago, it could be that the programming language you learned is now outdated. But learning about IT doesn't have to be complicated, and if you want to get a slight grip on these concepts, there are ultimately five routes that you can take.
Read a Book About IT
Why not read a book? First, there are millions of great books out there that offer a beginner's walk through the world of IT. Textbooks can even be a great help here. When reading a book about IT, it's best to read about 10-20 pages a day, so that you won't be overloaded with too much content about subjects that will be completely foreign to you, such as algorithms or an MLops definition. Read a book about IT, and you'll know more than a large number of other people.
Take an Online Course
Another great road to take to learn about IT is to take an online course. Many people don't realize it, but there are entire websites dedicated to offering online courses with modules, tests, and even certificates. These courses will walk you through the entire process using a knowledgeable instructor, and you can email this person if you have questions. You can take these courses from the comfort of your own home at your own pace. Come home from work and listen to one or two ten-minute classes.
Enroll in Junior College
Why not enroll in a junior college and earn your certificate or Associates' degree? This means that you don't have to go back to a university and get the whole college experience again, but a lot of local community colleges offer 6-month to two-year programs that will teach you just about everything you need to know about the basics of IT. It's a great way to learn quickly and make friends, and you can also do this online.
Read One or Two Online Articles
You could also enjoy some online articles filled with information about IT. If you go this route and choose to learn IT from a random Joe online, it's best to limit yourself to about one or two sources, because if you don't, you'll just get an information overload. You'll get hundreds of different ideas and definitions about tons of different IT subjects, and you'll be drowning in confusion. It's best to find a couple of great sources from people that you like, and choose to religiously sit at their feet to learn, and only their feet.
Embrace and Use IT More
Last, learn to embrace IT and technology more and use it more often. When people don't understand something, they generally stay away from it. But sometimes, the best way to learn about a topic is through a hands-on approach, and this means that you'll have to get your hands on some IT and experience it for yourself. There's no better way than to grab a computer or laptop, use it, and learn some of its more advanced features.
Learning about IT can be simplified and doesn't have to be intimidating. In the end, you simply learn about it the way you would learn anything else. As you begin to embrace it, have fun with IT and realize its possibilities.
Publish Date: December 18, 2021 10:14 PM | <urn:uuid:f8547dd4-96af-41f2-816e-bb6501ab112c> | CC-MAIN-2022-40 | https://www.contactcenterworld.com/blog/mytechblog/?id=39fb210d-a54c-4a7a-bcfd-7d191717b6bb | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00344.warc.gz | en | 0.969896 | 801 | 2.84375 | 3 |
Famed computer architect, professor, author, and distinguished engineer at Google, David Patterson, wants to set the record straight on common misconceptions about carbon emissions and datacenter efficiency for large-scale AI training.
First, the picture is not quite as bleak as it seems for energy consumption and AI training at hyperscale. That is, of course, if you own the means of production. Patterson says the total energy consumption for all Google datacenters in the U.S. was just over 12 terawatt hours in 2019, with a slight uptick in 2020, but the share of that going to actual AI training was relatively small.
The image below shows some of Google’s largest training runs in comparison to broad energy consumption. “Energy for Meena, T5, Gshard, SwitchTransformer are round-off errors,” although he says this is based on the final training run versus all the lead-up training runs before the long, expensive phase of final training, which can be a month or longer.
“The thing to keep in mind is how much was consumed in the final run versus all the preliminary and other training tasks. That final run can be expensive in itself. GPT-3 takes one month on 5,000 computers, it’s not possible to do this continuously.” Even still, given the whole picture of energy consumption by Google, “even a factor of 10-100X larger models training would still not represent a significant part of Google’s overall energy footprint.”
Patterson and crew at Google and Berkeley made some observations about how to make AI training more efficient, even if it’s clearly not catastrophically expensive/impossible (yet) in their current operations.
“It’s possible to make 100-1000x improvements [in carbon emissions] by making some changes in AI training systems, but none of them are easy,” Patterson says. “Carefully picking the accelerator can provide 2-5X improvement, carefully picking the DNN between 5-100X, improving datacenter facilities between 1.4-2X, and location can bring 5-10X.” He adds that it’s hard to change accelerators (porting, etc.) or the model itself. It’s also hard to change location, but, he says, the light at the end of the tunnel for the non-Googles of the world is the cloud, where a large-factor improvement can be gained easily.
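To make those ranges concrete, here is a minimal back-of-the-envelope sketch; the factor names and the idea of simply multiplying them together are my framing of Patterson's numbers, not code or figures from Google.

```python
# Illustrative only: how the individual improvement factors Patterson cites
# compound multiplicatively for a single training run.
low_end  = {"accelerator": 2, "dnn_choice": 5,   "facility": 1.4, "location": 5}
high_end = {"accelerator": 5, "dnn_choice": 100, "facility": 2.0, "location": 10}

def combined_improvement(factors):
    total = 1.0
    for improvement in factors.values():
        total *= improvement
    return total

print(f"{combined_improvement(low_end):.0f}X to {combined_improvement(high_end):,.0f}X")
# Prints roughly 70X to 10,000X, which brackets the ~100-1000X range quoted above.
```

The point of the multiplication is that no single lever gets you there; the large gains come from stacking several of them at once.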
That overall energy footprint, by the way, is not a static number. Patterson says they’ve found that carbon emissions across all U.S. datacenters can vary by 10X, a striking figure by any means. He adds that next-generation datacenters, including those that will be focused on AI/ML training at large scale, are well-placed in areas like Iowa and, in an upcoming example, Oklahoma, where nighttime temperatures drop and the days provide enough wind. While Google is committed to reporting what percentage of their energy use is from carbon-free sources, Patterson stressed that this figure will change with the wind, so to speak.
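One way to act on that variability is carbon-aware scheduling: move flexible training jobs to the region and hour with the cleanest grid. The sketch below is hypothetical; the region names, hourly values, and the assumption that a job can be freely moved are illustrative, not a description of Google's scheduler.

```python
# Minimal carbon-aware placement sketch, assuming hypothetical hourly
# grid carbon-intensity forecasts (kg CO2e per kWh) for each candidate region.
hourly_kg_co2e_per_kwh = {
    "iowa":     [0.30, 0.24, 0.19, 0.35],
    "oklahoma": [0.28, 0.21, 0.17, 0.33],
    "virginia": [0.46, 0.45, 0.44, 0.47],
}

best_region, best_hour, best_intensity = min(
    (
        (region, hour, intensity)
        for region, forecast in hourly_kg_co2e_per_kwh.items()
        for hour, intensity in enumerate(forecast)
    ),
    key=lambda choice: choice[2],
)
print(f"Schedule in {best_region} at hour {best_hour}: {best_intensity} kg CO2e/kWh")
```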
All of this led him down a path of poking holes in some common misconceptions about carbon emissions for datacenters. His first task was to dispel the notion that AI training will contribute to massive increases in datacenter usage, and thus more energy consumption. We cited the reasons he thinks that is untrue using Google’s numbers above, but the picture is more nuanced. While AI training will be part of overall workloads for companies like Google, Facebook, and Microsoft, among others, in the case of cloud providers, with their high levels of datacenter efficiency and utilization, it will mean lower overall emissions, because it will not make economic or carbon-footprint sense to build your own datacenter if you’re not one of the big cloud builders or infrastructure giants.
More compute usage from the largest companies will not translate into far higher carbon emissions for a few reasons: clouds are more efficient/don’t need individual datacenters; the largest companies have the most sophisticated means of leveraging green energy.
“The shift from people not buying servers for their own datacenters and instead renting servers from the cloud is actually great for the environment. The datacenters from Google, Microsoft, and others, run far more efficiently with much greater utilization. There are not servers sitting idle. Think of a university, for instance. That’s a bad place to put a server. Think of it like a book in a library versus one in your home—which one is more efficient?” Patterson asks.
That idea makes great sense from a carbon footprint perspective—the concept that the broad “everyone else” of IT that wants to adopt AI model training can just flock to highly efficient clouds—but it doesn’t factor in the cost issue for users of large-scale model training on cloud resources.
Large but sparsely activated DNNs can consume <1/10th the energy of large, dense DNNs without sacrificing accuracy despite using as many or even more parameters. Geographic location matters for ML workload scheduling since the fraction of carbon-free energy and resulting CO2e vary ~5X-10X, even within the same country and the same organization. We are now optimizing where and when large models are trained. Specific datacenter infrastructure matters, as Cloud datacenters can be ~1.4-2X more energy efficient than typical datacenters, and the ML-oriented accelerators inside them can be ~2-5X more effective than off-the-shelf systems. Remarkably, the choice of DNN, datacenter, and processor can reduce the carbon footprint up to ~100-1000X. These large factors also make retroactive estimates of energy cost difficult. To avoid miscalculations, we believe ML papers requiring large computational resources should make energy consumption and CO2e explicit when practical.
Another fallacy Patterson points to is the idea that while the largest datacenter operators will be the most energy efficient and carbon-neutral in the next decade, that will come at the expense of everyone else.
This is an important set of points. Just because Google (or any other hyperscale datacenter) will be leveraging solar and wind doesn’t mean there will be none left for any other use. That might sound obvious, but there is plenty of griping about this to be found. “Using renewables by some doesn’t mean others cannot use it. There’s a sense that there’s this fixed pod of energy. The companies that are taking out these contracts and building more renewable energy are doing so with an eye on the future, thinking the grid will provide enough clean energy by 2030,” Patterson says. The idea that Google will be a mecca of carbon-free energy use while everyone else has “dirty” energy is a complete fallacy, he argues.
With these misconceptions in mind, what can be done on the ground to turn around inefficiencies and encourage better carbon emissions behavior more generally? Patterson has a number of suggestions, many of which begin with those who publish results. While MLPerf is now adding performance per watt metrics and companies like Google are publishing what percentage of their energy is carbon-free, there’s still much work to be done. He argues that any hardware metrics published in the research community should always include descriptions of perf/Watt and, on top of that, should not use peak metrics as the source of reporting.
“If the ML community working on computationally intensive models starts competing on training quality and carbon footprint rather than on accuracy alone, the most efficient datacenters and hardware might see the highest ML demand. If paired with publication incentives to improve emission metrics in addition to accuracy, we can imagine a virtuous cycle that slows the growth of the carbon footprint of ML by accelerating innovations in the efficiency and cost of algorithms, systems, hardware, datacenters, and carbon free energy.”
Findings from the Google and Berkeley team can be found in this detailed analysis.
Change management is a systematic approach to dealing with the transformation and transition of an organization's goals, technologies, and processes. It is the process of designing, implementing, controlling, and supporting strategies that effect change and help individuals adapt to it. These strategies include a structured procedure for requesting a change, a mechanism for making the change, and follow-up afterwards.

For the process to be effective, change management should consider several factors, such as how an adjustment or replacement will impact the processes, systems, and employees within the organization. Before testing the change, there should be proper planning, communication about the change, and scheduling of its implementation.
The change should be documented and evaluated based on its effect. Documentation is one of the critical components of change management. In addition to ensuring compliance with internal and external controls, including regulatory compliance, it also serves to maintain an audit trail in case a rollback becomes necessary.
What is Project Change Management?
Project change management is an effective method for leading the people side of change: a structured process and set of tools for supporting the desired outcome. Individual reactions will differ, because all individuals are unique, but decades of research show that certain actions can influence people and their individual transitions. Change management provides a structured approach for supporting people and helping them move from their present state to owning their future state.

Change management can occur on three levels:
- Individual Change Management: understanding how people experience change
- Organizational or Initiative-based Change Management: creating a customized plan to build awareness and make the initiative a success for the organization
- Enterprise Change Management Capability: providing the competitive ability to adapt to an ever-changing world
Project Management and Change Management
Project management and change management are two separate disciplines that apply to organizational change; together they improve return on investment and the odds of success. They are closely intertwined disciplines that bring necessary change to life.
Implementation of Project Change Management
Change management is primarily handled in three steps using a change-request form. The first part of the request explains why the change is required (the justification section), the second part analyzes the impact the change will have on the project (the impact-analysis section), and the third and final section covers approval of the change within the project.
- The justification section is completed by the person requesting the change. It could be a customer, sponsor, another stakeholder, or sometimes the project team.
- The project leader may also complete the justification section on the requester's behalf, for example when the request arrives via email or a phone call; it is then signed and approved by the person requesting the change.
- The project team completes the second section, i.e., the impact analysis.
- The third section, approval, is completed by the people whose approval is required for the change to be implemented in the plan.
The implementation involves three steps:
Step 1: Requesting a change

A person from inside or outside management requests the change by completing the justification section of the request form. The project team then analyzes the change; if it makes sense, they proceed further. If the requested change does not make sense, it is discussed further with the sponsor and the person who requested the change.
The justification section shall include the following questions:
- What are the changes the requestor wants to be made?
- Why does the requester want the change made?
- What problem is the requestor trying to solve?
- How can the request be implemented into the plan? Is this a priority request?
Step 2: Impact of the change
The team analyzes the requested change and assesses its impact on the project. The review is done with the project team, which proposes a plan to be reviewed by the originator, sponsor, and customer. The team works through the steps needed to assess the impact of the change and must be aware of its effect on the plan, including scope, resources, and risk.

For example, the requested change can affect many areas, such as resource allocation, the budget, or the need for additional technical support to handle the schedule. It is very important to consider the risks that would be incurred and any resulting changes to the risk assessment. The information captured is recorded on the change form.
Step 3: Approval or denial of the requested change

If the requested change is approved, the originator and the team are notified. If the change request is not approved, the originator is notified of the denial. Change management can thus be broken into three processes: first the request, second the analysis of the requested change's impact, and third the approval.

In the final process, the impact is analyzed and, if the request is approved, the project plan is amended accordingly. Ultimately, the originator, the project team leader, the sponsor, and the customer must approve the plan. If other approvals are required, they should be listed in the form. The change request form must include instructions for completing it.
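As a rough illustration of how the three sections of a change-request form might be represented in software, here is a minimal Python sketch. The field names and workflow are assumptions for illustration only, not a standard or any specific tool's data model.

from dataclasses import dataclass, field

@dataclass
class ChangeRequest:
    justification: str                              # section 1: why the change is needed
    impact_analysis: str = ""                       # section 2: scope, resources, risk
    approvals: dict = field(default_factory=dict)   # section 3: approver -> decision

    def is_approved(self, required_approvers):
        # Approved only when every required approver has explicitly said yes.
        return all(self.approvals.get(name) is True for name in required_approvers)

request = ChangeRequest(justification="Customer asks to extend the reporting module")
request.impact_analysis = "Two extra weeks of work, +10% budget, low risk"
request.approvals = {"originator": True, "project leader": True,
                     "sponsor": True, "customer": True}
print(request.is_approved(["originator", "project leader", "sponsor", "customer"]))  # True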
Tools that can be used for change management:
There are many tools in the market to help project managers to implement change systematically and keep a track of it. To name a few,
- BMC Remedy Change Management 9
- StarTeam by MicroFocus
- Rocket Aldon
- ChangeGear Change Manager
- Giva eChangeManager
- ServiceNow Change and Release Management Application
- The Change Compass
It is very important to recognize the need for change; implementing the requested change at the right time is vital for growing the business and staying competitive in the industry. Adapting to failures is far better than losing out to the competition by not changing, and while changes may be challenging, change management within the project makes them manageable.
You have probably watched Happy Days at some point in your life (even Mark Watney watched it while stranded on Mars). In the first years it was generally considered a good show and had good ratings. But then they showed the episode where the Fonz jumped over a shark on water skis. After that episode, Happy Days never recovered. The show stayed on the air but the ratings continued to drop. Thus the phrase “jumping the shark” was coined. While it originally applied to television programs, it has been extended to, “… indicating the moment when a brand, design, franchise or creative effort’s evolution declines.”
Storage technology is evolving rapidly. New types of storage, new storage solutions and new storage tools that hold a great deal of promise for pushing storage technology forward are being developed. It is extremely likely that some current technologies may not survive in their current form. Which technologies may jump the shark is the subject of this article.
Taking current trends and making predictions from them is always a dicey proposition. However, there are some trends that will have a clear impact on hard drives.
NVRAM and Burst Buffers
While hard drives are still being produced at a very high rate, other storage technologies are catching up in different categories. Solid State Drives (SSDs) are becoming increasingly popular, and now Intel/Micron’s 3D XPoint NVRAM (Non-Volatile RAM) will provide a completely new way to store data.
3D XPoint will look like DIMMs and sit in the system DIMM slots. With regular DRAM memory, the data in memory is lost if the system is turned off. However, for NVRAM, the system can be turned off and the memory state remains in the memory (hence the label “non-volatile”). Rather than write data to a conventional storage device, the data can be left in memory and shared with other applications. This can improve application performance because an application doesn’t have to read data from conventional storage — the data is already in memory, and the application just needs a pointer to its location. Moreover, NVRAM will usually come in terabyte (TB) quantities for systems, instead of gigabytes (GB), as is the case for DRAM.
Alternatively, some or all of the NVRAM can be used as storage (a block device) allowing the creation of a “burst buffer.” Data can be quickly copied from DRAM to the NVRAM burst buffer because it’s inside the system. Theoretically, the state of the system can be stored in the burst buffer while the memory contents are still stored in NVRAM. Then the system can be power cycled, the state can be read from the burst buffer, and the system will resume its previous state. While this can be done today, the fact that the memory contents stay in memory means that the restart to the last state is very, very fast.
The key to burst buffers is the extremely high bandwidth because the storage is on the memory bus. The current projections are that NVRAM won’t be as fast as regular memory but it will be faster than SSDs. It will also cost less than DRAM but be more expensive than SSDs. NVRAM will first go on the market in 2016 or 2017 in HPC systems.
The introduction of NVRAM reduces the performance and capacity gap between main memory and an external file system. The figure below, courtesy of The Next Platform, outlines the storage hierarchy before and after the advent of NVRAM for a new HPC system at Los Alamos named Trinity. The image is from a talk given by Gary Grider from Los Alamos.
Trinity is expected to have a peak performance of more than 40 PetaFLOPS. It is also expected to have an 80 Petabyte (PB) parallel file system with a sustained bandwidth of 1.45 Terabytes/s, and a burst buffer file system that is 3.7 PB in capacity with a sustained bandwidth of 3.3 TB/s [Note: It will have a memory capacity of about 2PB, so the burst buffer can easily hold the entire contents of memory].
Notice how the burst buffer storage (NVRAM) in Trinity (on the right in the diagram) has a bandwidth that is 2-6 times that of the parallel file system but still lower than main memory. However, when the power is turned off, the data is not lost as it is with DRAM.
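A rough, illustrative calculation using the Trinity figures above (and ignoring all overheads) shows why that extra tier matters for tasks like checkpointing the contents of memory:

memory_tb = 2000          # ~2 PB of main memory
burst_buffer_tb_s = 3.3   # sustained burst buffer bandwidth (TB/s)
pfs_tb_s = 1.45           # sustained parallel file system bandwidth (TB/s)

print(f"Drain memory to burst buffer: ~{memory_tb / burst_buffer_tb_s / 60:.0f} min")  # ~10 min
print(f"Drain memory to parallel FS:  ~{memory_tb / pfs_tb_s / 60:.0f} min")           # ~23 min

In other words, under these idealized assumptions a full-memory checkpoint to the burst buffer completes in less than half the time of one written directly to the parallel file system, which keeps the compute nodes waiting for far less time.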
In Trinity, Los Alamos has introduced burst buffers into the storage hierarchy as well as something new they refer to as “campaign storage.” The campaign storage layer is below the parallel file system and above the archive layer. It has about 1/10th the performance of the parallel file system but presumably has a greater capacity than the layers above it. It is intended to hold the data longer (months-years) and flushed less frequently.
The righthand diagram, which is the storage stack for Trinity, is projected to become a very common hierarchy for HPC systems in the next few years. The prior hierarchy on the left hand side of the diagram only had two layers of storage, but now users have to contend with four layers.
With two layers, the parallel file system was the primary storage for the system. Any data that needed to be kept for a longer time but wasn’t accessed very often was sent to tape (archive). Tools that automatically performed the migration between the two tiers of storage were, and are, fairly common. There are tools and techniques for making all data, regardless of whether it’s on tape or disk, appear to be on the same file system (e.g. HSM – Hierarchical Storage Management). This has tremendous advantages for the user because the same set of commands can be used on a file regardless of whether it’s in the parallel file system or in the archive.
John Bent, who has worked at Los Alamos for many years on innovative storage systems, predicts that several of the storage layers will ultimately collapse into a single layer. He illustrates this in the diagram below.
Specifically he sees parallel file systems, object storage (“campaign storage”) and archive storage collapsing down into 1-2 layers. This brings the number of storage layers down to 2-3, which is better than four.
The top storage layer is NVRAM (burst buffers) that are inside the nodes. The next layer down can either be parallel file systems or a combination of parallel file systems and object storage. The final and third storage layer is either an archive or a combination of an archive and object storage (recall that Bent says the parallel file system, object store and archive are to be split into two tiers).
New Archive Media
Traditionally, archive meant a storage layer where you place data that is infrequently accessed but still has to be available to be read. The data is written to the archive layer in a sequential fashion, and there is really no such thing as random access because the data is to be accessed very infrequently. The classic solution for this has been tape.
Today tape is commonplace. It has high density, several tape solutions have very large capacities, and the media is stable and reliable. However, the needed tape robots are expensive and generally have high maintenance costs. For archive data they are an obvious choice versus storing everything on spinning media (hard drives). But there is some new technology that might change things.
Recently, there was an article about storing data in five dimensions on nanostructure glass that can survive for billions of years. This comes from the University of Southampton, where researchers have developed a method of using lasers to read and write to a fused quartz substrate (glass). Currently they are capable of writing 360TB to a 1-inch glass wafer. These wafers can withstand temperatures of up to 1,000 deg. Celsius and are stable at room temperature for billions of years (13.8 Billion years at 190 deg Celsius).
The technology is still being developed and commercialized, so many aspects of it are unknown. The read and write speeds are unknown, but it is a fair assumption that the data is written to the glass wafers in a sequential manner, and random IO is not allowed (sounds a great deal like tape). But the promise of the technology is massive. The researchers have already written several historical documents to a wafer as a demonstration. Such a dense and stable media is an obvious solution for archiving data.
Gunfight at the Storage Corral
The burst buffer storage layer uses NVRAM for storage, and the archive layer either uses tape or most likely, a new media such as the glass wafers previously mentioned. The two middle layers of parallel file system such as Lustre, and the object storage layer, are where data needs to be accessed in a random manner including random write access and re-writing data files. These two layers are the only places where classic storage media such as hard drives or SSDs could reside.
The capacity of hard drives is continually increasing with manufacturers releasing 8TB and 10TB 3.5″ drives. To create these increased capacities, manufacturers have started to use shingled magnetic recording drives (SMR). SMR drives allow the density of the individual platters to be increased at the cost of greatly reduced random access write performance. To write some changed data involves first reading the data from surrounding tracks, writing it to available tracks, and then writing the changed data to the drive. Consequently, re-writing data is a very time-consuming process. This has led people to refer to SMR drives as “sequential” drives. This also sounds a great deal like archive storage.
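A simple, illustrative model shows why that read-modify-write cycle hurts so much. The numbers below are assumptions for illustration (SMR band sizes vary by drive), not the specifications of any particular product:

band_mb = 256            # assumed size of one shingled band that must be rewritten
changed_kb = 4           # a small random write that lands inside the band

# Worst-case ratio of data the drive rewrites to data the host actually changed.
write_amplification = (band_mb * 1024) / changed_kb
print(f"Data rewritten per 4 KB change: ~{write_amplification:,.0f}x")  # ~65,536x

Even if real firmware is smarter than this worst case (using caches and staging areas), the gap between what the host changes and what the drive must rewrite is why SMR drives behave so poorly under random writes.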
At the same time, the performance of SSDs is much better than hard drives — although the capacity is not quite the same nor is the price. The $/GB of SSDs has consistently been much higher than hard drives, but with new technologies such as 3D NAND chips and TLC (Triple Level Cells) there has been a bit of change.
To get an idea of the $/GB for both hard drives and SSDs, Newegg was searched on Feb. 13 for the least expensive SATA hard drives and SATA SSDs. These are consumer storage devices, but the intent is to get a feel for the trends. The results are in the table below:
|Drive Capacity (GB)|SSD $/GB|5400 RPM $/GB|7200 RPM $/GB|
|1,000 GB (1 TB)|0.2299|0.0445| |
|2,000 GB (2 TB)|0.3289|0.0249|0.0269|
|3,000 GB (3 TB)| |0.0299|0.0269|
|4,000 GB (4 TB)| |0.0304|0.0399|
|5,000 GB (5 TB)| |0.0339|0.0399|
|6,000 GB (6 TB)| |0.0358|0.0391|
|8,000 GB (8 TB)| |0.0278|0.0549|
The table indicates that the hard drives are roughly an order of magnitude less expensive than SSDs on a $/GB basis. Hard drives also have larger capacities than SSDs, at least for consumer drives. Recently, Intel and Micron announced that 10TB SSDs will be available.
However, one trend that the table is not presenting is the price drops of SSDs. Just a few years ago, the average $/GB for an SSD was about $1/GB, even for consumer drives. Now some of them are below $0.25/GB. Given that the sequential performance of an SSD is about 5-10 times that of a hard drive and the random IOPS performance is 3-5 orders of magnitude greater than a hard drive, SSDs are becoming extremely popular as a storage medium.
What Does the Future Hold?
A quick summary:
- Burst buffers will likely become the very fast layer of storage for systems, replacing parallel file systems that are external to the system.
- The introduction of Burst Buffers will likely cause a consolidation in the middle layer of storage.
- New archive media that have a very high density and a very long life are being productized.
- Hard drives are not increasing in performance and with SMR drives the random IO write performance is decreasing.
- Hard drives are still the least expensive storage media that isn’t archive oriented.
- SSDs are rapidly coming down in price and the capacities are quickly increasing.
Putting these trends together points to the fact that hard drives, as they exist today, are not evolving at the same pace as other storage solutions. They are being squeezed by much higher-performing technologies such as burst buffers, and from the bottom by tape, and most likely a new media such as glass.
Hard drives will be in use for a long time. They have a wonderful $/GB ratio so if capacity and reasonable performance are important then hard drives are a great solution. However, just like Happy Days, hard drives may have already jumped the shark.
Operating systems are key components of computer systems: they are responsible for managing computer hardware and software resources, provide common services for computer programs, and give end users an interface to load and execute programs. Several flavours of operating systems have emerged over time, and the growing complexity of hardware and application programs eventually made operating systems a necessity for everyday usage.
In this article we will learn more about Darwin OS, its features, advantages, and limitations.
Definition: Darwin OS
The OS X kernel is an open source project. The kernel, along with core parts of OS X, is together referred to as Darwin. Darwin is a complete operating system based on many technologies; however, Darwin does not include Apple's proprietary graphics or application layers such as Quartz, Cocoa, Carbon or OpenGL. Darwin has a BSD command line application environment.
Origin of Darwin OS
Darwin is from Apple and is the operating system underlying Mac OS X; essentially, it is Mac OS X without the user interface. Darwin is compatible with the FreeBSD distribution and combines the efficiency and stability of UNIX with the simple usability of Mac OS. Apple's developers and the open source community worked together on the PowerPC and x86 versions of the operating system, and modifications and developments flow back to the public; after a free registration, the source code can be downloaded from the Apple website. The standard format supported for applications in Darwin is Mach-O, and Linux applications can be ported.
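As a small aside on the Mach-O format mentioned above, the first four bytes of a Mach-O file are a magic number that identifies it. The following Python sketch checks those bytes; the magic values are standard, but the example path is only an assumption, and this is an illustration rather than a full parser:

import struct

MACHO_MAGICS = {
    0xfeedface: "Mach-O 32-bit",
    0xfeedfacf: "Mach-O 64-bit",
    0xcafebabe: "Mach-O universal (fat) binary",  # note: Java class files share this magic
}

def identify_macho(path):
    with open(path, "rb") as handle:
        raw = handle.read(4)
    if len(raw) < 4:
        return "too small to be Mach-O"
    for fmt in (">I", "<I"):                      # try big- and little-endian byte order
        magic = struct.unpack(fmt, raw)[0]
        if magic in MACHO_MAGICS:
            return MACHO_MAGICS[magic]
    return "not a Mach-O file"

# Example usage on a Darwin/macOS system (path is an assumption):
# print(identify_macho("/bin/ls"))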
Architecture of Darwin OS
Darwin technology is based on BSD and the Mach 3.0 microkernel, combined with Apple technologies. It is open source, giving developers full access to the source code. Because the same software forms the core of both OS X and Darwin, developers can create low-level software that runs on both operating systems. Much of the technology is derived from FreeBSD, a version of 4.4BSD, which offers advanced networking, performance, security, and compatibility features. Much of the OS is platform-dependent, and it provides a clean set of abstractions for dealing with memory management, interprocess communication (IPC), and other low-level operating system functions.
Architecture components of Darwin:
- Mach components: manages process resources (CPU usage and memory), handles scheduling, memory protection, and messaging for the entire OS; supports low-level functions (remote procedure calls, scheduler support for SMP, real-time services, virtual memory support, paging, etc.)
- BSD: provides OS "personality" services such as the file system, networking, the FreeBSD kernel API, kernel support for threads, support for syscalls, and security policies such as user IDs and permissions
- Networking: modern BSD network capabilities, network address translation and firewalls, support for both IP and AppleTalk, multihoming, routing, multicast support and packet filtering, server tuning, socket-based AppleTalk, Mac OS classic support, Open Transport APIs
- File system: support for multiple file systems (HFS, UFS, NFS, etc.), with HFS+ as the default; Mac OS X boots (and "roots") from HFS+; an enhanced virtual file system (VFS) in which file systems are stackable; UTF-8 (Unicode) support; increased performance over previous versions
- I/O Kit: a simplified driver development framework; modular and extensible; an object-oriented I/O architecture; true plug and play; dynamic device management; on-demand driver loading; multiprocessor capabilities; power management for desktop systems as well as portables
- Network Kernel Extensions (NKE): add or remove kernel networking modules without interruption or re-compilation
Characteristics of Darwin OS
- Layered architecture
- Improved reliability and performance
- Enhanced networking
- Object based systems programming interface
- Industry standards support
Darwin OS Pros and Cons
Pros:
- Reliable, with good performance
- Virtual file system design
- Strong networking features

Cons:
- Distributed as open source, hence a potential security threat
- Not as extensible as Windows OS
- Slow loading and execution of classic applications
- Shortcomings in software compatibility and support
Expanded Definition: Endpoint Security
What is endpoint security?
To understand endpoint security, you have to first understand what an endpoint is. An endpoint is an end-user’s device, including: laptops, mobile phones, printers, tablets, servers, and more. Any device that is connected to a larger digital network can be considered an endpoint.
Endpoint security is the practice of protecting end-user devices, or “endpoints”, from any range of cyber threats.
Given the breadth of the endpoint security category, it’s often seen as a core tenet of enterprise cybersecurity programs. While the legacy antivirus software you may have installed on your first laptop could be considered endpoint security, the need for more advanced approaches that have the capability to rapidly detect and repair any vulnerabilities continues to grow.
Any new hardware connecting to a network can present a new cybersecurity risk. With this in mind, the advent of bring-your-own-device (BYOD) workplace practices and a shift to cloud-based storage led to a massive shift in endpoint security—and cybersecurity overall—in the early 2010s. Employees connecting to enterprise networks through personal smartphones, tablets, and even printers have outmoded the legacy approach to cybersecurity, which relies on securing the network perimeter. The new approach to endpoint security requires a people-centric cybersecurity approach.
Endpoint security needs vary by network type and size. Any endpoint can serve as a cybercriminal’s entry point to the network, linking them to sensitive user data and intellectual property.
To protect the network, common endpoint security features include encryption, application control, data loss prevention (DLP), antimalware protection, antivirus protection and more. Advanced technologies in the internet of things (IoT), cloud, and artificial intelligence (AI) categories are increasingly being used to fortify endpoint security.
The MSP role in establishing endpoint security
Managed service providers (MSPs) can help fortify their clients’ frontline defenses by offering a holistic view of their networks and IT systems. The number of connected endpoints is growing exponentially, which is why many MSPs are using asset and device management automation software to monitor and manage all endpoint devices across a network.
The ability to access a bird’s eye view of every endpoints’ status across a network can allow IT teams to manage more devices with less resources.
Automate endpoint updates with simple commands
Keeping endpoint devices up to date improves their performance while decreasing their cybersecurity risk. Automating patch management gives MSPs granular control over the process and enhanced visibility of their clients’ endpoint devices.
Automatic daily updates or quick configuration setup for third-party applications and productivity tools can safeguard devices when using programs like:
- Microsoft Skype, Zoom
- Adobe Shockwave, Reader, and more
- Apple iTunes
- Google Chrome, Mozilla Firefox, and other popular web browsers
Remote monitoring and management across endpoints
Proactive endpoint security can help teams minimize the impact of an attack through quick detection, or even identify vulnerabilities before an attack transpires. For MSPs, automated remote monitoring of endpoint devices is increasingly crucial for their clients’ remote or hybrid workforces.
Remote monitoring features of session auditing, agentless monitoring, virtual machine monitoring, and more come together to arm IT teams with the information they need to confidently prevent and detect endpoint security exploitations.
Automatically scan and catalog new endpoint devices
The number of managed devices (another way of saying “endpoints”) connected to a single network has grown exponentially over recent years. Automated discovery and cataloging of managed devices keeps IT teams in-the-know when new equipment is accessing the network, or when a device is in need of support. Automated network scans delivered in a single dashboard flag healthy, warning, critical, or unknown device statuses and empower automated agent deployment to address any necessary endpoint security vulnerabilities.
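As a purely hypothetical sketch of the kind of status classification such a dashboard performs, consider the following Python fragment. The thresholds and field names are invented for illustration; real RMM products apply their own policies:

from datetime import datetime, timedelta, timezone

WARNING_AGE = timedelta(hours=24)   # assumed check-in threshold
CRITICAL_AGE = timedelta(days=7)    # assumed check-in threshold

def classify_endpoint(last_checkin, pending_critical_patches):
    # Return healthy / warning / critical / unknown for one device.
    if last_checkin is None:
        return "unknown"
    age = datetime.now(timezone.utc) - last_checkin
    if age > CRITICAL_AGE or pending_critical_patches > 5:
        return "critical"
    if age > WARNING_AGE or pending_critical_patches > 0:
        return "warning"
    return "healthy"

# Example: a laptop that checked in two days ago with one missing patch
print(classify_endpoint(datetime.now(timezone.utc) - timedelta(days=2), 1))  # warning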
Did you know?
70% of successful cybersecurity breaches originate on endpoint devices.
— International Data Corporation
Connectwise Control: Remote Unattended Access Simplified
Feature Sheet >>
Whether you’re handling internal IT, point-of-sale, a remote workforce, or are a technology solution provider (TSP), this all-in-one remote access and control platform simplifies service and maintenance operations, delivering higher value to your business—and your bottom line.
Seven RMM Tools or Features Every MSP and TSP Needs
Blog post >>
More and more people are working from home, which has made endpoint management a key concern for many businesses. MSPs are tasked with tracking devices across many different networks, environments, and locations. RMM software must support this “new normal” and make endpoint management for remote workforces easy and automatic.
Remote Support Software Cyber Attacks
Join ConnectWise’s Sean White, Senior Product Manager, and Topher Barrow, Product Marketing Manager, as they train you in the ways of best practices to secure your remote access and control tools and share the latest security improvements to ConnectWise Control®.
ConnectWise Cybersecurity Starter Kit
Want to get started selling cybersecurity? We’ve put together a kit to help. Download the kit today for helpful resources that will transform your business from an MSP to an MSP+ model, including educational information for your SMB customers, templates, and more.
Managed cybersecurity services - a growth strategy to support remote work
Blog post >>
A quick cybersecurity assessment is a great way to help customers understand the security risks that arose when they transitioned to remote work and BYOD. Once they understand the present risks, you can show them how your solution can save their organization money over time via threat prevention.
The Eurovision Song Contest is a special occasion where countries like Russia, Ukraine, Armenia, Georgia, Azerbaijan and many more come together under the same roof.
The contest, originally created to unite Europe after the war, also started as a technological experiment. On May 24 1956, Switzerland hosted and won the first edition, which was mainly broadcast on radio. TV footage of the event has been lost, apart from the winning entry.
Today, nearly 200 million viewers in Europe watch the show live every year. CBR has put together this ‘making your mind up’ list in the run-up to this weekend’s extravaganza.
Why? Because it’s Eurovision.
1. European Broadcasting Union
Underestimated, the EBU is the most technological aspect of the contest itself.
The organisation was founded in 1950 and includes 72 public media outlets in Europe, Asia and North Africa.
After discussions for the ESC started in 1955, with the first edition in 1956, the EBU has been behind several advancements in the radio and TV industries.
The organisation has helped with the development of AES3, an AES/EBU digital audio interface, and was involved in the development of serial and parallel interfaces for digital video.
The EBU also started the Radio Data System used on FM broadcasting, which has become an international standard of the International Electrotechnical Commission.
The association, a member of the European P2P-Next project, also played a major role in the development of the radio data system (RDS), digital audio broadcasting (DAB), digital video broadcasting (DVB) and high-definition TV (HDTV).
2. Colour TV
Probably the greatest revolution of television after the invention of the technology itself, was the introduction of colour footage.
The first ever contest to be broadcasted in colour was produced by the BBC, live from the Royal Albert Hall in London in 1968. This followed the first ever colour broadcast in the UK which was in 1967 during Wimbledon’s Championship.
But for some countries, like Ireland, Eurovision was the first time they saw a colour television broadcast. Hosted by RTE in Dublin’s Gaiety Theatre, Eurovision 1971 was the nation’s first colour broadcast.
On March 31 1979, it was time for Israeli broadcaster IBA to use the contest as a transition to colour transmissions.
3. Televoting

The EBU introduced televoting in 1997, giving it a real push the following year. The breakthrough has allowed televoting to be used in competitions like The X Factor, Idol and Britain’s Got Talent.
The need to bring together results from different countries across the continent has led to the development of new solutions to analyse the data from these phone calls and text messages.
In 2009, a record of 10,680,682 televotes were received, with no further numbers being revealed since then.
Svante Stockselius, former Executive Supervisor of the ESC said at the time: "Those who question the reliability of the outcome are often amazed when they see how much effort we put in securing a reliable outcome.
"If you organise a competition of this magnitude, you better assure the results are correct."
This year, with the special participation of Australia, Eurovision will push the televoting boundaries to include the Aussies’ votes.
4. Cameras

This year’s edition in the Wiener Stadthalle arena, Vienna, includes 29 cameras, a record for the contest itself.
In 2009, Karsten Jacobsen, a cameraman involved in the production, received the Award for Excellence by Guild of Television Cameramen Worldwide for a shoot that had no consideration for health and safety.
Jacobsen drove a segway, which he sprinted down the main aisle of the arena in Moscow, reaching the stage, jumping out of the vehicle and carrying on filming the Belarusian entry, all while also holding the latest HD camera attached to his body.
Years before, in 1977, BBC’s camera operators and technicians involved in the production were on strike postponing the contest originally scheduled for April 2 to May 7.
Eventually, UK hopefuls Lynsey De Paul and Mike Moran managed to sing "Rock Bottom" at Wembley Conference Centre and finished second.
5. Lights & LED
Long gone are the days when the venue was decorated with flowers and curtains, typical of a theatre or opera house.
It took until 2004 for the adoption of LED technology to feature in the arenas hosting Eurovision around Europe.
When Russia hosted the contest for the first time in 2009, they built the heaviest (450 tonnes) and widest stage – measuring 100m. The Russians also spent the highest amount on show production ($44m).
The amount of LED panels used on set surprised the industry. Designed by New York-based set designer John Casey, the creation used 30% of all the world’s LED panels available at the time.
In 2014, Denmark set a new record for the number of lights used. To build the “Diamond”-shaped stage, the Danish used over 3,000 lights, more than 2,000 light cues and LED panels spanning over 1,200 m2.
Kasper Lange, light designer told eurovision.tv that year: "This is by far the biggest lighting production ever in Denmark and one of the biggest in the world."
6. Holograms

Holograms made their Eurovision debut in 2014, when the Romanian contestant started the performance projecting herself as a hologram.
From Denmark’s Copenhagen, Paula Seling was superimposed on screen by the holographic effect that lasted 30 seconds.
She appeared on the right side of the stage, showing up moments later on the left side to join her song partner Ovi while performing "Miracle".
Holographic TV is something under development and according to Holo-TV, 3D ‘holographic’ displays will be made available in mobile phones and TVs by 2016.
7. Power

To power ‘Europe’s Favourite Show’ takes a lot of electricity. In 2011, Germany organised the event in Düsseldorf Arena, a second-division football stadium in the Bundesliga.
The event, which had a satellite inspired stage, was powered off the regular grid by eight independent generators.
In total, they produced a combined output of 6 megawatts, enough to power a town. 35 kilometres of cables were also used to transport and distribute energy across the arena.
8. Nil Point!
Technology doesn’t always work as expected on the night, as in 1962 when the Netherlands were singing their entry "Katinka" and the duo De Spelbrekers were left in the dark for some seconds.
Today a power outage would not occur while the event is being broadcasted. Every edition has a set of generators to keep the show running.
As a last resort, if power is lost for good, producers will use pre-recorded footage from the dress rehearsals to keep Europe entertained.
According to reports, cybercrime rates have increased by 600% since the pandemic started. The integral part of staying safe online is keeping your IP address unexposed. Hackers can use it to breach your devices, track your web activity, or even stalk you. Keep reading to learn about the dangers of letting someone get your IP address and what you can do to prevent that from happening!
What Is An IP Address?
Each device connected to the internet has an IP address. The abbreviation stands short for an “internet protocol address.” It’s how your device communicates with the rest of the virtual world. That’s how you exchange data online, including sending emails, browsing websites, and any other internet activity.
The IP address has a format of “x.x.x.x.” Each “x” represents a number, and that’s your device’s unique internet identifier. Considering it’s unique, it only belongs to that gadget. But what if someone gets your IP address?
Here’s What They Could Do:
- Discover your location. It won’t give away your exact location but show your city and country.
- Track your web activity. It’s a huge privacy and security risk.
- Hack your device. Someone could infect it with malware or use DOS (denial of service) attacks to shut it down.
- Use it for malicious activity. From impersonating you to trying to download illegal stuff online, they could abuse your IP address for malicious deals.
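The “x.x.x.x” format described above is an IPv4 address: four numbers from 0 to 255. Python’s standard ipaddress module makes the idea concrete, including the difference between a public (globally routable) address that the wider internet can see and a private one that only matters on your local network:

import ipaddress

public = ipaddress.ip_address("8.8.8.8")              # a well-known public DNS server
print(public.is_global, public.is_private)            # True False -> visible to the internet

home_router = ipaddress.ip_address("192.168.1.1")     # a typical home router address
print(home_router.is_global, home_router.is_private)  # False True -> local network only

It is the public address, assigned by your internet provider, that the scenarios below are concerned with.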
How Does Someone Get Your IP Address?
Before they can do anything, hackers need to find your IP address. So you should be careful and maximize cyber security awareness while browsing the web. A single wrong move could give away your IP address, and here’s how!
Your Email: The hacker might scam you into sending them an email. If you do that, they can get access to your IP address. Some providers, such as Outlook and Yahoo Mail, publicly show the sender’s IP in the email’s header. But even if they don’t, special analyzers could reveal the desired info in seconds.
Torrenting Files: Each torrent has seeds and peers. Seeders are people who upload files and make them available for download. Peers download the desired files, but the trick is that their IP addresses are visible. It allows hackers a way to collect yours and potentially abuse it.
Phishing Attacks: Phishing is a popular internet scamming option where hackers falsely represent themselves as somebody else. They could be a bank or any other institution you do business with regularly. Their goal is to collect sensitive info. And even if they don’t get your credit card or login credentials, clicking on their link is enough to give away your IP address.
Online Ads: You might find an online advertisement attractive, so you click on it. The best-case scenario is you share your IP with advertising companies, helping them learn more about your preferences. And the worst-case scenario is hackers are behind the ad, and they have other plans with your IP address.
What To Do If Someone Has Your IP Address?
You shouldn’t panic, but it’s necessary to change your IP address as soon as possible. It’s the only thing you can do since you can’t steal your address back. Some providers will assign a new address if you unplug your router for three minutes and then reattach it to the computer.
If you have administrative privileges on your account, you can enter your network’s properties and obtain a new IP address. The process varies, so it might be best to consult your ISP. They can assign you a fresh IP address upon request.
Does A VPN Hide My IP Address?
Yes, a VPN hides your IP address. Furthermore, it’s the safest and simplest way of maximizing data protection when browsing the internet. A VPN will direct all your internet activity via the chosen server. Instead of using your IP address, it will use the one related to that server. This ensures there’s no danger of someone learning the area you are browsing from or any other details of your online activity.
How Can I Stop Hackers From Getting My IP Address?
Using a VPN is a top priority to prevent your IP address from being exposed. Here are some other tips to use for improved privacy protection when browsing the web:
- Activate a firewall. Most operating systems come with an integrated firewall. This feature blocks unrecognized traffic to improve device security.
- Use mobile data instead of public Wi-Fi. Shared and public networks are a privacy risk. If you switch to mobile data, you get a new IP address for each session. It’s safer and ensures no one could abuse the public hotspot to collect your private info.
- Be careful when browsing the web. You shouldn’t click suspicious links, open messages from unknown senders, etc. Manage cookies carefully and erase them and your internet history regularly.
- Adjust the router password. Keeping the default password is risky, and there are ways to make it stronger.
- Don’t let anyone else use your device. Even if they are your friend, at least be around and monitor their activity.
The fact someone knows your IP address isn’t automatically a threat. But if it ends up in the wrong hands, it could be a privacy risk or expose you to potential cyberattacks. Understanding how to keep your IP address safe and maximize internet privacy is an integral part of cybersecurity. Make sure to use a VPN and apply other tips to stop hackers from discovering your IP address!
Security researcher Aviv Raff has published a vulnerability affecting Google’s Toolbar browser feature. The weak spot Raff reported could let a hacker gain control of a user’s PC when the user tries to add a new Google Toolbar button.
The vulnerability is based on spoofing a trusted site that would normally provide a safe toolbar button — basically tricking the user into downloading malicious files that could then be used, for example, to conduct nefarious activities like phishing attacks that could target banking information.
Raff published the details on his Web site and notified Google, which is working on a fix.
Spoofing the Source
Google Toolbar provides an API (application programming interface) for creating toolbar buttons, Raff reported, and the button information is stored in an XML (extensible markup language) file. In order to add a button, the user would have to click on a link that refers to the button’s XML file.
The problem lies in the resulting dialog box that pops up, which supposedly shows the user where the button is being downloaded from, some information about the button, and privacy considerations. A hacker, however, can use an open redirector-based link to spoof the URL shown in the dialog box, making it seem, for example, that a button would be downloaded from Google.com, when in fact it would come from the hacker.
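To illustrate the general idea of an open-redirector spoof (this is a generic sketch with placeholder domains, not Raff’s actual proof of concept), note how the hostname a dialog might display can differ from where the link really leads:

from urllib.parse import urlparse, parse_qs

# Hypothetical redirector-style link; the domains are placeholders.
link = "http://trusted.example/redirect?url=http://attacker.example/button.xml"

shown = urlparse(link).hostname                     # what a naive dialog would display
target = parse_qs(urlparse(link).query)["url"][0]   # where the request actually goes

print(shown)                      # trusted.example
print(urlparse(target).hostname)  # attacker.example

Because the dialog derives the displayed origin from the outer, trusted URL, the user gets no visual cue that the button definition is actually served by the attacker.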
Finding the Vulnerability
“I actually didn’t use this toolbar for a long long time, way before there was a possibility to add new buttons, and I was curious about the new beta version,” Raff told TechNewsWorld. “I downloaded it and looked into this nice feature, which was new to me.”
There’s a couple of levels of work a hacker would have to go through to make this vulnerability pan out, such as getting a user to start downloading a button in the first place. That would likely have to come from a site or e-mail the user believed was safe.
“It is a good, effective way for attackers to gain their victim’s trust, but … there are other easier ways for attackers to gain access to their victim’s PC’s,” Raff noted.
Still, Google has a massive programming staff that basically lives for creating Web-based applications that should be rock-solid and secure. Is this a surprising hole?
“I wasn’t surprised,” Raff said. “Even Google can have bugs. My recommendation for the end user is to avoid adding new buttons until Google provides a fixed version of the toolbar.”
Raff also published a proof-of-concept example. The affected versions are Google Toolbar 5 beta for Internet Explorer, Google Toolbar 4 for Internet Explorer, and Google Toolbar 4 for Firefox. The Firefox version only allows for a partial URL spoof, however.
Are You Checking Types?Static type checking with mypy
The dominoes game is simple: there are 28 tiles (standard version), each one with a unique combination of two numbers of pips between 0 and 6. The game’s objective is to be the first player to place all of their own tiles on the table. For this, each player takes turns to place a tile adjacent to those already on the table as long as the number of pips matches. Most people believe that dominoes is more a game of luck than anything else. In fact, it’s a game of strategy. A good player checks the tiles on the table, counting how many pieces of a certain number of pips are already placed and which ones the opponents have. By knowing this, they can choose the best tile to place and force the opponents to play in a certain way. So, if you always bite the dust in the dominoes, maybe it’s because you’re not checking enough.
We can compare coding in Python to a game of dominoes. The script is the tiles on the table, and the players are the developers. The tiles would be small pieces of code. Again, developers take turns placing tiles. However, the goal now is that all players win! So, they can put in a single script all the code tiles they have. The final result would be a perfectly coupled script specially assembled to do a certain task. But if everyone can win, then everyone can lose. And if your developers’ team is already used to constantly losing games, then, just like in dominoes, you’re not checking enough!
Figure 1. Dominoes can be compared with the development activity.
There are many reasons why you can lose in the coding-in-Python game, but not checking well is one of the most common. Specifically, I’m talking about checking the type of variables or data structures in your code.
Add integers method in Python.
def add_integers(a, b):
    return a + b

add_integers(2, 3)  # 5
At first sight, the function seems fine. It works as expected, but it has a huge problem. In the following example, we’ll use the same add_integers method, but we’ll make a change.
add_integers('2', '3') # '23'
The code still works as it should be, but it’s not the result we expected; we managed to "cheat" the function to add strings instead of integers.
I know this doesn’t say much, but I’ll show you the destructive potential of this feature with another example using the same add_integers function:
A more complex application with no variable typing.
def taxes_calculation(apple_price, taxes_rate):
    return apple_price * taxes_rate

def apples_sale(n_apples, apple_price):
    initial_price = n_apples * apple_price
    taxes = taxes_calculation(initial_price, 0.16)
    result = add_integers(initial_price, taxes)
    return result

apples_sale(3, 20)  # 69.6

# Nothing bad until here, but what if we…
apples_sale('3', '20')  # TypeError: can't multiply sequence by non-int of type 'str'
Now you can cry. Your apple sales business went bankrupt by simply changing the type of input variables.
Oh, the irony!
Dear reader, if you’re a pythonista who doesn’t allow yourself to be surprised so easily, you may be saying: "Wait, what? Python is a program with dynamic typing; that’s its point, I don’t have to define the type of variables because the interpreter can understand what the type is." Yes, that’s true, but the interpreter is not guilty of having an entanglement of thousands of methods that depend on each other. The interpreter is not guilty that any method can modify the state, including the variable type.
I’ll give you the solution now: Go functional and set the type of your variables! If you want to know how to do that, keep reading.
Canard à l’orange
Many scholars call the typing in Python "duck typing." The name comes from this premise: "If it goes like a duck and it quacks like a duck, then it must be a duck." In this way, we understood that Python knows what the type is by analyzing the behavior and attributes of a variable. However, we prefer the Canard à l’orange ("duck with orange" in French) instead of living with it in our code.
How to pluck a duck?
We already know why we shouldn’t let the interpreter choose what type of variable we’re working with. It may sound a little laborious to have to type each variable, but this task is easy in Python 3:
Add integers method with typed variables.
def add_integers(a: int, b: int) -> int:
    return a + b

add_integers(2, 3)  # 5
Let’s see if this solves the problem:
The cruel reality.
add_integers('2', '3') # '23'
I lied to you again. Typing variables in Python doesn’t do anything to how the code is executed. Python is like a child who believes everything you tell him; no matter if you set the type or not, it’ll continue to obey.
Mypy to the rescue
Setting variable types is useful when we use a tool that has become popular among the pythonistas: mypy. Mypy is a static type checker. It uses the type hints defined in the code to validate that these hints are met in the parts of the code where the variables are used. This tool runs separately from the execution of the code.
You can use the following command to install mypy in Python 3:
python3 -m pip install mypy
Now, we just have to make sure that the code we want to check is saved in a script and then run the following command:
Command to use mypy.
python3 -m mypy name_of_my_file.py
Let’s go back to the adding-integers example and save it in a script called add_integer_method.py. Now we use mypy:
Using mypy in a known script.
python3 -m mypy add_integer_method.py
# ... No output
If there’s no output when running the command, the code is correct and can be executed. Now we add the adding-strings example to the file and run mypy again:
$ python3 -m mypy add_integer_method.py
# add_integer_method.py:4: error: Argument 1 to "add_integers" has incompatible type "str"; expected "int"
# add_integer_method.py:4: error: Argument 2 to "add_integers" has incompatible type "str"; expected "int"
Eureka! Mypy was able to discover that we passed strings into a method that was defined with integer-typed inputs. Here we use a very small and maybe obvious example, but imagine applications with thousands of lines of code. Now, with a single command, we can check the variable types.
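To see how this plays out beyond a toy function, here is a typed version of the earlier apple-sale example (adapted for illustration; the mypy messages are paraphrased). Running mypy over a file containing it flags the bad call before the code ever reaches production:

def taxes_calculation(apple_price: float, taxes_rate: float) -> float:
    return apple_price * taxes_rate

def apples_sale(n_apples: int, apple_price: float) -> float:
    initial_price = n_apples * apple_price
    taxes = taxes_calculation(initial_price, 0.16)
    return initial_price + taxes

apples_sale(3, 20)      # 69.6
apples_sale('3', '20')  # mypy: Argument 1 to "apples_sale" has incompatible type "str"; expected "int"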
We demonstrated the importance of setting the types of the variables we’ll use and showed how fatal it is not to check them. Mypy is a useful tool in any development activity, but it’s especially powerful in projects where more than one developer contributes. With mypy, we can debug more easily and ensure that code with the wrong types is not deployed to production. Of course, mypy is not a straitjacket; this library doesn’t impose anything on us; we decide whether to ignore or solve the warnings it shows us.

Finally, we recommend implementing functional code in your programs; this will make your code more durable, cleaner and easier to debug. This programming paradigm takes on more versatility when merged with tools like mypy, which turns very tedious processes into a matter of seconds. If you still don’t know much about functional programming in general or functional programming in Python, we invite you to read the posts "Why We Go Functional?" and "Road to Functional Python". You already have the knowledge, so will you check types?
Ready to try Continuous Hacking?
Discover the benefits of our comprehensive Continuous Hacking solution, which hundreds of organizations are already enjoying. | <urn:uuid:179fa5b8-549d-4dbb-8b1b-dd4ce0891d37> | CC-MAIN-2022-40 | https://fluidattacks.com/blog/are-you-checking-types/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00544.warc.gz | en | 0.867944 | 1,894 | 2.71875 | 3 |
A surprising number of seniors are embracing digital technology, including computers, tablets and smartphones. Many use social media and email to stay in touch with friends, children and grandchildren. All of which is good. What is bad is the fact that seniors are being heavily targeted by scammers and fraudsters and are at risk of becoming victims of scams.
These scams include using all sources of communication, including the telephone, and through phishing and texting.
How can we keep the seniors in our lives from becoming victims? Educate them just like you educate yourself and your children about safe online behavior: using appropriate tools, such as firewalls and virus protection, and being suspicious of emails and texts from people they don’t know.
Many seniors have been scammed through telephone calls. Suggest to them that they should register their number with the state and federal “Do not Call” list. It is easy to do. Make sure they never give their personal information, and most importantly, their Social Security number or financial information, to anyone over the phone or through an email or text. Tell them not to agree to solicitations for charity or anything else over the telephone or through email or text.
Educate them about phishing and how it works. Encourage them not to fall for phishing emails and texts that ask them to click on a link, and that they should not provide their user name or password to anyone. Encourage them to delete all emails and texts that are from unfamiliar sources.
Grandchildren—help your grandparents with setting up passwords and implementing basic security measures on their phones, tablets and computers. Help them understand what their privacy settings are and how to implement privacy settings that they are comfortable with on their phones and social media accounts. For that matter, help your parents too!
We can all become the victim of a scam. But in general, seniors have less experience with digital media as the younger generations, as they did not grow up with it. Empower the seniors in your life with knowledge and the tools to embrace digital technology in a safe way and enjoy the time you spend with them bringing them into your digital world. | <urn:uuid:cb7665fa-f546-4522-bbf6-5a8d131e6fb7> | CC-MAIN-2022-40 | https://www.dataprivacyandsecurityinsider.com/2016/01/privacy-tip-19-protecting-seniors-from-scams/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+DataPrivacyAndSecurityInsider+%28Data+Privacy+%2B+Security+Insider%29 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00544.warc.gz | en | 0.967442 | 437 | 2.90625 | 3 |
- Data is a huge market commodity now and the threat of unsolicited data collection from online services is real.
- With security and privacy breaches hitting the news frequently, limiting what data your internet service provider or other online entities can collect is a great way to avoid being a breach victim.
The date and time of internet access, your location when accessing the internet, websites visited, and downloaded content are some of the metrics that your internet provider may monitor and collect. Here are a few things you can do to allow ISPs to collect minimal information about you.
Explore different browsers
Instead of going with popular choices, take time to research about the privacy features of different browsers, including the lesser-known ones. Look for privacy options they provide and how much data they limit from your ISP.
Keep in mind that using a browser’s incognito mode does not prevent ISPs from monitoring your activities online.
Invest in a VPN
A Virtual Private Network (VPN) can help block your ISP from tracking your data. However, make sure that you do ample research and read the fine print before investing in a VPN service, because in the process of protecting yourself from your ISP, you don’t want the VPN provider to start collecting data about you instead.
Look beyond the laptop
It isn’t just the laptop that you should be worried about. Remember that any device that uses the internet can be traced. Mobile phone and other wearable devices can often be used to collect much more sensitive information about users than possible through simple web browsing.
Use HTTPS
Although using HTTPS does not ensure complete security, it is still one step ahead of HTTP. Using HTTPS means that the data you send and receive is encrypted. Avoid browsing through and providing any personal information on websites still using the HTTP protocol.
Block third-party cookies
To avoid cross-site tracking by online services, one can choose to install browser extensions or apps that block third-party cookies and scripts. This prevents malicious services from tracking you through compromised websites and also prevents highly personalized targeting for online advertisements.
It goes without saying that protecting your data is now more important than ever. There are places and times when you can stay in control of your data, and these methods can help you gain more privacy in your online activity. Invest in solutions to protect data from your internet service providers to stay safer online. | <urn:uuid:91c76f64-ef42-4f1c-8b93-ce53c7173786> | CC-MAIN-2022-40 | https://www.infosec4tc.com/2019/11/18/how-to-avoid-being-tracked-by-your-internet-service-provider-or-online-apps/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00544.warc.gz | en | 0.911306 | 483 | 2.5625 | 3 |
In December 2019, the World Health Organisation (WHO) commemorated the 40th anniversary of smallpox eradication. Ironically, later that month the world was introduced to a new epidemic caused by the Novel Coronavirus. At the same time, the WHO’s fight against other epidemics such as the Ebola virus outbreaks in Congo is far from over. As the world becomes smaller in terms of access, the risks associated with a disease outbreak becomes greater.
Every disease outbreak puts our healthcare system in disarray. Not only does it affect the country where it originated, but it also has a far-reaching impact on healthcare systems in other countries. As of today, the coronavirus has reportedly spread beyond China to 16 countries. A visit to a public healthcare facility in Singapore in the last few days shows how the healthcare system is tracking everyone who visits a hospital or a polyclinic – not just patients. Clinicians are also conducting extra screenings. This has ramifications for healthcare systems that are already strapped by staff shortages.
There are obvious economic ramifications – at least in the short term. Several companies are banning travel to China for their employees, while many manufacturing units in China have had to temporarily shut down. The impact is not restricted to China alone and has the potential to impact global trade and economy.
Every new disease that comes into the limelight also impacts the life sciences industry that has to divert their R&D resources into finding a cure and/or a vaccine for the disease. While winning the race for the first breakthrough can be a huge opportunity for the pharmaceutical company, it also impacts the regular research being conducted to protect us from other deadly diseases.
Unfortunately, we are always one step behind diseases, and we have to first think of cure and containment before we can consider prevention and eradication. As we wait and watch to see how fast the coronavirus epidemic is contained, we must acknowledge the role technology plays in managing epidemics and other disasters. Here are some initiatives:
One of the success stories to emerge from this disaster is the speed at which the risk of the outbreak was detected. 10 days before the WHO announcement, BlueDot, a healthcare monitoring platform had already detected the epidemic, from intelligence gathered from news reports, disease networks and official sources. The same platform – and a few others – are also predicting the global spread of the virus by mining global airlines ticketing data. This is a reassuring outcome of how technology and human analysis can effectively come together to improve health outcomes.
While the current global concern is the speed of containment of the disease, eventually there will have to be more proactive measures to prevent another outbreak and to even eradicate the disease. To be able to understand the full nature of the pathogen and to come up with a vaccine, it is important that the virus is isolated. Scientists from the Peter Doherty Institute for Infection and Immunity in Melbourne successfully grew the Wuhan coronavirus from a patient sample. While the Chinese authorities had released the genome sequence to help with the diagnosis, this ‘game-changer’ can be potentially used to detect the virus in patients who do not yet display the symptoms and eventually to develop a vaccine. Cutting-edge research in healthcare has always been conducted by such research and pharmaceutical organisations. They have consistently pushed the adoption of new technology in healthcare, especially in their R&D practices.
As mentioned earlier, any outbreak taxes the front-line healthcare providers the most. They have very little time to change their triage and protocols to combat a disease that they have possibly never encountered. This is where clinical decision support systems that can incorporate these new protocols into the workflow comes in handy. Epic, the EHR provider has pushed a software update that does just that. According to Epic, this update was developed in collaboration with biocontainment experts, infectious disease physicians and the US Centers for Disease Control and Prevention (CDCs). Collaborations such as this will be required if we have to devise a global protocol for epidemic management and containment.
There have been several other initiatives during this outbreak that show how different technologies can come together to benefit healthcare, especially to handle a crisis. Technology has always played a huge role in spreading the message in times of disaster, especially in emerging economies – with technologies such as AI, the potential of technology benefitting healthcare increases exponentially. | <urn:uuid:3b6dafab-eedd-4a65-9a4e-87e82e5964ba> | CC-MAIN-2022-40 | https://blog.ecosystm360.com/how-technology-is-helping-to-combat-coronavirus/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00544.warc.gz | en | 0.964859 | 888 | 2.921875 | 3 |
What is a message broker?
A message broker is software that enables applications, systems, and services to communicate with each other and exchange information. The message broker does this by translating messages between formal messaging protocols. This allows interdependent services to “talk” with one another directly, even if they were written in different languages or implemented on different platforms.
Message brokers are software modules within messaging middleware or message-oriented middleware (MOM) solutions. This type of middleware provides developers with a standardized means of handling the flow of data between an application’s components so that they can focus on its core logic. It can serve as a distributed communications layer that allows applications spanning multiple platforms to communicate internally.
Message brokers can validate, store, route, and deliver messages to the appropriate destinations. They serve as intermediaries between other applications, allowing senders to issue messages without knowing where the receivers are, whether or not they are active, or how many of them there are. This facilitates decoupling of processes and services within systems.
In order to provide reliable message storage and guaranteed delivery, message brokers often rely on a substructure or component called a message queue that stores and orders the messages until the consuming applications can process them. In a message queue, messages are stored in the exact order in which they were transmitted and remain in the queue until receipt is confirmed.
Asynchronous messaging refers to the type of inter-application communication that message brokers make possible. It prevents the loss of valuable data and enables systems to continue functioning even in the face of the intermittent connectivity or latency issues common on public networks. Asynchronous messaging guarantees that messages will be delivered once (and once only) in the correct order relative to other messages.
Message brokers may comprise queue managers to handle the interactions between multiple message queues, as well as services providing data routing, message translation, persistence, and client state management functionalities.
Message broker models
Message brokers offer two basic message distribution patterns or messaging styles:
- Point-to-point messaging: This is the distribution pattern utilized in message queues with a one-to-one relationship between the message’s sender and receiver. Each message in the queue is sent to only one recipient and is consumed only once. Point-to-point messaging is called for when a message must be acted upon only one time. Examples of suitable use cases for this messaging style include payroll and financial transaction processing. In these systems, both senders and receivers need a guarantee that each payment will be sent once and once only.
- Publish/subscribe messaging: In this message distribution pattern, often referred to as “pub/sub,” the producer of each message publishes it to a topic, and multiple message consumers subscribe to topics from which they want to receive messages. All messages published to a topic are distributed to all the applications subscribed to it. This is a broadcast-style distribution method, in which there is a one-to-many relationship between the message’s publisher and its consumers. If, for example, an airline were to disseminate updates about the landing times or delay status of its flights, multiple parties could make use of the information: ground crews performing aircraft maintenance and refueling, baggage handlers, flight attendants and pilots preparing for the plane’s next trip, and the operators of visual displays notifying the public. A pub/sub messaging style would be appropriate for use in this scenario.
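As a rough illustration of the difference between the two styles, here is a toy, in-memory sketch in Python that is not tied to any particular broker product: the queue hands each message to exactly one consumer, while the topic fans each message out to every subscriber.

from collections import deque

class PointToPointQueue:
    """Each message is delivered to exactly one consumer, exactly once."""
    def __init__(self):
        self._messages = deque()

    def send(self, message):
        self._messages.append(message)

    def receive(self):
        return self._messages.popleft() if self._messages else None

class PubSubTopic:
    """Every message published to the topic reaches every subscriber."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, message):
        for callback in self._subscribers:
            callback(message)

# Point-to-point: a payment instruction is consumed by a single worker, once.
payments = PointToPointQueue()
payments.send({"account": "123", "amount": 42.00})
print(payments.receive())  # consumed here
print(payments.receive())  # None - the message is gone

# Pub/sub: every subscriber sees the same flight update.
flight_updates = PubSubTopic()
flight_updates.subscribe(lambda update: print("ground crew:", update))
flight_updates.subscribe(lambda update: print("display board:", update))
flight_updates.publish("Flight 101 delayed 20 minutes")

Real brokers add persistence, acknowledgements, and delivery guarantees on top of this basic shape, but the two distribution patterns behave just as the sketch suggests.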
Message brokers in cloud architectures
Cloud native applications are built to take advantage of the inherent benefits of cloud computing, including flexibility, scalability, and rapid deployment. These applications are made up of small, discrete, reusable components called microservices. Each microservice is deployed and can run independently of the others. This means that any one of them can be updated, scaled, or restarted without affecting other services in the system. Often packaged in containers, microservices work together to comprise a whole application, though each has its own stack, including a database and data model that may be different from the others.
Microservices must have a means of communicating with one another in order to operate in concert. Message brokers are one mechanism they use to create this shared communications backbone.
Message brokers are often used to manage communications between on-premises systems and cloud components in hybrid cloud environments. Using a message broker gives increased control over interservice communications, ensuring that data is sent securely, reliably, and efficiently between the components of an application. Message brokers can play a similar role in integrating multicloud environments, enabling communication between workloads and runtimes residing on different platforms. They’re also well suited for use in serverless computing, in which individual cloud-hosted services run on demand on a per-request basis.
Message brokers vs. APIs
REST APIs are commonly used for communications between microservices. The term Representational State Transfer (REST) defines a set of principles and constraints that developers can follow when building web services. Any services that adhere to them will be able to communicate via a set of uniform shared stateless operators and requests. Application Programming Interface (API) denotes the underlying code that, if it conforms to REST rules, allows the services to talk to one another.
REST APIs use Hypertext Transfer Protocol (HTTP) to communicate. Because HTTP is the standard transport protocol of the public Internet, REST APIs are widely known, frequently used, and broadly interoperable. HTTP is a request/response protocol, however, so it is best used in situations that call for a synchronous request/reply. This means that services making requests via REST APIs must be designed to expect an immediate response. If the client receiving the response is down, the sending service will be blocked while it awaits the reply. Failover and error handling logic should be built into both services.
Message brokers enable asynchronous communications between services so that the sending service need not wait for the receiving service’s reply. This improves fault tolerance and resiliency in the systems in which they’re employed. In addition, the use of message brokers makes it easier to scale systems since a pub/sub messaging pattern can readily support changing numbers of services. Message brokers also keep track of consumers’ states.
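As a sketch of what that asynchronous pattern can look like in code, the following example assumes a RabbitMQ broker running locally and the pika client library; the queue name and payload are made up for illustration. The sender publishes the message and returns immediately instead of blocking until some consumer replies:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="orders", durable=True)

# Publish and return immediately; the broker stores the message until a consumer is ready.
channel.basic_publish(
    exchange="",
    routing_key="orders",
    body=b'{"order_id": 42}',
    properties=pika.BasicProperties(delivery_mode=2),  # mark the message as persistent
)
connection.close()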
Message brokers vs. event streaming platforms
Whereas message brokers can support two or more messaging patterns, including message queues and pub/sub, event streaming platforms only offer pub/sub-style distribution patterns. Designed for use with high volumes of messages, event streaming platforms are readily scalable. They’re capable of ordering streams of records into categories called topics and storing them for a predetermined amount of time. Unlike message brokers, however, event streaming platforms cannot guarantee message delivery or track which consumers have received messages.
Event streaming platforms offer more scalability than message brokers but fewer features that ensure fault tolerance (like message resending), as well as more limited message routing and queueing capabilities.
Learn more about event-driven architecture.
Message broker vs. ESB (enterprise service bus)
An enterprise service bus (ESB) is an architectural pattern sometimes utilized in service-oriented architectures implemented across enterprises. In an ESB, a centralized software platform combines communication protocols and data formats into a “common language” that all services and applications in the architecture can share. It might, for instance, translate the requests it receives from one data format (such as XML) to another (such as JSON). ESBs transform their message payloads using an automated process. The centralized software platform also handles other orchestration logic, such as connectivity, routing, and request processing.
ESB infrastructures are complex, however, and can be challenging to integrate and expensive to maintain. It’s difficult to troubleshoot them when problems occur in production environments, they’re not easy to scale, and updating is tedious.
Message brokers are a “lightweight” alternative to ESBs that provide a similar functionality—a mechanism for interservice communications—more simply and at lower cost. They’re well-suited for use in the microservices architectures that have become more prevalent as ESBs have fallen out of favor.
Message broker use cases
Implementing message brokers can address a wide variety of business needs across industries and within diverse enterprise computing environments. They’re useful whenever and wherever reliable inter-application communication and assured message delivery are required.
Message brokers are often employed in the following ways:
- Financial transactions and payment processing: It’s critical to be certain that payments are sent once and once only. Using a message broker to handle these transactions’ data offers assurance that payment information will neither be lost nor accidentally duplicated, provides proof of receipt, and allows systems to communicate reliably even when intermediary networks are down.
- E-commerce order processing and fulfillment: If you’re doing business online, the strength of your brand’s reputation depends on the reliability of your website and e-commerce platform. Message brokers’ ability to enhance fault tolerance and guarantee that messages are consumed once and once only makes them a natural choice to use when processing online orders.
- Protecting highly sensitive data at rest and in transit: If your industry is highly regulated or your business confronts significant security risks, choose a messaging solution with end-to-end encryption capabilities.
Message brokers and IBM Cloud
Message brokers are taking on new kinds of importance as organizations modernize applications on the journey to cloud. Many of the world’s most successful companies—including 85% of the Fortune 100—rely on IBM’s message broker capabilities, which are built to support today’s agile development environments, microservices-based and hybrid cloud infrastructures, and a broad array of system types and connectivity requirements.
Get started with an IBM Cloud account today. | <urn:uuid:d46fdf6e-da20-4845-b684-055e77ce8e20> | CC-MAIN-2022-40 | https://www.ibm.com/cloud/learn/message-brokers | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00544.warc.gz | en | 0.922129 | 2,020 | 3.25 | 3 |
Robocalls are a scourge. They waste everyone’s time, come when you least expect them, and multiply in number if you make the mistake of picking up. Americans alone received over 26 billion robocalls in 2018, and there seems to be no sign of them slowing down.
Technically, some robocalls are illegal in the U.S. under the Telephone Consumer Protection Act. However, methods used to dial calls make them difficult to screen or regulate.
Combine this with the fact that so many robocalls are made from outside the country, and you have an epidemic that’s almost impossible to contain.
In a surprise turn of events, the FCC appears to be finally listening to consumer demands about robocalls. A new set of regulations rolling out may help to choke off spam calls before they can ever reach your phone. Could this upcoming legislation give carriers the incentive they need to end the madness once and for all?
Why are robocalls nearly impossible to stop?
Robocalls, despite their name, aren’t really phone calls in the traditional sense. Most of the time, they’re made by computers. The companies that run these massive call operations typically generate numbers at random to call — and add any number that happens to pick up to their list of prospects. This is why you tend to get more robocalls after you make the mistake of answering one.
In addition to randomly generating numbers to call, robocallers also generate what’s known as “spoofed numbers.” This means software running the calls is able to provide a fake caller-ID number that disguises its origin. These spoofed caller-IDs sometimes come from real phone numbers, which is why you’ll sometimes reach a random location like a store or pizza restaurant if you try to call the spammer back.
Worst of all, these number generators try to create numbers you’re more likely to pick up. This is accomplished by robocall programs analyzing your phone number, and curating the spoof to match it as much as possible. This is why you’ll often get robocalls with the same area code that your phone has.
Even though spam calls are illegal, and people have been fined and prosecuted for running phone scams, the trend seems unlikely to slow down. The above factors make tracking and screening calls nearly impossible — since you never know where the calls are really coming from.
Regulators also run into legal grey-areas with shutting calls down, since many of their true origins lie overseas. A robocall from Russia that’s spoofing American numbers would not be subject to U.S. law, making punitive measures frustratingly pointless.
What is the FCC doing about robocalls?
After numerous complaints and consumer feedback, the FCC is starting to take action against the rising number of robocalls. In a strategy outlined by FCC chairman Ajit Pai, the commission is proposing carriers develop blocking tools that allow calls to be filtered before ever reaching consumers’ phones.
Originally, according to Pai, mobile phone carriers were unsure if developing call screening tools would be legal under current regulations. In the proposed strategy, any ambiguity on this matter would be settled, and the FCC would fully support efforts from carriers to help fight robocalls.
On top of this, a bill is currently circulating the U.S. congress with explicit language that empowers law enforcement to fight and punish robocallers more effectively. The bill includes additional language that makes it easier for the FCC to levy fines on robocallers, with the hope that the penalties are enough to deter spammers once and for all.
Currently, the FCC has set a 2019 target for carriers to comply with the proposal. A carrier summit is scheduled for July 11 to review progress on call-screening programs, as well as discuss new strategies to better combat the spam epidemic.
Despite these hopeful developments, experts believe robocall numbers will only continue to grow. The FCC proposals aren’t mandatory, and no solid details have emerged around carrier strategies to thwart spam calls.
They could potentially pass the cost off to consumers as an extra service to sign up for, or have it be an opt-in service rather than a mandatory screening. Since the FCC is leaving it up to carriers themselves, we can only imagine that they’ll take the route that’s best for business rather than consumers.
What can you do about robocalls?
In the meantime, it’s up to us to fight robocalls on our own. The best thing you can do is avoid picking up phone calls for numbers you don’t recognize. Doing so may put you on a list, and potentially open you up to even more scams in the future.
Additionally, adding yourself to the existing “Do Not Call” list can head-off some of the minor players in the robocall game, including many locally-based spam operations.
The battle between regulators and robocalls is an ever-changing cat and mouse game. All we can do for now is know the spammers’ patterns and screen strange numbers instead of answering. Alternatively, you can always put your phone on silent. It won’t stop the call, but hey, at least you won’t know it came in. | <urn:uuid:93a8c6db-644b-4413-a33f-fb4ec80e496e> | CC-MAIN-2022-40 | https://www.komando.com/security-privacy/heres-why-theres-no-quick-fix-for-robocalls/567369/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00544.warc.gz | en | 0.953702 | 1,108 | 2.609375 | 3 |
Let’s take a minute to talk about better understanding Your Computer, specifically Web Browsers.
Web browsers allow you to navigate the internet.
There are a variety of options available, so you can choose the one that best suits your needs.
How do web browsers work?
A web browser is an application that finds and displays web pages.
It coordinates communication between your computer and the web server where a particular website “lives.”
When you open your browser and type in a web address or “URL” for a website, the browser submits a request to the server, or servers, that provide the content for that page.
Then it loads any other elements such as Flash, Java, or ActiveX that are necessary to generate content for the page.
After the browser has gathered and processed all of the components, it displays the complete, formatted web page.
Every time you perform an action on the page, such as clicking buttons and following links, the browser continues the process of requesting, processing, and presenting content.
How many browsers are there?
There are many different browsers.
Most users are familiar with graphical browsers, which display both text and graphics and may also display multimedia elements such as sound or video clips.
However, there are also text-based browsers. The following are some well-known browsers:
- Internet Explorer
- Safari – a browser specifically designed for Mac computers
- Lynx – a text-based browser desirable for vision-impaired users because of the availability of special devices that read the text
How do you choose a browser?
A browser is usually included with the installation of your operating system, but you are not restricted to that choice.
Some of the factors to consider when deciding which browser best suits your needs include:
Compatibility. Does the browser work with your operating system?
Security. Do you feel that your browser offers you the level of security you want?
Ease of use. Are the menus and options easy to understand and use?
Functionality. Does the browser interpret web content correctly? If you need to install other plug-ins or devices to translate certain types of content, do they work?
Appeal. Do you find the interface and way the browser interprets web content visually appealing?
Can you have more than one browser installed at the same time?
If you decide to change your browser or add another one, you don’t have to uninstall the browser that’s currently on your computer.
You can have more than one browser on your computer at once.
However, you will be prompted to choose one as your default browser.
Anytime you follow a link in an email message or document, or you double-click a shortcut to a web page on your desktop, the page will open using your default browser.
You can manually open the page in another browser.
Most vendors give you the option to download their browsers directly from their websites.
Make sure to verify the authenticity of the site before downloading any files.
To further minimize risk, follow other good security practices, like using a firewall and keeping anti-virus software up to date.
Now you know the basics about web browsers, and better understand your computer.
I’ll see you in my next post! | <urn:uuid:614c6e7a-dce5-4ed3-9b3d-2271fb7428d2> | CC-MAIN-2022-40 | https://hailbytes.com/how-can-you-use-your-web-browser-safely/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00744.warc.gz | en | 0.866572 | 724 | 3.609375 | 4 |
Database administrators ensure that business data is accurate, available and secure. The corporate database is the heart of key business systems that drive payroll, manufacturing, sales and more, so database administrators are recognized - and rewarded - for playing a crucial role in an organization's success. Beyond database administrators' high salary, DBA roles offer the personal satisfaction of solving business problems and seeing (in real-time) how your hard work benefits the firm.
A typical database administration learning plan begins with an undergraduate degree in computer science, database management, computer information systems (CIS) or a related field of study. A balance of technical, business and communication skills is critical to a database administrator's success and upward mobility, so the next step in a DBA's education is often a graduate degree with an information systems concentration, such as a MBA in Management Information Systems (MIS) or CIS. Database administrators can continue to learn and advance their career by getting certified in one or more database management systems (DBMS); in-demand DBMS include Oracle, Microsoft SQL Server, IBM DB2 & MySQL. Learn more about DBA education requirements and compare the top-rated database administrator training programs.
a.k.a. DBA | Database Analyst | Database Manager
Database Administrator Salaries
DBA Education Requirements
DBA Training & Degree Programs
Database Administrator Certifications
Database Administration Jobs
Database Administrator Job Outlook
DBA Skills and Responsibilities
Typical day-to-day duties and in-demand skill sets for DBAs include the following. Database administrators:
- Implement, support and manage the corporate database.
- Design and configure relational database objects.
- Are responsible for data integrity and availability.
- May design, deploy and monitor database servers.
- Design data distribution and data archiving solutions.
- Ensure database security, including backups & disaster recovery.
- Plan and implement application and data provisioning.
- Transfer database information to integrated mobile devices.
- Some database administrators design and develop the corporate database.
- Some DBAs analyze and report on corporate data to help shape business decisions.
- Produce entity relationship & data flow diagrams, database normalization schemata, logical to physical database maps, and data table parameters.
- Database administrators are proficient in one or more of the leading database management systems, such as Microsoft SQL Server, IBM DB2, MySQL and Oracle.
Database Administrator Salary
The mean annual salary for database administrators is $99,000, according to the latest data from the US Bureau of Labor Statistics.
Average salaries for database administrators and related positions:
- Database Developer: $92,000
- MySQL Database Administrator: $94,000
- IBM DB2 Database Administrator: $97,000
- Oracle Database Administrator: $98,000
- Database Administrator: $99,000
- Senior Database Administrator: $102,000
- Oracle Applications Specialist DBA: $107,000
- Database Engineer: $109,000
- Database Team Leader: $144,000
Top paying US cities and metro areas for DBAs:
- San Jose, California: $143,000
- Seattle, Washington: $120,000
- New York City Metro Area: $119,000
- Washington DC Metro Area: $116,000
- San Francisco, California: $115,000
The hourly wage for database administrators ranges from $30 to $90, depending on the DBA's education, location, proficiency in known database management systems, certifications and experience.
Deep dive into database administrator salary ranges.
Sources: U.S. Bureau of Labor Statistics • Indeed.com
Database Administrator Education Requirements
Database administration positions typically require at least a bachelor’s degree in Computer Information Systems (CIS), Computer Science, Database Administration or a related field of study. Many employers prefer to hire MBAs for database administration jobs, because in addition to the extra technical database training, MBAs are well-versed in key business domains, e.g., accounting, marketing and management, and they're more adept at communicating with technical and non-technical employees - two traits of highly successful DBAs. Popular MBA concentrations for database administrators include Management Information Systems (MIS), Database Management and CIS. Database administrators can further distinguish themselves and advance their careers with specialized training and certifications in the leading database management systems, i.e., Oracle 11g, Microsoft SQL Server, IBM DB2, Sybase and MySQL.
Research and compare the top-reviewed database administration training programs in the U.S. and online.
DBA Training & Degree Programs
Compare undergrad and graduate degrees, professional certificates and self-paced online training courses matching the database administrator education requirements and career path.
Bachelor's in Computer Science - Data Analysis
- Gain the Skills and Credentials to Pursue Sought-After Careers in Data Management
- Create and Manage Structured Databases
- Analyze Data to Meet Organizational Goals
- Advanced Statistics for STEM Disciplines
- Use Emerging Tech in Cloud Computing, Artificial Intelligence (AI) and Machine Learning (ML) to Analyze Big Data
Master of Science in IT - Analytics
- Use analytics, statistics & forecasting to drive smarter business decisions
- Identify relevant data and sources to solve complex business problems
- Address global, ethical, legal & cultural factors in data analytics
- Create effective data visualizations and stakeholder presentations
- Must have a bachelor's degree to apply. GRE / GMAT not required.
Master's in Technology Management
- Prepare to Lead IT Personnel and Wield Emerging Technologies to Achieve Business Goals
- Choose from courses like:
- Business Intelligence and Data Analytics
- Cyber Security Threats & Vulnerabilities
- Managing Diverse Organizations in a Flat World
- Cloud Computing and Virtualization
- Computer Systems Analysis
- Cryptography & Network Security
- Must have a bachelor's degree to apply. GRE / GMAT not required
Google Data Analytics Pro Certificate
- Includes Certification Preparation for:
- Google Data Analytics Professional
- Learn to use Popular Data Analytics Tools inc. Tableau, SQL, R Programming, Spreadsheets & Slideshows
- Clean, Organize and Analyze Complex Data Sets
- Data Visualization & Stakeholder Presentation
- Constructive Questioning and Structured Thinking
Marketable certifications for database administrators include the following:
- MCSA: SQL Database Administration
- MCSE: Business Intelligence
- Oracle Database 11g Administrator Certified Associate
- Oracle Database 11g Administrator Certified Professional
Database Job Openings
Your specialized database administration training, experience and certifications may qualify you for a range of lucrative positions. Browse and apply to these DBA job openings:
- Database Administrator jobs:
- Oracle Database Admin jobs:
- Microsoft SQL Server DBA jobs:
- Database Manager jobs:
- MySQL Database Admin jobs:
- IBM DB2 Database jobs:
Database Administrator Job Outlook
Employment of database administrators is expected to grow by 8% from 2020 to 2030, right in line with the 8% average for all occupations, according to the U.S. Bureau of Labor Statistics. As businesses continue to accumulate record amounts of data, the demand for trained and certified database administrators to store, organize, analyze and secure this data will continue to rise.
In addition, as more databases are integrated with the Internet and cloud, data security will become increasingly complex, thus a growing number of database administrators with skills in cybersecurity and cloud computing will be required to protect sensitive information from hackers and other threats. DBAs with expertise in the leading database management systems, such as Microsoft SQL Server, Oracle, MySQL, and IBM DB2 will enjoy greater hiring prospects as well.
Source: U.S. Bureau of Labor Statistics' Occupational Outlook Handbook
- Data Scientist
- Website Developer
- Mobile Application Developer
- Software Engineer
- Network Administrator
- Technology Manager | <urn:uuid:47fde74c-a258-45d3-b4f3-e58010a75e13> | CC-MAIN-2022-40 | https://www.itcareerfinder.com/it-careers/database-administrator.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00744.warc.gz | en | 0.822352 | 1,642 | 2.640625 | 3 |
Today Google announced the first public full SHA-1 collision, i.e. the first pair of distinct values that when hashed with the SHA-1 function produce the same digest. This should not come as a surprise - it follows the free-start collisions announced at the end of 2015, and many cryptographers had been anticipating full SHA-1 collisions imminently.
To understand what this means, it helps to look at what happened after collisions were found in the MD5 hash function. The first MD5 collisions were announced in 2004. During 2005, various researchers showed examples of pairs of documents that have the same MD5 hash, or pairs of executable files that have the same MD5 hash.

It took until 2008 for researchers to find a pair of certificates that have the same MD5 digest and were well-formed enough to have one signed by a trusted CA and the other used as an intermediate certificate. This is a very powerful attack as the intermediate certificate can then be used to sign more website certificates to mount man-in-the-middle attacks on any site. To get to this point, they first had to find a procedure to produce chosen prefix collisions, i.e. a way to take two files M and M' and concatenate two suffixes N and N' such that the MD5 hash of M,N is the same as that for M',N'. However, when the FLAME malware was detected in May 2012, forensics showed it was using a pair of certificates with a colliding digest, and that the MD5 collision method was different from that which became public, suggesting that government agencies already had techniques for producing chosen-prefix MD5 collisions well before academic researchers.
For SHA-1, the collision revealed by CWI and Google is an identical prefix collision, which is generally a weaker result than finding a procedure for a chosen prefix collision (however, note the researchers worked on a specific collision that takes advantage of certain particularities of the PDF format to allow the same collision to be used to create any number of colliding pairs of PDFs containing two different embedded JPGs - visual explanation here and site for generating colliding PDFs here).
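If you want to verify the collision yourself, a few lines of Python are enough; the file names below refer to the pair of colliding PDFs published with the announcement, so adjust the paths to wherever you have saved them:

import hashlib

def digest(path, algorithm):
    with open(path, "rb") as f:
        return hashlib.new(algorithm, f.read()).hexdigest()

# The two documents render different content...
print(digest("shattered-1.pdf", "sha1"))     # ...yet these two SHA-1 digests are identical...
print(digest("shattered-2.pdf", "sha1"))
print(digest("shattered-1.pdf", "sha256"))   # ...while SHA-256 still tells the files apart.
print(digest("shattered-2.pdf", "sha256"))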
All this suggests that nation-state level adversaries may well already be able to produce SHA-1 collisions. They may even be able to produce chosen-prefix collisions, and find viable certificates with colliding SHA-1 digests (though since the MD5 attacks, certificates are supposed to contain carefully-placed random data to make this harder). Other less well-resourced actors won't be far behind. In conclusion, it's long past time to dump SHA-1 as a digest function for certificates, documents, binaries and elsewhere. If you use SHA-1 inside an HMAC, the problem is much less serious.
First, HMAC is still secure even if the underlying hash function is not collision-resistant. It is only necessary that the hash function be a pseudo-random function. The security consideration is that SHA-1’s output length is only 160 bits. Some agencies such as ENISA already consider this too short for future use.
If you use SHA-1 inside PBKDF2 for storing passwords, you’re not in danger from collisions, but you should probably reconsider your choice anyway. As we explained in a previous post, SHA-1 is easier and cheaper to implement in hardware than SHA-256 or SHA-512, and hence leaves password files more vulnerable to brute-force dictionary attackers.
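Migrating away from SHA-1 is often a small change when the standard library already offers stronger digests. Here is a minimal Python sketch; the file name, example password and iteration count are placeholders for illustration, not recommendations:

import hashlib
import os

# Digest a document with SHA-256 instead of SHA-1.
with open("contract.pdf", "rb") as f:
    document_digest = hashlib.sha256(f.read()).hexdigest()

# Derive a password hash with PBKDF2 over SHA-256 rather than SHA-1.
salt = os.urandom(16)
password_hash = hashlib.pbkdf2_hmac("sha256", b"correct horse battery staple", salt, 600_000)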
Want to know what your applications and dependencies are using SHA-1 for? We have a tool for that :) | <urn:uuid:8877a5d5-b4e8-43ff-aca5-7c58c5aa2a26> | CC-MAIN-2022-40 | https://cryptosense.com/blog/google-announces-full-sha-1-collision-what-it-means | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00744.warc.gz | en | 0.948303 | 738 | 2.78125 | 3 |
Human Resources and HIPAA Training
HIPAA, which stands for the Health Insurance Portability and Accountability Act, is a law that was passed to guarantee the security and privacy of protected health information (PHI) as it travels throughout the healthcare industry. There are standards and safeguards that are required through the law for all those who can access this type of personally identifiable information to follow.
Most people know that doctors, nurses and other people working for medical practices have to comply with HIPAA, but what about the HR team within other organizations that have employer-sponsored health plans? During the course of handling these health plans and benefits, HR employees often have access to PHI. In order to guarantee that all employees are aware of the security procedures and protocols, HIPAA training is required for all of these people, including Human Resources staff members.
What is PHI?
According to HIPAA, those who have the potential to access protected health information throughout the course of their job must be trained in and comply with all the requirements. Since needing to follow the requirements of HIPAA is contingent on accessing PHI, we will define what that is so that HR employees will know whether or not they fall under HIPAA.
Simply put, PHI is any medical information that could potentially identify an individual, that is created or used during the course of providing healthcare services. The HIPAA Privacy Rule lays out 18 identifiers for PHI. A few of those are: full or last names, dates relating to birthday or treatment day, social security numbers, medical record numbers and any photograph that can identify an individual. These are only a couple of the identifiers, and the rest of the PHI identifiers can be reviewed here.
HR and HIPAA
If you are working in Human Resources, especially within the medical industry, you will regularly access and use protected health information (PHI) during the course of your work day. Since you have access to this information through your job, you are required to understand and comply with HIPAA to ensure that each patient’s PHI is kept secure. A wide variety of organizations handle this type of data as a part of their necessary operations just as HR departments do.
The main component of HR that leads to employees in that role having access to PHI is their work with and management of the organization’s sponsored health plans. While dealing with these insurance plans for all the employees, it is reasonable to assume that HR professionals would access PHI. The ability to view this private information is why HR professionals commonly need to be trained in HIPAA so that they can achieve compliance.
How Often Should Training Take Place?
Just like with HIPAA training for other departments, HR employees should be trained on HIPAA during their initial onboarding process and then be required to complete annual training each year after that. Beyond this, additional training may be needed if there are changes in the company’s policies and procedures relating to HIPAA or even if there are changes or additions to the law itself.
Do Employment Records Count as PHI?
One common question that HR teams may have about HIPAA is whether an employee’s general employment record counts as PHI and therefore must be protected to the same extent. The Privacy Rule does not pertain to employment records, therefore the information inside of this record is not considered PHI even if some of it is health related. Within the text of HIPAA, the Privacy Rule clearly lays out requirements and standards for how and when someone is able to access an employee’s PHI. However, these guidelines do not apply to an organization’s specific employment records that they create and utilize.
Even if an employer is seeking an employee’s doctor’s note or some other direct information from a healthcare provider, the supervisor or any other staff member cannot access the health information needed to do so, unless they have received explicit consent from that employee. HR staff members can be very valuable in this situation by making all employees aware of their rights to the protection and security of their health information. | <urn:uuid:98d16536-ac30-40ca-9b19-cbb5bf906804> | CC-MAIN-2022-40 | https://www.accountablehq.com/post/hipaa-training-for-human-resources | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00744.warc.gz | en | 0.962819 | 812 | 2.84375 | 3 |
It’s too late for a Halloween story, but year-round, it’s the things in the dark that scare us. This is true in the online world as much as the real world.
The Dark Web Defined
The web lets us instantaneously access information and resources all around the world by typing a URL into a browser, but there’s a part of the web that’s not easily accessible. URLs that aren’t known to the search engines are called the deep web, and much of that is innocuous, such as pages under development that aren’t yet released to the public. A small corner of the deep web is the more dangerous dark web, where anonymity is preserved and criminality thrives.
The dark web is a vibrant marketplace, filled with stolen data (account numbers, social security numbers, passwords, and other personal information) and tools for hacking. When a data breach occurs, it’s often made possible by malware sold on the dark web, and the stolen data often ends up for sale there, as well. For all the value this data has to its owners, there’s so much of it that it’s cheap for criminals to buy: according to Experian, social security numbers sell for just one dollar.
Dark Web Dangers for Business
As both the source of hacking tools and the destination for stolen data, the dark web is a threat to data security. The dark web is also an inspirational source for criminals. There are those hacking kits that are available, plus guides on how to deploy malware and ransomware, and how to open fraudulent accounts. Wannabe criminals who don’t have their own technical skills can rent a botnet to execute a DDoS attack or buy admin credentials to gain access to a company’s systems.
It can be used in other ways to harm businesses, too. There are sites that aggregate personal information—not just your accounts but also your social media—that can be used to threaten executives.
Learn more in What is the Dark Web and Why Should We Care?
Shine Light into the Dark Web
For businesses to protect themselves against the dark web’s dangers, the first step is to know when the dark web is brushing up against them. Monitoring tools allow companies to detect if any data stolen during a breach has been made available on dark web sites. You can make sure the data is yours through watermarking or fingerprinting.
In addition to monitoring for data from your business, you should also monitor the dark web for references to your business, including names of employees. Monitor for references to specific software and hardware you use, as that chatter can reveal vulnerabilities and potential attacks.
Beyond monitoring, make sure you have a strong cybersecurity process in place. Ensure patches are applied quickly, firewall rules are correct, and consider intrusion detection and data loss prevention software to help prevent theft of data. Make sure your employees are trained to detect phishing emails and to use safe computing practices such as strong passwords.
CCS Technology Group provides security services to help businesses against the dangers of the dark web. Get a dark web scan to learn how to stay safe at Halloween and year round. What you don’t know will hurt you. A Dark Web Scan can uncover if your data is for sale, and tell you if your personal or business data may be at risk. | <urn:uuid:2d9541e0-000a-4d8c-91bf-d4cead58874d> | CC-MAIN-2022-40 | https://www.ccstechnologygroup.com/discover-the-dangers-of-the-dark-web/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00744.warc.gz | en | 0.939981 | 694 | 2.796875 | 3 |
New Training: Understand the BGP Best Path Selection Algorithm
In this 12-video skill, CBT Nuggets trainer Knox Hutchinson explores the finer details of how BGP makes forwarding decisions. Watch this new Juniper training.
Learn with one of these courses:
This training includes:
57 minutes of training
You’ll learn these topics in this skill:
Accumulated Interior Gateway Protocol (AIGP)
AS-PATH (Like a Hop Count)
Multi-Exit Discriminator (MED)
eBGP vs iBGP
Lowest IGP Metric (Closest Exit)
Summarizing BGP Best Path Selection
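To make that selection order concrete, here is a greatly simplified Python sketch of the tie-breaking idea. It is illustrative only, not vendor code, and it omits several real steps such as origin, AIGP, route age and router ID:

def best_path(a, b):
    """Return the preferred of two candidate BGP paths (greatly simplified)."""
    # Higher local preference wins.
    if a["local_pref"] != b["local_pref"]:
        return a if a["local_pref"] > b["local_pref"] else b
    # Shorter AS path wins (AS-PATH acts like a hop count).
    if len(a["as_path"]) != len(b["as_path"]):
        return a if len(a["as_path"]) < len(b["as_path"]) else b
    # Lower multi-exit discriminator (MED) wins.
    if a["med"] != b["med"]:
        return a if a["med"] < b["med"] else b
    # Prefer paths learned over eBGP to paths learned over iBGP.
    if a["ebgp"] != b["ebgp"]:
        return a if a["ebgp"] else b
    # Lowest IGP metric to the next hop wins (closest exit).
    return a if a["igp_metric"] <= b["igp_metric"] else b

path_1 = {"local_pref": 100, "as_path": [64500, 64501], "med": 10, "ebgp": True, "igp_metric": 5}
path_2 = {"local_pref": 100, "as_path": [64500, 64501], "med": 10, "ebgp": False, "igp_metric": 3}
print(best_path(path_1, path_2) is path_1)  # True: the eBGP path wins before the IGP metric is compared

Real implementations evaluate many more attributes, but the structure is the same: compare one attribute at a time and stop at the first tie-breaker that differs.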
How Does BGP Routing Optimize Data Transfer?
BGP or Border Gateway Protocol is an internet routing protocol designed to determine the shortest path for sending data packets across the internet. This protocol leverages the use of a vast interconnection of autonomous systems or AS that help serve routing information for packets traveling across the internet. An Autonomous System or AS can be thought of as any large network that has multiple IP addresses with a unique Autonomous System number known as an ASN.
These autonomous systems help optimize the transmission of information through the use of routing tables. These autonomous system routing tables keep track of the transmission times between various neighboring autonomous systems. The value in managing these AS tables allows for packets in transit to follow the shortest route or optimized path based on AS table values. By leveraging these advanced routing table architectures, we as internet users can access web pages and internet services around the world in milliseconds! | <urn:uuid:688b0973-3e27-4915-b4c5-bbb7eeda52f4> | CC-MAIN-2022-40 | https://www.cbtnuggets.com/blog/new-skills/new-training-understand-the-bgp-best-path-selection-algorithm | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00744.warc.gz | en | 0.819465 | 347 | 2.75 | 3 |
Interactive technologies in the classroom boost morale, learning, and results alike.
Gone are the days when teachers would scratch on blackboards with dusty chalk to teach the class. Even now, overhead projectors are becoming a tool of the past. These days, it’s all about SMART Boards, which implement smart technology into the classroom to create a visual interactive display unlike any other. The results from these boards are pleasing, too. Smart Technologies found that collaborative technology and group learning make student success 3.4 times more likely.
Check out just some of the benefits of interactive displays in the classroom!
1) Boosts Social Learning
Group activities are often supported by technology, and SMART Boards are no different. Students in classrooms with the top technology indicate they are 13 percent more likely to feel confidence contributing to class discussions. Furthermore, the technology can lead to 20 percent higher levels of socio-emotional skill development. These developments help to foster collaboration and communication skills, making the students’ group experience that much more valuable.
2) Interactive Displays Move Anywhere
Anyone who has been around an interactive projector will know that no matter where you sit in the classroom, you have the same experience. Essentially, this technology eliminates the classroom walls, engaging all students in the room, front and back. New projectors can turn any surface into a touchscreen, meaning that teachers are able to reposition as necessary and move into any space.
3) Encourages Real-Time Collaboration
SMART Boards facilitate the collaboration necessary for students to develop these skills. Creating an audience response system on the interactive display makes it easier for students to use devices to participate in class surveys, quizzes, and games. The results can also be analyzed in real time.
Contact D&D Security by calling 800-453-4195.
What is TOTP?
The abbreviation TOTP stands for Time-based One-time Password Algorithm. It is a method that generates time-limited, one-time use passwords for logging into a system. In contrast to HOTP (HMAC-based One-time Password), the procedure is time-based and not event-driven. In addition, there is no validation window with multiple simultaneously valid passwords.
The Initiative For Open Authentication (OATH) developed the procedure. It is standardized in RFC 6238, which was published in 2011. TOTP passwords are often used as part of two-factor authentication together with apps or tokens to generate the passwords. If unauthorized persons gain knowledge of a TOTP password, they can hardly use it because it loses its validity after just a few seconds.
How the Time-based One-time Password Algorithm works
The Time-based One-time Password Algorithm uses the Keyed-Hash Message Authentication Code (HMAC) to calculate time-based passwords. The generation requires a secret key agreed between the user and the system he wants to log in to, and time information synchronized between the user and the system. The time information is Unix time, which counts the seconds since January 1, 1970 00:00 UTC.
The number of seconds is divided into 30-second time steps. The algorithm generates a hash value from this time-step counter and the secret key. The hash is truncated to a specific bit length and reduced, using a modulo operation, to a six- or eight-digit decimal number. Since the calculation yields the same value for the user and the system as long as their time information is synchronized, authentication works. If sufficiently synchronized and accurate time information is not available, authentication fails.
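A minimal sketch of this calculation is shown below. It follows the RFC 6238 recipe described above, but the shared secret is a made-up example, and real deployments should rely on a vetted library (such as pyotp) rather than hand-rolled code.

```python
# Minimal TOTP sketch: HMAC over a 30-second time counter, then dynamic truncation.
import hmac
import hashlib
import struct
import time

def totp(secret: bytes, timestep: int = 30, digits: int = 6, now=None) -> str:
    if now is None:
        now = time.time()
    counter = int(now // timestep)                 # Unix time divided into 30 s steps
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

shared_secret = b"example-shared-secret"           # agreed between user and server
print(totp(shared_secret))                         # both sides get the same value while clocks agree
```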
Differentiation between HOTP and TOTP
In addition to TOTP, there is another method for generating one-time passwords called HMAC-based One-time Password (HOTP). HOTP is event-driven rather than time-driven. In addition to the secret key, an event-driven counter is used to generate the one-time password, rather than the rounded seconds value.
The counter is incremented by one each time a new password is generated. On the server, the counter also increases after each successful authentication. Since this method can cause the counters to diverge, and constant synchronization of the counter is usually not possible, servers typically accept a range of one-time passwords, called a validation window. Authentication fails only if the one-time password falls outside this window, in which case the user’s token and the server must be resynchronized.
Since with TOTP, only one password is valid for about 30 seconds at a time, the method is considered more secure than HOTP.
Using the time-based one-time password algorithm for two-factor authentication
TOTP is often used to generate an additional authentication feature as part of two-factor authentication. It is generated using a special hardware token or an app on the user’s smartphone.
As a second factor, the time-dependent one-time password can only be used for a limited time thanks to TOTP. Since unauthorized persons can hardly come into possession of a one-time password and it is only valid for a short time, two-factor authentication via TOTP is considered extremely secure. However, the secret key used to generate the passwords must not become known to unauthorized persons.
As the vision of smart cities increasingly becomes reality, so too does the issue of security surrounding them. What can be done to ensure the security practices put in place are up to the job?
No longer is the smart city concept a far-off, futuristic idea, it is a very real initiative that governments all over the world are embracing. However, as the internet of things (IoT) continues to spur the growth of smart cities, it also elevates the need to address the security concerns that come with them.
When the initial wave of IoT implementations began rolling out a few years ago, the core focus was, understandably, on communication and connectivity. Giving ‘dumb’ objects such as televisions, lightbulbs and thermostats network connectivity was a huge technical achievement. So great were the rewards being reaped from these new connected ‘things’ that the identity and access management aspect was often overlooked. As the maturity and stability of IoT communications increases, we have gained a better understanding of the potential risks and vulnerabilities with respect to data loss.
The sheer volume of IoT devices provides a large attack vector for malicious operators. When considered on a citywide scale, where thousands of devices are communicating with both operators and each other simultaneously, the security implications are significant. Smart cities present the ideal target for hackers to create bot-net style networks of compromised devices, and use them to perform tasks other than those they were originally designed for.
For example, imagine a hacker compromising a city’s traffic flow system and turning all of the traffic lights around the city centre red during rush hour. Pair this with interference to local radio stations, so there is no way for citizens to be warned. As commuters take their usual routes to work, unaware of the issues, the entire city could become gridlocked in minutes. Not only does this cost the city money from a productivity perspective, but it also means the emergency services cannot get to call-outs quickly, potentially costing lives.
How can threats be mitigated?
The first step to preventing threats of this nature is to understand where they are coming from. The best way to do this is to ensure that every connected device within the smart city infrastructure, be it a car, a street lamp or an earthquake sensor, has a validated identity and is correctly attached to the network. If a device can be identified, it is that much easier to confirm that the data it is generating is genuine and can be trusted. Importantly, it also means that if the device is trying to do something it is not permitted to do, it can be identified and stopped.
Focus on effective risk management
It is unrealistic to expect any network to be entirely free from malicious behaviour. Even with the best security measures in place, with so many attack vectors and threats out there, eventually something/somebody will get through. As such, effective risk management is a key component in evaluating and responding to threats in any smart city. Controls and, more importantly, recovery plans should also be put in place to not only reduce the risk window, but to also actively respond once any issue is discovered.
Public infrastructure will always be a highly attractive target for criminals and terrorists. As such, it is fundamentally important that the steps taken to secure smart cities will be effective at that scale. Any smart city security programme needs to be regularly reviewed, to ensure new innovations are incorporated, and compliance is always met. This, when teamed with identity management and a sound knowledge of threat vectors, should be enough to protect the ever-growing smart city.
Simon Moffatt is a Solutions Director at ForgeRock. The ForgeRock mission is to transform the way organizations approach identity and access management, so they can deliver better customer experiences, strengthen customer relationships, and ultimately, drive greater value and revenue. We make it happen with the best commercial open source identity stack for securing anything, anywhere, on any device.
Maximizing Network Capacity by Minimizing Passive Intermodulation (PIM)
Modern wireless high-speed data networks use tightly grouped channels and complex modulation schemes to transmit vast amounts of data. Combined with ultra-sensitive receivers, these networks can suffer unanticipated but serious capacity losses if they are disturbed by Passive Intermodulation, or PIM for short. Modulating RF signals is necessary to transport information, but uncontrolled passive intermodulation can significantly degrade RF signal performance. Unfortunately, PIM can happen whenever more than one signal is channeled through one RF path. The result is unwanted non-linear behavior in passive components, including connectors and cable feeds, which start acting like mixers, modulators and frequency multipliers, creating unwanted spurious products.
PIM (Passive Intermodulation distortion)
PIM may become a major problem when Tx and Rx signals share one RF path. VSWR measurements are standard procedure after network installation; they determine how much RF energy the antenna emits and how much unwanted energy is reflected back into the transmitter. VSWR meters, however, are incapable of detecting non-linearity in system components. Validating a network’s PIM quality requires special PIM test systems. The preferred path to the best network quality is preventing PIM in the first place. To achieve this, it is paramount to use only high-quality, low-PIM components, apply proper installation procedures, and ensure excellent grounding of the RF system.
Why is it critical to eliminate PIM?
PIM can be generated whenever base stations transmit RF signals. The resulting intermodulation frequency products often fall within the receiving bands of a network. Since Rx signals are, by nature, very low power, the products interfere with regular voice and data traffic. Unwanted PIM interference may desensitize one or more receiving channels to such a degree that it not only creates a very high bit error rate (BER) that reduces network bandwidth, but may even drop calls altogether. In the worst case, it can lead to permanently unusable receiver channels. Loss of already scarce network capacity caused by PIM is unacceptable for high-volume, high-speed wireless data networks.
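The arithmetic behind this is straightforward to sketch. The snippet below computes the classic third-order products (2f1 - f2 and 2f2 - f1) of two transmit carriers and checks whether they land in a receive band; the frequencies used are hypothetical example values, not a real band plan.

```python
# Where do third-order intermodulation products of two carriers land?
# Frequencies below are hypothetical example values (MHz), not a real band plan.

f1, f2 = 930.0, 950.0                  # two downlink (Tx) carriers, MHz
rx_band = (890.0, 915.0)               # example uplink (Rx) band, MHz

third_order_products = {
    "2*f1 - f2": 2 * f1 - f2,
    "2*f2 - f1": 2 * f2 - f1,
}

for name, freq in third_order_products.items():
    in_rx = rx_band[0] <= freq <= rx_band[1]
    print(f"{name} = {freq:.1f} MHz -> falls in Rx band: {in_rx}")
```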
What causes Passive Intermodulation?
- Ferromagnetic metals, like iron, nickel and steel, show hysteresis effects with applied energy. The resulting signal levels are altered and the signal response is no longer linear.
- Dissimilar metal plating on connectors constitutes potential voltaic elements that act like a diode, causing unwanted random modulation effects.
- Corroded surfaces cause PIM. Corrosion may occur on unprotected component surfaces or through human influence (e.g., touching a connector pin with bare fingers).
- Irregular contact surfaces, even on a microscopic scale, can cause an inconsistent flow of charge carriers and generate inhomogeneous electromagnetic fields. Causes can be mechanical or electrical in nature: low-quality components, shearing from forced connections and disconnections, or spark craters created when “hot” connections are disconnected.
- Wind load and dissimilar expansion coefficients of tower and feed lines stress both connectors and cables, causing connection quality to deteriorate.
The wireless network of a large Australian operator showed serious bandwidth problems when extended with modern 3G and 4G technologies. Despite the latest technology being installed, overall network performance was not even close to expected levels. It turned out that the network itself generated extraordinarily strong interference during transmission. The culprits were inferior diplexers that generated strong PIM. Once these components were replaced with high-quality, low-PIM types, the network performed flawlessly.
Another operator in the US faced high rates of dropped calls in a certain market. Many sites were tested and swept by service crews but were found to be in great condition. Basic screening, however, did not detect that the newly installed wideband antennas generated too much PIM due to a manufacturing issue. Once the problem had been pinpointed with PIM measurement equipment, action could be taken. The faulty antennas were replaced and dropped calls virtually disappeared.
Minimizing Passive Intermodulation (PIM) is critical for achieving maximal system capacity and efficiency of wireless high-speed networks. PIM awareness is paramount for PIM prevention. Installers need to be trained properly to ensure their familiarity with the causes of PIMs and their expertise on how to prevent it. RF and DAS equipment manufacturers must deliver products that possess low PIM characteristics, but guarantee sustained specifications over time under environmentally harsh conditions. System designers must account for PIM in their DAS designs, consider low PIM products where appropriate, and pay special attention to material and plating of mating component surfaces. Finally, wireless operators need to maintain the performance level of their network, ensuring that PIM behavior does not deteriorate the operation of the system.
Clinical trials have never been more in the public eye than in the past year, as the world watched the development of vaccines against covid-19, the disease at the center of the 2020 coronavirus pandemic.
Discussions of study phases, efficacy, and side effects dominated the news. The most distinctive feature of the vaccine trials was their speed. Because the vaccines are meant for universal distribution, the study population is, basically, everyone. That unique feature means that recruiting enough people for the trials has not been the obstacle that it commonly is.
“One of the most difficult parts of my job is enrolling patients into studies,” says Nicholas Borys, chief medical officer for Lawrenceville, N.J., biotechnology company Celsion, which develops next-generation chemotherapy and immunotherapy agents for liver and ovarian cancers and certain types of brain tumors. Borys estimates that fewer than 10% of cancer patients are enrolled in clinical trials. “If we could get that up to 20% or 30%, we probably could have had several cancers conquered by now.”
Clinical trials test new drugs, devices, and procedures to determine whether they’re safe and effective before they’re approved for general use. But the path from study design to approval is long, winding, and expensive. Today, researchers are using artificial intelligence and advanced data analytics to speed up the process, reduce costs, and get effective treatments more swiftly to those who need them. And they’re tapping into an underused but rapidly growing resource: data on patients from past trials.
A password is a private and confidential piece of data. It has the ability to protect sensitive personal and business information. Because of this, attackers continuously target passwords in hopes of gaining access to data.
Let’s take a look at a few techniques most commonly used among attackers.
Brute force involves using an automated program that can guess passwords very quickly. This program may use several different techniques, including:
– Using a dictionary of common words.
– Using a list of the most common passwords.
– Failing other techniques, attempt combinations of letters and numbers.
Since account lockouts are generally tracked for each account separately, a variation of this technique is to guess the most common passwords against a list of accounts to avoid triggering the account lockout safety mechanism.
Research has shown that some of the passwords most commonly used on the Internet include “12345”, “123456”, “12345678”, “password”, and “iloveyou”.
Passwords composed of simple words, names, places, numbers, and even common combinations (such as ‘abc123’) are trivial to guess.
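To see why such passwords fall instantly, consider the sketch below. It checks a small word list against a stolen hash; the hash is plain SHA-256 purely for illustration (real systems should store salted, slow hashes such as bcrypt or Argon2), and the stolen value is a made-up example.

```python
# Illustration of why common passwords fall to a dictionary check almost instantly.
import hashlib

def sha256_hex(password: str) -> str:
    return hashlib.sha256(password.encode()).hexdigest()

# A captured (hypothetical) password hash the attacker wants to reverse.
stolen_hash = sha256_hex("iloveyou")

# A tiny "dictionary" of the most common passwords mentioned above.
common_passwords = ["12345", "123456", "12345678", "password", "iloveyou", "abc123"]

for guess in common_passwords:
    if sha256_hex(guess) == stolen_hash:
        print(f"Match found: '{guess}'")
        break
else:
    print("No match in this word list.")
```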
One of the oldest and simplest methods for someone to get your password is to simply steal it by:
– Watching over your shoulder as you type it.
– Finding a sticky note hidden under the keyboard (or worse, right on the monitor!).
– Viewing it in a text file on the computer when you step away for a coffee break.
Believe it or not, hackers can steal your password because, for whatever reason, you directly told it to them at some point in time.
Safelight’s employees are security experts who are also educators. The company combines real-world security skills with innovative adult learning methodologies, focusing on the best ways to teach information security to everyone in customers’ organizations.
Artificial Narrow Intelligence (ANI), or narrow intelligence, is the more formal name for what is often called weak AI. Narrow artificial intelligence is a type of artificial intelligence in which a learning algorithm is created to perform a single function. Any knowledge acquired through this activity will not be applied to other activities.
Artificial narrow intelligence is designed to complete a single activity without human help successfully. Language translation and image recognition are two examples of common uses for narrow AI.
What is artificial narrow intelligence?
The AI in our world today is known as Artificial narrow intelligence or “Weak” AI. Narrow AI is a type of artificial intelligence that has been created to accomplish a single activity, such as forecasting the weather, playing chess, or analyzing raw data to generate reports.
Artificial narrow intelligence systems can perform in real-time but retrieve data from a particular database. As a result, these technologies aren’t capable of handling other tasks.
Artificial narrow intelligence is not conscious, aware, or motivated by emotion in the same way that humans are. Even if Narrow AI appears to be considerably more sophisticated, it operates within a pre-determined, pre-defined scope.
Today’s machine intelligence is all Narrow AI. Google Assistant, Google Translate, Siri, and other natural language processing tools are examples of Narrow AI. Although these tools appear to be able to interact with us and process human language, they lack the capability for autonomous reasoning, self-awareness, consciousness, and genuine intelligence. In other words, they are unable to think.
The question is: what are the characteristics of an intelligent machine? In other words, what distinguishes a conscious computer from one that merely responds to queries? The difference is that a conscious computer can think independently, make judgments, and make decisions. We already have this ability as human beings. When we talk to Siri, it isn’t performing the conscious act of answering our questions. Rather, what Siri is capable of, and what it was designed to do, is to process human language and submit it to a search engine for retrieval.
This explains why, when we ask Siri or Google Assistant abstract questions like “What is the meaning of life?” or “How do I deal with a personal problem,” we frequently get evasive answers that make no sense or links to pre-existing articles from the Internet that address these topics. However, when we ask Siri what the weather is outside, we get a proper answer. These digital assistants were designed to handle basic inquiries.
People can perceive their surroundings, be conscious, and have emotionally charged reactions to events. In many cases, AI agents lack the flexibility and adaptability to think as we do. Even something as complicated as a self-driving car is labeled weak AI, because it is built from numerous artificial narrow intelligence systems working together.
Reactive AI and limited memory AI
Artificial narrow intelligence has made significant progress in the last decade, fueled by machine learning and deep learning breakthroughs. AI systems diagnose cancer and other illnesses using human-like intellect and logic replication.
Narrow AI uses NLP, natural language processing, to understand us and execute tasks. Natural language processing (NLP) allows AI agents to be programmed to communicate with people in a natural, personalized way by employing natural language understanding and speech and text analysis.
Artificial narrow intelligence has two forms: Reactive and limited memory. Reactive AI is extremely basic; it has no memory or data storage abilities, mimicking how the human mind responds to various stimuli without prior knowledge. AI with limited memory is more advanced, featuring data storage and learning capabilities that allow machines to draw upon previous experiences.
The most popular kind of AI is limited memory AI, which utilizes massive amounts of data for deep learning. Deep learning allows for personalized AI experiences like virtual assistants and search engines that keep track of your information and customize your future encounters.
Artificial narrow intelligence (Narrow AI) applications
We mentioned that almost all artificial intelligence systems we use today are of the narrow AI type. A few examples of these are as follows:
- Image and facial recognition systems automatically identify people and objects in images
- Chatbots and conversational assistants include popular virtual assistants, Google Assistant, Siri, Alexa, and customer-service chatbots.
- Self-driving vehicles such as autonomous cars, drones, boats, and factory robots are applications of narrow AI.
- Predictive maintenance models rely on machine data, often collected through sensors, to help predict when a machine part may fail and alert users ahead of time.
- Recommendation engines, which predict content a user might like or search for next, are forms of weak AI.
The difference between artificial narrow intelligence and artificial general intelligence
The term “artificial general intelligence” (AGI) refers to the original AI goal, computers that mimic human cognitive functions. It is worth noting, however, that we have not yet created such a machine, and AGI remains a concept.
Artificial General Intelligence (AGI) is a machine capable of comprehending or learning any intellectual activity that a human being can. AGI researchers, academics, and thought leaders believe it will be at least many decades before artificial general intelligence arrives. On the other hand, scientists have created many useful tools to realize the ambition of constructing thinking machines. The term “narrow AI” refers to all these technologies.
An easy distinction formula: If technology is good at specific activities, it’s weak AI. It’s strong AI if it behaves like a human and isn’t restricted to completing a particular operation.
Artificial narrow intelligence systems are excellent at executing a single operation or a restricted number of operations. In many cases, they outshine humans in their specific fields. However, they fail miserably when placed outside their problem domain. They also can’t pass on their expertise from one field to another. A bot created by Google-owned AI research lab DeepMind, for example, can compete at the highest level in the popular real-time strategy game StarCraft 2. However, that same bot would be unable to play another RTS game such as Warcraft or Command & Conquer.
Artificial narrow intelligence has succeeded in various applications, despite its inability to achieve human-level intelligence. Narrow AI algorithms handle Google Search queries, make suggestions on YouTube and Netflix, create Weekly Discovery playlists on Spotify, and power digital assistants, such as Alexa and Siri.
What’s Next: Artificial general intelligence and artificial super intelligence
We mentioned that artificial narrow intelligence is a building block of more advanced types of artificial intelligence. Having aimed with AGI to copy the cognitive abilities of the most intelligent being it knows, humanity now wants to reach far beyond this with artificial super intelligence.
Artificial general intelligence (AGI) is a term used to describe computers that are intelligent like humans. In other words, an AGI can complete any intellectual activity that a human being can accomplish. This is the artificial intelligence we see in blockbusters like “2001: A Space Odyssey” or “Her,” in which humans communicate with conscious, aware, and emotional computers and operating systems.
Currently, computers are better at processing data than humans are. Humans can reason abstractly, plan ahead of time, and tap into thoughts and memories to make well-thought-out judgments or develop innovative concepts. Our ability to be conscious creatures, as opposed to machines’ inability to do so, distinguishes us from them and gives us an advantage. Consciousness is difficult to describe, however, because it is primarily driven by our capacity to be sentient beings. As a result, emulating this type of intelligence in computers is extremely challenging.
Artificial general intelligence is expected to be able to solve problems, form judgments under uncertainty, plan, learn, integrate prior knowledge in decision-making, and be inventive and creative. However, machines will need to be able to experience consciousness to have real human-like intelligence.
Artificial super intelligence (ASI) will surpass human intellect in all areas, including creativity, general knowledge, and problem-solving. Machines will be able to display intelligence that humans haven’t yet achieved. This is the sort of AI that many people are concerned about, and it’s the type of AI which many visionaries believe will result in humanity’s demise.
I was not sure what to call this article. I first thought it should be titled, “Why The Johnson Criteria is Wrong.” We use this criterion to predict how far away we can see something using a specific camera and lens. The criteria define the threshold for detection, recognition, and identification (DRI). The industry has used this criterion since World War II. It has not been updated to reflect today’s technology and resolution requirements.
Resolution required for Recognition, Detection, Identification depends on the type of camera
By Bob Mesnik
There is some confusion in the industry about how much camera resolution is required to detect an object, recognize the type of object, or identify exactly what or who it is. The criteria differ between thermal and optical cameras, because resolution for thermal cameras and for optical IP cameras is measured differently.
For example, when defining the performance of a thermal camera we use the Johnson Criteria of “detection”, “recognition” and “identification” (DRI).
On the other hand, IP camera resolution performance is usually defined by the number of pixels in the sensor, and we are usually interested in the ability to identify a person.
How much resolution do you need? This article compares how resolution is defined using thermal and optical technologies.
This article was updated on 4/12/2018 to reflect new IP cameras
IP Camera manufacturers provide product specification sheets that help you select the right camera for your IP security and surveillance system. But, which specifications are important? They include such things as resolution, minimum light sensitivity, lens, WDR, signal to noise, etc. This article reviews the important camera specs, and how to avoid being fooled by specsmanship (from the marketing department).
The importance of each of the camera specifications depends on your objective and application for your IP camera system. For example, if you want to use the camera outdoors where it can get dark, then the low light specification is important.
If you are only using the IP camera indoors, you may be more interested in how wide a viewing angle you can achieve. Here is a review of the important specifications.
What is the right lens and resolution for your IP camera? When you put together your IP camera system, you want to make sure that the camera you select for each location meets your expectations. It is important to first know the objectives for each area you are viewing. Do you want to identify a person’s face, a license plate, or just detect a person walking far away? In general, the more detail you want, the higher the resolution you need. This article shows you how to determine the viewing area and distance you should expect.
Note: this article was updated on 8/15/2017 to correct an error in calculation.
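As a rough illustration of the kind of calculation involved, the sketch below estimates horizontal pixel density (pixels per foot) from a camera's horizontal resolution, its lens's horizontal field of view, and the distance to the target. The figures used are example values, and the identification threshold mentioned in the comment is a common rule of thumb rather than a standard.

```python
# Rough pixel-density estimate: how many horizontal pixels land on each foot
# of the scene at a given distance. Thresholds vary by vendor and lighting.
import math

def pixels_per_foot(h_resolution_px: int, hfov_degrees: float, distance_ft: float) -> float:
    # Width of the scene covered by the camera at that distance.
    scene_width_ft = 2 * distance_ft * math.tan(math.radians(hfov_degrees) / 2)
    return h_resolution_px / scene_width_ft

# Example: a 1080p camera (1920 px wide) with a 90-degree lens, target 20 ft away.
density = pixels_per_foot(1920, 90.0, 20.0)
print(f"{density:.1f} px/ft")  # compare against e.g. ~40+ px/ft often cited for identification
```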
Why should I use an IP camera when the analog cameras are so much cheaper?
That’s a question we get especially from those people who have been using analog CCTV (closed-circuit TV) systems for many years. Actually CCTV has been around for over 45 years. Olean, NY was the first municipality in the US to use cameras on its main street to help reduce crime (according to Wikipedia this was back in 1968).
Not only have the analog CCTV systems been around for a very long time, but they also haven’t really changed from their original capability. Well yes, they have gotten much cheaper, and there are efforts to use higher resolution cameras, but their capability hasn’t changed. The first systems were based on the TV standards established by the National Television System Committee (NTSC). The standard indicated that there should be 525 vertical TV lines, with a frame rate of 30 frames per second. Take a look at our video, How the Video Camera Works.
Internet security is based on a delicate ecosystem of trust between certificate authorities, browsers, and users. Some CAs are making potentially misleading and confusing claims about the digital certificates used to protect online communications and websites, prompting several security experts to call them out.
The CA Security Council, an alliance of certificate authorities formed in 2013 to collaborate on security and best practices for digital certificates, recently announced its London Protocol initiative at a CA/Browser forum event. The council hopes to address the growing problem of criminals obtaining legitimate certificates to use on phishing sites through its four-phase ten-month initiative.
CAs are blaming the influx of domain-validated certificates for the phishing problem.
Comodo CA, Entrust Datacard, Globalsign, GoDaddy, and Trustwave pledged to work together on policies and procedures to “improve website identity assurance” and present a report with recommendations on how other CAs can adopt and implement the new protocols.
This sounds like a laudable project, since phishing is one of the biggest online threats facing organizations and individuals. The problem with the London Protocol is that the participating CAs are blaming the influx of domain-validated certificates for the phishing problem.
Primer on Certificates
Transport Layer Security (an updated, more secure, version of the Secure Sockets Layer protocol) encrypts data sent between the site and the user’s machine so that eavesdroppers can’t see or modify the contents of the network traffic. There are two uses for TLS for websites: to protect traffic in transit to and from the site, and to assure users that the site they are visiting passed the identity checks and is owned by the entity it is claiming to be.
There are three main types of certificates used for TLS: Organization Validated (OV), Extended Validation (EV), and Domain Validation (DV). The idea is that the domain owner provided a business license or another form of identification, or passed an identity verification test in order to get the EV certificate. OV certificates have a lower burden of proof. The CA has verified that the owner of the website is really who it claims to be—the domain owner claiming to be Google really is Google. When the CA issues a DV certificate, it has checked the entity requesting the certificate actually owns the domain. The DV certificate is used to protect the traffic going to and from the domain so that someone can’t hijack network traffic intended for someone else.
EV proves identity. DV indicates ownership. Different purposes, different use cases. Both important. Both secure.
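One way to see the difference in practice is to look at the subject field of a site's certificate. The sketch below, which assumes outbound network access and uses a hypothetical hostname, pulls a server's certificate with Python's standard ssl module; a DV certificate typically carries only a common name, while OV and EV certificates also include organization details.

```python
# Quick look at what identity information a site's certificate actually carries.
import socket
import ssl

def cert_subject(hostname: str, port: int = 443) -> dict:
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # 'subject' is a tuple of relative distinguished names; flatten it to a dict.
    return {k: v for rdn in cert["subject"] for (k, v) in rdn}

print(cert_subject("example.org"))
# A DV cert usually yields little more than {'commonName': ...};
# an OV/EV cert adds fields such as 'organizationName' and 'countryName'.
```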
Free and low-cost SSL/TLS certificates have been around for years, but they have gained prominence in recent years as Let’s Encrypt, a free certificate authority from the Internet Security Research Group hosted by the Linux Foundation, has gained steam in its goal to make encryption the default for the web. Let’s Encrypt makes it easier for website owners to obtain, manage, and renew DV certificates (protect traffic) and has issued over 315 million certificates since its inception in 2015. The side effect is that it is also easier for criminals to get certificates for their malicious sites. Last year, The SSL Store, a division of security company Rapid Web Services, warned that Let's Encrypt had issued 15,270 certificates to domain names that have ‘PayPal’ in the name, and that well over 95 percent of those sites were malicious.
DV Not the Issue
Organizations want to ensure users are going to websites where their information is properly protected, and that users are going to sites that aren’t pretending to be something else. That's understandable. Goals conflict when browsers mark websites with DV certificates as “Secure” to indicate the connection is encrypted and users translate “Secure” to mean safe. Users are not very good at recognizing that a site marked “Secure” can still be malicious, but CAs hawking EV certificates as the better alternative to DV is not the answer.
Some CAs have been pushing the narrative that DV certificates are more prone to phishing attacks than EV certificates because criminal enterprises can’t pass the identity-verification process while masquerading as well-known brands. Theoretically that is true, but that assumes the CA in question has stringent rules for vetting identity and is adhering to the CA Browser Forum Baseline requirements for certificate assurance. That’s a big “if.”
“This claim is made without any recognition or acknowledgement that their identity-vetting methods and processes are completely unreliable as practiced. It's a bit like hawking ‘certified-organic’ on your snake oil, and hoping no one will look into what it actually is,” Ryan Sleevi, a software engineer working on the Google Chrome team, wrote on Twitter. In the same thread, Sleevi pointed out the methodology flaws in CA-supported research claiming EV certificates prevent phishing, such as a “flawed and arbitrary dataset,” ambiguous definitions of “safer,” and cherry-picking which certificates to analyze.
The recent primary focus on phishing does not fulfill the purpose for which CASC was created.
Unfortunately, it looks like the London Protocol is continuing this line of argument with its emphasis on identity assurance for websites. Participating CAs will focus on ways to “reinforce the distinction between Identity Websites and websites encrypted by domain validated (DV) certificates, which lack organization identity,” the CA Security Council wrote in its statement.
“[The] FUD from the CA Security Council is deafening,” security expert Troy Hunt wrote on Twitter. While he was specifically referring to a marketing video from the council, the comments were part of his response to CAs touting the benefits of EV certificates in the wake of the London Protocol announcement.
By trying to distinguish between EV and DV certificates, the CA Security Council is going in the opposite direction of the browsers. Users automatically assume that HTTPS and the presence of the padlock icon means the site is good even if the website address looks a little dodgy, because major browsers currently display the name of the organization and the word “Secure” (EV certificates) or just the word “Secure” (DV certificates) in the URL bar. Google is currently testing removing those cues in Google Chrome and relying on new (and simpler) indicators marking HTTP sites as “Not Secure.” Currently, Chrome is the only browser testing this change, but in many things related to website security, where Google leads, other browsers follow.
If certificates were really the problem, the world’s most-phished websites, which include Facebook, eBay, Amazon, Netflix, and Pornhub, would not be betting their multi-million dollar businesses on DV certificates. “If you buy an EV cert for your site, that doesn't stop a phishing site getting a DV cert which people trust just as much because 99.x% of people have no idea what an EV cert is anyway!” Hunt wrote, accusing the CA Security Council of caving into marketing pressure from from member CAs.
We need to assume the Internet is secure by default and using HTTPS.
DigiCert, the certificate authority which purchased Symantec’s website security business last year, recently withdrew from the CA Security Council over disagreements over the council’s fixation with phishing and EV certificates.
“DigiCert is electing to withdraw from the CA Security Council (CASC) as we believe CASC is moving in a direction that DigiCert does not support,” Jeremy Rowley, vice-president of business development and legal at DigiCert, said in a statement. DigiCert “strongly believes in the value of identity provided by CAs” but also believes "EV certificates can be significantly strengthened without negatively impacting the ability of legitimate businesses to get EV certificates,” Rowley said.
"The recent primary focus on phishing does not fulfill the purpose for which CASC was created. We would have preferred if the focus of CASC had instead broadened to address the many challenges and opportunities that the CA industry faces,” Rowley said. DigiCert senior director Dean Coclin said DigiCert will continue working with various industry standards groups and consortiums such as IETF and ICANN to improve PKI and strengthen TLS certificate standards for the web and in IoT.
In a separate statement, Rowley said the London Protocol originally envisioned CAs consuming high-quality threat intelligence information from multiple sources, but that plan was abandoned in favor of building out a common database which participating CAs will query to find related phishing reports before issuing EV or OV certificates.
“The goals of the London Protocol are much less ambitious, with ad hoc data being shared among participants strictly focused on the problem of phishing,” Rowley said. “We would prefer that if CAs are going to engage in website monitoring and information sharing, that it would address the full spectrum of fraud and abuse that exists.”
EV proves identity. DV indicates ownership. Both important. Both secure.
Security experts are on the side of more sites using HTTPS, whether that is EV, DV, or OV certificates, and not perpetuating the myth that only sites collecting data need to be on HTTPS.
“I'm not against EV, I'm against the misrepresentation of what it achieves,” Hunt said."By all means, go and get an EV cert, but don't expect it to make an ounce of difference to your customers getting phished.”
We need to assume the Internet is secure by default and using HTTPS, and to call out sites still on HTTP. The assumption isn’t far-fetched, since Mozilla’s latest telemetry results show that 70 percent of websites loaded by the Firefox browser used HTTPS.
The CA should not be trying to handle both privacy and identity. “I don’t think that should be the CA’s responsibility, we already have both browser vendors and ISPs doing this,” Hunt wrote.
Exploring Model Insights in DataRobot
The Models > Insights tab provides several additional graphical representations of model details. Some are model agnostic and applicable to any model or the data as a whole, while others are representations of model details that apply to a particular model that you select.
- Tree-based Variable Importance provides a ranking of the most important variables in a model by using techniques specific to tree-based models.
- Hotspots indicate predictive performance as a set of rules—the rules being combinations of feature values of a subset of important features.
- Variable Effects illustrate the magnitude of existing and derived features by way of coefficient values.
- Word Cloud visualizes the relevance of text related to the target variable.
- Anomaly Detection provides a summary table of anomalous results sorted by a scoring of the most anomalous rows.
- Accuracy Over Space provides a visualization for how predictions change over time for Regression projects.
- Text Mining, similar to Variable Effects, visualizes the relevancy of words and short phrases, and also by way of coefficient values.
Now let’s take a look at each in greater detail.
Tree-Based Variable Importance
Tree-based Variable Importance shows the sorted relative importance of all key variables driving a specific model, relative to the most important feature for predicting the target. In models based on random forests, this can be derived using entropy or Gini calculations, which are based on measurements of impurity or information gain.
In the dropdown list shown in Figure 2 are all tree-based models in the project, and each is available to be selected and displayed. This is helpful to quickly compare models. It is useful to compare how feature importance changes for the same model with different feature lists. Generally, we recommend using Feature Impact to understand a model, but tree-based variable importance may provide insights.
For example, a feature that is recognized as important on a reduced dataset might differ substantially from the features recognized on a full dataset. Or if a feature is included in only one model out of the dozens that DataRobot builds, it may not be that important. If this is the case, excluding it from the feature set can optimize model building and feature predictions.
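For readers who want to see the underlying idea outside DataRobot, here is a rough analogue using scikit-learn's impurity-based importances on a public toy dataset. This is not DataRobot's implementation or API, just the same Gini-based ranking concept, shown relative to the most important feature.

```python
# Rough analogue of tree-based variable importance using scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(data.data, data.target)

# Rank features by mean decrease in Gini impurity, relative to the top feature.
ranked = sorted(zip(data.feature_names, model.feature_importances_),
                key=lambda pair: pair[1], reverse=True)
top = ranked[0][1]
for name, importance in ranked[:10]:
    print(f"{name:30s} {importance / top:.2f}")
```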
Hotspots
This investigation tool shows hot spots and cold spots: simple rules with highly predictive performance either in the direction of the target (a hot spot) or in the opposite direction (a cold spot). These rules are often good predictors and can easily be translated and implemented as business rules.
Note that Hotspots are available when you have a rule fit classification or regression model, requiring at least one numeric feature and fewer than one hundred thousand features.
In Figure 3, we see the size of the spot, which indicates the number of observations that follow the rule, and the color of the spot, which indicates the difference between the average target value for the group defined by the rule and the overall population.
Variable Effects tells us the relevance of different variables, many derived from raw features in the model. The variable effects chart shows the impact of each variable in the prediction outcome. Notably, this chart is useful to display and compare variables via different constant splines from applicable linear models. This is useful to ensure that the relative rank of feature importance across models doesn’t vary wildly. If in one model, a feature is regarded to be very important, but in another model it is not very important, then it’s worth double-checking both the dataset and the model with Variable Effects.
You can sort the Variable Effects by the dropdown menu at the bottom by coefficient value or alphabetically by feature name.
Word Cloud
This tool displays the most relevant words and short phrases in a word cloud format. The size of the word indicates its frequency in the dataset and the color indicates its relationship to the target variable.
Text features can contain words that are highly indicative of a relationship to the target variable. You can use the Word Cloud to easily view and compare text based models in the dropdown list, but it’s also available in the Leaderboard for a specific model via the Understand division.
Also referred to as outlier and novelty detection, Anomaly Detection is an unsupervised method for detecting abnormalities in your dataset. Similar to supervised learning, anomaly detection works on historical data, but is unsupervised in that it does not take the target into account when making predictions. DataRobot does this by simply ignoring the target when building anomaly models.
Because you still do enter a target, however, DataRobot also can build accurate non-anomalous models. (Anomaly detection will be discussed in greater detail in a future article.)
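As a rough outside-the-product illustration of the concept, the sketch below scores rows for anomalousness with an isolation forest on synthetic data. DataRobot's anomaly detection blueprints work differently internally; the point is only that no target is used when scoring rows.

```python
# Minimal unsupervised anomaly-detection sketch with an isolation forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_rows = rng.normal(loc=0.0, scale=1.0, size=(500, 3))
odd_rows = rng.normal(loc=6.0, scale=0.5, size=(5, 3))      # injected outliers
X = np.vstack([normal_rows, odd_rows])

detector = IsolationForest(random_state=0).fit(X)
scores = detector.score_samples(X)        # lower score means more anomalous

most_anomalous = np.argsort(scores)[:5]   # indices of the five most anomalous rows
print("Most anomalous row indices:", most_anomalous)
```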
Accuracy Over Space
Accuracy Over Space provides a spatial residual mapping, enabling you to assess model fidelity for a selected model. It provides similar visualizations to Location AI ESDA (Exploratory Spatial Data Analysis), but allows you to explore prediction error metrics across all data partitions.
Lastly, the Text Mining chart displays the most relevant words and short phrases in any features detected as text. Like Variable Effects, you can use the dropdown list at the bottom of the page to sort by coefficient value or alphabetically by feature name.
Find out more about the insights available in DataRobot by visiting our public documentation portal and navigating to the Insights section.
Human-Machine Interface (HMI) is the hardware or software through which an operator interacts with a controller. An HMI can range from a physical control panel with buttons and indicator lights to an industrial PC with a color display running dedicated HMI software. Human-machine interface technology has been used in different industries like electronics, entertainment, military, medical, etc. Human-machine interfaces help in integrating humans into complex technological systems. Human-machine interfaces are also known as man-machine interfaces (MMI), computer-human interface or human-computer interface. HMIs are often seen on printers, your GPS, or even inside one of the new Teslas.
Source: NIST SP 800-82 Rev. 2
Additional Reading: Bardstown Bourbon Upgrades HMI for Strategic Connectivity
A PACS (physical access control system) can take a mechanical form, such as controlling physical access to a room with a key, or it can be an electronic access control system used as a security countermeasure. PACS can be used to control employee and visitor access to a facility and within controlled interior areas. Within the federal government, compliant PACS solutions are made up of three distinct components: (1) the infrastructure, (2) the certificate validation system, and (3) Personal Identity Verification (PIV) card readers.
The PACS infrastructure is made up of many compatible and interoperable software and hardware components that may include the software application and server (head-end), database, panels, door controllers, and a workstation. The PACS infrastructure typically interoperates with Card Management or Credential Management Systems (CMS), Intrusion Detection Systems (IDS), Video Management Systems (VMS), and Visitor Management Systems.
The line between physical and logical access control is often unclear. When physical access is controlled by software, for example when the smart card chip on an access card and an electric lock grant access through software (see checklist: An Agenda for Action for Evaluating Authentication and Access Control Software Products), that element should be considered a logical access control. That being said, incorporating biometrics adds another layer to gaining entry into a room, and this is considered a physical access control: identity authentication is based on a person’s physical characteristics. The most common physical access controls are used at hospitals, police stations, government offices, data centers, and any area that contains sensitive equipment and/or data.
The News: The ongoing coronavirus COVID-19 pandemic has received saturation coverage not only in the mass media, but in specialized scientific and research channels. In the public interest, many news outlets have put their COVID-19 coverage outside their paywalls.
One of the most useful of these free news sources for the technical community has been the MIT Technology Review. If interested, here is a link to their free coverage of the coronavirus COVID-19 outbreak. One of the recent articles on this site is a comprehensive discussion of research into how AI can potentially be used in the battle against pandemics, and how artificial intelligence (AI) is already helping to stem the outbreak’s tide. Just as important, the article highlights the limitations of current AI tools, approaches, and implementations in dealing with the current pandemic.
How AI Can Potentially be Used in the Battle Against Pandemics
Analyst Take: AI is playing many roles in the world’s battle against the COVID-19 pandemic. But AI is certainly not a panacea and its role in helping stem the tide of infections and mortality should not be overstated.
Pandemics have afflicted the human race for as long as our species has walked the Earth. As sure as the sun rises every morning and sets every evening, these devastating viral outbreaks will return and wreak havoc.
But that doesn’t mean the human race is defenseless in the battle against contagious disease. Indeed, we have added a powerful new weapon — AI — in this struggle, and it’s proving its worth in the present coronavirus COVID-19 pandemic. While there are no doubt many benefits AI can provide, AI also has its limits, which is what I wanted to discuss here. With that in mind, let’s go a little deeper.
AI Can Be Used as an Early Warning System
One immediate benefit of AI is that it can be used as an early warning system. AI enables epidemiologists both to spot emerging outbreaks and to predict how they might spread from region to region and perhaps even from one demographic cohort to others.
For example, vendor BlueDot uses an AI-based solution to monitor outbreaks of infectious diseases around the world. In late December 2019, more than a week before the World Health Organization officially flagged the COVID-19 outbreak, BlueDot alerted governments, hospitals, and businesses to an unusual spike in pneumonia cases in Wuhan, China. The outbreak was also identified early by AI-based tools HealthMap (at Boston Children’s Hospital) and Metabiota in San Francisco.
However, AI-automated early warning systems may find themselves racing against online social channels for the distinction of being first to detect a new outbreak in the offing. The MIT Technology Review article cited earlier reported that human teams spotted the current coronavirus COVID-19 outbreak on the same day as these AI-powered research tools. That’s not surprising, considering outbreaks tend to have highly localized initial stages, in which at least one close-up observer raises an alarm. We are seeing that play out today, as physicians and healthcare workers the world over take to social media channels, private or otherwise, to share concerns, thoughts, and observations, and as citizens of affected areas share their stories. Social media channels are powerful conduits of information.
Bottom line, AI can be used as an early warning system, but let’s not overlook, or underestimate in any way, the power of human to human contact.
The Potential for AI as Infection-Path Predictor
I think there is great potential for AI being used as an infection-path predictor, predicting how COVID-19 or any other outbreak is likely to spread, and, just as importantly, how tactics such as “social distancing” might curtail or even lessen its severity.
In theory, it might be possible to run unsupervised learning algorithms that simulate all possible evolution paths, experiment digitally with how well potential vaccines perform in each scenario, and even determine whether and how the viruses develop resistance through mutations. But this approach is a bit far-fetched to offer near-term hope in the current pandemic. That’s due to the need for rapid advances in the science, modeling, and computing capabilities that would be needed to pull it off.
Another practical obstacle is the need to find sufficient amounts of behavioral, social, clinical, airline, and other data sources of sufficient quality to build and train accurate enough machine-learning models of an outbreak’s likely evolution path. The companies that detected the current COVID-19 outbreak were using NLP algorithms to look for relevant reports coming from news outlets and official health care channels in different languages around the world. However, especially in fast moving viral outbreaks, those sources may be too vague, inconsistent, and biased by political, cultural, and other factors to offer the proverbial “single version of the truth.”
In addition, the chances of pooling this data from diverse global sources in the middle of a fast-moving pandemic are not great, and the difficulties of harmonizing and cleansing it all are so great that the effort would take longer than the pandemic itself to come to fruition.
It’s also next to impossible to find reliable data on “social distancing” variables of a behavioral nature, such as the incidence of handshaking, the frequency with which people wear surgical masks and gloves in public, the average size of public gatherings, and so on. As one of the researchers in the MIT article states: “We…don’t really know what behaviors people are adopting—who is working from home, who is self-quarantining, who is or isn’t washing hands—or what effect it might be having. If you want to predict what’s going to happen next, you need an accurate picture of what’s happening right now.”
Where behavioral factors come in, there's the need for a high degree of predictive precision to drive proactive alerting of the relevant countries, regions, and authorities. The efficacy of such tactics as quarantines, school closures, and vaccination of at-risk demographics depends on having early enough intelligence so that outbreaks can be squelched before they spin out of control.
But having early warning is not enough in a fast-moving public emergency. All the AI-driven insights in the world are powerless in a situation such as the one we're facing in the United States, with a federal government that has been slow to respond, less than transparent, and difficult to trust to have the best interests of the public first and foremost.
No matter how powerful its tools and accurate its data, AI can’t immunize us against a political establishment that refuses to take effective timely action.
Using AI as Diagnostic Instrument
AI is being used to examine medical images for early signs of many diseases that human doctors might miss. In recent weeks, preprint research papers have begun to appear online in which machine learning has been shown to diagnose COVID-19 from CT scans of lung tissue.
However, this approach might not be effective as an early diagnostic, considering that physical signs of the disease may show up in scans only well after infection. Also, the paucity of training data on a disease so new makes it difficult to assess the predictive accuracy of the approaches in the research literature, especially where it concerns identifying subtle patterns in medical images.
Techniques such as few-shot learning and transfer learning might be used to train AI models to look for COVID-19 in the absence of much training data, but those approaches remain largely unproven for the current outbreak.
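To make the transfer-learning idea concrete, the sketch below shows the general pattern: start from a network pretrained on a large generic image dataset, replace its final classification layer, and fine-tune only that layer on a small set of labeled scans. This is an illustrative sketch only, not a validated diagnostic pipeline; the two-class labels and training details are placeholder assumptions.

```python
# Illustrative transfer-learning sketch (not a validated diagnostic model).
# Class labels, data, and training loop details are placeholder assumptions.
import torch
import torch.nn as nn
from torchvision import models

# Start from a backbone pretrained on a large generic dataset (ImageNet).
model = models.resnet18(pretrained=True)

# Freeze the pretrained feature extractor so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a 2-class head (e.g., "findings" vs. "no findings").
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on a small labeled batch."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The point of the pattern is that the scarce outbreak-specific data is only asked to adjust a small final layer, while the bulk of the network's learned features come from data that already exists in abundance.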
Exploring AI as a Research Discovery Tool
AI can accelerate access to a vast, constantly changing corpus of research literature, data, and analytical tools pertaining to outbreaks, their spread, and effective treatments.
This recent MIT Technology Review article discusses a new open database—known as CORD-19 (COVID-19 Open Research Dataset)—which contains over 29,000 coronavirus research papers. Researchers from several organizations released the Covid-19 Open Research Dataset, which includes papers from peer-reviewed journals as well as preprints from websites such as bioRxiv and medRxiv. The research covers SARS-CoV-2 (the scientific name for the coronavirus), Covid-19 (the name of the disease), and the coronavirus group. It was compiled at the request of the White House Office of Science and Technology Policy (OSTP).
The database, now available on AI2’s Semantic Scholar website, leverages AI to speed searches through academic literature. It incorporates natural-language processing models such as ELMo and BERT to map out the similarities between papers and create personalized feeds based on researchers’ interests. The OSTP also launched an open call for AI researchers to develop new techniques for text and data mining that will help the medical community comb through the mass of information faster.
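The similarity mapping described here relies on contextual language models such as BERT; as a much simpler stand-in, the sketch below ranks papers against a query using TF-IDF vectors and cosine similarity, just to illustrate the mechanism. The toy abstracts and query are invented for illustration.

```python
# Toy illustration of ranking papers by textual similarity.
# TF-IDF is used here as a simplified stand-in for contextual embeddings.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "Structure of the SARS-CoV-2 spike protein and receptor binding",   # invented
    "Epidemiological modeling of coronavirus transmission in cities",   # invented
    "Deep learning for pneumonia detection in chest CT scans",          # invented
]
query = "spike protein binding to the human ACE2 receptor"

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(abstracts + [query])

# Compare the query (last row) against every abstract and rank by similarity.
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
for score, text in sorted(zip(scores, abstracts), reverse=True):
    print(f"{score:.2f}  {text}")
```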
Though no one can dispute the value of this database or the need for more powerful AI tools to search it, it's clear that most insights that researchers might gain from them over the next 4-8 weeks will apply more to the next pandemic—which could be decades in the future—and probably won't come in time to be of much use in combatting the current outbreak. And most research studies included in the database now are probably derived from studying previous outbreaks, limiting their usefulness in devising strategies for dealing with today's unfolding emergency.
AI as Treatment Tool
AI can facilitate medical researchers’ investigations into pharmaceutical and other treatments to arrest COVID-19’s progress and possibly find a cure.
Though there’s no proven treatment yet, the World Health Organization has identified more than 70 drugs or “therapeutic” combinations thereof that are potentially worth trying. It’s highly likely that AI-based tools such as this experimental DeepMind offering are being used to explore how protein structures and interactions how the virus functions and how, conceivably, it can be neutralized. In addition, generative design algorithms use AI to produce millions of candidate biological or molecular structures and sift through them to highlight those that are worth looking at more closely for their possible efficacy.
However, this approach may be too little too late, because it can take months before a promising candidate emerges from the pack.
The Takeaway on the Potential of AI in Battling Pandemics
Clearly, AI is stepping up to the many challenges of dealing with the current coronavirus COVID-19 outbreak. It has proven to be an invaluable tool for detecting the pandemic's onset; predicting when, where, how, and at what speed it's likely to spread and evolve; diagnosing its incidence and severity; and discovering effective cures and treatments.
However, AI is largely standing on the sidelines when it comes to helping people, groups, businesses, and government agencies to cope with the outbreak. Though simulation tools like this one, developed by the Washington Post, allude to the possibility of "flattening the curve" of the pandemic's spread through "social distancing," the underlying technology doesn't seem amenable to packaging into a personal digital assistant of the sort that would be needed to help each of us avoid exposing ourselves to the virus in the normal course of living our lives.
Even if it were possible to have our own personal recommenders that steer us away from behaviors that might expose us to coronavirus COVID-19, many people would find these tools so intrusive and nagging as to be practically unusable.
Futurum Research provides industry research and analysis. These columns are for educational purposes only and should not be considered in any way investment advice.
The original version of this article was first published on Futurum Research.
James has held analyst and consulting positions at SiliconANGLE/Wikibon, Forrester Research, Current Analysis and the Burton Group. He is an industry veteran, having held marketing and product management positions at IBM, Exostar, and LCC. He is a widely published business technology author, has published several books on enterprise technology, and contributes regularly to InformationWeek, InfoWorld, Datanami, Dataversity, and other publications. | <urn:uuid:afbb7017-593d-41c5-9311-b95c5aa25926> | CC-MAIN-2022-40 | https://convergetechmedia.com/how-to-use-ai-in-a-pandemic/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00345.warc.gz | en | 0.951432 | 2,602 | 2.984375 | 3 |
Ofcom announced that it would allow British 3G telecom operators to increase the transmission power of their base stations in order to improve reception.
The request was initially made by Vodafone, but O2, Orange, T-Mobile and Three also backed the request to up maximum broadcast power. Ofcom subsequently consulted on raising the maximum power by 6dB, twice what Vodafone initially asked for, although network operator Three objected.
The raised maximum power limit does not mean all base stations will start transmitting at higher power; rather, it's an option available to networks when the additional oomph is deemed necessary.
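For readers unfamiliar with decibel arithmetic, the relationship between a dB increase and the corresponding power ratio is worth spelling out. The figures below assume a roughly 3dB rise originally requested and the 6dB rise consulted on, as implied by "twice what Vodafone initially asked for"; they are illustrative only.

```python
# Convert a decibel increase into a linear power ratio: ratio = 10^(dB/10).
def db_to_power_ratio(db: float) -> float:
    return 10 ** (db / 10)

print(db_to_power_ratio(3))  # ~2.0x  - roughly the doubling Vodafone is said to have asked for
print(db_to_power_ratio(6))  # ~3.98x - the larger increase Ofcom consulted on
```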
"Considering the citizen / consumer interest, we believe that the increase will be of benefit to consumers because it has the potential to facilitate the provision of better in-building penetration, wider coverage in rural areas and reduced impact on the environment and visual amenity for a reduced requirement for new masts," said Ofcom in a statement.
Ofcom's consultation had received a number of comments from members of the public concerned with the effects of increased electromagnetic radiation. "Some detailed instances of ill-health which they felt could be attributed to the presence of masts," the regulator said.
Ofcom’s response drew attention to existing regulations set by the International Commission on Non-Ionizing Radiation Protection (ICNIRP), citing that power levels are well below those recommended by the ICNIRP. The regulator went further and cited the 2006 World Health Organisation advice sheet which said "there is no convincing scientific evidence that the weak RF signals from base stations and wireless networks cause adverse health effects”.
Scientific evidence will probably fail to sway correspondents convinced their sore toe was caused by mobile phone radiation, but nevertheless Ofcom is required to address such concerns.
To subscribe to our Twitter feed, head over to @PCR_online. | <urn:uuid:c8a1085a-9c86-4a3f-b625-a3230050eb83> | CC-MAIN-2022-40 | https://www.pcr-online.biz/2010/09/10/ofcom-clears-british-3g-power-boost/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00345.warc.gz | en | 0.961417 | 380 | 2.53125 | 3 |
Developments in 5G, artificial intelligence (AI) and IoT technologies are leading towards what is being hyped as the era of intelligent connectivity. While this fusion of technologies has the potential to change the way we live and work, there are still uncertainties and challenges that need to be addressed before the benefits can be realized.
Intelligent Connectivity: the fusion of 5G, AI and IoT
Intelligent connectivity is a concept that foresees the combination of 5G, the Internet of Things and artificial intelligence as a means to accelerate technological development and enable new disruptive digital services. In the intelligent connectivity vision, the digital information collected by the machines, devices and sensors making up the Internet of Things is analysed and contextualised by AI technologies and presented to users in a more meaningful and useful way. This would both improve decision-making and allow delivery of personalised experiences to the users, resulting in a richer and more fulfilling interaction between people and the environment surrounding them.
As artificial intelligence becomes increasingly sophisticated thanks to advances in computing power, the education of data scientists and the availability of machine learning tools for creating advanced algorithms, the Internet of Things is getting closer to becoming a mainstream phenomenon. 5G represents the missing element to bring these technologies to new levels and enable the intelligent connectivity vision. The ultra-fast and ultra-low latency connectivity provided by 5G networks, combined with the huge amount of data collected by the Internet of Things and the contextualisation and decision-making capabilities of artificial intelligence technologies will enable new transformational capabilities in virtually every industry sector, potentially changing our society and the way we live and work.
Intelligent connectivity is expected to play a major role in five key areas:
- Transportation & logistics,
- Industrial & manufacturing operations,
- Healthcare,
- Public safety & security, and
- Other sectors.
1. Transportation & Logistics
In the transportation sector, intelligent connectivity could lead to an increased level of road safety and efficiency, resulting in smoother traffic flow. In the logistics sector, meanwhile, intelligent connectivity has the potential to improve efficiency and flexibility in the delivery of goods, making logistics faster and cheaper.
USE CASE 1: AI-based driver assistance and traffic monitoring systems.
Exploiting the low latency of 5G networks, road users and the roadside infrastructure could collect and share an abundance of real-time information. For example, data about the location and speed of vehicles, bikes and pedestrians on the road, weather and road surface conditions, traffic jams and other obstacles on the road. Intelligent traffic monitoring systems and AI-based on-board computers would then use this information to provide assistance to the drivers. For example, helping them avoid accidents and collisions with other vehicles, or dynamically planning the best route to the destination.
USE CASE 2: Self-driving vehicles.
Eventually, 5G and AI will lead to reliable self-driving vehicles. These will be provided with an AI-based on-board computer that, based on data both collected by on-board sensors and provided by roadside units and other vehicles via the 5G network, will be aware of the vehicle’s surrounding environment and able to adjust to any situation. Self-driving vehicles will also lead to new Mobility-as-a-Service models similar to what we have today with services like Uber but tailored for driverless public transport. The latter would eventually be less expensive than current public and private transport options, as they would allow savings in time and money needed to train and pay drivers.
USE CASE 3: Deliveries by unmanned vehicles.
5G networks will be able to support high volumes of both terrestrial and aerial unmanned vehicles, such as unmanned delivery robots and drones, and allow operators to precisely coordinate their movements, avoiding collisions with other unmanned vehicles, buildings or other static obstacles along their path. Drones, for example, are already a very promising means of delivering goods in a fast and secure way. Drones are particularly convenient when the end location is characterised by challenging terrain or congested roads, and they have a lower cost than current human-based delivery systems.
2. Industrial & Manufacturing operations
In the industrial sector, intelligent connectivity will lead to improved productivity and reduced human errors, while resulting in lower costs and increased worker safety. By enabling remote operations to industrial facilities, intelligent connectivity may also lower the need for on-site employees and thus increase the flexibility in choosing where to locate production facilities, as the latter would become independent of the geographical availability of skilled labour.
USE CASE 1: Factory automation and remote control of industrial robots.
5G’s high data-rates, ultra-low latency and high reliability would enhance the automation of industrial processes and the remote control of machines and robots. For example, machine learning algorithms can use data collected from sensors and cameras along a supply line to immediately alert an operator to any inconsistencies in the process, or the system could automatically correct the mistake in real time. 5G would also enable human operators to monitor and adjust the actions of industrial robots from a remote location and interact with them in real time using both haptic and visual feedback enabled by connected tools such as touch-sensitive gloves and virtual or augmented reality (VR/AR) headsets.
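As a very rough illustration of this kind of automated alerting, the sketch below flags readings from a single supply-line sensor that drift too far from a rolling baseline. The window size, threshold, and sensor stream are invented assumptions, and real deployments would use far richer models.

```python
# Minimal anomaly-alert sketch for a single supply-line sensor reading.
# Window size, threshold, and data are illustrative assumptions only.
from collections import deque
from statistics import mean, stdev

WINDOW = 50      # number of recent readings kept as the baseline
THRESHOLD = 3.0  # alert when a reading is more than 3 standard deviations away

history = deque(maxlen=WINDOW)

def check_reading(value: float) -> bool:
    """Return True (alert) if the reading deviates sharply from the rolling baseline."""
    alert = False
    if len(history) >= 10:  # need a minimal baseline before judging
        mu, sigma = mean(history), stdev(history)
        alert = sigma > 0 and abs(value - mu) > THRESHOLD * sigma
    history.append(value)
    return alert
```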
USE CASE 2: Remote inspections and maintenance, and worker’s training.
At the same time, tactile internet applications driven by Intelligent Connectivity would also enable conducting tasks such as inspections, maintenance and repairs remotely. This results in much lower costs and reduced risks related to operations in hazardous, inaccessible or inhospitable locations, such as in nuclear plants, oil rigs or mining sites. The same tools could also be used to perform or support workers training and simulate complex situations in a safe environment.
3. Healthcare
In the healthcare sector, intelligent connectivity will help provide more effective preventive care at a more affordable cost while allowing healthcare managers to optimise the use of their resources. In addition, intelligent connectivity could also facilitate remote diagnosis and enable remote surgery, potentially revolutionising access to medical care that today is limited to the geographical location of medical experts.
USE CASE 1: Remote health monitoring and illness prevention.
5G’s high availability and its support of a massive number of connections is expected to help accelerate the adoption of wearable devices used for the monitoring of different biometric parameters of the wearer. As these solutions become more commonplace, AI-based healthcare platforms will analyse the data collected from these devices to determine a patient’s current health status, provide tailored health recommendations and predict potential future issues. In addition, by having a more informed, real-time overview of the medical status of their patients, healthcare managers could optimise the use of their resources and make sure their clinics are always provided with enough medicine and medical tools.
USE CASE 2: Remote diagnosis and medical operations.
Tactile internet applications, enabled by the high speed, low latency, and ultra-high reliability delivered by 5G networks, will enable doctors to provide a full medical examination from remote locations with full audio-visual and haptic feedback, making it possible to provide a diagnosis anywhere at any time. With 5G and IoT, it would even be possible for doctors to perform remote surgery by operating specialised robots.
4. Public safety and security
Intelligent Connectivity has the potential to make cities safer and help governing bodies fight crime, mainly by improving the efficiency of video-surveillance, security systems and emergency services while reducing their costs.
USE CASE 1: Intelligent video-surveillance and security systems.
5G networks will facilitate the deployment of massive numbers of security alarms, sensors and cameras, and enable the transmission of real-time, high quality videos for enhanced remote surveillance and better assessment of crime scenes. On top of that, AI-based systems will automatically analyse activities, body language and facial expressions of suspects, detect crimes and spot offenders in real time, for example by tracking suspicious characters as they move among the fields of view of different cameras. In addition, by analysing data on past crimes, AI-based platforms will be able to predict future offences and help optimise the use of crime prevention resources.
USE CASE 2: Emergency services and border controls.
Massive amounts of 5G connected cameras, either fixed, mounted on moving vehicles, body-worn or installed in drones, will help control and coordinate emergency service operations. Remotely controlled or autonomous robots will replace humans for operations in hazardous environments, such as for looking for survivors in collapsed or burned-out buildings, while drones will be used to survey areas hit by disasters or to patrol coastlines and mountainous areas to detect smugglers and other unwanted situations.
5. Other Sectors
In addition to the applications described above, intelligent connectivity may enable innovation in many other contexts.
USE CASE 1: Virtual personal assistants, for example, could be further empowered by the combination of 5G and AI and make it much faster and easier to retrieve information, make reservations or buy goods.
USE CASE 2: Cloud-based gaming servers could allow players to enjoy videogames without the need for bulky and expensive equipment, while making their experiences more immersive through the use of AR/VR visors and devices with haptic feedback.
USE CASE 3: 3D hologram displays could provide users with a realistic feeling of a live sporting or music event in a location far away, while comfortably sitting at home or in a nearby location specifically equipped for that purpose.
Furthermore, the combination of AI capabilities with the massive capacity of 5G networks would further enhance the real-time collection and analysis of data from sensor networks, increasing the efficiency of how we use energy, irrigate fields or distribute goods while reducing waste and pollution. Several existing 5G trials could already provide a solid foundation for intelligent connectivity trials.
From vision to reality
The applications described provide a good representation of what the intelligent connectivity concept may enable. While some may seem farfetched, others are already possible today and, in some cases, already deployed – although leveraging different connectivity technologies than 5G, such as LTE, Wi-Fi or fibre. The powerful combination of 5G, AI and big data coming from the IoT would provide the basis to enable the most futuristic applications, while allowing the ones already possible today to reach their full potential.
It is too early to declare that the era of intelligent connectivity is already here.
Technologically speaking, the elements required to enable this vision are yet to reach maturity, in spite of the excitement in the mobile industry. Applications that involve aspects such as VR/AR, the tactile internet or self-driven vehicles are still at a very early stage of development, with lots of technical and regulatory issues that still need to be solved. IoT Analytics believes it will take at least another five to ten years before these issues are solved and the application scenarios described earlier become viable.
5G is also still at an early stage of deployment, however in the past couple of years mobile operators have made quick progress with the development and testing of 5G technologies, and the standardisation process is expected to be completed in 2020 with 3GPP release 16. Only after that point will the industry start to see the first 5G networks that can really provide the performance improvements promised by the technology.
The main challenge is to ensure intelligent connectivity technologies fit with the real needs of the industry and society.
Reaching such technological maturity is not the main problem here. Technologically speaking, the industry will get there eventually. The real challenge is to make sure these intelligent connectivity technologies fit the actual needs of industry and society. Technology providers often tend to push technologies heavily without properly assessing the real demand for them. By doing so, they risk finding that no one on the other side is really ready to embrace them.
The 5G example is probably the most prominent. There is no doubt that 5G will enable improvements in virtually all industries. However, operators have realised that there are no compelling business cases yet to motivate the huge investment it would require to build the infrastructure, especially when it comes to ultra-low latency and massive connectivity. This doesn’t mean that no one needs it, but simply that the expected improvements in terms of productivity, efficiency and monetary returns are not deemed enough to justify its cost. We believe this is the aspect where the industry still has to mature, in determining the actual needs of the technology users.
Media and entertainment will be the primary business case for 5G in the short run.
In the short term, operators have identified the media and entertainment industry as their primary business case for 5G. Consumers want more bandwidth and higher speed to be able to watch Netflix, do video calls and stream live videos to their Facebook and Instagram accounts from any place at any time. The story is simpler here, and easier to sell. Following that, the automotive sector seems to be quite promising as well. The connected car environment is quite mature in an IoT context, new consumer-facing services have emerged, and people are willing to spend money for it. But beyond these two sectors, the story seems to be more uncertain.
There are a lot of promising use cases, but there is still much to be done before we reach the intelligent connectivity era. Technologies have to mature, policies have to be perfected and – lots of – money has to be spent. But if all the parties interested in this cooperate towards this common goal, we’ll eventually get there. For more information on intelligent connectivity check out the IoT Analytics’ existing LPWAN connectivity report and upcoming report on 5G.
If you have questions, you can contact the author, Eugenio Pasqua.
Are you interested in continued IoT coverage and updates? Subscribe to our newsletter and follow us on LinkedIn and Twitter to stay up-to-date on the latest trends shaping the IoT markets. For complete enterprise IoT coverage (Enterprise subscription) with access to all of IoT Analytics’ paid content & reports including dedicated analyst time, contact us now and tell us what you are specifically interested in.
This article was also published in the IoTNow magazine. | <urn:uuid:e253d109-03c8-4588-b8c6-cddb6fbc6d54> | CC-MAIN-2022-40 | https://iot-analytics.com/how-5g-ai-and-iot-enable-intelligent-connectivity/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00345.warc.gz | en | 0.938503 | 2,884 | 2.96875 | 3 |
Contingency Table Tool
One Tool Example
Contingency Table has a One Tool Example. Visit Sample Workflows to learn how to access this and many other examples directly in Alteryx Designer.
Use Contingency Table to look at up to four variables/fields and determine how they relate to each other. The Contingency Table tool has a similar use to that of the Frequency Table tool. The tool produces two outputs: a data output that lists all of the combinations of values between the fields selected, with a frequency and a percent column, and a report output that produces tables to show the combinations of values between the fields and also includes some additional row and column percentages.
If you are just analyzing two fields, you can also select to output the chi-square statistic to be included with the report. A chi-square statistic is used to investigate whether distributions of categorical variables differ from one another.
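If you want to sanity-check the same statistic outside of the tool, it can be reproduced in a few lines. The sketch below uses Python's SciPy rather than the R engine the tool relies on, and the counts are invented purely for illustration.

```python
# Reproduce a chi-square test of independence on a small contingency table.
# The observed counts are invented; the Alteryx tool computes this via R.
from scipy.stats import chi2_contingency

# Rows: values of field 1, columns: values of field 2 (observed frequencies).
observed = [
    [30, 10],
    [20, 40],
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-squared = {chi2:.3f}, df = {dof}, p-value = {p_value:.4f}")
# A small p-value suggests the two fields are not independent of each other.
```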
R must be installed for this option to run successfully. Go to Options > Download Predictive Tools and sign in to the Alteryx Downloads and Licenses portal to install R and the packages used by the R tool. Visit Download and Use Predictive Tools.
Configure the Tool
- Include chi-squared statistic: A chi-square (X2) statistic is used to investigate whether distributions of categorical variables differ from one another. This data will be included in the report output. Select the two fields to analyze via Variable 1 and Variable 2.
- Do not include chi-squared statistic: At least two fields and up to four fields may be selected. When you select fields for either option, these rules apply:
- Each variable must have unique values. If the values are not unique across the fields, an error will be thrown.
- Certain field types cannot be selected: FixedDecimal, Float, Double, Date, Time, DateTime, Blob, and SpatialObj. Integer field types are allowed but should only be used if the field is truly categorical.
View the Output
- D anchor: Data output includes these fields:
- InputField_SelectedField1 (2, 3, 4): Original field name of the input data. Depending on how many fields are selected, InputField_SelectedField3 and InputField_SelectedField4 might not be present, and the part in italics is updated with the actual selected field name.
- Frequency: Count of times the value is present in the input data for the given field name.
- Percent: (Frequency/Total Records) * 100
- R anchor: Report Output includes a Contingency table for each field selected.
The first record in this output shows any warnings for field types. If any of the selected fields are set to numeric data types, then a warning is shown. The rest of the report shows a contingency table for each combination of field values; the header for each table shows the fields that were selected by the user and the values for any fields which are not shown in the table. The table also shows a Total column and rows for Frequency, Percent, Row Percent, and Column Percent.
If the chi-square statistic option is selected then underneath the table these values are displayed:
- Chi-squared: The calculated chi-square value.
- df: Degrees of freedom.
- p-value: The returned statistic value from R. The lower the p-value, the more likely it is that the variables are dependent on each other.
- I anchor: Interactive Output includes a chart where the viewer can customize what displays with a series of dropdown options. | <urn:uuid:56f09b48-f1de-4f4e-bd87-bd7bf245424c> | CC-MAIN-2022-40 | https://help.alteryx.com/20221/designer/contingency-table-tool | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00345.warc.gz | en | 0.86177 | 736 | 3.015625 | 3 |
The Simple Network Management Protocol (SNMP) is an Internet Standard protocol that is based on the manager/agent model with a simple request/response format. The network manager issues a request and the managed agents will send responses in return.
Currently, there are three major versions of SNMP: SNMPv1, SNMPv2c, and SNMPv3.
These different generations of SNMP have created a definite fracturing of what was once a simple architecture. Now, you have to consider the multi-generational SNMP versions you have in play and consider mediation devices to convert older SNMP to the newer version.
It is important that you are able to make informed decisions when it comes to your system communication methods. This is why we'll dive into these SNMP versions and learn the main differences between them.
SNMPv1 is the first version of SNMP. It's easy to set up, as it only requires a plain text community.
Although it accomplished its goal of being an open, standard protocol, it was found to be lacking in key areas for certain managing applications. For example, it only supports 32-bit counters and has poor security features - a community string is the only security method in SNMPv1.
Later versions have addressed many of these problems. Smaller RTUs commonly support SNMPv1.
Designed in 1993, SNMPv2c (where c stands for community) is a sub-version of SNMPv2.
The Get, GetNext, and Set operations used in SNMPv1 are identical to those used in SNMPv2c. However, SNMPv2c's key advantage over previous versions is the Inform command. Unlike Traps, which are simply received by a manager, Informs are positively acknowledged with a response message. If a manager does not reply to an Inform, the SNMP agent will resend the Inform.
Other advantages include:
Improved error handling
Improved SET commands
SNMPv2 security, just like SNMPv1's, comes in the form of community strings. This is a password that your devices will need in order to be allowed to talk to each other and transfer information when SNMP requests occur.
Also, keep in mind that not all devices are SNMPv2c compliant, so your SNMP manager should be downward compatible with SNMPv1 devices. You can also use an SNMPv3 mediation device to ensure compatibility with earlier versions.
SNMPv3 is the newest version of SNMP. Its management framework features primarily involve enhanced security.
The SNMPv3 architecture introduces the User-based Security Model (USM) for message security and the View-based Access Control Model (VACM) for access control.
SNMPv3 supports the SNMP "Engine ID" Identifier, which uniquely identifies each SNMP entity. Conflicts can occur if two entities have duplicate EngineID's. The EngineID is used to generate the key for authenticated messages.
SNMPv3 security comes primarily in two forms: authentication and encryption.
The SNMPv3 protocol also facilitates the remote configuration of the SNMP agents. It is defined by RFC 1905, RFC 1906, RFC 3411, RFC 3412, RFC 3414, RFC 3415.
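To make the practical difference between the versions concrete, the sketch below shows how an SNMP GET might be issued with the community-string model (v1/v2c) versus the user-based security model (v3), using the open-source pysnmp library. The host address, community string, user name, and keys are placeholders, and the exact API details can vary between pysnmp releases.

```python
# Hedged sketch: SNMP GET with a v2c community string vs. SNMPv3 USM (pysnmp).
# Host, community, user name, and keys are placeholders only.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UsmUserData,
                          UdpTransportTarget, ContextData, ObjectType,
                          ObjectIdentity, getCmd,
                          usmHMACSHAAuthProtocol, usmAesCfb128Protocol)

target = UdpTransportTarget(('192.0.2.10', 161))
oid = ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0))

# SNMPv2c: a plain-text community string is the only "security".
v2c_auth = CommunityData('public', mpModel=1)   # mpModel=0 would mean SNMPv1

# SNMPv3: authenticated and encrypted (authPriv) via the User-based Security Model.
v3_auth = UsmUserData('monitor-user', 'auth-pass-123', 'priv-pass-123',
                      authProtocol=usmHMACSHAAuthProtocol,
                      privProtocol=usmAesCfb128Protocol)

for auth in (v2c_auth, v3_auth):
    err_ind, err_status, err_index, var_binds = next(
        getCmd(SnmpEngine(), auth, target, ContextData(), oid))
    if err_ind:
        print(err_ind)
    else:
        for var_bind in var_binds:
            print(' = '.join(x.prettyPrint() for x in var_bind))
```

The important contrast is in the credential objects: the v2c request is protected only by a string sent in the clear, while the v3 request carries per-user authentication and privacy keys.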
The NetGuardian 832A G5 is one example of an RTU that supports SNMPv3.
If now you have to use only secure/encrypted SNMPv3, you need a way to avoid replacing all of your current v1/v2c SNMP network devices. A conversion device allows you to do that. Talk to us about that, this way you can keep your older gear and add SNMPv3 security.
The Fast Track Introduction to SNMP is a quick, 12-page introduction to SNMP. You'll learn about traps, message formats, message processing, MIB objects, SNMPv3 security and administration, and other fundamental SNMP concepts.
At DPS, we're totally focused on remote monitoring, including SNMP protocol. We've worked on thousands of projects that involve SNMP in one form or another.
That experience means that we have SNMP experts on staff. Send us a quick online message (or just give us a call) and we'll answer any SNMP question you have.
Next Page: Field-Tested, Proven SNMP Mediation
You need to see DPS gear in action. Get a live demo with our engineers.
Download our free SNMP White Paper. Featuring SNMP Expert Marshall DenHartog.
This guidebook has been created to give you the information you need to successfully implement SNMP-based alarm monitoring in your network.
Have a specific question? Ask our team of expert engineers and get a specific answer!
Sign up for the next DPS Factory Training!
Whether you're new to our equipment or you've used it for years, DPS factory training is the best way to get more from your monitoring. Reserve Your Seat Today | <urn:uuid:85ead86a-0fdc-492e-ad8e-14fcf7ae1e88> | CC-MAIN-2022-40 | https://ih1.dpstele.com/snmp/v1-v2c-v3-difference.php | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00345.warc.gz | en | 0.907386 | 1,045 | 2.546875 | 3 |
Researchers develop lip-reading system to beat voice-spoofing
Researchers at Florida State University have developed a way for smartphones to read user’s lip gestures with sonar, enabling the gestures to be used as a liveness detection system to thwart replay attacks.
The system, which the researchers call VoiceGesture, uses the phone’s speaker to transmit a high-frequency sound, which is reflected back to the microphone as the user says his or her password. It does not require any additional hardware, and can be integrated into existing smartphone operating systems and mobile apps to secure logins. Research into the system is published in a paper titled “Hearing Your Voice is Not Enough: An Articulatory Gesture Based Liveness Detection for Voice Authentication” (PDF).
Using articulatory gestures to authenticate along with the user's voice avoids the risk of spoofing attacks carried out using audio and video samples from readily available sources like social media.
The research was carried out using the Samsung Note 5, Note 3, and Galaxy S5 smartphones.
“Our experimental evaluation with 21 participants and different types of phones shows that it achieves over 99% detection accuracy at around 1% Equal Error Rate (EER),” study authors Linghan Zhang, Sheng Tan, and Jie Yang write. “Results also show that it is robust to different phone placements and is able to work with different sampling frequencies.”
Yang told Digital Trends that Google is currently reviewing the technique, and the researchers plan to take it to other smartphone manufacturers, including Samsung and Huawei.
As previously reported, University of Michigan researchers recently announced the development of a technique to use wearables to mitigate voice authentication vulnerabilities. | <urn:uuid:9a5cba1b-d535-46e7-a6c1-35f2b9b93608> | CC-MAIN-2022-40 | https://www.biometricupdate.com/201711/researchers-develop-lip-reading-system-to-beat-voice-spoofing | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00545.warc.gz | en | 0.922842 | 350 | 2.5625 | 3 |
Blockchain is the newest technology to fuel the Silicon Valley hype train. Everyone is talking about the wonderful things it can do, but few have explained how it works in layman’s terms. Stay ahead of your competition with a crash course in this new and exciting technology!
What is blockchain?
Although the technology was first associated with Bitcoin and other digital currencies, blockchain is not exclusive to the financial sector. To grasp why blockchain is such a game changer, there are three basic components you need to understand:
- Blocks: essentially these are just encrypted information or documents. In the case of Bitcoin it was transaction histories, but in healthcare this could be something like test results.
- Chains: by linking every block to the one that preceded it with an encrypted address, chains are created that add complexity and make data harder to counterfeit. For example, one set of test results would be much easier to forge than an entire patient history.
- Decentralized networks: each time a block is added to the chain, that information is distributed to a vast network of computers. Each computer in the network has its own copy of the chain, which means if one computer tries to alter previous blocks in the chain, others can compare it with their local copies and recognize it as a fake.
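A minimal sketch makes the linking idea concrete. In practice, the "encrypted address" that ties a block to its predecessor is a cryptographic hash: change any earlier block and the stored hashes stop matching. The records below are invented, and the structure is deliberately simplified, with no consensus mechanism or network.

```python
# Minimal illustration of blocks linked by hashes (no network or consensus logic).
import hashlib
import json

def block_hash(data: dict, previous_hash: str) -> str:
    """Hash a block's contents together with the hash of the block before it."""
    payload = json.dumps({"data": data, "previous_hash": previous_hash},
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(data: dict, previous_hash: str) -> dict:
    return {"data": data, "previous_hash": previous_hash,
            "hash": block_hash(data, previous_hash)}

chain = [make_block({"record": "genesis"}, "0" * 64)]
chain.append(make_block({"record": "test result A"}, chain[-1]["hash"]))
chain.append(make_block({"record": "payment of $100"}, chain[-1]["hash"]))

def is_valid(chain) -> bool:
    """Recompute every hash; tampering with any earlier block breaks the links."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block["data"], block["previous_hash"]):
            return False
        if i > 0 and block["previous_hash"] != chain[i - 1]["hash"]:
            return False
    return True

print(is_valid(chain))                         # True
chain[1]["data"]["record"] = "forged result"   # tamper with an earlier block
print(is_valid(chain))                         # False - the stored hash no longer matches
```

In a real blockchain, every computer in the network runs this kind of validation against its own copy of the chain, which is what makes a forged block so easy to spot.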
In the financial industry, blockchain technology means transactions and account balances no longer need to be validated by a centralized authority, like banks. One person can transfer money to another because each computer in the network can examine the chain to confirm he or she has the funds, and add a block logging the transfer and updating both account balances.
The record of a transaction can’t be altered unless the security of every computer in the network was compromised simultaneously. That level of data security and integrity is perfect for reducing costs in the healthcare industry.
Blockchain technology in healthcare
Even as providers shift from paper documents to digital files, data loss continues to be one of the biggest problems in the industry. According to CRICO Strategies, miscommunication causes $1.7 billion in damage and results in as many as 2,000 lost lives. Test results get lost, records aren’t properly updated, and care settings are misunderstood; whatever form miscommunication takes, it’s losing your practice money.
For now, there are three main benefits of blockchain technology in the healthcare sector:
- Medical histories and records can be stored in a secure chain that patients have full control over. If a primary care physician needs to see a diagnosis the patient received from a specialist, it can be viewed as soon as the patient provides authorization.
- Payments between banks, government entities, providers and patients can all be coordinated in a fraction of the time and without costly intermediaries.
- Healthcare equipment usage, depreciation and lifecycles can be automatically tracked in a chain to keep better tabs on the status of expensive and fragile fixtures.
The past few years have been tough on the healthcare industry. Ransomware has taken a toll on data security, and providers are relying on costly solutions to keep up with the exponential growth in digital records. Blockchain technology is poised to take care of both problems in one fell swoop.
Adopting new technology is intimidating, especially in a heavily regulated industry where data security is so important. We’re starting to see blockchain-based healthcare startups pop up, but before you can embrace this shift, you need a full-time team to manage the integrity of your files. To find out how we can protect you today and prepare you for tomorrow, give us a call. | <urn:uuid:303ac860-d408-41b9-a6d5-a97a237041c9> | CC-MAIN-2022-40 | https://www.datatel360.com/2017/07/31/blockchain-and-healthcare-what-to-expect/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00545.warc.gz | en | 0.949883 | 727 | 3.015625 | 3 |
AI For Their Citizens
Every year, the Center for Digital Government conducts a survey of every county in the U.S. and ranks them on how well they use information technology (IT) to enhance public service.
Based on the results of this year’s Digital Counties Survey, 60 counties across the nation in five separate population categories were recognized for their effective use of tech. We’ll highlight the winners of each category, but first, here are some of the top insights from the report:
- 52 percent of participants are using artificial intelligence (AI)
- The most common use of AI is for cybersecurity
- Social media was the most used citizen engagement tool
- 77 percent of counties had at least one dedicated cybersecurity employee
- 66 percent are considering the Internet of Things (IoT) in their planning
For counties that haven’t introduced much tech yet, this survey shows that citizen engagement tools such as live-streaming and social media are a useful first step. Improved internet access is another vital prerequisite. Open data portals and cybersecurity improvements should be top priorities as well.
Counties that already have a significant amount of IT services in use should focus on optimizing how they use those tools. Collecting and analyzing data on how well initiatives are working can help officials identify where improvements are needed.
These improvements can be made by expanding current technologies, investing in new ones or changing how they’re used. Some tech can even make improvements automatically. Software now exists that can automatically fix broken website links or collect information on user interactions with government apps.
That’s the strategy many of the winners of the most recent Digital Counties Survey followed. The focus this year, the Center for Digital Government noted, was more on how best to use current technology rather than introducing new kinds. This signals a shift from excitement about new capabilities to figuring out what these resources can really do for people.
Here’s exactly what the highest-ranking counties did to earn the top spots.
King County, Washington
In the 1 million or more population category, King County in Washington took first place for its use of online technologies for citizen engagement and efforts to provide internet access at low costs and, sometimes, even for free.
The county saw a 33 percent increase in its social media followers. It says its micro-blog, where 64 different contributors write about county programs, contributed to that growth.
King County also partnered with the Community Connectivity Consortium to provide access to high-speed fiber to public, government and education organizations. This has helped the local government in reaching digital equity goals.
Westchester County, New York
Westchester County in New York earned the top spot among counties with 500,000 to 999,999 residents. This county has delved into the world of mobile apps, AI and cybersecurity in an effort to improve government function and the lives of its citizens.
Westchester has developed several mobile apps that its workers use to help them get their jobs done. It’s also set up a shared services program to help government entities save money on tech services.
It has begun using AI to detect potential cybersecurity threats and invested in a fiber network that provides internet access to businesses and other organizations in the county.
Douglas County, Colorado
In the 250,000 to 499,999 population category, the Colorado county of Douglas won the top spot for its efforts to share data with citizens and use Machine Learning to help those in need.
Through a partnership with the state of Colorado, Douglas County is participating in a pilot project that aims to use AI to identify at-risk children. The machine learning algorithm is combing through historical data in an effort to identify patterns that could be helpful in identifying these children.
The county has also demonstrated a strong commitment to open data and to share information with its citizens. It’s currently working with municipalities to expand that open data initiative. Douglas also recently utilized that data to launch a mobile application that alerts citizens about roadwork that’s being conducted.
Arlington County, Virginia
This is the second year in a row that Arlington County, Virginia, has placed first in the 150,000 to 249,999 population group. Its work on using technology to create a more open government and to make better decisions earned it the top spot.
The county livestreamed all its public meetings, which it says led to increased engagement. It also operates an open data portal with over 100 data sets that it uses to help make more-informed decisions.
Arlington County also uses AI in its cybersecurity efforts and plans to launch a virtual call center for those seeking to do business with the county.
Albemarle County, Virginia
In the classification of counties with up to 150,000 citizens, Albemarle County improved its ranking from seventh to second to first over the past three years by focusing on customer engagement, transparency, infrastructure and security.
The county has used tech to increase transparency by livestreaming meetings and providing county records as well as other information to the public. It also actively sought public input on its upcoming telecommunications plan and other projects.
Albemarle also recently received a $118,000 grant to further develop broadband service in its rural areas, and conducted an in-depth security audit. This led the county to look into AI for use in cybersecurity as well as other areas.
Modern technology is a powerful tool that it seems many government organizations don’t take full advantage of. That’s not the case with the counties in this post. They’ve used the power of information technology to improve everything from citizen engagement to government transparency to cybersecurity. Hopefully, more and more counties will continue to follow suit.
By Kayla Matthews
Kayla Matthews is a technology writer dedicated to exploring issues related to the Cloud, Cybersecurity, IoT and the use of tech in daily life.
Her work can be seen on such sites as The Huffington Post, MakeUseOf, and VMBlog. You can read more from Kayla on her personal website. | <urn:uuid:f7cab43a-1cd1-4d96-8360-bb73f4cc6051> | CC-MAIN-2022-40 | https://cloudtweaks.com/2017/08/u-s-counties-best-ai-citizens/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00545.warc.gz | en | 0.948964 | 1,239 | 2.71875 | 3 |
The advent of big data demands highly efficient and capable data transmission. To solve the paradox of increasing bandwidth while spending less, the WDM (wavelength division multiplexing) multiplexer/demultiplexer is the perfect choice. This technology can transport extremely large volumes of data traffic in telecom networks. It's a good way to deal with the bandwidth explosion from the access network.
WDM stands for wavelength division multiplexing. At the transmitting side, various light waves are multiplexed into one single signal that will be transmitted through an optical fibre. At the receiver end, the light signal is split into different light waves. There are two standards of WDM: coarse wavelength division multiplexing (CWDM) and dense wavelength division multiplexing (DWDM). The main difference is the wavelength step between channels: for CWDM this is 20nm (coarse) and for DWDM this is typically 0.8nm (dense). The following is going to introduce the DWDM Mux/Demux.
DWDM technology works by combining and transmitting multiple signals simultaneously at different wavelengths over the same fibre. This technology responds to the growing need for efficient and capable data transmission by working with different formats, such as SONET/SDH, while increasing bandwidth. It uses different colors (wavelengths) which are combined in a device. The device is called a Mux/Demux, abbreviated from multiplexer/demultiplexer, where the optical signals are multiplexed and de-multiplexed. Usually a demultiplexer is used with a multiplexer on the receiving end.
Mux selects one of several input signals to send to the output. So multiplexer is also known as a data selector. Mux acts as a multiple-input and single-output switch. It sends optical signals at high speed over a single fibre optic cable. Mux makes it possible for several signals to share one device or resource instead of having one device per input signals. Mux is mainly used to increase the amount of data that can be sent over the network within a certain amount of time and bandwidth.
Demux is exactly in the opposite manner. Demux is a device that has one input and more than one outputs. It’s often used to send one single input signal to one of many devices. The main function of an optical demultiplexer is to receive from a fibre consisting of multiple optical frequencies and separate it into its frequency components, which are coupled in as many individual fibres as there are frequencies.
DWDM Mux/Demux modules deliver the benefits of DWDM technology in a fully passive solution. They are designed for long-haul transmission where wavelengths are packed compactly together. FS.COM can provide modules for cramming up to 48 wavelengths in a 100GHz grid (0.8nm) and 96 wavelengths in a 50GHz grid (0.4nm) onto a single fibre. The modules are compliant with the ITU G.694.1 standard and Telcordia GR-1221. When applied with Erbium-Doped Fiber Amplifiers (EDFAs), higher speed communications with longer reach (over thousands of kilometres) can be achieved.
Currently the common configuration of DWDM Mux/Demux is from 8 to 96 channels; in the future this may reach 200 channels or more. DWDM systems typically transport channels (wavelengths) in what is known as the conventional band or C band spectrum, with all channels in the 1550nm region. The denser channel spacing requires tighter control of the wavelengths and therefore cooled DWDM optical transceiver modules, in contrast to CWDM, which has broader channel spacing and uses un-cooled optics such as CWDM SFP and CWDM XFP modules.
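The relationship between the frequency grid and the wavelength spacing quoted above follows directly from the approximation Δλ ≈ λ²·Δf/c. The short check below confirms that a 100GHz grid near 1550nm corresponds to roughly 0.8nm, and a 50GHz grid to roughly 0.4nm.

```python
# Convert a DWDM frequency grid spacing to wavelength spacing near 1550 nm.
C = 299_792_458            # speed of light, m/s
WAVELENGTH = 1550e-9       # C-band reference wavelength, m

def grid_to_nm(grid_hz: float) -> float:
    """Delta-lambda ~= lambda^2 * delta-f / c, returned in nanometres."""
    return (WAVELENGTH ** 2) * grid_hz / C * 1e9

print(round(grid_to_nm(100e9), 2))  # ~0.8 nm for the 100 GHz grid
print(round(grid_to_nm(50e9), 2))   # ~0.4 nm for the 50 GHz grid
```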
DWDM Mux/Demux offered by FS.COM are available in the form of plastic ABS module cassette, 19” rack mountable box or standard LGX box. Our DWDM Mux/Demux are modular, scalable and are perfectly suited to transport PDH, SDH / SONET, ETHERNET services over DWDM in optical metro edge and access networks. FS.COM highly recommends our 40-CH DWDM Mux/Demux. It can be used in fibre transition applications as well as data centre interconnection for bandwidth expansion. With the extra 1310nm port, it can easily connect to the existing metro network, achieving high-speed service without replacing any infrastructure.
With DWDM Mux/DeMux, single fibres have been able to transmit data at speeds up to 400Gb/s. To expand the bandwidth of your optical communication networks with lower loss and greater distance capabilities, DWDM Mux/DeMux module is absolutely a wise choice. For other DWDM equipment, please contact via email@example.com. | <urn:uuid:cd1b2504-6457-437f-ae50-a712c3312c53> | CC-MAIN-2022-40 | https://www.fiber-optic-equipment.com/wise-decision-choose-dwdm-muxdemux.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00545.warc.gz | en | 0.934403 | 1,020 | 3.8125 | 4 |
Non-root bridges need to find the shortest path to the root bridge. What will happen if we have a mix of different interface types like Ethernet, FastEthernet and Gigabit? Let’s find out!
Here’s the topology I will use to explain the spanning-tree cost calculation:
In the picture above we have a larger network with multiple switches. You can also see that there are different interface types, we have Ethernet (10 Mbit), FastEthernet (100Mbit) and Gigabit (1000Mbit). SW1 on top is the root bridge so all other switches are non-root and need to find the shortest path to the root bridge.
Spanning-tree uses cost to determine the shortest path to the root bridge. The slower the interface, the higher the cost is. The path with the lowest cost will be used to reach the root bridge.
Here’s where you can find the cost value:
In the BPDU you can see a field called root path cost. This is where each switch will insert the cost of its shortest path to the root bridge. Once the switches found out which switch is declared as root bridge they will look for the shortest path to get there. BPDUs will flow from the root bridge downwards to all switches.
Here’s an example of the different spanning-tree costs for our topology:
SW2 will use the direct link to SW1 as its root port since this is a 100 Mbit interface and has a cost of 19. It will forward BPDUs towards SW4; in the root path cost field of the BPDU you will find a cost of 19. SW3 is also receiving BPDUs from SW1 so it’s possible that at this moment it selects its 10 Mbit interface as the root port. Let’s continue…
This picture needs some more explanation so let me break it down:
- SW3 receives BPDUs on its 10 Mbit interface (cost 100) and on its 1000 Mbit interface (cost 4). It will use its 1000 Mbit interface as its root port (shortest path to the root bridge is 19+19+4=42).
- SW3 will forward BPDUs to SW4. The root path cost field will be 100.
- SW4 receives a BPDU from SW2 with a root path cost of 19.
- SW4 receives a BPDU from SW3 with a root path cost of 100.
- The path through SW2 is shorter so this will become the root port for SW4.
- SW4 will forward BPDUs towards SW3 and SW5. In the root path cost field of the BPDU we will find a cost of 38 (its root path cost of 19 + its own interface cost of 19).
- SW3 will forward BPDUs towards SW5 and inserts a cost of 42 in the root path cost field (19 + 19 + 4).
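The same calculation can be expressed as a shortest-path computation over the link costs. The sketch below encodes the links described above; the speeds of the two links to SW5 are not spelled out in the text, so they are assumed to be FastEthernet purely for illustration.

```python
# Root path cost calculation for the example topology (classic STP cost values).
# The SW4-SW5 and SW3-SW5 link speeds are assumptions (FastEthernet).
import heapq

COST = {10: 100, 100: 19, 1000: 4}   # Mbit/s -> spanning-tree path cost

links = [
    ("SW1", "SW2", COST[100]),   # FastEthernet
    ("SW1", "SW3", COST[10]),    # Ethernet
    ("SW2", "SW4", COST[100]),   # FastEthernet
    ("SW3", "SW4", COST[1000]),  # Gigabit
    ("SW4", "SW5", COST[100]),   # assumed FastEthernet
    ("SW3", "SW5", COST[100]),   # assumed FastEthernet
]

graph = {}
for a, b, cost in links:
    graph.setdefault(a, []).append((b, cost))
    graph.setdefault(b, []).append((a, cost))

def root_path_costs(root="SW1"):
    """Dijkstra over link costs: each switch's lowest cumulative cost to the root."""
    best, queue = {root: 0}, [(0, root)]
    while queue:
        cost, node = heapq.heappop(queue)
        if cost > best.get(node, float("inf")):
            continue
        for neighbour, link_cost in graph[node]:
            new_cost = cost + link_cost
            if new_cost < best.get(neighbour, float("inf")):
                best[neighbour] = new_cost
                heapq.heappush(queue, (new_cost, neighbour))
    return best

print(root_path_costs())  # {'SW1': 0, 'SW2': 19, 'SW3': 42, 'SW4': 38, 'SW5': 57}
```

Notice that SW3 ends up at 42 and SW4 at 38, matching the BPDU walk-through above.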
The complete picture will look like this: | <urn:uuid:8501dd52-c947-43d5-8087-4f2f3fa037cb> | CC-MAIN-2022-40 | https://networklessons.com/switching/spanning-tree-cost-calculation | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00745.warc.gz | en | 0.90211 | 662 | 3.359375 | 3 |
How Florida Professionals Use Pivot Tables in Microsoft Excel
No matter how much data you have, unless you can analyze your information and glean a deeper meaning from your data, it’s just taking up space. How can you take large amounts of data and quickly present it in a way that gives you greater insight?
What makes Microsoft Excel so sophisticated as a data analysis tool is its intuitive features, including pivot tables. Every column of data in a Microsoft Excel spreadsheet can represent a dataset in a pivot table in a user-friendly automated workflow designed to create easy-to-read data tables.
How to Create a Pivot Table
Pivot tables are an extremely useful and popular tool within Microsoft Excel. Creating a pivot table is easy with a few simple steps:
- Select the cells that have the data you want to include in your pivot table, making sure there are no empty rows or columns
- In your menu toolbar, choose “Insert” and then “PivotTable”
- When the user widget prompts you to choose your data, opt for “Select table or range”
- Verify the cell range you want to be represented in “Table/Range”
- When prompted, you’ll need to decide if you want your pivot table to be in an existing worksheet or a new worksheet, and then choose the cell location for where the table will display within that worksheet
- After you click “OK”, your next step involves choosing fields for your pivot table
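For professionals who also work with data outside of Excel, the same style of summary can be reproduced programmatically. The sketch below uses the pandas library; the column names and figures are invented purely to illustrate the shape of the operation.

```python
# A pivot-table-style summary in pandas (column names and numbers are invented).
import pandas as pd

sales = pd.DataFrame({
    "Region":  ["East", "East", "West", "West", "West"],
    "Product": ["Desks", "Chairs", "Desks", "Chairs", "Chairs"],
    "Sales":   [1200, 450, 900, 300, 650],
})

summary = pd.pivot_table(
    sales,
    index="Region",       # rows of the pivot table
    columns="Product",    # columns of the pivot table
    values="Sales",       # the field being summarised
    aggfunc="sum",        # how to aggregate duplicate combinations
)
print(summary)
```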
Tips to Prepare Your Data for Use In a Pivot Table
There are a few steps you should take to make sure your data is ready to be used in a pivot table:
- Make sure your raw data is set up properly, in rows and columns without any empty rows or columns – but it’s okay to have empty cells.
- Make sure you have a column heading for each column – this helps Microsoft Excel both recognize and label your data set. Bonus tip: Try making the column header entry bold.
- Don’t mix dates and text in the same column
Why You’ll Love Using Pivot Tables
When you add a row to a data set that is defined for use in a pivot table, that row of data automatically gets added to a pivot table when you refresh your data. Plus, any new columns you add to a data set will be added to your pivot table fields list. One of the most popular features of pivot tables is the ability to filter, sort, group, or conditionally format portions of your data so you can focus on a smaller data set.
If you’re new to using pivot tables, there’s a great built-in feature for Microsoft Excel users called “Recommended PivotTable”. Microsoft Excel instinctively reviews your data set and determines a meaningful layout for your data. In step #2 above, instead of choosing “Insert” and then “PivotTable”, you’ll choose “Insert” and then “Recommended PivotTable” and experiment with the different ways Microsoft Excel recommends your data be presented.
You can also connect external data sources, like SQL Server tables, XML files, Microsoft Access databases, and more. | <urn:uuid:c0e6db37-66ce-4461-a8c3-55420def4b79> | CC-MAIN-2022-40 | https://www.4it-inc.com/how-florida-professionals-use-pivot-tables-in-microsoft-excel/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00745.warc.gz | en | 0.832739 | 676 | 2.796875 | 3 |
Galileo has been making headlines once again, and this time not for the right reasons. It was reported on January 18, 2017, that several of the atomic clocks responsible for the satellites' precise timekeeping had failed.
The importance of precise timing
Timing is everything in GNSS – very precise time is required to calculate an accurate value for the delay between a signal's transmission from a given satellite and its reception, which is what allows users to determine their position on Earth accurately. Many applications today also take advantage of the very precise timing that GNSS can provide via the atomic clocks on board the satellites.
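To get a feel for why clock precision matters so much, a rough back-of-the-envelope sketch helps (this is an illustration, not a figure from the article): an uncorrected clock error translates into a ranging error of roughly the speed of light multiplied by the time error.

    # Rough illustration: how an uncorrected clock error maps to ranging error.
    C = 299_792_458.0  # speed of light, metres per second

    def range_error_m(clock_error_s):
        """Pseudorange error caused by an uncorrected clock error (in seconds)."""
        return C * clock_error_s

    for err_us in (0.001, 1.0, 13.0):  # microseconds
        print(f"{err_us:>7.3f} us -> {range_error_m(err_us * 1e-6):>10,.1f} m")

A nanosecond of error is about 30 cm of range; the 13 microsecond error mentioned later in this article corresponds to roughly 3.9 km.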
The atomic clocks
Each Galileo satellite is equipped with four clocks. Two are Rubidium Atomic Frequency Standard (RAFS) clocks like those found in GPS and GLONASS satellites. The other two are the more accurate (and much more complex) Passive Hydrogen Maser (PHM) clocks, which give the Galileo constellation its increased timing accuracy. Whilst only one working clock is required for each satellite, a minimum of two is needed to provide redundancy.
A PHM clock uses the properties of the hydrogen atom to serve as a frequency reference. It is a complex and high-cost device but has significantly higher precision than a rubidium clock. Typically, PHM clocks are expected to have a 20-year lifetime.
An RAFS clock uses the transition of Rubidium 87 atoms as a frequency reference. RAFS clocks are less costly and more compact than PHM clocks, and have an expected lifetime of 12 years or more.
The six PHM clocks that failed are almost exclusively on the In-Orbit Validation satellites. According to a statement from the European Space Agency (ESA), the failure was "related to the fact that when some healthy [hydrogen maser] clocks are turned off for long periods, they do not restart due to a change in clock characteristics.”
The ESA has since been able to remotely restart one of the failed PHM clocks, leaving only five PHM clocks offline.
Meanwhile, four RAFS clocks have failed – all of them on Full Operational Capability satellites. The ESA also stated that the rubidium-based clock failures "all seem to have a consistent signature, linked to probable short circuits, and possibly a particular test procedure performed on the ground."
India has had the same experience with RAFS clocks – it was announced that three clocks (one primary and two backups) on board the IRNSS-1A satellite had failed.
The impact of satellite clock failures
Whilst a total of nine clocks have failed so far, no more than two have failed in any single Galileo satellite. Provided a satellite has at least one working clock remaining, it can continue to function as normal. For now, then, these clock failures won't have a direct impact on the performance or stability of Galileo.
However, the impact on the IRNSS programme is much more severe – the failure of all three RAFS clocks means that the satellite is totally unusable and will have to be replaced. India already has plans to do this later in 2017.
That said, the clock failures highlight a concern that all members of the GNSS community should share: failure can happen in any segment of a GNSS system – from the satellites themselves right down to the device or chipset firmware layer.
Detecting segment errors
Those involved in GNSS receiver design and integration need to be prepared to detect segment errors or failures at the satellite level – whilst they are less frequent, they do happen. The industry has already seen the effects of a major malfunction of the GLONASS system in April 2014, thought to have been caused by the upload of corrupted ephemeris data. And in January 2016, because of a satellite decommissioning, faulty timing data was transmitted by GPS satellites, which affected thousands of users worldwide. It was subsequently discovered that the incorrect timing data was flagged as being out of date. Receivers designed and tested to the GPS Open Service ICD rejected the incorrect data as invalid and were not affected. It should concern the industry that many receivers accepted the incorrect data, which introduced an error of either 13 or 13.7 microseconds depending on how the receiver used the data.
No GNSS system is immune to software or hardware failures. Manufacturers of GNSS chipsets and location-aware devices need to know how their equipment will respond in the event of a system segment failure (software or hardware). In the case of the GPS timing issue of January 2016, thorough testing against the GPS Open Service ICD would have highlighted any problems with handling data carrying an expired date/time. In the case of the GLONASS event, testing the receiver's response to corrupt or incorrect ephemeris data could have provided an additional level of assurance and protection.
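As a purely illustrative sketch of the kind of defensive check involved, the snippet below rejects timing data that is stale or flagged unhealthy. The structure and field names are invented for this example and do not correspond to the actual GPS Open Service ICD data formats or any vendor's firmware:

    from dataclasses import dataclass

    @dataclass
    class TimingRecord:
        # Invented, simplified fields; not a real ICD structure.
        issue_time_s: float    # when the correction data was issued
        valid_for_s: float     # declared validity window, in seconds
        healthy: bool          # health indication for the data

    def accept_timing_data(rec, now_s):
        """Reject timing data that is unhealthy or past its validity window."""
        if not rec.healthy:
            return False
        if now_s - rec.issue_time_s > rec.valid_for_s:
            return False  # stale data, analogous to the January 2016 GPS incident
        return True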
If a receiver is unable to detect the difference between healthy and unhealthy satellite signals, it will appear to be operating as normal. But all the while it has the potential to output misleading positioning and timing data that could compromise business operations – and in the most extreme cases even be hazardous to the end user. Blind trust in the integrity of all received GNSS signals can be dangerous – it can leave the receiver open to being affected by a GNSS segment error and can also leave the receiver or system susceptible to deliberate spoofing attacks (a growing threat to GNSS users since the rise in popularity of Augmented Reality games such as Pokémon GO).
It is possible to simulate a variety of real-life hardware and software failure scenarios by making the most of modern GNSS simulation equipment and rigorous test plans. With these in place, it is possible to understand how receivers and systems will react to errors from all components of a GNSS system, and to see potential issues before they disrupt the user experience.
The future of Galileo
Galileo is rightly being hailed as a major success story for Europe; Commissioner Elżbieta Bieńkowska stated in December: "Galileo offering initial services is a major achievement for Europe and a first delivery of our recent Space Strategy. This is the result of a concerted effort to design and build the most accurate satellite navigation system in the world. It demonstrates the technological excellence of Europe, its know-how and its commitment to delivering space-based services and applications. No single European country could have done it alone."
The ESA and the European Commission (EC) have the required technical and programmatic expertise and knowledge to improve the situation with the Galileo clocks that will be onboard future satellites. Launching a new satellite navigation constellation is not a trivial undertaking and many important lessons have been learned by Europe on the pathway to making Galileo a valuable and sustainable global satellite navigation constellation.
Guy Buesnel, PNT Security Technologist, Spirent
Image Credit: Spirent | <urn:uuid:168b5be8-ba6b-44a0-bd3d-e201d7a1f6f5> | CC-MAIN-2022-40 | https://www.itproportal.com/features/the-galileo-clock-failures-have-a-lot-to-teach-us-about-gnss-testing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00145.warc.gz | en | 0.95622 | 1,406 | 3.203125 | 3 |
Enterprises can maintain network uptime with wireless broadband incorporated in their routers
With increasing strain on networks from advancing technology, applications, and cloud services, it has become a necessity for businesses to adopt wireless broadband networking solutions. Enterprise routing protocols and features that support continuous network uptime include VRRP, STP, RIP, OSPF, and BGP.
VRRP – Virtual Router Redundancy Protocol
VRRP allows you to associate multiple routers with one LAN so that if the primary physical router fails, a secondary router takes over its duties. This adds another important layer to a business continuity system. With VRRP, if the primary router fails, the network stays up.
One router is set as the "master," and a second router is set as "backup". If the master router fails, the backup becomes the master with the same "virtual" IP address.
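The failover logic can be pictured with a small sketch. This is illustrative only, not a real VRRP implementation; real VRRP also involves advertisement timers, preemption settings, and more:

    # Highest priority wins; if the current master stops responding, a backup takes over.
    def elect_master(routers):
        """routers: list of dicts with 'name', 'priority' and 'alive' keys."""
        candidates = [r for r in routers if r["alive"]]
        return max(candidates, key=lambda r: r["priority"]) if candidates else None

    routers = [
        {"name": "R1", "priority": 110, "alive": True},  # intended master
        {"name": "R2", "priority": 100, "alive": True},  # backup
    ]
    print(elect_master(routers)["name"])  # R1 answers for the virtual IP

    routers[0]["alive"] = False           # the master fails
    print(elect_master(routers)["name"])  # R2 takes over the same virtual IP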
STP – Spanning Tree Protocol
STP allows a network design to prevent bridge loops while still including redundant paths. As the name suggests, Spanning Tree Protocol creates a "spanning tree," a graph theory term for a set of connected edges that reaches all the vertices in a connected graph without forming any loops.
By eliminating loops, STP prevents unwanted broadcast radiation. STP still allows for redundancy by automatically finding an alternate path if a link fails.
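The loop-free idea can be sketched in a few lines. The sketch below simply builds a tree outward from a chosen root and marks the leftover redundant link as blocked; real STP elects the root bridge and selects ports using bridge IDs and path costs, which this illustration does not model:

    from collections import deque

    links = {("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")}  # A-B-C forms a loop

    def spanning_tree(root, links):
        """Return a loop-free subset of links reachable from the root (a BFS tree)."""
        neighbors = {}
        for u, v in links:
            neighbors.setdefault(u, set()).add(v)
            neighbors.setdefault(v, set()).add(u)
        visited, tree, queue = {root}, set(), deque([root])
        while queue:
            u = queue.popleft()
            for v in sorted(neighbors.get(u, ())):
                if v not in visited:
                    visited.add(v)
                    tree.add(tuple(sorted((u, v))))
                    queue.append(v)
        return tree

    tree = spanning_tree("A", links)
    blocked = {tuple(sorted(l)) for l in links} - tree
    print("forwarding:", sorted(tree))  # loop-free forwarding links
    print("blocked:", sorted(blocked))  # redundant link held in reserve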
Dynamic Routing Protocols: RIP, OSPF, and BGP
The larger the network, the more complex it becomes. Additional layers mean there are additional hops for packets to travel from one end of the network to the other, but by default a router only knows its immediate neighbors. A packet can't cross multiple layers of a network without the routers knowing the broader topology. With RIP, OSPF, and BGP, Cradlepoint routers can learn the topology dynamically.
All of these routing protocols specify how routers communicate with each other, disseminating the information that enables them to select routes between any two nodes on a network. Routing algorithms choose the route. Each router has prior knowledge only of the networks attached to it directly, but a routing protocol shares this information with immediate neighbors and then throughout the network. This way, routers learn the network topology.
RIP – Routing Information Protocol (versions 1 and 2)
RIP is used to synchronize the routing tables of all the routers on a network. RIP is an older, well-established protocol, but it has significant limitations, especially for larger networks. It is relatively simple in that it measures the distance of a route by hop count, which doesn't take link speed or traffic costs into account. To prevent infinite loops, the hop count is limited to 15, which can be restrictive if the network is large enough.
RIP causes routers to broadcast their entire current routing database periodically (30 seconds by default). This system is straightforward, but it causes slow convergence.
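A rough sketch of the distance-vector idea behind RIP is shown below. It is illustrative only, not the actual protocol: a router merges a neighbour's advertised routes, adds one hop, and treats 16 hops as unreachable.

    INFINITY = 16  # RIP's hop-count ceiling: 16 hops means "unreachable"

    def merge(own_table, neighbor_table, neighbor_name):
        """Tables map destination -> (hop_count, next_hop)."""
        updated = dict(own_table)
        for dest, (hops, _) in neighbor_table.items():
            new_hops = min(hops + 1, INFINITY)
            if dest not in updated or new_hops < updated[dest][0]:
                updated[dest] = (new_hops, neighbor_name)
        return updated

    table_a = {"10.0.1.0/24": (1, "direct")}
    table_b = {"10.0.2.0/24": (1, "direct"), "10.0.3.0/24": (15, "C")}
    print(merge(table_a, table_b, "B"))
    # 10.0.2.0/24 becomes reachable in 2 hops via B; the 15-hop route becomes
    # 16 hops and is therefore treated as unreachable.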
OSPF – Open Shortest Path First (version 2)
OSPF is used more than RIP in larger networks because it has a more efficient system for communication between routers and because it scales better. Only changes to the routing table are sent to the other routers in the network, as opposed to sending the entire routing table at a regular interval (which is how RIP functions).
OSPF is a link-state protocol – each router on the network shares its "link-state," the basic information about that router and its immediate connections. The OSPF protocol pieces together the information from all the link-states throughout the network to create a complete map. It then uses Dijkstra's algorithm to calculate the shortest path between any two points.
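The shortest-path step can be illustrated directly. The sketch below assumes the link-state flooding has already produced a shared, weighted graph; the router names and link costs are invented for the example:

    import heapq

    graph = {  # node -> {neighbor: link cost}
        "R1": {"R2": 10, "R3": 5},
        "R2": {"R1": 10, "R4": 1},
        "R3": {"R1": 5, "R4": 20},
        "R4": {"R2": 1, "R3": 20},
    }

    def dijkstra(graph, source):
        """Return the lowest total cost from source to every other node."""
        dist, heap = {source: 0}, [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue
            for v, cost in graph[u].items():
                nd = d + cost
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    print(dijkstra(graph, "R1"))  # R1 reaches R4 at cost 11 via R2, not 25 via R3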
BGP – Border Gateway Protocol (version 4)
BGP is widely used across the Internet, but usually externally rather than internally. Internal use of BGP is typically only for very large networks. For example, it might be used as a connection between multiple networks that are already using OSPF, when the whole network is too large for OSPF by itself. BGP is unique in that it uses TCP as its transport protocol. It is commonly used as the protocol between Internet service providers. It includes cost metrics for each path so that packets take the most efficient route.
Learn more about which wireless broadband solution can help your network. | <urn:uuid:49e80c48-ff1d-4bd2-9869-062fe7bc397f> | CC-MAIN-2022-40 | https://cradlepoint.com/resources/blog/enterprise-routing-protocols-vrrp-stp-rip-ospf-and-bgp/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00145.warc.gz | en | 0.932077 | 861 | 2.515625 | 3 |
Knowing how to test the sensitivity of a fiber optic receiver is an important skill. A fiber optic receiver provides optimal performance when the optical input power is within a certain range. But how do you test the receiver to see whether it will provide optimal performance at the lowest optical input powers? One way is to use optical attenuators, such as bulkhead attenuators. Typically only a couple of values are required to complete your testing. The process involves the three steps below.
- Measure the optical output power of the fiber optic transmitter with the power meter. Remember that industry standards define transmitter optical output power and receiver optical input power for a particular network standard. If you are testing a 100BASE-FX receiver, you should be using a 100BASE-FX transmitter. The optical output power of the transmitter should be within the range defined by the manufacturer’s data sheet.
- Connect the transmitter to the receiver and verify proper operation at the maximum optical output power that the transmitter can provide. You need to test the receiver at the minimum optical input power that the receiver can accept while still providing optimal performance. To do this, you need to obtain the lowest optical input power level value from the manufacturer’s data sheet.
- Calculate the attenuation level required for the test. For example: the transmitter's optical output power is -17 dBm and the minimum optical power level for the receiver is -33 dBm. The difference between them is 16 dB. You would use a 16 dB bulkhead attenuator at the input of the receiver and retest the receiver. If the receiver still operates properly, it's within specifications. (A short scripted version of this calculation follows below.)
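The arithmetic in step 3 is simple enough to script. The sketch below also includes the link-loss adjustment described in the note that follows; the numbers are the ones used in this article's example:

    def attenuator_db(tx_power_dbm, rx_min_dbm, link_loss_db=0.0):
        """Attenuation needed so the receiver sees its minimum specified input power."""
        return (tx_power_dbm - rx_min_dbm) - link_loss_db

    print(attenuator_db(-17, -33))                   # 16 dB for a back-to-back bench test
    print(attenuator_db(-17, -33, link_loss_db=6))   # 10 dB when the link already loses 6 dB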
Note: The optical loss of the link is not considered in the example above. Suppose that the transmitter is located 10 km from the receiver and the loss for the whole optical fiber link (including interconnections) is 6 dB; then you should use a 10 dB bulkhead attenuator rather than the 16 dB one for your test. | <urn:uuid:9d4d126e-24fe-4ac7-b717-4494eb9882ba> | CC-MAIN-2022-40 | http://www.fiberopticshare.com/how-to-test-the-sensitivity-of-a-fiber-optic-receiver-by-using-an-optical-attenuator.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00145.warc.gz | en | 0.893502 | 407 | 2.625 | 3