The asteroid belt is still a rather unknown place in the Solar System, but NASA’s Dawn spacecraft has been working its way through the rocky road for the past eight years, swinging past Mars for a gravity assist in 2009 and orbiting the protoplanet Vesta from 2011 to 2012.
Its final destination is the largest object in the asteroid belt, the dwarf planet Ceres. Ceres was reclassified as a dwarf planet in 2006, when the definitions of planets, dwarf planets and asteroids were revised.
NASA is hoping Dawn will return detailed information about Ceres, potentially identifying another destination for human missions in the next 20 years. Scientists have already speculated that humans could one day live on Mars and some of Jupiter’s moons, but the asteroid belt has not been examined for any possible signs of life or new minerals.
Dawn is scheduled to arrive in orbit around Ceres on Friday at 7:20AM (ET), or 12:20PM (GMT). It is the first spacecraft to orbit two extraterrestrial targets in a single mission, which is great news for future space missions: organisations may no longer need to send multiple spacecraft to explore multiple destinations.
The Dawn spacecraft uses an ion propulsion system, which gives it the ability to break out of orbit around one body, such as a small asteroid, and cruise on to the next. Arriving at Ceres will complete the prime mission, and NASA will then look towards sending new deep space missions into the asteroid belt.
Deep space is one of the most interesting frontiers for today’s missions, as rovers continue to map out Mars and the Moon has seen little activity in recent decades.
Learning about the asteroid belt and planets further away from Earth could bring huge amounts of information for scientists, including potential places where life would have been possible.
Academic institutions have traditionally been slow to embrace technology and provide students with access to the latest digital tools. Meanwhile, college and university students have been quick to integrate digital devices, tools and apps into their daily lives. A new study released by online scheduling platform Doodle reveals the true extent of the digital divide in education and how it may put college and university students on a path to academic failure in these COVID-19 times.
According to Doodle’s “Time Management in Education” study, a majority (65 percent) of college and university students are digital natives, using between six and 15 digital tools and apps on a daily basis.
Following this, 32 percent of students say they prefer to use an online scheduling tool to book office hours with their professors. That’s a healthy percentage of students who want to automate the office hours setup process, but it’s not what’s currently available to them: 66 percent of professors still cling to outdated methods, using email or syllabus listings to coordinate their office hours.
These findings prove there is a big gap between the digital-first behaviors of students and the non-digital processes used by professors. As further proof of this, an overwhelming majority (83 percent) of students think that their professors should use technology more in their day-to-day work. While this digital divide has been evident for some time and before COVID-19 came along, it now has the potential to negatively affect students’ academic performance in a remote learning environment.
Renato Profico, CEO of Doodle, explains, “This is a great opportunity for academic institutions to change their processes and implement new technologies. It’s not about stripping away all existing processes and systems that have been in place for decades. Rather, it’s about making small, impactful changes. It’s also about implementing the right technology solutions to facilitate the kinds of change that will allow academic institutions to deliver the best experience possible to students, faculty members and administrative staff, while helping them to be highly productive, focused and successful in achieving their goals.”
The research study is based on a survey of 1,019 students enrolled in colleges and universities in the United States. Key findings and trends include:
- Time management and student-faculty interaction outside the classroom unlock academic success. Budgeting time (28 percent) and meeting weekly/biweekly with professors for feedback (16 percent) are the top two strategies used by students to improve their academic performance. So, students are prioritizing time in their schedules to meet with their professors, with 26 percent meeting several times a week and 26 percent meeting at least once weekly.
- Remote learning: doomed to fail, a good thing, or somewhere in between? According to the study, 37 percent of students have found it harder to manage their time and stay productive since classes moved online. This is a serious issue, as a majority of students (66 percent) say time management is extremely important with regard to their ability to meet their academic goals. On top of this, 42 percent of students feel like they’re working more since their classes have gone virtual.
- Technology: a savior for higher education. Fifty-five percent of students say technology makes learning more flexible and convenient. Meanwhile, 16 percent of students value how technology makes it easier to collaborate with classmates, and 13 percent see it as useful in increasing access to their professors and faculty members.
- Don’t underestimate the importance of a strong support system. The good news is that 76 percent of students feel their professors and faculty provide the necessary support to help them manage their time and balance other commitments. However, that means nearly a quarter (24 percent) of US college students aren’t getting enough support. Another 23 percent would like to see increased one-to-one access to professors beyond office hours.
Profico concludes, “Time management is one of the biggest challenges and priorities for students, professors, faculty and administrators alike. This is where technology can be immensely helpful by cutting out administrative tasks like scheduling, as it allows educators to take back control of their time and focus their full attention on delivering an exceptional remote learning experience, supporting and empowering students to excel in their classes, enabling faculty and administrative staff to perform their jobs effectively and ensuring the institution runs smoothly.”
As lithium-ion batteries become more widely used in vehicle manufacturing, experts increasingly predict that the batteries will appear regularly in the waste stream within the coming years. Although the batteries are still in the early stages of entering the waste stream, British Columbia-based company American Manganese sees this trend as a great opportunity. The company is convinced that there is already a critical mass due to the sheer value and predicted growth. In response to the need for end-of-life management as well as the need to supply metals for battery manufacturing, American Manganese has turned to hydrometallurgy.
Larry Reaugh, president and CEO of American Manganese, told Resource Recycling that the company’s process targets the “cathode” section of large-format lithium-ion products, which is the portion of the battery that contains manganese, cobalt and other metals. Reaugh commented, “It’s the most valuable part of the battery. This one singular item probably represents 25 to 30 percent of the value of the battery.” Reaugh said the EV battery stream is large enough to be profitable. He estimates there were roughly 280,000 spent EV batteries entering the waste stream globally in 2015.
However, there are many challenges and barriers within the battery stream. In an effort to combat these, American Manganese is looking to scale up its proprietary hydrometallurgical process and create a pilot plant, which could cost as much as $5 million. Reaugh indicated that a commercial plant, which would be the model for a portable processing solution, is about two years down the line. Using American Manganese’s technology, Reaugh believes that a plant could have a throughput of up to 20 tons per day.
“Right now, they’re being burned. You get some cobalt out, 40 to 60 percent, and the rest of it, aluminum, manganese all goes into a slag, which is a waste product,” Reaugh said. “That’s not a solution.”
Instead, Reaugh states American Manganese aims to employ a process that allows recovery of all those different battery chemistries. Using a hydrometallurgical process, similar to what is found in a mining circuit, the process will use thickeners, tanks and pumps to separate the metals contained in the cathode. So far, Reaugh says the process has been extremely successful and has extracted 92 percent of the lithium. He anticipates that 100 percent lithium recovery can be reached if the metal is cycled through the process multiple times.
Currently, American Manganese has filed for a U.S. patent for the technology. The company plans to file in China, Europe, and other countries that will be leaning heavily into the lithium-ion battery space in the near future.
Merriam Webster defines context as the interrelated conditions in which something exists or occurs—such as the environment or setting. Another definition for the word context is also “parts of a discourse that surround a word or passage that can throw light on its meaning.”
As we all know, context is supremely important in our lives. Many of the decisions we make, actions we take and things we say are highly dependent on context. Context serves as a shorthand to denote all the (complex) background information and insights that we store in our heads as it relates to the situation on hand. Human brains are very effective in acquiring, storing and using context. For example, reading the word “car” instantly and unconsciously activates all the information (i.e., mental models) you have in your head about cars—about different makes and models, outer shapes, interiors etc. If you parse the word “wash” after the car, another mental model is activated about information your brain has learnt about washing cars. A completely different set of mental models are activated when you read the words “wash sale”. The meaning of the word wash is highly dependent on the context.
As you might imagine, context is also supremely important in information technology (IT), and in particular with cybersecurity. It is very hard to make a good decision without context. And for cybersecurity, accurate context is often not available. Most organizations end up making poor cybersecurity decisions based on intuition and not on data.
For example, let’s take an alarm generated by a “next-generation” network firewall indicating the download of some malicious payload by a user. This may or may not be relevant based on factors like:
- the type of device and software being used by the user, e.g. iPhone or PC—the malicious exploit may work only on Windows PCs.
- the version of software running on the device, a software update to fix the vulnerability may have been installed yesterday, or not
- the presence or absence of a security control, like anti-virus or other endpoint security protection on the device, which would prevent the malicious payload from doing anything bad
- and several other factors…
If security teams are able to get good context, all those nuances surrounding a security event can help to unravel even the most complex problems.
Having good context is particularly critical in cybersecurity vulnerability management (VM) which involves proactively analyzing and fixing weaknesses before attackers can find and exploit them. For example, a certain emergency patch from Microsoft to fix a major vulnerability in IE that inconveniently shows up just a couple of days before Christmas may only need to be applied to those Windows laptops and desktops whose users actually use Internet Explorer as their main browser (as opposed to Chrome or Edge). This context can save IT and security teams from wasting time and effort on unneeded patches, while focusing on stuff that actually matters (or enjoying their Christmas break).
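To make the idea concrete, here is a minimal sketch of context-driven vulnerability prioritization. It is written in Python against a made-up asset inventory; the field names, weights and scoring logic are illustrative assumptions, not Balbix's actual model.

```python
# Minimal sketch: decide which assets actually need an emergency IE patch,
# using per-asset context. Field names and weights are illustrative only.

ASSETS = [
    {"name": "laptop-042", "os": "windows", "default_browser": "ie",
     "patched": False, "endpoint_protection": False, "business_critical": True},
    {"name": "laptop-101", "os": "windows", "default_browser": "chrome",
     "patched": False, "endpoint_protection": True, "business_critical": False},
    {"name": "mac-007", "os": "macos", "default_browser": "safari",
     "patched": False, "endpoint_protection": True, "business_critical": True},
]

def is_exposed(asset):
    """Context filter: the hypothetical IE flaw only matters on unpatched
    Windows machines whose users actually run Internet Explorer."""
    return (asset["os"] == "windows"
            and asset["default_browser"] == "ie"
            and not asset["patched"])

def risk_score(asset):
    """Toy risk model: exposure is required; business impact and missing
    endpoint protection then raise the priority."""
    if not is_exposed(asset):
        return 0.0
    score = 5.0
    if asset["business_critical"]:
        score += 3.0
    if not asset["endpoint_protection"]:
        score += 2.0
    return score

work_queue = sorted((a for a in ASSETS if is_exposed(a)),
                    key=risk_score, reverse=True)
for asset in work_queue:
    print(f"patch {asset['name']} (risk {risk_score(asset):.1f})")
```

Only the first asset ends up in the work queue, which is exactly the point: context shrinks the patch list to the machines that matter.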
Context is essential in modernizing your VM effort to take a more advanced risk-based approach. Why? Because risk-based VM (RBVM) is all about understanding the unique risk factors of every IT asset touching a network. And that level of analysis and detail requires context for every asset—and its security state and role in the business.
Why is building security context so difficult?
The main challenge to achieving context within RBVM is the massive, dynamically changing datasets that most organizations must analyze to understand the enterprise attack surface. Instead of looking at functions with a few simple variables — and figuring out what those variables are describing — companies must analyze functions with hundreds, or hundreds of thousands, of variables. And even if they could achieve that feat, those variables change over time, making accurate context very hard to gather and even harder to maintain. This leads to organizations making critical decisions on gut instinct, without knowing all the facts. And even one bad call can be disastrous.
Another challenge that makes context so hard to achieve is the fact that legacy VM systems struggle to understand the role of assets in business. For example, traditional VM systems don’t properly interpret the distinctions between print servers and DNS servers, especially as it pertains to how those nodes support business objectives.
As a result, companies are left with large amounts of data, a lack of intelligent insights about it, no automated capabilities to help out, and nonexistent or even flawed information about their assets, valuable data and vulnerabilities.
Ingredients to generating accurate context
Advanced AI techniques offer effective ways companies can gather good contextual information. For an easily accessible example of this, query Google for “coffee”. What does Google respond with? It will return information about coffee shops nearby, their operating hours, directions on how to get there by car, on foot, or using public transport (as appropriate), and how much traffic you are likely to encounter. Google may also indicate how those shops have been reviewed by other coffee drinkers, and any conditions related to special events you need to worry about.
Now, if you ask Google about “coffee manufacturing”, you will get a completely different answer, one that probably won’t include directions to the nearest coffee factory, which would be silly.
Google can do this because it overlays your location data as well as whatever preferences it knows about you and other people like you, its model for coffee shops and other coffee related things as this relates to your specific query. However, Google might have tried to send you to Argentina if it didn’t have good context about your query based on millions of other searches that people have made, and your personal preferences. And the key is having good AI that is able to quickly generate context based on a lot of ever-changing factors.
In a similar light, Balbix has created information-driven context engines to achieve similar results in the field of breach risk, vulnerability management and cyber-resilience. Balbix’s context-based VM:
- Uses AI, automation and machine learning to comprehensively capture and compile every relevant data point across the enterprise attack surface. This includes human-inspired understanding of what is on the corporate network, elements sitting on extended or partner networks, users and applications. It then analyzes those data points, in context, to determine the full attack potential aligned against each asset.
- Continuously performs its data capture and analysis to ensure that none of its context becomes stale or irrelevant. Decisions made based on the context provided by Balbix’s engines are always founded on accurate, timely and complete information.
Balbix enables security and IT team members to query the enterprise security context using natural language search (e.g., “what is my risk from phishing”) as well as clickable dashboards where you can drill down from a ten-thousand-foot risk view to an individual device’s security posture.
Context is essential to being effective in cybersecurity. Unfortunately, capturing and studying the full context surrounding a cybersecurity situation or event is almost never straightforward. Organizations need to partner with a company that can leverage AI and machine learning to produce accurate, timely and contextually sound results. Doing so can support a highly efficient, risk-based vulnerability management program capable of locating and mitigating critical vulnerabilities long before attackers can strike.
If you’ve ever longed to give your best friend a piece of chocolate with the shape of her face on it for her birthday – take heart. 3D-printed food is now making it possible.
This article covers how 3D printed foods are created and explores some ways this new type of cuisine is already being used.
How 3D Food Is Created
3D printing, also known as additive manufacturing, is the process of creating three-dimensional objects from digital files. Objects are created one layer at a time, and 3D printers can create complex shapes relatively quickly (faster than many traditional manufacturing methods, in fact).
Most printers for 3D food use a similar technique to regular 3D printers. They deposit a food-safe 3D printer filament (like chocolate, tomato, or other flavors) onto a build plate based on a model you design yourself, or one that you download.
Have you ever put icing on a cupcake using a piping bag? Food printers work in a similar way. They deposit the edible filaments into your desired shapes, one layer at a time, creating a three-dimensional food model as it prints.
Is 3D Printed Food Safe to Eat?
In a word, yes. Food for human consumption must meet stringent federal safety requirements, and 3D food printers and edible filaments must follow those standards.
4 Exciting Use Cases for 3D Printed Food
Cruelty-free, environmentally-friendly meat: Redefine Meat and Novameat are working to develop 3D printed meat that mimics the taste, smell, and texture of real meat, using printable, plant-based materials. Novameat wants to start supplying national supermarkets by 2021 with 3D printers that can produce meat without killing farm animals.
Space food: NASA is experimenting with 3D printed pizza as an alternative to typical boring astronaut food. The Beehex company can 3D print entire 12-inch pizzas in under five minutes, which are ideal not only for use in space but also potentially in pizza restaurants and takeaways.
Biometric 3D Printed Sushi: Open-Meals may revolutionize the way we eat with their digitized food. When making reservations for their restaurant, Sushi Singularity (set to open in Tokyo, Japan), guests receive a health test kit that gives the restaurant information about their unique biometric and nutritional needs, allowing the restaurant to use bespoke 3D printers to create a meal personalized to each guest's biodata.
Food for people who have difficulty chewing: Nursing homes in Germany serve a 3D-printed food product called Smoothfoods to elderly residents who have trouble chewing and swallowing.
Curious about 3D food printing? There are quite a few 3D printers on the market right now that will help you create digital versions of things like pastries, chocolate sculptures, and pasta. The current 3D food printers are a bit pricey, however, with many in the range of $3,000 - $4,000.
For more information on 3D printing and other transformative technology trends, check out my books and subscribe to my YouTube channel.
The plot resembles science fiction, but it’s rooted in fact.
Fans of the Star Trek television series will recognize the key player, Gene Roddenberry, as the man who masterminded the popular show and its cast of memorable characters: Captain Kirk, Science Officer Spock, Dr. McCoy and the rest of the crew of the Starship Enterprise.
The Enterprise sailed across deep space in the 23rd Century on a five year mission: “to explore strange new worlds, to seek out new life and new civilizations, to boldly go where no man has gone before.”
Fans will recognize those words as part of the “Captain’s Oath,” a narrative by Kirk that was used as an introduction for the weekly TV episodes that ran for three seasons on NBC, starting in 1966. The original Star Trek program spawned several other “Trek-type” TV shows (Star Trek: The Next Generation, Star Trek: Deep Space Nine, Star Trek: Voyager and Star Trek: Enterprise) and helped launch the immensely popular Star Trek movie franchise.
Little did we know that data recovery would play a big role in the next chapter.
Computer Creations Come Back to Life
The scripts for the futuristic TV program were written on an old-fashioned typewriter. Roddenberry used a pair of custom-built computers late in his career to capture story ideas, write scripts and notes. The author moved on to working with more mainstream computers over time, but kept the custom-built pair in his possession.
Roddenberry died in 1991, but it wasn’t until much later that the estate discovered nearly 200 5.25-inch floppy disks on which the Star Trek creator stored his work.
When it came time to read the disks, a serious problem was discovered. One of the computers had long since been auctioned off, and the remaining device no longer worked. It had died, taking down its proprietary operating system, special text-processing software and any normal means of reading what was on all of those 5.25-inch floppy disks.
Next Stop: DriveSavers
LunaTech, the IT company retained by Roddenberry Entertainment, suggested DriveSavers Data Recovery. “We’ve been working with DriveSavers for over 5 years,” said Bobby Pappas, president and founder of LunaTech IT, “we knew if anyone could get this unique data back, they could.” As the world’s original data recovery specialist, DriveSavers offered to help get back the information from Roddenberry’s system that had been missing for decades. But these were no ordinary floppies.
First, we had to develop our own method of extracting the data, which had been set up with special formatting that was no longer in use. There was no user manual. There was nothing written down anywhere about any of this.
Still, DriveSavers was very familiar with recovery from inaccessible storage devices, including floppy disks. The bigger challenge was making sense out of what was on those disks, since they were written in a proprietary format using an out-of-date operating system.
Star Trek: Final Chapter
It took three months for the DriveSavers engineering team to develop software that could read the disks. Even though we were able to crack the formatting, reading the nearly 200 disks was painstakingly slow work that took the better part of a year to finish.
When the project was completed, DriveSavers had unearthed files that hadn’t been seen in over 30 years, including Star Trek materials, personal notes, and even a copy of his son’s homework!
And so the missing files of Gene Roddenberry were recovered in full, and material that might otherwise have been lost forever was saved.
What is file security?
File security is all about safeguarding your business-critical information from prying eyes by implementing stringent access control measures and flawless permission hygiene. Apart from enabling and monitoring security access controls, decluttering data storage also plays an important role in securing files. Regularly optimize file storage by purging old, stale, and other junk files to focus on business-critical files. Tackle data security threats and storage inefficiencies with periodic reviews and enhancements to your file security strategy.
How is file security different from data security?
- Files are the most basic securable units of a repository. Often data is stored and shared as files and folders. Therefore, file security is a subset of data security that focuses on the secure use of files.
- Data security protects data in use, in transit, and at rest. Infrastructural and software controls are used to implement stringent data security strategies. File security, on the other hand, protects sensitive files like personal information of customers and other business files.
Why is file security important?
To protect sensitive data
Personally identifiable information (PII), electronic personal health information (ePHI), confidential contracts, and other business-critical data must be stored safely. Careless transmission or use of such files could lead to data privacy violations, resulting in heavy fines for the organization.
To secure file sharing
Files transferred through unsecured channels can be misused by insiders or hackers for malicious activities. Comprehensive data leak prevention software can help prevent unauthorized movement of business-critical data out of the organization.
To avoid data breaches
In 2019, personal details of 10.6 million MGM resort guests were breached. The impact of such a breach can be fatal to any organization. It is not just the fines and legal consequences, but also the loss of trust that can destroy a business.
File security best practices
- Eliminate permission hygiene issues
The principle of least privilege (POLP) ensures that only the bare minimum privileges required to complete a task are granted. It is advisable to define access control lists (ACLs) for files and folders based on user roles and requirements. Resolve permission hygiene issues, such as undue privileges and open access to files caused by permission propagation, with a security permission analyzer.
- Secure file sharing channels
All file transfers should be authorized and secure. Audit all the possible ways files can be transferred, and block private devices like personal USB drives. Use USB data theft protection software to stop unofficial data transfers.
- Implement file server auditing
Be wary of multiple failed accesses, bulk file renames, or modifications. Mass, unofficial file modifications such as delete events may indicate a ransomware attack (a simple illustration of this kind of detection appears after this list). Be prepared by automating your incident response against file threats with robust file server auditing software.
- Enforce authentication and authorization protocols
Enforce multi-factor authentication (MFA) for all users in your organizations. MFA makes it difficult for hackers to penetrate the network. Authorize only valid and official data access requests. Grant open access to all employees and partners only when necessary.
- Conduct file storage analysis
Analyze and manage your file repositories periodically. Know where your critical files are stored in the organization. Continuous review of stale files and unused files helps eliminate permission misuse incidents. Revoke permissions on files owned by former employees. Compute the cost of storing stale files with the help of our infographic.
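As promised above, here is a simple illustration of the file-auditing idea. The sketch is Python; the audit event format, time window and threshold are invented for the example and are not taken from any particular auditing product.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical audit events: (timestamp, user, action) tuples.
EVENTS = [
    (datetime(2023, 1, 10, 9, 0, 1) + timedelta(seconds=i), "jdoe", "rename")
    for i in range(250)
] + [(datetime(2023, 1, 10, 9, 5, 0), "asmith", "read")]

WINDOW = timedelta(minutes=5)      # look-back window
THRESHOLD = 100                    # suspicious number of changes per window
SUSPICIOUS = {"rename", "delete", "overwrite"}

def find_bulk_modifiers(events, window=WINDOW, threshold=THRESHOLD):
    """Return users whose rename/delete/overwrite count exceeds the
    threshold inside any sliding window, a possible ransomware signal."""
    per_user = defaultdict(list)
    for ts, user, action in events:
        if action in SUSPICIOUS:
            per_user[user].append(ts)

    flagged = set()
    for user, stamps in per_user.items():
        stamps.sort()
        start = 0
        for end, ts in enumerate(stamps):
            while ts - stamps[start] > window:
                start += 1
            if end - start + 1 >= threshold:
                flagged.add(user)
                break
    return flagged

for user in find_bulk_modifiers(EVENTS):
    print(f"ALERT: possible mass file modification by {user}")
```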
Windows 11 snap layouts are the new feature introduced to help end-users provide a better multitasking experience. This offers many different options to arrange multiple windows in the most preferred way.
You can use the snap layout feature for all the apps in Windows 11. This provides six layouts to arrange applications on the desktop.
Windows 11 helps multitask with tools like Snap layouts, Desktops, and a new more-intuitive redocking experience. The Windows 11 snap layouts help users arrange applications in six (6) different ways.
What is Windows 11 Snap Layouts
Windows 11 snap layouts help end-users arrange applications in six (6) different layouts. This feature is helpful in many scenarios during working on other consoles or portals. As shown in the screenshot below, you can hover over the maximize button to find the snap layout options on Windows 11.
Snap Groups are here to help you group applications in 6 layouts.
Windows 11 Snap Layout 1 – Two Equal Parts
The first option in Windows 11 snap layout comes with two application windows, as you can see below. The desktop screen gets divided into two equal parts. This is useful if you want to multitask with two different applications.
50-50 Ratio Snap Layout.
Snap Layout 2
The second option in snap layout helps to arrange two applications in an 80/20 layout. This is useful when you multitask with two different applications in the 80/20 layout.
80-20 ratio layout.
Snap Layout 3 – Windows 11 Snap Layouts
The third option in snap layout helps arrange three applications in three equal layouts. This is useful when you multitask with three different applications in 3 equal layouts.
3 equal layouts.
Snap Layout 4 – Windows 11 Snap Layouts
The fourth option in snap layout helps arrange three applications in 50-25-25 ratio layouts. This is useful when you multitask with three different applications in 3 layouts.
50-25-25 ratio layouts.
Snap Layout 5
The fifth option in snap layout helps arrange four applications in 25-25-25-25 ratio layouts. This is useful when you multitask with four different applications in 4 layouts.
25-25-25-25 ratio layouts.
Windows 11 Snap Layout 6
The sixth option in snap layout helps arrange three applications in three columns, with a wider window in the centre. This is useful when you multitask with three different applications in 3 layouts.
The US state with the highest population is California. At the end of 2019, it was 39.56 million. That’s A LOT of people, right?
Yes. However, according to the recent study published by Fortified Health Security, 40 MILLION Americans were affected by a healthcare data breach in 2019 alone. That represents an increase of 65% over the total of the year prior.
How Do They Do It?
First and foremost, they hack into various systems. This happens most often because of human error. That is the common denominator in the majority of breaches of any kind. Phishing emails are an easy way to get unsuspecting employees to reveal passwords or deploy methods that allow cybercriminals to easily access the systems illegally.
In the Fortified Health Security report, analysis of data from 2009 through 2019 revealed that more than 189 million records had been breached during that time. Provider organizations were the most frequently targeted – and the most successfully breached. In 2019, more than 334 provider entities were affected, exposing details of over 22.7 million patients. Health plans and healthcare business associates were next in line for the most often hacked businesses.
The Fines Add Up
HIPAA breaches can often result in fines and penalties issued by the Office for Civil Rights (OCR). OCR secured eight resolution agreements in the first 10 months of the year, each accompanied by a fine averaging $1.6 million – and by corrective action plans, which can uncover other possible flaws in an organization’s HIPAA compliance, such as gaps in its policies and procedures.
The rate at which data breaches are happening isn’t increasing slowly and steadily, it’s rocketing past any expectations we could have set. As a healthcare organization, it’s important to take a step back and ensure that you have HIPAA and cybersecurity policies in place, action plans outlined, and corrective action plans in place for when a breach occurs. While it used to be common to assign the person at the front desk the role of overseeing all things, this should ideally be a qualified individual who has a dedicated role as a HIPAA Security (and in some cases) Privacy Officer. Ongoing programs should also be in place to train employees against the tactics that are being used daily to take down their business through their human error.
Code Signing certificate installation process
A code signing certificate uses a two-key mechanism, a public key and a private key, to sign the code and content of a piece of software. This method protects users by proving who published the software and that it has not been altered. A code signing certificate must be issued by a valid certificate authority so that the user's operating system recognizes the signature as a genuine identity on the internet. The publisher generates the public/private key pair and submits the public key to the certificate authority, which issues the certificate binding the publisher's verified identity to that key.
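The sketch below illustrates the two-key signing mechanism using Python's third-party cryptography package. It is only a conceptual sketch; in a real code signing workflow the key pair is bound to a CA-issued certificate and the signing is performed by dedicated signing tools.

```python
# pip install cryptography
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.exceptions import InvalidSignature

# Publisher side: generate a key pair. In practice the public key goes to
# the certificate authority, which issues a certificate binding it to the
# publisher's verified identity.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
public_key = private_key.public_key()

code_bytes = b"example installer payload"

# Sign a digest of the code with the private key.
signature = private_key.sign(code_bytes, padding.PKCS1v15(), hashes.SHA256())

# User side: the operating system checks the signature with the public key
# from the certificate; any tampering with the code makes this fail.
try:
    public_key.verify(signature, code_bytes, padding.PKCS1v15(), hashes.SHA256())
    print("signature valid: code is unmodified and from the key holder")
except InvalidSignature:
    print("signature invalid: code was altered or signed by someone else")
```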
To continue reading, download our white paper - Code Signing (CS) certificate installation process (Other than JAVA code signing)
Download White Paper Now
What's the difference between cryptography in .NET Framework and .NET Core?
A large part of the .NET APIs are common to both .NET Core and .NET Framework. Microsoft even released the .NET Standard, a subset of .NET APIs provided by all .NET implementations, to simplify things for cross-implementation developers. However, there are still significant differences between Core and Framework, and cryptography is one of them.
While the cryptography provided by .NET Core 2.0 is close to that of the latest .NET Framework, .NET Core 3.0, to be released in September 2019, will provide two major improvements: authenticated encryption and interoperable key formats. Below we'll look at why these are important and what support Microsoft will give us.
Authenticating a ciphertext before decrypting it is now seen as fundamental for security. It could traditionally be achieved with the Encrypt-then-MAC scheme (for example, with AES-CBC and HMAC-SHA256). Unfortunately, it is also easy to get wrong.
Authenticated modes of encryption like CCM and GCM aim to fix that by encrypting and authenticating data in one step. For instance, a developer can't forget to authenticate the ciphertext since the Decrypt method will only return the plaintext if authentication succeeded.
CCM and GCM modes for AES will both be supported in .NET Core 3.0 thanks to the AesCcm and AesGcm classes.
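The authenticated-encryption workflow looks roughly like the sketch below. It is written in Python with the cryptography package purely to keep the example short; .NET Core 3.0's AesGcm class follows the same encrypt-with-tag, verify-before-decrypt pattern.

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag

key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)

nonce = os.urandom(12)                 # 96-bit nonce, must be unique per key
plaintext = b"card number 4111 1111 1111 1111"
associated_data = b"record-id=42"      # authenticated but not encrypted

# Encrypt: the returned blob is ciphertext plus an authentication tag.
ciphertext = aead.encrypt(nonce, plaintext, associated_data)

# Decrypt: the tag is checked first. Tampering raises InvalidTag and no
# plaintext is ever returned, so authentication cannot be "forgotten".
try:
    recovered = aead.decrypt(nonce, ciphertext, associated_data)
    print(recovered)
except InvalidTag:
    print("ciphertext or associated data was modified, rejecting")
```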
Interoperable Key Formats
You can import and export keys in all versions of .NET, but .NET Core 3.0 will make it easier to interoperate with other systems, including OpenSSL, by supporting standardized formats.
PKCS#1 for RSA Keys
PKCS#1 is the most common format for RSA keys. It is an ASN.1 DER structure notably used in TLS certificates, PKCS#8 and .pem files (in which case it is also Base64-encoded).
PKCS#8 for Asymmetric Keys
PKCS#8 is used to store and protect asymmetric keys for various algorithms including RSA and elliptic curve. Private keys can be encrypted.
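To make the format less abstract, the following sketch (again Python with the cryptography package, chosen only for brevity) generates an RSA key and round-trips it through an encrypted PKCS#8 blob, the same DER/PEM structure that .NET Core 3.0 can now import and export.

```python
# pip install cryptography
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Export: PEM-encoded, password-protected PKCS#8 private key.
pkcs8_pem = key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.BestAvailableEncryption(b"s3cret"),
)
print(pkcs8_pem.decode().splitlines()[0])   # -----BEGIN ENCRYPTED PRIVATE KEY-----

# Import: any tool that understands PKCS#8 (OpenSSL, Java, .NET Core 3.0+)
# can load the same bytes back, given the passphrase.
restored = serialization.load_pem_private_key(pkcs8_pem, password=b"s3cret")
print("round-trip OK:", restored.key_size == key.key_size)
```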
PKCS#12 for Keystores
PKCS#12 can bundle several keys or certificates in a single structure that can be encrypted and signed. It replaces Microsoft's PFX format. In Java, PKCS#12 recently replaced JKS as the default key store format. There is also support in OpenSSL.
Finally, note that all these standards are rather old and contain both good and bad cryptography. They need to be used with care - more on that in another post.
These additions to .NET Core 3.0 make important updates to cryptography: authenticated modes are needed for best practice encryption, and interoperable key formats are much more useful for modern hybrid environments. Both are good reasons to upgrade from Core 2.1, or to make the move from .NET Framework.
You can find out more in Microsoft's blog post announcing .NET Core 3.0 Preview 1.
Annex: New classes and methods in .NET Core 3.0
Interesting classes and methods added in .NET Core 3.0 include the AesCcm and AesGcm classes, along with new import/export support for the key formats described above.
A complete list is available in dotnet/core on GitHub.
Spyware goes by many names, including adware, malware, crimeware, scumware and snoopware, but no matter what you call it, its purpose is still the same: to creep into your computer files and steal your personal information.
Once the information is in their hands, hackers can steal your identity, use your credit cards, siphon funds from your bank accounts, and more.
Simply put: it’s bad news and you want nothing to do with it.
The good news is that spyware prevention is possible — and there are many ways to keep these dangerous programs at bay.
In addition to installing the right software, consumers can practice these computer security tips from Webroot:
Download software directly from the source. A common source of spyware infection is free or pirated programs downloaded from file-sharing sites that have been booby-trapped with malware. Set your browser security settings to “high” to protect yourself from “drive-by” downloads and automatic installations of unwanted programs.
Avoid questionable websites, such as those featuring adult material. They’re notorious for spreading spyware threats and causing users problems.
Use a firewall and be suspicious of email and IM. For instance:
- Don’t open attachments unless you know the sender and are expecting a file from him or her.
- Delete messages you suspect are spam (don’t even open them).
- Avoid clicking on links within messages.
- Do not provide personal information to unsolicited requests — even if they seem legitimate. Instead, if you receive a request for personal information from your bank or credit card company, contact that financial institution directly, but do not click on a link embedded in the email message.
A new microchip developed by researchers at the University of Michigan uses 30,000 times less power in sleep mode and 10 times less power in active mode than comparable chips now on the market, the university announced Friday.
Intended for use in sensor-based devices such as medical implants, environment monitors or surveillance equipment, the new Phoenix processor consumes just 30 picowatts during sleep mode. A picowatt is one trillionth of a watt — in theory, the energy stored in a watch battery would be enough to run the Phoenix for 263 years.
Scott Hanson, a U-M doctoral student who co-led the project, will present the design on Friday at the Symposium on VLSI Circuits, which is sponsored by the Institute of Electrical and Electronics Engineers.
Many modern sensors and electronics measure one square millimeter and smaller, so the Phoenix’s diminutive size of one square millimeter is not remarkable in itself, the researchers said. What is remarkable, however, is that the Phoenix is the same size as its thin-film battery.
Indeed, batteries are typically larger than the processors they power, drastically expanding the size and cost of the entire system, said David Blaauw, a professor in U-M’s department of electrical engineering and computer science.
The battery in an average laptop computer, for example, is about 5,000 times larger than the processor, and it provides only a few hours of power, he noted.
The Phoenix is made out of standard chip-fabrication materials, and is actually based on an older technology than many modern chips are, Blaauw told TechNewsWorld.
“It’s a bit counterintuitive, but we found that the newer chip technologies are really good for cases where you want to push performance, like in a laptop or server,” he explained. Sensors, on the other hand, benefit from lower performance, which can be traded off for higher power efficiency.
“Low power consumption allows us to reduce battery size and thereby overall system size,” Blaauw explained. “Our system, including the battery, is projected to be 1,000 times smaller than the smallest known sensing system today. It could allow for a host of new sensor applications.”
In fact, a group of U-M researchers is testing the Phoenix in a biomedical sensor to monitor eye pressure in glaucoma patients, but other possible applications include sensor networks to monitor air or water or detect movement; sensor-enriched concrete that senses the structural integrity of new buildings and bridges; and robust pacemakers that could take more detailed readings of a patient’s health.
The Phoenix’s low power usage stems from its sleep mode. Sensors can spend more than 99 percent of their lives in sleep mode, waking only briefly at regular intervals to perform computations, the researchers explained.
“Sleep mode power dominates in sensors, so we designed this device from the ground up with an efficient sleep mode as the No. 1 goal,” said Dennis Sylvester, an associate professor in the department of electrical engineering and computer science. “That’s not been done before.”
Specifically, the Phoenix defaults to sleep, and a low-power timer acts as an alarm clock on perpetual snooze, waking it every 10 minutes for 1/10th of a second to run a set of 2,000 instructions. That list includes checking the sensor for new data, processing it, compressing it into a sort of short-hand and storing it before going back to sleep.
The timer “isn’t an atomic clock,” Hanson explained. “We keep time to 10 minutes plus or minus a few tenths of a second. For the applications this is designed for, that’s OK. You don’t need absolute accuracy in a sensor. We’ve traded that for enormous power savings.”
Critical Power Gate
The Phoenix also features a unique power gate design as part of its sleep strategy. Power gates keep the electric current from entering parts of a chip not essential for memory during sleep.
In typical state-of-the-art chips, power gates are wide with low resistance to let through as much electric current as possible when the device is turned on. These chips wake up quickly and run fast, but a significant amount of electric current leaks through in sleep mode.
Engineers for the Phoenix, on the other hand, used much narrower power gates that restrict the flow of electric current. Coupled with its use of an older process technology, the result is a reduction in energy leaks.
“I certainly haven’t heard of a device like this — that’s a very small amount of power,” Roger Kay, president of Endpoint Technologies, told TechNewsWorld. “This opens up some interesting possibilities in medical applications.”
When the U.S. National Science Foundation announced its intent to form a program dedicated to making next-generation networks more resilient, the message was clear. In an increasingly connected world, we can’t afford to have communication networks experience measurable levels of failure or degradation in the wake of a possible attack—we can’t even afford human error.
The Resilient and Intelligent Next-Generation Systems (RINGS) program came on the heels of the Department of Defense’s US$600 million investment in 5G technologies. Other industries, including mission-critical ones like healthcare and utilities, are also banking on next-gen telecommunications systems to advance everything from telemedicine to smart equipment management.
But to achieve true resilience, organizations must adopt approaches capable of going beyond traditional network monitoring and embrace new technologies. Solutions like AIOps and network observability can reduce the time it takes to identify and repair network failures, boosting network resiliency and performance.
What Is AIOps?
AIOps is the method of applying artificial intelligence and its components, including predictive analytics and machine learning, to IT operations. AIOps collects data from various sources and turns it into actionable intelligence, which organizations can use proactively to address and even anticipate certain situations—for example, signs of a network intrusion or service disruption.
How Is AIOps Different From Traditional Network Monitoring?
AIOps goes further than traditional network monitoring. Not only does the system provide organizations with predictive intelligence, allowing potential problems to be detected before they occur; it can also automatically respond to those problems without the need for IT’s involvement.
When IT needs to be involved, AIOps cuts through the noise by collecting data from connected resources (like sensors, cameras, other devices, and network elements). It streamlines information by reducing noise and identifies high-priority information so IT managers can focus on pressing items and not suffer from alert fatigue. In this way, AIOps provides the team with a high degree of observability of everything happening across their next-generation networks.
What Is Network Observability, and Why Is It Important?
Having the ability to observe the entire network gives IT managers a significant advantage for achieving true resiliency. Observability isn’t just about seeing what’s happening across the network; it’s about being able to use multiple data sets to quickly identify issues and fix them before they become disruptive. Instead of using only log data to track the root of a problem, organizations can leverage a combination of log data, application data, and other metrics.
Think of observability as being able to look to the left, center, and right:
- The left is the past—what happened on the network recently.
- The center is the present—what’s happening on the network right now.
- The right is the future—what will likely happen given what’s happened before and what’s currently taking place.
What someone sees when they look to the right is informed by the wealth of past and present data.
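A toy example of this "look left to anticipate the right" idea is shown below; the latency numbers and the three-sigma threshold are invented for illustration and are far simpler than what a real AIOps engine would use.

```python
from statistics import mean, stdev

# Past: recent latency samples in milliseconds (invented data).
history = [21, 23, 22, 24, 25, 23, 22, 26, 24, 25]

# Present: the sample that just arrived.
current = 48

# Future: the expected range projected from the past.
avg, spread = mean(history), stdev(history)
upper_bound = avg + 3 * spread      # simple three-sigma band

if current > upper_bound:
    print(f"anomaly: {current} ms exceeds expected ceiling {upper_bound:.1f} ms")
else:
    print("within expected range, nothing to flag")

# The present becomes part of the past for the next prediction.
history = history[1:] + [current]
```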
How Is Observability Different From Traditional Network Monitoring?
Network monitoring is a reactive measure; IT managers are alerted to issues as they happen. AIOps-based observability is an anticipatory measure. Therefore, it’s more likely to prevent challenges and preserve resiliency.
Plus, today’s networks are highly complex. They consist of in-house, on-premises, and hybrid clouds, and they’re continually changing. Managing them effectively and ensuring they continue to operate as expected requires an unfiltered viewpoint.
How Does All This Tie Into Performance?
With the emergence of 5G, the prevalence of smart devices, and the prospect of long-term remote work environments, our world is more connected than ever before. A single loss in connectivity can lead to minor inconveniences (such as when a social media site goes dark) or large-scale disruptions (like those resulting from an attack on a utility company).
AIOps and observability provide significant layers of protection against these disturbances. AIOps can help anticipate downtime and proactively remediate threats. Even if a problem arises, observability can help teams identify the problem quickly and trace it back to the source so networks can remain high-performing and resilient.
Visit our website for more information on network observability.
Within three years, nearly 75 percent of smartphones and tablets will have a multicore processor, predicts In-Stat, a research firm. This trend is no surprise, considering how mobile devices are increasingly handling multiple tasks simultaneously, such as serving as a videoconferencing endpoint while downloading email in the background and running an antimalware scan.
Today’s commercially available dual-core mobile devices include Apple’s iPad 2, HTC’s EVO 3D and Samsung’s Galaxy Tab, and some vendors have announced quad-core processors that will begin shipping next year. Operating systems are the other half of the multicore equation. Android and iOS, which are No. 1 and No. 2 in terms of U.S. market share, already support multicore processors.
“IOS and Android are, at their core, based on multithreaded operating systems,” says Geoff Kratz, chief scientist at FarWest Software, a consultancy that specializes in system design. “Android is basically Linux, and iOS is based on Mac OS, which in turn is based on BSD UNIX. That means that, out of the box, these systems are built to support multicore/multi-CPU systems and multithreaded apps.”
For enterprise developers, what all this means is that it’s time to get up-to-speed on programming for mobile multicore devices. That process starts with understanding multicore’s benefits — and why they don’t always apply.
Divide and Conquer
Performance is arguably multicore’s biggest and most obvious benefit. But that benefit doesn’t apply across the board; not all apps use multithreading. So when developing an app, an important first step is to determine what can be done in parallel.
“This type of programming can be tricky at first, but once you get used to it and thinking about it, it becomes straightforward,” says Kratz. “For multithreaded programming, the important concepts to understand are queues (for inter-thread communications) and the various types of locks you can use to protect memory (like mutexes, spin locks and read-write locks).”
It’s also important to protect shared data.
“Most programmers new to multithreaded programming assume things like a simple assignment to a variable (e.g., setting a “long” variable to “1”) is atomic, but unfortunately it often isn’t,” says Kratz. “That means that one thread could be setting a variable to some value, and another thread catches it halfway through the write, getting back a nonsense value.”
It’s here that hardware fragmentation can compound the problem. For example, not all platforms handle atomic assignments. So an app might run flawlessly on an enterprise’s installed base of mobile devices, only to have problems arise when non-atomic hardware is added to the mix.
“Using mutexes (which allow exclusive access to a variable) or read/write locks (which allow multiple threads to read the data, but lock everyone out when a thread wants to write) are the two most common approaches, and mutexes are by far the most common,” says Kratz. “For Android programmers, this is easily done using the ‘synchronized’ construct in the Java language, protecting some of the code paths so only one thread at a time can use it.”
For iOS, options include using POSIX mutexes, the NSLock class or the @synchronized directive.
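The following sketch shows the same idea in Python, chosen here only for brevity; Java's synchronized blocks on Android and NSLock or @synchronized on iOS play the same role. Without the lock, two threads incrementing a shared counter can interleave their read-modify-write steps and silently lose updates.

```python
import threading

counter = 0
counter_lock = threading.Lock()     # the mutex protecting the shared value

def worker(iterations=100_000):
    global counter
    for _ in range(iterations):
        # Only one thread at a time may run this block, so the
        # read-increment-write sequence can never interleave.
        with counter_lock:
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)   # always 400000 with the lock; can be less without it
```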
“Ideally, a programmer should minimize the amount of data shared between threads to the absolute bare minimum,” says Kratz. “It helps with performance and, more importantly, makes the app simpler and easier to understand — and less liable to errors as a result.”
No Free Power-lunch
As the installed base of mobile multicore devices grows, it doesn’t mean developers can now ignore power consumption. Just the opposite: If multicore enables things that convince more enterprises to increase their usage of smartphones and tablets, then they’re also going to expect these devices to be able to run for an entire workday between charges. So use the extra horsepower efficiently, such as by managing queues.
“You want to use a queue construct that allows anything reading from the queue to wait efficiently for the next item on the queue,” says Kratz. “If there is nothing on the queue, then the threads should basically go idle and use no CPU. If the chosen queue forces you to poll and repeatedly go back and read the queue to see if there is data on it, then that results in CPU effort for no gain, and all you’ve done is use up power and generate heat. If a programmer has no choice but to poll, then I would recommend adding a small sleep between polling attempts where possible, to keep the CPU load down a bit.”
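In Python terms (the pattern maps directly onto the queue classes available on Android and iOS), the difference between efficient blocking and wasteful polling looks like this:

```python
import queue
import threading

work_queue = queue.Queue()

def consumer():
    while True:
        item = work_queue.get()        # blocks idle (no CPU) until data arrives
        if item is None:               # sentinel value used for shutdown
            break
        print(f"processed {item}")
        work_queue.task_done()

# Anti-pattern (wastes power): spinning instead of blocking, e.g.
#     while work_queue.empty():
#         pass                         # burns CPU doing nothing
# If polling is unavoidable, at least sleep briefly between attempts.

t = threading.Thread(target=consumer)
t.start()
for item in ["email sync", "av scan", "location update"]:
    work_queue.put(item)
work_queue.put(None)                   # tell the consumer to exit
t.join()
```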
When it comes to power management, many best practices from the single-core world still apply to multicore. For example, a device’s radios — not just cellular, but GPS and Wi-Fi too — are among the biggest power-draws. So even though multicore means an app can sit in the background and use a radio — such as a navigation app constantly updating nearby restaurants and ATMs — consider whether that’s the most efficient use of battery resources.
“For some apps, like a turn-by-turn navigation app, it makes sense that it wants the highest-resolution location as frequently as possible,” says Kratz. “But for some apps, a more coarse location and far less frequent updates may be sufficient and will help preserve battery.
“There may be times where an app will adapt its use of the GPS, depending on if the device is plugged into power or not. In this case, if you are plugged into an external power source, then the app can dial up the resolution and update frequency. When the device is unplugged, the app could then dial it back, conserving power and still working in a useful fashion.”
Photo Credit: @iStockphoto.com/skodonnell
San Diego, United States – GBT Technologies Inc. is implementing a machine-learning-driven pattern matching technology within Epsilon, its microchip reliability verification and correction electronic design automation (EDA) tool. Design rules are getting increasingly complex with each new process node, and design firms are facing new challenges in the physical verification domain.
One of the major areas affected by process physics is reliability verification (RV). Microchips are major components in nearly every electronics application. Civil, military and space exploration industries require reliable operation for many years, often in severe environments. High performance computing systems require advanced processing with high reliability to ensure the consistency and accuracy of the processed data. Complex integrated circuits are at the heart of these systems and need to function with a high level of dependability.
Particularly in the fields of medicine, aviation, transportation, data storage and industrial instrumentation, a microchip's reliability is crucial. GBT is implementing new machine-learning-driven pattern matching techniques within its Epsilon system with the goal of addressing advanced semiconductor physics, ensuring a high level of reliability, optimal power consumption and high performance. As Epsilon analyses the layout of an integrated circuit (IC), it identifies reliability weak spots, which are specific regions of an IC's layout, and learns their patterns. As the tool continues analysing the layout, it records problematic zones, taking into account the patterns' orientations and placements.
In addition, it is designed to understand small variations in dimensions of the pattern, as specified by the designer or an automatic synthesis tool. As the weak spots are identified, the tool will take appropriate action to modify and correct them. A deep learning mechanism will be performing the data analysis, identification, categorisation, and reasoning while executing an automatic correction.
The machine learning system will understand the patterns and record them in an internal library for future use. Epsilon's pattern matching technology will analyse the chip's data according to a set of predefined and learned-from-experience rules. Its cognitive capabilities will let it self-adjust to the newest nodes with new constraints and challenges, with the goal of providing quick and reliable verification and correction of an IC layout.
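None of Epsilon's internals are public; purely to illustrate what "learning and matching weak-spot patterns" can mean in practice, here is a toy sketch in which a known weak spot is stored as a feature tuple (wire width, spacing, current load) with a tolerance, and candidate layout regions are flagged when they fall inside a learned pattern's tolerance window. All names and numbers are invented for demonstration.

```python
from dataclasses import dataclass

@dataclass
class WeakSpotPattern:
    name: str
    width_um: float          # nominal wire width
    spacing_um: float        # nominal spacing to neighbour
    current_ma: float        # nominal current load
    tolerance: float = 0.10  # accept +/-10% variation ("small variations in dimensions")

    def matches(self, width_um, spacing_um, current_ma):
        def close(nominal, value):
            return abs(value - nominal) <= self.tolerance * nominal
        return (close(self.width_um, width_um)
                and close(self.spacing_um, spacing_um)
                and close(self.current_ma, current_ma))

# "Library" of previously learned weak spots.
library = [
    WeakSpotPattern("narrow-power-strap", width_um=0.5, spacing_um=0.2, current_ma=3.0),
]

# Candidate regions extracted from a layout (hypothetical measurements).
regions = [
    {"id": "R1", "width_um": 0.52, "spacing_um": 0.21, "current_ma": 2.9},
    {"id": "R2", "width_um": 1.20, "spacing_um": 0.60, "current_ma": 0.4},
]

for region in regions:
    for pattern in library:
        if pattern.matches(region["width_um"], region["spacing_um"], region["current_ma"]):
            print(f"{region['id']}: matches learned weak spot '{pattern.name}' -> flag for correction")
```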
The company released a video which explains the potential functions of the Epsilon tool here.
“The ability to analyse and address advanced IC’s reliability parameters is necessary to mitigate risk of system degradation, overheating, and possible malfunction. It can affect microchip’s performance, power consumption, data storage and retrieval, heat and an early failure which may be critical in vital electronic systems. Epsilon analyses a microchip data for reliability, power and electrothermal characteristics, and performs auto-correction in case violations found.
We are now implementing an intelligent technology for Epsilon with the goal of utilising pattern matching algorithms to formulate a smart detection of reliability issues within integrated circuits layout. The new techniques will analyse and learn weak spots within microchip’s data, predicting failure models that are based on the process’ physics and electrical constraints knowledge. It will take into consideration each device’s function, connectivity attributes, electrical currents information, electrothermal factors and more to determine problematic spots and perform auto-correction.
Particularly for FinFet and GAA FET (Gate All Around FET) technologies, a device’s functionality is developed with major reliability considerations ensuring power management efficiency, optimal thermal analysis aiming for long, reliable life span. Using smart pattern matching methods, we plan to improve reliability analysis, achieving consistency and accuracy across designs within advanced manufacturing processes.
As dimensions of processes shrink, IC’s layout features become much more complex to analyse for electrical phenomenon. To provide an intelligent answer for these complexities, we are implementing deep learning-based pattern matching technology with the goal of ensuring efficient, ‘green’ microchip’s power consumption, higher performance, optimised thermal distribution, and ultimately superior reliability” states Danny Rittman, the company’s CTO.
There is no guarantee that the company will be successful in researching, developing or implementing this system. In order to implement this concept, the company will need to raise adequate capital to support its research and, if researched and fully developed, the company would need to enter into a relationship with a third party that has experience in manufacturing, selling and distributing this product. There is no guarantee that the company will be successful in any or all of these critical steps.
Follow us and Comment on Twitter @TheEE_io
The US government, corporate and academic researchers are working on a network that would be able to configure itself, intelligently cache and route data, and allow for fast and reliable sharing of data, all while maintaining military-grade security.
The project is called Knowledge Based Networking and is under development by the US Defense Advanced Research Projects Agency (DARPA).
Academic concepts such as artificial intelligence and Tim Berners-Lee's "Semantic Web", combined with technologies such as the Mobile Ad-hoc Network (MANET), cognitive radio, and peer-to-peer networking, would provide the nuts and bolts of such a network. Although the project is intended for soldiers in the field, the resulting advances could trickle down to end users. "Military networks are going to converge as closely as we can to civil technologies," says Preston Marshall, the program manager of DARPA's Advanced Technology Office.
In the aftermath of the Watergate scandal in the 1970s, state and federal governments moved to become more open, and the Internet has made the achievement of transparency even easier.
Information is now available to the general public on government websites or from federal agencies in response to requests made under the Freedom of Information Act (FOIA). Many states have open records laws (like the Texas Public Information Act) requiring a similar provision of information on the state level.
Court Filings Are Public Documents
Lawyers file documents electronically in most federal courts using a system known as PACER, and in many state courts using a variety of systems. In Texas, for example, an eFiling System was instituted by the Texas Supreme Court Judicial Committee on Information Technology (of which I was the founding Chair, and continued in that role for 12 years).
As a result, the federal and state courts (and clerks of courts) maintain electronic copies of those pleadings. Under our open government, anyone can get a copy — if not filed under seal for confidential reasons — for some nominal fee, or for free. Interestingly, the author of the legal filing does not control access to the filing or the distribution of copies.
How Does the Copyright Act Protect Authors of Legal Documents?
The 1976 U.S. Copyright Act predates the advent of widespread use of the Internet and social media, and Internet-based copyright infringements. In 1998, Congress passed the Digital Millennium Copyright Act (DMCA), but most of the cases dealing with the DMCA are copyright infringements arising from YouTube, music, movies and the like.
Yet under the Copyright Act, the moment the author creates a work, it is deemed protected by copyright without any further action by the author. Unless the author assigns his or her rights to another, the author retains the copyright.
When lawyers draft pleadings and briefs, they clearly appear to be the authors under the Copyright Act. Merely filing the papers in court does not appear to deprive them of their copyright.
The public court system appears to have an implied right to distribute papers filed in lawsuits, related to those suits. Probably even the opposing parties in lawsuits have the right to make copies to provide as exhibits to papers filed in response, and to share with clients and expert witnesses.
The question is, do commercial businesses have the right to copy documents filed in court and share these documents for commercial gain? That question may soon be answered.
Class Action Lawsuit Against West and LexisNexis
Two lawyers filed a class action suit against West and LexisNexis for violating the copyrights of authors of court filings. The lawsuit filed in Federal Court in New York City on Feb. 22, 2012 by Edward White (of Oklahoma City) and Kenneth Elan (of New York) starts with a description of the case: "This is a copyright infringement action against West and LexisNexis based upon their unabashed wholesale copying of thousands of copyright-protected works, created by, and owned by, the attorneys and law firms who authored them." White and Elan also allege that "West and LexisNexis have engaged in wholesale unlawful copying of attorneys' copyrighted work, bundled those works into searchable databases, and sold access to those works in the form of digitized text and images for huge profits…."
Is this case appropriate for a Class Action?
The first step in this lawsuit will be for the Federal Court to establish whether White and Elan can actually claim a class of plaintiffs for this case. White and Elan seek court certification of a class that includes "…all attorneys and law firms … that authored works … that are contained in the Defendants' searchable databases."
If the U.S. District Court certifies the class, the lawsuit can proceed.
Can West and LexisNexis Rely on Their User Contracts?
The West (owned by Thomson Reuters) Terms of Service (ToS) grants its users a license to the content of materials, but there are limits found in West’s Public Records Privacy Statement that “All data in the WestlawNext, Westlaw Classic, and CLEAR (Consolidated Lead Evaluation and Reporting) public records databases are supplied by government agencies and reputable private suppliers.
Further, West defines Public Records, Nonpublic Information, and Publicly available information, and also has a Notice of Copyright and Trademarks that states that “MATERIALS IN THIS WEBSITE ARE PROVIDED ‘AS IS’ WITHOUT WARRANTY OF ANY KIND.” So in plain English, West users have no promise that West has the right to the content at all.
The LexisNexis (a division of Reed Elsevier) ToS are much like those of West. LexisNexis also has a separate Statement Regarding Copying, Downloading and Distribution Of Materials From The Lexisnexis Services, which includes specific provisions dealing with whether the “fair use” provision of the Copyright Act may apply.
But is indexing and making copies of such works actually “fair use”? Fair use applies when a copyrighted work is used for certain noncommercial purposes, such as in a research paper or in giving a lecture at a university.
Soon after White and Elan filed their lawsuit, Don Cruse, in his Supreme Court of Texas Blog, shared a thoughtful opinion on why he does not support White and Elan’s position on policy grounds, and he offered to opt out of a class action. He suggests this case probably will turn on the question of fair use.
This lawsuit, if fully adjudicated, may result in judicial or subsequent legislative redefining of public documents, at least as filed in courts.
Of course, there is a long way to go. If the court does not certify a class, each individual author will need to separately sue West and LexisNexis and other services, even if White and Elan win on their copyright infringement claims.
The ultra-cheap Linux computer on a circuit board has its roots in the classroom. But the bare-bones computer, dubbed “Raspberry Pi,” has potential to teach industrial embedded programmers some new tricks.
Raspberry Pi, a US$35 credit-card-sized computer sold without keyboard or monitor, runs several Linux distros and can hook up to a mouse, keyboard, HDTV and Ethernet. It went on worldwide sale last month and quickly sold out. It supports Python and Perl programming languages.
But the tiny computer will not replace its fully endowed hardware alternatives. And several design decisions could well limit its performance.
“It’s an advanced, innovative circuit board. It’s similar to a Beagle board maybe. It’s not unlike other Linux PCs. It’s good at some things. It’s a little slow at others,” Eben Upton, director of the Raspberry Pi Foundation, told LinuxInsider.
Flood of Interest
The Raspberry Pi computer was supposed to be a computer for kids. But its initial audience clearly is comprised of computer geeks, according to Upton.
So many of them tried to get their hands on the device that their buying efforts last month emptied shelves at two United Kingdom retailers. Online visitors trying to secure pre-orders caused a 300 percent increase in Web traffic and flooded servers at distributors Premier Farnell and RS Components with some 600 visits per second, Upton confirmed.
“The buyer demand got a little out of hand in meeting production supplies. We’re in the process of solving that. We’re focusing on ramping up volume,” Upton said.
The foundation has now added a Newark, N.J.-based distributor to handle U.S. sales, he explained.
School House Roots
In computing's infancy, students would show up for computer degrees with a knowledge of programming from their access to staples such as the Commodore 64 and the TRS-80, noted Upton. That is not the case today.
“We see kids coming into the university now who don’t know anything. We have to teach them programming from scratch,” he said.
The idea to develop the Raspberry Pi computer grew from a need at the University of Cambridge for a platform where kids could learn to program. Upton, who now works for Broadcomm, used to be on the computer sciences teaching staff at Cambridge.
“You have a three-year course at Cambridge but only 22 weeks of contact time a year. It’s really hard to get kids ready to work on computers,” he said.
The board seems to be excellent for education and prototyping. But most of the 15,000 members of an embedded Linux forum that he moderates on LinkedIn have never actually laid hands on one, noted William Weinberg, senior director at Olliance Services and principal analyst and consultant at Linux Pundit.
“It’s a great gateway device for embedded Linux beginners, at the right price, too,” Weinberg told LinuxInsider.
The Raspberry Pi is small in size but does not appear to fit any standard form factors. This could be limiting in some industrial applications, he said.
“As far as real-world applications go, it does have a very nice GPU and so could theoretically serve a range of low-end gaming, consumer device or even IVI applications,” though not on an industrial scale, Weinberg said.
More Gutsy Info
The Raspberry Pi computer board uses a Broadcom ARM11-based BCM2835 SoC chip. This chipset is not exactly new or cutting edge, but the configuration is representative of much real-world hardware, noted Weinberg.
Additionally, the ARM11-based chipset is old enough that some mainstream ARM Linux distros no longer support it, although embedded SDKs do, he said.
But the computer might fit a variety of consumer uses. Broadcomm positioned the 2835 chip as a multimedia applications processor, added Weinberg.
A Raspberry Pi computer can set the stage for a number of consumer uses that presently are not available for anything close to its price point. For example, it plugs into a horde of available peripherals consumers already have.
“Because you can plug it into to a crappy old television, it makes a snappy computer for someone in the developing world,” said Upton.
That serves a need. Many places around the globe have people with televisions who lack computers. So it is a good way of turning a television into a computer, he added.
More Pi, Please
“The RPi creates an entry-level embedded system that is in the same price range as an Arduino with an Ethernet shield,” wrote Paul Haas, software engineer at ProQuest, in response to a LinuxInsider query posted on Weinberg’s forum.
With the device’s USB host port, it can interface to a huge range of peripherals that are beyond the scope of an Arduino. These include USB cameras, networking, keyboards, mice, joysticks, game controllers, and more, Haas added.
The Raspberry Pi computer is a low-cost way to create an embedded system that supports all of the programming languages available for Linux and most of the supported USB devices, he explained.
“While I am sure that the RPi will displace some existing applications that would otherwise be done with more expensive systems, the big growth range will be in doing things that weren’t possible, or were too difficult, for the lower-end systems,” said Haas.
Tapping Into Needs
The Raspberry Pi Foundation’s computer creation can fill some industrial needs at a very low cost, Upton suggested. These include displays and automation.
“It’s got some potential use in digital signage, maybe. I can see people using it to drive a display. It’s got some use in maybe industrial automation,” he offered.
For example, there are some cases in which people are using PCs to aggregate. That applies to factories where they have to gather data and beam it back to some sort of central point.
“We’ve talked to some factories and see maybe some uses there,” said Upton.
The Raspberry Pi computer can fit in nicely as an embedded device at a price not available anywhere else. These uses can include security modules, GPS location mapping clients for OpenStreetMap and embedded servers/controllers, said Johan Dams in response to the forum query.
“Think about it. If you’re a startup and want to do something like this within a minimum amount of time, why not? Big enterprise might not, of course. But smaller companies who need an embedded ARM board with the specs of the Raspberry Pi are not going to build it themselves due to cost and time,” wrote Dams.
Other suggestions for using Raspberry Pi include digital signage, a thin client integrated with a screen, a UAV controller and a GPS Tracker, he added. Dams is director of software engineering at Genesi USA.
How successful the Raspberry Pi Foundation is in making its low-cost computer an educational necessity remains to be seen. At this point, commercial success through industrial sales is secondary.
“We have a theory that if you reduce the cost of competing to virtually zero, interesting things will happen,” Upton said.
“This is serious stuff for us. We are well into the first generation of our product. This is a production-quality device now. We’re going to take this all the way though,” he predicted.
Full Speed Ahead
The main goal for the company now is getting the focus with computers back into education.
The Foundation was not set up for making computers. Now it is setting up to teach kids to program. Making computers to do that is a necessary thing, according to Upton.
“For the next six months we are going to be very focused on meeting the dual educational purpose of getting the software together and getting the educational books together,” he concluded.
Meanwhile, the Raspberry Pi computer is a viable product that people can order online from anywhere.
The Internet consists of Internet Service Providers (ISPs) of all shapes and sizes. ISPs have two options when it comes to connecting to the Internet. They can either purchase IP transit from an upstream provider, or form a peering relationship with other ISPs. To achieve the best balance between cost and performance, ISPs very often use a combination of these two options.
In a peering relationship, ISPs exchange routing information and network traffic in order to provide access to their customers' networks. However, only customer prefixes are exchanged; prefixes received from the upstream provider are not advertised to the peers.
There is no charge for traffic exchanged between ISP peers, as they do not pay the upstream provider to interconnect their customers' networks. This is what we call settlement-free peering. The ISPs pay only for the port on the fabric at the public peering point (IXP) or, in the case of private peering, share the cost of the circuit. The volume of IP transit data is therefore reduced and hence the cost.
Unlike peering, IP transit is a paid service whose price is determined by bandwidth usage, which can be metered using the 95th traffic percentile method. The role of a transit provider, also called an upstream provider, is to connect a customer’s network or downstream ISP to the global Internet. To do this, the transit provider allows the customer traffic to pass through its network so that it can reach all possible destinations on the Internet.
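As a concrete illustration of the 95th percentile method mentioned above: the provider collects periodic traffic samples (commonly 5-minute averages over a billing month), discards the top 5 percent, and bills on the highest remaining value, so short bursts are not charged. Exact conventions vary by provider; this is a minimal sketch.

```python
def ninety_fifth_percentile(samples_mbps: list) -> float:
    """Return the 95th percentile of periodic traffic samples (e.g. 5-minute averages)."""
    ordered = sorted(samples_mbps)
    # Drop the top 5% of samples; bill on the highest remaining one.
    index = int(len(ordered) * 0.95) - 1
    return ordered[max(index, 0)]

# A month of 5-minute samples would be ~8640 values; a short list is used here for illustration.
samples = [40, 42, 45, 38, 41, 44, 300, 43, 39, 46, 41, 40, 42, 44, 43, 45, 39, 41, 40, 42]
print(f"billable rate: {ninety_fifth_percentile(samples)} Mbps")  # the 300 Mbps burst is ignored
```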
IP transit service is BGP-based, so customers who buy IP transit must operate their own Autonomous System (AS). Customers receive a full BGP Internet table that includes:
- prefixes of other customers of the upstream provider;
- prefixes advertised by ISP peers of the upstream provider;
- prefixes received by upstream provider from its upstream providers.
A transit provider advertises all customers’ prefixes to its peers and to all upstream providers.
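The export rules described above (often called valley-free or Gao-Rexford policies) can be summarised in a few lines. This sketch is illustrative only; real policies are expressed in router configuration, not application code.

```python
def should_advertise(learned_from: str, advertise_to: str) -> bool:
    """Decide whether a prefix learned from one neighbour type may be sent to another.

    learned_from / advertise_to: one of "customer", "peer", "upstream".
    """
    if learned_from == "customer":
        return True                      # customer routes are advertised to everyone
    # Routes learned from peers or upstream providers are only passed down to customers;
    # handing them to peers or upstreams would turn the ISP into free transit.
    return advertise_to == "customer"

for src in ("customer", "peer", "upstream"):
    for dst in ("customer", "peer", "upstream"):
        print(f"learned from {src:8s} -> advertise to {dst:8s}: {should_advertise(src, dst)}")
```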
Tier-1, Tier-2 and Tier-3 Service Providers
ISPs are organized into a hierarchical structure that consists of three tiers.
Tier-1 Service Providers
Tier 1 transit providers have a global reach and they are the backbone of the Internet. They do not buy transit service, and they peer with each other at zero cost. Tier-1 networks connect Tier-2 and Tier-3 (lower tiers) ISPs and they charge lower tier ISPs to allow traffic to transit their networks.
Tier-2 Service Providers
Tier 2 providers have large networks and a wide global presence. Tier 2 providers peer with each other to reduce costs associated with IP transit but they also need to buy IP transit from Tier 1 providers.
Tier-3 Service Providers
Tier-3 ISPs are local providers with national reach. They usually buy IP transit from Tier-2 providers to avoid expensive Tier-1 IP transit. Tier-3 providers are typically without any transit customers and have no peering connections.
The interconnection of Tier-1, 2 and 3 service providers is illustrated in Figure 1. The transit connection is indicated by a solid line, while the dotted line is used for peering. Traffic from the lower-tier ISP to a higher-tier provider is called going upstream. Similarly, traffic from the Internet and destined to the lower-tier ISP is called going downstream.
Figure 1 – Tier-1, Tier-2 and Tier-3 Internet Service Providers
Let’s discuss several network topologies that define how a customer is connected to an upstream provider.
Single-Homed Network Topology
The most straightforward design is single-homed, where the customer has a single connection to only one upstream provider (Figure 2). The ISP only announces a default route to the customer; BGP is not needed because there is only one exit path to the Internet. This is the most cost-effective solution with a simple routing policy. The disadvantages are obvious; if the link or router fails, the customer’s entire Internet connection will also fail.
Figure 2 – Single-Homed Design
Dual-Homed Network Topology
A network is dual homed if there is more than one connection to one upstream provider (Figure 3). A customer is protected against a link failure, but the device still represents a single point of failure.
Figure 3 – Dual-Homed Design
We can add another router on the ISP side and connect the customer’s router to the provider’s routers (Figure 4). The failure of one of the ISP routers will have no effect on the customer connection, but the customer-side device still represents a single point of failure.
Figure 4 – Dual-Homed Design with Two ISP Devices
Redundancy on the customer side can be further improved by adding another router to the topology (Figure 5).
Figure 5 – Dual-Homed Design with Redundant Devices
Single Multi-Homed Network Topology
We speak about multi-homed connections when the customer is connected to two different upstream providers (Figure 6). Unlike single-homed design, multi-homed topology provides the highest redundancy, reliability, and efficiency.
The customer is protected from the failure of an upstream provider. When a connection to one of the providers fails, traffic is sent over another link to the second upstream ISP within seconds.
Traffic from the Internet to the customer’s mission-critical applications is also secured because customer prefixes are advertised by at least one of the upstream providers.
The customer can configure custom BGP routing policies to manipulate BGP path attributes to prioritize one of the links for both outbound and inbound network traffic.
Figure 6 – Single Multi-Homed Design
We only have one router at the customer site, so we can improve the redundancy by adding a second router on the customer side.
Figure 7 – Single Multi-Homed with Redundant Devices
Dual Multi-Homed Network Topology
The redundancy of the single multi-homed design can be improved by adding additional links between a customer and ISPs. If one of the links fails, Internet connectivity through the same ISP is maintained using the backup link (Figure 8).
Figure 8 – Dual Multi-Homed Design
The design shown in Figure 9 provides the highest redundancy of links, customers and ISPs, but is also the most expensive solution.
Figure 9 – Dual Multi-Homed Design with Redundant Customer and ISP Devices
Understanding ISP interconnection ensures that organizations choose the most cost- and technically effective solution that meets their business needs before actually purchasing an IP transit service.
The Noction Intelligent Routing Platform helps service providers and enterprises that operate a multihomed network environment to improve BGP routing performance.
For more information about IRP, see the FAQ section or contact us. We are happy to answer any questions or deploy a test installation in your infrastructure to automate BGP routing and provide you with reports on your network performance, providers issues, outages, etc.
Boost BGP Performance
Automate BGP Routing optimization with Noction IRP
Very large-scale integration, or VLSI, is a process in which millions of MOS transistors are combined and integrated on a single semiconductor microchip. With global semiconductor revenue crossing USD 440 billion in 2020, there is an increasing need to design and produce highly efficient and specialized chips that can power new age technologies such as AI/ML, IoT, AR/VR and cloud, which are increasingly becoming mainstream instead of remaining niche technologies. Post-pandemic growth in consumer electronics, computing devices, smartphones, intelligent vehicles and similar products has further increased demand.
Depending upon the number of components (Transistors) to be integrated, ICs are categorized as SSI, MSI, LSI, VLSI, ULSI & GSI.
Small Scale Integration (SSI): 1–100 transistors fabricated on a single chip, e.g. gates, flip-flops.
Medium Scale Integration (MSI): 100–1,000 transistors integrated on a single chip, e.g. 4-bit microprocessors.
Large Scale Integration (LSI): 1,000–10,000 transistors integrated on a single chip, e.g. 8-bit microprocessors, RAM, ROM.
Very Large Scale Integration (VLSI): 10,000–1 million transistors on a single chip, e.g. 16- and 32-bit microprocessors.
Ultra Large Scale Integration (ULSI): 1 million–10 million transistors on a single chip, e.g. special purpose registers.
Giant Scale Integration (GSI): more than 10 million transistors on a single chip, e.g. embedded systems.
Before VLSI, ICs could perform only a limited number of functions, and electronic circuits had to incorporate the CPU, RAM, ROM and other peripherals as separate parts on a circuit board (PCBA). However, after this technology was introduced, millions of transistors and all of these functions could be embedded into a single microchip, thus enabling complex semiconductor and telecommunication technologies to be developed.
The advancement in electronics is largely due to VLSI technology and its rapid adoption.
What are the advantages of VLSI?
- Circuit sizes are reduced
- Improved performance and speed
- Effective cost reduced
- Requires less power and produces less heat
- Increased reliability
- Requires less space
Where is the VLSI technology used?
VLSI circuits are used everywhere, including microprocessors in a personal computer, chips in a graphic card, digital camera or camcorder, chips in a cell phone, embedded processors, and safety systems like anti-lock braking systems in an automobile, personal entertainment systems, medical electronic systems etc.
VLSI technology is well suited to the demands of today's electronic devices and systems. With the ever-increasing demand for smaller size, compactness, performance, reliability, and functionality, VLSI technology is expected to continue to drive electronics advancement. In addition, as India expands its electronics system design and manufacturing capabilities, the number of job opportunities in the VLSI design area is also expected to grow, making it a lucrative career option for many.
A malicious URL is a link created with the purpose of promoting scams, attacks, and fraud. By clicking on an infected URL, you may download malware or a trojan that can take control of your device, or you might be persuaded into providing personal information on a fake website, such as your username and password. Malicious URLs are often embedded in phishing attacks, tricking users into clicking on the link(s). Hackers use techniques like "typosquatting" to make malicious URLs look legitimate. For example, the URL rnicrosoft.com (an "r" followed by an "n" imitating the "m" in microsoft.com) can be used to trick users because it looks legitimate at a glance.
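A toy illustration of catching the kind of look-alike domain described above: it normalises a few common substitutions ("rn" for "m", "0" for "o", "1" for "l") and compares the result against a list of legitimate domains. Real detection systems use far richer features, so treat the names and rules here as assumptions for demonstration only.

```python
LEGITIMATE_DOMAINS = {"microsoft.com", "google.com", "paypal.com"}

# Common visual substitutions used in typosquatting.
SUBSTITUTIONS = [("rn", "m"), ("vv", "w"), ("0", "o"), ("1", "l"), ("3", "e")]

def normalize(domain):
    d = domain.lower()
    for fake, real in SUBSTITUTIONS:
        d = d.replace(fake, real)
    return d

def looks_like_typosquat(domain):
    """Flag domains that are not legitimate as-is but normalise to a legitimate one."""
    return domain.lower() not in LEGITIMATE_DOMAINS and normalize(domain) in LEGITIMATE_DOMAINS

for candidate in ["rnicrosoft.com", "paypa1.com", "microsoft.com", "example.org"]:
    print(f"{candidate}: {'suspicious' if looks_like_typosquat(candidate) else 'ok'}")
```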
Additional Reading: Smishing, The New Phishing
What does this mean for a Business Owner or Employee?
- Educate employees through an awareness training tool like CyberHoot
- Phish Test Employees to keep them on their toes
- Remove Administrative Access to the local workstations to limit the impact if a user clicks or accidentally tries to install malware on their machine.
- Implement strong passwords
- Unique 14+ character passwords/passphrases stored in a Password Manager
- Implement Two-Factor Authentication wherever possible
- Something you know (password), something you have (cell phone)
- Follow the 3-2-1 backup method for securing all your critical and sensitive data
- Govern employees with cybersecurity policies
- Purchase and train your employees on how to use a Password Manager.
Nothing you do will guarantee you cannot be compromised. However, doing these things proactively will act like the ounce of prevention Ben Franklin was fond of talking about with respect to fire prevention. It's worth a pound of cure during a fire (or a breach). Watch the video below for more details on these attacks.
Network Behavior Analysis (NBA), also known as “Behavior Monitoring” is the collection and analysis of internal network data to identify malicious or unusual activity. Behavioral monitoring tools analyze information from a wide range of sources and use machine learning to identify patterns that could suggest an attack is taking place. When NBAs are conducted over an extended period of time, behavior monitoring allows organizations to benchmark typical network behavior, helping identify deviations; anomalies identified can be escalated for further analysis. Network analysis tools provide valuable insight to help businesses defend against the latest cyber threats. NBA is especially good at spotting new malware and zero day vulnerabilities.
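As a simplified illustration of the benchmarking idea: collect a baseline of some per-host metric (here, outbound connections per hour), then flag new values that deviate strongly from that baseline. Production NBA tools model many metrics with machine learning; this z-score sketch only shows the shape of the approach, and the numbers are invented.

```python
from statistics import mean, stdev

def find_anomalies(baseline, recent, threshold=3.0):
    """Return values in `recent` more than `threshold` standard deviations from the baseline mean."""
    mu = mean(baseline)
    sigma = stdev(baseline) or 1.0
    return [v for v in recent if abs(v - mu) / sigma > threshold]

# Outbound connections per hour for one host: a learned baseline, then a new day's samples.
baseline_counts = [120, 135, 110, 128, 140, 125, 132, 118, 122, 130]
todays_counts = [125, 131, 119, 940, 127]   # 940 could indicate beaconing or data exfiltration

print(find_anomalies(baseline_counts, todays_counts))  # -> [940], escalate for further analysis
```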
Additional Reading: As Network Security Analysis Proves Invaluable, NDR Market Shifts
James Van Dyke had a hunch last year that the commonly held belief that the Internet was causing an increase in identity theft and credit card fraud was not valid. Extensive research he conducted debunks many of the myths about the correlation between online activity and ID theft.
Contrary to popular opinion, Van Dyke, a research analyst for Javelin Strategy and Research, found that using the Internet for bill paying and banking can reduce risk by up to 18 percent and potentially save consumers up to 60 hours of personal time and US$1,100 in the cost of paper checks and postage.
His report, “Online Banking and Bill Paying: New Protection from Identity Theft,” concludes that using the Internet can actually help protect consumers and businesses from two of the most common kinds of identity theft: fraudulent opening of new accounts and unauthorized use of existing accounts.
Study Follows Federal Crime Reports
Van Dyke told TechNewsWorld that his research shows a correlation to the crime figures cited in the 2003 Federal Trade Commission annual report and United States Postal Service report. Those reports say more than 10 million Americans were victims of identity theft in 2002, and this crime cost businesses more than $47 billion. Figures for 2003 are not yet available.
That amounts to a cost of $10,200 per victim for companies with such thefts and $1,180 per affected individual, Van Dyke said.
The popular view is that expanded use of the Internet by consumers is the chief cause of these growing crime figures. But Van Dyke said using the Internet for banking and paying bills actually reduces the threat of identity theft and banking fraud.
“That’s because criminals get their information from traditional sources, such as low-tech, offline services,” he said. “If consumers did more of their transactions online, they would actually reduce their risk of identity theft.”
Van Dyke offers two examples from FTC statistics that support his view that doing business online is significantly safer. First, 14 percent of all new bank account cases resulting in fraud are traced to theft of paper from in front of victims’ homes. Second, 5 percent of all identity theft could be reduced if paper billing were eliminated. He said paper billing creates a cost of $2.37 billion.
Prevention Strategies Needed
The average household receives 20 paper statements and bills per month, according to postal authorities. Criminals search through easily accessible trash and private mailboxes for bank and credit card information.
So a prime prevention strategy is for consumers to turn off the steady stream of paper billing and account summaries from vendors and banks they use. Many security advocates preach to consumers the need to switch to online billing whenever possible. When bills are provided online, vendors usually allow customers to pay those bills online as well.
The problem develops when a vendor provides an online bill but does not let consumers turn off the monthly mailing of the statement. Shredding paper bills and credit card statements certainly reduces the threat of identity theft. But shredding doesn’t prevent the possibility of document theft before consumers get their mail.
“By the time feeding paper into a shredder can happen, it is often too late. Vendors have to provide prevention upstream,” Van Dyke told TechNewsWorld.
Detection Methods Effective
Detecting credit card and banking fraud goes hand-in-hand with preventing identity theft. According to financial-security experts, criminals turn to credit cards as their first method of finding victims.
It can take consumers between six and 36 days to view a mailed monthly account statement. That time delay drives a criminal’s theft success. As security analysts put it, time is money to a criminal.
“There is a clear correlation between the time lag in seeing account statements and the detection of theft or unauthorized use,” said Van Dyke.
The bottom line, he said, is that the Internet can help the consumer shut down abuses very quickly.
Online Banking on the Rise
The most significant progress toward reducing consumer identity theft can be made by turning to online banking, as well as viewing and paying bills online. The Javelin report concluded that consumers who view online accounts and pay bills online are nearly four times more likely to actively monitor their vendor activity than those who wait for paper bills and monthly statements. That consumer-level watchfulness can be more effective in protecting against account fraud and identity theft than the millions of dollars businesses spend on fraud-monitoring technology.
The report also credits consumers with catching unauthorized account activity in more than 50 percent of all cases. This earlier detection can reduce consumer identity theft by 18 percent, according to experts.
Other industry watchers agree that online banking and bill payment are gaining strong footholds among consumers. A report issued in the first quarter of 2003 by the Yankee Group showed that Internet and credit card services get the highest use by consumers who review and pay their bills online.
“Since users of Internet service are already online, reviewing bills is a natural extension of the service. Additionally, credit card and long-distance service providers have been pushing electronic bill-paying and presentment (EBPP) longer than service providers in other verticals,” wrote Lisa Cebollero, a Yankee Group billing and payment application strategies analyst, in the Yankee Group report. She added that existing Internet use has resulted in more visibility of online bill presentment and bill-paying options in other Internet service areas.
Javelin's Van Dyke said he is sure that careful use of the Internet by consumers will continue to reduce incidents of identity theft and transaction fraud. He added that many more people than ever before — with less education and technical savvy — are safely using the Internet to handle their banking and bill-paying tasks, and he expects to see that trend continue.
Research released last week fingered the iPhone as the source of a text messaging exploit that could be used to steal sensitive information from smartphone users or work mischief on their hardware.
The flaw, revealed by a well-known security researcher and jailbreaker of iPhones, involves the “reply to” line in SMS messages.
In its analysis of the SMS flaw, AdaptiveMobile, a mobile security company, found that Android, Windows Mobile, BlackBerry and Symbian phones either ignore the “reply address” field or display both the originating and the reply addresses in the message. In all cases, it isn’t possible to automatically reply to a message using “reply to.”
The iPhone displays only the “reply to” address. So a text message can be sent from one address but appear to be sent from another.
Most handsets now ignore “reply to,” but Apple has left a significant vulnerability in its handsets which could allow consumers to be fooled and hand over personal details to hackers and criminals, AdaptiveMobile’s researchers maintained.
“It’s quite unusual for the iPhone to react in this manner,” Cathal McDaid, head of security for AdaptiveMobile, told TechNewsWorld. “This makes spoofing much easier.”
Apple's response to the situation was to advise its customers to use its texting service, iMessage. When using iMessage instead of SMS, addresses are verified, which protects against spoofing attacks, it explained.
“That defeats the purpose of having a mobile phone,” declared McDaid. “It’s to communicate with other people, not just other people with iPhones.”
Defending Against DDoS Attacks
Distributed denial of service attacks are a popular weapon used by hacker groups to cripple the websites of those they dislike. While the DDoS attacks that typically grab headlines are launched by hacktivist groups like Anonymous against corporate and government sites, nonprofits and human rights sites are often targets, too. That’s why the Electronic Frontier Foundation (EFF) released last week a guide to “Keeping Your Site Alive” in the face of a DDoS assault.
The idea for the free guide occurred to Jillian York, the EFF’s Director for International Freedom of Expression, while she was working on a study of the effects of DDoS attacks on human rights and independent media websites. Realizing how vulnerable those sites were, she began thinking about ways to help those organizations fend off such attacks.
“It’s not easy to prevent the attacks, but there are things people can do to protect their information so they have it after an attack,” she told TechNewsWorld. “Backing up and mirroring are simple techniques that anyone can do to protect their data.”
While the targets of DDoS attacks can be varied, sites dealing with contemporary events are often a target. “Right now, Syrian opposition websites get attacked all the time,” she said.
If the unfortunate attack on journalist Mat Honan’s digital life earlier this month revealed anything about the times we live in, it’s the cavalier attitude toward data taken by many consumers. They just don’t know what their data is worth until it’s gone, according to Stewart Irvine, CEO of Imogo Mobile Technologies, a provider of secure mobile cloud services.
“When I ask people, ‘What’s your data worth to you?’ I usually get a blank expression,” he told TechNewsWorld. “Then I ask them, ‘What would happen if your smartphone was lost or stolen? A look of shock and dismay comes on their face, and they say, ‘That would be disastrous.'”
Honan acknowledged many security sins in a piece he wrote about his experience, not the least of which was failure to backup the data of his digital life. The journalist isn’t alone in that boat. “People don’t backup their data because they just don’t take their data that seriously,” Irvine said. “They think, ‘It’ll never happen to me.'”
Mat Honan probably thought that, too.
- Aug. 17: Air Force officials at Wright-Patterson Medical Center in Ohio alerted 3,800 individuals of possible data breach when a notebook containing their names and Social Security numbers was temporarily misplaced after a blood drive. There is no evidence that the information was misused, and it was misplaced for only a short amount of time, the officials said.
- Aug. 17: The University of Texas M.D. Anderson Cancer Center revealed that health information for 2,200 patients was missing after a student lost a USB thumb drive with the data on it while traveling on a shuttle bus. No Social Security or financial information was on the device, the university said.
- Aug. 19: The UK newspaper The Telegraph reported that sensitive information of 1,367 school children was snatched by hackers from education evaluation firm Gabbitas and posted to the Internet. Information included details about personalities, strengths, weaknesses, illnesses and learning difficulties.
- Aug. 20: Chipmaker AMD took down its blog site after weekend attack in which a group called “r00tBeerSec” defaced a Web page and robbed the SQL file used to manage the site. The site is now back online.
- Aug. 21: Colorado State University in Pueblo notified 19,000 students and applicants that their personal information may have been exposed when several students accidentally gained access to the information. The students immediately alerted the university to the breach and no records were changed or stolen, according to the letter.
- Aug. 21: A letter to customers from Bellacor posted to a California breach reporting site stated an unauthorized third-party gained access to some temporary files on the company’s website containing customer name, address, phone number and encrypted credit card information. The company did not reveal how many customers were affected, but the breach could affect anyone doing business with the firm from June 7 to July 26. The company noted that it had no evidence that any data had been compromised but was alerting its customers as a precautionary measure.
- Aug. 22: The University of South Carolina began notifying 34,000 people associated with its College of Education that their personal information may have been compromised in a computer intrusion that occurred three months ago. Information exposed includes names, addresses and Social Security numbers of students, staff and researchers associated with college since 2005. In the last six years, six data breaches have been reported at the university compromising 81,000 records belonging to students and employees at the school.
- Aug. 29: Update Your Software or Die. 2 p.m. ET. Webcast. Sponsored by Qualys. Free.
- Aug. 30: Business Beyond the Perimeter: Endpoint Security in the Cloud Era. 10 a.m. PT. Webcast sponsored by GFI Software. Free with registration.
- Sept. 12-14: UNITED (Using New Ideas to Empower Defenders) Security Summit. Grand Hyatt, San Francisco. Registration: $1,395.
- Sept. 27: Foundational Cyberwarfare (Plan X) Proposer’s Day Workshop. 9 am — 4 pm ET. DARPA Conference Center, 675 N. Randolph Street, Arlington, Va. Closed to media and public. Unclassified session in the morning. U.S. DoD Secret clearance needed to attend afternoon session.
- Oct. 9-11: Crypto Commons. Hilton London Metropole, UK. Early bird price (by Aug. 10): Pounds 800, plus VAT. Discount registration (by Sept. 12): Pounds 900. Standard registration: Pounds 1,025.
- Oct. 16-18: ACM Conference on Computer and Communications Security. Sheraton Raleigh Hotel, Raleigh, N.C.
- Oct. 18: Suits and Spooks Conference: Offensive Tactics Against Critical Infrastructure. Larz Anderson Auto Museum, Brookline, Mass. Attendance Cap: 130. Registration: Super Early Bird, $195 (by Aug. 18); Early Bird, $295 (by Sept. 18); Standard, $395 (by Oct. 17).
- Oct. 25-31: Hacker Halted Conference 2012. Miami, Fla. Sponsored by EC-Council. Registration: $2,799-$3,599.
In the last 30 years, many countries have introduced legislation to ensure patient record confidentiality. One notable initiative was the Privacy Rule portion of HIPAA (the Health Insurance Portability and Accountability Act), enacted in the United States in 1996. The security provision of HIPAA demands that healthcare providers take reasonable care to protect the confidentiality of protected health information (also known as PHI).
IT professionals in the healthcare industry have found HIPAA compliance to be an ongoing challenge, as they have to figure out how to securely authenticate, transmit and store confidential medical documents and patient data. In fact, an entire industry has grown up around products and services designed to help organizations meet the HIPAA data protection requirements. This plethora of rules and regulations might lead the public to believe that their medical secrets are safe, but the sheer amount of data makes security a daunting task.
There are well over one billion healthcare visits per year in the US and each healthcare interaction generates data about patients that is used, shared and analyzed. Effective healthcare requires this data to be routinely shared among general practitioners, specialists, clinics, pharmacists, hospitals, health insurers, governmental agencies and others. These one-billion-plus healthcare visits result in an estimated 30 billion healthcare transactions per year. 1 Conservative estimates say half of these transactions are fax-based.2
It was once thought that standalone fax machines would be replaced by email messaging. But email can't always be guaranteed to be as secure a form of communication as faxing. For example, an email message and its content might be archived on any number of servers. Email transmission of information also runs into problems with compliance agencies and regulations, such as HIPAA, that require greater security. Unlike email, a fax cannot be used to deliver a virus, launch a phishing attack or otherwise harm a company's network security.
It’s estimated that there are about 125 million fax machines in use in the world today, and close to six million new purchases each year.3 According to a 2012 survey, 85 per cent of U.S. businesses make use of faxing in some form.4
There are three main reasons why faxing is still important to organizations:
To obtain a phone line and a fax machine is still the simplest and least technical way for a healthcare provider to begin communicating with the outside world.
Many companies, especially those in the healthcare, legal and insurance space, are required to transmit medical documents and patient data via fax because of compliance concerns.
Companies are maintaining legacy applications, such as purchasing and billing systems, which are only able to transmit a document via fax.
Because faxing will be around for the foreseeable future, health care providers are looking for ways to securely transmit protected health information (PHI) via fax. Unfortunately, creating HIPAA-compliant faxes with a traditional fax machine can be a cumbersome process.
Faxing is explicitly named in the HIPAA code as an acceptable method to transmit medical records, test results and other healthcare information and instructions.5 Its Privacy Rule allows health care providers to transmit confidential information as long as they use “reasonable safeguards.” While the definition of a “reasonable safeguard” can unfortunately vary, one certainty is that transmitting a HIPAA compliant fax is difficult using a traditional fax machine.
When using a traditional fax machine, providers must be extremely cautious and establish strict faxing protocols to avoid a security breach. Simply keying in one wrong digit on a fax machine could send protected health information (PHI) to an unintended destination. The HIPAA Journal reported that seven doctors' offices in Texas accidentally faxed PHI to the wrong fax number.6 Names, medical histories, medical results and other types of PHI were sent to a local radio station. One of the highest compliance fines assessed was for HIPAA violations: New York-Presbyterian Hospital and Columbia University paid $4.8 million.7
HIPAA guidelines suggest confirming unknown fax numbers before sending, though this may be difficult for larger healthcare institutions that have hundreds of individual fax machines in use.
Limits vary by jurisdiction, but a common requirement is to hold patient treatment information, such as medical results, for seven to ten years. The actual time may even be longer. An institution may need to keep records of a minor until the patient reaches the age of majority for the jurisdiction.
These legal retention requirements are challenging for paper-based records such as faxes. Printed patient files can take up considerable space. They may be lost due to theft or disasters (such as fire). Printed ink pages can degrade within the legal archiving time requirement. Additionally, searching for information is time-consuming if done manually. An institution also runs the risk of faxes not being attached to a patient’s record when required to produce proof of information.
Some PHI safeguards for traditional fax machines include:
Confirm the fax number with the intended recipient when faxing PHI to a telephone number that is not regularly used.
Call the recipient to make sure their fax machine is not in a public area and is in a protected location.
If you know you will be receiving PHI via fax, ask the person faxing you to give you advanced notice so that you will be around to immediately remove the pages from the fax machine.
Pre-program frequently used numbers directly into the fax machine to avoid misdialing.
When faxing PHI, don’t leave the fax machine until the transmission is complete.
Use printed cover sheet pages with the approved HIPAA statement for all PHI faxes.
Include a confidentiality statement on fax cover pages when the fax includes PHI.
Keep an accurate audit trail of every fax involving PHI to avoid fines for non-compliance.
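A minimal sketch of the kind of audit record the last point calls for, written as an append-only CSV log. The field names are illustrative assumptions; an organisation's compliance officer would define the actual required fields, storage location and retention period.

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("phi_fax_audit_log.csv")
FIELDS = ["timestamp_utc", "sender", "recipient_fax", "patient_ref", "pages", "status"]

def log_phi_fax(sender, recipient_fax, patient_ref, pages, status):
    """Append one audit entry per PHI fax sent or received."""
    new_file = not AUDIT_LOG.exists()
    with AUDIT_LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "timestamp_utc": datetime.now(timezone.utc).isoformat(),
            "sender": sender,
            "recipient_fax": recipient_fax,
            "patient_ref": patient_ref,   # use an internal reference, not raw PHI
            "pages": pages,
            "status": status,             # e.g. "sent", "failed", "confirmed received"
        })

log_phi_fax("front-desk-2", "+1-555-0142", "MRN-48213", pages=3, status="sent")
```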
Working with traditional fax machines to produce HIPAA compliant faxes adds a burden to an already heavy workload for frontline staff. Because of this, many health care providers are turning to web-based electronic faxing – using faxing software and network fax servers – to better ensure HIPAA compliant faxing.
Network faxing is designed to work with existing systems and use an organization’s existing network. It needs no dedicated phone line or fax machine. It needs no paper, no ink and no human monitoring. Network faxing enables staff to fax from Electronic Healthcare Record (EHR) applications, Project Management (PM) software, their desktop, from office applications by email, a Customer Relationship Management (CRM) platform and many other applications.
Network faxing eliminates many of the issues that traditional fax machines have in creating HIPAA compliant faxes:
Faxes are received electronically, eliminating the problem of printed pages sitting on a fax machine for anyone to read.
The process of manual phone dialing is removed, so the risk of sending a fax with sensitive information to the wrong fax number is greatly reduced.
Cover sheets with the approved HIPAA statement for all PHI faxes can be automatically programmed into an electronic fax.
No longer do faxes have to be scanned before being entered in an EHR application.
Staff efficiency is increased, since no one has to scan documents or stand by to monitor the faxing process.
Medical practices that use network faxing are reporting efficiency savings of up to 80 percent.8
Network faxing software can catalog, index and archive faxes automatically.
The risk of losing or misfiling a fax is dramatically reduced.
Network faxing, along with electronic archiving, enables easier tracking and retrieval of past faxes – creating an accurate audit trail of every fax involving PHI.
Medical providers can search their archive database to know who received communications and when.
Faxes are stored more securely.
Some network faxing software can even monitor all types of communications and block information from being sent when doing so would violate regulations or hospital policies.
GFI FaxMaker is a network fax server software that enables email to fax and fax to email for Exchange and other SMTP servers in a secure, encrypted environment.
Faxing protocols make it nearly impossible to intercept a fax in mid-transmission – making it more secure than email. Electronic faxing with GFI FaxMaker makes it easy to access this more secure protocol.
An organization can install the GFI FaxMaker fax service as a physical, on-premises service with a standard fax modem; as virtual Fax over IP (FoIP) through a gateway or VoIP phone system; or as hybrid faxing with no on-site equipment, integrated with a cloud-based faxing system.
GFI FaxMaker is not only popular in the healthcare industry because it acts as a HIPAA compliant fax service, but also because of its ease of use:
Users can sign in to the GFI FaxMaker web client, fill in fax content on-screen, add attachments and simply click send.
GFI FaxMaker allows users to fax directly through an email application. Simply start to compose an email and, in the "To:" box, enter a fax number with "@faxmaker.com" at the end. Fill out the subject line, add body content and attachments, and send (a minimal sketch of this workflow follows this list).
Incoming faxes pass through an OCR (optical character recognition) module that makes it possible to search in the fax body. This feature is useful when older faxes have to be retrieved.
It provides features such as API, SMS alerts and digital signatures.
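Below is a minimal sketch of the email-to-fax workflow described above, written in Python. The "fax number @faxmaker.com" addressing convention comes from this article; the SMTP host, sender address, fax number and file name are placeholder assumptions rather than real values, and a production deployment would also need authentication, encryption and an audit trail.

```python
import smtplib
from email.message import EmailMessage

# Placeholder values for illustration only -- substitute your own SMTP host,
# sender address and destination fax number.
SMTP_HOST = "mail.example-hospital.org"
SENDER = "records@example-hospital.org"
FAX_NUMBER = "15551234567"

msg = EmailMessage()
msg["From"] = SENDER
# Per the article, the destination is the fax number followed by @faxmaker.com.
msg["To"] = f"{FAX_NUMBER}@faxmaker.com"
msg["Subject"] = "Lab results - CONFIDENTIAL"
msg.set_content(
    "This fax contains protected health information (PHI). "
    "If you are not the intended recipient, please notify the sender."
)

# Attach the document to be faxed (a PDF in this sketch).
with open("lab_results.pdf", "rb") as f:
    msg.add_attachment(f.read(), maintype="application",
                       subtype="pdf", filename="lab_results.pdf")

with smtplib.SMTP(SMTP_HOST) as smtp:
    smtp.send_message(msg)
```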
A companion to GFI FaxMaker is GFI Archiver. Healthcare facilities have to employ fast, safe and efficient storage software for faxes and other PHI records. Archiving can all be done with GFI Archiver. The system allows for intelligent reporting, and it is already configured to run reports that comply with HIPAA and other record confidentiality mandates.
GFI FaxMaker trial
Try GFI FaxMaker fax service free for 30 days with access to all GFI FaxMaker features and customer support.
Faxing efficiency through automation
See why in many countries, faxing is still the only way of sending compliant documents electronically.
Faxing in the healthcare industry
Watch this quick video to find out more about faxing in the healthcare industry.
Integrated network faxing key to improved productivity and information security
Download this white paper and discover how network faxing reduces labor costs and increases information security.
Integrated network faxing key to improved productivity and information security. GFI white paper. 2011.
Survey: 85% of US Businesses Rely on Fax Technology. David Kelleher blog - November 8, 2012.
Does the HIPAA Privacy Rule permit …. hhs.gov Q&A - November 3, 2003.
Faxing Error Sees PHI Sent to Local Media Outlet. HIPAA Journal - Feb 16, 2017.
New York-Presbyterian, Columbia to pay largest HIPAA settlement: $4.8 million. Modern Healthcare article - May 07, 2014. | <urn:uuid:63dba5c1-5bc3-438f-993e-0284104f3019> | CC-MAIN-2022-40 | https://www.gfi.com/company/blog/hipaa-compliant-fax | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00395.warc.gz | en | 0.920703 | 2,227 | 3.171875 | 3 |
Providing employees with the right level of access is a significant challenge even in today’s digital world. When you give too many people access to critical information, you are putting your data security at risk. However, you also need to delegate work to the lowest level in your organization to ensure higher productivity. You can achieve that with the help of Active Directory.
In this blog, we’ll explore all the critical aspects of Active Directory Management and how you can use it to manage user access.
What is Active Directory?
Active Directory is a directory service offered by Microsoft for organizations running Windows Server. It is the de facto directory system, used in over 90% of enterprises today as a user identity repository.
The core function of Active Directory is to help organizations manage user access and permissions for network resources. For instance, when a user logs in to a network, Active Directory validates the username and password provided against the information in its directory before authenticating the user and allowing entry to the network.
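As a rough illustration of both steps (the credential validation described above and the group-based permission check covered in the next paragraph), here is a short sketch using the open-source ldap3 Python library. The domain controller address, domain name, base DN and group DN are placeholders rather than real values, and a real deployment would add TLS and error handling.

```python
# Sketch only: validate a user's credentials against a domain controller and
# check group membership (a proxy for permissions) with the ldap3 library.
from ldap3 import Server, Connection, ALL

DC = "ldap://dc01.corp.example.com"          # placeholder domain controller
BASE_DN = "dc=corp,dc=example,dc=com"        # placeholder directory base

def authenticate(username: str, password: str) -> bool:
    """Return True if the domain controller accepts the credentials."""
    server = Server(DC, get_info=ALL)
    conn = Connection(server, user=f"CORP\\{username}", password=password)
    ok = conn.bind()      # the directory validates the username/password pair
    conn.unbind()
    return ok

def is_member_of(admin_conn: Connection, username: str, group_dn: str) -> bool:
    """Return True if the user belongs to the group that grants access."""
    admin_conn.search(
        BASE_DN,
        f"(&(objectClass=user)(sAMAccountName={username}))",
        attributes=["memberOf"],
    )
    if not admin_conn.entries:
        return False
    return group_dn in admin_conn.entries[0].memberOf.values
```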
Another key function of this directory is to restrict user access based on the level of permissions they have. A network may have hundreds of user accounts. When these users request access to critical information, approval is provided only if these users have the right permissions to access resources. | <urn:uuid:832904b7-0b47-458f-bdbb-e8707cb57b83> | CC-MAIN-2022-40 | https://www.mspinsights.com/doc/a-guide-to-active-directory-active-directory-management-0001 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00395.warc.gz | en | 0.925773 | 268 | 2.859375 | 3 |
Saving Water Through Smart Ag
Whether it’s Earth Day or any day, people wonder how we can create more sustainable systems that minimize environmental impacts like water use and greenhouse gas emissions. At AT&T, we wonder the same things. And we’re connecting our technology with customers in ways that meet these goals – often in ways you might not expect.
Smart agriculture is a good example. Every year U.S. farmers use 2.5 million acres of land to grow 6 million metric tons1 of rice, a crop that many of us take for granted. Rice farmers help feed people around the world while contributing to economic growth here in the U.S. At the same time, rice farming is water-intensive, currently using up to 40% of the world’s irrigated water every year.2 And rice farming generates a large amount of methane, a greenhouse gas that is 28 to 36x as potent as CO2 in contributing to climate change.3
We have been exploring how to use our connectivity to enable solutions that address issues such as these. In one case, we teamed up with PrecisionKing to look at how our connectivity, together with PrecisionKing's technology, could help save water and reduce greenhouse gas emissions involved in rice farming.
Here’s how it works. PrecisionKing’s RiceKing sensors are placed across rice fields, where they read water levels once an hour. AT&T wireless connections enable the transmission of water-level data to a management system that automatically signals connected pumps to turn on and off as needed. This reduces water use and prevents flooding or excessive drying — all without requiring anyone to be in the field. Managing water levels also helps reduce greenhouse gas emissions by reducing pump energy usage and limiting the methane gas that is released from rotting materials created when too much water is used in the farming process.
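For illustration, the control loop just described can be sketched in a few lines of Python. This is not PrecisionKing's or AT&T's actual software; the threshold values and device functions below are assumptions made only to show the idea of hourly sensor readings driving on/off pump commands.

```python
# Illustrative sketch of a threshold-based water-level control loop.
import time

TARGET_MIN_CM = 5.0    # below this the field is drying out -> turn the pump on
TARGET_MAX_CM = 10.0   # above this water and pump energy are wasted -> turn it off

def read_water_level_cm() -> float:
    """Placeholder for a reading transmitted by a connected field sensor."""
    raise NotImplementedError

def set_pump(running: bool) -> None:
    """Placeholder for the command sent to a connected pump controller."""
    raise NotImplementedError

def control_loop() -> None:
    while True:
        level = read_water_level_cm()
        if level < TARGET_MIN_CM:
            set_pump(True)
        elif level > TARGET_MAX_CM:
            set_pump(False)
        time.sleep(3600)   # the article describes one reading per hour
```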
This technology is getting results. In Arkansas, instead of gauging water by eye, Jim and Sam Whitaker use PrecisionKing technology and AT&T IoT connectivity. According to data Jim collects, connected RiceKing water-level sensors have reduced Whitaker Farms’ water usage by up to 60%, while the connected PumpKing controls have reduced pump energy usage by 20-30%.
Tackling the impacts of activities like agriculture is just one more step forward to a more sustainable world.
Our work on smart agriculture is just one way our company is helping to create a better, more sustainable world.
We’re working companywide to make our network, fleet and operations more efficient and we’ve set a goal to enable carbon savings 10x the footprint of our operations by the end of 2025. We’re also committing our resources and expertise in the Internet of Things (IoT) to help make cities cleaner, safer and stronger through smart cities. And, we’re helping our customers use IoT for Good, by connecting everything from trucks, to farm equipment, to city infrastructure and more.
More information on AT&T’s sustainability programs is available at about.att.com/csr. | <urn:uuid:fd390531-7c81-4bd3-93a9-2fd0c8fb2d1d> | CC-MAIN-2022-40 | https://about.att.com/newsroom/saving_water_through_smart_ag.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00395.warc.gz | en | 0.912892 | 708 | 2.890625 | 3 |
By Justin Silverman
Before we try to explain how automation and artificial intelligence can integrate productively, it might help to define their differences. Many people no doubt confuse the two, and that isn’t helped by the way the media often conflates the two.
First off, “automation” involves the application of technologies for carrying out processes with minimal human intervention. Robotics and software are forms of automation, but they don’t necessarily include AI.
Artificial intelligence is the simulation of human intelligence by machines. Some see “artificial intelligence” as a monolith, but it’s really a catch-all term for several different capabilities.
Artificial Narrow Intelligence (ANI), for instance, is highly specialized, like a chess program that can beat a human being but will never be able to operate a light switch. There are good examples of ANI now available for commercial use in natural language processing and machine learning. Artificial General Intelligence (AGI) is "strong" AI, like the Blue Brain project (run on IBM supercomputers), which simulated, in a still limited way, human problem-solving and learning processes. Artificial Superintelligence (ASI) is the Ultron or HAL 9000-level stuff of movie nightmares, which doesn't yet exist, and no one yet knows if it will.
The benefits of teaming the two
When automation and artificial intelligence come together in present-day usage, there are serious benefits to be had. So let’s examine some of the ways they complement each other.
1. Like we said: AI is a form of automation, but some types of automation are entirely devoid of AI. Workflow automation, for instance, can fill in documents and make recommendations without AI. But when AI is added to a workflow solution, a human contributor or gatekeeper can be taken out of the loop, so that the AI-empowered step or steps complete with virtually no delay.
2. Non-AI software can automate tasks where it’s highly certain what a human would do or should have done, like forwarding a document for a required review. It does this via conditional logic: the structured data captured in a certain field will dictate what the software does next. By adding AI, though, automation can address more complex situations where unstructured data—which makes up 80-90 percent of the data in most organizations—is involved. An AI risk management platform, for instance, will be able to analyze unstructured data to recognize risks and then recommend a mitigation action. Non-AI software would not have been able to do this, or would have needed an enormous number of fields to be filled out by human users.
3. AI can be self-training, thanks to machine learning. By analyzing unstructured data and through repetition of processes, it can hone its ability and efficiency, which in turn optimizes the automated processes it’s powering.
4. Natural language processing (NLP) is another facet of AI that can benefit automated systems. An example would be sentiment analysis of NPS responses, where an AI tool reads those responses, identifies potential issues, and extracts insights you can leverage. A platform like Qandai, for instance, automates the process of reviewing sales calls in order to call out insights from those calls (a small illustration of automated sentiment scoring follows this list).
5. Moreover, a trained AI tool can monitor data in real time to detect risks or undesirable trends far faster than a dashboard that relies on user engagement. By alerting users to potential problems or even triggering response automations, these risks can be mitigated far more proactively.
6. AI and automation are most effective when used to target problems or take on tasks that are high-volume and of low-to-medium complexity: work that is highly time-consuming and tedious, but where there is high risk if a human isn't careful; unfortunately, "tedious" usually entails a higher chance of human error. High-volume tasks also mean there is more historical data available to train the AI. One example where this targeted approach to AI-embedded automation offers significant value for an organization is in software for reviewing complex legal contracts, where risky clauses or language can be red-flagged for attention.
7. AI and software-based automation work best when they’re developed iteratively, meaning software architects and designers should work with historical data and subject matter experts within an organization to develop initial versions for rollout, possibly only to a limited area of your operations. Once you’ve put these pilot versions into use, you’re able to make iterative updates to AI models or software to drive steady improvements and broader implementation.
8. Properly implemented, automation and AI can drive quick time-to-value and ROI, and ladder up to even more improvement. For instance, you might deploy a no-code workflow automation solution to quickly create, streamline and accelerate business processes, then use targeted AI to remove even more human intervention in the processes you’ve automated.
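As a small, concrete illustration of point 4, the snippet below scores free-text survey responses with an off-the-shelf sentiment model (NLTK's VADER). This is not the tooling named in the article, just one freely available option; the example responses and the -0.3 flagging threshold are arbitrary assumptions.

```python
# Score survey responses and flag clearly negative ones for human follow-up.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)   # one-time lexicon download
analyzer = SentimentIntensityAnalyzer()

responses = [
    "Support resolved my issue in minutes, really impressed.",
    "The onboarding process was confusing and far too slow.",
]

for text in responses:
    scores = analyzer.polarity_scores(text)
    # 'compound' runs from -1 (very negative) to +1 (very positive).
    if scores["compound"] <= -0.3:
        print(f"FLAG FOR REVIEW: {text!r} ({scores['compound']:+.2f})")
    else:
        print(f"OK: {text!r} ({scores['compound']:+.2f})")
```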
To sum up: Automation software and artificial intelligence are highly complementary, each bringing different strengths to bear on business challenges. By taking advantage of their respective strengths, dramatic improvements in cycle time, efficiency, and quality are within reach.
About the author
Justin Silverman is responsible for leading the Mitratech product management team to drive new innovation, strong platform integration, and streamlined user experiences that bring differentiated value to Mitratech’s customers. He brings over 15 years of product management and strategy experience, including many years in legal technology. | <urn:uuid:d62d9b4e-35d9-4fef-9955-7541baf72110> | CC-MAIN-2022-40 | https://bdtechtalks.com/2021/11/14/artificial-intelligence-automation/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00395.warc.gz | en | 0.924978 | 1,158 | 3.15625 | 3 |
IBM’s chip researchers have been in anything but a vacuum lately.
They have been busy developing a special polymer that can self-assemble, putting an insulator around wires at the nano-scale level and allowing the trend for smaller/faster/cooler chips to continue.
IBM modeled the concept after self-assembly seen in nature, such as the way a seashell, snowflakes, or tooth enamel form. The technique, called “airgaps” by IBM scientists, isn’t an entirely accurate term, as the gaps are actually a vacuum with no air.
Airgaps are made by coating a silicon wafer with a layer of a special polymer that, when heated, forms a honeycomb of trillions of uniform tiny holes just 20 nanometers across, with a honeycomb wall 20 nanometers across. The pattern is then used to create the copper wiring on top of a chip and the insulating gaps that let electricity flow smoothly.
The technique causes a vacuum to form between the copper wires in a computer chip. This is important because as the wires shrink to nanometer dimensions, charge increasingly leaks between adjacent wires.
Dealing with power leakage has become a growing concern for chip makers as they make their CPUs increasingly smaller. Music aficionados who remember the days of vinyl know there was only so much room to put on an album side, because if you tried to put more music on the record, the grooves started to bleed together. It’s a similar situation here.
“It’s like pouring water down a pipe,” said Dave Lammers, director of WeSRCH.com, a research site run by VLSI Research. “Pour it down a one-inch pipe, that’s OK. When you try to pour the same amount down a straw, that gets tough.”
As the copper wires shrink, there’s more resistance as the electronics move down them. So as the wires shrink, they get slower, he explained. “The airgap allows them to keep making the wires smaller without the charge leaking from one wire to the adjacent wire. Airgaps put a better insulation, to a higher degree, between the metal wires.”
Dan Edelstein, IBM fellow and chief scientist at IBM’s research division, told internetnews.com that “this is going to keep enabling us to scale for several generations beyond.” Copper wire was reaching a limit of how small it could be made without significant leakage.
“The airgaps reduce the energy needed to put signals on these wires, so the whole chip can run cooler or faster depending on the tradeoff you want to take,” said Edelstein. Circuits can speed up by as much as 35 percent, based on the drop in capacitance.
That drop isn’t across the board, however. It only affects circuits heavily dominated by the wiring delay, and some circuits have short wires and won’t really benefit. Edelstein said he could see a clock speed increase of up to 10 percent or a drop in heat by up to 15 percent from this new process.
Lammers said that’s all a good thing for processor manufacturing. “You get all the benefits when you shrink the logic device from this. You can add more functions and have less power consumption and a lower cost to manufacture. All the good things that happen with processors now will keep on going,” he said. | <urn:uuid:83b8887b-8c02-4ea8-96eb-11907b9ba3e8> | CC-MAIN-2022-40 | https://www.datamation.com/applications/ibm-introduces-the-self-assembling-chip/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00595.warc.gz | en | 0.949296 | 728 | 3.75 | 4 |
Does an energy saving computer really exist? Can technology be environmentally friendly, or are such claims just a farce? The answer to those questions depends on who you ask.
Yes, green technology is real
Mobile phones, which are slowly becoming the go-to source for computing for many people, are in fact cleaner now than before. According to a recent report from ifixit.org and HealthyStuff.org, 36 of the most popular cellphones now on the market contain fewer compounds known to wreak environmental havoc than did mobile technology in 2010. While today’s smartphones are still made using known contaminants such as lead and mercury, they are found at levels much lower than was seen in the past.
The report issued scores between zero and five, with low scores indicating lower levels of contaminants. According to the most recent findings, cellphones average a score of 2.75. In comparison, mobile phones averaged a five in mid 2007. Some of the cellphones with the lowest levels of known contaminants include the Motorola Citrus, the iPhone 4S, the LG Remarq and the Samsung Captivate.
“In general, the results are hopeful,” Kyle Wiens wrote in an October 2 article on ifixit.org. “Newer phones are being made with fewer hazardous chemicals: every phone that was ranked of ‘high concern’ was released before 2010. The newest phones, including the iPhone 5, are some of the best.”
No, eco-friendly claims are malarkey
While the report did show that cellphones are being made with fewer toxic chemicals, smartphones still contain known contaminants such as arsenic. The materials used to make modern technology mean that even the most efficient energy saving computer cannot truly be considered eco-friendly, according to The Guardian‘s Lucy Siegle. She also cited the substandard conditions at mining sites for the metals used to construct modern technology as an example of how even supposed green technology is not always that environmentally friendly.
“Brands and consumers [prioritize] perfection over people (and planet),” Siegle wrote. “So ethi-tech (as I’m calling a hoped-for sustainable technological revolution) has yet to get going.”
An ideal way to reduce a computer's environmental footprint from the manufacturing stage onward would be to use recycled components, Siegle said. However, the rapid pace of technological change means that the parts used to make tablets, computers and smartphones are more intertwined than ever, so recycling those parts is more difficult.
Does the smartphone study give you hope that technology is becoming more eco-friendly, or do you share Siegle’s belief that no device is truly green? Leave your comments below to let us know what you think about this issue! | <urn:uuid:cd9c2398-fcd9-4159-b415-00443a2e9aa0> | CC-MAIN-2022-40 | https://www.faronics.com/news/blog/eco-friendly-technology-is-it-fact-or-fiction-2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00595.warc.gz | en | 0.963609 | 574 | 3.078125 | 3 |
A high proportion of shift workers gain weight and develop diabetes, which has been attributed to a mismatch between their internal clocks and their schedules. To study this, researchers from the Perelman School of Medicine at the University of Pennsylvania created a related mismatch by altering the function of a molecule within the brains of mice, shortening their circadian rhythms from 24 to 21 hours.
“When the external world doesn’t match the internal body’s cycles, metabolism pays the price,” said the study’s senior author, Mitchell A. Lazar, MD, Ph.D., the director of Penn Medicine’s Institute for Diabetes, Obesity, and Metabolism, and the Ware Professor of Diabetes and Metabolic Diseases. “We saw this in our study, and we believe that this happens similarly when people work odd hours that don’t align with how human bodies are wired.”
In the study, published today in Science Advances, the researchers, led by Lazar and primary investigator Marine Adlanmerini, Ph.D., a post-doctoral researcher in Lazar's lab, sought to explore circadian desynchrony, a theory in which a disruption or alteration to a person's innate internal clock leads to poor outcomes.
Shift workers – those who may work long hours, overnight, or with irregular rest periods in between work – are subject to this, which could be why they appear to be at higher risk for obesity, diabetes, and metabolic diseases including having a liver that retains more fat.
So to explore whether circadian desynchrony is a viable explanation for this, the researchers removed certain molecules called REV-ERB, which reside in the brain cells of mice, and seem to control the body’s internal clock, holding it around 24-hour cycles. When REV-ERB was deleted, it caused the mouse body clocks to run roughly three hours shorter, which the researchers determined by tracking their regular sleep/awake pattern.
While their body clocks ran faster, some of these mice were kept in a typical day’s 24-hour cycle, with 12 hours of light and 12 of dark. Those mice, when on their regular diet, were able to keep their weight in check.
Mice that still had REV-ERB, meanwhile, did not show the same degree of poor outcomes when given the high-fat, high-sugar diet.
“One potential explanation is that the internal clock of the mice missing REV-ERB was running at odds with the 24-hour day, which led to metabolic stress on the body,” Lazar said.
One way this was "fixed" was by adjusting the length of the mice's "day" in the lab to match their malfunctioning internal clock: 21-hour days with 10.5-hour cycles of light and dark. When this happened, the mice with the altered clocks were no longer as susceptible to the ill effects of the unhealthy diet.
“This may be a lesson for how to prevent or reduce obesity and diabetes in shift workers,” Lazar explained. “For example, timing of meals to better match the shift worker’s own clock could be of benefit. That would also be consistent with a number of studies in mice and people that have suggested that eating at specific times of day may improve weight control and metabolism.”
Moving forward, Lazar, Adlanmerini, and their team feel that potentially finding biomarkers which could be tested for and indicate how a person’s internal clock is running would be key.
“Information like that could then be matched to decisions about when to eat, much as blood sugar monitoring can help a diabetic understand when they should be taking more insulin,” said Lazar.
Type 2 diabetes (T2D) is a growing global health problem, with skeletal muscle insulin resistance being a primary defect in the pathology of this disease. While the etiology of this disease is complex, perturbed sleep/wake rhythms from shift-work, sleep disorders, and social jet lag are associated with obesity, T2D, and related comorbidities (1–4), highlighting the critical role of the circadian timing system for metabolic health.
Cell autonomous circadian rhythms are generated by a transcription-translation autoregulatory feedback loop composed of transcriptional activators CLOCK and BMAL1 (ARNTL) and their target genes Period (PER), Cryptochrome (CRY), and REV-ERBα (NR1D1), which rhythmically accumulate and form a repressor complex that interacts with CLOCK and BMAL1 to inhibit transcription (5).
Disruption of the molecular clock in skeletal muscle leads to obesity and insulin resistance in mouse models (6–8). While disrupted circadian rhythms alter metabolism, the extent to which these processes are impaired in people with T2D is unknown.
Several lines of evidence suggest that the link between dysregulated molecular-clock activity and T2D or insulin resistance may be tissue dependent.
In white adipose tissue, the evidence is equivocal. For example, subcutaneous white adipose tissue biopsies showed no difference of rhythm and amplitude of core-clock (PER1, PER2, PER3, CRY2, BMAL1, and DBP), clock-related (REVERBα), and metabolic (PGC1α) genes between individuals with normal weight, obesity, or T2D over a time-course experiment (9).
Conversely, when the sleep/wake cycle and dietary regime are controlled, amplitude oscillations of core-clock genes and number of rhythmic genes are reduced in adipose tissue from people with T2D as compared with healthy, lean individuals (10). In human leukocytes collected over a time-course experiment, mRNA expression of BMAL1, PER1, PER2, and PER3 was lower in people with T2D as compared to nondiabetic individuals (11).
In addition, BMAL1, PER1, and PER3 mRNA expression in leukocytes collected from people with T2D is inversely correlated with hemoglobin A1C (HbA1c) levels, suggesting an association of molecular-clock gene expression with T2D and insulin resistance.
Furthermore, in pancreatic islets from individuals with T2D or healthy controls, PER2, PER3, and CRY2 mRNA expression is positively correlated with islet insulin content and plasma HbA1c levels (12). Thus, there may be tissue specificity of molecular-clock regulation, which contributes to clinical outcomes related to insulin sensitivity and T2D etiology. The underlying mechanisms regulating metabolic rhythmicity and, particularly, whether rhythmicity is lost in T2D remain incompletely understood.
At the cellular level, primary human myotubes maintain a circadian rhythm, with the amplitude of the circadian gene REV-ERBα correlating with the metabolic disease state of the donor groups (13). This apparent link between the skeletal muscle molecular clock and insulin sensitivity may be partly mediated by molecular-clock regulation of metabolic targets. Chromatin immunoprecipitation (ChIP) sequencing has revealed distinct skeletal muscle–specific BMAL1 and REV-ERBα cistromes (14), with prominent molecular clock–targeted pathways, including mitochondrial function and glucose/lipid/protein metabolism (14, 15).
Moreover, these metabolic pathways may participate in retrograde signaling to control aspects of the molecular clock. Pharmacological inhibition of DRP1, a key regulator of mitochondrial fission and metabolism, alters the period length of BMAL1 transcriptional activity in human fibroblasts (16).
However, the signals and the clock-derived alterations that govern the rhythmicity of metabolism remain incompletely understood. Despite the growing evidence that several metabolic pathways are under circadian control, it is not clear whether circadian rhythmicity of the intrinsic molecular clock is altered in T2D. Here, we determined whether circadian control of gene expression and metabolism is altered at the cellular level in skeletal muscle from individuals with T2D.
Reference link: https://www.science.org/doi/10.1126/sciadv.abi9654
More information: Marine Adlanmerini et al, REV-ERB nuclear receptors in the suprachiasmatic nucleus control circadian period and restrict diet-induced obesity, Science Advances (2021). DOI: 10.1126/sciadv.abh2007 | <urn:uuid:207a0c44-8f96-4a58-96da-e2595abb45ff> | CC-MAIN-2022-40 | https://debuglies.com/2021/10/29/why-people-who-work-late-or-irregular-hours-are-susceptible-to-weight-gain-and-diabetes/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00595.warc.gz | en | 0.924609 | 1,764 | 3.375 | 3 |
Google has benefitted from the General Data Protection Regulation (GDPR) in a way which most of us probably could not have foreseen.
A recently published report says the number of trackers (cookies and the like) used to monitor people's behaviour on the internet has dropped for pretty much everyone but Google.
Looking at percentages, the number of trackers per page on EU-accessed websites fell by 3.4 per cent. For US websites, it rose 8.29 per cent. For Google specifically, it rose 0.933 per cent.
The Register argues that Google seized the opportune moment, when other, smaller companies reduced the number of their trackers.
“Although the number of trackers fell for EU netizens following the introduction of GDPR, Google was able to step into the gap and hoover up more data on Europeans' web browsing,” it said.
"For users [in Europe] this means that while the number of third parties asking for access to their data is decreasing, a tiny few are getting more of their data," the report noted.
GDPR was drafted by the European Union as a data regulation built for the digital age. It aims to regulate how businesses gather, store, safekeep and share information they have on EU citizens. Fines for not complying can go up to €20 million, or 4 per cent annual global turnover.
Image source: Shutterstock/Wright Studio | <urn:uuid:7294094a-9778-4135-97c8-4c2d690ec49f> | CC-MAIN-2022-40 | https://www.itproportal.com/news/gdpr-may-actually-be-benefitting-big-business/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00595.warc.gz | en | 0.940694 | 316 | 2.671875 | 3 |
Understanding PoE Standards and PoE Wattage
PoE (Power over Ethernet) technology allows PSE (Power Sourcing Equipment, such as a PoE switch) to use Ethernet cables to deliver both power and data simultaneously to PD (Powered Device, like IP cameras and VoIP phones), which can simplify cabling installation and save cost. Different PoE standards like IEEE802.3af, 802.3at and 802.3bt are released by IEEE (Institute of Electrical and Electronic Engineers) to regulate the amount of power delivered to those PDs. Then how much do you know about those PoE standards? How many PDs can be connected to a PSE based on different PoE wattages? Here offers a detailed explanation.
PoE Standards Introduction
At present, there are three PoE standards: IEEE 802.3af, IEEE 802.3at and IEEE 802.3bt. These standards define the maximum power a PSE supplies on each port and the minimum power a PD can expect to receive.
IEEE 802.3af is also known as standard PoE with supply voltage of 44-57V, and supply current of 10-350mA. In this standard, the maximum power output of a port is limited to 15.4W. However, some power will be lost on the Ethernet cable during the transmission. Thus, the minimum guaranteed power available at the PD is 12.95 watts per port. It can support VoIP phones, sensors and so on.
The updated IEEE 802.3at standard, also named PoE+, is backward-compatible with standard PoE. The supply voltage of PoE+ ranges from 50V to 57V, and the supply current can be 10-600mA. It provides up to 30W of power on each port of a PSE. Due to power loss along the cable, the minimum power assured at each PD is 25W. This type can support devices that require more power, such as LCD displays, biometric sensors and tablets.
IEEE 802.3bt is the latest PoE standard and defines two powering/wattage types - Type 3 and Type 4 in the table below. They increase the maximum PoE power by delivering power through two or more pairs of the Ethernet cable. In Type 3 and Type 4 modes, PSEs identify the PDs and set the power accordingly, up to the maximum PD power, resulting in a better power-delivery system.
Type 3 is also known as PoE++, which can carry up to 60W for each PoE port (minimum power ensured on each PD port is 51W) over a single RJ45 cable to power devices like video conferencing system components.
Type 4 is named higher-power PoE. It can supply a maximum power output of 100W on each PoE port (the minimum power ensured at each PD port is 71W), which is suitable for devices like laptops or TVs. Both modes of IEEE 802.3bt are backward-compatible with 802.3af and 802.3at. The following table summarizes the specifications of the PoE standards.
|Type||Standard||PD Min. Power Per Port||PSE Max. Power Per Port||Cable Category||Power Over Pairs||Released Time|
|Type 1||IEEE 802.3af||12.95W||15.4W||Cat5e||2 pairs||2003|
|Type 2||IEEE 802.3at||25W||30W||Cat5e||2 pairs||2009|
|Type 3||IEEE 802.3bt||51-60W||60W||Cat5e||2 pairs class0-4, 4 pairs class5-6||2018|
|Type 4||IEEE 802.3bt||71-90W||100W||Cat5e||4 pairs class7-8||2018|
How Much Wattage Does a PoE Switch Provide?
IEEE 802.3af and 802.3at are the most common PoE standards, supported by the wide majority of PoE devices. IEEE 802.3bt is newly released and not yet in large-scale use; only a few vendors' products support it, such as the FS S5860-24XB-U, a PoE++ switch that supports auto-sensing of the IEEE 802.3af/at/bt PoE standards. Therefore, we discuss only the PoE wattage of IEEE 802.3af and 802.3at here.
As described above, standard PoE can supply a maximum power output of 15.4W per port, while PoE+ supplies 30W. When a plan calls for multiple devices to be connected to one PoE/PoE+ switch, it's necessary to ensure the total wattage required by the devices does not exceed the switch's power budget. Take the FS S3400-24T4FP PoE/PoE+ switch as an example. It's a managed switch with 24 RJ45 ports and 4 SFP ports. It complies with IEEE 802.3af/at, and its total power budget is 370W. This means the 24-port switch can simultaneously power 24 devices at the PoE standard (15.4W×24=369.6W<370W), or 12 devices at the PoE+ standard (30W×12=360W<370W).
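The same budget arithmetic can be generalized into a small helper, sketched below in Python. The 370W budget and the 15.4W/30W per-port figures come from this article; any other values you pass in are your own assumptions, and real switches may also reserve some per-port overhead.

```python
# How many powered devices (PDs) of a given class can a switch run at once?
def max_powered_devices(total_budget_w: float, per_port_draw_w: float,
                        port_count: int) -> int:
    by_budget = int(total_budget_w // per_port_draw_w)   # limited by the PoE budget
    return min(by_budget, port_count)                    # ...and by physical ports

BUDGET_W = 370   # S3400-24T4FP total PoE budget from the article
PORTS = 24

print(max_powered_devices(BUDGET_W, 15.4, PORTS))  # 24 -> every port at 802.3af
print(max_powered_devices(BUDGET_W, 30.0, PORTS))  # 12 -> half the ports at 802.3at
```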
Figure 2: Applications of FS PoE+ switches.
Usually, if a network switch supports both PoE and PoE+ standards, it can automatically detect whether the connected device is compatible with PoE or PoE+, and supply the suitable power to the device. For example, if we connect a PoE-enabled device with 5W power to the S3400-24T4FP PoE/PoE+ switch, then the switch will provide 5W power to the device. If we connect the switch with a PoE-enabled device that requires 20W power, then the switch will supply it with 20W power. And if we connect a device without PoE ability to the PoE switch, the switch will only deliver data to the device.
FS Network Switches That Comply with Various PoE Standards
FS now offers PoE/PoE+ switches that follow the PoE standards for greater security and capability. They are available in 8/16/24/48-port options. These switches support layer 2+ switching features like VLAN, and they offer advanced management via WEB, CLI, TELNET and SNMP. FS PoE/PoE+ switches can power any 802.3af or 802.3at device on the market, making them flexible and secure. The following table lists the specifications of four FS PoE/PoE+ switches.
|PoE Standard||Model||Port||Switch Capacity||Power Budget||Forwarding Rate||Fans|
|IEEE 802.3af/at||S3260-8T2FP||8x RJ45 | 2x SFP||20 Gbps||240W||15 Mpps||With Fans|
|IEEE 802.3af/at||S3410-24TS-P||24x RJ45 | 2x SFP+, 2x RJ45/SFP||88 Gbps||370W||66 Mpps||With Fans|
|IEEE 802.3af/at||S5500-48T8SP||48x RJ45 | 8x 10G SFP+||256 Gbps||370W||192 Mpps||With Fans|
|IEEE 802.3af/at/bt||S5860-24XB-U||24x Base-T | 4x SFP+, 4x SFP28|| ||370W||565 Mpps||With Fans|
PoE standards specifies the maximum power output of a PSE, helping protect PoE-enabled devices from high-voltage damage. In addition, PoE technology can make the cabling installation easier and save your costs. It’s especially suitable for IP monitoring and remote monitoring applications, such as a PoE electronic billboard, or a PoE electronic display. | <urn:uuid:66103ffd-331c-454f-9846-e7cc54578213> | CC-MAIN-2022-40 | https://community.fs.com/blog/understanding-poe-standards-and-poe-wattage.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00795.warc.gz | en | 0.827316 | 1,770 | 3.234375 | 3 |
FTTX (Fiber-To-The-X) refers to the various Passive Optical Network (PON) configurations and can describe any optical fiber network that replaces all or part of a copper network. It is different from the traditional fiber optic network used for Local Area Network (LAN) applications.
A key difference between FTTX and the traditional fiber optic network is the number of optical fibers required for each user. In most FTTX applications, only one optical fiber is used. The single optical fiber passes data in both directions (bidirectional, or BiDi). This is very different from a LAN application where the transmit optical fiber sends data in one direction and the receive optical fiber sends data in the other direction. In a LAN application, both optical fibers can have data passing through them at the same time.
A transceiver, or converter, is typically a device that has two receptacles or ports. One mates with the transmit optical fiber and the other mates with the receive optical fiber. This allows the device to be transmitting and receiving simultaneously. This is known as full-duplex operation, e.g., a 1000BASE-T SFP transceiver with a RJ45 port can take advantage of this operation. In an FTTX single optical fiber application, full-duplex operation is typically not possible; usually only half-duplex operation takes place. This means that part of the time the optical fiber is carrying a signal in one direction, and the rest of the time, it is carrying a signal in the other direction. The key to making this system work is timing. Data is sent downstream for a predetermined amount of time and then data is sent upstream for a predetermined amount of time. This process is also known as Time Division Multiplexing (TDM).
FTTX systems typically use multiple wavelengths. The downstream laser is always a different wavelength than the upstream laser. The downstream laser is typically the longer wavelength, such as 1480 nm or 1550 nm (or both), and the upstream laser is typically 1310 nm. FTTX is possible with optical fiber distances up to 20 km because optical fiber is capable of transmitting information with a very low level of loss. The typical loss for an FTTX optical fiber at 1550 nm is 0.25 dB/km and 0.35 dB/km at 1310 nm.
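Those attenuation figures translate directly into a simple loss-budget check, sketched below. The 0.25 dB/km and 0.35 dB/km values and the 20 km reach come from the paragraph above; everything else (including the omission of splitter and connector losses, which a real PON design must also include) is a simplifying assumption.

```python
# Fiber attenuation over a 20 km FTTX span at the two working wavelengths.
ATTENUATION_DB_PER_KM = {"1550 nm (downstream)": 0.25, "1310 nm (upstream)": 0.35}
SPAN_KM = 20.0

def fiber_loss_db(db_per_km: float, distance_km: float) -> float:
    return db_per_km * distance_km

for wavelength, db_per_km in ATTENUATION_DB_PER_KM.items():
    loss = fiber_loss_db(db_per_km, SPAN_KM)
    print(f"{wavelength}: {loss:.1f} dB of fiber loss over {SPAN_KM:.0f} km")
# Prints 5.0 dB and 7.0 dB respectively -- small enough that 20 km spans are
# practical, which is the point the article makes about low fiber loss.
```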
Types of FTTX
According to the X, there are Fiber-To-The-Home (FTTH), Fiber-To-The-Building (FTTB), Fiber-To-The-Curb (FTTC), Fiber-To-The-Node (FTTN), Fiber-To-The-Desk (FTTD), etc.
An FTTH PON uses optical fiber from the central office to the home; there are no active electronics helping with the transmission of data in between the two locations. The central office is a communications switching facility. It houses a large number of complex switches that establish temporary connections between subscriber lines that terminate at the central office. At the home, a converter box (e.g., a Fiber to Copper Media Converter with SFP and RJ45 ports) changes the optical signal from the optical fiber into electrical signals. The converter box interfaces with existing home cabling such as coaxial cabling for cable TV, twisted-pair cabling for telephone, and Category 5e or 6 cabling for Internet connectivity.
An FTTB PON is very similar to an FTTH PON. It uses optical fiber from the central office to the building, with no electronics helping with transmission in between. The optical signal from the optical fiber is converted into electrical signals in a converter box at the building. The converter box interfaces with existing cabling such as coaxial cabling for cable TV, twisted-pair cabling for telephone, and Category 5e or 6 cabling for Internet connectivity.
In an FTTC PON, optical fiber runs from the central office and stops at the curb. The "curb" may be right in front of the house or some distance down the block. The converter box is located where the optical fiber stops, and it changes the optical signal from the optical fiber into electrical signals. These electrical signals are typically brought into the home through existing copper cabling. The electrical signals may need to be processed by another converter box inside the house to interface with existing cabling such as coaxial cabling for cable TV, twisted-pair cabling for telephone, and Category 5e or 6 cabling for Internet connectivity.
FTTN is sometimes referred to as fiber to the neighborhood. An FTTN PON only has optical fiber from the central office to the node. The node is typically a telecommunications cabinet that serves a neighborhood or section of a neighborhood. The optical signal from the optical fiber is converted into electrical signals inside the telecommunications cabinet. These electrical signals are distributed throughout the neighborhood through existing copper cables to the houses.
FTTD is an ideal FTTX solution. A fiber connection is installed from the main computer room to a terminal or fiber media converter near the user's desktop. FTTD is a high-bandwidth solution that expands the traditional fiber backbone system by running fiber directly to desktops. It is a horizontal wiring option that pushes the available bandwidth beyond 10G. It is an intriguing, underestimated and overlooked way to create a beneficial system that is expandable and performance-driven.
Fiberstore’s FTTX Solutions
As more bandwidth is needed for digital voice, high-speed data and high-definition video, service providers can count on Fiberstore’s innovative optical infrastructure solutions to meet today’s challenges and prepare for tomorrow’s demands. Fiberstore offers a variety of options to achieve “end-to-end” FTTX architectures that can transmit voice, data and video through the PON technologies. Fiberstore’s FTTX solutions include CWDM & DWDM multiplexers/demultiplexers, transceivers (e.g., SFP, SFP+, XFP), media converters, cables etc. | <urn:uuid:3889acf8-c47a-494b-82c3-2804095eebdd> | CC-MAIN-2022-40 | https://www.fiber-optic-components.com/fttx-pon-the-replacement-of-copper-network.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00795.warc.gz | en | 0.919223 | 1,280 | 3.671875 | 4 |
ZDNet research reports that, since the start of the pandemic, there have been:
- a 40% increase in unsecured remote desktop computers
- brute force attacks on remote desktops increasing by 400% during March and April 2020
- email scams skyrocketing by 667% in March 2020
- 90% of the nearly 5 billion COVID-19 related web pages being found to be scams
- over half a million Zoom credentials for sale on the dark web and a 2000% increase in malicious files with ‘Zoom’ in the name
- a 72–105% spike in ransomware linked to COVID-19.
As of May 2020, the US Federal Trade Commission (FTC) had received 60,000-plus reports of fraud related to COVID-19 and individual losses from these scams totaled around $44 million. The FTC says most scams relate to travel/vacations, online shopping and health care. People working from home and college students are also targets. Scammers impersonating public health authorities is a common trick, where criminals claim to be conducting contact tracing for COVID-19 and ask for personal information or send a malicious link via text to the victim. The US Federal Trade Commission has published advice on how to recognize and deal with COVID-19 scams.
As a consumer and possibly a remote worker, you’re at serious risk of privacy breaches or cyberattack from at least three different angles as the world struggles to contain the deadly virus and work and live in a new normal:
- the inadvertent or deliberate exposure and misuse of your personal data collected at cafes, gyms and other venues to support contact tracing
- inappropriate access to corporate data, via unsecured networks and devices you might be working on
- COVID-19 case reporting data being shared by health organizations and others, including employers, accidentally identifying you in the process.
What’s the risk with contact tracing?
Contact tracing is the process of gathering information about the people with whom a confirmed case of COVID-19 might have been in contact, and the places they have been. The WHO regards contact tracing as a critical weapon in the fight against COVID-19, and most countries around the world are using it in some form, with varying degrees of success.
Contact tracing can be done manually or with technology. Manual contact tracing is the pen and paper or QR code approach you’ve likely seen in your local bars and restaurants. Technology assisted contact tracing (TACT) includes apps such as Australia’s COVIDSafe, Canada’s ABTraceTogether, and Germany’s Corona-Warn-App.
But largely it’s not the government developed or sanctioned contact tracing apps that leave people exposed and at risk so much as the casual and ad hoc collection of personal data at hospitality and other venues to support contact tracing efforts. This type of data collection is highly distributed across many small operations and is being conducted by people with little to no training in handling personal data.
Most commonly, these venues ask you to scan a QR code and submit your details at a web site, or to write your contact details, including name, phone number, email address and residential address, as well as time of visit, on to a single sheet of paper or a log that sits in a common area such as on a counter or at a central checkpoint within the venue.
The problems are obvious: your personal information may be left exposed, fall into the hands of those who don’t know how to securely manage and store it and who may use it for nefarious purposes, or be sold to companies that want to spam you.
Also in the UK, there are reports of some venue staff using the personal information to harass patrons, and of data being used in scam contact tracing activities in an attempt to defraud patrons. New Zealand and Australia report similar data concerns. COVID-19 scams are a growing global problem. The IAPP reports: “COVID-19 has proven to be one of the most effective phishing lures of recent years, as epidemics and health scares tend to provide fertile ground for social engineering attacks.”
What’s the risk with remote work?
Criminals are capitalizing on lower defenses and vulnerabilities caused by the widespread and rapid shift to remote work, especially in companies whose business functions were not previously performed remotely. There’s variable, often outdated, security arrangements in place for the massive conduit that now exists between corporate or cloud and home networks and the myriad connected devices. People globally are focused on fighting the pandemic and stemming its devastating death toll. There’s heightened anxiety as people struggle to live and work in the ‘new normal’.
Put simply, criminals capitalize on vulnerability. The main motivation for these attacks is always the same: financial gain and massive disruption. Criminals want to trick people into giving them access to sensitive data and/or funds and exposing credentials that would allow them to infiltrate corporate information and payment systems. Attacks bring down services, and often open the floodgates for more criminal activity. And it’s incredibly easy for criminals to achieve. As PwC says: “As has been proven time and time again, it only takes one. One click, one missing endpoint agent, one failed alert, one unsuspecting employee, and the adversary can proclaim victory over your network.”
What’s the risk with COVID-19 case data sharing?
The issue here is loss of privacy. Re-identification of data is easier than people think because anonymization of data sets is more difficult than people think.
The International Association of Privacy Professionals (IAPP) warns: “… organizations should … be cognizant that sharing the names of people who have had or recovered from COVID-19 presents a privacy risk for them. Even if that data is anonymized before being shared, the risk of re-identification and subsequent privacy harms can remain.”
So what can you do?
The best way to stay private and safe online is to not share your personally identifiable information (PII) in the first place. Of course this isn't always possible, as in the case of your data being shared by a health organization or your employer during the pandemic. But MySudo is a useful tool in your personal privacy toolkit for almost all other interactions you have online and off. It's the only app on the market that offers private and secure phone, email, browsing and payments all in one place, with the ability to use these privacy capabilities via Sudos: secure digital profiles that work as real alternatives to your personal information.
See how MySudo works:
In addition to using MySudo instead of your personal information online, there are important steps you can take to spot and avoid COVID-19 scams. The FTC has a web page of advice, which includes:
- Do not respond to texts, emails or calls about checks from the government.
- Ignore offers for vaccinations and home test kits.
- Hang up on robocalls.
- Watch for emails claiming to be from the CDC or WHO.
Interpol has this handy infographic:
PwC has advice for remote workers too, which includes being skeptical of emails from unknown senders, not forwarding suspicious emails to co-workers, and reporting suspicious emails to the IT or security department. See more.
At Anonyome Labs, the makers of MySudo consumer app and Sudo Platform business toolkit, we’re creating a world in which people have exclusive control and freedom over their private information. We’re changing the privacy and security paradigm—and resolving the greatest challenges business and consumers face. There’s never been a better time.
Photo by 🇨🇭 Claudio Schwarz | @purzlb | <urn:uuid:4fac5bd8-26c0-4945-94a9-f33e41bc7c69> | CC-MAIN-2022-40 | https://mysudo.com/2020/11/how-to-stay-safe-and-private-online-during-covid-19/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00795.warc.gz | en | 0.950841 | 1,761 | 2.796875 | 3 |
A distance learning course refers to an online education method where students learn at their own pace but also have the benefit of teacher instruction.
Interest in distance learning has spiked massively over the past year, due to worldwide lockdowns forcing in-person educators to change tactics, so there is more data than ever on this subject. Often, the term is used interchangeably with “e-learning” and “virtual classroom” although there are important differences. E-learning, for example, refers to self-paced study without the instruction of a teacher or set study times. A virtual classroom, on the other hand, has a teacher overseeing students and a set class time.
Distance learning courses, when at their best, provide quality education to large numbers of people at low cost and with no commute; all that students need is access to the internet.
Further, machine learning programs that students interact with can identify learning styles and note which material they struggle with so the lessons can be adjusted for the student’s benefit. Good courses can even adjust lessons immediately, while sending analysis to the course instructor.
Distance learning courses also integrate well with NLP-run programs like machine translation and transcription programs for video material.
An effective distance learning course requires a course plan and effective student assessments. Teachers moving to a distance learning program should review the data before implementing plans and assessment methods.
Other internal course data includes student behavior, completion rates, and student progress rates. Importantly, the number of times students seek help through chatbots or make use of external plugins like machine translation or transcription programs provides important data that instructors can use to either update the course lessons or provide one-on-one assistance to struggling students.
Good distance learning programs incorporate data on student interactions with the entire distance learning environment, including websites, apps, and forums. Additionally, student surveys, reviews, and other sources of feedback provide important information for teachers and developers to improve lesson plans and outreach.
Useful external data include education and course-relevant industry news as well as psychological studies. These sources provide useful information on reaching potential students and engaging current students who may be struggling.
Distance learning course challenges may include inconsistent access to the internet and, for child students, an unsupportive home environment. Younger children, in particular, suffer from a lack of consistent school schedule and peer interactions; adding the struggle of a distracting, neglectful, or even worse home can make learning nearly impossible.
Finally, security is a major concern, especially for schools that were forced to begin distance learning or virtual classroom teaching without warning or adequate preparation. However, many technology departments and software services provide cyber security protection protocol and protection services for distance learning courses.
Cyber actors likely view schools as targets of opportunity, and these types of attacks are expected to continue through the 2020/2021 academic year, says the advisory. These issues will be particularly challenging for K-12 schools that face resource limitations; therefore, educational leadership, information technology personnel, and security personnel will need to balance this risk when determining their cybersecurity investments.
IBM PAIRS Services provides queryable geospatial and temporal data in the form of maps, satellite images, weather data, drone data, and other data.
AMEE provides medical courses that are available both face-to-face and online.
Courses and Training is an online service offered by Intexfy to provide guidance to employees in their specific fields.
Pitch Up Company incorporates a business's own resources to help employees develop presentations. They also offer education and training.
Alqami Education Courses provides courses to educate users on how to use Alqami data. | <urn:uuid:2d6e7cda-d8b8-4e65-ad2e-70cbf0c84f5e> | CC-MAIN-2022-40 | https://www.data-hunters.com/use_case/distance-learning-course/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00795.warc.gz | en | 0.93613 | 752 | 3.78125 | 4 |
Metaverse AI: when Artificial Intelligence meets the real world
You must have already seen the captivating video by Mark Zuckerberg about Metaverse. Here, the CEO of Meta gives us a peek into what the metaverse AI can be expected to look like in a 77-minute video.
Indeed, the promise of immersive everything seems tempting as the prospects of metaverse itself. But how is that even possible? And what metaverse technologies fast forward to the Matrix-like future? Let’s find out. We’ll start with the definition of the metaverse.
What is Metaverse, anyway?
The discussions and debates on what metaverse is has been doing the rounds. Some say the metaverse is a collective virtual shared space, created by the convergence of virtually enhanced physical reality and physically persistent virtual space. Therefore, metaverse technologies include the worth of all virtual worlds, augmented reality, and the Internet.
It is also commonly alluded to as a networked 3D environment where users can communicate and interact with each other like they do in the real world. Others define metaverse as a network of interconnected experiences and devices, tools and infrastructure, which level up our experience beyond virtual reality.
As you see, there’s no agreed-upon definition. However, venture capitalist Matthew Bohl cites seven characteristics of another world, which are more popular than any other features.
Here are they:
- The metaworld is persistent – it cannot be paused, erased, or terminated.
- All events within the universe take place in real-time, and actions are independent of external factors.
- The meta consciousness is open to everyone.
- The meta-universe has its own economy – people get “money” for the valuable “work” they do or own.
- It is an experience that combines both physical and virtual worlds.
- Data and digital assets from different platforms are combined, thus ensuring data interoperability.
- The meta universe is filled with “content and experiences” created by its users, both individuals and organizations.
There are some other concepts that are attributed to this phenomenon, but they are not widely agreed upon. For example, meta participants will have a single consistent digital identity (or “avatar”) that they will use across all experiences. The Meta company has made this concept a part of its vision.
In reality, the interconnected future means that you can go shopping to a virtual shopping center, purchase a virtual item and pay for that with virtual currency.
Metaverse technology: the seven pillars
This new immersive world can zoom out of nowhere. In its foundation, metaverse relies on well-known technologies that have also been making headlines.
VR metaverse and augmented reality
Virtual reality is a computer-generated setting, typically immersive and interactive, experienced through sensory input such as video images and sound. Virtual realities artificially create sensory experiences, which can include sight, hearing, touch, etc., to stimulate the user’s emotional reactions. In other words, virtual reality uses software to create a simulated environment. It immerses users in this environment with the help of headsets and earphones.
Augmented reality (AR) is a live, direct or indirect, view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, video, graphics, or GPS data. It is related to a more general concept called mediated reality. There, a view of reality is modified by a computer. As a result, technology functions by enhancing one’s current perception of reality. In simple words, augmented reality AI is a technology that overlays a virtual image onto the real world.
What is the difference between VR and AR?
These two seem similar, especially in the continuum of metaverse technology. However, the difference is huge. Augmented reality AI takes digital visual elements and blends them into the real world. It’s more accessible than VR with every smartphone having it with a camera. Pokemon Go, a famous AR game, is the most prominent example. AR can also be enhanced with intelligent algorithms, hence machine learning consulting is common when creating these applications.
Virtual reality sits on a different concept. Thus, it generates a completely virtual environment with an unrivaled immersive experience. VR headsets, gloves, and sensors help users explore the virtual world. Oculus Quest 2 is the closest to the metaverse so far. This VR headset opens up worlds of games, theater, and experiences that are an early model of the metaverse.
Blockchain and cryptocurrency
The roles of blockchain and crypto will play salient roles in the new virtual world. Blockchain is the underlying technology that powers Bitcoin and many other cryptocurrencies. It is essentially a database of information that can be updated only by consensus. At the same time, the main differentiator of blockchain is its decentralized nature and the promise of secure Internet space.
Therefore, it’s easy to image crypto and blockchain in the Metaverse technology stack (which is also to be decentralized). Since cryptocurrency is completely independent of the real world, it is a convenient payment method for digital realities as well.
For example, Decentraland is built on the Ethereum blockchain. It is a decentralized virtual world software where you can buy virtual plots of lands with an NFT (non-fungible token) transaction. At the same time, blockchain technology secures the ownership of lands.
The new iteration of the digital world will also beguile the users by providing a new level of computer-human interaction. That’s when virtual intelligence or VI comes in. As such, virtual intelligence is a type of AI trends in 2022 that inhabit the virtual world.
Unlike general-purpose artificial intelligence, VI is designed to support specific user-linked actions. Therefore, virtual intelligence is software that is confined to the controlled pre-defined setting. Also, it cannot produce impromptu responses or independent decisions.
Virtual intelligence is a code or program that they created functions within the controlled environment for. To demonstrate a better idea, you can now find virtual intelligence at chatbots or interactive maps. The latter interacts with the user and provides a prompt response based on the intended functions.
Artificial intelligence vs virtual intelligence
While both clearly share similarities and backgrounds, artificial intelligence vs virtual intelligence is not the same. They serve different purposes for users, but both can be valuable tools for improving our experience. First of all, virtual intelligence is a more open system than AI. While AI software development is limited to performing tasks, VI is more open to the end-user. VI also boasts more creativity and is not limited to a pre-set wealth of functions.
The second artificial intelligence vs virtual intelligence difference is that VI cannot function without building the whole system. On the other side, AI can tackle issues as they arrive. Lastly, virtual intelligence is more creative, yet cannot produce critical decisions. AI, on the other side, fairs well for important decision-making. Nevertheless, both can exist within meta consciousness. Virtual intelligence will be responsible for the direct computer-user interaction. AI and virtual reality, in turn, will nurture a hyper-immersive environment.
Besides AR, VR, AI, the new virtual reality is also dependent on 3D technologies. The latter encompasses both 3D reconstruction and graphics.
3D graphics is a generally known term applied to images or objects created using 3D computer graphics software. They are displayed with the aid of specialized display equipment, often referred to as 3D glasses, and can be shown on devices such as stereoscopic displays, handheld projectors, digital domes, and virtual reality headsets.
3D reconstruction, at the same time, is a technology used to visualize an object or environment. Through the use of computer-based technology, 3D models can be created that allow you to see the objects in a three-dimensional view. Today, it helps create detailed drawings and renderings. In the metaverse, however, the 3D world is crucial to ensuring the comfort of users.
The metaverse can only exist within the three-dimensional setting. It’s essentially a Digital Twin of our world and any real-life objects, places, and people. Therefore, meta creators will need a wholesome 3D capture and virtualization ecosystem to get it off the ground.
Meta has already rolled out a number of wearables that allow people to immerse themselves in the metaverse-like setting. But apart from wearables, the metaverse’s atmosphere must also match the real world. Therefore, we’re yet to see the whole power of 3D capture that will allow us to replicate the real world without meticulous manual input.
The Internet of Things
The connected Internet of Things (IoT) is the network of physical objects or ‘things’. These are embedded with electronics, software, sensors, and connectivity to collect and exchange data. IoT basically connects devices that are not normally considered computers. Therefore, IoT allows objects to be sensed or controlled remotely across existing network infrastructure, creating opportunities for more direct integration of the physical world into computer-based systems and solutions.
Today, IoT solutions are used to ensure seamless data flow and real-time updates. The meta version is similar. Connected devices will pull data from the real world to generate more accurate digital twins (imaging having the same weather in the virtual and real worlds).
Beyond weather sensors, IoT implementation can create a whole ecosystem of 3D devices. This will help creators simulate a whole range of real-time situations. By further adding AI, the 3D world can process the IoT data and produce decisions on its own.
Why aren’t we there yet?
The concept of metaverse fairs well for user experience, but it does come with challenges and limitations. The latter hampers its quick implementation. Due to the high level of immersiveness, the biggest gripe of the metaverse is on the technology side. If you look into the history of the virtual world in terms of technology, then you will realize that this concept has been around for a long time. However, we still haven’t reached the at-scale implementation of AR and VR technologies that grant us interconnected reality.
Also, VR metaverse experience should go hand in hand with the interoperability of virtual assets and experiences. This is another uncompleted challenge for modern technologies since we lack large-scale platforms that remain interoperable.
Identity authentication issues can also be rampant within the new virtual reality. Avatars or our virtual representations would be hard to prove. Unwanted contact could become more intrusive as attested by a recent incident. It suggests that the virtual world already has a groping problem.
NFTs or tokens also reveal significant system flaws. Essentially, NFTs are posed to tackle the ownership problem of the virtual world. On the flip side, they can into a collective illusion of ownership. In this case, ownership of digital assets in the metaverse is hard to verify.
Therefore, blending offline and virtual interactions comes with the same share (if not larger) of risks as the current digital state.
The bottom line
Metaverse is the next-generation, decentralized and virtual ecosystem. It aims to build a blockchain-based infrastructure and framework that facilitates the building of decentralized applications (DApps), including virtual assets, digital identities, and value intermediaries. Unlike the current digital experience, meta consciousness will offer a 3D, multi-sensory experience backed by artificial intelligence and other disruptive technologies.
Today, the closest thing to the metaverse is the gaming experience in Fortnite and Decentraland-like spaces. According to Grayscale, the metaverse is to reach $1 trillion in annual revenue. Perspectively, it can compete on par with Web 2.0 companies, which makes it a lucrative untapped field for businesses.
Want to get your brand into the metaverse or implement AI into your organization’s infrastructure? Contact the team of experienced AI consultants and engineers. | <urn:uuid:936bba58-1d5e-4820-9a58-df0498cb558c> | CC-MAIN-2022-40 | https://indatalabs.com/blog/metaverse-ai | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00795.warc.gz | en | 0.920717 | 2,508 | 2.59375 | 3 |
Whether artificial intelligence (AI) can help defeat climate change is a complicated question. The answer is both yes, and no. There is a lot of excitement and hope around the potential of AI, and what AI may be able to achieve when it comes to climate change. However, to make these benefits a reality, society must engage with AI in the right way. AI is not magic fairy dust that can simply be sprinkled on the challenge of climate change and expect it to fix itself. People still have to take personal responsibility, but AI can be one of the tools in the toolkit to help businesses make their fight against climate change easier.
AI, another tool in the toolbox
- How mobile phone apps are boosting climate change action (opens in new tab)
While the relationship between AI and climate change is complicated, there are four ways in which AI could potentially impact climate change. These concepts have also been discussed by the Brookings Institution, a Washington, DC-based think tank. These particularly poignant ways include energy supply, energy demand, climate modelling, and climate policy.
- Energy supply – AI is already helping to improve the supply of energy. For example, machine learning systems are being used to map underground deposits of oil and gas, helping companies to better understand their size and value. In the non-hydrocarbon space, AI is also being used for solar forecasting, allowing solar generation companies to participate more efficiently in the electricity markets.
- Energy demand – Today, AI is enhancing efficiency in energy consumption by lowering demand and emissions. These capabilities are set to increase significantly over time. AI can help align energy consumption with real-time changes in energy markets, resulting in significant reductions in demand. The self-healing of power grids is a possibility, too. Today, according to Brookings, potential uses for AI to better manage energy demand are “barely tapped.”
- Climate modelling – AI could be used to help create models that can drive policy-making aimed at reducing consumption. AI should also be able to significantly improve today’s climate change models – for example, by improving the accuracy of predictions for local climate change impacts.
- Climate policy – The reality – despite what others may think – is that the world is already seeing the effects of climate change. AI can help governments and other organisations better shape climate policy to reduce harm to people and the environment. For example, smart adaptation strategies can reduce losses, and can aid preparation for dealing with extreme climate events.
- Can tech combat the climate crisis? (opens in new tab)
AI is not a magic wand
These areas of development for AI are very exciting, but AI is not a magic wand. It is easy for businesses to be sucked into the hype around the possibilities of AI to help manage climate change. Indeed, there are already snake oil salespeople promising the moon with little to back up their claims. People also tend to put a lot of focus on the promise of algorithms – there is much exciting work here, but businesses must learn to think critically about them to avoid unintended consequences.
Amongst all of this, the data involved in AI climate change solutions is often ignored, which is a major mistake. It is important to ensure that all data associated with the AI applications being used is managed correctly so that it can be shared effectively. As an example, sharing data between government agencies and academics, or between companies that are partnering together – businesses cannot afford to share information that is incorrect or based off skewed data.
It is equally important to manage data around climate change in a way that builds trust. This is crucial for both the data being fed into an AI algorithm, as well as the output it produces. Managing data poorly can result in inaccurate results from the AI’s algorithms. Businesses need to know key information about the data within their organisation; such as who has access to the data, what was it used for previously, how it was obtained etc.
If a business inputs data that is ungoverned and potentially unreliable, then it cannot trust the results gained from an AI algorithm. Put simply, it is a case of if garbage data is put in, then a garbage output can be expected, resulting in potentially poor decision making. Alternatively, businesses need to be careful of where the information they produce from these AI algorithms is being stored. For example, information could be used for negative purposes – such as analysing energy use patterns from consumers to determine when a house is unoccupied, and ripe to be burgled.
Today, society is at a critical juncture for both climate change and AI. It is important to build trust within society for both how climate change is addressed and how AI solutions function. Strong data management practices have a foundational role to play in building that trust, and in supporting AI-based climate change solutions that deliver on their promises.
- How big data will drive smart city innovation (opens in new tab)
Stijn Christiaens, CTO & Co-founder, Collibra (opens in new tab) | <urn:uuid:e37215f4-504b-4fa7-b84c-c4716f4e4cf0> | CC-MAIN-2022-40 | https://www.itproportal.com/features/can-ai-save-the-planet-maybe/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00795.warc.gz | en | 0.960025 | 1,023 | 3.578125 | 4 |
Data centers are found in some of the most densely populated areas in the world. These metropolitan cities have been the ideal locations due to their proximity to consumers and proximity to potential workers. But other aspects can influence where data centers can be located. Traditionally, rural areas haven’t been the idyllic location for a data center, but there are several reasons data centers may be leaving the city.
There has been a trend of data centers moving away from large metropolitan areas toward the countryside. This is the result of several different things. Data centers typically are utilized for mission-critical applications. Most businesses require minimal to no downtime. The data center Tier system can help those looking for a new data center.
A Tier 1 data center should have a single path for power and cooling. It may also have redundant and backup components. Tier 1 data centers are expected to have an uptime of 99.571% or 28.8 hours of downtime annually. A Tier 2 data center also has a single path for power and cooling. It also can have some redundant and backup components and should have an expected uptime of 99.741% or 22 hours of downtime every year.
A Tier 3 data center should have multiple paths of power and cooling. It should also have systems in place that update and maintain itself without going offline. Tier 3 data centers are expected to have 99.982 hours of uptime or 1.6 hours of downtime annually. Lastly, a Tier 4 data center should be built as a completely fault-tolerant system. It should also have redundancy for every component. A Tier 4 data center should have an uptime of 99.995%. It should also only have less than 26.3 minutes of downtime every year.
All of these data centers can be good options for a business just depending on what your needs are. As your business needs increase so will the need to change for a different data center tier. A rural data center may not need to worry about the specific tier it falls under.
Before computers, the most densely populated areas were near the coast or at least a body of water. This is because crops grew better where the was water. Later down the timeline, the coasts were also the most densely populated areas because most trade routes ended at some dock on the coast.
Today, nearly one-third of the United States population lives in a coastline or coastline county. This is also why more data centers can be found in certain locations. The data center hubs of the United States include Los Angeles, San Francisco, New York, Chicago, Miami, and more. Many of Colocation America’s data centers can be found in these areas. So, if you’re currently looking for a data center, connect with us today.
But there are several different benefits of a rural data center. One of these benefits is helping the economy in smaller cities and towns that aren’t anywhere near these metropolitan hubs where data centers are normally. Building data centers in less densely populated areas can help bring jobs to these locations. It can also be less costly and there are new tax incentives for those who do so.
Latency is the measurement of the amount of time it takes for the data to travel to its final destination. While there are several factors including connection, which include how different types of cables and how it affects this transmission time, the main factor is distance. Data is still controlled by the laws of physics and still cannot be faster than the speed of light. It doesn’t matter how fast the connection is because the data still has to travel from one point to another. Distance plays the biggest factor in speed. Latency can affect gaming, streaming, and anything regarding data. Limiting the physical distance between the data source can greatly reduce latency
It takes time for data to move from its starting point to its users. As more developers and companies implement new data-intensive applications, remote server farms aren’t quite enough. This is where bringing certain data center operations closer to its users is beneficial.
One of the ways this can be done is through edge computing. Edge computing is helping bring the data center or at least the computing power of the data center closer to less densely populated areas. One of the ways this is being accomplished is through micro data centers. Micro data centers can be positioned anywhere for several different reasons. They can be deployed for offices, retail stores, banks, and schools.
Micro data centers and edge computing bring the computing power of a data center closer to businesses. One of the main benefits is reduced costs. Traditional data centers can be too expensive for smaller businesses. A micro data center can be more affordable if you know how to manage your own micro data center system. It also helps give businesses more flexibility when it comes to scaling up as your company grows. these solutions can also help with latency since the system will be physically closer. The last reason and benefit of a micro data center is time. A micro data center can save time because it’s faster to deploy than building an entire data center.
Edge computing and micro data centers can already bring the power of a data center closer to users in rural areas. So, is building a data center in less densely populated areas worth it? As mentioned earlier, nearly one-third of people in the United States live near a coastal city or a coastal county. But the other two-thirds of people do not. Building data centers in rural areas can create jobs and help jumpstart the economy in certain areas. It can also be more cost-effective with new tax incentives in place. Rural data centers can also solve the problem of latency for the other two-thirds of the population. The world revolves around data and building data centers will allow other areas to flourish as well. New data center hubs will help bring other businesses into these areas. There is a lot of potential in rural data centers. An Iowa farmer once said, “If you build it, they will come”. | <urn:uuid:8cc380ee-d497-4351-8e11-d37bc26c7a6f> | CC-MAIN-2022-40 | https://www.colocationamerica.com/blog/rural-data-center-benefits | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00795.warc.gz | en | 0.95993 | 1,218 | 2.765625 | 3 |
Open-Source Intelligence (OSINT) Meaning
Open Source Intelligence (OSINT) is a method of gathering information from public or other open sources, which can be used by security experts, national intelligence agencies, or cybercriminals. When used by cyber defenders, the goal is to discover publicly available information related to their organization that could be used by attackers, and take steps to prevent those future attacks.
OSINT leverages advanced technology to discover and analyze massive amounts of data, obtained by scanning public networks, from publicly available sources like social media networks, and from the deep web—content that is not crawled by search engines, but is still publicly accessible.
OSINT tools may be open source or proprietary: the distinction should be made between open source code and open source content. Even if the tool itself is not open source, as an OSINT tool, it provides access to openly available content, known as open source intelligence.
History of OSINT
The term OSINT was originally used by the military and intelligence community, to denote intelligence activities that gather strategically important, publicly available information on national security issues.
In the cold war era, espionage focused on obtaining information via human sources (HUMINT) or electronic signals (SIGINT), and in the 1980s OSINT gained prominence as an additional method of gathering intelligence.
With the advent of the Internet, social media, and digital services, open source intelligence grants access to numerous resources to gather intelligence about every aspect of an organization’s IT infrastructure and employees. Security organizations are realizing that they must collect this publicly available information, to stay one step ahead of attackers.
A CISO’s primary goal is to find information that could pose a risk to the organization. This allows CISOs to reduce risk before an attacker exploits a threat. OSINT should be used in combination with regular penetration testing, in which information discovered via OSINT is used to simulate a breach of organizational systems.
How Attackers and Defenders Use OSINT
There are three common uses of OSINT: by cybercriminals, by cyber defenders, and by those seeking to monitor and shape public opinion.
How Security Teams Use OSINT
For penetration testers and security teams, OSINT aims to reveal public information about internal assets and other information accessible outside the organization. Metadata accidentally published by your organization may contain sensitive information.
For example, useful information that can be revealed through OSINT includes open ports; unpatched software with known vulnerabilities; publicly available IT information such as device names, IP addresses and configurations; and other leaked information belonging to the organization.
Websites outside of your organization, especially social media, contain huge amounts of relevant information, especially information about employees. Vendors and partners may also be sharing specific details about an organization’s IT environment. When a company acquires other companies, their publicly available information becomes relevant as well.
How Threat Actors Use OSINT
A common use of OSINT by attackers is to retrieve personal and professional information about employees on social media. This can be used to craft spear-phishing campaigns, targeted at individuals who have privileged access to company resources.
LinkedIn is a great resource for this type of open source intelligence, because it reveals job titles and organizational structure. Other social networking sites are also highly valuable for attackers, because they disclose information such as dates of birth, names of family members and pets, all of which can be used in phishing and to guess passwords.
Another common tactic is to use cloud resources to scan public networks for unpatched assets, open ports, and misconfigured cloud datastores. If an attacker knows what they are looking for, they can also retrieve credentials and other leaked information from sites like GitHub. Developers who are not security conscious can embed passwords and encryption keys in their code, and attackers can identify these secrets through specialized searches.
Other Uses of OSINT
In addition to cybersecurity, OSINT is also frequently used by organizations or governments seeking to monitor and influence public opinion. OSINT can be used for marketing, political campaigns, and disaster management.
OSINT Gathering Techniques
Here are three methods commonly used to gain open intelligence data.
This is the most commonly used way to gather OSINT intelligence. It involves scraping publicly available websites, retrieving data from open APIs such as the Twitter API, or pulling data from deep web information sources. The data is then parsed and organized for consumption.
This type of collection requires more expertise. It directs traffic to a target server to obtain information about the server. Scanner traffic must be similar to normal Internet traffic to avoid detection.
This type of information collection interacts directly with a system to gather information about it. Active collection systems use advanced technologies to access open ports, and scan servers or web applications for vulnerabilities.
This type of data collection can be detected by the target and reveals the reconnaissance process. It leaves a trail in the target’s firewall, Intrusion Detection System (IDS), or Intrusion Prevention System (IPS). Social engineering attacks on targets are also considered a form of active intelligence gathering.
Artificial Intelligence: The Future of OSINT?
OSINT technology is advancing, and many are proposing the use of artificial intelligence and machine learning (AI/ML) to assist OSINT research.
According to public reports, government agencies and intelligence agencies are already using artificial intelligence to gather and analyze data from social media. Military organizations are using AI/ML to identify and combat terrorism, organized cybercrime, false propaganda, and other national security concerns on social media channels.
As AI/ML techniques become available to the private sector, they can help with:
- Improving the data collection phase—filtering out noise and prioritizing data
- Improving the data analysis phase—correlating relevant information and identifying useful structures
- Improving actionable insights—AI/ML analysis can be used to review far more raw data than human analysts can, deriving more actionable insights from the available data.
Here are some of the most popular OSINT tools.
Maltego is part of the Kali Linux operating system, commonly used by network penetration testers and hackers. It is open source, but requires registration with Paterva, the solution vendor. Users can run a “machine”, a type of scripting mechanism, against a target, configuring it according to the information they want to collect.
Main features include:
- Built-in data transformations.
- Ability to write custom transformations.
- Built-in footprints that can collect information from sources and create a visualization of data about a target.
Spiderfoot is a free OSINT tool available on Github. It integrates with multiple data sources, and can be used to gather information about an organization including network addresses, contact details, and credentials.
Main features include:
- Gathers and analyzes network data including IP addresses, classless inter-domain routing (CIDR) ranges, domains and subdomains.
- Gathers email addresses, phone numbers, and other contact details.
- Collects usernames for accounts operated by an organization.
- Collects Bitcoin addresses.
Spyse is an “Internet assets search engine”, designed for security professionals. It collects data from publicly available sources, analyzes them, and identifies security risks.
Main features include:
- Collects data from websites, website owners, and the infrastructure they are running on
- Collects data from publicly exposed IoT devices
- Identifies connections between entities
- Reports on publicly exposed data that represents a security risk
Intelligence X is an archival service that preserves historical versions of web pages that were removed for legal reasons or due to content censorship. It preserves any type of content, no matter how dark or controversial. This includes not only data censored from the public Internet but also data from the dark web, wikileaks, government sites of nations known to engage in cyber attacks, and many other data leaks.
Main features include:
- Search on email addresses or other contact details.
- Advanced search on domains and URLs.
- Search for IPs and CIDR ranges, with support for IPv4 and IPv6.
- Search for MAC addresses and IPFS Hashes.
- Search for financial data such as account numbers and credit card numbers
- Search for personally identifiable information
- Darknet: Tor and I2P
- Wikileaks & Cryptome
- Government sites of North Korea and Russia
- Public and Private Data Leaks
- Whois Data
- Dumpster: Everything else
- Public Web
BuiltWith maintains a large database of websites, which includes information on the technology stacks used by each site. You can combine BuiltWith with security scanners to identify specific vulnerabilities affecting a website.
Main features include:
- Reporting on the content management system (CMS) in use by a website, its version, and plugins currently in use.
- Reporting on other infrastructure components used by a website, such as a CDN.
- Providing information about the web server running the website.
- Providing details of analytics and tracking tools deployed by a website.
Shodan is a security monitoring solution that makes it possible to search the deep web and IoT networks. It makes it possible to discover any type of device connected to a network, including servers, smart electronics devices, and webcams.
Main features include:
- Easy to use search engine interface.
- Provides information on devices operating on protocols like HTTP, SSH, FTP, SNMP, Telnet, RTSP, and IMAP.
- Results can be filtered and ordered by protocol, network ports, region, and operating system.
- Access to a huge range of connected devices, including home appliances and public utilities such as traffic lights and water control systems.
HaveIbeenPwned is a service that can be used directly by consumers who were impacted by data breaches. It was developed by security researcher Troy Hunt.
Main features include:
- Identifying if an individual email address was compromised in any historical breach.
- Checking accounts on popular services like LastFM, Kickstarter, WordPress.com, and LinkedIn for exposure to past data breaches.
Google dorking is not exactly a tool – it is a technique commonly used by security professionals and hackers to identify exposed private data or security vulnerabilities via the Google search engine.
Google has the world’s largest database of Internet content, and it provides a range of advanced search operators. Using these search operators it is possible to identify content that can be useful to attackers.
Here are operators commonly used to perform Google Dorking:
- Filetype – enables finding exposed files with a file type that can be exploited
- Ext – similarly, finds exposed files with specific extensions that can be useful in attack (for example .log)
- Intitle/inurl – looks for sensitive information in a document title or URL. For example, any URL containing the term “admin” could be useful to an attacker.
- Quotes – the quote operator enables searching for a specific string. Attackers can search for a variety of strings that indicate common server issues or other vulnerabilities.
Open Source Investigation Best Practices
Here are best practices that can help you use OSINT more effectively for cyber defense.
Distinguish Between Data and Intelligence
Open source data (OSD) is raw, unfiltered information available from public sources. This is the input of OSINT, but in itself, it is not useful. Open source intelligence (OSINT) is a structured, packaged form of OSD which can be used for security activity.
To successfully practice OSINT, you should not focus on collecting as much data as possible. Focus on identifying the data needed for a specific investigation, and refine your search to retrieve only the relevant information. This will let you derive useful insights at lower cost and with less effort.
Consider Compliance Requirements
Most organizations are covered by the General Data Protection Regulation (GDPR) or other privacy regulations. OSINT very commonly collects personal data, which can be defined as personally identifiable information (PII). Collecting, storing, and processing this data can create a compliance risk for your organization.
In addition, if you discover criminal intent in an OSINT investigation, there may be specific legal requirements for exposing this data. For example, in the UK, exposing information that can tip off an individual under investigation for money laundering can lead to unlimited fines and prison time.
OSINT relies on publicly accessible data, but the use of this data can impact people, both in your organization and outside it. When you collect data, do not only consider your investigative needs, but also the ethical and regulatory impact of the data. Limit data collection to a minimum that can help you meet your goals without violating the rights of employees or others.
Letting technology collect data or scan systems “on autopilot” will often result in unethical or illegal data collection. A key part of ethical OSINT is to ensure data collection is controlled by humans, with effective collaboration between all stakeholders. Everyone involved in the OSINT project should understand ethical and legal constraints, and should work together to avoid privacy issues and other ethical concerns.
Imperva Application Protection Powered by Threat Intelligence
Imperva provides comprehensive protection for applications, APIs, and microservices, which builds on multiple threat intelligence sources including OSINT:
Web Application Firewall – Prevent attacks with world-class analysis of web traffic to your applications.
Runtime Application Self-Protection (RASP) – Real-time attack detection and prevention from your application runtime environment goes wherever your applications go. Stop external attacks and injections and reduce your vulnerability backlog.
API Security – Automated API protection ensures your API endpoints are protected as they are published, shielding your applications from exploitation.
Advanced Bot Protection – Prevent business logic attacks from all access points – websites, mobile apps and APIs. Gain seamless visibility and control over bot traffic to stop online fraud through account takeover or competitive price scraping.
DDoS Protection – Block attack traffic at the edge to ensure business continuity with guaranteed uptime and no performance impact. Secure your on premises or cloud-based assets – whether you’re hosted in AWS, Microsoft Azure, or Google Public Cloud.
Attack Analytics – Ensures complete visibility with machine learning and domain expertise across the application security stack to reveal patterns in the noise and detect application attacks, enabling you to isolate and prevent attack campaigns. | <urn:uuid:77f0e678-d3f6-44e9-aefc-dce48978c612> | CC-MAIN-2022-40 | https://www.imperva.com/learn/application-security/open-source-intelligence-osint/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00795.warc.gz | en | 0.902563 | 3,077 | 3.421875 | 3 |
What Is a Whaling Attack?
A whaling attack is a type of phishing attack where a particularly important person in the organization is targeted. It hinges on the cyber criminal pretending to be a senior member of the organization to gain the trust of the intended target. Once trust is gained, the attacker can prod the target for information that helps them access sensitive areas of the network, passwords, or other user account information.
A whaling attack can happen quickly, but it is often executed over the course of weeks or months. When a senior user interacts with the attacker, the attacker’s goal is to establish the target’s genuine trust. Taking the attack to the next stage too quickly may result in the target getting suspicious. However, if the attacker slowly proves that they are who they claim to be, the target may have no problem handing over sensitive information.
How Whaling Attacks Work
A whaling attack may begin with a communication through a method commonly used by both the person being impersonated and the target. This may be email or office texting that uses the internet. When the attack begins, there may be no reason for the target to question the identity of the attacker, as the latter may have the same username as the target's associate. In some cases, the email address may be faked, but it appears real enough to be believable.
The attacker may first seek to infiltrate the email account of the person they are using to get to the whale. Once inside, they can initiate an email that helps build trust. This may need to include a detail about the whale’s life that the associate being impersonated would know. This kind of information can be easily gleaned off social media.
For example, the attacker may notice that the victim recently got a new puppy and posted about it on social media. They could then scroll down to the previous year’s Christmas party and see that there was a huge cake. They could use the combination of both pieces of information to compose a seemingly innocent and appropriately knowledgeable email: “Hey, that cute little puppy’s been getting big, huh? Had he been there last Christmas, I bet he could have devoured that whole cake!!! Lol!!!” Because of the detailed nature of the email, the whale may not suspect the attacker is falsifying their identity.
Once trust has been gained, the attacker could try to get secret information from the whale. For instance, they could say, “Ay, I'm on the road, and I don't have my login for the VPN. Could you shoot it to me real quick?” They could also try to gain access to proprietary information by making a request like, “Listen, I put those blueprints on my laptop, but I am using my phone right now. You mind sending those over real quick? I gotta meet this deadline.” Because the whale believes the messages are legitimate, they may send over the information.
Whaling vs. Phishing vs. Spear Phishing
Phishing involves tricking someone into revealing sensitive information through an electronic communication. For example, the target may get an email from what appears to be a trusted source. The email may claim the target has to take quick action to rectify a problem. To do this, they must click a link in the email. This link brings them to a fake site that appears to be legitimate. It may have logos or fonts used by the real site it is trying to impersonate. The victim, while on the site, is prompted to enter their login credentials. What they enter goes straight to the attacker, who can then go to the real site and use the victim’s credentials to access their account.
This can be done with a bank or other financial account. The attacker may then transfer money to their own account or that of an accomplice.
Spear phishing is much like phishing, but it focuses on a particular victim. A phishing attack may use a list of email addresses, sending out the same communication—or similar ones—to everyone on the list. The attacker may also use details that pertain to the identity of the target to make the communication seem more legitimate.
For example, if the attacker were to see the person use an ATM at a certain location, they could include that activity in the email. They could say something like, “We noticed your card information may have been copied by a card-skimming device when you used the Chestnut Hill ATM on Grove St. yesterday at 12:07 p.m. Please click here to log in to your account and change your password.”
When the victim logs in, they enter their existing login credentials, which are collected by the attacker. When they change their password, nothing actually happens. The attacker could even try to change their password for real by using their correct login information.
Whaling is like spear phishing in that it involves a targeted attack. However, it is different because the attacker impersonates an associate of the victim to gain the victim’s trust. The act of impersonating someone the victim knows differentiates it from spear phishing and phishing.
Whaling Attack Examples and Statistics
The technology company Seagate, in 2016, was tricked into releasing the W2 forms of 10,000 employees. The whaling attack involved an email that requested copies of the employees’ 2016 W-2 forms, as well as other sensitive information such as their Social Security numbers, names, home addresses, and income. When HR complied, the information was sent straight into cyber criminals' hands.
Austrian aerospace parts manufacturer FACC was targeted in 2016 as well. The finance department sent $47 million to cyber criminals. This resulted in the CEO and CFO both getting fired.
The social media company Snapchat handed over payroll information of a selection of its employees back in 2016. Someone on the payroll team got an email from an attacker who pretended to be the CEO of Snapchat, Evan Spiegel. “Evan” requested payroll information, and the victim fell for the trick.
Protect Yourself from Whaling Attacks
The first step in protecting you and your organization from whaling attacks is to educate all potential targets, as well as those that may be used to try to gain access to them. Because this could include a large proportion of your company, it may be best to include a "how to avoid whaling attacks" discussion during a training on other types of phishing threats.
Avoiding whaling attacks begins with a shift in mindset. When you read an email from someone, you should ask yourself if you were expecting to receive a communication from that specific person. Also think about whether there is anything strange about the email, including not just what is being said but how it is being expressed, the use of punctuation, emojis, or anything else that seems out of the ordinary.
In some cases, it is very obvious that you are being targeted. For example, if the email address is plausible but not the typical email the person uses, that is a telltale sign. For example, if the person usually uses the email account firstname.lastname@example.org, but you get an email from email@example.com, you should beware. If there is no reason why John would have to get another email address, this one could be fake. Further, if the email has a name that makes sense but comes from outside the organization, that could also be a sign of danger.
In addition, executives need to be careful about what they post on social media. Details about their lives can be used to execute whaling attacks. If a high-level member of the organization gets an email that mentions things they posted on social media, it may be an attempt to gain their trust in preparation for an inquiry for information.
How Fortinet Can Help
Fortinet has developed FortiPhish, a service designed to increase awareness of whaling attacks and other kinds of phishing. It is available through the cloud and the Fortinet NSE Training Institute. The service involves continuous testing and simulations. The phishing techniques are based on information gleaned from FortiGuard Labs' knowledge of the most up-to-date phishing tactics being used by threat actors.
What is whaling in cybersecurity?
A whaling attack is a type of phishing attack where a particularly important person in the organization is targeted. It hinges on the cyber criminal pretending to be a senior member of the organization to gain the trust of the intended target.
What is whaling vs. phishing?
Phishing involves trying to trick someone into revealing sensitive information through an electronic communication. Whaling is different because the attacker impersonates an associate of the victim to gain the victim’s trust.
How do you recognize a whaling attack?
Signs of a whaling attack include unexpected communications from people in your organization, particularly if they come from a different email address or one from outside your organization. Also, any requests for sensitive information over email should be viewed with suspicion. | <urn:uuid:39831a9a-f8d1-40fa-814e-3eb2240dd450> | CC-MAIN-2022-40 | https://www.fortinet.com/kr/resources/cyberglossary/whaling-attack | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00195.warc.gz | en | 0.959445 | 1,861 | 3.21875 | 3 |
Artificial Intelligence (AI) and the increased access to powerful processors and cloud computing have made possible significant advances in Natural Language Processing (NLP), the ability of a software program to understand human language as it is spoken.
Today, NLP is actively revolutionising communication between humans and machines. In fact, humans do not realise how much we communicate with machines every single day. When conversing with Siri and Alexa, the average person is well aware that they are interacting with a computer, but what about other common uses of NLP we do not even think twice about?
When navigating Google or sending an iMessage, NLP is employed to predict the next words in the sentence, a function known fittingly as ‘predictive typing.’ Another extremely common use of NLP, one that we encounter constantly, is Microsoft Word’s spell check. Grammar and spell checks may seem mundane due to years of widespread use, but these early forms of NLP have been slowly preparing humans to become more and more acclimated to communicating with computers as the years pass.
Now, thanks to more complex and perhaps obvious examples of NLP, i.e. daily interactions with the aforementioned Siri, Alexa and Cortana, humans are increasingly comfortable interacting with machines, and the future for NLP is literally limitless – MarketsandMarkets research projects that the global NLP market will increase by 15 per cent by 2021.
Through advanced computational techniques, NLP accesses raw text to extract meaningful data, thus expediting tasks such as ensuring data security through tokenisation and parsing, part-of-speech tagging, identifying semantic relationships and detecting languages. In regard to part-of-speech tagging, the same way a child still perfecting language may identify a semantic relationship between the words “large” and “big” and label them as adjectives, computers can draw similar conclusions through the use of NLP. Likewise, when employed for language detection, AI can mimic the exact manner in which a seasoned translator quickly detects a language and assesses word meanings, only faster. Why is this capability useful? Just look at NLP’s time-saving applications in personal assistance, gauging sentiment, automating language translation and summarising text.
The adoption of NLP is expected to pick up momentum in the coming years with the adoption of more personal assistants, increased smartphone functionalities and the evolution of Big Data to automate even more routine human tasks. The demand for NLP is driven by several key factors, including the ever-growing data generation in business organisations across the globe, the rising demand for a superior customer experience, and the increased use of smart devices across enterprises.
Today, NLP is most commonly used in machine translation and cognitive search. In recent years, conversational search, such as when one asks Siri to look up the weather in New York or requests that Alexa shuffles songs by U2, is responsible for the dramatic surge in human-machine interaction.
Chatbots are a prime example of how NLP is being used today. Through the use of NLP, chatbots are able to manage complex interactions and streamline business processes. In the past, chatbots were merely used for customer interaction; however, today a large number of startups are now applying the technology to other business areas, such as Human Resources (HR), including apps like Talla and Growboat.
NLP is also incredibly useful for organisations to analyse public sentiment about their companies. Sentiment analysis is the process of identifying human emotions derived from text within a social media post. With NLP, companies are now able to access public sentiment about their brand faster and more effectively than ever before, and can quickly determine whether they are receiving positive feedback or undesirable criticism and address their customers’ concerns in a timely manner.
NLP is also finding uses in a diverse spectrum of major industries. Banks and several other financial servicing organisations already deploy virtual assistants to resolve basic customer inquiries like how to open an account or recommendations for choosing between types of accounts. Major automotive manufacturers have integrated voice recognition, Natural Language Understand (NLU) and text-to-speech solutions in their driver assistance systems to access apps and services through voice commands. Similarly, the healthcare industry is integrating NLP into its physician documentation, allowing for the quick documentation of a patient’s story in real time.
NLP in the Near Future
As technology progresses, NLP will continue advancing in combination with machine learning. For instance, newer, smarter chatbots are beginning to use deep learning to analyse human response inputs and generate a response accordingly. What does this mean? Well, for music lovers it means that music systems will learn to play songs based on an individual’s most recent playlist. NLP is also expected to offer more comfort to humans when combined with IoT applications, such as smart homes, which will allow humans to use their voice to control smart appliances.
The true key to unlocking the future of NLP is Natural Language Generation (NLG), which will allow machines to mine massive quantities of numerical data, identify patterns and share that information in a manner comprehensible to humans. The implementation of NLG and related AI tools will allow businesses to manage and use such large volumes of data effectively.
Until recently, media companies struggled tremendously with generating content their audiences would find relevant, interesting or informative. But, with the employment of NLG, many media organisations are beginning to transform information about various events into comprehensible stories. This process can save writers’ times and allows them to focus on other important tasks.
Beyond the media industry, NLG is anticipated to be used in banking, financial services, insurance, retail, government and healthcare for applications such as fraud detection, predictive maintenance, risk and compliance management and customer experience management. In fact, it is predicted that NLG will become an integral aspect of the Business Intelligence (BI) ecosystem in as little as two to three years.
Recent advances in AI and NLP are actively revolutionising communication between humans and machines, and even more routine human tasks are expected to be automated in the coming years. Driven by the exponential growth in data, the increased focus on exceptional customer experiences, and the proliferation of smart devices, NLP has become a reality of daily life and has proven to be one of the most significant technological advancement across all industries in recent decades.
Sapan Shah, team lead for research, information and communication technology, MarketsandMarkets (opens in new tab)
Image source: Shutterstock/polkadot_photo | <urn:uuid:cd1265d8-3525-4940-9ee5-d27f7324dabf> | CC-MAIN-2022-40 | https://www.itproportal.com/features/natural-language-processing-today-and-in-the-near-future/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00195.warc.gz | en | 0.946808 | 1,336 | 3.328125 | 3 |
Computer Vision Application Examples Across Different Industries
The concept of computer vision was first introduced in the 1970s. The original ideas were exciting but the technology to bring them to life was just not there. Only in recent years did the world witness a significant leap in technology that has put computer vision on the priority list of many industries.
Since 2012, when the first significant breakthroughs in computer vision were made at the University of Toronto, the technology has been improving exponentially. Convolutional neural networks (CNNs) in particular have become the neural network of choice for many data scientists as it requires very little pre-programming compared to other image processing algorithms. In the last few years, CNNs have been successfully applied to identify faces, objects, and traffic signs as well as powering vision in robots and self-driving cars.
Greater access to images also contributed to the growing popularity of computer vision applications. Websites such as ImageNet make it possible to have almost instant access to images that can be used to train algorithms. And this is only the beginning. The worldwide library of images and videos is growing every day. According to an analysis from Morgan Stanley, 3 million images are shared online every day through Snapchat, Facebook, Facebook Messenger, Instagram, and WhatsApp, and most of them are owned by Facebook.
Computer vision artificial intelligence (AI) market revenues worldwide, from 2015 to 2019, by application
The Future of Computer Vision. New Applications to Come
Computer vision is a booming industry that is being applied to many of our everyday products. E-commerce companies, like Asos, are adding visual search features to their websites to make the shopping experience smoother and more personalized.
Here are some computer vision examples.
Apple unveiled Face ID in 2017. In 2018, they announced its better version powered by neural networks. The third generation of Face ID came to the scene in 2019. Based on a powerful face recognition sensor, it’s become 30 % faster. Today, Face ID is used by millions of people for unlocking phones, making payments and accessing personal data. Moreover, Apple made it possible for users to better detect their faces in masks. The latest iPhone update iOS 13.5 streamlined the whole process. Now, users have a better chance of unlocking their phones with Face ID, or, if it fails, they’re asked to enter their PIN code.
And more money is being invested in new ventures every year. AngelList, a U.S. based platform that connects startups and investors, lists 529 companies under the label of the technology. The average evaluation of these companies is at $5.2 M each. Many of these are in the process of raising between $5M and $10M in different stages of funding. It’s safe to say there is a lot of money being poured into technology development.
So, why are applications of computer vision gaining such popularity? Because of the potential benefits that can be reaped from replacing a human with a computer in certain areas of our lives.
As human beings, we use our eyes and brains to analyze our visual surroundings. This feels natural to us and we do it pretty well. A computer, on the other hand, cannot do that automatically. It needs computer vision algorithms and applications in order to learn what it’s “seeing”. It takes a lot of effort but once a computer learns how to do that, it can do it better than almost any human on earth.
This can make processes faster and simpler by replacing any visual activity. Unlike humans, who can get overwhelmed or biased, a computer can see many things at once, in high detail, and analyze without getting “tired”. The accuracy of computer analysis can bring tremendous time savings and quality improvements, and thereby free up resources that require human interaction. So far, this can only be applied to simple processes only but many industries are successfully pushing the limits of what the technology can do.
Computer Vision Applications in Different Industries
Application of computer vision technology is very versatile and can be adapted to many industries in very different ways. Some use cases happen behind the scenes, while others are more visible. Most likely, you have already used products or services enhanced by the innovation.
Some of the most famous applications of computer vision have been done by Tesla with their Autopilot function. The automaker launched its driver-assistance system back in 2014 with only a few features, such as lane centering and self-parking, but it’s set to accomplish fully self-driving cars sometime in 2018.
Features like Tesla’s Autopilot are possible thanks to startups such as Mighty AI. It offers a platform to generate accurate and diverse annotations on the datasets to train, validate, and test algorithms related to autonomous vehicles.
Computer vision coupled with sensors can work wonders for critical equipment. Today, the technology is being used to check on important plants or equipment in there. Infrastructure faults and problems can be prevented with the help of computer vision that is wise enough to estimate its health and efficiency. Many companies are syncing predictive maintenance with their infrastructure to keep their tools in good shape. For example, ZDT software made by FANUC is a preventive maintenance software designed to collect images from camera attached to robots. Then this data gets processed to provide trouble diagnosis and detect any potential problems.
The innovation has made a splash in the retail industry as well.
Walmart is using computer vision to track checkout theft and prevent shrink in 1,000 stores across the country. They’ve rolled out a Missed Scan Detection program that uses cameras to detect scan errors and failures in no time. Once an error is detected, the technology informs checkout managers so they can address it. This initiative helps reduce ‘shrinkage’ that combines theft, scan errors and fraud. For now, the program has proved effective in digitizing checkout surveillance and preventing losses.
A startup called Mashgin is working on a solution similar to Amazon Go. The company is working on a self-checkout kiosk that uses computer vision, 3D reconstruction, and deep learning to scan several items at the same time without the need of barcodes. The product claims to reduce check out time by up to 10x. Their main customers are cafeterias and dining halls operated by Compass Group.
Although the technology has not yet proved to be disruptive in the world of insurance and banking, a few big players have implemented it in the onboarding of new customers.
The Bank of America is no stranger to AI. They’re big fans of data analytics and are using it for effective fraud management. Slowly but surely they’re adopting computer vision. They’re applying it to resolve billing disputes. Analyzing dispute data, the technology is quick to deliver a verdict and save the employees’ time. Caixabank is also welcoming computer vision. In 2019, they allowed their clients to withdraw money via ATMs using face recognition. The ATM can recognize 16,000 facial points on an image to verify the identity of a person.
In healthcare, computer vision has the potential to bring in some real value. While computers won’t completely replace healthcare personnel, there is a good possibility to complement routine diagnostics that require a lot of time and expertise of human physicians but don’t contribute significantly to the final diagnosis. This way computers serve as a helping tool for the healthcare personnel.
For example, Gauss Surgical is producing a real-time blood monitor that solves the problem of inaccurate blood loss measurement during injuries and surgeries. The monitor comes with a simple app that uses an algorithm that analyses pictures of surgical sponges to accurately predict how much blood was lost during a surgery. This technology can save around $10 billion in unnecessary blood transfusions every year.
One of the main challenges the healthcare system is experiencing is the amount of data that is being produced by patients. It’s estimated that healthcare related data is tripled every year. Today, we as patients rely on the knowledge bank of medical personnel to analyze all that data and produce a correct diagnosis. This can be difficult at times.
Microsoft’s project InnerEye is working on solving parts of that problem by developing a tool that uses AI to analyze three-dimensional radiological images. The technology potentially can make the process 40 times quicker and suggest the most effective treatments.
Agriculture has always been deeply steeped in tradition. Computer vision is here to change that. What exactly can the technology bring to the table? It can offer a helping hand in mapping, analyzing soil, counting livestock, evaluating crop yield and its ripeness and more. RSIP vision developed plenty of agriculture solutions. Using deep learning, sensory and satellite imagery they can estimate seasonal yield before harvesting. They made it possible for farmers to make yield estimation using their smartphones or tablets. One Soil Platform streamlines farming. They develop solutions that help collect field data and monitor plants. More importantly, the technology can help perform routine and time-consuming tasks like planting, harvesting and evaluating plant health and development. All rolled into one, it does help farmers streamline their work.
The innovation enables security of public places like parking lots, the subway, railways and bus stations, roads and highways, etc. The application of computer vision for security purposes is diverse. It’s face recognition, crowd detection, human abnormal behavior detection, illegal parking detection, speeding vehicle detection and more. The technology helps strengthen security and prevent accidents of various kinds. Racetrack unveiled surveillance solutions that detect abnormal activities and inform managers to intervene.
Challenges of Applied Computer Vision
As illustrated above, the technology has come a long way in terms of what it can do for different industries. However, this field is still relatively young and prone to challenges.
Not Accurate Enough for the Real World
One major aspect that seems to be the background for most of the challenges is the fact that the technology is still not comparable to the human visual system, which is what it essentially tries to imitate.
Computer vision algorithms can be quite brittle. A computer can only perform tasks it was trained to execute, and falls short when introduced to new tasks that require a different set of data. For example, teaching a computer what a concept is hard but it is necessary in order for it to learn by itself.
A good example is the concept of a book. As kids, we know what a book is, and after a while can distinguish between a book, a magazine or a comic while understanding that they belong to the same overall category of items.
For a computer, that learning is much more difficult. The problem is escalated further when we add ebooks and audiobooks to the equation. As humans, we understand that all those items fall under the same concept of a book, while for a computer the parameters of a book and an audiobook are too different to be put into the same groups of items.
In order to overcome such obstacles and function optimally, computer vision algorithms today require human involvement. Data scientists need to choose the right architecture for the input data type so that the network can automatically learn features. An architecture that is not optimal might produce results that have no value for the project. In some cases, an output of an algorithm can be enhanced with other types of data, such as audio and text, in order to produce highly accurate results.
In other words, the technology still lacks the high level of accuracy that is required to function efficiently in the real, diverse world. As the development of this technology is still in progress, much tolerance for mistakes is required from the data science teams working on it.
Lack of High-Quality Data
Neural networks used for computer vision applications are easier to train than ever before but that requires a lot of high-quality data. This means that the algorithms need a lot of data that is specifically related to the project in order to produce good results. Despite the fact that images are available online in bigger quantities than ever, the solution to many real-world problems calls for high-quality labeled training data. That can get rather expensive because the labeling has to be done by a human being.
Let’s take the example of Microsoft’s project InnerEye. The tool utilizes computer vision to analyze radiological images. The algorithm behind this most likely requires well-annotated images where different physical anomalies of the human body are clearly labeled. Such work needs to be done by a radiologist with experience and a trained eye.
According to Glassdoor, an average base salary for a radiologist is $290.000 a year, or just short of $200 an hour. Given that around 4-5 images can be analyzed per hours, and an adequate data set could contain thousands of them, proper labeling of images can get very expensive.
In order to combat this issue, data scientists sometimes use pre-trained neural networks that were originally trained on millions of pictures as a base model. In the absence of good data, it’s an adequate way to get better results. However, the algorithms can learn about new objects only by “looking” at the real-world data.
Now that the technology has finally caught up the original ideas of computer vision pioneers from the 70s, we are seeing this technology being implemented in many different industries. Both big players, like Facebook, Tesla, and Microsoft, as well as small startups, are finding new ways how computer vision software can make banking, driving, and healthcare better.
The main benefit of the technology is the high accuracy with which it can replace human vision if trained correctly. There are a number of processes that today are done by people that can be replaced by artificial intelligence applications and eliminate mistakes due to tiredness, save time and cut costs significantly.
As great as computer vision algorithms are today, they still suffer from some big challenges. The first is lack of well-annotated images to train the algorithms to perform optimally, and the second being lack of accuracy when applied to real-world images different from the ones from the training dataset.
Work with InData Labs on Your Breakthrough Computer Vision App
Have a project in mind but need some help implementing it? Schedule an intro consultation with our deep learning engineers to explore your idea and find out if we can help. | <urn:uuid:8243225d-300b-49d3-a36a-816ed8a4839e> | CC-MAIN-2022-40 | https://indatalabs.com/blog/applications-computer-vision-across-industries | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00195.warc.gz | en | 0.948255 | 2,991 | 2.640625 | 3 |
Mercury said Thursday its SSDR will be integrated into the Earth Imaging Spectrometer that will be launched to the International Space Station in 2022 as part of the Earth Surface Mineral Dust Source Investigation.
The company signed an agreement with the NASA Jet Propulsion Laboratory to supply the data recorders in support of the EMIT mission, which seeks to map regions that largely contribute to the production of mineral dust.
The research effort's goal is to use measurements of mineral dust to improve forecasts on the cooling or warming of the atmosphere.
Chris Opoczynski, vice president and general manager of Mercury's data segment, said the company's SSDRs are designed to handle radiation impacts and perform in extended operations.
Mercury offers a suite of solid-state drives that works to store sensitive data even while experiencing extreme environments. | <urn:uuid:e840db69-dd8f-4e42-9800-20622691fb1d> | CC-MAIN-2022-40 | https://blog.executivebiz.com/2021/03/mercury-systems-to-supply-data-recorders-for-nasas-research-mission-on-mineral-dust/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00195.warc.gz | en | 0.920013 | 165 | 2.5625 | 3 |
February 24, 2021
Source: AI Trends Staff
Assuring that the huge volumes of data on which many AI applications rely is not biased and complies with restrictive data privacy regulations is a challenge that a new industry is positioning to address: synthetic data production.
Synthetic data is computer-generated data that can be used as a substitute for data from the real world. Synthetic data does not explicitly represent real individuals. “Think of this as a digital mirror of real-world data that is statistically reflective of that world,” stated Gary Grossman, senior VP of Technology Practice Edelman, public relations and marketing consultants, in a recent account in VentureBeat. “This enables training AI systems in a completely virtual realm.”
The more data an AI algorithm can train on, the more accurate and effective the results will be.
To help meet the demand for data, more than 50 software suppliers have developed data synthetic products, according to research last June by StartUs Insights, consultants based in Vienna, Austria.
One alternative for responding to privacy concerns is anonymization, the masking or elimination of personal data such as names and credit card numbers from eCommerce transactions, or removing identifying content from healthcare records. “But there is growing evidence that even if data has been anonymized from one source, it can be correlated with consumer datasets exposed from security breaches,” Grossman states. This can even be done by correlating data from public sources, not requiring a security hack.
A primary tool for building synthetic data is the same one used to create deepfake videos—generative adversarial networks (GANs), a pair of neural networks. One network generates the synthetic data and the second tries to detect if it is real. The AI learns over time, with the generator network improving the quality of the data until the discriminator cannot tell the difference between real and synthetic.
A goal for synthetic data is to correct for bias found in real-world data. “By more completely anonymizing data and correcting for inherent biases, as well as creating data that would otherwise be difficult to obtain, synthetic data could become the saving grace for many big data applications,” Grossman states.
Big tech companies including IBM, Amazon, and Microsoft are working on synthetic data generation. However, it is still early days and the developing market is being led by startups.
A few examples:
AiFi — Uses synthetically generated data to simulate retail stores and shopper behavior;
AI.Reverie — Generates synthetic data to train computer vision algorithms for activity recognition, object detection, and segmentation;
Anyverse — Simulates scenarios to create synthetic datasets using raw sensor data, image processing functions, and custom LiDAR settings for the automotive industry.
Synthetic Data Can Be Used to Improve Even High-Quality Datasets
Even if you have a high-quality dataset, acquiring synthetic data to round it out often makes sense, suggests Dawn Li, a data scientist at the Innovation Lab of Finastra, a company providing enterprise software to banks, writing in InfoQ
For example, if the task is to predict whether a piece of fruit is an apple or an orange, and the dataset has 4,000 samples for apples and 200 samples for oranges, “Then any machine learning algorithm is likely to be biased towards apples due to the class imbalance,” Li stated. If synthetic data can generate 3,800 more synthetic examples for oranges, the model will have no bias toward either fruit and thus can make a more accurate prediction.
For data you wish to share that contains personally identifiable information (PII), and for which the time it takes to anonymize makes that impractical, synthetic samples from the real dataset can preserve important characteristics of the real data and can be shared without the risk of invading privacy and leaking personal information.
Privacy issues are paramount in financial services. “Financial services are at the top of the list when it comes to concerns around data privacy. The data is sensitive and highly regulated,” Li states. As a result, the use of synthetic data has grown rapidly in financial services. While it is difficult to obtain more financial data, because of the time it takes to generate real world experience, synthetic data can be generated to allow the data to be used immediately.
A popular method for generating synthetic data, in addition to GANs, is the use of variational autoencoders, neural networks whose goal is to predict their input. Traditional supervised machine learning tasks have an input and an output. With autoencoders, the goal is to use the input to predict and try to reconstruct the input itself. The network has an encode and a decoder. The encoder compresses the input, creating a smaller version of it. The decoder takes the compressed input and tries to reconstruct the original input. In this way, scaling down the data in the encode and building it back up from the encode, the data scientist is learning how to represent the data. “If we can accurately rebuild the original input, then we can query the decoder to generate synthetic samples,” Li stated.
To validate the synthetic data, Li suggested using statistical similarity and machine learning efficacy. To assess similarity, view side-by-side histograms, scatterplots, and cumulative sums of each column to ensure we have a similar look. Next, look at correlations and plot a matrix of the real and synthetic data sets to get an idea of how similar or different the correlations are.
To assess machine learning efficacy, review a target variable or column. Create some evaluation metrics and assess how well the synthetic data performs. “If it performs well upon evaluation on real data, then we have a good synthetic dataset,” Li stated.
Best Practices for Working with Synthetic Data
Best practices for working with synthetic data were suggested in a recent account in AIMultiple written by Cem Dilmegani, founder of the company that seeks to “democratize” AI.
First, work with clean data. “If you don’t clean and prepare data before synthesis, you can have a garbage in, garbage out situation,” he stated. He recommended following principles of data cleaning, and data “harmonization,” in which the same attributes from different sources need to be mapped to the same columns.
Also, assess whether synthetic data is similar enough to real data for its application area. Its usefulness will depend on the technique used to generate it. The AI development team should analyze the use case and decide if the generated synthetic data is a good fit for the use case.
And, outsource support if necessary. The team should identify the organization’s synthetic data capabilities and outsource based on the capability gaps. The two steps of data preparation and data synthesis can be automated by software suppliers, he suggests. | <urn:uuid:8cf61f63-6852-4241-9072-317f90a7f381> | CC-MAIN-2022-40 | https://internetofbusiness.com/use-of-synthetic-data-in-early-stage-seen-as-an-answer-to-data-bias/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00395.warc.gz | en | 0.923281 | 1,414 | 3.28125 | 3 |
Can You Handle the Heat: Find Out How to Stay Safe While Vacationing During Intense Heatwaves
This has been one of the hottest summers on record, and we all know that unless we can curb our greenhouse gas emissions, heatwaves will only become more intense and more frequent.
However, most people do not want to put their summer travel plans on hold because of something as seemingly mundane as the weather. The key is knowing how to stay safe during this new “normal.”
If you are planning to travel during times of intense heat, it is important to be aware of the health risks and take precautions to avoid them.
Here are some of the most common health risks of spending time outdoors during a heatwave.
● Heat stroke: Heat stroke occurs when your body cannot regulate its internal temperature. It can also cause loss of consciousness, confusion, seizures, irritated skin, and rapid breathing. If left untreated, heat stroke can be fatal.
● Heat exhaustion: Similar but not the same as heat stroke, heat exhaustion is caused by excessive sweating, which leads to dehydration. Heat exhaustion can develop into heat stroke if not treated, so it is important to notice the early signs. Symptoms include headache, extreme thirst, excessive sweating, body cramps, pale skin, and more.
● Heat cramps: Although not as dangerous as the former two, heat cramps can be very uncomfortable. These cramps occur when you sweat so much that there is not enough salt in your body. If you feel heat cramps, you should stop any physical activity you’re doing, drink juice or a sports drink with electrolytes, and rest.
● Heat rash: You will notice heat rash pretty quickly once it happens. It is red and presents as 2mm to 4mm raised spots. It can occur anywhere on the body but is not contagious. Treatment includes cooling your skin and avoiding exposure to the heat source that caused the rash.
● Rhabdomyolysis: This is a life-threatening heat-induced illness that causes the death of muscle, which releases waste into the bloodstream. The kidneys usually remove waste from the blood and flush it out in urine. However, if there is too much waste, the kidneys will not work fast enough. This can lead to kidney failure and even death. While relatively rare, Rhabdomyolysis can be caused by intense exercise, severe dehydration, muscle trauma, certain prescription medications and/or illegal drugs, and long periods of inactivity.
How can you avoid heat-related illnesses?
You booked your travel plans last year and have no intention of canceling. Or you need to get away for a few days on a spontaneous trip that will refresh and revitalize you. Whatever your reason for traveling during the summer, there is no reason to cancel due to the heat!
However, to avoid heat-related illnesses like those listed above, it’s important to follow several guidelines:
● Wear loose fitting clothing
● Drink a lot of water and fluids with electrolytes
● Avoid drink alcohol or other diuretics
● Put on sunscreen and a hat when outdoors
● Do not exercise outside in the heat of the day
● Do not spend hours in the heat if you’ve never done it before (you need to build up your strength gradually)
● Do not use the oven or other heating electronics if you are indoors
Do not risk ruining your vacation by being cavalier about these guidelines. More importantly, do not risk your life! While most heat-related illnesses are not life threatening, why take the chance? In fact, why take the chance of even getting a little bit sick? Drinking enough water, minimizing time spent outdoors in intense heat, and common sense can go a long way. | <urn:uuid:8dc961b8-e6c1-4153-a6fa-76c825085eb6> | CC-MAIN-2022-40 | https://www.interforinternational.com/can-you-handle-the-heat/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00395.warc.gz | en | 0.952881 | 779 | 2.890625 | 3 |
GitHub is a powerful collaboration tool and version control platform with more than 56 million users. One of its most attractive features is GitHub Issues, a tracking tool that allows teams to collaborate on projects in real-time. Read on to find out how to effectively use GitHub Issues for common development problems.
What Are Issues in Software Development Anyway?
Just like in any other field of work, problems are bound to come up in software development. Typically, these challenges manifest as problems with the code’s functionality, such as bugs or defects.
However, not all software development problems are related to the final product’s functionality. Sometimes, developers may suggest a change to improve the code, ask questions about a repository, or request missing documentation. All these are called issues.
In short, an Issue in software development refers to a unit of work that needs to be performed to improve a system. Therefore, issue tracking, also commonly referred to as issue management, is a fundamental part of software development. In this case, issue tracking is the process of detecting, reporting, fixing, and documenting all issues in the final product.
How GitHub Issues Works
Given that issues are bound to come up in the development lifecycle, it is helpful to have a way to track them as they occur. Moreover, it is beneficial to have a central place for developer teams to collaborate on issues. Tools like Jira and YouTrack are handy for issue tracking.
However, GitHub comes with its own lightweight issue tracking system known as GitHub Issues. This way, you don’t need a separate tool or platform just for issues. This functionality is built right into your repositories and is available immediately when you create a new repository.
Here, anyone in a public repository can quickly create an issue. Each new issue comes with a discussion thread where team members can contribute. Typical scenarios where GitHub users create issues include to:
- Discuss specific details of a project, such as feedback, bug reports, or planned improvements.
- Propose a new feature or change.
- Define requirements for a new feature.
- Manage a specific workflow, such as getting a commitment from other team members or requesting access to a system.
- Organize work on a specific deliverable.
Additionally, GitHub offers several ways to create an issue, including from a note in a project, a repository, a specific line of code, an item in a task, or a comment in an issue or pull request.
GitHub even provides labels to categorize issues. The labels make it easier to track issues. For example, you can tag unintended behavior or unexpected problems with the Bug label. Equally, you can label all-new feature requests with the Enhancement label. This way, you can sort through issues at a glance or even filter them by type.
GitHub even lets you edit labels and assign each label a unique color to make sorting through your issues even more effortless.
Let’s take a look at some examples of GitHub Issues use cases.
Example #1 – Reporting Bugs With GitHub Issues
Say you have encountered a problem when testing software and want to bring it to the attention of the developers so they can fix it. GitHub Issues is the perfect place to submit a bug report. But first, you’ll need to make sure that the bug gets due attention and is fixed quickly.
To this end, you’ll need to write a good bug report. First, the report should include a clear and concise description of the bug, along with a unique bug number. Next, include the steps to reproduce the bug, the tests you performed, and where the defect occurred. In short, provide all the information to make it easy for the developer to understand the problem clearly.
With your information in hand, it is time to submit your bug report with GitHub issues:
- First, navigate to the main page of the repository you are working on.
- Then, click Issues.
- Click New Issues.
Then, fill in your information, including a descriptive title and the details of the issue. There is an option to preview your issue before submitting it. This way, you can look at your proposal from the developers’ perspective and make any formatting changes if required.
Finally, click Submit new issue. The developers will now be able to see the details of the bug in their Issue List.
There are a few more things that you can do to improve your reporting. For example, adding a label specifying that the issue is bug-related makes it easy for developers to categorize their workflow. Additionally, you can assign the issue to a specific team member to guarantee that someone will work on the issue.
You can edit the report even after submitting it. You’ll also get a unique URL for your issue that you can share with team members. You can also use this URL to reference other problems or pull requests.
Example #2 – Commenting on Issues in GitHub
Let’s follow the previous example. But, this time, you are the one to fix the bug. However, you don’t want to change the code base until you have the go-ahead from other crucial team members.
Or, maybe you want to offer your insights on another collaborator’s issue.
GitHub supports comments on issues to make sure everyone is on the same page. In this case, you or other team members can add comments to an issue. These comments may be suggestions on how to approach the fix or other input relating to the issue.
The crucial thing is to keep the comments relevant to the issue. To comment on an issue:
- Navigate to your repository’s main page.
- Click Issues under your repository name.
- Click the issue you want to comment on.
- Scroll to the bottom of the issue and write your comment.
- Click Comment to submit it.
Not every issue requires detailed or complex points. To this end, GitHub lets you use reactions to express your feelings more directly. These reactions are limited to feelings relevant to typical code discussions such as confusion, smile, +1, or heart reactions.
These reactions are also available for pull requests. Finally, GitHub’s @mention feature lets you tag specific people in your comments. The mentioned members will get an email notification, so remember to use this feature sparingly.
How to Get Started With Common GitHub Issues
If you’re ready to start using GitHub’s issue tracker, here’s how to go about it:
Step 1 – Stick With the GitHub Issues Default Settings
You’ll probably want to customize GitHub Issues right off the bat. For example, you might want to create issue forms or templates to structure reporting. But, first, it’s a good idea to get a feel of how the feature works in a collaborative setting before implementing a tighter rein on discussions.
To this end, stick with the default settings, at least for a while. This step may sound counter-intuitive. After all, the idea is to limit noise and have an actionable discussion.
But, the idea behind the collaboration is to create an environment where developers and stakeholders can share information and weigh in before code gets done. Providing a free-for-all environment encourages the free flow of ideas.
Also, this strategy helps you to understand exactly what’s relevant when it’s time to structure issue tracking. You’ll have a better sense of what discussion should occur on GitHub Issues and which can be deferred to another collaboration channel such as GitHub Discussions.
Step 2 – Create A Standard Guideline For Issue Reporting
Guidelines for issue reporting can help to bring some order to GitHub Issues. Of course, you don’t want to limit the flow of information just yet, but you still need a semblance of order. Creating a structure that everyone follows will help bring some sanity to issues management.
Consider documenting a structure for all users to follow when using GitHub Issues. Some of the things to think about include:
Titles: It is a no-brainer that every issue should have a title. But, encourage your team to keep the titles short and descriptive. After all, most contributors only see the title in the list view. Bad titles are more likely to be ignored or dismissed.
Description: Encourage reporters to keep their issue descriptions clear and concise. Bullet points, colons, and incomplete clauses are great for keeping the message brief. Also, encourage contributors to offer details of what they tried to resolve the issue, even if it did not work.
Similarly, keep the issue descriptions up-to-date with the most current information and status. This way, team members don’t need to read the entire thread history to understand what’s going on.
Directly Responsible Individual: Ask contributors to mention the Directly Responsible Individual (DRI) in their issue. The DRI is responsible for completing a clear deliverable. Also, make separate issues when there is more than one person required to work on the task. Additionally, mentioning members to provide specific feedback helps to make collaboration more productive.
Use Formatting: GitHub issues offer great formatting options to help make collaboration more productive. Use things like check-boxes, bold text, lists, images, links, and syntax highlighting where appropriate. Shrewd formatting can help to reduce endless back and forth and ensure that everyone is on the same page.
Step 3 – Structure Issue Reporting
Once you have a feel of using GitHub Issues, you may notice specific trends. For example, you may find reporters making announcements, sharing company news, or discussing open-ended questions.
While it’s encouraged to keep everyone in the loop, not every discussion belongs in GitHub Issues. So, you may want to limit this feature to actionable discussions, such as sharing feedback, filing a bug report, or asking specific questions about files in the repository. But, again, an issues template is invaluable for creating structure.
GitHub already offers templates for the most common issues to get you started. Or, you can build a template from scratch using GitHub’s template builder. I suggest starting with the standard templates that GitHub offers. You can even customize the templates to fit your team’s preferences.
Then, you can work up to building custom templates once you’ve identified areas for improvement. These templates are stored in your repository and are available any time a reporter creates a new issue. The template automatically pre-populates the issue form to guide your team on how to structure their issue.
Step 4 – Use Labels (Sparingly)
GitHub offers a variety of labels to make work more manageable. For example, you can quickly aggregate similar kinds of issues based on their labels.
You can start with the default labels and create your own as you streamline your workflow. Besides issues, you can also use these labels on discussions and pull requests. Some of the crucial labels to consider include:
- Bug – Indicates unintended behavior or an unexpected problem.
- Duplicate – Indicates identical issues, discussions, or pull requests.
- Documentation – Indicates required additions or improvements to the documentation.
- Help Wanted – Indicates a maintainer requires help on an issue.
- Question – Indicates an issue that needs more information or discussion.
- Invalid – Indicates an issue that is no longer relevant.
- Enhancements – Indicates a new feature request.
Again, the labels mentioned above are included when you create a repository. You have the option to edit the labels or delete them altogether. But I recommend sticking with the default until you are used to how they work. Of course, unless your organization has precise requirements.
You can apply labels by navigating to the issue and clicking the settings symbol in the right sidebar. Then, simply choose the tag you’d like to apply to the issue from the dropdown menu.
However, don’t get too carried away using labels. Instead, use them sparingly. Using multiple tags for issues makes it hard on the eyes. It is also challenging to prioritize issues to work on if each one is decorated with numerous labels.
For example, an end-user experience expert is only interested in relevant issues and may not care much about data access problems.
Step 5 – Use Checklists For Large Issues
Initially, your team may use GitHub Issues for simple workflows like requesting documentation or discussing code. But, the workflow becomes increasingly complex as the project unfolds. For example, you may need to migrate to a new build system or implement a new feature. GitHub Issues supports these complex issues and allows you to aggregate the entire context of the issue in one place.
Consider breaking the issue down into checklists rather than creating a network of issues for a significant issue like migrating to a new build system. This way, you can quickly check off boxes as your team makes progress on the issue.
Specifically, GitHub’s Markdown feature helps to track the progress of large issues. Here, each task that needs to be completed goes on a separate line. Each task also comes with a clickable checkbox. You can check or uncheck the box depending on whether or not the task has been completed.
Markdown comes with a lot of extra functionality to simplify collaboration. For example, the task lists’ progress appears in different places within GitHub, including your repository’s issues list. You can even reference another issue or convert a task into an issue.
You can create your task list items by prefacing the items with [ ] when adding your description. | <urn:uuid:602082ab-c931-4244-9204-a06b84b15882> | CC-MAIN-2022-40 | https://nira.com/common-github-issues/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00395.warc.gz | en | 0.914276 | 2,810 | 2.578125 | 3 |
More than 130 million adults are living with diabetes or prediabetes in the US as of January 2022. The latent danger this metabolic condition represents for so many people around the world is unprecedented, and people expect enhanced and innovative treatment methods.
New technologies are being developed to build diabetes devices that improve patients’ lives. For instance, continuous glucose monitoring, mobile apps that count food volume and carbohydrate levels, and insulin pumps. Easing pain and improving dosing precision are just some goals of future diabetes devices. Innovation in the area continues, and many more patients can soon access the newly available technology.
Some aspects of this process can be aided by technology like the following:
People with diabetes today witness technology for the first time that allows healthcare companies to operate their medical equipment via mobile apps, including remote insulin dosage. This feature has been hinted at for many years but has yet to be approved for use in diabetic devices by the Food and Drug Administration (FDA). Nevertheless, the advances continue, and the diabetes devices available in the market are:
MIT and Brigham and Women’s Hospital researchers are developing an app that recognizes and quantifies food composition, benefiting diabetics in carbohydrate counting.
The first gadget consists of a lancet, glucose test strips, and an insulin needle. Users would first take a photo of their meal with a smartphone app to assess the food amount and carbohydrate levels. They would then begin the automated process of taking blood, calculating glucose through the app, and providing the correct insulin dose.
The second device requires one needle poke, which includes the glucose sensor into the insulin needle and administers the necessary quantity of insulin. It has a waiting time of 5 to 10 seconds.
The rise of the Internet of Medical Things (IoMT) in the industry allows continuous, remote, and real-time patient monitoring. IoMT technology connects patients and physicians via medical devices, providing remote access to gather, process, and send medical data via a secure network. Additionally, these technologies help reduce needless hospital stays and related health expenses by enabling wireless monitoring of health indicators.
The innovation in the healthcare and life sciences sector has reached treatments for diabetes, including software features such as dosage assistance, information, and reminders to aid users in making better health decisions based on their device readings.
The IoT in healthcare is one of the fastest-growing sectors, predicted to reach $176 billion by 2026. And the new goal for 2023 is to expand diabetes device use and include more people with type 2 diabetes in their treatments.
Adopting diabetic technology and new treatments will help what the World Health Organization considers to be an epidemic expected to affect 700 million people by 2045. With the aid of technology and innovative treatments, like the all-in-one device, doctors can reduce diabetes complications and other problems. Finding new routes to improve patients’ lives and technology implemented to facilitate processes has now become imperative. By doing so, a greater number of individuals may be helped, and the application of IoMT in diabetic operations is ushered into a better new tech environment.
For more information about the IoMT and what it is, visit IoMT and Medical Device Cybersecurity.
ITJ is devoted to serving fast-growing and high-value market sectors, particularly the Internet of Medical Things (IoMT), working with innovative medical device companies looking to improve people’s lives. With a unique BOT (build, operate, and transfer) model that sources only the best digital talent available, ITJ enables companies in the US to create technology centers of excellence in Mexico. For more information, visit www.itj.com. | <urn:uuid:aa3a376c-f388-4329-a511-1f9b20234591> | CC-MAIN-2022-40 | https://itjuana.com/the-future-of-diabetes-treatment/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00395.warc.gz | en | 0.935937 | 739 | 2.96875 | 3 |
What is Identity Trust?
Identity trust can be broken down into two parts. The first part involves the ability to establish trust for invidual and business identities in the digital world for scenarios such as online account creation, logins, digital or electronic signatures, payments, PKI certificates, etc. For example, as a Certificate Authority (CA), DigiCert validates customer and business identities in order to issue digtal certificates. Identity trust and verification also allows for compliance with regulations, fraud prevention and helps provide a smooth experience for online users.
The second part of identity trust is the ability for online users to understand who they're interacting with on the internet and ensure that their interactions and transactions are secure or safe. For example, how do you know that a user or device is who they say they are? Well, Public Key Infrastructure (PKI) enables identity validation, which is an important component of maintaining online security and trust. By certifying that a user is who they say they are or a device is authentically issued to your organization, you can thereby trust your communications with those users or devices.
Read our PKI ebook here to learn more. | <urn:uuid:cec60399-c7bc-4417-ab13-f70a135c9ae9> | CC-MAIN-2022-40 | https://www.digicert.com/support/resources/faq/identity-and-access-trust/what-is-identity-trust | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00395.warc.gz | en | 0.936963 | 236 | 3 | 3 |
Vulnerabilities within an IT environment pose a big security risk and are a threat to digital data within an organization. These vulnerabilities can be exploited by others, or a lack of necessary precautions can result in damaged or lost organizational data. Therefore, it is essential to have a vulnerability management process in place for these reasons.
What is a vulnerability management process?
Vulnerability management is about asserting a level of control over the vulnerabilities that may exist in your IT environment. Thus, a vulnerability management process seeks to obtain that control through following an established set of procedures. It is a continual process in which vulnerabilities are identified and assessed, and then action is taken to limit their risk.
Key steps of a vulnerability management process
Four key steps exist in every successful vulnerability management process which are:
1) Locate and Identify
The first step to minimizing vulnerabilities is to identify where the vulnerabilities are located within your data system and what kind of vulnerabilities they are. There are multiple kinds of vulnerabilities, so there is not just one way to minimize their risk. For example, identifying how many and what kind of vulnerabilities are in your IT environment is key to making a plan to manage them.
Assess the vulnerabilities you identify to determine how much of a risk they each pose to the rest of your IT environment. After evaluating the risks of the vulnerabilities, you can then categorize and prioritize them based on impact. Documenting this information doesn’t have to be complicated. Today, IT documentation software is widely available and able to automate much of the documentation process. To make this data actionable, compile it into a single vulnerability management report. This provides you with a structured plan as to how to approach managing vulnerabilities and in what order they should be mitigated or remediated.
3) Monitor and Remediate
After evaluating the vulnerabilities you’ve found within your environment, you should proactively monitor the system for new vulnerabilities that occur. Once a new vulnerability is discovered, action should be taken. This can vary from correcting an issue with the vulnerability, completely removing the vulnerability or continual monitoring of the vulnerability. This step is continual in the vulnerability management process as new vulnerabilities are detected.
With an RMM, technicians can easily monitor for vulnerabilities and automatically remediate the problem, whether that’s restarting the device remotely, deleting and reinstalling the patch, and much more.
The final step in a vulnerability management process is to confirm whether the detected vulnerabilities have been appropriately dealt with. Verifying that each detected vulnerability has been mitigated means that the vulnerability management process was successful. Proper documentation of these successes will ultimately help your IT teams run more efficiently and securely by scaling known solutions across growing IT environments.
Examples of vulnerabilities
Various vulnerabilities can exist within an organization’s IT environment. These encompass weaknesses in your organization’s data system that can be susceptible to various attacks or undesirable consequences. The different kinds of vulnerabilities include:
Some of the most basic types of vulnerabilities are physical vulnerabilities. Physical security attacks include anything from break-ins and thefts to extreme weather and the destruction that comes with it. On-site vulnerabilities such as power and climate control can also interfere with business uptime, place digital data at risk of being lost. They can result in damage to the data system.
The people employed in your organization also pose another risk to your IT security. Because humans are responsible for the manual operation of your business’s data system, a variety of risks can be exposed simply due to human error. For example, incomplete documentation or training, carelessness, or simply forgetting how to carry out company procedures correctly may result in a less secure IT environment.
Personnel-based vulnerabilities also include the risk of having crucial organizational data on employees’ devices. If necessary, data is only stored on one external device, and you certainly risk losing it. Even more severe is if someone has personal access to critical data and does not take proper measures to secure it, a hacker might easily break into the data system. - talk about the data that’s in people’s hands (implicit information - not written down anywhere, just in someone’s head, can solve with IT documentation)
Configuration vulnerabilities are risks to your organization’s computer system due to misconfigurations. Misconfigurations can be incorrect or substandard default settings or technical issues that leave the system insecure. These vulnerabilities should be minimized quickly and efficiently to prevent attackers from exploiting these weaknesses.
Computer programs continually need updates or fixes to improve and make them more secure. These program vulnerabilities are managed through the use of patching software. This software can deploy patches to endpoints and ensure that the process is complete.
Vulnerability management tool benefits
Vulnerability management tools provide a means for you to carry out your vulnerability management process effectively. These tools also help to reduce organizational risk and reduce costs associated with known vulnerabilities. A few significant benefits of vulnerability management tools
With the help of vulnerability management tools, you can get an overall view of all the vulnerabilities that exist in your environment. These tools also help you assess the risk of the vulnerabilities, enabling you to take appropriate actions to safeguard your data system and secure crucial digital data. Having a full view of the perceived risks puts you in a much better position to manage them.
Automation is another common benefit of vulnerability management tools that can save you both time and effort in maintaining the security of your data. Schedule regular and consistent vulnerability scans so you can proactively reduce risks. Setting up automatic scanning and detection helps you recognize possible vulnerabilities before it’s too late to take action.
Automation can also be a benefit when it comes to the remediation of vulnerabilities. Automated remediation removes the manual work of resolving vulnerability tickets and provides you with peace of mind knowing that vulnerabilities are actively being reduced.
Vulnerability management tools can easily create reports that can give you a synopsis of the compiled data. These overviews give you a good sense of the security of your data system and can help you quickly identify areas that need improvement. Using consistent reports also gives you visibility into the security of your organization’s IT environment over time.
Alternatives to vulnerability management tools
While there’s no replacing specialized security software, many vulnerabilities can be managed using unified tools that combine endpoint management, cloud backups, documentation, and ticketing. With this combination, documentation, monitoring, and remediation can be automated for some of the most common IT vulnerabilities like zero-day patches, corrupted backups, or stolen credentials.
NinjaOne offers patching software to enable successful patching and management of your system’s vulnerabilities. Check out NinjaOne’s Patch Management Best Practices Guide, and sign up for a free trial of Ninja Patch Management.
Vulnerability management tools give you greater control
IT vulnerabilities are an unfortunate reality of working with digital data, but these vulnerabilities can be mitigated with the right plans and tools in place. Following the steps of a vulnerability management process will give you greater control over any risks in your data system. Vulnerability management tools can also help to lower organizational risk, thereby significantly lowering costs associated with remediating risk issues. | <urn:uuid:9347ce7a-1431-4008-ac74-3372040b22ad> | CC-MAIN-2022-40 | https://www.ninjaone.com/blog/vulnerability-management-process-steps/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00395.warc.gz | en | 0.92792 | 1,494 | 2.515625 | 3 |
When major disasters strike, there are many individuals who take advantage of these opportunities. Such is the case with scammers and con-artists alike. As many countries are struggling to overcome COVID-19, scammers have used this opportunity to cause further worry and concern over people.
The world of cybersecurity is an interesting world filled with many brilliant minds. Every day we are working on building more security systems only to find exploits that can be patched up later. On top of that, you have the ongoing conflict between professionals, individuals, and businesses against hackers.
The number of security threats is massive and ever growing. It’s easier for people to become hackers, spread viruses, malware and trojans to others than before. And technology has expanded so much to the point that the damages can be frequent, costly, and affect many people.
For any IT department, mobile apps can be a total nightmare. There are millions out there and more are being made every day. Unfortunately many of the apps never were developed with security in mind.
While your IT department may not be working in this area, many IT organizations have tried to counter potential threats from mobile apps through various techniques over the years. Each one has had their own rate of success, but through their efforts, we’ve learned some methods to help with mitigating risks.
There have been countless breaches over the years. Some of them have targeted government bodies, but in most cases hackers have been looking to gather data on people like you and me.
While the uses of the information gathered can vary widely, stolen data from places like banks or credit bureaus gives hackers room to commit identity theft. What's worse, in those cases hackers can continue to exploit that information and leverage it indefinitely.
In response to the news that Newsnow, a popular news aggregator service, has suffered a data breach, please see below comments from Jake Moore, security specialist at ESET.
Jake Moore, Security Specialist at ESET:
“Hackers are all too keen to try stolen passwords across other online accounts, which can quickly become compromised as well.
As the passwords taken here are connected with email usernames, it drives home the fact that no one should use the same password for more than one account. We always advise people to have a strong, complex password on every account, but especially on email accounts. Please note that the strength of a password is largely determined by its length, so it should not be the minimum allowed length, nor should it be related to you personally. We therefore advise that your passwords be made up of three unrelated words and not “yourcatsname.1”
The safer way to use unique passwords is with a password manager. Using a password manager means you don't have to remember the ridiculous number of passwords we all need to have any sort of internet presence. You no longer have to use the same password everywhere, or rely on memorable facts such as your cat's name. Since the password manager takes care of the remembering, every password can be a long, totally random string of characters. Strength comes from length and complexity, so brute-force password crackers would simply take too long.“
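To illustrate the point that strength comes from length, here is a small back-of-the-envelope sketch; the 10 billion guesses per second figure is an assumption for illustration, not from the article.

import math

def entropy_bits(length: int, alphabet_size: int) -> float:
    """Bits of entropy for a uniformly random password of the given length."""
    return length * math.log2(alphabet_size)

def years_to_crack(bits: float, guesses_per_second: float = 1e10) -> float:
    """Expected years to search half the keyspace at the assumed guess rate."""
    return (2 ** bits / 2) / guesses_per_second / (60 * 60 * 24 * 365)

examples = [
    ("8 random printable characters", 94, 8),
    ("16 random characters from a password manager", 94, 16),
    ("three unrelated words, about 21 letters in total", 26, 21),
]
for label, alphabet, length in examples:
    bits = entropy_bits(length, alphabet)
    print(f"{label}: {bits:.0f} bits, ~{years_to_crack(bits):.1e} years to brute-force")

Note that the word-based figure assumes the attacker guesses letter by letter; against a dictionary attack the effective strength depends on the size of the wordlist, which is why the three words must be genuinely unrelated.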
Last year Artur Filipowicz, a computer scientist at Princeton University, had a stop-sign problem. Dr Filipowicz is teaching cars how to see and interpret the world, with a view to them being able to drive themselves around unaided. One quality they will need is an ability to recognise stop signs.
copyright by www.economist.com
To that end, he was trying to train an appropriate algorithm. Such training meant showing this algorithm (or, rather, the computer running it) lots of pictures of lots of stop signs in lots of different circumstances: old signs and new signs; clean signs and dirty signs; signs partly obscured by lorries or buildings; signs in sunny places, in rainy places and in foggy ones; signs in the day, at dusk and at night.
Grand Theft Auto stop signs
Obtaining all these images from photo libraries would have been hard. Going out into the world and shooting them in person would have been tedious. Instead, Dr Filipowicz turned to “Grand Theft Auto V”, the most recent release of a well-known series of video games. “Grand Theft Auto V” is controversial because of its realistic portrayal of crime and violence—but from Dr Filipowicz’s point of view it was ideal, because it also features realistic stop signs. By tinkering with the game’s software, he persuaded it to spit out thousands of pictures of these signs, in all sorts of situations, for his algorithm to digest.

Dr Filipowicz’s stop signs are one instance of the fondness that students of artificial intelligence (AI, of which machine vision is an example) have for video games. There are several reasons for this popularity. Some people, such as Dr Filipowicz, use games as training grounds for the real world. Others, observing that different games require different cognitive skills, think games can help them understand how the problem of intelligence may be broken down into smaller, more manageable chunks. Others still, building on these two observations, think games can help them develop a proper theory of artificial (and perhaps even natural) intelligence.
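The article does not describe Dr Filipowicz's model, so purely as an illustration of the kind of training involved, the sketch below defines a tiny binary stop-sign classifier in PyTorch and runs a single training step on random stand-in tensors; real use would feed it labelled game screenshots instead.

import torch
import torch.nn as nn
import torch.nn.functional as F

class StopSignNet(nn.Module):
    """Tiny CNN that predicts stop sign / no stop sign for a 64x64 RGB image."""
    def __init__(self) -> None:
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.fc = nn.Linear(32 * 16 * 16, 2)  # two classes

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)   # 64 -> 32
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)   # 32 -> 16
        return self.fc(x.flatten(1))

model = StopSignNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Stand-in batch: 8 random "screenshots" and random labels. In practice these
# would be game-generated images of signs in varied weather and lighting.
images = torch.rand(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,))

optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"one training step done, loss = {loss.item():.3f}")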
Games are helping research thrive
For all of this to happen, though, the games themselves have first to be tweaked so that they can be played directly by another computer program, rather than by a human being watching the action on a screen. “Grand Theft Auto V”, for instance, can be turned from a source of pictures of road signs into a driving simulator for autonomous vehicles by bolting onto it a piece of software called “Deep Drive”. This lets the driving and navigation programs of such vehicles take control—a cheaper and safer way of testing driving software than letting it loose on roads. Games companies are beginning to understand this. In June 2015, for instance, Microsoft started Project Malmo, an AI-development platform based on a popular “world-building” game called “Minecraft” that it had recently purchased. In November 2016 Activision Blizzard, owners of “Starcraft II”, a science-fiction strategy game in which players build and command human and alien armies, announced something similar in collaboration with DeepMind, an AI firm owned by Alphabet, Google’s holding company. | <urn:uuid:c7268d1c-3a3f-422a-8a06-372b1d305834> | CC-MAIN-2022-40 | https://swisscognitive.ch/2017/05/20/grand-theft-auto-and-ai/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00595.warc.gz | en | 0.962253 | 663 | 3.125 | 3 |
Artificial intelligence is gaining momentum. What could be the benefits for enterprises?
The applications for Artificial Intelligence are growing every day. AI is great at determining optimal paths and resource utilization as well as identifying objects or interpreting data such as voice recognition. These are powerful tools that can be integrated into many different systems to improve effectiveness, efficiency, or create entirely new capabilities.
Autonomous driving is a great example where AI is employed to understand the surrounding objects and their behaviors while plotting an optimal trajectory around obstacles en route to the destination. Regular logic is not sufficient to comprehend all the inputs from cameras, lidar, and the multitude of other vehicle sensors. AI can identify object data it has never seen before and still classify it with a high degree of accuracy. This will enable ever more capable and safe self-driving cars, busses, trains, ships, and even planes. All computer-controlled.
The downside to such automated collision avoidance and navigation is that it becomes susceptible to interference, manipulation, and malicious hacking. This is true of all AI innovations. The more we trust, embrace, and enable technology to have control over our lives, the more risk we must be willing to accept or mitigate.
What about the disadvantages when human rights, consent, privacy, and security are concerned?
Many organizations are looking for AI to make sense of tremendous amounts of unstructured data that has been collected about people, transactions, systems, and social connections. It is estimated that over 90% of all data is unstructured and not usable by normal systems. AI holds the potential to find patterns, connections, and derive the value of such telemetry and raw data.
This could provide great beneficial insights into people’s needs, desires, opportunities, and alerts when they are subject to unfair practices. But at the same time, it can aggregate data in ways that build stronger profiles of individuals. Such systems can make connections about very private or personal aspects of people’s lives that those individuals never authorized or want to be known. It could be a confidential relationship, a medical condition, personal fears, economic status, or something they are privately working through. In much of the world, privacy is recognized as an important human right, necessary for people to thrive.
Video surveillance systems are a major concern, with AI now being able to identify and track people from networks of connected camera systems. This will potentially allow governments, who install many cameras to monitor the public, to track where every person is at any time, what they are doing, and whom they are meeting with. I expect such technology will also evolve to process microphone information or effectively read lips to determine what individuals are saying to each other. Such implementations will greatly undermine privacy, free speech, and in some countries could be used to persecute individuals for simply talking about certain topics.
In the world of cybersecurity, both attackers and defenders will leverage the power of AI systems for their respective benefits. Attackers will use AI to launch widespread automated attacks that are customized for individual targets, integrating analysis from disparate systems, and learning from each failure to optimize the chances of socially engineering a person or compromising a technical system. These artificially intelligent cyber attackers will be relentless.
The security folks will use AI to detect anomalous behaviors, trigger mitigation actions, and then measure the effectiveness. Such systems will continuously learn and improve to prevent and minimize losses. These AI-enhanced capabilities will replace much of the mundane work of security operators, allowing staff to focus on the most interesting issues and be empowered with smart orchestration tools to increase their overall operational capability. The same attack tools will be repurposed by security teams to proactively detect vulnerabilities in people, systems, and processes that must be improved before real attacks occur.
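The interview does not name specific tooling; as a toy illustration of flagging anomalous behavior with machine learning, the sketch below fits scikit-learn's IsolationForest to made-up login-activity features and scores two suspicious events.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Made-up features per login event: [hour of day, MB transferred, failed attempts]
normal = np.column_stack([rng.normal(13, 2, 500),     # daytime logins
                          rng.normal(20, 5, 500),     # modest transfers
                          rng.poisson(0.2, 500)])     # rare failures
suspicious = np.array([[3, 900, 7],                   # 3 a.m., huge upload, many failures
                       [2, 650, 5]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))   # -1 marks an anomaly, 1 marks normal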
What do you predict the path for AI will be?
The undeniable value of AI will fuel rapid adoption and further innovation for new use cases. Data analysis against the vast data lakes that exist is appealing to organizations in every sector. This will be a race for enterprising software providers to create new AI-enhanced products and quickly maneuver to gain market share. Prioritizing time-to-market and minimum viable products typically sacrifices security and privacy, and may violate ethical practices across the architecture, design, development, implementation, and sustaining-operations lifecycle. Rapidly developed AI systems will be the norm, but also a cause for great concern.
What should we be concerned about the most when it comes to AI?
The powerful lure of harnessing the great power of AI to transform digital technology across the globe may blind users to the necessity of mitigating the accompanying risks of unethical use. The ethical ramifications often start with developers asking ‘can we build’ something novel versus ‘should we build’ something that can be misused in terrible ways. The rush to AI solutions has already created many situations where poor design, inadequate security, or architecture bias manifested unintended consequences that were harmful. AI Ethics frameworks are needed to help guide organizations to act consistently and comprehensively when it comes to product design and operation.
Without foresight, proper security controls, and oversight, malicious entities can leverage AI to create entirely new methods of attack which will be far superior to the current defenses. These incidents have the potential to create impacts and losses at a scale matching the benefits AI can bring to society. It is important that AI developers and operators integrate cybersecurity capabilities to predict, prevent, detect, and respond to attacks against AI systems. The goal is to deliver the tremendous benefits AI while managing the risks to acceptable levels. It is a balancing act.
Do you see AI solving human errors and to what extent could it replace humans?
AI is a tremendously powerful tool that is capable of incredible good for mankind or, if used malevolently, of harming society at a comparable scale.
Autonomous vehicles hold the promise of nearly eliminating driving accidents, injuries, and fatalities, as most are due to human error. However, if an entire make of cars were hacked and simultaneously forced to crash at high speed, the resulting catastrophe would have momentous impact ripples on society.
In the near term, we will see AI play two roles. First, it will enhance what humans already do now: supervised driving, for example, where AI systems do most of the work but humans can intervene if necessary. This should provide great benefits while maintaining human safeguards.
Secondly, AI will automate work that is currently not being done, because it is too human-intensive or manually too slow. Real-time data analysis is valuable to almost all industries. Think about better search engines, email filters, personal assistants, conversational foreign language interpreters, helper robots, and understanding how an individual might be swayed to click a link, purchase a product, or believe a narrative.
Like all powerful tools, AI requires human involvement to ensure the architecture, design, and engineering are ethical and effective, possess limits that inhibit unintended usage, and are hardened against attack. If we treat AI with respect and understand both the benefits and the accompanying risks, humankind will be able to benefit in unimaginable ways.
Electricity, our financial institutions, and our transportation infrastructure, are things that permeate our lives each day, and are all dependent on the internet. Having a resilient infrastructure in critical areas is not only crucial to the everyday lives of citizens, but our national security. The theme of Week 5 looks at the role of cybersecurity in keeping our phone lines, running water, traffic lights, and other critical infrastructure secure.
What is critical infrastructure exactly? The DHS defines critical infrastructure as “sectors whose assets, systems and networks, whether physical or virtual, are considered so vital to the United States that their incapacitation or destruction would have a debilitating effect on security, national economic security, national public health or safety or any combination thereof.”
Basically, critical infrastructure either (1) supports some basic necessity of modern life, like electricity, or (2) is a large organization whose disruption would impact a lot of people.
There are 16 critical infrastructure sectors
- Chemical. Basic chemicals, specialty chemicals, agricultural chemicals, pharmaceuticals and consumer products.
- Commercial Facilities. Entertainment, gaming, lodging, events, public assembly, real estate and sports leagues.
- Communications. Internet, telephone and cable wired lines, wireless frequencies (cellphones) and satellites (GPS, DirecTV, satellite phones).
- Critical Manufacturing. Primary metals, machinery, electrical equipment and transportation equipment.
- Dams. Hydroelectric power, water supplies, irrigation, flood control, river control and recreation.
- Defense Industrial Base. Design and production of military weapon systems.
- Emergency Services. Police and fire departments, medical services and public works.
- Energy. Electricity, oil and natural gas.
- Financial Services. Banking, credit, investment and insurance.
- Food and Agriculture. Farms, livestock, restaurants, food manufacturing, processing and storage.
- Government Facilities. Federal, state, local and tribal government buildings.
- Health Care and Public Health. Hospitals, clinics, mental health, youth care and family services.
- Information Technology. Hardware, software, systems and services.
- Nuclear Reactors, Materials and Waste. Reactors, enrichment and nuclear medicine.
- Water and Wastewater Systems. Water treatment, storage, drainage and sewage.
If you work in any industrial setting — whether it is a farm, doing facilities work on buildings, working in a factory or in other skilled labor jobs like plumbers, electricians or HVAC specialists — pay attention to any devices you interact with, especially if they are internet-enabled.
Why would someone want to target an HVAC system? In 2013, Target was compromised through its HVAC systems, exposing 110 million people. The retailer used a third-party company to manage HVAC systems that were not properly isolated from the rest of its network. Hackers were then able to break into the network using malware, exposing the card processing system.
While Target is an example of attackers using an ICS as a pivot point to reach other critical infrastructure, what about someone targeting a primary network? In March 2016, hackers took control of hundreds of PLCs that governed the flow of the toxic chemicals used to treat water at a regional water utility. The attackers took advantage of the water company's poor security architecture, which had multiple internet-facing systems with high-risk vulnerabilities on the same network as its SCADA platform. They were actually able to change the flow rates of the chemicals.
Luckily, the alert system gave the water treatment facility enough time to reverse the chemical flow changes, minimizing the impact on the facility's customers and keeping hundreds of thousands of people out of danger.
Because the energy grid is so complex, managing it requires constant planning and coordination. Complicating matters, cyber threats to the grid are not static. They evolve — and so must the industry’s efforts to prepare. | <urn:uuid:4342445b-bd0f-4289-bd62-dee277a4014f> | CC-MAIN-2022-40 | https://1path.com/blog/cybersecurity/cyber-security-awareness-week-5-protecting-critical-infrastructure-cyber-threats/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00595.warc.gz | en | 0.926153 | 797 | 3.421875 | 3 |
The choice of an antivirus solution (if we disregard the issue of cost) depends on the quality of protection required. If internet use is not very active – only trusted websites from a limited list are visited, correspondence is limited to a small circle of people, there are no tons of spam, and new programs are not downloaded from the internet – the requirements for antivirus protection can be minimal.
If, however, the network is used extensively, the volume of email is high, and search services are used on a regular basis, the requirements for the quality and functionality of antivirus protection are much higher.
Reliability and usability are the most important criteria, as even an 'absolute antivirus' might prove absolutely useless if it conflicts with the system, sharply reduces its performance, or hangs from time to time. If an antivirus requires special skills that most ordinary users lack, it will simply be too difficult to work with. An ordinary user will ignore its messages and randomly click «Yes» or «No», depending on which is closer to the cursor. And if an antivirus asks the ordinary user difficult questions, most probably the user will disable, if not delete, the program. If a corporate antivirus version does not have the features required to administer the company's network, most system administrators will choose a product that is less secure but more convenient.
Comprehensive protection is the second critically important criterion. All areas of the computer, all file types, and all network elements that can potentially be attacked by a virus have to be under constant protection. The program should be able to detect malicious code and protect all channels of possible intrusion into the computer and the network (email, WWW, FTP, etc.).
Quality of protection is the third key criterion. Even the most sophisticated antivirus is of no use if it is unable to provide the required level of protection from malicious programs. Antiviruses have to resist a very aggressive environment that is constantly evolving – new viruses, worms, and Trojans are often much more complex than their predecessors.
Protection quality is made up of the following features: the detection rate for malicious programs, the frequency and regularity of updates, the ability to properly remove virus code from the system, resource consumption, the ability to run two protection systems from different manufacturers side by side, and the ability to protect not only from known viruses and Trojans but also from new ones.
Nowadays, Artificial Intelligence (AI) is a widely discussed subject in business circles at all levels. Most business experts and analysts claim that AI is our future. However, if we think carefully, AI is not the future – AI is the present! Take your email service as an example: it uses AI to sort emails so you don't get spam. Amazon and other e-commerce platforms use AI to recommend goods to customers based on the information that has been collected. Siri, Alexa, and Google Assistant turn to AI to enhance their users' experience. Although the technology is at an early stage of development, many companies are already investing huge budgets in AI, assuming that AI-powered products have a very promising future.
One of the fields that AI could transform entirely in the near future is Learning (Training) and Development (L&D). But before we talk about the ways AI will transform the Learning and Development sector, we should discuss the role AI plays in Learning and Development.
Why is Artificial Intelligence Important to Learning & Development?
Learning and Development specialists should be aware of rapidly developing technology and use it to enhance the learning process. In terms of AI, L&D experts should explore and implement AI advances to develop new training strategies and techniques. The leading research and advisory company Gartner predicted that by the end of 2020 about 85% of customer service interactions would be handled by machines (bots), not humans. Another report claimed that about 20% of training materials would be produced by AI. Moreover, Bank of America stated that by 2025 AI would drive $14–33 trillion of industrial expansion annually.
Thus, AI will have a great influence on the L&D sector. L&D specialists need to track huge amounts of information using AI software and machine learning. This information will give L&D experts insights that will help them develop new training programs for effective learning.
7 Ways how AI is transforming Learning and Development
The most important way AI will change the training, learning, and development sector is by helping organizers establish more optimized and effective training methods and techniques. So here are seven ways AI is about to transform L&D:
Personalizing the learning pathways
Every person has a different style of learning and processing new information. That's why the learning process would be more effective if these preferences were taken into consideration. AI allows training programs to adapt to the needs of each employee. The benefits of adopting AI in personalized learning are:
- Time is saved. Employees achieve their learning goals faster as they get information based on their personal preferences and objectives.
- Engagement is encouraged. A learning system powered by AI analyzes each learner and proposes a learning program based on his/her past performance and objectives.
- The learning process is automated. With AI, you can build a learning platform where all data, programs, materials, and schedules based on each learner's experience, preferences, and objectives are stored, tracked, and delivered.
- ROI is growing. The formula is simple: faster learning, paired with greater engagement leads to better learning results. Better learning results lead to a positive return on a company’s learning investment.
Integrating training into the routine workflow
Stephen Walsh, a co-founder of Anders Pink, states that 93% of organizations wish to integrate learning into the routine workflow. However, 56% of learning is still very formal and delivered face-to-face. Most learners are not satisfied with either the schedule of training or the format of information delivery. A learning system powered by artificial intelligence is the solution to this problem as well, and the benefits are just the same: time is saved, employees are engaged and involved, the learning process is automated, and the organization's profit grows at the speed of light! When powered by AI, the learning system provides programs, materials, and schedules developed personally for each employee.
Reinforcing training and development
Reinforcement is a process used to make learning stick and to remind learners to apply meaningful knowledge in practice. It's no secret that employees are often too busy, and sometimes too reluctant, to work on their development. Bryan Austin, in his "Modern Corporate Learner" paper, claims that specialists are willing to dedicate only 1% of their time to learning and professional development. And the MASIE Center, an international learning lab, states that employees finish only 15% of the learning programs assigned to them. Despite this, organizations spend billions of dollars on employee development programs annually. AI-equipped learning programs are intended to solve the problem of poor reinforcement of training and development and can improve your reinforcement program, including:
- automation of all the processes that save time;
- personalizing the learning and reinforcement processes that boost engagement;
- personalizing the learning and reinforcement processes, that improves completion rates;
- automation of analytics that measures learning effectiveness and others.
Improving completion rates
As mentioned above, today only 15% of assigned learning programs are completed. However, with artificial intelligence delivering training content in the learner's preferred format and following it up with stimulating reinforcement methods, completion rates will improve. Here are four tips on how to improve completion rates using AI:
- develop a learning program based on the personal preferences of each employee;
- make the course short and engaging;
- turn to professionals to create a learning platform where all the processes will be automated;
- once you measure learning effectiveness, do not forget to report on the results or even reward your employees.
AI products make training programs accessible to a wide group of learners, including people with different types of disabilities. For example, in 2009 Google introduced automatic video captions, which help deaf people. The feature also offers auto-translation, helping people enjoy videos in more than 50 languages. For blind people, AI delivers solutions that create alternative text for pictures and images. Google's Cloud Vision API utilizes neural networks to recognize the content of an image and generate a textual description of it. Thus, with AI, experts can develop training programs accessible to any learner.
Measuring learning and training effectiveness
Measuring learning performance is crucial, but time-consuming. Once L&D professionals use AI systems, they can collect and analyze data quickly to get clear insights into learning effectiveness. The insights show each learner's progress and highlight knowledge gaps, if any. An AI-equipped learning program then suggests ways to fill the uncovered gaps. With AI, the four-level Kirkpatrick Evaluation Model performs even more effectively, helping guarantee that the initial learning goals are accomplished.
Focusing on AI-based digital tutors
AI-based tutors can replace teachers, lecturers, speakers, and coaches. Sounds implausible? But it's true! Just a couple of years ago DARPA (the Defense Advanced Research Projects Agency) sponsored a study intended to develop a digital tutor that copies the interplay between an experienced specialist and a learner. The aim was to reduce the time navy trainees spent acquiring certain high-tech skills. The experiment revealed that when working with AI-based digital tutors, trainees not only obtained the skills quickly but also outperformed experienced experts. This means that AI-based tutors could potentially replace existing experts over time, making the learning process even more effective.
Ultimately, applying AI in training, learning and development will allow learners to receive training content based on their preferences, skills, and personal traits. Moreover, AI makes programs accessible to all learners even with different types of disabilities. If personalized, AI-powered learning courses will significantly improve completion rates and boost engagement. In addition to that, a learning platform driven by artificial intelligence enables organizers to offer training options for the employees 24/7, track results, analyze data, measure learning effectiveness, and make learning even more effective and efficient.
How do we use AI in everyday life?
Artificial intelligence or AI is a branch of computer science that brings together multiple disciplines, with the aim of creating smart machines: devices and systems capable of performing complex tasks that often require human intelligence, in a manner that equals or exceeds human capabilities. Uses include: virtual assistants and chatbots; agriculture and farming; autonomous vehicles and aircraft; online commerce (e-commerce) and shopping; security and surveillance; manufacturing and production; healthcare and medical imaging analysis; and health and safety.
AI in Training, Learning, and Development
Learning and Development specialists should be aware of rapidly developing technology and use it to enhance the learning processes. In terms of AI, L&D experts should explore and implement AI improvements to develop new training strategies and techniques. 7 Ways how AI is transforming Learning and Development: Personalizing the learning pathways. Integrating training into the routine workflow. Reinforcing training and development. Improving completion rates. Providing accessibility. Measuring learning and training effectiveness. Focusing on AI-based digital tutors. | <urn:uuid:628e5525-17f3-44ce-864c-5a8aa2f79968> | CC-MAIN-2022-40 | https://itchronicles.com/artificial-intelligence/ai-in-training-learning-and-development/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00595.warc.gz | en | 0.943824 | 1,878 | 2.875 | 3 |
The Internet is the most significant and powerful platform our society has. Everything is on the Internet, including a virtual copy of our lives, since we use the web to find information, write and save essential data, communicate with others, and build a social life. At this point, we cannot separate ourselves from the Internet. But there are security issues regarding the use of the Internet that must be taken into account, especially when it comes to students and teenagers. In this article, we look at some of the most critical aspects of social networking security.
Security on the Internet
Having our information and a version of our lives online, available all the time, brings a lot of advantages. But it also has a downside: our information and data are vulnerable to hacks, and people with the right skills can steal or manipulate them. Social media security issues have become more prominent in recent years due to several massive data thefts. Internet security problems have reached schools, and more than one student is dealing with information theft.
That's why social network and internet security have become common essay subjects; it is one way to make students aware of their vulnerability. If you find it hard to write your social network essay, you can look for online help. Many sites offer assistance with the writing process and unique features like a conclusion generator.
There are risks in using the Internet and being part of social media, but that does not mean every student should cut themselves off from the Internet. It is a great tool, and there are more benefits than liabilities in its use. So the solution is to take measures that will help every student use social media and the Internet safely.
Things students can do to enhance their social media security
As we said before, the solution is not to run away from the Internet. The secret to protecting yourself when using the Internet and social media is to be prudent and take action to keep your information safe. Here are some examples of steps that students and parents can take to make the best of social media and avoid risks.
- Be open and honest about internet hazards. This is a recommendation for parents and teachers: lying to students and children won't protect them. It is necessary to be frank about the risks they take when exposing themselves to social media. If the young ones understand what is at stake, they will be careful.
- Don't give out information about yourself. This might seem obvious, but it is necessary to stress its importance for social media security. When talking to people on social media, don't share personal details like your full name, address, current location, phone number, or your school's name. You can build relationships with people on the Internet, but it is almost impossible to be sure you are talking to a reliable person, so don't give out information that might lead them to you.
- Be careful with purchases. If you are going to purchase something online, make sure to use reliable sites. Don't give credit card or bank account numbers to strangers or on sites that could be considered shady.
- Use privacy policies to your advantage. Read the privacy policies so you can be completely aware of what the sites are going to do with the data you share. Also, in the settings of social media, you can determine the information other users have access to. Make use of these settings so you can control who can access your personal information.
- Use strong passwords. If your password is easy to guess, then people who try to steal your information won't have a hard time deciphering it. Your birthdate, the names of your relatives, and dates important to you are examples of weak passwords. Some apps are designed to create strong passwords and even work as a vault where you can store them safely; a minimal example of such generation follows below.
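As a minimal sketch of what such an app does under the hood (not tied to any particular product), the snippet below uses Python's standard secrets module, which is suitable for security purposes unlike the plain random module, to produce a 16-character password.

import secrets
import string

def make_password(length: int = 16) -> str:
    """Return a random password drawn from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(make_password())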
It is possible to avoid social media security issues by taking some precautions. This way, you can give the right use to the fantastic tool that the Internet is and receive only the benefits that social media offers. | <urn:uuid:973c15a7-437c-4bfe-a502-71ec5cb4550e> | CC-MAIN-2022-40 | https://cybersguards.com/social-network-security-of-students/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00595.warc.gz | en | 0.950165 | 830 | 3.078125 | 3 |
Devices will be 1,000 times faster than today’s record holder and help with nuclear and big data analysis.
Congress is directing the Energy Department to take the next decade to develop a new class of supercomputers capable of a quintillion operations per second to model nuclear weapons explosions, according to language in the 2014 National Defense Authorization Act passed by the House last week, with a Senate vote expected this week.
Department officials believe they could develop exascale supercomputers within 10 years, according to estimates offered at an Advanced Scientific Computing Advisory Committee meeting in Denver last month.
The exascale supercomputers will operate at a speed 1,000 times faster than the current record holder, a machine developed by China’s National University of Defense Technology that performs just under 34 quadrillion calculations per second, William Harrod, an ASCR division director told the conference.
Besides weapons research and simulation, Harrod said exascale computers would help support processing of complex “big data” sets, including climate modeling and genomics, with the first system slated to go into operation in 2023.
Energy jump-started exascale computer development with contracts valued at $62.4 million awarded to AMD, IBM, Intel and NVIDIA in 2012 and followed up with $25.4 million in contracts to the same companies along with Cray in the fall of 2013.
These contracts cover system interconnect architectures, open network protocol standards, “massively threaded” multiple processors and energy efficient systems, Harrod said.
New architectures and algorithms need to be developed to support exascale computing, Rick Stevens, a lab director at Argonne National Laboratory, told a hearing of the Energy subcommittee of the House Science, Space and Technology Committee in May.
Exascale research also will need to significantly reduce electric power requirements, Stevens said. Today’s most powerful computers require a few megawatts at a cost of about $1 million per megawatt per year, he said. Even though exascale computers will operate 1,000 times faster, Stevens said, Argonne wants to hold power consumption down to 20 megawatts.
Harrod agreed, saying Energy plans to develop exascale computers that consume the same amount of energy as today’s supercomputers. | <urn:uuid:13177069-cd00-40c2-99b7-806eee2cabbc> | CC-MAIN-2022-40 | https://www.nextgov.com/cxo-briefing/2013/12/energy-dept-told-develop-exascale-computers-10-years/75568/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00595.warc.gz | en | 0.923747 | 471 | 2.59375 | 3 |
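A rough calculation shows why the 20-megawatt target is ambitious; the current record holder's power draw below is an assumed figure of about 18 MW, not taken from the article.

# Back-of-the-envelope efficiency math for the exascale power target.
current_flops = 34e15        # ~34 quadrillion operations per second (quoted above)
current_power_w = 18e6       # assumed ~18 MW for the current record holder
target_flops = 1e18          # one quintillion operations per second
target_power_w = 20e6        # Argonne's stated 20 MW goal

current_eff = current_flops / current_power_w    # operations per second per watt
target_eff = target_flops / target_power_w

print(f"current efficiency : {current_eff / 1e9:.1f} GFLOPS per watt")
print(f"required efficiency: {target_eff / 1e9:.1f} GFLOPS per watt")
print(f"improvement needed : about {target_eff / current_eff:.0f}x")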
3D XPoint (pronounced “3D crosspoint”) is a relatively new persistent memory technology that was first unveiled by Micron Technology and Intel in July 2015. At its introduction it was touted as the first new memory tier since the introduction of NAND (flash) memory in 1989.
What makes Micron and Intel 3D XPoint interesting is that it offers the following attributes:
- Its performance is higher than NAND, though not as high as DRAM (used to provide RAM in computer systems)
- It costs less than DRAM but more than NAND
- It offers higher storage density than NAND
- It offers persistent memory, unlike DRAM
These attributes mean that there are several possible use cases for 3D XPoint. These include:
- High performance 3D XPoint SSDs which are faster than conventional NAND-based SSDs
- Low cost DRAM substitute or supplement
Micron and Intel 3D XPoint Technology
3D XPoint storage is made up of a series of parallel wires called wordlines, with another series of parallel wires called bitlines running beneath and perpendicular to the ones above. At the points where wordlines and bitlines cross (hence the “3D crosspoint” name) sit memory cells with accompanying selectors, which can set the memory cells to store a 1 or a 0.
Any arbitrary memory cell can be read or written to by applying a specific current to the wordline and bitline which intersect at that memory cell, and wordline and bitlines can be stacked to increase storage density.
As well as offering higher performance than NAND, Intel claims that 3D XPoint is not "significantly" impacted by the number of writes it is subjected to. In other words, 3D XPoint should not wear out the way NAND does, and therefore it should not be necessary for storage controllers to carry out functions such as wear levelling to ensure that parts of a chunk of storage do not wear out prematurely.
One reason that conventional DRAM is more expensive than 3D XPoint is that it requires a relatively costly and bulky transistor at each memory cell to address it. Since 3D XPoint technology does not use transistors it can be produced at less cost and it can offer much higher storage density.
With 3D XPoint technology, the crosspoint wordline/bitline architecture enables cells to be addressed individually.
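As a purely conceptual model (nothing like the real silicon, but it shows why no per-cell transistor is needed for addressing), the sketch below treats a crosspoint array as a grid in which selecting one wordline and one bitline picks out exactly one cell.

class CrossPointArray:
    """Toy model of crosspoint memory: cells sit at wordline/bitline intersections."""
    def __init__(self, wordlines: int, bitlines: int) -> None:
        self.cells = [[0] * bitlines for _ in range(wordlines)]

    def write(self, wordline: int, bitline: int, bit: int) -> None:
        # Selecting one wordline and one bitline uniquely addresses one cell.
        self.cells[wordline][bitline] = bit

    def read(self, wordline: int, bitline: int) -> int:
        return self.cells[wordline][bitline]

array = CrossPointArray(wordlines=4, bitlines=4)
array.write(2, 3, 1)
print(array.read(2, 3))   # prints 1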
Speed and performance
When 3D XPoint was first unveiled, one of its propositions was that it offered performance about one thousand times faster than existing NAND storage.
Unfortunately, that does not mean that Intel 3D XPoint SSDs offer performance one thousand times faster than NAND SSDs. In fact they are often only three or four times faster, and in some applications they are barely faster than conventional SSDs at all. Even accounting for future improvements, they are unlikely ever to get much more than an order of magnitude faster.
To understand why, it’s important to realize that the performance of an SSD is only partially affected by the speed of the underlying storage medium: NAND or 3D XPoint. Other factors which have a big impact on the performance of an SSD include:
* Delays from the host computer’s PCIe or NVMe port to the processor pins
* Software delays: the amount of time the operating system’s I/O stack requires to perform a disk read operation
Ultimately it comes down to bottlenecks, and if the actual storage medium is only one of the bottlenecks, then removing that bottleneck will only result in a small change in performance.
What is Intel Optane memory?
There is a great deal of confusion between the term “3D XPoint,” which refers to the storage medium developed jointly by Intel and Micron, and “Intel Optane,” which is the trade mark that is used to market Intel 3D XPoint-based products.
Intel Optane actually refers to more than just Intel 3D XPoint itself. It is the combination of the 3D XPoint memory medium, Intel's memory and storage controllers, its interconnect IP, and its proprietary software in Intel 3D XPoint-based products such as the Intel Optane SSD 905P.
Intel Optane is similar in function to a set of technologies called QuantX developed by Micron, upon which Micron plans to base its own 3D XPoint based products.
Intel Optane Release Dates
* SSDs – The first Intel Optane product was the DC P4800X PCIe card, which debuted in March 2017. Since then the company has launched a number of other Intel Optane SSDs, including devices that use NVMe interfaces and ones with a standard 2.5-inch SSD form factor.
* Apache Pass – More recently, the company has offered a glimpse of its Intel Optane DC Persistent Memory DIMMs, which were previously known under the codename Apache Pass. These use the DDR4 memory bus and are designed to be installed as a substitute for standard DRAM modules and which will be available in 128 GB, 256 GB, and 512 GB versions. These are being sampled now, and will be available to selected customers on a limited basis later this year. Intel says that they will be broadly available in 2019.
Intel Optane and 3D XPoint RAM
When 3D XPoint was first announced, it was the possibility of producing high performance SSDs that received the most attention. But with the promise of persistent memory DIMMs in the pipeline, attention is being switched to this type of usage.
The obvious question to ask is why anyone would want to use comparatively slow 3D XPoint as a replacement for lightning fast DRAM and the answer, perhaps counter-intuitively, is to increase performance.
The argument goes like this: if you have $100 to spend to increase the performance of a server, you could buy some more fast DRAM, or some slower 3D XPoint. But since 3D XPoint costs less, $100 buys more gigabytes of 3D XPoint memory than gigabytes of DRAM. So even though 3D XPoint is slower than DRAM, choosing 3D XPoint will, in the right circumstances, hold more of the data and instructions that would otherwise require a much slower SSD (or even HDD) access. In that case, the extra 3D XPoint will make the system run faster than extra DRAM would.
Extending System Memory with Intel Optane today
Those hoping to use 3D XPoint as system memory (RAM) directly will have to wait for the availability of Optane DC Persistent Memory DIMMs (Apache Pass) sometime next year. In the meantime there are a couple of workarounds. These include:
* Intel Memory Drive Technology – This software integrates Intel Optane SSD storage capacity into a system’s memory subsystem so that it appears as a single pool of DRAM to the OS and applications running on the OS. Intel claims that the software intelligently places data in the pool to maximize performance, and says that it can have a positive effect on a system’s performance when real DRAM makes up as little as 10 per cent of the total memory pool.
* Intel Memory Drive Technology can be used in two scenarios:
- when there is a need to reduce the amount of DRAM used in a system, to reduce overall memory costs
- when there is a need to grow the overall memory pool of a system without the full expense of purchasing additional DRAM
For the moment, Intel Memory Drive Technology can only be used with Intel’s Xeon processors and is not backward compatible with older server processors.
Use Cases for 3D XPoint
Deciding whether to use 3D XPoint for a given server or application comes down to a cost/benefit analysis: is it worth paying more for 3D XPoint SSDs that offer higher performance, or could the addition of additional system memory in the form of 3D XPoint DIMMS provide a bigger performance boost than the addition of additional DRAM at similar cost – and is the performance boost worth the extra expenditure?
The exact use case scenarios will change as the price and performance characteristics of 3D XPoint change (for example, the storage medium is likely to drop in price as more fabrication plants come on stream.)
But initially at least, likely applications include:
- Data analysis/data mining
- Data warehousing
- Online transaction processing
- Virtualized infrastructure
- Graph analytics
One particularly interesting use of 3D XPoint may be in the area of large in-memory databases which currently rely on large amounts of fast but expensive DRAM. Using 3D XPoint instead of, or alongside, DRAM will make it much less costly to build these systems, or make building larger systems more affordable, because of the lower cost of 3D XPoint compared to DRAM.
An interesting additional advantage of using a 3D XPoint-based in-memory database stems from the fact that, unlike DRAM, 3D XPoint is persistent memory. That means there is less chance of corruption if any parts of the system fail, and it also means that the entire system can be restarted and operational again much more quickly, because huge volumes of data don’t have to be rewritten to DRAM memory from slower SSD or HDD storage before it can start working again. | <urn:uuid:44e8f5e5-796b-4930-a7e0-f120425485e7> | CC-MAIN-2022-40 | https://www.enterprisestorageforum.com/products/3d-xpoint-technology-and-use-cases/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00595.warc.gz | en | 0.947875 | 1,964 | 2.703125 | 3 |
Deloitte explains cognitive technology as a product of “the field of artificial intelligence” and says that cognitive technologies “are able to perform tasks that only humans used to be able to do. Examples of cognitive technologies include computer vision, machine learning, natural language processing, speech recognition and robotics.”
Cognitive systems listen, understand, reason and learn as people interact with them. These systems understand context and intent. They learn, adapt and get smarter, gaining value with age by learning from their own successes and failures.
When integrated with marketing campaigns, cognitive systems help marketers make better strategic decisions by recognizing opportunities to create personalized experiences for the customer.
Applications of cognitive technology reside in three main categories:
1. Product applications: Use the technology in a product or service to make it more effective and valuable to the customer.
2. Process applications: Use the technology to automate or improve operational workflow.
3. Insight applications: Specifically advanced analytical capabilities are used to uncover insights for improvement or development.
How cognitive technology can assist the digital marketer.
Unlike conventional systems, cognitive marketing allows the development of real-time experiences based on the user’s current situation.
For marketers, the goal is always to delight the customer, and cognitive technology helps with the bigger strategic picture, and it can be used in any industry. Digital marketers understand that a) their customers operate in a super fast world, and b) because of that, consumer attention spans continue to shrink. In four seconds, the only thing that matters is relevance. Cognitive technology, together with actual human brains, can provide that relevance.
Imagine this: You own an e-commerce cycling store. A prospective buyer gets a thought to take up cycling as a sport, so she begins researching from home on the internet. As she clicks through your site, you know she’s a new prospect by looking at her web behavior, social media posts, and in-store activity.
You know she’s in the early part of the sales funnel, so you support her with the right content, like getting-started tips, beginner cycling routes in her area, etc.
As the prospect gets more “into” cycling and interacts with your site more often, cognitive technology really kicks in to provide value, and now she gets different content, such as racing timetables and DIY maintenance tips. The technology “understands” your new prospect and puts the right content together in real time, making for an amazing start-to-finish experience.
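The article stays at the level of scenario, so the snippet below is only a simplified, rule-based stand-in for what a cognitive system would learn on its own: it picks content for the hypothetical cycling store from a couple of made-up engagement signals.

def pick_content(visits: int, purchases: int) -> list[str]:
    """Very rough funnel-stage rules for the hypothetical cycling store."""
    if purchases == 0 and visits < 3:
        return ["getting-started tips", "beginner routes near you"]
    if purchases == 0:
        return ["entry-level bike comparisons", "sizing guide"]
    return ["racing timetables", "DIY maintenance tips"]

print(pick_content(visits=2, purchases=0))
print(pick_content(visits=12, purchases=3))

A real cognitive system would infer these stage boundaries from behavioral data rather than having them hard-coded.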
A real life example is Domino’s, the pizza delivery chain that introduced a feature in its mobile app that allows customers to place voice orders with a virtual helper called “Dom.”
Dom guides customers through the ordering process with his computer-generated voice. The pizza chain's intention is to increase revenue by making the ordering process more convenient, and it has already found that customers who place their orders with Dom tend to spend more, and order more often.
Forget traditional customer surveys. In another example, BBVA Compass Bank uses a social media sentiment (which is the analysis of feelings and attitudes) monitoring tool to track and understand what consumers really think about the bank and its competitors. The collected insights influence the bank’s decisions on setting fees and customer bonuses, as well as customer support methodology.
Scott Brinker, CTO and co-founder of Ion Interactive, co-authored an article in the Harvard Business Review in which he stated, “Marketing is rapidly becoming one of the most technology-dependent functions in business.”
Not only can cognitive technology be used to boost the customer experience; it can also be used to automate time-consuming routine tasks, so that the tasks get done faster, better and cheaper.
Paul Roetzer from Marketing Land, argues, “Now, imagine if machines performed the majority of those activities, and a marketer’s primary role was to curate and enhance algorithm-based recommendations and content, rather than to devise them.”
Although you’d think that only conglomerates can afford to use cognitive technology, the fact is that even startups are using it in the form of entry-level marketing automation software like GetResponse with its website traffic tracking, shopping cart abandonment and email marketing workflow planner features. The marketing automation software uses conditions, actions and filters to plan automated processes to deliver an exceptional and relevant user experience.
For small businesses, this means improved communication and a drastically improved customer experience.
For e-commerce, the cognitive technology embedded in the marketing automation software can mean the difference between an abandoned shopping cart and a saved sale. If the statistics are correct, and two-thirds of sales are abandoned daily, cognitive technology can make a huge difference to your profits.
Cognitive technologies can generate data that provides exceptional insight to reduce costs, improve efficiency and effectiveness, increase revenue, or enhance customer satisfaction.
Organizations are able to use machine learning techniques to make predictions that are based on data sets that are too large to be understood by humans and too unstructured to be analyzed by traditional analytics.
Cognitive technology is used in three main areas:
1. To create products or enhance existing products.
2. To automate or improve processes.
3. To provide intelligent insights for product or service improvement.
Many startups have taken to marketing automation platforms which are powered by cognitive technology on a very light scale. At the same time, advanced marketers are starting to use cognitive technology in their marketing campaigns, with the aim of creating delightful customer experiences and increasing revenue.
Known in the tech industry by his Twitter handle, SocialMktgFella, Andre Bourque is a PR, content marketing and social media adviser who works with forward-thinking companies on "remarkable" content creation, brand messaging and distribution. | <urn:uuid:82430fe7-0c07-4c8f-9f6c-128c919b6d0c> | CC-MAIN-2022-40 | https://www.cio.com/article/236438/how-marketers-inflate-customer-results-with-cognitive-technology.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00795.warc.gz | en | 0.930366 | 1,214 | 2.546875 | 3 |
The aim of this article is to discuss the importance of Recursive Lookup in BGP. First of all we need to understand the purpose of the recursion method and find some evidence of its use in everyday life.
Recursion is based on self-reference and can be found in many disciplines. For instance, in linguistics, a recursive acronym is an acronym that refers to itself. The best-known example is the GNU acronym, which expands to GNU's Not Unix. In computer programming, a recursive function calls itself. For instance, in the Python script below, the function factorial calls itself within its own definition, with an argument decreased by one. The condition n == 1 represents the base case here. Its purpose is to stop the recursion; otherwise we would end up with a RecursionError (maximum recursion depth exceeded) or, even worse, exhaust resources. The benefit of the recursive approach is in dividing a single task (computing the factorial) into sub-tasks of the same type.
def factorial(n):
    if n == 1:
        return 1
    else:
        return n * factorial(n-1)

print(factorial(5))
# Output: 120
In virtualization technology, nested virtualization is another example of recursion, where a virtual machine runs inside another virtual machine of the same type. Another playful example of recursion can be found in the architecture of a Matryoshka doll, also known as a Russian nesting doll: a set of wooden dolls of decreasing size placed one inside another.
Picture 1: Russian Doll
Recursive Route Lookup
The recursive route lookup follows the same logic of dividing a task into subtasks of the same type. The device performs its routing table lookup again and again until it finds the outgoing interface to reach a certain network. The routing table containing recursively chained entries is depicted in Picture 2.
Picture 2: Routing Table
Picture 3 illustrates the list of recursive entries for the 192.168.1.0/24 network. In order to forward a packet to the destination IP address of 192.168.1.100, the router performs a lookup of the routing table. The route 192.168.1.0/24 is the best-match with the next-hop IP address 192.168.2.254. Now, the routing table is looked up again for 192.168.2.254 and the route 192.168.2.0/24 is matched with the next-hop IP address 192.168.3.254. Again, the routing table is browsed to find the best match for the next-hop IP 192.168.3.254. The route 192.168.3.0/24 is matched with the next-hop IP 10.0.0.2/24. Finally, the route 10.0.0.0/30 is matched and the packet is forwarded over the egress interface Gi0/0 towards the destination.
Picture 3: Packet to 192.168.1.1 is CEF-Switched
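To make that chain of lookups concrete, here is a minimal Python sketch of the idea. The routing table below is a hand-written stand-in for the one in Picture 2 (it ignores longest-prefix matching and other real-world details) and is not actual router code:

```python
import ipaddress

# Simplified routing table: prefix -> (next-hop IP, egress interface).
# An entry with an interface ends the recursion (a connected route).
ROUTING_TABLE = {
    "192.168.1.0/24": ("192.168.2.254", None),
    "192.168.2.0/24": ("192.168.3.254", None),
    "192.168.3.0/24": ("10.0.0.2", None),
    "10.0.0.0/30":    (None, "GigabitEthernet0/0"),
}

def lookup(destination):
    """Recursively look up `destination` until an egress interface is found."""
    target = ipaddress.ip_address(destination)
    prefix = next((p for p in ROUTING_TABLE
                   if target in ipaddress.ip_network(p)), None)
    if prefix is None:
        return None                      # no route to the destination
    next_hop, interface = ROUTING_TABLE[prefix]
    if interface is not None:
        return interface                 # base case: connected route
    return lookup(next_hop)              # recursive lookup on the next hop

print(lookup("192.168.1.100"))           # -> GigabitEthernet0/0
```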
Note: The recursive routing table lookup described above is only valid for process switching, where the CPU is involved in every routing decision. In that case, browsing the routing table again and again until the exit interface is found slows the router down. Modern switching methods such as Cisco Express Forwarding (CEF) build a Forwarding Information Base (FIB) that contains pre-computed lookup results (IP address, next-hop IP address, next-hop MAC address and the egress port). Therefore, the lookup in the FIB is not recursive, even if the underlying routing table contains recursively chained entries.
Recursive Lookup in BGP
So far, we have explained the concept of recursion and the recursive route lookup.
But how is the recursive lookup applied in BGP and why do we need it?
The main issue with BGP is that neighbors do not have to be directly connected; they might be located several hops away. Without recursive lookups, BGP could not work, as the entire protocol is built on top of recursive routing. iBGP does not modify the next hop, leaving it at its original value. Therefore, when the router performs route recursion, the lookup can fail if there is no IGP route to the next-hop address that is advertised with the BGP prefix.
Let’s have a look at a simple network topology depicted in Picture 4. The routers R2 and R3 are iBGP peers in AS 64500.
Picture 4: Network Topology with iBGP Peers
The BGP route 18.104.22.168/32 is installed into the routing table of the router R2 only if the IP address of the next-hop attribute is reachable based on the information already stored in the routing table. The installed BGP route 22.214.171.124/32 contains a reference to that next-hop address 126.96.36.199 (Picture 5).
Picture 5: Network 188.8.131.52/32 in BGP Routing Table of R2
The network 33.33.33/32 is reachable via an IP address that is not directly connected. Since no physical interface is associated with it, the BGP route is installed in the IP routing table without any outgoing-interface information. So how does R2 find the outgoing interface? The router makes a recursive lookup to find the BGP next hop in the routing table: a BGP next-hop IP address must be reachable in order for a BGP route to be used, and that reachability information is usually provided by an IGP. The BGP next hop 184.108.40.206 is found in the routing table of R2, learned via OSPF, and the outgoing interface is GigabitEthernet0/0 (Picture 6). The first route lookup checks whether the destination prefix is in the routing table; if so, a recursive lookup is performed for its next-hop IP address, since the next hop is not a directly connected interface. The BGP recursion process is therefore: BGP route – IGP route – connected interface.
Picture 6: The next-hop IP Address 220.127.116.11 is Reachable Over OSPF Learned Route
Picture 7 confirms that the route 18.104.22.168/32 is recursive via 22.214.171.124 with the next-hop IP address 10.0.0.1 and the outgoing interface GigabitEthernet0/0.
Picture 7: Packet to 126.96.36.199 is CEF Switched
Recursion in computer science is a method of solving a problem where the solution depends on solutions to smaller instances of the same problem. BGP recursive route lookup allows a router to use the next-hop attribute to find a path to a network that the IGP is aware of. Without the recursive lookup, the Border Gateway Protocol would not work, because BGP is built on top of recursive routing.
Boost BGP Performance
Automate BGP Routing optimization with Noction IRP | <urn:uuid:0e723ad2-0ebd-4d8b-b02b-408f058734c9> | CC-MAIN-2022-40 | https://www.noction.com/blog/recursive-lookup-in-bgp | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00795.warc.gz | en | 0.904419 | 1,494 | 3.71875 | 4 |
In the data-driven era, neural networks are transforming businesses, improving everyday life and bringing us to the next level of AI.
Modelled on the functionality of human brain cells, neural networks train machines (such as smartphones or computers) to learn, recognize and make predictions as a human mind does, and to solve business problems in every domain.
“I’m not suggesting that neural networks are easy. You need to be an expert to make these things work. But that expertise serves you across a broader spectrum of applications. In a sense, all of the effort that previously went into feature design now goes into architecture design and loss function design and optimization scheme design. The manual labor has been raised to a higher level of abstraction.” ― Stefano Soatto
In this blog, we will cover the basic aspects of neural networks, along with a discussion of several neural network software packages that aim to deliver greater convenience in numerous ways, especially in everyday life.
Basics of Neural Networks
A neural network typically consists of an input layer, an output layer and a hidden layer sandwiched between them. These layers are interconnected through nodes and together form a network: a system of interconnected nodes.
Neural networks work similarly to the human brain's own network of neurons: a neuron in a neural network is a mathematical function that accumulates and categorizes information according to a particular architecture. Neural networks also bear a strong correspondence to statistical models such as curve fitting and regression analysis.
The nodes in such a network are perceptrons, which are similar to multiple linear regression models. In a multi-layer perceptron model, the perceptrons are arranged in interconnected layers (a minimal forward-pass sketch follows the list below):
the input layer to assemble input patterns,
the output layer to hold the classifications or output signals that input patterns may map to, and
the hidden layer to fine-tune the input weights until the network's margin of error is minimal.
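As a rough illustration of how these layers fit together, the sketch below (assuming NumPy is available; the weights and the input pattern are made up for the example) pushes one input pattern through a tiny two-layer perceptron network:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(3, 4))   # input layer  -> hidden layer weights
W_output = rng.normal(size=(4, 2))   # hidden layer -> output layer weights

x = np.array([0.5, -1.2, 3.0])       # one input pattern

hidden = sigmoid(x @ W_hidden)       # hidden layer: weighted inputs + activation
output = sigmoid(hidden @ W_output)  # output layer: maps to two output signals

print(output)
```

Training would then adjust W_hidden and W_output to reduce the error between the output and the desired classification.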
Best Neural Network Programs/Software
Neural network software is used to research, build, simulate and apply artificial neural networks and related concepts that imitate the biological nervous system. These packages are aimed at practical applications of ANNs such as data mining and forecasting.
Neural network software is designed and developed by a number of firms, including Google, Qualcomm Technologies and Intel, among others. It is growing in popularity due to its expanding range of applications and capabilities.
(Read also: Introduction to Neural Network and Deep Learning)
The following are some selected neural network software packages:
Best neural network software
Neural Designer is a professional application used to detect hidden patterns and complex relationships, and to anticipate trends in datasets, using neural networks.
Neural Designer has become one of the most widely used desktop applications for data mining. It employs neural networks as mathematical models that imitate the functionality of the human brain, building computational models that act like a central nervous system.
Neural Designer is a code-free app for data science and machine learning that allows you to easily build AI powered applications.- Neural Designer
TFLearn is a modular and transparent deep learning library built on top of TensorFlow, designed to provide a higher-level API to TensorFlow. It supports rapid experimentation while remaining fully transparent and compatible with TensorFlow. The current API supports multiple deep learning models, including LSTMs, PReLU and generative networks.
TFLearn has the following features (a minimal usage sketch follows the list):
Easy device placement for using multiple CPUs or GPUs.
Clear and attractive graph visualization, with details of weights, gradients, activations and more.
Helper functions to train TensorFlow graphs with multiple inputs, outputs and optimizers.
A high-level API that makes it easy to implement deep neural networks, with example tutorials.
Fast and efficient prototyping through modular built-in neural network layers, regularizers, optimizers and metrics.
Full transparency with TensorFlow: every function is built over tensors and can be used independently of TFLearn.
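For illustration, a minimal TFLearn network might look like the sketch below. The layer sizes are arbitrary and X and Y stand for your own training data, so treat this as a rough outline rather than a recipe from this article:

```python
import tflearn

# Build a small fully connected network layer by layer
net = tflearn.input_data(shape=[None, 784])
net = tflearn.fully_connected(net, 128, activation='relu')
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='adam',
                         loss='categorical_crossentropy')

model = tflearn.DNN(net)          # wrap the graph in a trainable model
# model.fit(X, Y, n_epoch=10)     # X, Y: your own training data
```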
NeuroSolutions is a neural network software development environment that combines a modular, icon-based network design interface with advanced learning algorithms such as conjugate gradients, Levenberg-Marquardt and backpropagation through time.
NeuroSolutions products are widely deployed for data mining, building predictive models through advanced preprocessing techniques and automated neural network topology search backed by distributed computing.
NeuroSolutions exposes its advanced learning procedures through easy Excel interfaces and intuitive wizards. The software also provides additional wizards for building neural network models automatically, including the Data Manager, Neural Builder and Neural Expert.
Keras is a high-level neural network library written in Python that runs on top of TensorFlow and Theano. It is an API designed around best practices for reducing cognitive load: it offers simple, consistent interfaces and minimizes the number of user actions required to perform a task.
In practice, Keras offers clear and actionable error messages, along with extensive documentation and developer guides. With an active community, it lets you go from idea to result without delay.
As a deep learning library, Keras enables fast prototyping through its modularity and flexibility. It supports convolutional neural networks (CNNs), recurrent neural networks (RNNs) and combinations of both. TensorFlow is the default backend for Keras, and because Keras models are defined in Python, debugging them is relatively easy.
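As a point of comparison, here is a minimal Keras sketch. The layer sizes are placeholders and x_train/y_train stand for your own data; it assumes the standalone keras package (or an equivalent tf.keras import) is installed:

```python
from keras.models import Sequential
from keras.layers import Dense

# A tiny fully connected classifier: 784 inputs -> 128 hidden units -> 10 classes
model = Sequential([
    Dense(128, activation='relu', input_shape=(784,)),
    Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
# model.fit(x_train, y_train, epochs=5)   # supply your own training data
```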
Microsoft Cognitive Toolkit
The Microsoft Cognitive Toolkit, or CNTK, is a commercial-grade, open-source toolkit for deep learning. CNTK offers substantial scaling potential with speed and accuracy, allowing users to extract information from massive datasets.
CNTK describes neural networks as a series of computational steps in a directed graph, where the leaf nodes of the graph represent input values or network parameters. It enables users to combine popular model types such as DNNs, CNNs, RNNs and LSTMs.
In general, the toolkit employs stochastic gradient descent learning with automatic differentiation and parallelization across multiple GPUs and servers. Microsoft products such as Skype, Cortana and Bing use this toolkit to build enterprise-level AI features.
Currently, it supports:
Common neural network modules such as fully connected layers and non-linearities,
Classification (SVM/Softmax) and regression (L2) cost functions,
An experimental reinforcement learning module based on Deep Q-Learning, and
The ability to describe and train convolutional neural networks for image processing.
Torch is an open-source scientific computing framework that supports machine learning algorithms with GPU acceleration. It uses the LuaJIT scripting language with an underlying C/CUDA implementation.
Torch provides a range of capabilities, including N-dimensional arrays, many routines for indexing, slicing and transposing, an interface to C via LuaJIT, and neural network models. The framework offers strong GPU support and can be used on iOS, Android and other platforms.
Other features include neural network and energy-based models and numerical optimization routines. Aiming to maximize flexibility and speed when developing scientific algorithms, Torch keeps the process simple thanks to a large ecosystem of community-driven packages for machine learning, computer vision, signal processing, and video and image processing.
With the emergence of neural networks, the approach is being broadly used for data analysis, where neural network simulation can make analysis faster and predictions more accurate than other methods.
For example, time series forecasting, function approximation and regression analysis can all be conducted with neural network software. Possible applications of neural networks include game forecasting, decision support, pattern recognition and automated control systems, and the method plays an important role across data mining processes and tools.
In summary, we have looked at some of the best neural network software packages, which imitate human brain functionality to process data and recognize patterns for easier and more effective decision making.
PGP, which stands for Pretty Good Privacy, is a widely used method of protecting and authenticating personal and private communication between two parties. The technology works seamlessly on a BlackBerry device, making BlackBerry PGP encryption one of the most reliable forms of secure email communication in the world.
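For readers curious about what PGP looks like in practice outside a BlackBerry, here is a rough sketch using the third-party python-gnupg wrapper. It assumes GnuPG is installed locally and that the recipient's public key is already in the keyring; the address, message and passphrase are placeholders:

```python
import gnupg

gpg = gnupg.GPG()  # uses the local GnuPG installation and keyring

# Encrypt so only the holder of the recipient's private key can read the message
encrypted = gpg.encrypt("Meet at 10:00", "recipient@example.com")
print(str(encrypted))  # ASCII-armoured ciphertext, safe to send over email

# The recipient decrypts with their private key and passphrase
decrypted = gpg.decrypt(str(encrypted), passphrase="their-secret-passphrase")
print(decrypted.data)
```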
Unfortunately, many myths about the security and reliability of BlackBerry PGP encryption persist. They tend to pop up any time there’s a high-profile story about police claiming they’ve managed to decrypt the private communication of citizens.
We’re here to bust the three biggest myths about BlackBerry PGP encryption:
Myth: Law enforcement agencies have cracked BlackBerry PGP encryption
In reality, it’s virtually impossible to crack encryption. A task like that would take thousands — perhaps even millions — of years to accomplish. You access encrypted data by bypassing or circumventing encryption, not cracking it.
In this case, for communications to be secure from law enforcement, the BlackBerry in question must be paired to a private BlackBerry Enterprise Server (BES). Essentially, a private BES is a private network, where a portion of the encryption key for the device in question is stored on a private server. The device doesn’t contain the entire key, and neither does the server. Data remains encrypted — you can’t access one without the other.
While we have no way of knowing exactly how law enforcement gains access to devices, they likely do it by obtaining passwords from individuals willing to share them, rather than by circumventing the encryption technology itself.
People, rather than technology, tend to be the weakest link in encryption.
Myth: BlackBerry devices are vulnerable to hacking
By not pairing a BlackBerry device with a private BES, it’s true that law enforcement could theoretically access it by physically removing the chips for the device in question and analyzing them forensically, or by using a debugging connection.
To avoid this, users should simply never use PGP encryption on a BlackBerry that isn’t paired to a private BES infrastructure.
In general, 80% of the devices we use every day are already infected with malware. BlackBerry PGP encryption accessed via a reliable provider isolates the use of the phone to just email. None of the other functions of the phone — web browsing, apps, texting, GPS, video, camera or microphone — are available.
This removes the opportunity for someone to use malware to circumvent the encryption.
Myth: Governments can demand access to BlackBerry PGP providers’ servers
Unfortunately, if these servers are located offshore in a politically unstable country, authorities could demand and gain access to a private data center. This is why you need to choose your service very carefully.
Myntex’s servers are managed on-site in Canada and not outsourced to a foreign location. This means we’re able to restrict access and mitigate the corruption issues that come up in other countries. | <urn:uuid:2b28bcf5-cfc6-4dd0-957c-cc8b8cf68107> | CC-MAIN-2022-40 | https://myntex.com/blog/index.php/2017/02/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00795.warc.gz | en | 0.914695 | 600 | 2.96875 | 3 |
I was reading a fascinating new paper by Richard Thaler titled, “From Cashews to Nudges: The Evolution of Behavioral Economics” (and slide-show), when it dawned on me that in dialog with clients (and some analysts) there could be a misunderstanding about the differences between AI, machine learning and decision making. The article is not about decision making per se, but in exploring ideas related to choice and behavior, Mr. Thaler does bump into the process by which people take decisions. It was this that got me thinking: what exactly is a decision?
AI and machine learning are hot right now. Check out our Key Initiative Primer for AI. These technologies are appealing since they offer the promise of great benefit. These benefits span a spectrum: at one extreme is the (significant?) increase in automation of tasks that would otherwise have been undertaken by people; at the other is the discovery of new patterns in data that help solve intractable problems or uncover previously unseen opportunities. This is where we might do well to overlay what a decision is in the context of these two extreme benefits.
For example, the benefit of automation, very frequently talked about in the press, assumes some rather obvious but important points. Not all work can be automated; if we can separate complex, cognitive work from work that is less cognitive or complex, then we might be able to automate the latter. These are not new points, since automation is not new. What is new is that AI is able to redraw the line: what was thought of as too complex and not routine enough can now be exploited with AI. AI can (so the promise goes) cope with more complex and more cognitive work than previous technologies.
This new reality will only survive the light of day if the outcome of the automated work left to AI make sense. If the new-fangled black-box takes decisions and changes outcomes that humans don’t understand, those humans will likely turn off the box. So understanding the decision to some degree is very important.
However, understanding or interpreting a decision is quite different from understanding how the algorithm works. A human should be able to grasp the principles of inputs, choices, weights and results, even if an algorithm combines many of these to an extent that we cannot fully trace the process. If the gap between the outcome and the approximate inputs is too wide, trust in the algorithm will likely fail – that's just human nature. What about AI nature?
There is of course a corollary here – there are probably just as many decisions that cannot be explained or interpreted by humans. This makes things doubly complex. Do we just trust the AI if we have no hope of explaining the outcome? I think we might improve our chances here if we focus first on the decisions we can explain, and then build out from there with trusted AI-helpers.
At the other extreme of the benefit scale we see pattern discovery and advances in medicine, science and other fields. Here is the Hail Mary: an unexplained but rich discovery of an opportunity humans are simply unable to perceive. Here we literally seek opportunities that we can't understand. But that's not the point.
At this end of the benefits scale we are not asking AI to take a decision. We are asking or employing AI and ML to discover insights. We are not performing automation. Once the pattern has been discovered, humans will then decide how to employ those insights. We are not seeking to let the algorithm find the insight then automate its application directly. That would be conflating two uses cases into one.
Therefore we still need to understand a decision for both examples:
- For use cases related to automation, the decision itself is directly in focus, so plausibility, logic or predictable results will help increase confidence in the use of AI
- For use cases related to discovery, the decision itself is not in scope and is implied only as an external or environmental event. We may or may not employ the AI-discovered insight
It is as if we are comparing apples to oranges when we talk of AI (and related machine learning techniques) and decisions. Not until a new use case for AI emerges at the intersection of the two extremes will we have the opportunity to jettison all responsibility for decision making. Maybe we won't jettison those responsibilities so quickly. My colleague Erick Brethenoux wonders if this intersection point is where user and machine come together: humans to discover how to apply decisions and measure their impact, and AI to automate part of the decision process. Time enough for that blog to be written.
What is Bitcoin?
Our guide to the world's most funded cryptocurrency
Although the cryptocurrency market has undergone wild swings in recent years, Bitcoin is still the most well-known and valuable digital token in the world. It's just one of the thousands of actively traded cryptocurrencies that users can obtain, but it is widely regarded as the one that shone the spotlight onto the phenomenon, as well as onto blockchain technology more broadly. While the rising and falling value of Bitcoin often leads to the phenomenon being described as hype, or a craze, this market activity serves to keep digital tokens in the headlines.
Fundamentally, cryptocurrencies like Bitcoin were envisaged as being an alternative to traditional finance and banking. Because they operate on a decentralised model, cryptocurrencies aren’t managed by a large financial institution like a bank and tend not to fall under regulatory oversight. This differs greatly from traditional currencies, like Pound Sterling or the US Dollar. As such, however, detractors often see cryptocurrencies as operating in an unregulated Wild West environment.
The ideas behind cryptocurrencies originated in the aftermath of the 2008 global financial crisis and took advantage of distributed ledger technology to offer those disillusioned with the financial system an alternative. Trust in massive institutions was at its lowest, and so the idea of a form of currency free of central control appealed to many. There was also an appetite among those with money to invest in a new form of digital currency, whether this was with the hope that new markets and systems would develop over the years or just to make a point.
Bitcoin, however, remained fairly obscure until very recently, and only managed to emerge into the mainstream after a great surge in 2017, where it reached just under $20,000, before falling once again. It was still relatively niche at this stage, and only truly captured the public’s attention when it exploded in value towards the end of last year, reaching a high of just over $60,000.
These wild swings, however, also illustrated its greatest flaw for those who had hoped to use it as an alternative to conventional money. Because its value varies so wildly, it’s too unreliable to be used to buy and sell material goods and services. Instead, its primary function at the moment is its use as an investment vehicle.
Who invented Bitcoin?
It's widely believed that the idea for Bitcoin was first proposed in 2008 by software developer Satoshi Nakamoto (most likely a pseudonym), who wanted to create a payment system based on mathematics. Nakamoto envisioned a currency that was designed specifically for online transactions, allowing for almost instantaneous transfers at a fraction of the cost.
How are bitcoins acquired?
Users are able to acquire Bitcoins in one of four ways:
- As payment for sold goods or services
- As a transfer from one person to another
- Bought through a Bitcoin exchange
- Competitive 'mining'
Unlike paper money, which is printed and distributed by government services, Bitcoin is 'mined' using software that solves complex mathematical problems. Every time a problem is solved, the network adds a new 'block' to a chain that is set at 1MB in size. With each solution, the miner is rewarded a number of Bitcoins that remains constant. The number of Bitcoins generated per block started at 50, and has halved every 210,000 blocks, or every four years.
Today the reward is set at 6.25 Bitcoins. This represents a problem for Bitcoin miners, as hardware costs and substantial electricity bills are increasingly making mining unprofitable as the equations get increasingly complex.
Another problem facing Bitcoin is that as more people decide to join the mining community, the more difficult the mathematical problems need to be. An indeterminate number of new miners makes it impossible to accurately predict how long it will take to mine Bitcoin each month.
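The "mathematical problem" miners race to solve is essentially a brute-force search for a hash value with a required number of leading zeros. The toy Python sketch below captures the idea only; real Bitcoin mining hashes an actual block header with double SHA-256 at a vastly higher difficulty:

```python
import hashlib

def mine(block_data, difficulty):
    """Search for a nonce whose SHA-256 digest starts with `difficulty` zeros."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("list of pending transactions", difficulty=4)
print(nonce, digest)  # each extra leading zero makes the search ~16 times harder
```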
Although 'mining' is the only way to actually create Bitcoins, today users will most likely purchase Bitcoins at a Bitcoin exchange. A number of marketplaces have popped up since the currency became popular, allowing people to buy and sell Bitcoins using other conventional currencies.
How is Bitcoin used?
Bitcoins are stored in a digital wallet that is saved to a user's PC or in the cloud. The wallet acts as a virtual bank account, allowing users to pay for goods and services by sending Bitcoins to another wallet.
The details of every Bitcoin transaction ever made are stored using blockchain, a system designed specifically for the use of Bitcoin that has since become widely popular for other services. The advantage of blockchain is that it provides a means to store information in a series of connected 'blocks' that update in real-time. It's maintained by a peer-to-peer network, free of centralised management, and is almost impossible to edit. For more information head to our blockchain explainer.
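A stripped-down illustration of why such a chain is so hard to edit: each block's hash covers the previous block's hash, so altering any old block changes every hash after it. Again, this is a toy sketch with made-up transactions, not Bitcoin's actual data structures:

```python
import hashlib

def block_hash(previous_hash, transactions):
    """Hash a block's contents together with the hash of the block before it."""
    return hashlib.sha256(f"{previous_hash}:{transactions}".encode()).hexdigest()

chain = []
prev = "0" * 64  # the first ("genesis") block has no predecessor
for txs in ["alice->bob 1 BTC", "bob->carol 2 BTC", "carol->dave 0.5 BTC"]:
    prev = block_hash(prev, txs)
    chain.append(prev)

print(chain[-1])  # tampering with any earlier block would change this value
```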
Bitcoin is also incredibly easy to use, and there is no need to go through bank applications to set up an account. You are able to send and receive Bitcoins from anywhere in the world at any time, processed in minutes by the Bitcoin network. Transactions are also entirely anonymous, as you are not required to tie personal details to a Bitcoin account.
Where is Bitcoin accepted?
The list of services accepting the cryptocurrency is slowly expanding, particularly given its strong performance over the past year. There are a number of particularly high profile companies already making use of Bitcoin, including the Microsoft Windows and Xbox stores, Subway, Reddit, Expedia.com, gaming service Steam, and technology companies such as Dell and Tesla. Recently, companies such as VISA and PayPal have started allowing customers to trade using cryptocurrencies including Bitcoin.
Are there problems with using Bitcoin?
Despite using the highly robust blockchain system, security remains an issue. There have been a number of high profile hacks of Bitcoin services over the past few years, most notably the breach of one of the largest Bitcoin exchange services, Mt Gox, which lost almost 750,000 Bitcoins worth $350 million. As the transfer of Bitcoins is irreversible, breaches of this kind make it impossible to recover funds.
The main issue with Bitcoin is its volatility. As it is almost impossible to predict the value of the currency in the long term, or to judge how difficult it will be to mine, there are still too many uncertainties for some. There is also concern that the network will become oversaturated and unusable, as more people flood the mining community and make Bitcoin mining too difficult.
What are Bitcoin forks?
Bitcoin's rise has been far from smooth. As the cryptocurrency is decentralised, its development is decided by reaching a consensus within its community. Over the past year there have been two major 'forks' in Bitcoin, in which community groups had different ideas about how to improve Bitcoin's underlying blockchain, leading to the creation of new cryptocurrencies based on Bitcoin.
In August, a split over ways to improve Bitcoin transaction speeds resulted in the creation of Bitcoin Cash, a now separate cryptocurrency. Similarly, in October we saw the creation of Bitcoin Gold, conceived by a splinter group of developers that wanted to make it cheaper to mine the currency.
You would think these turbulent splits would've proved disastrous for Bitcoin, yet all signs suggest they did little to impede its momentum. Prices barely moved after the creation of Bitcoin Cash, and Bitcoin Gold has had even less impact so far. What's more, as each split allows Bitcoin to improve its blockchain, and as long as it's able to weather the fallout, these turbulent episodes are actually proving worthwhile.
Space exploration is evolving from a government-led venture to a rich collaboration with the private sector.
Much to the chagrin of any kids who grow up idolizing astronauts, the start of the 21st century has marked an era of cutbacks for NASA. The space shuttle program shut down in 2011. A year later, Congress trimmed NASA’s overall budget appropriation by $648 million. Now that the Constellation program has lost its funding, prospects look bleak for sending astronauts back to the moon. In 1980, the United States had 100 percent of the global rocket launch capability. That share dropped to zero several years ago, but is now inching back up, thanks to private space flight.
The dismal outlook is a far cry from the heady days of the 1960s. It was then, just three years after NASA was created in 1958, that President John F. Kennedy issued his famous directive declaring that by the end of the decade the United States would send a man to the moon and bring him home safely.
NASA has radically scaled back, but space travel survives. How? In the face of fiscal constraints, the agency has changed its role. For the United States, space exploration is evolving from a government-led venture to a rich collaboration with the private sector.
As NASA has reduced its commitments, a dynamic private sector space ecosystem has sprung up vigorously into the void, and with the agency’s strong support. Richard Branson’s Virgin Galactic, for example, is developing a spacecraft to launch tourists into orbit and facilitate at least $4.5 million in NASA research contracts, prompting New Mexico to build a $209 million spaceport. Blue Origin, led by Amazon.com founder and chief executive Jeff Bezos, is developing space vehicles designed to launch and land on retractable legs. A startup called NanoRacks helps scientists who need zero-gravity environments transport their experiments to the International Space Station.
Many other companies, including Orbital Sciences, XCOR Aerospace, and Boeing, are testing vehicles for space travel. NASA is helping Moon Express Inc. develop robots to search the moon for precious metals. XCOR Aerospace is developing a two-seater Lynx vehicle to shuttle passengers to space for $95,000 a trip. Space Adventures has already sent seven people to the International Space Station from a Soviet-era launch facility in Kazakhstan.
One of the most interesting players in the new space ecosystem is SpaceX, of Hawthorne, Calif. SpaceX has more than $3 billion in contracts for more than 30 launches, including $1.6 billion from NASA. Its unmanned Dragon capsule docked on the space station in May 2012, in what was likely one of many supply runs to come.
Launched in 2002 by Elon Musk, the co-founder of PayPal and Tesla Motors, SpaceX intends to vastly reduce the cost of space ventures. “Today it costs over a billion dollars for a space shuttle flight,” Musk says. “The cost . . . is fundamentally what’s holding us back from becoming a space traveling civilization and ultimately a multiplanet species.”
Surprisingly, NASA feels no sense of rivalry with these emerging space entrepreneurs. “We have an enlightened self-interest in seeing the industry players do well,” explains Joe Parrish, NASA’s deputy chief technologist. Not only has the agency welcomed the new players in space, but it has also radically reengineered its own business model to take advantage of outside innovation. This approach sets NASA apart from most other government agencies.
“Partnering with U.S. companies such as SpaceX to provide cargo and eventually crew service to the International Space Station is a cornerstone of the president’s plan for maintaining America’s leadership in space,” says John P. Holdren, assistant to the president for science and technology. “This expanded role for the private sector will free up more of NASA’s resources to do what NASA does best—tackle the most demanding technological challenges in space, including those of human space flight beyond low Earth orbit.”
NASA shows how an organization can nimbly adapt to resource constraints, offering the following lessons for agencies shifting roles within their fields:
- Instead of seeing new entrants as a threat, consider potential win-win scenarios that also yield public value.
- Support the development of platforms and exchanges that enable different providers to work together toward solving the big problems that affect everyone. You can’t begin to think about ways to combine capabilities with partners unless you know who they are and their specialties, a process that platforms can simplify.
- Get creative about the resources you can bring to the emerging ecosystem and that will provide a springboard for solutions. Perhaps it is funding, or convening a multidisciplinary team of wavemakers or something as simple as physical space for early-stage innovators to experiment side by side.
Pooling these disparate resources will reinforce that there’s more support available for problem-solving than one solitary approach. This awareness boosts not only your organization’s morale, but also the chance of reaching a solution.
William D. Eggers, leader of public sector research at Deloitte, and Paul Macmillan, the global public sector leader for Deloitte Touche Tohmatsu, are the authors of The Solution Revolution: How Business, Government, and Social Enterprises are Teaming up to Solve Society’s Toughest Problems (Harvard Business Press, 2013), which was released on Tuesday. | <urn:uuid:04431913-edb8-4fc2-b8fd-3bbc9c23f65a> | CC-MAIN-2022-40 | https://www.nextgov.com/emerging-tech/2013/09/analysis-nasas-new-role-partner/70468/?oref=ng-next-story | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00195.warc.gz | en | 0.941692 | 1,132 | 3.3125 | 3 |
By: Microtek Learning
Sep. 14, 2022
Last Updated On: Sep. 14, 2022
Data significantly impacts the healthcare sector's cutting-edge approach to improving patient outcomes. Due to the nature of American healthcare, ensuring EHR records are organized correctly for interoperability throughout the healthcare system is crucial to their effective usage.
When comparing the HL7 and FHIR standards, there is more to keep in mind than the names suggest. Understanding the fundamental differences can benefit your organization's efficiency and workflow. Let's begin by defining HL7 and FHIR.
The term "HL7" (Health Level Seven) refers to a collection of global data-sharing standards that have gained popularity thanks to Health Level Seven International, a non-profit organization specializing in healthcare interoperability. It functioned as an early encoding system to allow safe healthcare documents and messaging sharing between healthcare organizations.
Our healthcare system comprises a sizable number of loosely connected public and private providers, each with its own way of operating. Because EHR systems are largely unprepared for interoperability, information exchange standards like HL7 are essential for facilitating communication between these organizations.
While the creation of HL7 has been a significant step toward the standardization of health records, integration issues are still all too common, and implementation differs greatly between organizations.
When it was first introduced in 1989, HL7 V2 became one of the most widely used healthcare standards worldwide. The majority of American medical organizations already use this standard, according to HL7. HL7 V2 works by providing a common language that systems such as healthcare information systems, electronic medical record (EMR) systems, billing systems and laboratory information systems use to communicate with one another. The messages are written in ASCII text format. Systems send messages to one another when, for example, a patient is admitted to a clinic, a specialist sends a prescription to a pharmacy, or a healthcare provider bills a patient.
HL7 V2 has helped healthcare companies avoid the difficult software development labor previously required to construct interfaces by allowing multiple systems to connect. The standard leaves a decent amount of work for developers because it was designed to be adaptable and changeable.
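To give a feel for the format, here is a simplified, made-up fragment of an HL7 V2 admission-style message parsed with plain Python. The segments and field values are illustrative only and do not form a complete, valid message:

```python
# Simplified, made-up ADT (admit/discharge/transfer) style message
hl7_message = (
    "MSH|^~\\&|SENDING_APP|SENDING_FAC|RECEIVING_APP|RECEIVING_FAC|"
    "202201011200||ADT^A01|MSG00001|P|2.3\r"
    "PID|1||12345^^^HOSP^MR||DOE^JANE||19800101|F"
)

# Each segment is one line; fields within a segment are separated by '|'
for segment in hl7_message.split("\r"):
    fields = segment.split("|")
    print(fields[0], fields[1:5])   # segment name plus the first few fields
```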
Fast Healthcare Interoperability Resources (FHIR), a significant replacement for HL7 V2 and V3 standards, was introduced in 2014 by HL7. Within a few years, FHIR has won the support of a number of prestigious healthcare organizations, including SMART (Substitutable Medical Applications, Reusable Technologies), CommonWell Health Alliance, and even Apple, with the inclusion of FHIR in the iPhone Health app.
With the help of the open standard FHIR, originally drafted in 2011, legacy systems and new apps may communicate data more quickly than in the past. FHIR was created to simplify implementation compared to earlier standards, provide readily understandable specifications, and allow developers to take advantage of widely used Web technologies. It was created to promote interoperability and communication efficiency.
FHIR expands upon earlier standards, including HL7 V2, HL7 V3, and CDA (the Clinical Document Architecture subset of HL7 V3). But unlike those prior standards, FHIR uses open web technologies such as RESTful web services and JSON and RDF data formats in addition to XML, which was the data format utilized by earlier standards. Compared to other standards, the learning curve should be less steep for developers thanks to these features.
FHIR also provides a variety of possibilities for system-to-system data exchange. It supports, for instance, messaging (comparable to HL7 V2), documents (comparable to CDA), and a RESTful API strategy. This RESTful method offers increased interoperability among various systems and devices, including mobile devices, mobile apps, medical devices, wearables, and electronic health record (EHR) systems.
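In practice, reading a FHIR resource is just an ordinary HTTPS request returning JSON. The sketch below uses Python's requests library against a hypothetical FHIR server; the base URL and patient ID are placeholders:

```python
import requests

BASE_URL = "https://fhir.example.org/r4"   # placeholder FHIR server
response = requests.get(
    f"{BASE_URL}/Patient/123",             # FHIR REST pattern: [base]/Patient/[id]
    headers={"Accept": "application/fhir+json"},
)
response.raise_for_status()

patient = response.json()                  # FHIR resources are plain JSON
print(patient.get("resourceType"), patient.get("birthDate"))
```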
In 2020, the U.S. Centers for Medicare & Medicaid Services (CMS) mandated the use of FHIR by many CMS-regulated payers and providers, starting in mid-2021.
There are a number of challenges and things to keep in mind as you prepare to adopt FHIR.
The adoption process will take time: FHIR is relatively young and evolving, whereas HL7 has been a widely used standard for sharing healthcare information for many years. There is little question that FHIR has many advantages, but adoption will be slow.
Adoption costs: For large healthcare organizations, the costs of adoption can be substantial and complicated.
Risks that vendors might face: Some health IT companies may be put at risk because FHIR can shorten the time it takes to deploy standard health records, which reduces income.
Using FHIR opens up new possibilities for cloud communications and mobile health applications, enabling more sophisticated integration and improved interoperability. FHIR has become the fundamental component for better patient care by enabling EHRs to interface with one another.
In terms of EHR spending, the majority of healthcare companies are currently highly keen to adapt to the latest technologies and standards. To help you achieve the actual outcomes you require, Microtek Learning provides you with the best HL7 FHIR SMART integration services. | <urn:uuid:d307d480-3c02-437c-a4ad-4578295f0612> | CC-MAIN-2022-40 | https://www.microteklearning.com/blog/what-is-the-real-difference-between-hl7-and-fhir/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00396.warc.gz | en | 0.945223 | 1,085 | 2.65625 | 3 |
As parents, we want to make sure our children are consuming content that is enriching, fun, and safe. With a plethora of kids’ media out there, it can be overwhelming to vet content and filter out anything that can be harmful or just a waste of time.
In part 1 of our media and technology guide for families, we explained how and why to create a media use plan for your household to help everyone in the home manage their digital experiences. Now that you’ve established some guidelines for your family, let’s talk about some age-appropriate media and how to tell the good from the bad.
The best of TV kids’ media
Experts agree letting your kids watch a limited amount of educational television can be harmless, or even beneficial to their cognitive development. However, plopping them down in front of the TV for hours at a time to watch whatever comes on can lead to behavioral problems, poor sleep, and aggression.
Despite the negative side-effects, a number of households leave the TV on for hours at a time. One 2017 study of children ages 0 to 8 found 42 percent of those households said the TV is on “always” or “most of the time,” whether or not anyone is watching.
The good news is, with the explosion of TV content creation prompted by streaming TV platforms and networks, TV can be more than just a digital pacifier in this day and age. There’s a lot of really great programming that families can access at a low cost through streaming services, through their regular TV package or on broadcast stations.
Recommended TV shows for little kids (3-5):
In this age group, there is no shortage of kids’ media out there with colorful characters that will delight your little one. But be sure to look for age-appropriate media with relatively slow-moving stories, positive messages, and characters that model healthy relationships.
- Science and nature: Octonauts; The Cat in the Hat Knows a Lot About That; Doc McStuffins
- Problem-solving: Puffin Rock; Handy Manny; Curious George; Peg + Cat
- Creative exploration: Little Einsteins; Creative Galaxy
- Social awareness: Mr. Roger’s Neighborhood; Sesame Street
Recommended TV shows for big kids (6-9):
As kids grow into the elementary school years, they become more aware of themselves, their own learning, and their place in a complex world. Find age-appropriate media that help them understand how to navigate social interaction with children and adults, as well as build their understanding of both academic and creative concepts.
- Science and inquiry: Ask the Storybots; MythBusters; Odd Squad; SciGirls; Wild Kratts
- Creativity: Master Chef Jr.; World of Dance
- Silly fun: We Bare Bears; Gortimer Gibbon’s Life on Normal Street
- Social awareness: 100 Things to Do Before High School; The Next Step
Recommended TV shows for tweens and middle-schoolers (10-13):
The pre-teen years bring an explosion in brainpower, self-awareness, and emotional growth. Kids at this age need positive but realistic models to emulate, as well as healthy doses of both goofiness and drama. Try to find kids’ media that satisfy a growing interest in action, romance or horror without going too far for their maturity level. As tweens turn into teens, they’ll also want to watch more shows that cater to a particular genre or interest. You will likely want to create your own list by searching for personalized recommendations, such as “best teen anime shows” or “best teen shows about sports.”
- Social awareness: Who Do You Think You Are?; Speechless; I Am Jazz
- Silly fun: Adventure Time; Steven Universe; Doctor Who; Bravest Warriors
- True-to-life drama: Andi Mack; Boy Meets World; Being You
More and more, parents are shifting their focus to helping their kids build key skills through their technology use, rather than focusing only on rigid recommendations on screen time. It’s a matter of considering quality over (or in addition to) quantity.
As you load up a device with apps for your kids, there are a number of key questions you can ask to help you pick and choose.
Is it appropriate for their age, maturity level, and stage of development?
Look at age ratings whenever available and read reviews by other parents. This can help you determine how relevant age-appropriate media ratings are for your individual child. Consider not only the most obvious inappropriate content, such as graphic violence or sexually suggestive imagery. Sassy attitudes and immature arguing between characters can also have a negative effect on young children who imitate what they see. Shows or games that feature a lot of fast action or fighting are also more likely to lead to aggression or hyperactivity.
But it all depends upon your child’s temperament. It’s important to customize your choices and be a close observer of your child during and after their media use, so you can note any negative patterns.
Does it allow for some degree of creativity, collaboration or exploration?
Experts agree active play is better than passive. In other words, the more they can interact with “loose parts” in digital form to create something original, the better. Look for kids’ media apps that give them the opportunity to build their own scenes or characters, create fashion or art, make their own recipes, or explore imaginary worlds freely. If they are old enough to play games online, search for collaborative games to focus on choices that promote working together over winning.
Is it highly rated and from a reputable source?
One way to tell is to check how many people have downloaded it in the app store. Not to say a new or underdog program with few downloads is always bad, but there are many app factories out there pumping out volumes of low-quality apps, and one way you can identify them is when they have only 1K or 5K downloads. In addition to the questionable quality of content, an app that doesn’t have a lot of downloads might not be well-maintained and can present a security risk to your device.
Be wary, too, of any games or apps that haven’t been reviewed by other parents or specifically for use by children. When searching in your app store, get in the habit of filtering search results by ratings, so you only see the best options and don’t have to wade through many that are less reputable.
For even more in-depth ratings and information about age-appropriate media and apps for your children, you can use a free website/app such as Common Sense Media or Entertainment Software Rating Board to find other parents’ and kids’ ratings, as well as screenshots, detailed reviews, and additional information about the types of imagery, language and themes contained in various kinds of media.
Does it feature a lot of advertisements or push users into additional in-app purchases?
A free or low-cost app can end up costing you a lot of money over time. How? Many apps and games feature numerous opportunities for the user to spend real money to unlock levels, purchase special tools or character powers, etc. and often it only takes a simple click. So read parent reviews and monitor the games your kids play on a regular basis (as well as your own bank account) to avoid any unpleasant surprises. Also, be aware these kinds of apps aren’t necessarily the highest quality choices. As a rule, steer clear of the ones that hound unsuspecting users to spend money at every turn.
Is it safe and secure?
It’s not always obvious how an app collects and uses personal information of users. For older kids, this becomes an especially important question. The Children’s Online Privacy Protection Act (COPPA) is a policy intended to guard kids’ privacy and safety when using the internet, and requires websites and apps to inform users when they gather information from children under the age of 13.
But legal notifications are famous for being difficult to decipher. It’s best to do a little research, read ratings, and search for parent reviews that discuss how kids interact with others in different apps and sites. Be wary of any platform that allows kids to interact freely with other users, especially adults. And be especially cautious about apps that use GPS to share players’ locations, as this information can be harmful in the wrong hands.
For more recommendations, check out Common Sense Media’s “best of” lists for both apps and websites for kid’s media, with dozens of top choices selected by their staff and by parents, organized by age range.
Learn about technology together
Which changes faster, your child or your technology? Kids’ media has quickly become both an amazing tool for parents and a complex obstacle course for them to cross. The choices we make for (and with) our kids when it comes to how they engage with media will have a real impact on them as they grow up. But thankfully, there’s no shortage of great resources available to learn — and keep learning — what’s age-appropriate and beneficial for kids’ development in the ever-changing world of media technology. | <urn:uuid:6d017da6-416e-4d4a-8371-ca73d481dab0> | CC-MAIN-2022-40 | https://discover.centurylink.com/family-tech-guide-part-2-kid-friendly-media-options.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00396.warc.gz | en | 0.942078 | 1,935 | 2.96875 | 3 |
DROWN is a serious vulnerability that affects HTTPS and other services that rely on SSL and TLS, some of the essential cryptographic protocols for Internet security. These protocols allow everyone on the Internet to browse the web, use email, shop online, and send instant messages without third-parties being able to read the communication.
DROWN allows attackers to break the encryption and read or steal sensitive communications, including passwords, credit card numbers, trade secrets, or financial data. At the time of public disclosure in March 2016, our measurements indicated 33% of all HTTPS servers were vulnerable to the attack. Fortunately, the vulnerability is much less prevalent now. As of 2019, SSL Labs estimates that 1.2% of HTTPS servers are vulnerable.
Any communication between users and the server. This typically includes, but is not limited to, usernames and passwords, credit card numbers, emails, instant messages, and sensitive documents. Under some common scenarios, an attacker can also impersonate a secure website and intercept or change the content the user sees.
Websites, mail servers, and other TLS-dependent services are at risk for the DROWN attack. At the time of public disclosure, many popular sites were affected. We used Internet-wide scanning to measure how many sites are vulnerable:
|Category|Vulnerable at disclosure|
|HTTPS — Top one million domains|25%|
|HTTPS — All browser-trusted sites|22%|
|HTTPS — All sites|33%|
Operators of vulnerable servers need to take action. There is nothing practical that browsers or end-users can do on their own to protect against this attack.
Modern servers and clients use the TLS encryption protocol. However, due to misconfigurations, many servers also still support SSLv2, a 1990s-era predecessor to TLS. This support did not matter in practice, since no up-to-date clients actually use SSLv2. Therefore, even though SSLv2 is known to be badly insecure, until now, merely supporting SSLv2 was not considered a security problem, because clients never used it.
DROWN shows that merely supporting SSLv2 is a threat to modern servers and clients. It allows an attacker to decrypt modern TLS connections between up-to-date clients and servers by sending probes to a server that supports SSLv2 and uses the same private key.
A server is vulnerable to DROWN if:
- it allows SSLv2 connections, or
- its private key is also used on any other server that allows SSLv2 connections, even for another protocol.
To protect against DROWN, server operators need to ensure that their private keys are not used anywhere with server software that allows SSLv2 connections. This includes web servers, SMTP servers, IMAP and POP servers, and any other software that supports SSL/TLS.
Disabling SSLv2 can be complicated and depends on the specific server software. We provide instructions here for several common products:
OpenSSL: OpenSSL is a cryptographic library used in many server products. For users of OpenSSL, the easiest and recommended solution is to upgrade to a recent OpenSSL version. OpenSSL 1.0.2 users should upgrade to 1.0.2g. OpenSSL 1.0.1 users should upgrade to 1.0.1s. Users of older OpenSSL versions should upgrade to either one of these versions. More details can be found in this OpenSSL blog post.
(Updated March 13th, 16:00 UTC) Microsoft IIS (Windows Server): Support for SSLv2 on the server side is enabled by default only on the OS versions that correspond to IIS 7.0 and IIS 7.5, namely Windows Vista, Windows Server 2008, Windows 7 and Windows Server 2008R2. This support can be disabled in the appropriate SSLv2 subkey for 'Server', as outlined in KB245030. Even if users have not taken the steps to disable SSLv2, the export-grade and 56-bit ciphers that make DROWN feasible are not supported by default.
Network Security Services (NSS): NSS is a common cryptographic library built into many server products. NSS versions 3.13 (released back in 2012) and above should have SSLv2 disabled by default. (A small number of users may have enabled SSLv2 manually and will need to take steps to disable it.) Users of older versions should upgrade to a more recent version. We still recommend checking whether your private key is exposed elsewhere.
Other affected software and operating systems:
Instructions and information for: Apache, Postfix, Nginx, Debian, Red Hat
Browsers and other clients: There is nothing practical that web browsers or other client software can do to prevent DROWN. Only server operators are able to take action to protect against the attack.
DROWN: Breaking TLS
using SSLv2 [PDF]
Nimrod Aviram, Sebastian Schinzel, Juraj Somorovsky, Nadia Heninger, Maik Dankel, Jens Steube, Luke Valenta, David Adrian, J. Alex Halderman, Viktor Dukhovni, Emilia Käsper, Shaanan Cohney, Susanne Engels, Christof Paar, and Yuval Shavitt
25th USENIX Security Symposium, Austin, TX, August 2016
More: Conference paper | Bibtex | Original tech report
The team can be contacted at firstname.lastname@example.org.
DROWN stands for Decrypting RSA with Obsolete and Weakened eNcryption.
For the complete details, see our full technical paper. We also provide a brief technical summary below:
In technical terms, DROWN is a new form of cross-protocol Bleichenbacher padding oracle attack. It allows an attacker to decrypt intercepted TLS connections by making specially crafted connections to an SSLv2 server that uses the same private key.
The attacker begins by observing roughly several hundred connections between the victim client and server. The attacker will eventually be able to decrypt one of them. Collecting this many connections might involve intercepting traffic for a long time or tricking the user into visiting a website that quickly makes many connections to another site in the background. The connections can use any version of the SSL/TLS protocol, including TLS 1.2, so long as they employ the commonly used RSA key exchange method. In an RSA key exchange, the client picks a random session key and sends it to the server, encrypted using RSA and the server’s public key.
Next, the attacker repeatedly connects to the SSLv2 server and sends specially crafted handshake messages with modifications to the RSA ciphertext from the victim’s connections. (This is possible because unpadded RSA is malleable.) The way the server responds to each of these probes depends on whether the modified ciphertext decrypts to a plaintext message with the right form. Since the attacker doesn’t know the server’s private key, he doesn’t know exactly what the plaintext will be, but the way that the server responds ends up leaking information to the attacker about the secret keys used for the victim’s TLS connections.
The way this information is leaked can take two forms:
In the most general variant of DROWN, the attack exploits a fundamental weakness in the SSLv2 protocol that relates to export-grade cryptography that was introduced to comply with 1990s-era U.S. government restrictions. The attacker’s probes use a cipher that involves only 40 bits of RSA encrypted secret key material. The attacker can tell whether his modified ciphertext was validly formed by comparing the server’s response to all 240 possibilities—a moderately large computation, but one that we show can be inexpensively performed with GPUs. Overall, roughly 40,000 probe connections and 250 computation is needed to decrypt one out of 900 TLS connections from the victim. Running the computations for the full attack on Amazon EC2 costs about $440.
A majority of servers vulnerable to DROWN are also affected by an OpenSSL bug that results in a significantly cheaper version of the attack. In this special case, the attacker can craft his probe messages so that he immediately learns whether they had the right form without any large computation. In this case, the attacker needs about 17,000 probe connections in total to obtain the key for one out of 260 TLS connections from the victim, and the computation takes under a minute on a fast PC.
This special case stems from the complexity introduced by export-grade cryptography. The OpenSSL bug allows the attacker to mix export-grade and non-export-grade crypto parameters in order to exploit unexpected paths in the code.
This form of the attack is fast enough to allow an online man-in-the-middle (MitM) style of attack, where the attacker can impersonate a vulnerable server to the victim client. Among other advantages, such an attacker can force the client and server to use RSA key exchange (and can then decrypt the connection) even if they would normally prefer a different cipher. This lets the attacker target and break connections between modern browsers and servers that prefer perfect-forward-secret key exchange methods, such as DHE and ECDH.
We were able to execute this form of the attack in under a minute on a single PC.
DROWN was developed by researchers at Tel Aviv University, Münster University of Applied Sciences, Ruhr University Bochum, the University of Pennsylvania, the Hashcat project, the University of Michigan, Two Sigma, Google, and the OpenSSL project: Nimrod Aviram, Sebastian Schinzel, Juraj Somorovsky, Nadia Heninger, Maik Dankel, Jens Steube, Luke Valenta, David Adrian, J. Alex Halderman, Viktor Dukhovni, Emilia Käsper, Shaanan Cohney, Susanne Engels, Christof Paar, and Yuval Shavitt
The team can be contacted at email@example.com.
Yes. The DROWN attack itself was assigned CVE-2016-0800.
DROWN is made worse by two additional OpenSSL implementation vulnerabilities. CVE-2015-3197, which affected OpenSSL versions prior to 1.0.2f and 1.0.1r, allows a DROWN attacker to connect to the server with disabled SSLv2 ciphersuites, provided that support for SSLv2 itself is enabled. CVE-2016-0703, which affected OpenSSL versions prior to 1.0.2a, 1.0.1m, 1.0.0r, and 0.9.8zf, greatly reduces the time and cost of carrying out the DROWN attack.
Yes. We’ve been able to execute the attack against OpenSSL versions that are vulnerable to CVE-2016-0703 in under a minute using a single PC. Even for servers that don’t have these particular bugs, the general variant of the attack, which works against any SSLv2 server, can be conducted in under 8 hours at a total cost of $440.
Here are some examples.
We have no reason to believe that DROWN has been exploited in the wild prior to this disclosure. Since the details of the vulnerability are now public, attackers may start exploiting it at any time, and we recommend taking the countermeasures explained above as soon as possible.
Indeed, SSLv2 has long known to be weak when clients and servers use it to communicate, and so nearly every modern client uses a more recent protocol. DROWN shows that merely allowing SSLv2, even if no legitimate clients ever use it, is a threat to modern servers and clients. It allows an attacker to decrypt modern TLS connections between up-to-date clients and servers by sending probes to any server that supports SSLv2 using the same private key.
No. DROWN allows an attacker to decrypt one connection at a time. The attacker does not learn the server’s private key.
Yes. Some variants of the attack can be used to perform MitM attacks against TLS or QUIC. More details can be found in sections 5.3 and 7 of the technical paper.
Surprisingly, no. The active MitM form of the attack allows an attacker to target servers and clients that prefer non-RSA key exchange methods. See sections 5.3 and 7 of the technical paper.
Probably not. As the attacker does not learn the server’s private key, there’s no need to obtain new certificates. The only action required is disabling SSLv2 as per the countermeasures explained above. If you cannot confidently determine that SSLv2 is disabled on every device or server that uses your server’s private key, you should generate a fresh key for the server and obtain a new certificate.
No. There is nothing practical that web browsers or other client software can do to prevent DROWN. Only server operators are able to take action to protect against the attack.
Yes, that’s a reasonable precaution, although it will also prevent our scanners from being able to help you identify vulnerable servers. You might consider first running the test suite to identify vulnerable servers and only then filtering SSLv2 traffic. You should also use the countermeasures explained above.
Possibly. If you run a server and can be certain no one made a large number of SSLv2 connections to any of your servers (for example, by examining IDS or server logs), then you weren’t attacked. Your logs may contain a small number of SSLv2 connections from the Internet-wide scans that we conducted over the past few months to measure the prevalence of the vulnerability.
Yes. Even if you’re certain that you have SSLv2 disabled on your HTTPS server, you may be reusing your private key on another server (such as an email server) that does support SSLv2. We recommend manually inspecting all servers that use your private key.
Security against DROWN is not possible for that embedded device. If you must keep that device running, make sure it uses a different RSA private key than any other servers and devices. You can also limit the scope of attack by using a firewall to filter SSLv2 traffic from outside your organization. In all circumstances, maintaining support for SSLv2 should be a last resort.
Unfortunately, no. Although SSLLabs provides an invaluable suite of security tests, right now it only checks whether your HTTPS server directly allows SSLv2. You’re just as much at risk if your site’s certificate or key is used anywhere else on a server that does support SSLv2. Common examples include SMTP, IMAP, and POP mail servers, and secondary HTTPS servers used for specific web applications.
You can also download and run our scanner utility. This utility only detects SSLv2 support on a single port. It cannot detect the common scenario, explained above, where a web server that doesn't support SSLv2 is vulnerable because it shares its public key with an email server that does.
Due to CVE-2015-3197, OpenSSL may still accept SSLv2 connections even if all SSLv2 ciphers are disabled.
Not in the immediate future. There are still too many servers vulnerable to the attack.
For the third time in a year, a major Internet security vulnerability has resulted from the way cryptography was weakened by U.S. government policies that restricted exporting strong cryptography until the late 1990s. Although these restrictions, evidently designed to make it easier for NSA to decrypt the communication of people abroad, were relaxed nearly 20 years ago, the weakened cryptography remains in the protocol specifications and continues to be supported by many servers today, adding complexity—and the potential for catastrophic failure—to some of the Internet’s most important security features.
The U.S. government deliberately weakened three kinds of cryptographic primitives: RSA encryption, Diffie-Hellman key exchange, and symmetric ciphers. FREAK exploited export-grade RSA, and Logjam exploited export-grade Diffie-Hellman. Now, DROWN exploits export-grade symmetric ciphers, demonstrating that all three kinds of deliberately weakened crypto have come to put the security of the Internet at risk decades later.
Today, some policy makers are calling for new restrictions on the design of cryptography in order to prevent law enforcement from “going dark.” While we believe that advocates of such backdoors are acting out of a good faith desire to protect their countries, history’s technical lesson is clear: weakening cryptography carries enormous risk to all of our security. | <urn:uuid:8f0388f8-33b4-4e35-9c8b-7223f437fd7e> | CC-MAIN-2022-40 | https://drownattack.com/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00396.warc.gz | en | 0.916604 | 3,418 | 2.859375 | 3 |
What in the world is smart dust?
It sounds like something lifted straight from the pages of hard science fiction or the world of fictional superspies: a linked network of millions of tiny intelligent devices, each no more than the size of a grain of rice, communicating with each other as they continually monitor their surroundings and stream out data on chemistry, air or water pressure, vibration, temperature, light and more.
The man who coined the term, Kris Pister, told IDG Connect last April that the name “…was kind of a joke – everything in the US and LA at that time seemed to be ‘smart’, smart bombs, smart houses, smart roads…” His recollection can be counted on, since he co-authored the paper “Smart Dust: Autonomous sensing and communication in a cubic millimeter” back in 1997, when the technology was still in its infancy. He was also one of the first researchers to develop functional prototypes of the technology.
Fast forward twenty years since that curious name came to be; now over a dozen leading companies in the US and abroad, not to mention numerous research labs and institutions, are quietly working to harness the power of Bluteooth® 5.0, 5G wireless connectivity and related technologies to make the next industrial revolution, with smart dust at its core, a reality.
So once again, what exactly is smart dust, and what are its potential applications and benefits? According to the Engineering and Technology website,
Smart dust will enable the wireless, real-time collection of data via miniaturized low-power sensors, transforming our understanding of structures, systems and the environment. An evolution of wireless sensor networks, the advent of smart dust will see the distribution of billions or trillions of devices, each capable of transmitting specific feedback including data on vibrations, sound, temperature, pressure or chemistry. Powered by battery or kinetic energy and measuring just one cubic millimetre, smart dust could be deployed across vast or hard-to-reach areas.
What are the promises and potential applications of smart dust?
Fast Company tells us to “Forget the Internet of Things. The Future is Smart Dust,” and poses these questions: “Why even bother attaching sensors to actual things? What if they just floated all around us in the air and everywhere?”
Is this mere hype, or is there substance behind the claim that smart dust could really help usher in the Industrial Revolution, version 4.0?
In truth, the potential for smart dust applications is nearly infinite. For example, it could enable continuous real-time monitoring of new or existing structures such as skyscrapers or bridges to assess their condition and potentially extend their lifespan. Environmental monitoring could become much more precise through large-scale, detailed measurements of air or water quality. And wireless miniature seismometers could give advanced warning of volcanic eruptions or earthquakes, while similar water-borne devices could do the same for tsunami warnings.
But these are just a few of the hundreds of potential applications being considered and tested in labs around the world that could revolutionize our lives. Are you interested in the topic of smart dust and itching to learn more? If so, may we suggest the following additional reading:
Engineering and Technology: “20 technologies to change the world”
Fast Company: “Forget The Internet Of Things. The Future Is Smart Dust”
IDG Connect: “Smart Dust: A revolution that’s blowing in the wind?”
Internet of Business: “Future of IoT will be ‘smart dust’, says Cambridge Consultants”
As a company with a long history of involvement and innovation in embedded and Internet of Things (IoT) technologies, AMI is monitoring developments in the emerging world of smart dust very closely and excited to be a part of its promising future.
Thanks for reading today’s Tech Blog! Do you have any thoughts on the potential of smart dust as a disruptive technology? Feel free to drop us a line via social media or our Contact Us form and let us know – and what you might like to see in future posts!
Bluetooth® is a registered trademark of the Bluetooth SIG, Inc. | <urn:uuid:f4354574-5bdb-41c5-8fc6-499a512fc1c1> | CC-MAIN-2022-40 | https://www.ami.com/blog/2017/10/03/what-in-the-world-is-smart-dust/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00396.warc.gz | en | 0.910933 | 995 | 3 | 3 |
Virus-writers are using increasingly complex and sophisticated techniques in their bid to circumvent anti-virus software and disseminate their viruses.
A case in point was the notorious Nimda virus that used multiple methods to spread itself and was based on an exploit rather than on the virus/Trojan behavior that anti-virus products typically search for. Email security tools must become more sophisticated if such threats are to be blocked before they can cause harm. Anti-virus software, though essential, cannot combat such threats alone; an email exploit detection tool is also necessary.
What is an exploit?
An exploit uses known vulnerabilities in applications or operating systems to execute a program or code. It “exploits” a feature of a program or the operating system for its own use, such as to execute arbitrary machine code, read/write files on the hard disk, or gain illicit access.
What is an email exploit?
An email exploit is an exploit launched via email. An email exploit is essentially an exploit that can be embedded in an email, and executed on the recipient´s machine once the user either opens or receives the email. This allows the hacker to bypass most firewalls and anti-virus products.
Difference between anti-virus software and email exploit detection software
Anti-virus software is designed to detect KNOWN malicious code. An email exploit engine takes a different approach: it analyses the code for exploits that COULD BE malicious. This means it can protect against new viruses, but most importantly against UNKNOWN viruses/malicious code. This is crucial as an unknown virus could be a one-off piece of code, developed specifically to break into your network.
Email exploit detection software analyzes emails for exploits – i.e., it scans for methods used to exploit the OS, email client or Internet Explorer – that can permit execution of code or a program on the user´s system. It does not check whether the program is malicious or not. It simply assumes there is a security risk if an email is using an exploit in order to run a program or piece of code.
In this manner, an email exploit engine works like an intrusion detection system (IDS) for email. The email exploit engine might cause more false positives, but it adds a new layer of security that is not available in a normal anti-virus package, simply because it uses a totally different way of securing email.
Anti-virus engines do protect against some exploits but they do not check for all exploits or attacks. An exploit detection engine checks for all known exploits. Because the email exploit engine is optimised for finding exploits in email, it can therefore be more effective at this job than a general purpose anti-virus engine.
An exploit engine needs to be updated less frequently than an anti-virus engine because it looks for a method rather than a specific virus. Although keeping exploit and anti-virus engines up-to-date involve very similar operations, the results are different. Once an exploit is identified and incorporated in an exploit engine, that engine can protect against any new virus that is based on a known exploit. That means the exploit engine will catch the virus even before the anti-virus vendor is aware of its emergence, and certainly before the anti-virus definition files have been updated to counter the attack. This is a critical advantage, as shown by the following examples that occurred in 2001.
The Lessons of Nimda and BadTrans.B
Nimda and BadTrans.B are two viruses that became highly known worldwide in 2001 because they infected a colossal number of Windows computers with Internet access. Nimda alone is estimated to have affected about 8.3 million computer networks around the world, according to US research firm Computer Economics (November 2001).
Nimda is a worm that uses multiple methods to automatically infect other computers. It can replicate through email using an exploit that was made public months before Nimda hit, the MIME Header exploit. BadTrans.B is a mass-mailing worm that distributes itself using the MIME Header exploit. BadTrans.B first appeared after the Nimda outbreak.
With their highly rapid infection rate, both Nimda and BadTrans.B took anti-virus vendors by surprise. Though the vendors tried to issue definition file updates as soon as they learned about each virus, the virus had already succeeded in infecting a large number of PCs by the time the anti-virus updates were released.
Though both viruses used the same exploit, anti-virus vendors had to issue a separate definition file update for each. In contrast, an email exploit detection engine would have recognized the exploit used and identified the attempt to automatically launch an executable file using the MIME header exploit. As a result, it would have blocked both worms automatically, preventing infection.
Testing for exploit vulnerability
You can easily test whether your email system is vulnerable to the exploit described above and similar email exploits and threats. GFI has set up a testing zone that enables you to determine the susceptibility of your email system to email exploits such as malformed MIME headers, ActiveX exploits, CLSID file names, and more. The tests available on this zone are safe and do not do anything dangerous. They simply detect whether your email system is safeguarded against a number of email-borne threats. | <urn:uuid:ec5aaafb-420d-4eac-9598-e90558ea0cec> | CC-MAIN-2022-40 | https://it-observer.com/why-you-need-email-exploit-detection-engine.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00396.warc.gz | en | 0.941277 | 1,089 | 3.359375 | 3 |
In 1990, it cost roughly $9,000 to store 1 gigabyte (GB) of data; today, it costs less than 3 cents. Over the last decade, it has become normal to assume that the cost of storage is so negligible it’s “virtually free.” But even though storage is almost free to users, data center operators are still spending billions on it every year – and that cost is only going to increase, spurred not only by the explosion in the amount of data created each year but also by more and more stringent durability and availability requirements.
According to Cisco’s Global Cloud Index, worldwide data center storage capacity will grow to 2.6 zettabytes (ZB) by 2021, up from 663 exabytes (EB) in 2016. That’s a growth rate of roughly 400%. More than half of that storage will be consumed on hard drives, and about a quarter on SSD, per IDC’s Global Datasphere report.
“Virtually free” storage is in fact an expensive line item in a data center budget.
The nature of data is changing
In the not-so-distant past, data centers were filled with storage to support applications that ran on servers, data was written to disk and often, that data was rarely accessed again.
But with today’s modern applications, the world is a very different place:
- Microservices deployed in a scale-out fashion are replacing monolithic applications.
- Data volume is huge and data movement between nodes is increasing.
- Services require high throughput and low latency storage at scale.
- The overall data temperature is rising – i.e. the volume of real-time hot data is increasing.
Organizations are under pressure to cope with these needs and, at the same time, drive down costs.
Data Reduction: Innovations in compression algorithms
This is why we’ve seen the emergence of next-generation compression solutions. For text/binary data, compression algorithms such as Facebook’s Zstandard, Google’s Brotli, and Microsoft Project Zipline all offer compression ratios higher than standard deflate-based algorithms. Moreover, more than 50% of data in cloud storage today consists of pictures and videos. These compression algorithms do not compress JPEG or MPEG files at all. One approach that cloud vendors have taken is to introduce a category of lossy compression algorithms for images that can save 20% to 30% on storage, such as Google’s Guetzli. Another approach that Dropbox has taken is to deploy Lepton, a lossless compression algorithm for JPEG which saves up to 22% but can only achieve compression throughput of 40Mbps.
Even a small improvement in compression ratios results in huge cost savings in storage and network bandwidth. These savings easily outweigh the additional cost in CPU cycles and power/cooling required to run the compression algorithms. Unfortunately, each of these schemes also come with a trade-off in terms of performance. The greater the compression ratio, the slower the throughput.
Due to the throughput constraints, these algorithms are typically applied to data at rest, not data in motion. To fully maximize cost reductions by using compression on data in motion as well, we must be able to sustain throughput at line rates.
Data Durability and Availability: Replication vs. Erasure Coding
Today’s data centers demand many 9s of durability and availability. Data replication (or mirroring) is one of the most basic ways to offer durability and availability. This scheme makes identical copies of data and stores them in different failure domains. The compute requirement to replicate data is relatively small and this scheme offers the fastest recovery time. However, replication results in higher storage costs, as it is not uncommon for data to be replicated two times or more.
Parity encoding is another well-known scheme to provide durability and availability at a much lower storage overhead. An example of a parity encoding scheme is erasure coding, where multiple data and parity fragments are distributed across different failure domains. The number of parity fragments determines the durability factor. Erasure coding schemes require low storage capacity overhead but have higher compute and networking requirements, especially when having to reconstruct data from different locations in the event of non-availability. Thus, compute processing throughput and low network latencies are key requirements to successfully implement erasure coding.
The most common practice today is to replicate data in real time (i.e., in the context of read and write commands), but lazily erasure code data at rest. This is because current solutions cannot support erasure coding in the read and write path at latencies that are acceptable to applications.
Resource pooling at massive scale
Another way to lower storage cost is to improve capacity utilization. This can be done by pooling storage resources into dynamically allocated virtual pools, which can be accessed by many clients. In his PhD thesis, Peter J. Denning showed that combining N separate pools of a resource of 1 unit each into a single pool provides the same service level with just √N units of the resource instead of N units. In other words, the larger the shared pool, the more significant the storage savings.
Today, while resource pooling can be done in hyperconverged infrastructure (HCI), access to direct-attached storage SSDs is still constrained by CPU bottlenecks. High, unpredictable latencies through the CPUs result in complex software, ultimately limiting performance and scale. Resource pooling can be much better realized through a disaggregated infrastructure where compute and storage elements are physically located in different servers. By decoupling storage from compute, CPU bottlenecks are reduced and latencies become more uniform, allowing data placement considerations to be simplified.
At Fungible, we believe that a disaggregated storage architecture is a natural fit to implement (i) parity schemes such as erasure coding, enabling distribution of data and parity codes across different failure domains, and (ii) large-scale shared storage pools.
However, up until now, disaggregated storage has not achieved its full potential due to CPU inefficiencies, fabric performance, legacy software limitations, and so on.
Fungible’s Data Processing Unit (DPU)
To break free of these limitations, Fungible has defined and designed a new class of programmable microprocessor known as the Data Processing Unit. The DPU is purpose-built from ground up to not only keep storage costs in check, but also to provide the performance and scalability that is lacking in today’s compute-centric architectures.
The DPU was designed with the following principles in mind:
- Compression ratio and throughput need not be a trade-off consideration. Compression algorithms must be lossless for text/binary as well as for images.
- Data durability using erasure coding schemes must be supported at the throughput and latencies required by modern applications in the context of reads and writes.
- Resource pooling must be supported at the throughput and latencies required by modern applications and must be achievable at massive scale across the network.
Storage may never be free, but it can be so much cheaper with Fungible’s DPU solution.
To learn about erasure coding and read this whitepaper to learn about the benefits of disaggregating storage. | <urn:uuid:a4efbe2e-05ab-486b-bc9a-e64fd8e1875c> | CC-MAIN-2022-40 | https://www.fungible.com/2019/10/18/storage-is-far-from-free-but-it-should-be-much-cheaper/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00396.warc.gz | en | 0.936402 | 1,508 | 2.5625 | 3 |
As technology and urban design march into the future, more and more urban centers will begin looking like the “smart cities” science fiction began promising us many decades ago: City infrastructure that talks to cars, cars that talk to each other, sensors everywhere, high-speed wireless communications, self-driving and self-parking cars and much, much more.
But what is the cost of such a city? Not just the price tag, or even the value such investments promise to deliver, but the actual human cost? Will anybody be left behind by smart cities, including the elderly or the disabled?
Urban development is already a challenge for the disabled
Just precisely what constitutes a “smart city” will vary depending on that city’s goals. But one thing that doesn’t change from place to place is that technology should provide the means to realize one’s independence. Regrettably, even some of our current approaches to urban development, even the low-tech ones, already present some Herculean challenges for some of our most vulnerable citizens.
It’s a big enough problem, and it goes unanswered often enough, that the United Nations maintains a publication called “Good Practices of Accessible Urban Development.” Its stated mission is to “ensure accessibility, following the universal design approach, by removing barriers to the physical environment, transportation, employment, education, health, services, information and assistive devices, such as information and communications technology (ICTs), including in remote or rural areas, to achieve the fullest potential throughout the whole life cycle of persons with disabilities.”
So where are we falling short at present?
By 2050, the UN predicts accessibility will be a “major challenge” across the globe. By then, there will likely be some 940 million disabled individuals living in the world’s urban centers. These are just a tiny minority (6.25 percent) among the year 2050’s estimated 6.25 billion city dwellers.
In some ways, smart cities preserve many of the most mundane problems in urban development where the disabled are concerned. Without proper care during the design of pedestrian thoroughfares, embarkation points for public transportation, and public-facing technologies like digital signage and walking directions, those with physical disabilities like poor eyesight or compromised locomotion could quickly find themselves thoroughly excluded from even basic daily tasks.
The promise and challenge of smart cities
The phrase “you can’t get there from here” doesn’t apply to most of us. But for those who must live with disabilities, getting around an urban environment — to work, say, or to visit loved ones — can sometimes grant this phrase a nearly literal meaning. Technology companies are working more closely than ever with civic planners to make walking directions more widely available.
But for the disabled, directions which don’t accurately portray accessibility features, like ramps and dropped curbs, or, worse still, cities which don’t provide them at all, can make it difficult or impossible for the disabled to reach their destination unassisted. Several neighborhoods in Seattle famously don’t have any pavement at all, and occasional grades as high as 20 percent on some hills.
Smart cities won’t automatically solve this problem—not if we bring in the same design language we’ve been using for many decades. Instead, we can use the technologies we’re developing, like beacons and more advanced mapping systems to help pedestrians, drivers, and public transporters get around our cities.
But we can use that same technology to crowd-source feedback on lackluster or missing accessibility features, bring poor design to the attention of city leadership, and ultimately build more open and inclusive mapping databases which account for all types of obstacles, rather than just those of interest to the non-disabled.
There is a major need for the smart cities of the future to make public services, including new technologies, as easy to use as possible for as many people as possible. The “universal design” principles mentioned earlier from the UN’s accessibility guidelines will have to expand to include mandatory automatic doors and wider availability of voice commands for interacting with services. When a ticket sales kiosk uses a touchscreen, for example, it leaves a not inconsiderable portion of the population behind, including those who live with vision, dexterity or cognitive impairments.
Microsoft’s “Smart Cities for All” program and Google’s (Alphabet’s) “Project Sidewalk” are two great examples of the public and private sectors coming together, including policymakers, civic designers, app developers, disability advocates and many more.
Both of these organizations boast many member organizations and both recognize that technology is unquestionably the key to economic opportunity. Getting around a smart city means using our smartphones to acquire information, make reservations, or make contactless payments. A considerable amount of public infrastructure could be off-limits to people who aren’t able to read the contents of a mobile phone screen or physically manipulate the controls.
It’s time to learn the lessons of the past, too, when it comes to rushing the design and construction of public buildings and infrastructure. Construction booms and shoddy practices in the mid-2000s ended up costing companies and cities considerable rework and additional expenses because of construction defects.
Instead, accessible smart cities require thoughtful and deliberate design plus environmentally-sound and eminently durable construction. In other words: design and build it just once, for everybody to use.
Smart cities, accessibility and accountability
There’s no stopping smart cities. Whether they’re using data from mobile device traffic to plan future developments or make changes to existing roadways and infrastructure, or allowing automotive guidance systems to interface with city infrastructure for more harmonious intersections, we’ll all be living in a smart city before too long.
But we need to take a long, hard look first at who benefits first, and sometimes exclusively, from giant leaps forward in technology. It seems like the smart city accessibility program is finally on all of the right parties’ radars.
But answering the call for inclusivity means nailing the fundamentals first, like making sure a city is wheelchair-accessible from top to bottom before we begin layering on technology services. And it means countries everywhere must take up the UN’s call for universal accessible design and hold their own designers, builders, lawmakers, and landlords accountable. | <urn:uuid:1ec35252-87cb-4951-9d22-8b754929fc7f> | CC-MAIN-2022-40 | https://bdtechtalks.com/2019/07/17/iot-smart-cities-accessibility-challenges/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00396.warc.gz | en | 0.927548 | 1,333 | 3.265625 | 3 |
How to Protect Against Phishing
Phishing is one of the most common cybersecurity schemes, and it happens all over the world every day. Anyone can become a victim of phishing in seconds. Learn how to protect yourself by understanding the signs of a phishing attempt and the steps you’ll need to take if you receive a suspicious email.
How to recognize phishing
Even with more clever techniques on the rise, phishing emails, messages, and calls tend to display clues that prove they’re fraudulent.
Scammers often use a familiar company’s logo and name. However, the email addresses and images will most likely appear “off” in some fashion. The logo might be off-center or a different color. The company name might be in the local part of the email address.
Generic greetings are another giveaway, such as “Hello, friend.” A legitimate company will most likely address you by name.
Common types of phishing techniques
Bad actors have resorted to using several techniques to attack individuals and businesses alike. Some of the most common techniques you should look out for are:
- Email phishing
- Deceptive phishing
- Spear phishing
- Search engine phishing
- Angler phishing
Steps to protect yourself from phishing
Protecting yourself from phishing will help you prevent cybersecurity attacks. There are several steps you can take preemptively, including:
- Using cybersecurity software
- Setting up multi-factor authentication
- Turning on automatic updates for devices on your network
- Backing up your data often
- Training your staff to look for signs of phishing
- Incorporating encryption services
Questions to ask if you suspect a phishing attack
If you suspect phishing, you should examine the email before clicking links or responding. Ask yourself the following questions:
- Do I know this company or person?
- Are there any signs of phishing techniques?
- Are they requesting my or my company’s sensitive information?
What should you do if you Receive or respond to a phishing email
If you believe it’s a phishing email, do not enter any personal information, click any links, or respond. Report the email as directed by your organization and delete it as soon as possible. If you responded to a phishing email, contact the appropriate departments at your company or managed IT service. You should change all of your passwords immediately.
How to report phishing
If you received a phishing email, there are several ways you can report it. First, you’ll want to report it to your IT provider. Outside of this resource, you can also report phishing to:
- U.S. Department of Justice
- Federal Trade Commission
- Anti-Phishing Working Group
- Your email servicer (Outlook, Google)
Don’t get hooked. Minimize phishing with Agio today.
At Agio, we have cybersecurity and phishing detection services that will serve your business. Learn more about our techniques today.
Connect with us.
Need a solution? Want to partner with us? Please complete the fields below to connect with a member of our team. | <urn:uuid:5b8047d0-8cb0-449b-b5fb-c3c79ff1dbda> | CC-MAIN-2022-40 | https://agio.com/how-to-protect-against-phishing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00596.warc.gz | en | 0.936503 | 687 | 2.84375 | 3 |
Understand the difference between Hashing, Encryption, and Encoding
If you think that Hashing, Encryption, and Encoding are the same thing, you are wrong! However, you’re not alone. There is an awful lot of confusion surrounding these three terms. As similar as they may sound, they are all totally different things. Before getting down to business though, let’s understand a few things first.
From a security point of view what are the most important things when sending data/message on the internet?
First, you want to let the other person know that the message has been sent from you – not from anyone else.
Second, you want the message to be in the exact same format – without any alteration or modification.
And third, you want your message to be protected from the reach of ill-intended people – hackers, fraudsters & other types of cyber criminals. These three functions can be designated as:
- Identity Verification
So, how exactly is this done? Hashing and Encryption are the answers. Now you must be thinking ‘Doesn’t that make them the same thing?’ The answer is NO.
Let’s start with Hashing.
What is Hashing?
Let’s try to imagine life without hashing. Suppose, it’s someone’s birthday and you decide to send a ‘Happy Birthday’ message. Your geeky (and funny!) friend decides to have a bit of fun at your expense so he intercepts it and turns your ‘Happy Birthday’ message into a ‘Rest in Peace’ message (imagine the consequences!). This could happen and you wouldn’t even know it (until you are at the receiving end of certain reactions!).
Jokes aside, Hashing protects the integrity of your data. It protects your data against potential alteration so that your data isn’t changed one bit.
Basically, a hash is a number that is generated from the text through a hash algorithm. This number is smaller than the original text.
The algorithm is designed in such a way that no two hashes are the same for two different texts. And it is impossible (almost!) to go back from the hash value to the original text. It’s kind of like a cow moving on stairs – it can move upstairs but not down!! Anyway, looping back to our “happy birthday” message. Had you hashed your message, the intended recipient of your message would unhash the message and see a different value than what should come back. At that point, they’ll know the message has been tampered with.
That’s one of the most indispensable properties of Hashing—its uniqueness. There cannot be the same hash value for different text. Even the tiniest bit of change/modification will alter the hash value completely. This is called the Avalanche Effect.
Let’s understand this with an example. In the below example, we have applied the SHA-1 algorithm. Let’s see how it goes.
Text: Everybody loves donuts.
SHA-1 Hash value of the text above: daebbfdea9a516477d489f32c982a1ba1855bcd
Let’s not get involved in the donut debate (is there a debate?) and focus on hashing for the time being. Now if we make a tiny bit of change in the sentence above, the hash value will change entirely. Let’s see how it goes.
New text: Everybody loves donut.
SHA-1 Hash value of the new text: 8f2bd584a1854d37f9e98f9ec4da6d757940f388
See how the hash value changed entirely when we removed the ‘s’ from Donuts? That’s what hashing does for you.
Uses of Hashing
- Hashing is an effective method to compare and avoid duplication in databases.
- It is used in Digital signatures and SSL certificates.
- Hashing can be used to find a specific piece of data in big databases.
- It is widely used in computer graphics.
What is Encryption?
It’s almost impossible to imagine the internet without Encryption. Encryption is what keeps the artificial world of the internet secured. Encryption keeps data secured and confidential. Fundamentally, it is the process of transforming your confidential data into an unreadable format so that no hacker or attacker can manipulate or steal it. Thereby, serving the purpose of confidentiality.
The encryption of data is executed through cryptographic keys. The information is encrypted before it’s sent and decrypted by the receiver. Therefore, the data is safe when it is “in the air.”
Based on the nature of the keys, encryption can be classified into two main categories – symmetric encryption, asymmetric encryption. Let’s understand them in detail.
Symmetric Encryption: In symmetric encryption, the data is encrypted and decrypted using a single cryptographic key. It means that the key used for encryption is used for decryption as well.
Asymmetric Encryption: Asymmetric encryption is a relatively new technique compared to its counterpart. It involves the use of two different keys, one for encryption and one for decryption purposes. One key is known as a ‘Public Key’ and the other is regarded as a ‘Private Key.’
The Public Key is virtually everywhere. Even you possess it without even knowing it. One is stored in your web browser every time you visit an HTTPS-enabled website.
When you send any data to an encrypted site, it is encrypted using the Public Key. The Private Key, on the other hand, is only with the receiver and must be kept discreet. Private Key is used to decrypt the encrypted data. The use of two distinct keys makes the encryption process more secure and a tad slower.
Both these techniques are used in the SSL/TLS certificates. The Asymmetric Encryption is first applied to the SSL handshake process — server validation if you call it. Once the connection is in place between the server and the client, Symmetric Encryption takes care of the data encryption.
What is Encoding?
Unlike Encryption and Hashing, Encoding is not used for security purpose. Fundamentally, it is just a technique to transform data into other formats so that it can be consumed by numerous systems. There is no use of keys in encoding. The algorithm that is used to encode the data is used to decode it as well. ASCII and UNICODE are examples of such algorithms.
Let’s flashback a bit
- Hashing: A string of numbers generated to confirm the integrity of data through hashing algorithms.
- Encryption: A technique used to maintain the confidentiality of data by converting the data into an undecipherable format.
- Encoding: A conversion of data from one format to another format.
- What is Asymmetric Encryption? Understand with Simple Examples
- Understanding the SSL/TLS Handshake process
- Why is HSTS the new future of HTTPS encryption?
- How Hashing Algorithms Work
- Self-Signed SSL Versus Trusted CA Signed SSL Certificate
- HTTP vs. HTTPS, Do You Really Need HTTPS?
- Why Trust Seals Play a Vital Role on E-commerce Websites
Here, we understood the difference between Hashing, Encryption and Encoding. To encrypt our website and the browser-server SSL certificate is the key element that enables the layer of security. | <urn:uuid:8bf4cc95-1af4-4727-b2c3-edb94c3967bc> | CC-MAIN-2022-40 | https://cheapsslsecurity.com/blog/explained-hashing-vs-encryption-vs-encoding/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00596.warc.gz | en | 0.903943 | 1,607 | 3.09375 | 3 |
The hallmark of any healthy risk management program is the ability to assess risk in a uniform fashion. Are your risk assessments built on a foundation of best practices to help you understand your risks in their entirety? This guide will discuss what is a risk assessment, why is a risk assessment important, when in the risk management process should you complete the risk assessment, risk assessment scaling criteria, risk assessment best practices, an example of a risk assessment and finally concrete solutions to completing better risk assessments.
What is a Risk Assessment?
Simply put, completing a risk assessment is the process of analyzing the specifics of different risks faced by your workplace.
On a more complex level, going through the risk assessment process will reveal granular levels of each of those risks, including their impact, likelihood and assurance. This helps you more clearly see the consequences of certain risks materializing, especially when used with a great risk management plan.
Why is a Risk Assessment Important?
At its core, the risk assessment process is intended to help you make better decisions to add value to your workplace. Better decision making requires transparency into all risk information gathered at your organization. It also requires the ability to prioritize that information by assessing the risks related to your organizational goals, resources, and more.
Risk assessments help you take a look at where you spend time and money so you can prioritize resources and resolve confusing or contentious issues. Nevertheless, controls, tests, tasks, and resources are expensive; risk assessments add priority to these activities to help you and your employees understand how critical each one is.
So what happens if you fail to complete a risk assessment as part of your risk management process? By failing to prioritize the right activities, you’ll likely see the following consequences:
Lack of Continuity: Changes in your workplace may cause you to create new activities, even though existing ones are more effective.
Lack of Coordination: Activities often apply to multiple risks or commitments across functional lines. The inability to formally tie activities to risk or commitments hinders inter-functional coordination, resulting in silos and duplicative work.
Activity Fatigue: Staff may ignore certain activities because of a lack of time to assess them.
Wasted Resources: If a risk changes, most organizations have no way of knowing how (or even if) these changes will impact their resources and activities.
Activity Obsolescence: In a changing environment, there is no effective way to know when activities no longer apply.
Lack of Prioritization: Picking activities to focus on is likely to be on an ad hoc basis and subject to the whims of current staff.
What Is The Risk Assessment Process?
You may struggle to decide when is the best time to complete a risk assessment. After all, it can be an iterative risk assessment process that requires due diligence and hinges on the results of other time-consuming research. Here is the order of operations we recommend – as you’ll see, completing the risk assessment is step 2 in the overall risk assessment process of your risk management efforts:
The first step of the risk assessment process involves identifying risk across your organization should be step 1 when developing your risk management program. Note: it’s not enough to simply identify what happened; the most effective risk identification techniques focus on root cause.
This allows you to identify systemic issues so that you can design controls that eliminate the cost and time of duplicate effort.
You can read more about risk identification by clicking here.
Assess & Prioritize
Assessing level of risk in a uniform fashion is the hallmark of a healthy risk management system.
Risk analysis allows you to determine the likelihood of any given level of risk and subsequently prioritize your remediation efforts.
You can find out more about the risk prioritization process here.
Risk mitigation (view complete guide here) is the process of introducing measures aimed at reducing risk exposure and minimizing the likelihood of an incident through effective control measures.
Your top risks and concerns need to be continually addressed to ensure your workplace is fully protected. There are certain risk mitigation best practices that you can follow to ensure that you are mitigating your risks correctly.
Monitoring and taking time to identify potential hazards that could cause harm should be an ongoing and proactive review process. It involves testing, metric collection and incidents remediation to certify that your controls are effective through a review process. It also allows you to identify, review and address emerging trends to determine whether or not you’re making progress on your initiatives.
Create relationships between potential hazards and risks that could cause harm, workplace units, mitigation activities and more to create a cohesive picture of your organization. This allows you to recognize upstream and downstream dependencies, identify systemic risks and design centralized controls. When you eliminate silos, you eliminate the chances of missing critical pieces of information.
Risk Metrics Report
Presenting information about your risk management program in an engaging way demonstrates effectiveness and can rally the support of various stakeholders. This is an integral part of the risk assessment process. Develop a key risk indicators report that centralizes your information and gives a dynamic view of your company’s risk profile.
Risk Assessment Evaluation Scale
Your risks should be assessed based on the Impact, Likelihood and Assurance of them occurring. Once this system is in place for labeling or identifying risk, you should begin assessing the potential impact of each risk based on a standard set of criteria. A lot of organizations use a high-medium-low scale to assess their risks, but this actually isn’t best practice.
High-medium-and low scales make it difficult and time-consuming to quantify, aggregate, and objectively rank information. With only three options from employees to choose from, they’ll likely feel conflicted about which one to choose. Many employees may even feel compelled to write in a medium/high option.
In reality, best practice favors a 1-10 scale, with 10 having the most unfavorable consequences to the organization. Using a 1-10 scale makes calculating the residual index score of a risk more straight forward. This gives employees more flexibility in their assessments will increase accuracy, and more confidence when determining what your top risks really are. The 10-point scale should be distributed as follows:
Risk Assessment Best Practices
In order to truly improve your company’s risk program, it’s critical to conduct objective, enterprise-wide risk assessments. But what else is best practice for conducting a risk assessment?
Best Practice #1: Take a root-cause approach.
The most effective way to collect risk data is to identify risk by root cause. Root cause tells us why an event occurs, which provides information about what triggers a loss and where an organization is vulnerable. Using root-cause categories provides meaningful context as to what steps to take to mitigate risk.
Best Practice #2: Standardize your scales and criteria through templates.
We talked earlier about the 1-10 scale. You need defined evaluation criteria, because too often, one person’s 9 is another person’s 7. You should provide a clear, unambiguous definition for each of the 5 buckets we mentioned above. The key is to express severity in both quantitative and qualitative terms in a standardized way. Each bucket should have a variation of these themes applicable to each level of severity.
Best Practice #3: Link risks to controls.
Once you have identified the source of risks and assessed them objectively, you need to know how controls are actually covering risks. Oftentimes, the knowledge of how the risk is mitigated is only a conversational explanation from the business area in facilitated sessions. Maintaining a system where risks are directly linked to their controls helps you maintain better governance over mitigation activities. With such a system, you have a valuable record of when and why different controls were created, as well as the proof you need for auditors to show that your workplace is actively working to manage risk.
Best Practice #4: Connect risks to strategic goals.
Getting an accurate pulse on strategic priorities is challenging because these types of organizational goals are cross-functional in nature. And while they are extremely useful for the board and senior executives, they are impossible to act upon without operationalizing them (breaking them down into root-cause, silo-specific activities within business areas). Taking a risk-based approach helps you prioritize in a strategic way.
Best Practice #5: Embed risk assessments in your everyday activities.
At the end of the day, better risk assessments can only be fostered by engagement, and this is the hardest part. The good news is, when it comes to business, people love success and efficiency. So be your own business case! Start to use your own experience and successes to get others to see the value involved. Risk is in everyone’s job responsibilities. The more integrated ERM is in everyone’s job descriptions, the easier risk assessments will become and the more valuable they will be, but this may take time. Start integrating ERM into everyone’s day-to-day activities by starting with your own area.
Risk Assessment Example
As an example, let’s look at an experience most companies face: professional liability insurance applications. Insurance companies require seemingly innocuous assertions about the management of your organization’s operations and governance. Among other activities, they seek information on your operational controls, management of content and privacy exposures, computer systems controls, computer system access protection, data back-up procedures and data encryption procedures.
Additionally, we see risk management failures covering a wide range of sectors from the Chipotle scandal (Food Safety News) to banking customer outages (The Hill).
Assess Your Risks with LogicManager
The more integrated ERM is in everyone’s job descriptions, the easier risk assessments will become and the more valuable they will be, but this may take time. Start integrating ERM into everyone’s day-to-day activities by leveraging LogicManager’s ERM platform today.
By applying an ERM approach, you can more easily prioritize existing activities, manage change, objectify conclusions to enable better issue escalation, and gain a panoramic view of disparate controls and tests. All of this will help you streamline and add value to current activities, enabling you to spend less time on check-the-box compliance or insurance efforts and more time preventing loss events and identifying emerging risks.
No matter what your industry, company size, risks or unique challenges may be, LogicManager has a fully integrated risk assessment solution that works to tackle all of your risk needs and manage risks that can cause harm in one place.
We also have nearly 100 point solution packages so you can cherry pick based on your most specific and timely needs as risk assessments play an important role in defining what GRC is to any organization.
Interested in seeing just how LogicManager’s software empowers better risk assessments? Schedule a free demo today to find out! | <urn:uuid:c5d6a126-778c-4a8b-9fb8-b6955f7339d0> | CC-MAIN-2022-40 | https://www.logicmanager.com/resources/erm/what-is-a-risk-assessment/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00596.warc.gz | en | 0.930658 | 2,282 | 2.90625 | 3 |
An “Innovation Story” published by Microsoft last April re-introduced readers to the company’s work with immersion cooling for data center servers, particularly as it relates to Azure. But then it brought up an unusual connection — one whose existence may be a question in itself. If two-phase immersion cooling were present in production data center environments, wrote Microsoft’s John Roach, the scalability limitations of processor designs at the atomic level could be overcome.
Put another way: If heat is the obstacle that derailed Moore’s Law, why not use immersion cooling to transfer heat off the chips? The observation that transistor densities could double every 18 months while keeping processors affordable and desirable in the market, could get back on track — albeit submerged.
That assertion prompted Greg Cline, Omdia’s principal analyst for SD-WAN and data center, along with Vladimir Galabov, head of Omdia’s Cloud and Data Center Research Practice, to draw a stunning conclusion. In their April 21 report entitled, “Immersion cooling is heating up,” they wrote that Microsoft expanded its investments in liquid cooling technologies because “it is the only way to offset the slowdown of Moore’s Law.”
Could this case have possibly been overstated, maybe just a bit? Is Omdia really saying immersion cooling is the one way data centers will overcome Moore’s Law, and eventually, they’ll all just have to sink or swim with it?
Data Center Knowledge (whose parent company, Informa, is also parent of Omdia) put the question to Dr. Moises Levy, Omdia’s new principal analyst for data center power and cooling.
“We are currently reaching a physical limit in transistor miniaturization,” responded Dr. Levy. “At 7 nm, each transistor is the size of 10 hydrogen atoms laid side by side. It is more expensive and technically difficult to keep up Moore’s Law.
“When we are unable to continue shrinking the size of each transistor to pack more on a processor,” he continued, “we would need to start making our processors bigger and bigger. This means they would require more and more power. Liquid cooling systems enable thermal management for higher density of electronics, since air cooling systems are no longer effective. Liquid cooling solutions are currently being adopted in many data centers, and contribute to improving the power to cooling ratio, eliminating the need for complex airflow management, and helping to achieve sustainability goals linked with carbon emissions and water consumption.”
Yeah, sure, but. . . are immersion tanks truly an inevitability? Maybe not, Dr. Levy responded, if you’re willing to accept some even wilder possibilities.
“Another way to keep up Moore’s law is through new technological advances in nanotechnology and quantum computing,” he wrote. “Companies such as Intel, IBM, Microsoft, and Google are already working on quantum computing, where we talk about quantum bits (qubits) and subatomic particles.”
In an article last January for Consulting / Specifying Engineer, Dr. Levy proposed a new means of visualizing performance metrics in a data center by measuring four key beneficial attributes — productivity, efficiency, sustainability, and operations — weighed in each instance against risk. The result is what he calls a data center site risk metric, which he proposed in a 2017 paper for the IEEE as a means of comparing the overall efficiency of multiple data centers against one another. It would be interesting to see how Dr. Levy would evaluate a fully immersed data center in the context of site risk. | <urn:uuid:5367f5a4-68e0-4bdb-8390-cef5b51817cd> | CC-MAIN-2022-40 | https://www.datacenterknowledge.com/power-and-cooling/liquid-cooling-cure-moore-s-law-breakdown | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00596.warc.gz | en | 0.93832 | 759 | 2.671875 | 3 |
In my previous article “Putting your data on the map – an introduction to geocoding” I explained the basic concepts of geocoding and geocoder software.
Getting to grips with geocoding is a skill needed to turn the addresses or place names in your data into latitude/longitude coordinates to display on a web-map. Seeing your data on a map can be an instant game changer, neatly illustrated by my colleague Ben Klarich in a recent video.
In this article, I will delve deeper into geocoding and introduce the concept of positional accuracy and why this might be important to your use case.
Setting a geocoder to work on a batch of addresses is satisfying – the time saved, the mundane avoided – and it is tempting to regard the resulting pins on the map as the finished product from the process. This may be the case, but I would urge those doing geocoding to pause a moment and consider two questions:
- Has the geocoder made the correct address match?
- Is the location of the pin accurate enough for my needs?
Has the Geocoder Made the Correct Address Match?
This question is a matter of quality checking the address matching made by the geocoder. Using the Mapcite Excel Add-In to geocode, the output will be 10 new columns added to your spreadsheet of data. The first two are the latitude/longitude coordinates of the matched location. The next 8 columns are populated with the address found by the geocoder. It is worth putting some time into checking the address found by the geocoder against the address you are looking for in your data. If you have less than 250 or so addresses, it can be efficient to scan the list by eye and check the results this way.
As the datasets get larger, you will need other tactics. Useful ones include:
- Looking at the data on a map – useful to pick out obvious issues if you know your data should be within a location boundary such as a city, state or country. For example, if your data is only for Florida addresses, it will be easy on the map to see addresses out of state.
- Sampling – pick a sample of the addresses and check each one carefully. Distributing your sample across different geographic areas is advised.
- Excel Functions – you can write a quick function in excel to compare elements of the found address against the sought address. Postcode or zip code is a good field to use for this.
Also consider what is an acceptable match. If the postcode or zip code is matched but not the street number, is that good enough? Which neatly brings me to the second question:
Is the Location of the Pin Accurate Enough for my Needs?
Consider this, not only does the geocoder have to do an address match, it must also assign a latitude and longitude coordinate to the address. How does it do that? The simple answer is that it gets the coordinates from its own database. The real question is who compiled the data that sits in the database and how? That could be a whole topic for another day, but in summary it depends upon the geocoder you are using. The data could be crowd-sourced from open data, commercially gathered, or gathered by trusted government bodies. For some data, the address coordinates will be on the rooftop of the building – the highest level of positional accuracy. At other times the coordinates may be somewhere within the property boundary, on the same street, to the nearest intersection, or at the centre of the postcode/zip code. Address data gathered commercially or by Government bodies will usually come with a cost and licence restrictions on the use of the data.
For this reason, most commercially available geocoders will have a charging structure and T&Cs that means the more addresses you geocode, the more you must pay and you are limited in what you can do with the data. To build a free product without such licence restrictions, Mapcite chose to use open data from www.OpenStreetMap.org for its Excel geocoder.
Getting back to the question, you need to think about your use case and decide if an approximate location such as a postcode is good enough for your use-case. In the UK, the average postcode includes addresses over an area of 0.6 Hectares (or one football pitch). However, as you move from urban to rural areas, postcode sizes increase. Ten percent of postcodes cover 19 Hectares or more and the largest postcode extent in the UK is in the Scottish Highlands, covering 40,700 Hectares.
So if your use case is doorstep deliveries, a postcode might be good enough in urban areas (if you accept the delivery driver will need to locate the exact address themselves once they arrive in the postcode).
If your use case is something more specific such as assessing the lending or insurance risk at an individual address, you should invest in premium data and geocoders that can reliably give you rooftop coordinates. Mapcite, through its partnership with respected geospatial organisations like Pitney Bowes, Ordnance Survey and PSMA Australia has access to some of the world’s best addressing data and geocoders for roof top accuracy.
Using the Mapcite Add-In and following the advice given in this and previous articles, you should be able to geocode a significant proportion of your data and start using it for map-based analysis. If after reading this article you wish to explore higher positional accuracy, get in touch and we can advise you on the data and services best suited to your needs.
About the Author
Richard Crump is Head of Consulting at Mapcite, a location data analytics company and previously held a similar role at Ordnance Survey, the National Mapping Agency for Great Britain. | <urn:uuid:ee166525-7ff6-4b86-ad35-70ca40b86619> | CC-MAIN-2022-40 | https://www.mapcite.com/2020/04/06/why-x-sometimes-marks-the-spot/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00596.warc.gz | en | 0.929037 | 1,213 | 3.25 | 3 |
Security of over a billion iPhone owners and users of popular instant messengers is at risk due to a vulnerability in Apple iCloud.
As the Forbes reports, private messages sent via iMessage and WhatsApp on iPhone are not secure when using factory settings.
While encrypted apps like iMessage and WhatsApp keep messages on the device completely safe, a vulnerability in Apple’s iCloud backup system puts them at risk, and unauthorized people can access messages. This is possible as Apple stores message encryption keys in iCloud backups, which undermines the main security features that protect iMessage.
Apple states in its security policies: “End-to-end encryption protects iMessage conversations on all your devices, so Apple cannot read your messages as they are transfered between devices.”
This means that while messages are completely secured in transit between phones, they don’t have to be secured on the device or in the cloud.
Apple has come under a lot of pressure recently after an internal FBI document was released proving that the bureau regularly accesses messages on nine secure messengers, including iMessage and WhatsApp.
To keep their messages safe, users can turn off iCloud backups.
Apple also urgently needs to change its approach to iCloud to stop storing encryption keys and avoid backing up encrypted data. | <urn:uuid:f0df14c5-8256-4727-bb69-492d5a7694db> | CC-MAIN-2022-40 | https://gridinsoft.com/blogs/vulnerability-in-apple-icloud-puts-billion-users-at-risk/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00596.warc.gz | en | 0.93216 | 258 | 2.65625 | 3 |
In recent years, the phishing threat has grown significantly. In 2021, phishing attacks doubled compared to the previous years, and July 2021 was the highest on record according to the Anti-Phishing Working Group (APWG).
These phishing attacks often took advantage of the confusion and switch to remote work driven by the COVID-19 pandemic. The US government is taking the threat of phishing so seriously that by 2024 it is requiring federal agencies to adopt phishing-resistant multi-factor authentication (MFA).
What is phishing-resistant MFA?
It’s exactly what it sounds like. Phishing-resistant MFA can’t be compromised by even a sophisticated phishing attack. This means that the MFA solution can not have anything that can be used as a credential by someone who stole it, including, but not limited to: passwords, one-time passwords (OTP), security questions, and interceptable push notifications.
Password-based MFA solutions can’t and won’t stop phishing
Usually, a phishing campaign’s end goal is to perform an account takeover, which MFA solutions aim to protect against. Phishers try to steal users’ credentials via fake login pages and then use them to gain access to the user’s account.
In theory, MFA should protect against this by requiring multiple different factors for authentication, but in practice, it isn’t so simple. Often, these other factors are just as vulnerable to phishing as just using passwords to authenticate.
Most MFA today uses a combination of a password and an instance of a “something you have” factor. Often, this involves sending an OTP via SMS or email that the user then types into the authentication page. Modern phishing campaigns include the ability to phish these additional credentials as well.
For example, a modern phishing page may be designed to interact directly with the target site, triggering an OTP text message or email when the user tries to log in on the phishing page. The SMS or email is sent to the user from the legitimate site, and the user enters it into the phishing site. At this point, the attacker has both the password and the OTP, enabling them to legitimately authenticate as the user.
This is just one way in which password-based MFA can be bypassed using phishing. It also assumes that the user is willing to enable MFA at all. Getting a code from an SMS text message, email, or authenticator app and typing it in adds friction to the authentication process and requires the user to have immediate access to the device or email account. As a result, a user may opt not to use MFA at all in favor of a better user experience, eliminating any anti-phishing protections that it could provide. Ironically the thing that is supposed to make them more secure, through its own inconvenience, pushes them towards the less secure option.
In the end, these MFA solutions rely on a password, which is an incredibly weak factor. Passwords are continually re-used, stolen, and stored in insecure methods. Once a malicious actor is able to successfully steal a password it only requires an interception of a text message or magic link sent via email for them to authenticate and start accessing critical data and costing an organization money.
How Beyond Identity’s MFA stops phishing in its tracks
MFA using passwords and OTPs is vulnerable to phishing because it uses weak factors that users can be tricked into entering into a website. Beyond Identity’s passwordless MFA provides robust protection against phishing by using authentication factors that users can’t be tricked into handing over to an attacker.
Instead of using weak “something you know” factors like passwords combined with other phishable authentication factors, Beyond Identity only uses strong authentication factors that can’t be phished:
- Local Biometrics: Modern devices include biometric scanners such as fingerprint and facial recognition. These “something you are” factors provide stronger authentication than passwords or OTPs and a more frictionless user experience.
- Cryptographic security keys: Security keys stored on an authorized device provide a phishing-resistant “something you have” factor. This ensures that a user is logging in from a trusted device, stopping phishing attacks cold.
- Device-level security checks: In addition to MFA, Beyond Identity checks what resources the device is trying to access (applications, cloud resources, etc.) and its current security posture. This makes it possible to validate that the request is compliant with corporate security policies and protects sensitive resources from being accessed by infected or insecure devices.
Beyond Identity’s passwordless MFA eliminates phishing risk because there are no passwords or OTPs for an attacker to phish. It also provides a more frictionless authentication experience because users are no longer required to memorize passwords, wait for OTPs, and type them into the website. Instead, the app seamlessly checks dozens of risk signals, accessing the private key on the user’s device, and the user authenticates themself with a fingerprint or other biometric setting.
Learn more about Secure Work and Secure Customers to find out how to stop phishing attacks from impacting your workforce or customers. You can explore the future of MFA by reading about The Next Frontier of Multi-Factor Authentication. You can also get a demo to experience the solution. | <urn:uuid:6235c7ef-5d75-4795-ac8d-5620f0311a2b> | CC-MAIN-2022-40 | https://www.beyondidentity.com/blog/are-you-using-phishing-resistant-mfa | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00796.warc.gz | en | 0.93257 | 1,131 | 2.78125 | 3 |
Our Apple in the classroom series continues with six more educators explaining how Apple technology has helped shape teaching and learning, and what their favorite Apple technology in the classroom is and why.
Global collaboration: creating an eBook with students from 20 other countries.
One of the best examples showing how Apple has transformed learning is the “If You Learned Here” Project, according to education consultant Lucy Gray. She had presented at an Apple Distinguished School meeting on global connections, and two teachers Mary Morgan Ryan and Carolyn Skibba went back to their classrooms to design their own collaborative project. In the end, it involved 70 schools from 20 countries using a myriad of tools, including Flipgrid, Padlet and Book Creator to pull together a collaborative eBook written by students from around the world. The project offers “a great example of how educators can bring the world into their classrooms,” adds Gray.
What’s her favorite Apple Technology? iTunes U, as it offers a repository of mostly free content for all levels of education. “There is a treasure trove of material within iTunes U including videos, podcasts, iBooks, and other digital documents. Colleges and universities, K12 schools, and institutions of informal learning have created channels showcasing great resources for others,” says Gray. "Also, iTunes U Course Manager is available for those who want to build their own collections of content. My favorite channel on iTunes U is the Apple Distinguished Educator channel!"
Using apps to offer discrete, individualized instruction for fourth grade math students.
“Apple has transformed learning by increasing mobility, collaboration and creativity,” says Scott Newcomb, a fourth grade teacher and blogger/consultant. Having iPad Minis in his classroom has had a huge impact on his students: “Their learning is no longer tethered to their desks. It has leveled the playing field. All students feel that they can contribute to the activity.”
Specifically, Newcomb has his students use math apps on the iPad Mini to sharpen their skills. He also incorporates blended learning by having the students use online math programs to differentiate instruction.
The technology is great for him because it enables Newcomb to individualize and differentiate instruction through these apps, as well as tailor instruction to fit each of his student’s individual needs. “Through the integration of iPad Minis, I am able to differentiate instruction without drawing attention to specific students, as all will be working on the same type of device,” he adds.
Just tap twice for a media-rich learning experience on the big screen.
Daniel Edwards is director at Stephen Perse Foundation schools and co-author of Educate 1-to-1. His schools work on two key principles when using technology to enhance learning: providing seamless access to content and removing barriers to learning.
In terms of achieving the former, the schools use iTunes U as the content delivery mechanism. For the latter, they offer a 1-to-1 iPad environment to provide students with instant access to materials they need. Using the iPad, “students can now receive feedback on their assignments and act on it before their next 'contact' period with the teacher,” he says. “We see this as a crucial aspect in our desire to enhance the learning process.”
The iPad has greatly changed Edward’s approach to teaching by offering a media-rich platform coupled with access to student information and feedback – all available with “a couple of taps on a screen.” By pairing this with the Apple TV, Edwards says he is free to teach and address individual concerns more readily, “It used to be so difficult five years ago to do the things I've always wanted to do. And now I just tap a screen.”
Apps are the new textbook, and we’re nowhere near our potential.
“While the iPad hardware is impressive, Apple was way, way out ahead of competing platforms in fostering the growth of high-quality, innovative, and polished apps,” states Terry Heick, founder and director of TeachThought.com. And while this hasn't ‘transformed’ learning, it has created a compelling alternative to the textbook, made project-based learning more accessible, and began to illuminate what's possible with mobile learning. He adds, “we're nowhere near our potential here, either.”
Heick believes that the iPad is probably the best thing Apple has created, as he says that BYOD is not something most schools and districts are comfortable with. “So even while the iPad seems to kind of hit a wall in terms of sales, by empowering students, it wins,” explains Heick.
Laura Blankenship, Chair and Dean of Academic Affairs at The Baldwin School @lblanken
Students demonstrate learned concepts by creating movies about robots and binary code.
Laura Blankenship is chair and dean at The Baldwin School, which became a 1-to-1 MacBook school two years ago. “I have to say, it has transformed so many of our classes,” she states. The biggest result Blankenship has seen is that learning is now less passive, as teachers now have students actively shape their own learning. By using e-texts and online resources for classroom materials, the school has also expanded the kinds of materials it uses and is no longer stuck with static textbooks, “which can get out of date far too quickly,” says Blankenship.
She enthusiastically calls out iMovie as her favorite Apple technology, adding, “there are so many ways this can be used for students to demonstrate what they know, and it's such a flexible platform that students are really only limited by their imagination.”
In the classroom, her students use iMovie to create videos to demonstrate the concepts they've learned, even in computer science. Examples include how-to videos for making robots sing or draw, and explanations of the binary number system. Blankenship explains, “Because they can easily add video, photos and music all together, students can easily make many different kinds of videos. The end results are never boring!”
Tablets for the win: enabling intuitive and easy student learning.
“My favorite Apple technology is the iPad because the tablets are intuitive and easy to use for students,” says Beth Blecherman of Techmamas. Her sixth graders integrated iPads into their curriculum this year with wide success, making it a “transformative year.”
For other classrooms looking to do the same, Blecherman recommends leveraging an infrastructure of automated tools to help with your school’s internal communication. “We had that this year and all teachers participated which made the school workflow very efficient. I commend the staff at our Middle School for the work they did to bring the technology and workflow into the classroom in a way that enriched the kids’ learning environment and made the workflow more organized,” says Blecherman.
Have market trends, Apple updates and Jamf news delivered directly to your inbox. | <urn:uuid:48351e22-f5c5-4d5b-b426-48fcefd71b79> | CC-MAIN-2022-40 | https://www.jamf.com/blog/how-has-apple-transformed-your-classroom-part-ii/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00796.warc.gz | en | 0.962889 | 1,485 | 3.171875 | 3 |
Software development lifecycle models have different strategies and methodologies for the software development process and I wrote about the different types of development models, please review this article for more information, we also discussed how to select the most suitable model based on your project context.
Regardless, what model you have selected, these models are sharing mostly the same development phases with different arrangements, a more or a less phase. Furthermore, they can be implemented in an iterative and incremental model.
In this article, we will discuss the most common phases across all SDLC models. I will add other articles to discuss each phase in details 🙂
1- Requirements elicitation
Requirements elicitation is the practice of collecting the requirements of a system from users, customers, and other stakeholders. The practice is also sometimes referred to as “requirement gathering“
The common issue of software engineers now that they use requirement gathering as a phase name, while in the practice, requirements elicitations is not only a process of collecting requirements, it has a lot of techniques and required skills to extract and generate the requirements, for example, observations, workshops, brainstorming and making prototypes and analyze the feedback.
At this phase, we define the requirements which will shape the software regardless if the process model itself. Moreover, we assume here that there is the scope definition has been done and the business case for why we need to develop and implement this software.
It is a common fact now that most of the software projects fail because of the requirement elicitation phase and that requirements are unclear, unprioritized, incomplete, unreflective of business goals and objectives.
2- Architecture and Design
System design is the process of defining the architecture, modules, interfaces, and data for a system to satisfy specified requirements.
Software design usually involves problem-solving and planning a software solution. This includes both a low-level component and algorithm design and a high-level, architecture design.
Software design is depending on the business requirements and architecture design decisions have been taken, in this regards it is very important to define the requirements well or you will have a failure or not agile design. At this phase, you continue analyzing the requirements and you may need other iterations for requirements refinement and changes.
The design phase has a lot of design disciplines, like data design, User interaction and experience design, process design, and others.
This is considered the longest phase as we turn the requirements and design elements to actual code.
This is also known as coding or building or developing phase. It is known as implementation phase at most of Software engineering blogs, while it may be correct from developers perspective not from the overall process perspective as we will see that the implementation phase has different activities. do you agree?
Similarly to other phases, The construction phase can be done in an iterative way to have early business value to the customer. Moreover, we can back again to design and refine requirements as well, while this will depend mainly on the SDLC model selected.
Testing has a life cycle by its own know as Software testing life cycle STLC and it is called also verifications and validation phase or stabilizing phase, as we ensure that we are doing things right according to the specifications and we are doing the right things from the customer perspective.
In this phase, we make all types of testing, for example, unit testing, integration testing, quality attributes testing, and others.
Furthermore, it is an iterative process and always there is a feedback loop to other phases to fix the bugs and issues found during this test. And it has a lot of techniques to calculate the required test cases and how to ensure an acceptable test coverage.
The main goal of the Deploying Phase is to place the solution into a production environment. Supporting goals include deploying the solution technology and components, stabilizing the deployment, and transitioning the project to operations and support.
Deployment can be iterative as well and need continues testing to ensure that software functionalities are working correctly in the production environment. Currently, most of the startups use a continuous delivery and continuous integration approach in which teams produce software in short cycles, ensuring that the software can be reliably released at any time. It aims at building, testing, and releasing software faster and more frequently.
Deployment has different approaches as well, we should deploy first in staging environment especially in critical projects, which simulates the production environment and we continue performing our testing activities and validation process based on this environment.
The Implementation Phase has a lot of supporting activities include training end-users and administration, The software will need observations and smart detections of issues and bugs which we could not detect during the previous phases. The implementation phase also may include the deployment phase as the main activity and change management process for users who will use the software which is a huge challenge for software and IT project success.
7- Operation and Maintenance
The purpose of the Operations and Maintenance Phase is to ensure that the information system is fully functional and performs optimally until the system reaches its end of life.
In this phase, the software become one of the core components of the organization baseline architecture and users start to use the software to benefit from its functionalities and the business values it delivers for them.
The operation phase also can be merged with the implementation phase activities. During this phase, some issues and bugs may be discovered and it is important to solve them to ensure business continuity.
Now, we hear about DevOps and how the roles of developers and operation support engineers are merged together to achieve the continuous delivery approach and establishing a culture and environment where building, testing, and releasing software can happen rapidly, frequently, and more reliably.
This phase is not common in the development process, and it is neglected usually. The goal of the Retirement Phase is the removal of a software release from production, it is also known as system decommissioning or system sunsetting. The retirement of systems is a serious issue faced by many enterprises today as legacy systems are removed and replaced by new systems. You must strive to complete this effort with minimal impact on business operations and you need to assess the other solutions are depending on this software. A software is retired for several reasons:
- The software is being replaced.
- The software is no longer to be supported or obsolete.
- The software no longer supports the current business model.
- The system is redundant.
- The system has become obsolete.
You can always add your findings and notes in the comments section below 🙂
- Chemuturi, M. (2012). Elicitation and Gathering of Requirements. In Requirements Engineering and Management for Software Development Projects (pp. 33–54).
- Gomaa, H. (2011). Software Life Cycle Models and Processes. In Software Modeling and Design: UML, Use Cases, Patterns, and Software Architectures (pp. 29-44). Cambridge: Cambridge University Press. doi:10.1017/CBO9780511779183.005
- Software testing. (2017, June 09). Retrieved June 12, 2017, from https://en.wikipedia.org/wiki/Software_testing
- Phase 8: Implementation – COTS Multiple Release Project. (n.d.). Retrieved June 12, 2017, from http://doit.maryland.gov/SDLC/COTS/Pages/Phase08Multiple.aspx
- CONSTRUCT PHASE. (n.d.). Retrieved June 12, 2017, from https://www.lifecyclestep.com/open/430.0CONSTRUCTPHASE.htm
- Chapter 1: Deploying Phase. (n.d.). Retrieved June 12, 2017, from https://technet.microsoft.com/en-us/library/bb496997.aspx
- Application retirement. (2017, March 14). Retrieved June 12, 2017, from https://en.wikipedia.org/wiki/Application_retirement
- B. P. (n.d.). 5 Reasons Software Projects Fail. Retrieved June 13, 2017, from http://www.seilevel.com/requirements/5-reasons-software-projects-fail-hint-its-often-due-to-incomplete-incorrect-requirements
- List of failed and over budget custom software projects. (2017, June 01). Retrieved June 12, 2017, from https://en.wikipedia.org/wiki/List_of_failed_and_overbudget_custom_software_projects
Help to do more!
The content you read is available for free. If you’ve liked any of the articles at this site, please take a second to help us write more and more articles based on real experiences and maintain them for you and others. Your support will make it possible for us. | <urn:uuid:7723f0da-4e6b-4a9a-b334-482884f848bd> | CC-MAIN-2022-40 | https://melsatar.blog/2017/06/13/what-do-you-need-to-know-about-the-eight-software-development-phases/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00796.warc.gz | en | 0.933594 | 1,827 | 2.84375 | 3 |
Kids and Online Safety – 4 Essentials for Parents
Jackie here. Today’s kids use the internet more than any other generation. For parents, increased internet usage means it’s important to know how to prepare kids for the risks they may encounter online. This often feels like a daunting task. Where do you start? These 4 essentials can help you get started.
Passwords

Do your kids know how to create a strong password? All too often, adults choose passwords that aren’t secure (think ‘password123’) so it only makes sense that children do too. Teach your child the basics of a strong password (letters, numbers, symbols, not common words, no names, no important dates, etc.). Also, make sure your child knows when (and when not) to enter a password when prompted online. Many scams impersonate popular sites to attempt to steal your password.
Social Media Sharing

Parents can set a good example for their children by sharing wisely on social media. Teach your children not to overshare. The more information you put out there, the more information thieves have available for cracking your security questions, creating targeted phishing attempts, etc.
Secure Mobile Devices
Children often access the internet using mobile devices like tablets and smartphones. One survey found that 37% of children didn’t have security software on their mobile device. Only 34% of parents have installed a parental control app. Security software and parental control tools are an important way for parents to protect their children online.
Cyberbullying

Cyberbullying is a bitter reality online and can be particularly harmful to children and teens. Help protect your child by teaching them what to do should cyberbullying occur. Teach them about the blocking and reporting options on Facebook and Twitter so they can control who has access to their accounts and information on these sites. If abusive messages are received, teach your child to talk to you and to save the messages in case they are needed for sharing with school administrators or the police.
For more great tips, check out this article from WeLiveSecurity. | <urn:uuid:736ce30a-4230-4a70-862f-5009b561d680> | CC-MAIN-2022-40 | https://www.allclearid.com/2015/10/15/kids-and-online-safety-4-essentials-for-parents/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00796.warc.gz | en | 0.914794 | 430 | 3.078125 | 3 |
Concerns over the accuracy of facial recognition systems came under the spotlight when Amazon’s Rekognition incorrectly matched 28 members of Congress to criminal mugshots.
If facial recognition is now a mainstream technology, then Apple’s November 2017 release of Face ID can be viewed as the turning point, according to a report from Forrester.
Facial recognition, a technology powered by machine learning, can now match users to faces in social media photos and help police find a criminal suspect.
But there have been concerns with the technology. Joy Buolamwini, a researcher at the MIT Media Lab, found it is less accurate for people with darker skin tones. It doesn't work well in low light or with images taken by thermal cameras used by the military. And even though more than 117 million American adults have their photos in law enforcement facial recognition databases, "most law enforcement agencies do little to ensure their systems are accurate,” a 2016 Georgetown University study found.
Concerns over the accuracy of facial recognition systems resurfaced recently when the ACLU released the results of a study that found Amazon’s version of the technology, Rekognition, misidentified 28 members of Congress.
“Using Rekognition, we built a face database and search tool using 25,000 publicly available arrest photos,” the ACLU said in the release of its findings. “Then we searched that database against public photos of every current member of the House and Senate. We used the default match settings that Amazon sets for Rekognition.”
Nasir Memon, a professor of computer science at the New York University Tandon School of Engineering, said it isn’t realistic to expect these systems to be completely accurate.
“False positives are inevitable,” Memon said. “After all, these are pattern recognition systems that do an approximate match.”
The “default match settings” have been at the heart of the conversation since the ACLU released its findings. Amazon and one researcher interviewed by GCN said the ACLU would likely have had fewer, if any, false positives had the confidence threshold settings been different.
The higher the confidence threshold, the fewer false positives and more false negatives. The lower the confidence threshold, the more false positives and fewer false negatives. It's a trade-off, according to Patrick Grother, a computer scientist at the National Institute for Standards and Technology who administers the agency's Face Recognition Vendor Test.
“It is always incumbent on any end user of a technology in any application to set the threshold appropriately,” he explained. “In this case, I don’t think Amazon or anybody else would claim that the threshold was set appropriately.”
This is exactly the point made by Amazon after the ACLU made its findings public. The default setting -- an 80 percent confidence threshold -- has its uses, but checking photos of members of Congress against a criminal database isn’t one of them, the company said.
“While 80% confidence is an acceptable threshold for photos of hot dogs, chairs, animals, or other social media use cases, it wouldn’t be appropriate for identifying individuals with a reasonable level of certainty,” an Amazon Web Services spokesperson wrote in an email. “When using facial recognition for law enforcement activities, we guide customers to set a threshold of at least 95% or higher.”
AWS General Manager of Deep Learning and Artificial Intelligence Matt Wood posted a blog saying that law enforcement applications of Rekognition should use a confidence threshold of 99 percent. Amazon also reran the test conducted by the ACLU on a larger dataset and saw that the “misidentification rate dropped to zero despite the fact that we are comparing against a larger corpus of faces (30x larger than the ACLU test),” Wood wrote. “This illustrates how important it is for those using the technology for public safety issues to pick appropriate confidence levels, so they have few (if any) false positives.”
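In practice, that threshold is something the caller sets on each request. The sketch below is a rough illustration using the boto3 SDK's compare_faces call with a 99 percent similarity threshold, in line with the figure Amazon cites for identifying individuals; the bucket and image names are placeholders, and the exact parameters should be checked against current AWS documentation rather than taken from this example.

```python
import boto3

rekognition = boto3.client("rekognition")  # assumes AWS credentials are already configured

# Placeholder S3 objects; substitute your own bucket and image keys.
response = rekognition.compare_faces(
    SourceImage={"S3Object": {"Bucket": "example-bucket", "Name": "probe.jpg"}},
    TargetImage={"S3Object": {"Bucket": "example-bucket", "Name": "gallery.jpg"}},
    SimilarityThreshold=99,  # only return matches scoring 99% similarity or higher
)

for match in response.get("FaceMatches", []):
    print(f"Match found with similarity {match['Similarity']:.1f}%")

if not response.get("FaceMatches"):
    # Nothing cleared the bar: either a correct rejection or a false negative.
    print("No match above the configured threshold.")
```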
The ACLU did not respond to multiple questions including whether it planned to retest the software using a higher confidence threshold and focusing on false negative rates.
Higher confidence levels are, however, a double-edged sword because they will introduce more false negatives, meaning the system will fail to match or identify people it should be spotting.
“When you raise this confidence to a higher level, 95 to 99 percent, then you start getting more false negatives,” Memon said.
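The toy example below, with invented similarity scores and ground-truth labels, shows the effect Memon describes: sweeping the threshold from 80 to 99 trades false positives for false negatives.

```python
# Invented (score, is_genuine_match) pairs for illustration only.
candidates = [
    (99.2, True), (96.5, True), (88.1, False), (84.3, True),
    (81.0, False), (79.6, False), (62.4, False),
]

def evaluate(threshold):
    false_positives = sum(1 for score, genuine in candidates
                          if score >= threshold and not genuine)
    false_negatives = sum(1 for score, genuine in candidates
                          if score < threshold and genuine)
    return false_positives, false_negatives

for threshold in (80, 95, 99):
    fp, fn = evaluate(threshold)
    print(f"threshold {threshold}: {fp} false positives, {fn} false negatives")

# A low threshold surfaces wrong matches that humans must weed out;
# a high threshold suppresses them but starts missing real matches.
```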
Grother said lower confidence thresholds can actually be used in law enforcement settings as long as staff can verify the results produced by the algorithm. He pointed to the investigation of the recent shooting at the Capital Gazette newspaper in Annapolis, Md. The suspect wouldn't tell investigators his name, so officials ran his photo against the state's facial recognition database, which quickly returned a match.
“In that case you can afford to set a threshold pretty much of zero and what will come back from the system is a list of possible candidates sorted in order,” he said. “At that point you would involve a number of investigators to look at those candidates and say, ‘Is it the right guy or not?’ or ‘Is it the right person or not?’ because you have time," to sift through the possible false positives, Grother explained. "The opposite situation, when you want to run a high threshold, is when you’ve got such an enormous volume of searches or so little labor to adjudicate the result of those searches that you must insist on having a high threshold” that would produce fewer false positives.
The increase in the number of false negatives associated with a high confidence level also means it's easier for a malicious actor to trick the facial recognition system, Memon said.
“A malicious actor can actually take advantage of the fact that the threshold is very high and potentially try to defeat the system by simply changing their appearance a little bit,” he said. “There has been work that has shown that by just wearing [a certain] kind of shades, you might fool the system completely.”
Forrester’s report on facial recognition technology from earlier this year called the false positive rate “more critical to assess” than the false negative rate because the potential consequences of misidentifying someone can outweigh the risks of not identifying someone. Both measures should be considered when buying a solution, the market research firm advised.
“Seek to implement [facial recognition] solutions that operate in production at a stringent [false positive rate] of no more than 0.002% (one in 50,000) and a [false negative rate] of no more than 5%, but with the ability to make the [false acceptance rate] more stringent and the [false rejection rate] higher if the firm’s needs change,” Forrester suggested.
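To put those rates in perspective, here is a quick back-of-the-envelope calculation; the daily volumes are arbitrary assumptions chosen purely for illustration.

```python
false_positive_rate = 0.00002  # 0.002%, i.e., one in 50,000
false_negative_rate = 0.05     # 5%

searches_per_day = 1_000_000   # hypothetical volume of comparisons
true_matches_per_day = 1_000   # hypothetical number of genuine matches present

expected_false_positives = searches_per_day * false_positive_rate
expected_missed_matches = true_matches_per_day * false_negative_rate

print(f"Expected false positives per day: {expected_false_positives:.0f}")  # about 20
print(f"Expected missed matches per day: {expected_missed_matches:.0f}")    # about 50
```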
Rekognition is currently being used by the Sheriff’s Department in Washington County, Ore. Another locality that was testing the technology, Orlando, Fla., decided in June not to move forward after piloting it, but in July said it would continue with its testing.
Perhaps the most visible use of facial recognition technology has been efforts by the Transportation Security Administration and Customs and Border Protection, which are testing systems at Los Angeles International Airport and other major airports to verify identities of international passengers. TSA told GCN that facial recognition “is still in the development and testing phase.”
Before making it into the airport for the pilot phase, the technology “undergoes a thorough and rigorous testing and evaluation process in a laboratory setting,” Michael McCarthy, a spokesperson for the TSA, said in an email. “The information gathered during pilot tests helps determine whether a technology may move forward in the testing process or whether it requires additional development and testing in a laboratory environment,” he said.
The availability of the Rekognition facial recognition technology through the AWS cloud could speed widespread adoption, in spite of its relative immaturity and standards that change depending on the use case, Grother said.
But given the easy access, Memon said, it could be time to start looking at some kind of regulation.
A botnet is one of the cyber threats you should beware of. Your device may be part of this malicious network, and you need to resolve it quickly.
The 21st century is undoubtedly the century of technological advances. Year after year, technology evolves at a dizzying pace that would surprise past generations. This is especially noticeable in communications and information: everything is more instantaneous, we are more interconnected every day, and the Internet now has some 4 billion users worldwide. And while this brings benefits, it also has consequences, such as the rise of botnets.
Those who wish to commit malicious actions are unfortunately everywhere, and the Internet is no exception. Just as the Internet is a source of information and communication, it is also a source of threats and risks. If you don’t have the right defenses, a single click on the wrong site at the wrong time can bring terrible results, such as the theft or loss of information or the destruction of your devices.
What is a botnet?
Before explaining what a botnet is, we must first define what a bot is, since bots are so common all over the web. This is essential because many people know the term but don’t know what it actually means.
A web robot, or simply a bot, is a software application that performs automated tasks (scripts) across the Internet. Most commonly, bots carry out simple and repetitive tasks, since they perform them at faster-than-human speed. Bots themselves are not malicious, as evidenced by web crawlers, the bots that index the web, but, as with any tool, some use them for illegal or dishonest purposes.
So, a botnet is a network of devices connected to the Internet, such as computers, smartphones, or IoT devices, that have been compromised and whose control is in the hands of a third party. And each of these devices runs one or more bots. The botnet controller has the ability to direct the actions of the compromised computers via standard network protocols, such as IRC or HTTP. The attacker commonly controls the bots via command and control (C&C) software.
How a botnet works
To have a complete understanding of botnets, you must understand how they work. The subject is not exactly simple, but it is essential to understand the threat posed by a botnet.
In summary, there are two ways to set up botnets. These are:
- Client-server model. This is the oldest way of setting up a botnet. The compromised computers, i.e., the bots, receive instructions from a single location on the Internet, such as a specific website or server. This model makes the botnet much easier to take down: it is enough to find that central site and shut it down.
- Peer-to-peer model. This is much more complicated than the previous model, and it does not share the same weakness. Each infected device connects directly to other devices in the network, which in turn communicate with others, forming an intricate botnet. Taking down a single device is not enough, as the rest of the network will continue to function.
Hackers or botnet controllers, called “bot herders”, can carry out a series of terrible actions with the network of zombie computers they manage.
Mainly, botnets are used for:
- A sufficiently large botnet can quickly produce and send hundreds, thousands, or even millions of spam messages. Originally, this was its main function.
- By compromising a computer, the bot herder gains access to all your personal information, including your contacts. Potentially, it could impersonate you by using your own computer to carry out online scams.
- Data theft. Once your device is part of the botnet, the hacker can easily install spyware and observe your activity, creating a massive spyware network. This facilitates information theft, such as bank details, passwords, and any other sensitive information.
- Click fraud. Third parties may use your computer remotely to visit websites without your knowledge, generating fake traffic.
- Ad fraud. The bot herder can use all the devices in the botnet to falsely increase its popularity or increase the number of clicks on an advertisement, getting more money from advertisers.
- Bitcoin mining. A botnet can be an interconnected network of devices mining bitcoin or any other cryptocurrency to generate profits for the network operator.
- DDoS attack. It is common for hackers to use botnets to carry out massive DDoS attacks on websites, relying on hundreds, thousands, or even millions of devices to do so.
- Virus spreading. A sufficiently sophisticated botnet can compromise and add more devices to the network automatically.
If your device, whether a computer, a phone, or a tablet, suddenly runs slower; if someone lets you know they received a message from you that you do not remember sending; or if your antivirus has suddenly stopped working or you cannot download one, then your device is most likely part of a botnet, and you must resolve the issue as quickly as possible.
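For the technically inclined, one quick (and far from conclusive) check is to list the network connections your computer currently holds open and look for unfamiliar remote addresses or programs. The sketch below uses the third-party psutil package purely as an illustration; it only surfaces information for you to review, and a clean-looking list does not prove the device is safe.

```python
# pip install psutil  (listing every process may require administrator/root privileges)
import psutil

# Print established outbound connections with the owning process,
# so unfamiliar remote hosts or programs stand out for manual review.
for conn in psutil.net_connections(kind="inet"):
    if conn.status == psutil.CONN_ESTABLISHED and conn.raddr:
        try:
            owner = psutil.Process(conn.pid).name() if conn.pid else "unknown"
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            owner = "unavailable"
        print(f"{owner:<25} -> {conn.raddr.ip}:{conn.raddr.port}")
```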
Fortunately, it is not that difficult to stop being part of a botnet, even if it seems so. All you have to do is uninstall the malicious software that controls it. That is easier said than done, however, as such software usually hides under the guise of bona fide programs. If you have a powerful antivirus, a thorough scan should be enough; the other, more drastic option is to format the device completely.
The best thing you can do is prevent your device from becoming part of a botnet in the first place. Always have a good antivirus installed; be careful on the web, don’t access untrusted websites, and don’t download content of dubious origin. Also, always keep your device’s operating system updated.
Finally, always seek guidance from an IT security expert. You’ll see this will bring you many benefits. | <urn:uuid:9d76a985-a6cf-41a2-884c-fcda75791369> | CC-MAIN-2022-40 | https://demyo.com/botnet-network-threats-concerns/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00796.warc.gz | en | 0.94934 | 1,265 | 3.21875 | 3 |
WiFi uses different frequency ranges for connectivity. Each frequency band offered by a WiFi router has certain pros and cons, and people choose between them according to their needs and convenience.
These frequencies make it possible for the WiFi signal to work efficiently in a given environment. To understand the concept better, think of radio: if you have used a radio or studied how it works, you know there are different types of frequencies, such as AM and FM.
Likewise, other wireless protocols such as infrared and Bluetooth operate on their own frequency ranges. The most advanced and fastest frequency band for WiFi is 5 GHz. It is now appearing on most of the latest WiFi routers and WiFi-enabled devices to ensure an optimal level of connectivity.
There are two common WiFi frequency bands in use across the world, and most WiFi-enabled devices support one or both of them. These two frequencies are:
2.4 GHz is the most widely used WiFi frequency. It is supported by almost all WiFi-enabled devices, as it is the original frequency for WiFi. Given a choice, this frequency band is optimal for covering a large area: it limits the speed of data shared over WiFi but increases signal strength and the range over which signals can travel.
This frequency will be optimal for you if you require stronger signals and you are going to use a single WiFi router to power your entire home or office. You might have to compromise on the data transfer speeds but the signal strength and connectivity range will definitely make up for it.
5 GHz is the latest and fastest WiFi frequency. You might not be able to get your hands on many devices that support 5 GHz connectivity yet, and routers able to transmit on this frequency are still somewhat rare and expensive. However, the trouble of finding a 5 GHz router and spending some extra money will be worth every penny, as you will get top-notch data transfer speeds over WiFi.
The biggest perk of the 5 GHz frequency is that you get the fastest internet and data transfer speeds. However, the frequency is good only for short ranges and is suitable for a room or two at most.
Signal strength beyond that range can be hard to maintain, and you might face interference. However, if you are a gaming enthusiast, have your router placed in the same room as your console, and want the best internet for online gaming or HD video streaming, 5 GHz is the best choice for you.
Can’t Connect to 5 GHz WiFi issue
The most common issue people face while trying to connect to a 5 GHz WiFi network is an error saying the device is unable to connect, or the device failing to find a 5 GHz network at all.
There can be several reasons for this problem. We will discuss each one and its solution in detail so you can not only troubleshoot such errors yourself but also enjoy the best internet speeds at your home or office. The most common issues that might prevent you from connecting to 5 GHz WiFi are:
1. Hardware Compatibility
Hardware compatibility is necessary to connect to a 5 GHz WiFi network. If you have the latest WiFi router and want to enjoy the full speed of 5 GHz WiFi on your laptop, PC, or mobile device, you need to ensure that the device itself supports 5 GHz.
Please check your device specifications and make sure it supports 5 GHz WiFi connectivity. Remember that you will not be able to connect if your router is transmitting only at 5 GHz and your device does not support that band. Many routers can transmit at both 2.4 GHz and 5 GHz, so you might also need to check whether your router can fall back to the lower frequency.
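On a Windows machine, for example, one way to check is to look at the radio types the wireless adapter reports. The short sketch below simply runs the built-in netsh wlan show drivers command and prints the relevant line; the exact wording of the output varies by driver and Windows version, so treat it as a rough illustration. Adapters listing 802.11a, 802.11ac, or 802.11ax can use the 5 GHz band, while 802.11b/g-only adapters are limited to 2.4 GHz.

```python
# Windows-only illustration: ask the wireless driver which radio types it supports.
import subprocess

output = subprocess.run(
    ["netsh", "wlan", "show", "drivers"],
    capture_output=True, text=True, check=False,
).stdout

for line in output.splitlines():
    # The report usually contains a "Radio types supported" line;
    # 802.11a/ac/ax imply 5 GHz capability, 802.11b/g imply 2.4 GHz only.
    if "radio types supported" in line.lower():
        print(line.strip())
```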
2. ISP Support
Some ISPs have limited plans or restrictions on using 5 GHz WiFi. You need to ensure not only that your router and the device you want to connect support the 5 GHz frequency, but also that your ISP does not impose any such restrictions. The best approach is to contact your ISP to confirm and, if needed, upgrade to a plan that supports 5 GHz connectivity.
3. Software Configuration
You might need to review the configuration of all your devices to ensure an optimal level of connectivity. Make sure the device you are trying to connect to 5 GHz WiFi, whether a cellphone, laptop, or gaming console, has the latest OS installed. For laptops, you will need to check the wireless adapter driver software as well.
Once you are clear on the OS version, you might need to configure the settings manually. If your device supports 5 GHz connectivity and is still not able to connect, chances are high that the automatic band switch is turned off. Make sure your WiFi-enabled device has the option enabled that lets it switch from 2.4 GHz to 5 GHz automatically.
4. Hardware Failure
If none of the above works, your device might have a hardware issue that needs to be fixed. In such cases, it is recommended to have your device checked at an authorized service center that will be able to diagnose the problem properly and get you a solution.
Consumers are bad at protecting themselves from online threats, and that is probably the main argument for why such a responsibility shouldn't be theirs to bear in the first place.
Consequently, the UK government is considering building a national cybersecurity defence system, which should incorporate government agencies, businesses, telecoms and ISPs.
During CYBERUK 19, a conference set up and run by the National Cyber Security Centre (NCSC), Jeremy Fleming, director of GCHQ, said it was time "to do more to take the burden of cybersecurity away from the individual."
"This technological revolution is providing extraordinary opportunity, innovation and progress – but it's also exposing us to increasing complexity, uncertainty and risk," he said, adding that this "brings new and unprecedented challenges for policymakers as we seek to protect our citizens, judicial systems, businesses - and even societal norms."
This cybersecurity defence system would be based on intelligence sharing between different parties involved.
Cybersecurity solutions can be improved and can do an amazing job at protecting systems, but at the end of the day, it comes down to the user. Security researchers are saying humans are still the biggest risk factor, as they sometimes ignore security warnings, click unwanted links and download malicious attachments.
Cybersecurity firms will continue improving their solutions, but businesses everywhere are urged to educate their employees on how to stay safe online.
Phishing, a practice in which hackers fish for vital information such as login credentials, is still considered one of the biggest cybersecurity threats.
Technology will determine the winners and losers in the energy transition revolution
The energy industry is entering a new era, triggered by the relentless rise of renewables, electric cars and smart grids. And like previous industrial revolutions, technology – and the willingness to embrace it – will determine the winners and losers.
Over the past decade, the greatest disruptor in the energy industry has been the unlocking of vast shale oil and gas reserves in the U.S. But the next breakthroughs won’t come from better drilling rigs. They will emerge from digitalizing the existing energy infrastructure and improving the way data is collected, analyzed and used to maximize efficiency and minimize the environmental impact of fossil fuel extraction and consumption.
Global energy demand will grow an expected 50 percent by 2050, and there are still over one billion people who lack access to power, so it’s important to maintain investment in oil, gas and power generation to meet these needs. At the same time, we must tackle the challenges of climate change and massively reduce CO2 and other emissions. There’s consensus that new technology has an important role to play to solve this problem.
But the industry remains strained. For the past five years, oil and gas companies faced “lower for longer” forecasts for crude prices and slashed investment budgets, including crucial spending on technology. Some energy experts don’t see annual investment reaching the $600 billion necessary to meet future oil demand through the next decade. Such a shortfall would limit the broad deployment and development of promising technologies.
Some producers are reluctant to allocate capital to projects that won’t show returns for years. Others are concerned about sharing data and cybersecurity, reinforcing the industry’s preference for tech conservatism. The slow pace of adoption, however, isn’t uniform and many companies, especially in the Middle East, are on the cutting edge of digitalization and devote considerable resources to technology.
Innovations in the oil and gas sector today, from exploration to downstream projects, are aimed at squeezing maximum value from each barrel. Along this chain, some advances are no-brainers, like switching out a half-century-old gas engine with a highly efficient and connected electric drive or installing sensors that allow for real-time production optimization.
The greater promise lies in technology that’s still in its infancy and may not be the obvious choice, yet. Potentially disruptive technologies such as artificial intelligence, ‘digital twins’ and additive manufacturing are among the most important developments today.
The author of this article is Dietmar Siersdorfer, CEO of Siemens Middle East as well as Siemens LLC United Arab Emirates since December 2013, and CEO of the Energy Sector in the Middle East, since June 2008. Based in the United Arab Emirates, Siersdorfer joined Siemens in 1987 in Mannheim, Germany, as an electrical engineer and held various managerial positions in the Industry and Energy sectors during his tenure at the company. | <urn:uuid:dbf3ed10-be5d-42f8-973e-d2e7ac54fc7c> | CC-MAIN-2022-40 | https://www.iiot-world.com/industrial-iot/connected-industry/technology-will-determine-the-winners-and-losers-in-the-energy-transition-revolution/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00196.warc.gz | en | 0.939624 | 639 | 2.515625 | 3 |
The cost of cooling high-powered data center compute has become so high that liquid cooling from companies such as Iceotope can now slash electricity bills by as much as 40 per cent in hot locations like Singapore.
And that saving isn’t just from the power required to run air-conditioning and other cooling systems, but the total electricity bill. Furthermore, water savings are even greater – as high as 95 percent.
That’s according to an independent survey commissioned by cooling specialist Iceotope, revealed to DCD in a recent interview with CEO David Craig.
“The survey looks at a megawatt of compute at the ‘edge’ in Singapore. The electricity saving between our technology and what would otherwise be best-in-class cooling is 40 percent. That's not the cooling saving alone; that's the total electricity saving. The cooling saving is much more.
“So the total electricity load saving is 40 percent and the water saving is 95 percent. So the cost of running a megawatt of compute at the edge in Singapore is about $1.6 million a year versus about $1 million,” Craig told DCD. In terms of PUE, Craig claims that Iceotope’s technology can drive it right down to 1.03 “pretty much out of the box” regardless of where in the world the hardware is located.
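Those quoted figures are roughly self-consistent, as a quick back-of-the-envelope check shows (the dollar values below are simply the ones cited in the interview):

conventional_cost = 1.6e6   # USD per year for 1 MW of edge compute in Singapore
liquid_cooled_cost = 1.0e6  # USD per year with the liquid-cooled approach

saving = (conventional_cost - liquid_cooled_cost) / conventional_cost
print(f"Total electricity cost saving: {saving:.0%}")  # ~38%, in line with the quoted ~40%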
And for a state like Singapore, dependent upon its neighbor for water – or for nations that need to use electricity to produce drinking water from desalination – the ability to dramatically cut water consumption by using liquid cooling is as valuable as the power saving.
“Equally important, liquid cooling actually causes fewer failures in electronics, because it's operating in quite a cocooned space, and keeps the hardware in a much narrower temperature band,” said Craig. As a result, it can help data center operators pack more hardware into tighter spaces, as well as improving hardware reliability: “We've had systems running seven years without a single server failure.”
While such savings and features ought to be enticing for conventional data center operators, Craig believes they will be absolutely essential for edge data centers, based in out-of-the-way places where high power and water consumption could be an even bigger issue. Indeed, Craig believes that even hospitals will need their own edge data centers to assist with diagnosis and operations.
These will need to be in “a more protective environment with fewer moving parts and fewer things to worry about, which can phone home when it might need some maintenance,” so that engineers can plan their visits rather than driving from site to site, fire-fighting.
The issue of cooling is only going to intensify over the coming years with ever-greater demands placed on data centers, not just so that the masses can watch films at 4K resolutions while tweeting their friends and Instagramming their dinners, but increasingly for serious, intensive AI-based applications that require real-time or near-real-time responses, such as self-driving vehicles.
That will put compute power in some inhospitable locations, both in the Western world, as well as emerging markets, with their own unique challenges.
“One billion more people will join the world’s middle classes over the next decade, so the power of the world’s electronics must be shared with them, but we need it to be done as sustainably as possible. And most of them will likely live in environments that are quite hot and humid,” says Craig. They may also be somewhat water-constrained.
In other words, the demands on data centers in these parts of the world, whether in Asia, Africa or South and Central America, will be similar to the ones faced by operators in Singapore.
More than that, adds Craig, immersion cooling also makes heat re-use easier, and should power go down, the liquid can keep cooling CPUs for 20 or 30 minutes without power. “If there’s a glitch you can point your UPS towards the servers, keep the servers going and the rest will be fine,” suggested Craig. “We can certainly ameliorate and mitigate the problems of unstable power.”
Craig, himself, not only has plenty of energy – as he no doubt demonstrated in his youth as the bassist in a punk-rock band – but also 30+ years of business experience, almost entirely in the tech sector. Craig’s grown-up life started at Unisys in the mid-1980s, followed by IBM during the period when its CEO Lou Gerstner was trying to teach the knackered, old elephant to dance. Less than ten years later, Craig was part of a team leading the turnaround of McLaren Software as chief operating officer.
On top of all that, he’s helped to build schools in Kenya as a trustee and founder of the charity Educate the Kids, and founded wind turbine developer Green Power Partnership – among many other things.
In other words, Craig is both involved in trying to cut data center power consumption via cutting edge cooling technologies with Iceotope, and also generating the power in the first place.
A perspective article by researchers from Stockholm University and ETH Zurich, published in August in Environmental Science & Technology, suggests that environmental contamination by per- and polyfluoroalkyl substances (PFAS) defines a separate planetary boundary and that this boundary has been exceeded.
"There has been an astounding decline in guideline values for PFAS in drinking water in the last 20 years. For example, the drinking water guideline value for one well-known substance in the PFAS class, namely the cancer-causing perfluorooctanoic acid (PFOA), has declined by 37.5 million times in the US," said Ian Cousins, the lead author of the study and professor at the Department of Environmental Science, Stockholm University.
"Based on the latest US guidelines for PFOA in drinking water, rainwater everywhere would be judged unsafe to drink. Although in the industrialized world we don't often drink rainwater, many people around the world expect it to be safe to drink, and it supplies many of our drinking water sources," Cousins continued.
The Stockholm University team has carried out laboratory and field work on the atmospheric presence and transport of PFAS for the past decade. They have noted that the levels of some harmful PFAS in the atmosphere are not declining notably, despite their phase-out by the major manufacturer, 3M, already two decades ago. PFAS are known to be extremely persistent, but their continued presence in the atmosphere is also due to their properties and to natural processes that continually cycle PFAS back to the atmosphere from the surface environment. One important natural cycling process for PFAS is the transport from seawater to marine air by sea spray aerosols, another active research area for the Stockholm University team.
"The extreme persistence and continual global cycling of certain PFAS will lead to the continued exceedance of the above-mentioned guidelines," said Professor Martin Scheringer, a co-author of the study based at ETH Zurich in Switzerland and RECETOX, Masaryk University in the Czech Republic.
"So now, due to the global spread of PFAS, environmental media everywhere will exceed environmental quality guidelines designed to protect human health, and we can do very little to reduce the PFAS contamination. In other words, it makes sense to define a planetary boundary specifically for PFAS and, as we conclude in the paper, this boundary has now been exceeded," said Scheringer.
PFAS is a collective name for per- and polyfluorinated alkyl substances, or highly fluorinated substances that have a similar chemical structure. All PFAS are either extremely persistent in the environment or break down into extremely persistent PFAS, which has earned them the nickname "forever chemicals."
PFAS have been associated with a wide range of serious health harms, including cancer, learning and behavioral problems in children, infertility and pregnancy complications, elevated cholesterol, and immune system disorders.
Dr. Jane Muncke, Managing Director of the Food Packaging Forum Foundation in Zürich, Switzerland, who was not involved in the work, points out: "It cannot be that a few benefit economically while polluting the drinking water for millions of others and causing serious health problems. The vast amounts it will cost to reduce PFAS in drinking water to levels that are safe based on current scientific understanding must be paid by the industry producing and using these toxic chemicals. The time to act is now."
The article "Outside the Safe Operating Space of a New Planetary Boundary for Per- and Polyfluoroalkyl Substances (PFAS)" is published in the scientific journal Environmental Science & Technology.
IPv6 is the latest version of the Internet Protocol (IP) that is the basis of how the internet is built and how it runs.
The internet was originally designed using IPv4, but after several years of enormous growth, it started to look like the world would run out of IP addresses. In 1998, IP version 6 (IPv6) was standardized. Simply put, version 6 gives the internet more IP addresses and additional features.
When you lease a static IPv4 address, it will typically have a "derived IPv6 address" associated with it, so there is no need to order anything additional. If your modem is compatible, then you just have to enable IPv6 on your modem. All of the newer modems from CenturyLink are IPv6-compatible. | <urn:uuid:2e060323-de16-4b4b-b41d-25d0e4d44316> | CC-MAIN-2022-40 | https://www.centurylink.com/home/help/internet/static-ip-addresses/FAQ-static-IP-addresses.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00196.warc.gz | en | 0.948843 | 158 | 2.828125 | 3 |
Researchers have found a vulnerability in a popular C standard library in IoT products that could allow attackers to perform DNS poisoning attacks against a target device.
The library is known to be used by major vendors such as Linksys, Netgear, and Axis, but also by Linux distributions such as Embedded Gentoo. Because the library maintainer was unable to develop a fix, this vulnerability remains unpatched. For this reason, the affected devices were not mentioned in detail.
In computing, a library is a set of resources that can be shared among processes. Often these resources are specific functions aimed at a certain goal. These functions can be called upon when needed so they do not have to be included in the code of the software that uses it. Another example of such a library that caused some havoc was Log4j.
A C standard library is a library for the C programming language itself. Such a library provides macros, type definitions, and functions for tasks such as string handling, mathematical computations, input/output processing, memory management, and several other operating system services. As you can imagine, such a standard library is called numerous times by many programs that depend on these basic functions.
In this case, the library at hand is uClibc, one of the available C standard libraries, which focuses specifically on embedded systems because of its small size. uClibc is a relatively small C standard library intended for Linux kernel-based operating systems on embedded systems and mobile devices, and its features can be enabled or disabled to match space requirements.
The alternative uClibc-ng is a fork of uClibc that was announced after more than two years had passed without a uClibc release, citing a lack of any communication from the maintainer. Unfortunately uClibc-ng shares the same vulnerability.
Similar to other C standard libraries, uClibc provides an extensive DNS client interface that allows programs to readily perform lookups and other DNS-related requests.
DNS poisoning, also known as DNS cache poisoning or DNS spoofing, is a cyberattack method in which threat actors redirect web traffic, usually toward fake web servers and phishing websites.
In a typical home setup, there is:
- A modem provided by your Internet Service Provider (ISP) which is your connection to the outside world.
- A router that distributes the internet connection across all the devices (often wireless).
- The devices like your laptop, phones, tablets and IoT (Internet of Things) devices such as TVs, temperature sensors, and security cameras.
These days, the modem and router are usually combined in the same device.
A DNS poisoning attack enables a subsequent Machine-in-the-Middle (MitM) attack because the attacker, by poisoning DNS records, is capable of rerouting network communications to a server under their control.
One of the main ingredients protecting us against DNS poisoning is the transaction ID. This is a unique number per request that is generated by the client, added to each request sent, and that must be included in a DNS response for the client to accept it as the valid answer to that particular request. While this transaction ID should be as random as possible, the researchers found that it follows a pattern: at first the transaction ID is incremental, then it resets to the value 0x2, then it becomes incremental again.
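To make the role of the transaction ID concrete, here is a small Python sketch using the third-party dnspython library (an illustration of the concept, not of the vulnerable uClibc code):

import dns.message

# A well-behaved stub resolver picks an unpredictable ID for every query.
q1 = dns.message.make_query("example.com", "A")
q2 = dns.message.make_query("example.com", "A")
print(q1.id, q2.id)  # two independent, hard-to-guess 16-bit values

# A response is only accepted if its ID matches the outstanding query.
# When IDs are predictable (e.g. incremental, as found in uClibc), an
# off-path attacker mainly has to guess the UDP source port to spoof a reply.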
While figuring out where this pattern comes from, the researchers eventually found out that the code responsible for performing the DNS requests is not part of the instructions of the executable itself, but is part of the C standard library in use, namely uClibc 0.9.33.2.
Given that the transaction ID is now predictable, to exploit the vulnerability an attacker would need to craft a DNS response that contains the correct source port, as well as win the race against the legitimate DNS response incoming from the DNS server. As the function does not apply any explicit source port randomization, it is likely that the issue can easily be exploited in a reliable way if the operating system is configured to use a fixed or predictable source port.
Since the library maintainer has indicated he is unable to develop a fix, this vulnerability remains unpatched. The researchers are working with the maintainer of the library and the broader community in order to find a solution. The maintainer explicitly asked to publicly disclose the vulnerability, hoping for help from the community.
Because of the absence of a fix, the researchers did not disclose the specific devices that they found to be vulnerable. They did, however, disclose that the affected products include a range of well-known IoT devices running the latest firmware versions, with a high chance of being deployed throughout critical infrastructure.
The vulnerability was disclosed to 200+ vendors invited to the VINCE case by CERT/CC since January 2022, and a 30-day notice was given to them before the public release.
If you suspect that your router has been affected by DNS cache poisoning, have a look at our article DNS Hijacks: Routers, where you will find some information on how to resolve such matters. When it is purely a case of router DNS caching, I have yet to find a router where resetting the router and leaving it off for at least 30 seconds did not clear the cache. But note that this does not resolve an ongoing attack or remove the vulnerability. It’s just a matter of symptom management.
Stay safe, everyone! | <urn:uuid:b7b8ff84-0d3d-40c4-8977-1b0453fcfe96> | CC-MAIN-2022-40 | https://www.malwarebytes.com/blog/news/2022/05/unfixed-vulnerability-in-popular-library-puts-iot-products-at-risk | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00196.warc.gz | en | 0.948896 | 1,131 | 3.03125 | 3 |
How Secure are Password-Protected Files?
We recently discussed email security for accountants and mentioned that the use of password-protected files is not usually a very good solution for meeting data privacy needs. After writing this and getting some feedback, we thought that the issue of password-protected files really deserves some further discussion. Many people are under the assumption that if they use the “password protection” features of whatever software they are using, their data is safe and secure. However, this is not necessarily the case. Why?
Using password-protected files to secure data is fast and easy and built into many applications. Why not use it? Certainly, password protecting files is much better than not doing so. However, there are several things that determine how secure these “protected” files really are.
First, let’s assume that the file has fallen into the malicious hands of someone (a hacker) trying to steal the data from within it. If the file is not accessible to unauthorized people in the first place, encryption doesn’t even come into the picture. The hacker needs to figure out how to access the protected data. How can s/he do this?
Unlocking password-protected files?
How can someone access the content of a password-protected file? Well, that depends:
- If the file is not encrypted, but not openable in the normal program that is used to read it (i.e. like Microsoft Word), then the hacker just needs to remove the block on opening the file by editing the file.
- If the file is encrypted, but with a weak/poor form of security, the hacker may be able to use well known techniques to break into the security in a relatively short amount of time, no matter what password is used.
- If the file is encrypted with strong encryption, such as AES, the hacker needs to guess the password used.
Case 1 used to be prevalent many years ago when password-protection was first becoming popular. Various file formats could include codes that the reader programs would detect and cause them to ask for a password before letting the file be viewed. In these cases, the raw data was not actually encrypted, and the security relied upon the assumption that (a) the user can’t/won’t look at the raw file and see what the data actually is and (b) the user can’t/won’t be able to figure out how to edit the raw file to remove the “don’t open me” instructions. Of course, both of these assumptions are invalid. No mainstream program released in the last few years with password protection is so insecure as to use these kind of assumptions. So, unless you are using old legacy software, you don’t really have to worry about this extreme form of password-protected insecurity.
As a case in point, as recently as 2004, it was discovered that Microsoft Word’s (version 2000 and 2003 in backwards-compatibility mode) password-to-modify protection can be subverted easily to gain access to the full contents. Microsoft responded to this discovery by stating that
“(When) you use the Password to Modify feature, the feature is functioning as intended even when a user with malicious intent bypasses the feature … The behavior occurs because the feature was never designed to protect your document or file from a user with malicious intent.”
Admittedly, this is not exactly password protection from viewing, but password protection from editing. But the point is the same: even widely used software from companies like Microsoft sometimes does not have any kind of real inherent security in places where a naive user would assume it does.
Case 2 is prevalent even today. This involves using old encryption methods that have long ago proven to be easily broken. For example, Word and Excel 95, 97, and 2000 files with password protection can be opened by a hacker within 10 seconds because the encryption methods used contain known problems. For versions 2002 and 2003, the default encryption methods were made to be compatible with version 2000 and are thus susceptible to the same kind of easy access by any hacker. Versions 2002 and 2003 can use 128-bit RC4 for better (though not super) encryption; however, you need to manually enable this.
Many people still use versions of Microsoft Office older than 2007, and password-protected files generated by these versions are likely to be completely insecure. Many other programs commonly in use are also using old vulnerable encryption methods that render them completely insecure.
Case 3 is what you want if you need to use password-protected files. In this scenario, the file is actually encrypted using a highly secure encryption algorithm such as 128- or 256-bit AES. The only way to access the original data is to know or guess the password used. Microsoft Office 2007 uses 128-bit AES encryption for password protection and places those encrypted documents squarely in this case. Encrypted ZIP files (via WinZIP) use 128- or 256-bit AES encryption as well.
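For illustration, here is a minimal sketch of what password-based strong encryption looks like in code, using the third-party Python cryptography package (a generic example, not the exact scheme used by any of the products discussed here): the password is stretched into a key with PBKDF2 and the data is then encrypted with AES.

import base64
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def encrypt_with_password(data: bytes, password: str) -> bytes:
    salt = os.urandom(16)  # random salt; must be stored with the ciphertext for decryption
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=480_000)
    key = base64.urlsafe_b64encode(kdf.derive(password.encode()))
    return salt + Fernet(key).encrypt(data)  # Fernet = AES-128-CBC plus HMAC authentication

Even with strong encryption like this, a weak password remains the weak link, as discussed below.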
- Adobe Acrobat v9 (for making PDFs) uses 256-bit AES encryption, but this is actually weaker than that available in previous versions of Acrobat. This is still viable as long as your password is chosen well.
- Adobe Acrobat v8 uses 128-bit AES encryption; it is implemented in a way that is stronger and takes longer to break than that in v9. This is the best version, currently, to use for encryption.
- WinZIP and PkZIP use 128-bit or 256-bit AES encryption. These are both good as long as you have a good password. Note, however, that the file names inside a password-protected ZIP file are visible to anyone without needing to decrypt the file! If your file names are sensitive … put your password-protected ZIP file inside of another password-protected ZIP file.
- Office 2007 products (Word, Excel, Powerpoint, One Note) use 128-bit AES Encryption. This is good as long as you have a good password.
- Office 2002 and 2003 products can use 128-bit RC4, but are not configured to by default. This is bad … don’t use password encryption in these versions!
- Older versions of Office (as well as the default configurations of Office 2002 and 2003) use an older encryption scheme that is completely broken. Never use password encryption in these versions.
Breaking Strong Encryption
Password-protected files using strong encryption can only be accessed by knowing or guessing the passwords. If you are careful and use a very good password (i.e. one that cannot be easily guessed), then this form of password protection is indeed very secure.
However, it is exceedingly common for people, especially those with no security training, to use very simple passwords on such files. I.e. words found in the dictionary, like “green”, people’s names, or simple variations on these themes. Such passwords can be “guessed” easily by simply trying all words in the dictionary, all names, and all commonly used variations on all of these. For English, this means a few million possibilities (plus or minus — dictionaries vary). Computers are so fast that checking a few million possible passwords against an encrypted file can be done very quickly. So, any file protected with a password that falls into the category of “easily guessed/cracked” can be reliably opened in short order. It is not the strength of the encryption that is the problem, it is the strength of the key — the password.
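A rough back-of-the-envelope Python sketch shows why dictionary passwords fall so quickly while random ones do not (the guessing rate below is an assumed figure, purely for illustration):

import math

guesses_per_second = 1e9            # assumed offline cracking speed
dictionary_space = 5e6              # words, names, and common variations
random_space = 62 ** 12             # 12 random characters from A-Z, a-z, 0-9

for label, space in [("dictionary password", dictionary_space),
                     ("random 12-character password", random_space)]:
    years = space / guesses_per_second / 31_557_600
    print(f"{label}: {math.log2(space):.0f} bits of entropy, "
          f"~{years:.1e} years to exhaust at 1e9 guesses/second")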
In fact, the demand for opening password-protected Office and PDF files is so great that there are many commercial programs available that can do this for you for a few dollars. These are “password recovery” programs, but are equally useful to people trying to gain unauthorized access to such files. They will do all the guessing and testing and can open most files with poorly chosen passwords. For example, a quick Google search found:
- How to Open a Password-Protected PDF
- Office 2007 Password Recovery
- Office Password Pro
- PDF Password Recovery
- ZIP File Password Recovery
With all of these utilities readily available, it is within anyone’s reach to open common password-protected files.
Other Problems with Password-Protected Files
Unauthorized access to the content of a file is not the only potential problem. Anyone who can get access to the file content and its password can also alter the file content and re-protect it with the same password in a way that is, for all intents and purposes, undetectable. So, you could have an encrypted file that holds important information that has been broken into and changed, and you would not know it. Use of regular password-protected files as “vaults” where the data stored therein is assumed safe and immutable is not a really good decision.
So, What Can Be Done?
If you need to use encrypted files, you should:
- Make sure that the files are encrypted using strong encryption
- Use good passwords … ones with uppercase and lowercase characters, numbers, spaces, and symbols. Things that would never be assembled into a common dictionary.
- If you are using password protection for sending files to multiple people, do not use the same password for everyone! Use a different password for each of your corespondents. This ensures that the loose lips of one person does not compromise the security of someone else.
We have time and again seen or heard of organizations that use really poor passwords, like a dictionary word, and use that same password for all encrypted documents. This is often done to make things easy for the staff or users, but effectively renders the attempt at encryption laughable.
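Generating a strong, unique password per document or per correspondent is easy to automate. A minimal sketch using Python's standard secrets module (length and character set are arbitrary choices here):

import secrets
import string

def make_password(length: int = 16) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# a different password for each correspondent, as recommended above
passwords = {name: make_password() for name in ("alice", "bob", "carol")}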
To protect the content of the file against unauthorized change, you will have to use a digital signature, like that available in PGP and S/MIME. The digital signature allows you to verify (a) when the content was signed, (b) who signed it, and (c) if it has been altered at all since then.
Mitigate Brute Force and Dictionary Attacks
The key to being able to guess the password to an encrypted file is the ability of the hacker to try as many passwords as s/he likes as fast as possible. If this is not an option, then “guessing” the password becomes, essentially, impossible — even if the password in use is poor.
How Can This be Accomplished?
If the encrypted file is stored in a server with access only available via a web site where you have to enter the password, then:
- No one has access to the raw encrypted file and thus cannot use any of the available password cracking tools against the file itself.
- The web site can lock out access after a few password failures. For example, after 5 incorrect passwords, the hacker would not be permitted to try again for a few minutes from the same location. This makes automated testing of large numbers of possible passwords impossible (a minimal sketch of this lockout idea follows below).
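A bare-bones Python sketch of that server-side lockout logic (illustrative only — a real implementation would also persist state and track attempts per account as well as per source address):

import time

FAILURE_LIMIT = 5
LOCKOUT_SECONDS = 300
failures = {}  # source address -> (failure count, time of last failure)

def allowed_to_try(source: str) -> bool:
    count, last = failures.get(source, (0, 0.0))
    if count >= FAILURE_LIMIT and time.time() - last < LOCKOUT_SECONDS:
        return False  # still locked out
    return True

def record_failure(source: str) -> None:
    count, _ = failures.get(source, (0, 0.0))
    failures[source] = (count + 1, time.time())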
As a case in point, LuxSci’s SecureLine Escrow service allows LuxSci users to email files to anyone on the Internet who has an email address. It digitally signs and then encrypts the files using strong encryption and stores them on a secure server. It will never email the encrypted files themselves, keeping them invulnerable to direct attacks. It uses a long random password and makes access only available via a secure (over SSL) web site which automatically locks out access after several failed password guesses. This kind of communication is uniformly more secure than emailing password-protected files.
Of course, communications security assumes that the sender or recipient is using a computer that is not compromised. But, that is the subject of a future article. | <urn:uuid:929c46c3-622e-48bc-aa13-dfc594b8412a> | CC-MAIN-2022-40 | https://luxsci.com/blog/how-secure-are-password-protected-files.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00396.warc.gz | en | 0.941852 | 2,439 | 3.09375 | 3 |
If not documents, then what? Two important kinds of outputs of a slice of a software development process are shown in Figure 1: (1) knowledge and understanding on the part of the system builders, and (2) documentation of that understanding.
Figure 1: Milestones defined as measurable increases in knowledge.
Documents are merely evidence that a person has performed certain intellectual activities. For example, a test plan is evidence that a test planner has enumerated the tests that need to be done, and explained their rationale. However, one does not know if test planning is actually complete (has sufficient coverage) unless someone credible and impartial assesses the plan. That is, the plan needs to be verified.
Progress should be measured through tangible outcomes whenever possible, or through independent assessment when there are no tangible outcomes. The outcomes or the assessment are the credible indicators of progress, not the documents. For example, how do you know whether a design is robust enough to proceed with development? The assertion that a design document has been completed is not a reliable indicator, because it is well-known in software development that designs evolve substantially throughout implementation.
How then can one tell whether one is at a point at which proceeding with development will be productive or lead to lots of rework and perhaps even scrapping a first attempt at building the system? Prototypes are useful for this purpose, and so the successful completion of prototypes that address critical design issues is a better indicator of readiness than the completion of a design document. In any case, progress should be seen in terms of the creation of actionable knowledge, not artifacts.
The Scaling Problem
As projects scale, the effects of a document-centric process become more prominent, because those who create the documents tend to be less available to answer questions. Teams create documents and pass them on to other teams, and the original teams are often re-deployed to other activities. They might even be located at a separate site. Programmers, testers, and others are expected to pick up the documents and work from those alone. It is as if someone sent you a book of calculus and said, “Here, build a program that implements this.” No wonder large projects tend to fail. Due to pressure to optimize the deployment of resources, large projects tend to consist of many disjointed activities inter-connected by the flow of documents. But, since documents are information and not knowledge and are therefore not actionable, these flows tend to be inadequate.
Agile methods have been extended to large projects. For example, see Scott Ambler’s article Agile and Large Teams. Ambler is Agile Practice Lead for IBM/Rational and tends to work on very large projects. The basic approach is to decompose the project into sub-projects, define interfaces between the associated sub-components, and define integration-level tests between these sub-components. This is very much a traditional approach, except that documents are not used to define all of this ahead of time. Instead, the focus is on the existence and completeness of the inter-component test suites, on keeping interfaces simple, and allowing interfaces (including database schema’s) to evolve while keeping the inter-component tests up to date. | <urn:uuid:268f2026-92d2-431f-9b5f-9531d66478d3> | CC-MAIN-2022-40 | https://cioupdate.com/solving-the-problem-of-large-project-failure/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00396.warc.gz | en | 0.94936 | 665 | 2.59375 | 3 |
Employees of federal, state and local governments; and businesses working with the government.
This is a moderate-to-advanced level course. Assumptions we make about people attending this course: general networking knowledge of TCP/IP networks, and a working understanding of network components and their roles and capabilities in an Ethernet/WLAN TCP/IP network. The course covers iOS, Android, Windows, Linux and MacOS; familiarity with most of these OS’ is assumed, as is some basic Linux command-line experience.
This course teaches you to understand and manage your electronic signature, both online and in your surrounding physical space.
- Secure Password Development
- Leveraging Virtualization to Enhance Privacy & Security
- DNS and DNS Security (DNSSEC, Encrypted DNS)
- Using Wireshark to Inspect Device Traffic
- Using iptables to manage ingress and egress network traffic
- Using VPNs to provide privacy and manage attribution
- Managing Digital Fingerprints and Privacy
- Secure Email and Secure Messaging
- Screening & Sanitizing equipment
- 802.11 Device Behavior in WLAN environments
- Understanding and managing your 802.11 presence in the RF environment
- Malware & Ransomware
- Bluetooth Device Behavior
- Understanding and managing your Bluetooth signature in the RF environment
Detailed Course Outline:
- Learn to create strong passwords based on the most current NIST recommendations.
- Understand password entropy and how it is used as a measure of password strength.
- Leverage multi-factor authentication on systems and web apps.
- Build your own personal VPNs using Wireguard and OpenVPN solutions.
- Use virtual machines to better facilitate personal digital security and manage attribution.
- Explore vulnerabilities in DNS and how you leak information about yourself.
- Use command-line tools to query DNS and validate results.
- Set up a PiHole ad blocker and configure it to use DNSSEC and DNS over HTTPS.
- Install dig on Windows
- Use command line tools (Linux, Windows, MacOS) to determine what services are running, which ports are in use and which services are using them.
- Use Wireshark to inspect your own network traffic (Linux, Windows, MacOS) to better understand and manage your personal digital signature.
- Configure iptables as a client firewall to control ingress and egress traffic from you Linux devices.
- Examine different browsers and their behavior and capabilities as it pertains to privacy and tracking.
- Securing different OS’ with encryption
- Mobile Device Security (iOS, Android)
- Secure email and secure messaging. Understanding email flow scenarios on the Internet. Explore mechanisms for securing email traffic.
- Screening & Sanitizing equipment. Securely erasing files and disks and exploring different systems for artifacts that leak information about you.
- Understanding and working with EXIF data in files and managing attribution concerns with EXIF data.
- An overview of WLAN terminology and behavior. Understanding the ever-evolving behavior of different network devices and developing an understanding of what your 802.11 device emanates and what that means for your security.
- An understanding of the attack vectors used in WLAN environments.
- Overview of different Bluetooth implementations (Bluetooth Classic, BLE, BT5, etc.) and determining what information your devices reveal about you.
- Managing your Bluetooth signature and understanding the exploits currently known. | <urn:uuid:d3ba6f1c-6100-4e54-a337-a4ccbd7559dd> | CC-MAIN-2022-40 | https://www.itdojo.com/courses-cyber-security-info-assurance/managing-personal-digital-security-and-electronic-attribution/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00396.warc.gz | en | 0.842336 | 737 | 2.578125 | 3 |
As data science trends towards utilizing more and more automation, a growing concern is how we can audit, evaluate, and understand the results from automated processes. Libraries like Featuretools, an open-source Python tool for automated feature engineering, simplify critical steps in the machine learning pipeline. Automated feature engineering can easily extract salient features from data without the significant time and knowledge required to do so manually. This automation also helps avoid human errors and discover features that a human might miss. However, this abstraction can make it harder to understand how a feature was generated and what it represents.
With manual feature engineering, you have to understand what the feature is in order to create it. Automated feature engineering, on the other hand, requires working backwards from the end result. In Featuretools, we rely on the auto-generated feature names to describe the feature. For a simple feature—i.e., one that only uses a single primitive—the name is usually easy to interpret. However, as primitives and features stack on each other, feature names become significantly more complicated and harder to intuitively understand.
To make it easier to understand how a feature was generated, Featuretools now has the ability to graph the lineage of a feature.
What is a Feature Lineage Graph?
Starting from the original data, lineage graphs show how each primitive was applied to generate intermediate features that form the basis of the target feature. This makes it easier to understand how a feature was automatically generated, and can help audit features by ensuring that they do not rely on undesirable base data or intermediate features.
For an example, let’s look at some features we can generate from this entity set:
import featuretools as ft

es = ft.demo.load_mock_customer(return_entityset=True)
es.plot()
First, we can make a simple transform feature using a transform primitive. Transform primitives take one or more variables from an entity as an input and then output a new variable for that entity. They are applied to a single entity, so they don’t rely on any relationships between the entities.
If we wanted to know whether the amount for a given transaction was over 10, then we could use the GreaterThanScalar primitive. This would generate a feature that “transforms” the transaction amount into a boolean value that indicates whether the amount was over 10 for that transaction.
trans_feat = ft.TransformFeature(es['transactions']['amount'], ft.primitives.GreaterThanScalar(10))
ft.graph_feature(trans_feat)
Next, let’s use the feature we just created to perform an aggregation with an aggregation primitive. Unlike transform primitives, aggregation primitives take related instances as an input and output a single value. They are applied across a parent-child relationship in an entity set.
Let’s say for a specific customer, we want to know what percentage of their transactions had a transaction amount over 10. We can use the PercentTrue aggregation primitive on the GreaterThanScalar feature we just generated to create this feature. The resulting feature “aggregates” all of the amount > 10 features for each customer over all transactions, and calculates what percentage of those were true for each customer.
agg_feat = ft.AggregationFeature(trans_feat, es['customers'], ft.primitives.PercentTrue)
ft.graph_feature(agg_feat)
The dotted lines in the graph indicate which variables are used to group the data. The aggregation is then applied to each grouping to create the final feature. Importantly, the graph not only shows the aggregation, it also shows the previous transformation we applied. In this way, feature lineage graphs can track feature development through each intermediary feature. Also note that, in order to go from the ‘transactions’ entity to the ‘customers’ entity, we first had to relate the two through the ‘sessions’ entity that lies between them.
In this example, we manually generated the feature we wanted to explore in order to demonstrate how feature lineage graphs work. With automated feature engineering, we no longer have the advantage of the understanding derived from manually creating a feature and instead have to work backwards from the final feature.
For example, compare this generated feature name to its associated lineage graph:

<Feature: customers.NUM_UNIQUE(sessions.MODE(transactions.DAY(transaction_time)))>

Looking at the name alone, it’s much harder to clearly see what this feature actually represents and what steps Featuretools took to generate it. Feature lineage graphs show us how the feature was generated step by step. Graphing the lineage of the feature is another tool to help users gain deeper insight into what the feature means and understand how Featuretools generated it from the base data.
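A stacked feature like that one typically comes out of an automated deep feature synthesis run rather than being built by hand. As a rough sketch (assuming the same entity set as above and that the listed primitive names are available), it could be generated and then graphed like this:

features = ft.dfs(entityset=es,
                  target_entity="customers",
                  agg_primitives=["num_unique", "mode"],
                  trans_primitives=["day"],
                  features_only=True)

# find the stacked feature we are interested in and draw its lineage
candidates = [f for f in features if "NUM_UNIQUE(sessions.MODE" in f.get_name()]
ft.graph_feature(candidates[0])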
How We Can Use Lineage Graphs to Audit Generated Features
Feature lineage graphs are a powerful tool to help understand how a feature was generated, which in turn can be used to assist in auditing the features. Because we can visually see step by step what features were stacked in order to generate the target feature, we have full knowledge of all features and primitives that were used to generate the feature without having to do any complex analysis.
Knowing which data a feature is based on is a critical step in auditing a feature for three reasons:
First, it allows us to easily throw out features based on data we later realize we don’t want to use. For example, if we realize some of our data is contaminated in some way, we can use feature lineage graphs to determine which features are based on that data and therefore should be discarded.
Second, it means the original data that the feature is based on is obvious. This is particularly important for explaining which original data was fed into the model and how that data may have impacted the result. Being able to clearly show how the original data was used is a critical part of analyzing a model and its results, especially in well-regulated industries.
Finally, feature lineage graphs can also help audit generated features by demonstrating which features were stacked in order to create it. For example, if we know that a feature has a lot of null values or is otherwise contaminated with bad data, we can easily identify and disregard any other feature that stacks on top of it.
Feature lineage graphs are implemented by converting feature objects in Featuretools to a collection of nodes and edges that we then render visually. This visualization is possible because every feature in Featuretools contains the entity it is a part of, the primitive it uses, and references to its input features.
You can see the approach we took here. The algorithm utilizes the feature structure by starting at the target feature and then recursively searching over input features in a depth-first manner. As it traverses, every feature is given a node ID and collected by entity. We also create a few additional edges and nodes based on the feature type:
- Aggregation features and direct features are applied across entities and contain data on the relationship path between themselves and their input features. As we traverse that path, we add the groupby/join variables as nodes in our graph.
- Transform features do not relate between entities, so if there is a groupby variable, it is always part of the same entity, and no relationship path needs to be explored. Because of this, groupby transform features store their own groupby variable, which is used to add its groupby node to the graph.
Once we’ve traversed through all features, we have enough organized information to generate the final feature lineage graph. Tables are generated for each entity containing their features, and primitive and groupby nodes are generated from the nodes created during the feature exploration. We then use Graphviz to render the graph as an image.
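In simplified form, the traversal described above looks roughly like the following sketch (illustrative only, not the actual Featuretools implementation; it assumes each feature object exposes its entity and its input features):

def collect_lineage(feature, nodes=None, edges=None):
    # Depth-first walk from the target feature down to the base variables
    if nodes is None:
        nodes, edges = {}, []
    node_id = feature.get_name()
    nodes.setdefault(feature.entity.id, set()).add(node_id)       # group nodes by entity
    for parent in getattr(feature, "base_features", []):          # input features, if any
        edges.append((parent.get_name(), node_id))                # edge through the primitive
        collect_lineage(parent, nodes, edges)                     # recurse into inputs
    return nodes, edges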
While automated feature engineering makes it significantly easier to generate new features, it is still critical to be able to understand those features before using them. This helps ensure that features being used are meaningful and interpretable. Feature lineage graphs make both of these goals easier to achieve by helping audit features as well as demonstrating the steps taken to generate features in the first place.
In general, system visualizations are a powerful tool that improves explainability and understanding while also simplifying critical steps like auditing. We’ve implemented feature lineage graphs for just this reason, and believe that the strategies we’ve shown here can be more broadly applied to a variety of tools that rely on complicated multi-step processes. | <urn:uuid:08860d69-29c9-44db-8db5-b6f456ccca25> | CC-MAIN-2022-40 | https://innovation.alteryx.com/visualizing-automated-feature-engineering/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00396.warc.gz | en | 0.941692 | 1,773 | 3.03125 | 3 |
For developing countries, especially the poorest among them, trade—both imports and exports—is a powerful, cost-effective tool to mitigate the potentially devastating effects of COVID-19. G20 countries should, therefore, quickly implement trade policies that can protect lives across the world by improving access to affordable medical supplies. Policies that put this access at risk should be restrained. Global cooperation is critical to meeting this challenge.
Trade can also play a key role in the recovery. As COVID-19 wreaks havoc across industries worldwide, G20 countries need to keep supply chains functioning. They should also begin preparing the groundwork for a revitalized global trade framework to help rebuild the world economy.
Adopt trade measures to support health systems
Tariff rates on pharmaceuticals and medical equipment are relatively low, but disinfectants and other personal protective products needed to fight the pandemic still face high tariffs and non-tariff barriers in many countries. Soap, the first line of protection against COVID-19, is subject to a global average tariff of 17 percent, with 72 countries applying import duties in excess of 15 percent. Tariffs on health and hygiene products are a regressive form of taxation that targets the sick.
Eliminating such protectionist measures could also lower the cost of inputs like active ingredients and other chemical products, encouraging domestic investment and production. A starting point is the indicative list of essential COVID-19 medical supplies published by the World Customs Organization. A number of countries have already announced tariff reductions in certain categories of critical medical supplies, albeit temporarily. US tariffs on imports from China risk shortages of ventilators and other medical products.
Facilitate trade in health-related products
Cross-border movement of health-related goods and imported inputs to manufacture these products can be disrupted by lengthy and inefficient customs and border procedures, as well as by logistical obstacles, preventing timely access to critical products. Vaccines, for example, require careful and rapid handling from port or factory to their destination.
China and the European Union have established “green lanes” to expedite inspection and release of goods to avoid such delays, which other countries could also replicate. Building on a five-country initiative led by Singapore and New Zealand, G20 should also keep air and sea freight lines open. To support trade, governments should also keep trade finance flowing and maintain liquidity, as called upon by the private sector.
Expand access to technical standards
Medical gear is typically subject to stringent standards on design, manufacturing, and market placement to protect consumer safety and public health. These rules, however, may unintentionally limit production and access. To overcome this problem, the European Union made freely available its basic standards for certain personal protective equipment and medical devices, lifting the requirement that firms purchase and use European standards according to intellectual property rules.
This step will allow factories to convert their production lines quickly. The European Commission has recommended speedier conformity assessment procedures and market surveillance of these products. Other countries, with limited conformity assessment capabilities, should consider automatic registration of supplies that have met standards in advanced economies.
Allow health professionals to cross borders
In February 2020, two nurses in Wuhan pleaded for health workers from around the world to come to China. China later sent 300 intensive care doctors to Italy. In the United States, New Jersey has authorized the temporary practice of foreign doctors licensed and in good standing in another country.
More such movement of physicians, nurses, and health professionals is needed, especially in poorer countries. Flexible regulatory measures, special visas, and work permits can help. A common international framework to support the temporary movement of health professionals across countries, as called for by India, would facilitate the response to the crisis.
Share knowledge via digital interactions
In the United States, authorities have moved to facilitate telemedicine to screen high-risk patients, communicate and track COVID-19, and manage health care systems. The global health community is turning to digital technologies, data, and cross-border e-health interactions to share evidence and experience. Common rules to support cross-border digital services trade, in particular to provide a trusted environment for digital exchanges in the health sector, could support rapid knowledge sharing and case management.
Do not hinder access to new technologies
Companies across the world are racing to develop diagnostic methods, vaccines, and antivirals for the prevention and treatment of COVID-19, while governments are working to expedite approvals. New technologies—such as 3D-printing respirator parts developed by Italian engineers—can address shortages.
But protection under intellectual property regimes must be balanced against the global significance of the pandemic. New issues will need to be sorted out. Collective action could bring greater certainty to safeguard access by all.
Avoid trade measures that put lives at risk
As of April 4, 2020, 69 governments, including India and the European Union, had banned or limited exports of face masks, personal protective equipment, medicines, and other medical goods. These practices hurt not only importers but also exporters as they raise prices, discourage investment, and provoke retaliation. Some countries have also restricted exports of certain foodstuffs. In the past, similar actions have aggravated food insecurity and increased prices.
The world’s poorest countries are extremely vulnerable to such protectionist policies. Ten exporting countries account for almost three-quarters of world exports of medical goods and nearly two-thirds of world exports of protective gear.
The top three countries exporting medical products critical to fight the pandemic supply 65 to 80 percent of total world imports of those products. Any restrictions on exports risk leaving most of the world without access to vital supplies, with catastrophic consequences.
Keep supply chains moving
Many companies are using global supply chains to increase production of some medical products, but governments could provide subsidies or encourage compacts among companies along the supply route to stimulate output further.
International organizations like the World Bank can also facilitate access to supplies for poor countries. Governments should refrain from adopting “Buy National” policies, which are counterproductive and prevent companies from accessing vast foreign supplies.
A collective G20 response, with regular follow-up mechanisms, is critical to avoid politically appealing but self-defeating trade policies. If global cooperation is impossible, willing countries should step up. The World Trade Organization (WTO), hobbled as it has been lately, provides a forum for countries to agree to refrain from export bans. It can also facilitate an agreement to eliminate tariffs and nontariff barriers on health-related products, expanding on the scope and membership of the WTO initiative on trade in pharmaceuticals.
The WTO could also encourage progress on the other steps mentioned above, including a common framework on cross-border movement of health professionals and a collective understanding that the WTO Agreement on Trade-Related Aspects of Intellectual Property Rights does not limit governments’ actions to safeguard affordable access for new vaccines and drugs.
The post-COVID-19 world economy will require more, not less, global trade cooperation. Global trade rules will be needed to foster investment and trade.
Reforming the WTO has become more pressing than ever to help update rules in line with the dramatic changes brought about by the COVID-19 pandemic. The G20 countries have allowed international collaboration on trade to unravel. They now have a chance to seize on the crisis to sow the seeds for renewed global trade cooperation.
- Anabel González is Non-Resident Senior Fellow, Peterson Institute for International Economics. This article is part of a series of proposals for the G20’s agenda on the COVID-19 pandemic and first appeared on the Institute’s website. | <urn:uuid:f17d9e48-f153-418a-a4eb-383efe234002> | CC-MAIN-2022-40 | https://news.networktigers.com/opinion/global-supply-chains-are-key-to-the-covid-19-battle/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00396.warc.gz | en | 0.938693 | 1,556 | 2.953125 | 3 |
Tired of spending time reading through a document, webpage or database line by line? There is a better way: the Find tool, built into almost every Windows program.
How do you access this tool? Press the Ctrl key and then the F key (Ctrl + F). A small text box will open at either the top or the bottom of the screen. Type any part of the word you are looking for and hit Enter. The tool will then find and highlight every instance of that text in the window you are searching.
Where can you use this tool? Almost every program has some version of this functionality, for example websites, PDFs, Word documents, Excel and email.
Next to the text box where you enter your search, you will generally find two arrows, one pointing up and one pointing down. Pressing them jumps the document to the next or the previous instance of your search term. Give it a shot now on this document! Press Ctrl + F, then search for the word “key.” For the curious, a small code sketch follows that shows the same idea programmatically.
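The short Python sketch below illustrates roughly what a Find feature does behind the scenes: scan text for a term and report every line where it appears. It is only an illustration of the idea, not how any particular program implements its Find box; the file name notes.txt and the search term "key" are placeholders for this example.

from pathlib import Path

def find_matches(text, term):
    # Case-insensitive search, like most Find boxes.
    term = term.lower()
    matches = []
    for number, line in enumerate(text.splitlines(), start=1):
        if term in line.lower():
            matches.append((number, line))
    return matches

if __name__ == "__main__":
    # "notes.txt" is a placeholder file name for this example.
    document = Path("notes.txt").read_text(encoding="utf-8")
    for number, line in find_matches(document, "key"):
        print(f"line {number}: {line.strip()}")

Pressing Ctrl + F in a browser or editor gives you the same behavior interactively, without writing any code.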
Tech Tip Provided By:
Greg Bastien, Level 2 Technician
Center for Computer Resources | <urn:uuid:862460b0-b70c-4feb-b8c6-e762a8e9954e> | CC-MAIN-2022-40 | https://www.centaris.com/2017/09/tech-tip-find-tool-shortcut/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00396.warc.gz | en | 0.874991 | 231 | 3.171875 | 3 |