A "mind reading" algorithm reveals dogs to be action-oriented as opposed to their object-obsessed human best friends.
A new study from Emory University shows that dogs would be pretty bad at whodunits, with their brains focusing on the action they see rather than who or what is performing it. In this, their minds differ considerably from human brains, which focus on objects and actions.
"We humans are very object-oriented," Gregory Berns, professor of psychology at Emory and corresponding author of the study, said in the university's news release on the paper published by the Journal of Visualized Experiments.
Berns said that "there are ten times as many nouns as there are verbs in the English language because we have a particular obsession with naming objects," but dogs had other things to worry about.
"Animals have to be very concerned with things happening in their environment to avoid being eaten or to monitor animals they might want to hunt. Action and movement are paramount," Berns said, noting that it made "perfect sense" that dogs' brains were going to be highly attuned to actions first.
Significant differences between canine and human visual systems also reflect this. While dogs can only see in shades of blue and yellow, they have a slightly higher density of vision receptors designed to detect motion.
"Historically, there hasn't been much overlap in computer science and ecology," said Erin Phillips, lead author of the paper, who worked in Berns' Canine Cognitive Neuroscience Lab. "But machine learning is a growing field that is starting to find broader applications, including in ecology."
Researchers say that only two out of all the dogs trained for the study had the attention span and temperament for the experiments. Both mixed breeds, Daisy and Bhubo, were dubbed "superstar" dogs for their exceptional concentration.
"They didn't even need treats," Phillips said. The dogs had to lie still in the fMRI scanner and watch a half-hour-long video without a break, for three sessions each. It is unclear whether the two humans who underwent the same experiment needed the treats.
In any case, researchers recorded the fMRI neural data for all the subjects as they watched videos filmed from a dog's perspective. These included a lot of sniffing and playing, cars and bikes driving past, a human offering a ball, or a cat walking to a house – scenes interesting enough to keep the dog's attention for an extended period.
Video data was then segmented by time stamps into classifiers that included objects such as dogs, cars, humans, or cats and actions like sniffing, playing, or eating. Researchers used a machine-learning algorithm called Ivis to analyze the patterns in the neural data.
When mapping out data for human participants, the model was 99% accurate for both object and action classifiers. In the case of canine subjects, it did not work for object classifiers but was 75% to 88% accurate at decoding action classifications.
Researchers say that the study offers a "first look" at how the canine mind reconstructs what it sees – even if to a limited degree. "The fact that we can do that is remarkable," Berns said.
- class featuretools.primitives.RollingMean(window_length=3, gap=0, min_periods=0)#
Calculates the mean of entries over a given window.
Given a list of numbers and a corresponding list of datetimes, return a rolling mean of the numeric values, starting at the row gap rows away from the current row and looking backward over the specified time window (by window_length and gap).
Input datetimes should be monotonic.
window_length (int, string, optional) – Specifies the amount of data included in each window. If an integer is provided, it corresponds to a number of rows. For data with a uniform sampling frequency of, for example, one day, the window_length also corresponds to a period of time: a window_length of 7 would span 7 days. If a string is provided, it must be one of pandas' offset alias strings ('1D', '1H', etc.), and it indicates the length of time each window should span. The list of available offset aliases can be found at https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases. Defaults to 3.
gap (int, string, optional) – Specifies a gap backwards from each instance before the window of usable data begins. If an integer is provided, it will correspond to a number of rows. If a string is provided, it must be one of pandas’ offset alias strings (‘1D’, ‘1H’, etc), and it will indicate a length of time between a target instance and the beginning of its window. Defaults to 0, which will include the target instance in the window.
min_periods (int, optional) – Minimum number of observations required for performing calculations over the window. Can only be as large as window_length when window_length is an integer. When window_length is an offset alias string, this limitation does not exist, but care should be taken to not choose a min_periods that will always be larger than the number of observations in a window. Defaults to 1.
Only offset aliases with fixed frequencies can be used when defining gap and window_length. This means that aliases such as M or W cannot be used, as they can indicate different numbers of days. (‘M’, because different months have different numbers of days; ‘W’ because week will indicate a certain day of the week, like W-Wed, so that will indicate a different number of days depending on the anchoring date.)
When using an offset alias to define gap, an offset alias must also be used to define window_length. The reverse limitation does not exist: an offset-alias window_length may be combined with an integer gap. In fact, if the data has a uniform sampling frequency, it is preferable to use a numeric gap, as it is more efficient.
>>> import pandas as pd
>>> rolling_mean = RollingMean(window_length=3)
>>> times = pd.date_range(start='2019-01-01', freq='1min', periods=5)
>>> rolling_mean(times, [4, 3, 2, 1, 0]).tolist()
[4.0, 3.5, 3.0, 2.0, 1.0]
We can also control the gap before the rolling calculation.
>>> import pandas as pd
>>> rolling_mean = RollingMean(window_length=3, gap=1)
>>> times = pd.date_range(start='2019-01-01', freq='1min', periods=5)
>>> rolling_mean(times, [4, 3, 2, 1, 0]).tolist()
[nan, 4.0, 3.5, 3.0, 2.0]
We can also control the minimum number of periods required for the rolling calculation.
>>> import pandas as pd
>>> rolling_mean = RollingMean(window_length=3, min_periods=3)
>>> times = pd.date_range(start='2019-01-01', freq='1min', periods=5)
>>> rolling_mean(times, [4, 3, 2, 1, 0]).tolist()
[nan, nan, 3.0, 2.0, 1.0]
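For intuition, the integer-window behavior shown in the first two examples above can be reproduced with plain pandas rolling operations. This equivalence is our reading of the parameter descriptions, not an official guarantee, and RollingMean should still be used inside Featuretools pipelines:

```python
import pandas as pd

times = pd.date_range(start='2019-01-01', freq='1min', periods=5)
values = pd.Series([4, 3, 2, 1, 0], index=times)

# window_length=3, gap=0, min_periods=1 (the first example above):
no_gap = values.rolling(window=3, min_periods=1).mean()

# gap=1 excludes the target row from the window, which is equivalent
# to shifting by one row before rolling (the second example above):
gap_one = values.shift(1).rolling(window=3, min_periods=1).mean()

print(no_gap.tolist())   # [4.0, 3.5, 3.0, 2.0, 1.0]
print(gap_one.tolist())  # [nan, 4.0, 3.5, 3.0, 2.0]
```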
- __init__(window_length=3, gap=0, min_periods=0)#
Methods:
- __init__([window_length, gap, min_periods])
- Flattens nested column schema inputs into a single list.

Attributes:
- Additional compatible libraries
- Default value this feature returns if no data found
- woodwork.ColumnSchema types of inputs
- Name of the primitive
- Number of columns in feature matrix associated with this feature
- ColumnSchema type of return
APIs (Application Programming Interfaces) are back-end services that expose an interface for connecting to a back-end system and reading, writing, or transacting information with it. They are extremely useful, and adopting them is a sound architecture decision that delivers flexibility and extensibility of a service.
APIs deliver functionality once the client service knows how to talk to them. APIs generally sit behind an HTTP port and, unlike a website, cannot be seen, but they may deliver an equal level of value and functionality to the requesting client.
Many websites use an API, but the user does not invoke the API directly; rather, the website or app acts as a proxy for it. APIs are not built to be human readable, like a website, but machine readable.
You may have APIs hosted on systems behind HTTP ports that remain undiscovered. They may be well known, but they may also be old or forgotten development deployments. We can't secure what we don't know about.
Adequate assessment involves coverage of entire corporate ranges (CIDR ranges), large lists of IPs, and domain names (FQDNs), using the multi-layer probing methodology detailed below:
API discovery is a combination of both host layer and web layer investigation. Some are easier to discover than others.
Discovering API artifacts:
Discovery of APIs may require multiple layers of probing. If we don't know how to invoke a given API, identification across many levels is required to establish, with reasonable confidence, whether an API is present or not.
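The very first step of coverage, expanding corporate CIDR ranges into concrete probe targets, can be sketched with Python's standard ipaddress module. The ranges below are documentation-reserved examples standing in for real corporate address space:

```python
import ipaddress

# Documentation-reserved example ranges standing in for corporate CIDRs
scope = ["192.0.2.0/29", "198.51.100.0/30"]

targets = []
for cidr in scope:
    # .hosts() yields usable addresses, skipping network/broadcast
    targets.extend(str(ip) for ip in ipaddress.ip_network(cidr).hosts())

print(len(targets), targets[:3])
```

Each resulting address would then be probed at the host and web layers for API artifacts.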
Assessment of APIs can be difficult as the assessment methodology requires knowledge of how to communicate and invoke the API.
Running a simple web scanner against an API simply does not work. A scanner would just hit an initial URL and not know how to invoke or traverse the various API calls.
Good API assessment should have the ability to read/ingest descriptor files in order to understand how to communicate and invoke the API. Once this is done a scanner can assess the API method calls.
As the development team alter and change the API, the assessment technology can read the newly updated descriptor file and assess the API including new changes.
Keeping pace with change.
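As a rough sketch of what "ingesting a descriptor file" can look like, the snippet below walks a minimal OpenAPI-style document and lists the method calls a scanner would need to exercise. The spec content here is invented for illustration; a real assessment would load the service's own swagger.json or openapi.yaml:

```python
# A minimal, invented OpenAPI-style descriptor. Real assessments would
# load this from the service's published swagger.json / openapi.yaml.
spec = {
    "paths": {
        "/users": {"get": {}, "post": {}},
        "/users/{id}": {"get": {}, "delete": {}},
    }
}

# Enumerate every (method, path) pair the scanner should invoke
endpoints = sorted(
    (method.upper(), path)
    for path, operations in spec["paths"].items()
    for method in operations
)

for method, path in endpoints:
    print(method, path)
```

Re-running this enumeration against an updated descriptor immediately reveals the new or changed calls to assess.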
Assessment of vulnerabilities specific to APIs is also important.
Items discussed in the OWASP API Top 10 are an important aspect to true API specific testing.
DevOps: In a DevOps environment the descriptor file can be used to determine change/deltas since the last deployment of the API and only assess the changes saving valuable time in a fast DevOps environment – Iterative testing when frequent change occurs.
Marketing Executive of Edgescan
NT Lan Manager (NTLM)
Windows New Technology LAN Manager (NTLM) is an outmoded challenge-response authentication protocol from Microsoft. Still in use though succeeded by Kerberos, NTLM is a form of Single Sign-On (SSO) enabling users to authenticate to applications without submitting the underlying password.
NTLM gives users SSO access on an Active Directory (AD) domain through the exchange of three messages comprising the cryptographic handshake: the client's negotiate message, the server's challenge message, and the client's authenticate message.
NT LAN Manager was the default protocol for Windows until Microsoft deprecated it, citing vulnerabilities related to the password hash's password equivalency. Passwords stored on the server, or domain controller, are not salted and therefore an adversary with a password hash does not require the underlying password to authenticate. NTLM's cryptography also predates newer algorithms such as AES or SHA-256 and is susceptible to brute force attacks by today's hardware.
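The password-equivalency problem can be sketched in a few lines. NTLM's actual NT hash is an MD4 digest of the UTF-16LE password; the sketch below substitutes SHA-256 purely to illustrate the weakness: with no per-user salt, identical passwords always hash identically, so one precomputed table cracks every account, and the hash alone is enough to authenticate:

```python
import hashlib

def unsalted_hash(password: str) -> str:
    # NTLM's real NT hash is MD4(UTF-16LE(password)); SHA-256 stands in
    # here. The key point is the absence of a per-user salt.
    return hashlib.sha256(password.encode("utf-16-le")).hexdigest()

# A precomputed lookup table cracks every account using these passwords
rainbow = {unsalted_hash(p): p for p in ["123456", "password", "letmein"]}

stolen = unsalted_hash("password")  # hash lifted from a domain controller
print(rainbow.get(stolen))          # prints: password
```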
"One of our data centers suffered an Active Directory outage that stemmed from an issue with NTLM authentication. To solve the problem, the IT folks eventually ended up building a new domain controller."
Malicious software may seem like a relatively new concept. The epidemics of the past few years have introduced the majority of computer users to viruses, worms and Trojans – usually because their computers were attacked. The media has also played a role, reporting more and more frequently on the latest cyber threats and virus writer arrests.
However, malicious software is not really new. Although the first computers were not attacked by viruses, this does not mean they were not potentially vulnerable. It was simply that when information technology was in its infancy, not enough people understood computer systems to exploit them.
But once computers became slightly more common, the problems started. Viruses started appearing on dedicated networks such as the ARPANET in the 1970s. The boom in personal computers, initiated by Apple in the early 1980s, led to a corresponding boom in viruses. As more and more people gained hands-on access to computers, they were able to learn how the machines worked. And some individuals inevitably used their knowledge with malicious intent.
As technology has evolved, so have viruses. In the space of a couple of decades, we have seen computers change almost beyond recognition. The extremely limited machines which booted from a floppy disk are now powerful systems that can send huge volumes of data almost instantaneously, route email to hundreds or thousands of addresses, and entertain individuals with movies, music and interactive Web sites. And virus writers have kept pace with these changes.
While the viruses of the 1980s targeted a variety of operating systems and networks, most viruses today are written to exploit vulnerabilities in the most commonly used software: Microsoft Windows. The increasing number of vulnerable users is now being actively exploited by virus writers. The first malicious programs may have shocked users, by causing computers to behave in unexpected ways. However, the viruses which started appearing in the 1990s present much more of a threat: they are often used to steal confidential information such as bank account details and passwords.
So malicious software has turned into big business. An understanding of contemporary threats is vital for safe computing. This section gives an overview of the evolution of malware: it offers a glimpse of some historical curiosities, and provides a framework to help understand the origins of today’s cyber-threats.
Artificial intelligence is a branch of computer science, which uses visual perception, speech recognition, decision-making, and translation between languages, to make a machine intelligent without assistance of humans.
Artificial intelligence techniques can be used for security management purposes, protecting a system from attacks or threats by warning the user in real time. For example, an Intrusion Detection System is a type of security software that can automatically detect attempts to steal confidential information through malicious activity or security policy violations, alerting the user about the intrusion attempt. Similarly, cryptography is a method of encoding and transmitting data in a form that only the intended recipient can read.
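A drastically simplified, toy illustration of what anomaly-based intrusion detection does: learn a baseline of normal behavior and flag observations that deviate far from it. Real systems use far richer features and models; the thresholded z-score below only conveys the idea:

```python
from statistics import mean, stdev

# Toy baseline: requests per minute observed during normal operation
baseline = [12, 15, 11, 14, 13, 12, 16, 14]
mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(observed: float, k: float = 3.0) -> bool:
    # Flag traffic more than k standard deviations from the baseline mean
    return abs(observed - mu) > k * sigma

print(is_anomalous(13))   # False: ordinary traffic
print(is_anomalous(400))  # True: possible attack, raise an alert
```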
Rampant advancements in technology and increasing use of 2G, 3G, and 4G Long-Term Evolution (LTE) wireless networks, along with the introduction of 5G mobile networks, are major factors driving growth of the global market for artificial intelligence based security. These cellular networks enable connectivity and communications for exchanging real-time information, data, and online transactions, among other activities, which are prominent targets of cyber-attacks. For instance, according to the Internet Crime Complaint Center (IC3), in 2016, cybercrimes resulted in losses of over US$1.33 billion globally. This, in turn, is expected to boost the demand for security solutions, driving growth of the artificial intelligence based security market over the forecast period.
Additionally, increasing smart city initiatives globally are also expected to fuel growth of the AI based security market. For instance, in 2015, the Government of India launched the ‘100 Smart Cities Mission’. Important elements of smart city initiatives include digitalization and deployment of Wi-Fi hotspots at various locations. Public Wi-Fi networks are unsecured networks that use shared passwords, and are hence exposed to threats such as malware, phishing, password breaches, and denial-of-service attacks. Increasing deployment of such Wi-Fi facilities is thus expected to increase demand for artificial intelligence that can detect abnormal behavior at an early stage. This, in turn, is expected to fuel growth of the market for artificial intelligence in security.
Moreover, according to F-Secure Labs, around 5,000,000 data samples are received daily from customers, reporting around 10,000 malware samples and around 60,000 malicious URLs for analysis and protection. Analyzing such volumes of data is beyond human capacity, which is why artificial intelligence is implemented.[…]
read more – copyright by techziffy.com
Over 500,000 Teslas all over the world are feeding data back to Elon Musk’s headquarters, to train their autonomous car algorithms. This data gives Tesla a huge advantage in the race to put more self-driving cars on the road.
When you think about Tesla, you might assume they’re a traditional car manufacturing company. There’s no question that Tesla is a leader in electric vehicles.
But their key to success is that they’re actually a technology company. Their company is built on artificial intelligence technology, and it’s one of the reasons for their success.
These days, one of the key goals for Tesla is making their cars fully autonomous – and they’re leveraging big data and AI to make that happen.
How AI Can “Teach” Cars to Drive on Their Own
In order to drive on their own, autonomous cars constantly interpret images from their sensors and machine vision cameras, then use that information to make decisions about what to do next.
They use AI to understand and anticipate the next movements of cars, pedestrians, and cyclists. This data helps them plan their moves in a split second, and decide what to do from moment to moment. Should the car stay in the current lane, or change lanes? Should it pass the car in front of them, or stay where it is? When should the car brake or accelerate?
In order to make cars fully autonomous, Tesla has to collect the right data to train the algorithms and feed their AIs. More training data will inevitably lead to better performance – and this is where Tesla excels.
Tesla’s competitive advantage is that they crowdsource all their data from the hundreds of thousands of Tesla vehicles currently on the roads. Internal and external sensors monitor what Teslas are doing in all kinds of situations, and even collect data on driver behaviour: how drivers react in different situations, as well as details such as how often a driver touches the steering wheel or the dashboard.
Tesla’s approach is called “imitation learning.” Their algorithms learn from the decisions, reactions, and movements of millions of actual drivers around the world. All those miles translate into super smart autonomous cars.
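A drastically simplified sketch of imitation learning: a "policy" that copies the action of the most similar recorded driver state. Tesla's real system trains deep neural networks on camera frames and much richer state; the nearest-neighbor toy below, with invented states and actions, only conveys the core idea of learning a state-to-action mapping from demonstrations:

```python
# Each demonstration: (speed_kmh, gap_to_lead_car_m) -> recorded action
demonstrations = [
    ((100.0, 60.0), "keep_lane"),
    ((100.0, 10.0), "brake"),
    ((40.0, 80.0), "accelerate"),
]

def policy(state):
    # Imitate the driver whose recorded state is closest to ours
    def distance(demo):
        (speed, gap), _action = demo
        return (speed - state[0]) ** 2 + (gap - state[1]) ** 2
    return min(demonstrations, key=distance)[1]

print(policy((98.0, 12.0)))  # brake: the closest demo is the braking one
```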
Their tracking system is incredibly sophisticated. For example, when a Tesla vehicle makes an incorrect prediction about the behaviour of a car or cyclist, Tesla saves a data snapshot of that moment, adds it to the data set, then reproduces an abstract representation of the scene with colour-coded shapes that the neural network can learn from.
Other companies that are working on autonomous vehicles use synthetic data (for example, video game driving behaviour from games like Grand Theft Auto) – and that data is far inferior to the real-world data Tesla is using to train their AIs.
AI at the Heart of Tesla
Data from their existing customer base has helped Tesla since its inception, and their work on autonomous cars is part of their continuing mission to put AI at the center of all their efforts.
As Tesla expands into their latest projects (including their plans to revolutionize the electric grid with their home solar power panels), AI and big data will remain steadfast partners to Elon Musk and his team at Tesla.
Ransomware in 2021
The Sophos company has published a study of ransomware attacks among hundreds of medium-sized organizations in the retail sector. So that you don’t have to read all 21 pages, we have summarized the important things that everyone should know.
What is a ransomware attack?
Popular among hackers and involved in 22% of all cyberattacks, a ransomware attack begins with the installation of malicious software. This malware is designed to lock our data and hold it “captive” until the hacker’s demands are fulfilled. The malware can encrypt the information or lock our device, thus preventing us access.
After the requirements are met, usually in the form of cryptocurrency payment, the victim only has the attacker’s promise to rely on.
Types of ransomware
Encryption – This type of malware locates files that seem important to the user (texts, documents, images, PDFs, and more) and encrypts them, preventing access to the information. When the victim is an individual, the ransom usually amounts to several hundred dollars, and the demand requires transferring the payment within 72 hours; otherwise, the data is permanently deleted.

Lock – The user is locked out of the device, and the ransom message appears on the screen.

Scareware – Perhaps the most cynical of them all, this attack mimics software that scans for security issues, such as an antivirus, and alerts us to critical findings. The fake error messages mimic legitimate antivirus software and create a sense of a trustworthy source by providing the victim's IP address and geographic location, or by using the names of reputable and trusted companies. Access is then denied until the victim pays to let the malware "repair" these non-existent issues.

DoxWare – Ransomware that threatens to leak victims' data to sites on the Dark Web, where the attacker might sell the information or leak it for free.
Who is SOPHOS?
Sophos is a British-based security software and hardware company that develops products for endpoint communication, encryption, network security, email security, mobile security, and unified threat management. Sophos is one of the leading suppliers of managed EDR systems, with firewalls that prevent ransomware from spreading into the corporate network using advanced learning techniques and analysis-based responses. Intercept X uses CryptoGuard technology, which stops unauthorized encryption of files.
Ransomware state- 2021 research
5,600 IT people in medium to large organizations (100-5,000 employees) from 31 countries in the retail industry participated in Sophos’ annual study. The survey was conducted in January and February of 2022, and participants were asked to answer based on their annual experience in 2021.
Complexity, frequency, and impact of ransomware attacks
66% of organizations in the study were affected by ransomware in 2021, an increase from 37% in 2020.
The sharp increase proves that hackers have adapted more sophisticated capabilities to carry out significant ransomware attacks.
It also reflects the growing success of the Ransomware-as-a-Service model which significantly extends the reach of ransomware by reducing the level of skill required to deploy an attack. This model essentially allows users who have purchased a subscription to use ransomware tools for their own use, and the investors in the development of the software receive a percentage of each ransom paid. Like the SaaS users, the RaaS users do not need to have any knowledge or experience to take advantage of the capabilities of this tool.
In 2021, attackers succeeded in encrypting data in 65% of attacks, up from the 54% encryption rate reported in 2020.
57% experienced an increase in the volume of cyber attacks overall, 59% saw the complexity of attacks increase, and 53% said the impact of attacks increased.
72% saw an increase in at least one of these criteria.
Backups are the number 1 method used to restore data – 73% of the organizations in the survey used this method to restore encrypted data.
Cost of Ransomware attacks
Along with using data backup, 46% reported paying a ransom for data recovery – reflecting the fact that many organizations are using multiple recovery approaches to maximize the speed and efficiency with which they can get back up and running.
Paying the ransom almost always ends with recovery of the data, but not in its entirety.
On average, organizations that paid received 61% of their data back, down from 65% in 2020.
Similarly, only 4% of organizations that paid the ransom received all of their data in 2021, down from 8% in 2020.
965 of the organizations that paid a ransom revealed the exact amount to Sophos, and this paints a worrying picture: average ransom payments have increased considerably over the past year.
In the past year, there has been an almost 3-fold increase in the proportion of victims who paid a ransom of one million dollars or more: an increase from 4% in 2020 to 11% in 2021.
Simultaneously, the percentage of those paying less than $10,000 dropped from 34% in 2020 to one in five (21%) in 2021.
Overall, the average ransom payment reached $812,360, a 4.8-fold increase from the 2020 average of $170,000 (based on 282 respondents).
The average cost to pay for a ransomware attack has increased by 4.8 times from 2020.
Ransom attacks by the industry
There are considerable variations between industries, and hackers manage to extract a higher payment from some organizations.
The highest average ransom payments were $2.04 million in the manufacturing industry and $2.03 million in energy, oil/gas, and utilities.
The lowest average extortion payments were $197K in healthcare and $214K in state/local government.
Ransom attacks by industry – the media, leisure, and entertainment industry were hit the most, after real estate, and in third place was the energy and gas industry
Ransom attacks by country
In Italy, where ransom payments are illegal (organizations are legally prohibited from paying), 43% of organizations hit by ransomware admitted paying. The study proves that legislative barriers alone are not effective in stopping ransom payments.
Ransom attacks by country – Austria is the country most affected by a ransomware attack in 2021, followed by Australia and in third place Malaysia. Israel is in the 17th place, with 66% of organizations in Israel affected by ransomware.
Effects of a ransomware attack
90% of those hit by ransomware in the past year said the most significant attack affected their ability to operate.
Furthermore, among private sector organizations, 86% said it resulted in lost business/revenue.
Overall, the average cost to fix the impact of a ransomware attack in 2021 was $1.4 million, a decrease from $1.85 million in 2020, which is likely due to organizations being less afraid of the impact on their reputation in light of the increase in high-profile cases.
Another explanation of this is that the insurance providers of these organizations are able to guide the victims quickly and efficiently in response to an event, reducing the costs. It is worth noting that in many cases where the ransom is paid, the insurance, and not the victim, pays the bill.
On average, organizations that have suffered attacks in the past year need one month to recover from the most significant attack – a long time for most companies.
The slowest recovery was reported by higher education and central/federal government, with two in five taking more than a month to recover.
Conversely, the sectors that recovered the fastest were manufacturing (only 10% took more than a month) and financial services (only 12%), apparently as a result of planning and preparation.
Protection against Ransomware
In order not to fall victim to this brutal attack, we must adopt security procedures that prevent the download of a ransomware virus into our device and incorporate advanced protection systems which detect these attacks. In the case of an organization or business, those responsible for the organization’s information security, such as the CISO, must ensure the implementation of these procedures and the integration of security systems such as:
- Purchase a reliable antivirus
- Choose strong passwords, with uppercase and lowercase letters, numbers, and special characters; perhaps most importantly, do not reuse the same password across all your accounts
- Restrict user access and deny unneeded logins
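One way to reason about the "strong passwords" advice above is a rough entropy estimate: password length times bits per character of the pool in use. This heuristic is a common approximation rather than a formal standard, but it shows why mixing character classes and, above all, adding length helps:

```python
import math
import string

def entropy_bits(password: str) -> float:
    # Rough estimate: assume each character is drawn uniformly from the
    # union of the character classes actually present in the password.
    pool = 0
    if any(c.islower() for c in password):
        pool += 26
    if any(c.isupper() for c in password):
        pool += 26
    if any(c.isdigit() for c in password):
        pool += 10
    if any(c in string.punctuation for c in password):
        pool += len(string.punctuation)
    return len(password) * math.log2(pool) if pool else 0.0

print(round(entropy_bits("sunshine"), 1))      # lowercase only: weak
print(round(entropy_bits("Th3!mpala#42"), 1))  # mixed classes and longer
```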
Everyone has seen a movie or TV show where a criminal kidnaps a victim and demands a ransom in exchange for their release. What some people do not realize, however, is that demanding a ransom is not just confined to kidnappings. Today, many criminals employ ransomware, a form of malware or computer virus that locks a user’s keyboard or computer and holds their data ‘hostage’ until the victim pays a ransom in exchange for restoring access to it.
Recently, computer criminals used ransomware to conduct the largest cyberattack in history. More than 200,000 Windows operating systems in more than 150 countries—including the United States, England, Germany, and Japan—were infected with the ransomware strain WannaCry (also known as WanaCrypt0r 2.0). Victims had the data on their computers encrypted or scrambled, effectively locking them out of it, while the attackers demanded a ransom of between $300 and $600. The attack was not limited to personal PCs—WannaCry victims included hospitals, banks, and government agencies.
So, how does ransomware work? Well, just like in the movies, someone takes something you own and holds it hostage until you send them the money they demand in return. The individual requesting the ransom infects your computer with a virus, usually by sending an email that asks the user to click on a link. Once the virus infects the system, the hacker can lock down the computer’s files and extort the user until the ransom is paid.
While this may seem like a relatively simple issue to resolve, the problem lies in the information that is being held hostage. Few organizations can operate without their data, and if one doesn’t have this data backed up, the impact of a ransomware attack can be crippling. In addition, the FBI, Department of Justice, and many technology firms suggest you don’t pay the ransom. Doing so does not guarantee you’ll regain access to your data, and since you’ve already been exposed to the virus and shown a willingness to pay the ransom, you’re vulnerable to being targeted again in the future.
How can you protect yourself against ransomware? To help prevent these kinds of attacks, there are a few steps you can take to mitigate risk. First, regularly install Microsoft security patches and system updates, frequently backup your files, secure your router, and—perhaps most important of all—don’t open suspicious emails. If it’s too late and a virus has already taken over your system, the most crucial step is disconnecting from the Internet to prevent the virus from spreading. Then, you should report the attack to authorities and file a complaint with the Internet Crime Complaint Center. Finally, wipe your PC and restore your data and files from backups.
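Since frequent backups are the single best defense listed above, a small script can flag backups that have gone stale (a naive sketch; the directory path and freshness window are assumptions you would adapt):

```python
import os
import time

def backup_is_fresh(backup_dir: str, max_age_hours: float = 24) -> bool:
    """Return True if the most recently modified file in backup_dir
    is newer than max_age_hours (a naive freshness check)."""
    newest = 0.0
    for root, _dirs, files in os.walk(backup_dir):
        for name in files:
            mtime = os.path.getmtime(os.path.join(root, name))
            newest = max(newest, mtime)
    if newest == 0.0:  # no files found at all
        return False
    age_hours = (time.time() - newest) / 3600
    return age_hours <= max_age_hours
```

Scheduling a check like this (e.g., via cron) turns "frequently back up your files" from advice into something you can verify before an attack, not after.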
Big risks can sometimes yield big rewards, but not when it comes to cybersecurity. Be sure your organization is doing all it can to protect itself from ransomware and other cyberattacks. Contact Infomax Office Systems today to learn how our on-site Managed IT services can help give you peace of mind from ransomware attacks. | <urn:uuid:27735d91-74db-4380-ad7e-582dcbea4707> | CC-MAIN-2022-40 | https://www.infomaxoffice.com/tag/wannacry/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00502.warc.gz | en | 0.949857 | 637 | 3.46875 | 3 |
Politicians seem to enjoy the new ways of communication they can have through the internet. Communication is no longer a one-way street from politicians to the public but more of a town hall meeting where everyone is invited to share their opinion. Of course, this is mostly good, but this virtual proximity doesn’t come without downside risk.
As the recent hacking attack on German politicians, artists, and journalists shows, hackers can trick unwitting users in order to gain access to their accounts and, therefore, their personal data.
Since there has been a lot of attention from the media, we wanted to summarise what we can learn from this event.
First of all, what happened?
Allegedly, a 20-year-old student from Homberg, Germany, hacked social media accounts, stole the personal data of approximately 1,000 people (mainly politicians), and published it via Twitter. By hacking the account of a famous YouTuber, the attacker was able to share malicious links through his profile.
The alleged hacker’s main motive is said to be attention-seeking, as he also “mistakenly” dropped hints on how he extracted data or got into people’s accounts. Another indicator for this is that mostly “only” contact data was published instead of more sensitive data.
What happens now?
The lack of security in Germany’s IT landscape has again been exposed in the public media. As a result, many politicians are pushing for laws that would require major software companies to enforce 2-Factor-Authentication and strong passwords.
“We are not securing data, we are securing people.” — Katarina Barley
But the problem is human nature. “Not every politician does this,” said Katarina Barley (SPD) about the 2-Factor-Authentication method she already uses. In the talk show “Maybrit Illner,” she also pointed out that security has been seen as a “progression break” for too long, and that it is not about securing data but rather about securing people.
What can we learn from it?
Generally, 2-Factor-Authentication and strong passwords are a must and should therefore be mandatory for every company dealing with sensitive data. But this has long been known and should already have been implemented.
The bigger problem is the human side of the hacking attack. If employees — even highly educated government members — aren’t taught how to securely use social media or other web services, no software or encryption can prevent a data breach.
Data security becomes a public issue once politicians or other public figures are attacked. Still, any company dealing with important customer or business data can be the victim of such an attack. In these cases, it can get extremely costly for the companies involved.
Because this is an issue affecting every one of us, we want to give you a few points to remember regarding your IT security.
- Establish 2-Factor-Authentication in your organization!
- Use only strong passwords (a password manager might help)!
But most importantly:
- Educate your colleagues, friends, and customers on security issues! After all, they are all part of your network… | <urn:uuid:3e19fb20-b759-4492-a8e6-604375ba99f3> | CC-MAIN-2022-40 | https://crashtest-security.com/hacking-attacks-on-politicians-public-figures/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00702.warc.gz | en | 0.954068 | 659 | 2.71875 | 3 |
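Under the hood, most 2-Factor-Authentication apps implement TOTP (RFC 6238), which derives a short-lived code from a shared secret and the current time. A minimal standard-library sketch:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, for_time=None, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 TOTP (HMAC-SHA-1), using only the standard library."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if for_time is None else for_time) // step)
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T = 59 s
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, for_time=59))  # → 287082
```

The server and the authenticator app both run this computation, which is why the code works without any network connection on the phone.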
As cybercriminals continue to evolve their phishing techniques, one old-school threat has developed new relevance: fake messages purporting to be from government entities. The advent of the COVID-19 pandemic opened up new vistas of fraud for phishing scammers, and they didn’t hesitate to dive right in. Google reports uncovering 18 million daily malware and phishing emails in 2020. Phishing scams using government and quasi-governmental entities became a quick, effective way for cybercriminals to capitalize on the fear and uncertainty generated by tumultuous world events, and they haven’t stopped.
More than 70% of organizations around the world experienced a phishing attack, resulting in an overall increase of 42% in phishing in 2020, while some categories and attack types like ransomware experienced triple-digit growth. Phishing threats took their biggest jump in Q2 2020, escalating an eye-popping 660% according to Google. Even in Q4 2020, the increase was lower but still hefty: phishing was up more than 220%.
Cybercriminals used social engineering tricks and clever design to great effect in pulling off audacious impersonation scams in 2020, especially scams related to the COVID-19 pandemic. An abundance of data making its way to the dark web provided ample fuel for phishing. The FBI IC3 reports receiving over 28,500 complaints about phishing scams related to COVID-19. Those phishing scams came in all sorts of shapes and sizes, and far too many people got caught.
Early in the pandemic, a flood of emails that claimed to contain important information about the virus and lockdowns from government entities battered business defenses, many carrying ransomware. One notable scheme involved spoofing emails from the World Health Organization. In these messages, bad actors would entice victims to download a map of COVID-19 transmission in their area in order to deliver malware like ransomware. In a similar scam, phishers pretending to represent Johns Hopkins University capitalized on the trustworthy reputation of the venerable school and the popularity of its live Coronavirus COVID-19 Global Cases map to send out “updates” that were actually ransomware.
Another variant of this type of phishing involves cybercriminals angling to snatch credentials through bogus websites. An estimated 4,300 malicious web domains related to COVID-19 relief were registered in just March 2020, and Google reported stopping 18 million suspicious COVID-19 related emails per day. One of the most popular phishing scams that impersonated or spoofed government entities was falsifying notices and communications about pandemic relief. Disasters are a common source of exploitation for cybercriminals and COVID-19 was no exception. The advent of COVID-19 relief checks in the US created a new avenue of attack for bad actors using phishing emails to drive traffic to credential-stealing websites.
Cybercriminals haven’t backed off of using this technique in 2021 either. The US IRS (Internal Revenue Service) released an official warning in early April 2021 to alert tax professionals about spoofing emails supposedly sent from “IRS Tax E-Filing” with the subject line “Verifying your EFIN before e-filing.” The U.S. Financial Industry Regulatory Authority (FINRA) was also forced to issue a regulatory notice in March 2021 warning brokers of an ongoing phishing campaign. Attackers using carefully faked messages based on FINRA templates with bogus but believable URLs were sending out fake compliance audit notices, spurring companies to react – and get their credentials stolen.
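One crude signal for spotting the "bogus but believable" domains used in these campaigns is edit distance to known-good names. The sketch below is a toy heuristic (the trusted list and threshold are illustrative); real filters use far richer features such as registration age and certificate data:

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance between two strings (classic dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

# Hypothetical allow-list of legitimate domains
TRUSTED = ["irs.gov", "finra.org", "who.int"]

def looks_suspicious(domain: str, max_distance: int = 2) -> bool:
    """Flag domains that nearly (but not exactly) match a trusted name."""
    for good in TRUSTED:
        d = levenshtein(domain.lower(), good)
        if 0 < d <= max_distance:
            return True
    return False

print(looks_suspicious("finra.0rg"))  # one character off a trusted name
print(looks_suspicious("irs.gov"))    # exact match, not flagged
```

A distance of zero means the real domain, so only near-misses are flagged; this is exactly the pattern lookalike phishing domains exploit.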
How can you keep this type of cybercrime away from your business? With a strong cybersecurity culture that’s buttressed by the defensive power of Graphus. Unfortunately, more than 40% of office workers in a recent survey admitted that they regularly open suspicious messages to avoid missing something important. In the same survey, 40% of respondents didn’t report suspicious messages to IT to stay out of trouble. Employee email handling errors are one of the biggest cybersecurity risks any business faces. By building a culture that doesn’t punish asking for help and equipping your staff with the tools that they need to combat phishing, your business benefits from reduced risk for expensive pitfalls like ransomware attacks.
The three layers of protection that Graphus provides give your business the antiphishing support that gives you an edge. TrustGraph stops most phishing email before it ever hits an employee inbox. It’s also smart, so it collects its own threat intelligence to make sure that your protection is up to date immediately, not after some distant patch releases. You won’t miss new opportunities either. Messages that are from new sources that pass muster are equipped with the EmployeeShield banner, allowing recipients to mark them safe or suspicious with one click.
But your protection doesn’t stop there. It’s essential that staffers feel comfortable seeking guidance on tricky email questions, and they’re more likely to ask for assistance if they have immediate resources at hand to consult when they receive a suspicious email. Enter Phish911. Staffers can quarantine an email in a flash, keeping it out of everyone’s inbox until administrators can review it.
Stop fake government and official email from being a menace to your business with Graphus. Set up a consultation with our experts now to learn more and let’s get started on putting the AI power of Graphus to work for your organization right away. | <urn:uuid:d1893541-c973-4586-a1c4-cb0e17d5d66c> | CC-MAIN-2022-40 | https://www.graphus.ai/blog/fake-government-messages-bring-real-trouble/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00702.warc.gz | en | 0.932556 | 1,198 | 2.59375 | 3 |
Asset Performance: The art of doing more with less
Producing clean water and treating wastewater is a complex industrial process involving many different types of assets, for example pumps, sediment tanks, and treatment units. Adding to the complexity is the fact that these assets are often distributed over a wide area and are connected to the consumers through a large network infrastructure. For readers not closely familiar with the operational aspects of water plants and networks, think about how the water gets to all the houses in your neighborhood. The task of monitoring everything to make sure it is running at peak efficiency is a rather daunting one – as water operation managers can confirm.
So, let’s pause for a minute and think about pumps located in remote locations, which often consume large amounts of electrical power. How do you ensure the pumps run at the correct speed to deliver water at the right pressure when you open your tap at home, considering that hundreds of others in your neighborhood may simultaneously be flushing toilets or taking showers?
To ensure consistent water delivery, high-powered pumps are often set to operate at greater speeds than needed – a conservative approach that uses more energy and increases the stress on pipelines, but it beats the alternative. Now, how does this affect maintenance? Bearings, impellers and the like wear out; that’s a fact of life. Ideally, they get replaced before the pump grinds to a halt, but can we know when that’s going to happen? Today this is largely a manual process: either things get fixed as they fail, or units get replaced before they need to be, in order to avoid failure later. More about decreasing industrial inefficiencies through Asset Management and IoT.
Digitize for efficiency and reliability
So how do today’s water suppliers increase efficiency, use less energy, avoid leakage and breakdowns, and supply and treat the ever-increasing amounts of water needed to keep up with population growth? Another trend may offer some answers: the Industrial Internet of Things (IIoT), which basically means that the control systems, sensors and so on behind all assets can be accessed and connected, securely, anytime and anywhere. For most of the water industry this could be the place to start looking for new ways to improve operations. Buried inside all the data made accessible by this trend, it is very possible we can find some of the answers we need. We wrote earlier about the biggest challenges in the water industry in case you want to check them.
How to get started?
The first step is to make sure data is accessible – and there’s lots and lots of it! In its raw form it would be information overload, but with the right tools and analytics this ocean of data can be turned into actions to take and can provide helpful new insights into current operations.
I’m not talking about raw numbers here but about actual guidance that advises what to do and what not to do, when to act to avoid failure, and when to make tweaks to ensure error-free continuation of your process so that everything runs at peak efficiency.
The tools able to deliver this are referred to as ‘Advisors’; they distill business value from data and put IIoT technology to practical use in improving efficiency and reliability.
Get insights to those who can make a difference – when and where they need them
Today’s mobile and augmented reality technologies allow, for example, a worker in the field to simply point a tablet towards a pump to determine if there are any developing problems based on how it’s currently operating versus what’s normal or optimal.
Maintenance takes place only when it’s needed but always before a breakdown occurs.
Additionally, it’s possible to set pumps to run at outputs that closely match actual demand, and even to compare smart water meter data with flow rates across the water network to diagnose leaks.
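As an illustration of the kind of check such monitoring builds on, a naive rolling-baseline anomaly detector for pump sensor readings could look like this (the readings and thresholds are hypothetical; production "Advisors" use far more sophisticated models):

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=10, threshold=3.0):
    """Flag readings deviating more than `threshold` standard deviations
    from the rolling baseline (naive sketch; sigma == 0 is never flagged)."""
    flags = []
    for i, value in enumerate(readings):
        history = readings[max(0, i - window):i]
        if len(history) < 3:  # not enough baseline yet
            flags.append(False)
            continue
        mu, sigma = mean(history), stdev(history)
        flags.append(sigma > 0 and abs(value - mu) > threshold * sigma)
    return flags

# Hypothetical vibration readings: steady operation, then a spike
data = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 5.0, 1.0]
print(flag_anomalies(data))  # only the spike at index 7 is flagged
```

The point is not the statistics but the workflow: readings are judged against what is normal for that asset, which is exactly the "current versus optimal" comparison the field worker sees on the tablet.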
What’s the business benefit of asset performance? Two things, according to ARC:
- Insight driven prescriptive maintenance programs have the potential to reduce unplanned downtime to almost zero.
- Making better use of maintenance resources by performing only needed maintenance and avoiding unplanned downtime can reduce MRO and labor costs by 50%.
Originally this article was published here.
This article was written by John Boville, the EcoStruxure Enablement manager in the Industrial Automation Team at Schneider Electric. He has been with the company for more than 20 years and in engineering, management and marketing roles for industrial automation solutions for over 30 years. | <urn:uuid:0d7a2162-899f-40a9-b242-cb3ab58dfbca> | CC-MAIN-2022-40 | https://www.iiot-world.com/industrial-iot/connected-industry/asset-performance-the-art-of-doing-more-with-less/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00702.warc.gz | en | 0.947431 | 943 | 2.765625 | 3 |
A data scientist is first and foremost a curious and inquisitive individual. To make sense of a massive amount of data, one has to be focused on research objectives.
Data scientists study the database as if their lives depend on it. Their curiosity is so profound that it pushes them through endless nights of puzzle-solving and organizing various sets of intel. Once all the pieces are in place, once everything is in its proper order, that is when the magic happens.
The curiosity of a data scientist is fueled by an obsession with accuracy. The ultimate goal is to find the most meaningful information in a haystack of numbers and letters. This is where everything begins, in science and in life in general.
First, we raise our questions. Next, we go about looking for relevant information. We collect data from meaningful locations in a timely manner. Then, with our data on hand, we ask if we asked the right questions in the first place. After this verification process, we formulate a theory. We then test this theory and subsequently make conclusions after the research.
These conclusions then drive the growth of companies. It brings about innovations and powerful new ideas that advance technology itself, among many others.
Having the right data scientists in your team will work wonders for your business. Choose well. | <urn:uuid:f0f553aa-6026-4eaa-ba2d-171d99f22e04> | CC-MAIN-2022-40 | http://fieldviewsolutions.com/curiosity-data-scientist/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00702.warc.gz | en | 0.941541 | 263 | 2.6875 | 3 |
“If you think technology can solve your security problems, then you don’t understand the problems and you don’t understand the technology.” – Bruce Schneier
A multi-part series by John Ball, Phillip Kuzma, and Ted Nass on web server security.
Wikipedia: “HTTP Strict Transport Security (HSTS) is a web security policy mechanism which helps to protect websites against protocol downgrade attacks and cookie hijacking. It allows web servers to declare that web browsers (or other complying user agents) should only interact with it using secure HTTPS connections, and never via the insecure HTTP protocol. HSTS is an IETF standards track protocol and is specified in RFC 6797.”
An explanation of the benefit from Qualys: “HTTP Strict Transport Security (HSTS) is an SSL safety net: technology designed to ensure that security remains intact even in the case of configuration problems and implementation errors. To activate HSTS protection, you set a single response header in your websites. After that, browsers that support HSTS (at this time, Chrome and Firefox) will enforce the protection.
The goal of HSTS is simple: after activation, do not allow insecure communication with your website. It achieves this goal by automatically converting all plain-text links to secure ones. As a bonus, it will also disable click-through SSL certificate warnings.”
HSTS is so easy to implement on Apache2 once HTTPS is configured that it is crazy more sites aren’t using it. Of the top 500 websites tracked by Hardenize.com, only 32% are utilizing HSTS and only 7% are preloading HSTS. We enabled HSTS preloading in our /etc/apache2/sites-enabled/[sites].conf file. It literally is one line in your virtual host configuration:
Header always set Strict-Transport-Security "max-age=31557600; includeSubDomains; preload"
We tested that HSTS was deployed correctly before setting a long duration by changing the “max-age” variable to “max-age=0”. You will get a warning on validation because the HSTS age is set low, but this confirms that it is working before you commit to a long duration on a potentially faulty deployment.
As the definition above mentions, this ensured that insecure communication was no longer allowed with our web server. As an added layer, we also enabled HSTS on Cloudflare under the “Crypto” tab after validating our results on SSL Labs. We validated our results again in SSL Labs to ensure our changes took effect in Cloudflare, then jumped over to https://hstspreload.org/ to get our domains added to the Google Chrome HSTS preload list. This means that Google Chrome users would reach our domains via HTTPS even if their browser had never visited them before.
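A quick way to sanity-check the header you deployed is to parse it the way a browser would. This simplified sketch handles only the three directives used above (per RFC 6797, directive names are case-insensitive):

```python
def parse_hsts(header: str) -> dict:
    """Parse a Strict-Transport-Security header value into its directives."""
    policy = {"max_age": None, "include_subdomains": False, "preload": False}
    for directive in header.split(";"):
        directive = directive.strip().lower()
        if directive.startswith("max-age="):
            policy["max_age"] = int(directive.split("=", 1)[1].strip('"'))
        elif directive == "includesubdomains":
            policy["include_subdomains"] = True
        elif directive == "preload":
            policy["preload"] = True
    return policy

print(parse_hsts("max-age=31557600; includeSubDomains; preload"))
# → {'max_age': 31557600, 'include_subdomains': True, 'preload': True}
```

Feeding it the header returned by your server (e.g., via `curl -sI https://example.com`) confirms that `max-age`, `includeSubDomains`, and `preload` all survived the deployment, which is what the preload list submission checks for.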
If you’re curious about what is loaded in Firefox and Chrome there is a Stack Exchange thread on how to pull that data. https://security.stackexchange.com/questions/92954/how-can-i-see-which-sites-have-set-the-hsts-flag-in-my-browser
SGSN and GGSN are two terminologies that started their journey in the late 1990s when General Packet Radio Service (GPRS) was introduced. SGSN and GGSN are two core network nodes that have been part of mobile communications since then.
GGSN (Gateway GPRS Support Node) and SGSN (Serving GPRS Support Node) are two core network nodes in 2G GSM and 3G UMTS networks that enable packet-switched mobile internet. GGSN and SGSN were added to GSM networks as part of the GPRS enhancement, and they are used by both GSM and 3G UMTS networks.
When we use our smartphones today for 4G and 5G services, we take the high-speed internet on our phones for granted. However, when the most widely deployed 2G GSM networks were launched in the early 1990s, they were limited in what they could offer to support the internet on the phone. GSM networks originally employed circuit-switched technologies for voice, SMS and mobile data. At that time, High-Speed Circuit-Switched-Data (HSCSD) technology could enable maximum downlink speeds of 57.6 kbps in the GSM networks through traditional circuit-switching. The approach for offering mobile data changed in the GSM networks when a highly efficient technology General Packet Radio Service (GPRS) was introduced based on packet-switching. With GPRS, two new network nodes, SGSN and GGSN, were added to the GSM network architecture. These nodes still exist in the 2G GSM and 3G UMTS networks, and every time we see the E symbol or the H/H+ symbol on our mobile phones, we are connected to these nodes to access the internet through our phones.
What is SGSN – Serving GPRS Support Node?
SGSN stands for Serving GPRS Support Node, and it is an essential core network node in GPRS networks that enables packet-switched mobile data (internet) in 2G GSM and 3G UMTS networks. SGSN works with GGSN and is responsible for mobility management, billing, and the management of data sessions.
SGSN is an essential network entity in GPRS (General Packet Radio Service), EDGE (Enhanced Data for Global Evolution) and 3G UMTS networks. SGSN (together with GGSN) was added to introduce the packet-switched technology within 2G GSM networks. In the overall 2G /3G mobile core network, SGSN can be seen as the “packet-switched” version of the MSC. So, just like the MSC utilises its circuit-switched capabilities to facilitate voice services, SGSN uses its packet-switched capabilities to facilitate data services. In the network architecture, SGSN sits between Radio Access Network (RAN) and Gateway GPRS Support Node (GGSN). It communicates with mobile phones through RAN and communicates with external networks through GGSN. That way, it allows our mobile phones to be able to connect to external networks.
What is GGSN – Gateway GPRS Support Node?
GGSN stands for Gateway GPRS Support Node and it is a network node that connects SGSN (Serving GPRS Support Node) to external data networks such as the internet and X.25 for enabling packet-switched mobile data (internet) in 2G GSM and 3G UMTS networks. GGSN is part of the mobile core network.
GGSN is a network component that works closely with SGSN (Serving GPRS Support Node) and connects the core network in 2G GSM and 3G UMTS to external packet networks. The word ‘packet’ is important in this context because GGSN is all about packet-switched networks that enable mobile data. GGSN was added to the mobile core network as a gateway to connect the GPRS network to the external data world. From a network architecture point of view, GGSN is situated between the Serving GPRS Support Node (SGSN) and the external data networks such as the internet and X.25 networks.
What are SGSN and GGSN used for?
SGSN and GGSN are used for connecting our cell phones to mobile internet through GPRS, EDGE, HSPA and HSPA+ technologies in 2G GSM and 3G UMTS networks. SGSN is the main node that performs packet-switching functions, whereas GGSN is more of a router that connects SGSN to the external data networks.
SGSN and GGSN are two essential nodes within the GSM and UMTS mobile core networks. These nodes allow us to access external networks such as the internet through our mobile phones when we are on 2G or 3G networks. The core network is central to the overall mobile network because it allows the subscribers of a mobile operator to access all the services that they are entitled to. The original GSM networks were mainly designed to support voice calls and SMS (text messages). Even though technologies like Circuit-Switched-Data (CSD) and High-Speed Circuit-Switched Data (HSCSD) could technically enable data, it was not the best use of network resources by engaging dedicated circuits for data sessions. That is where the General Packet Radio Service (GPRS) enhancement came in with its packet-based approach. As part of the GPRS enhancement, a packet-switched part was introduced into the core network architecture of GSM by adding two new nodes. These nodes are called the Serving GPRS Support Node (SGSN) and Gateway GPRS Support Node (GGSN). Later, the 3G UMTS networks also followed the same approach and continued with separate circuit-switched and packet-switched network entities that included SGSN and GGSN. 4G LTE networks, however, use a more advanced mobile core network called the EPC or Evolved Packet Core.
How does SGSN work?
SGSN is responsible for the two-way communication of data packets with information content (e.g. WhatsApp message) between the mobile phone user and the destination network in the geographical area it serves. SGSN connects to external networks through GGSN and to the user through RNC and NodeB.
SGSN provides the packet-switched capability to GSM and UMTS networks to enable mobile data (mobile internet). The packets of data with information content (e.g. results from a Google search) can be sent and received by the mobile phones operating in a given geographical area covered by a serving SGSN. The SGSN is responsible for mobility management, billing, and the management of data sessions.
When SGSN receives data (with information content) from mobile phone users, it passes that on to GGSN. GGSN converts that data into a suitable protocol format (e.g. IP) before sending it over to the destination external network. On the way back, the process is reversed. So, the data is received from the external network in the format of that network and then converted by GGSN into a protocol format that mobile phones can understand before finally sending it to the mobile phone user. Have a look at the following network diagram to see how SGSN fits within the 2G / 3G network architecture.
If some of the network entities above are unclear to you, you can check out our Mobile Networks Made Easy to get some guidance.
How does GGSN work?
GGSN acts as a router that sits between SGSN and external data networks (e.g. internet) to enable two-way communication. It receives data packets from a mobile user through SGSN and converts them into the format required by the destination network (e.g. IP). On the way back, the process is reversed.
GGSN receives data from a mobile user via the SGSN, converts the data into the protocol format that the destination requires (e.g. IP format for the internet) and sends that on to the destination data network (e.g. internet). On the way back, everything is reversed, so it receives data from the external network in the protocol format of the external network, which is then sent to the serving SGSN in the protocol format of the destination. The serving SGSN here means the SGSN that is serving the end-user (destination). For the external networks, GGSN is just a router interfacing the mobile packet-switched network (GPRS/EDGE/UMTS) and the external data networks. Have a look at the high-level network architecture of the UMTS networks in the diagram above to see where GGSN fits in.
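Between SGSN and GGSN, user packets travel through GTP tunnels identified by a Tunnel Endpoint Identifier (TEID). The sketch below is illustrative only: it models just the mandatory 8-byte GTPv1-U header and omits optional fields and the real UDP/IP transport:

```python
import struct

def gtpu_encapsulate(teid: int, payload: bytes) -> bytes:
    """Wrap a user packet in a minimal GTPv1-U header.

    Flags 0x30 = version 1, protocol type GTP, no optional fields;
    message type 0xFF marks a T-PDU (tunneled user data)."""
    header = struct.pack(">BBHI", 0x30, 0xFF, len(payload), teid)
    return header + payload

def gtpu_decapsulate(packet: bytes):
    """Strip the 8-byte header and return (teid, payload)."""
    flags, msg_type, length, teid = struct.unpack(">BBHI", packet[:8])
    assert (flags >> 5) == 1 and msg_type == 0xFF, "not a GTPv1-U T-PDU"
    return teid, packet[8:8 + length]

frame = gtpu_encapsulate(teid=0x1234, payload=b"user IP packet")
print(gtpu_decapsulate(frame))  # → (4660, b'user IP packet')
```

The TEID is what lets the GGSN map an incoming packet back to the right subscriber session before forwarding the inner IP packet to the external network.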
Like the other key parts of the mobile core network, SGSN and GGSN are owned and managed by the mobile operators (e.g. Vodafone, T-Mobile etc.), who procure this network component from mobile network vendors like Ericsson, Huawei, Nokia etc.
Do we have SGSN and GGSN in 4G LTE networks?
4G LTE networks do not have SGSN and GGSN, but they use equivalent network nodes that offer the packet-switching capability in 4G. LTE networks are fully packet-switched and use Serving Gateway (S-GW) as the equivalent of SGSN and Packet Data Network Gateway (PDN-GW) as the equivalent of GGSN.
SGSN and GGSN network nodes only exist in GPRS, EDGE and UMTS networks. The 4G LTE mobile core network, Evolved Packet Core (EPC), consists of different network entities that perform the packet-switched tasks. The equivalent of SGSN in LTE is Serving Gateway (S-GW), as shown in the network diagram below. The equivalent of GGSN in LTE is Packet Data Network Gateway (PDN-GW). 4G LTE networks only use packet-switched technology, which is why you cannot see the circuit-switched representation in the 4G LTE network architecture below. 4G LTE networks, however, have a circuit-switched fallback option that utilises the 2G GSM and 3G UMTS networks to facilitate circuit-based voice calls and SMS. Since the future of voice and SMS is all data, LTE networks have a capability called Voice over LTE (VoLTE) that allows packet-switched LTE networks to enable IP based voice calls and SMS. 5G networks have an equivalent technology, Voice over New Radio (VoNR), that enables voice calls and SMS through the 5G core network.
SGSN vs GGSN – Conclusion
SGSN and GGSN are the mobile core network nodes within 2G GSM and 3G UMTS networks that enable packet-switched mobile data. SGSN stands for Serving GPRS Support Node and is a network entity that provides the packet-switched capability. GGSN stands for Gateway GPRS Support Node and is a gateway situated between SGSN and external data networks. GGSN receives data from a mobile user via SGSN, converts that into a suitable protocol format (e.g. IP) and sends it over to the external data network. On the way back, everything is reversed. Before the introduction of SGSN and GGSN, GSM mobile networks only had the circuit-switching capability, which enabled voice calls, text messages and limited circuit-switched data (CSD). SGSN and GGSN were added to the mobile core network architecture to support General Packet Radio Service (GPRS) for offering packet-switched mobile data services. The circuit-switching capability in GSM networks is provided by the MSC (Mobile Switching Centre), whereas the packet-switching function is handled by SGSN (Serving GPRS Support Node). The EDGE enhancement (Enhanced Data rates for Global Evolution) in GSM makes use of the same network nodes for mobile data services. The 3G UMTS networks and the HSPA (High-Speed Packet Access) enhancements also rely on SGSN and GGSN for all things mobile data. 4G LTE networks are fully packet-switched and do not have a circuit-switched part. LTE networks have a different architecture and use Serving Gateway (S-GW) and Packet Data Network Gateway (PDN-GW) for packet-switching.
Information is the lifeblood of the modern organization – but if improperly guarded, it can be an Achilles’ heel, which is why information security is essential.
IT administrators are therefore tasked with performing a sort of balancing act: How much freedom can they give users without jeopardizing data? It's a tricky position to be in, but it's one that can be remedied with refined measures. Here are some key examples of how these information security measures can benefit an organization:
1. Lockdown for Public-Access Machines
Many organizations such as libraries, airports, testing centers and retailers are in the position of providing public access on certain machines. From a security standpoint, this is risky since every single endpoint on a network is also an attack vector. At the same time, not granting access to increasingly common amenities such as self-service kiosks could result in lost opportunities for excellent customer service.
A more effective way around this problem is to use a computer management tool that enables desktop lockdown. Specifically, unauthenticated users would have restricted access to a specific set of applications, and even then, they would be unable to launch any unauthorized executable through those applications (e.g., a web browser). Likewise, USB ports and disk drives can be deactivated to prevent malicious uploads or attempted data theft.
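At its core, the restricted-launch behaviour described above is an allowlist check. The sketch below is hypothetical: the application names and the function are invented for this example and do not reflect the configuration of any particular lockdown product.

```python
# Hypothetical sketch of the allowlist logic a desktop-lockdown tool
# might apply for unauthenticated (public) sessions. The entries here
# are examples only.
PUBLIC_ALLOWLIST = {"browser.exe", "kiosk_catalog.exe"}

def may_launch(executable: str, authenticated: bool) -> bool:
    """Authenticated users may run anything; public users may only
    run executables that are explicitly allowlisted."""
    if authenticated:
        return True
    return executable.lower() in PUBLIC_ALLOWLIST

if __name__ == "__main__":
    print(may_launch("browser.exe", authenticated=False))  # True
    print(may_launch("cmd.exe", authenticated=False))      # False
```

A real product would enforce this at the operating-system level rather than in application code, but the policy shape is the same.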
2. User Login and Software Compliance Tracking
User tracking and software compliance are critical components of endpoint management.
The ability to track user login sessions is important for several reasons. For one, it helps administrators understand when certain accounts are being accessed, and the duration of that access. If nothing else, this creates behavior baselines that make it a little easier to identify unusual patterns in account access – especially when visualizations are used to organize this information.
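The behaviour-baseline idea can be as simple as flagging sessions that deviate strongly from an account's history. The sketch below is illustrative only; the z-score threshold and session data are invented, and a real endpoint-management tool would draw on far richer signals.

```python
import statistics

def unusual_session(durations_min, new_duration_min, z_threshold=3.0):
    """Flag a login session whose duration is more than z_threshold
    standard deviations from the account's historical mean."""
    mean = statistics.mean(durations_min)
    stdev = statistics.pstdev(durations_min)
    if stdev == 0:
        return new_duration_min != mean
    return abs(new_duration_min - mean) / stdev > z_threshold

# Typical ~1-hour sessions for this (invented) account.
history = [55, 60, 58, 62, 57, 61]
print(unusual_session(history, 59))   # False: in line with the baseline
print(unusual_session(history, 480))  # True: an 8-hour outlier
```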
Drilling down further, IT administrators can put themselves at an advantage by also knowing what software is being accessed and how often. This includes software that may be over-utilized, or applications that fall under the umbrella of shadow IT: the use of third-party applications not sanctioned by the IT department for work purposes, which can ultimately put company data at risk. So, while some users may have a greater degree of access than others, it's important to monitor for any software that puts company information at risk. Such use is often inadvertent, but it's dangerous nonetheless.
3. Regular Computer Sanitation
Computer sanitation is an important component of data security. This is true for enterprise workstations, but also for machines and kiosks that are used by the public, where personal data might be compromised.
Traditionally, re-imaging was the go-to method that IT administrators used to maintain ongoing control over computer configurations. That said, manual system restores are incredibly time-consuming. A better option is to set up automated maintenance schedules using 'reboot-to-restore' software. With such tools, IT admins can lock down an optimal configuration and ensure it persists through every restart, regardless of any abuse or alterations (accidental or otherwise).
To learn more about how ‘Reboot to Restore’ software can help your organization, contact Faronics today. | <urn:uuid:695d78f8-80a9-4a2d-a6d0-ff3e26c4107d> | CC-MAIN-2022-40 | https://www.faronics.com/news/blog/information-security-3-effective-data-safeguarding-measures | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00702.warc.gz | en | 0.927664 | 649 | 2.53125 | 3 |
Lots of companies are working to develop self-driving cars. And almost all of them use lidar, a type of sensor that uses lasers to build a three-dimensional map of the world around the car.
But Tesla CEO Elon Musk argues that these companies are making a big mistake.
"They're all going to dump lidar," Musk said at an April event showcasing Tesla's self-driving technology. "Anyone relying on lidar is doomed."
"Lidar is really a shortcut," added Tesla AI guru Andrej Karpathy. "It sidesteps the fundamental problems of visual recognition that is necessary for autonomy. It gives a false sense of progress, and is ultimately a crutch."
In recent weeks I asked a number of experts about these claims. And I encountered a lot of skepticism.
"In a sense all of these sensors are crutches," argued Greg McGuire, a researcher at MCity, the University of Michigan's testing ground for autonomous vehicles. "That's what we build, as engineers, as a society—we build crutches."
Self-driving cars are going to need to be extremely safe and reliable to be accepted by society, McGuire said. And a key principle for high reliability is redundancy. Any single sensor will fail eventually. Using several different types of sensors makes it less likely that a single sensor's failure will lead to disaster.
"Once you get out into the real world, and get beyond ideal conditions, there's so much variability," argues industry analyst (and former automotive engineer) Sam Abuelsamid. "It's theoretically possible that you can do it with cameras alone, but to really have the confidence that the system is seeing what it thinks it's seeing, it's better to have other orthogonal sensing modes"—sensing modes like lidar.
Camera-only algorithms can work surprisingly well
On April 22, the same day Tesla held its autonomy event, a trio of Cornell researchers published a research paper that offered some support for Musk's claims about lidar. Using nothing but stereo cameras, the computer scientists achieved breakthrough results on KITTI, a popular image recognition benchmark for self-driving systems. Their new technique produced results far superior to previously published camera-only results—and not far behind results that combined camera and lidar data.
Unfortunately, media coverage of the Cornell paper created confusion about what the researchers had actually found. Gizmodo's writeup, for example, suggested the paper was about where cameras are mounted on a vehicle—a topic that wasn't even mentioned in the paper. (Gizmodo re-wrote the article after researchers contacted them.)
To understand what the paper actually showed, we need a bit of background about how software converts raw camera images into a labeled three-dimensional model of a car's surroundings. In the KITTI benchmark, an algorithm is considered a success if it can accurately place a three-dimensional bounding box around each object in a scene.
Software typically tackles this problem in two steps. First, the images are run through an algorithm that assigns a distance estimate to each pixel. This can be done using a pair of cameras and the parallax effect. Researchers have also developed techniques to estimate pixel distances using a single camera. In either case, a second algorithm uses depth estimates to group pixels together into discrete objects, like cars, pedestrians, or cyclists.
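The parallax step rests on the standard pinhole stereo relation: depth equals focal length times baseline divided by disparity. A minimal sketch follows; the camera values are assumptions for illustration (roughly KITTI-like, but not taken from the paper).

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Pinhole stereo relation: depth = f * B / d, where d is the
    horizontal pixel shift of a point between the left and right images."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# With a 720 px focal length, a 0.54 m baseline, and a 10-pixel
# disparity, the point sits roughly 39 m from the cameras.
print(depth_from_disparity(10, 720, 0.54))
```

Note how depth error grows as disparity shrinks: distant objects shift by only a pixel or two, which is one reason camera-only depth is hardest at long range.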
The Cornell computer scientists focused on this second step. Most other researchers working on camera-only approaches have represented the pixel data as a two-dimensional image, with distance as an additional value for each pixel alongside red, green, and blue. Researchers would then typically run these two-dimensional images through a convolutional neural network that has been trained for the task.
But the Cornell team realized that using a two-dimensional representation was counterproductive because pixels that are close together in a two-dimensional image might be far apart in three-dimensional space. A vehicle in the foreground, for example, might appear directly in front of a tree that's dozens of meters away.
So the Cornell researchers converted the pixels from each stereo image pair into the type of three-dimensional point cloud that is generated natively by lidar sensors. The researchers then fed this "pseudo-lidar" data into existing object recognition algorithms that are designed to take a lidar point cloud as an input.
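The core of that conversion is back-projecting each pixel's depth through the camera model into 3-D space. The sketch below shows only the geometric idea; the intrinsics are invented for illustration, and the Cornell pipeline itself is considerably more involved.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a (H, W) metric depth map into an (H*W, 3) point
    cloud in camera coordinates (X right, Y down, Z forward), using
    the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# A toy 2x2 depth map describing a flat surface 10 m away.
depth = np.full((2, 2), 10.0)
cloud = depth_to_point_cloud(depth, fx=700.0, fy=700.0, cx=1.0, cy=1.0)
print(cloud.shape)  # (4, 3)
```

The resulting array has the same shape as a lidar point cloud, which is what lets existing lidar-based object detectors consume it unchanged.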
"You could close the gap significantly"
"Our approach achieves impressive improvements over the existing state-of-the-art in image-based performance," they wrote. In one version of the KITTI benchmark ("hard" 3-D detection with an IoU of 0.5), for example, the previous best result for camera-only data was an accuracy of 30%. The Cornell team managed to boost this to 66%.
In other words, one reason that cameras plus lidar performed better than cameras alone had nothing to do with the superior accuracy of lidar's distance measurements. Rather, it was because the "native" data format produced by lidar happened to be easier for machine-learning algorithms to work with.
"What we showed in our paper is you could close the gap significantly" by converting camera-based data into a lidar-style point cloud, said Kilian Weinberger, a co-author of the Cornell paper, in a phone interview.
Still, Weinberger acknowledged, "there's still a fair margin between lidar and non-lidar." We mentioned before that the Cornell team achieved 66% accuracy on one version of the KITTI benchmark. Using the same algorithm on actual lidar point cloud data produced an accuracy of 86%. | <urn:uuid:2d9efc1c-5c13-4d46-be0c-b76716cf41b7> | CC-MAIN-2022-40 | https://arstechnica.com/cars/2019/08/elon-musk-says-driverless-cars-dont-need-lidar-experts-arent-so-sure/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00702.warc.gz | en | 0.948588 | 1,166 | 2.609375 | 3 |
The UK’s ambition to become a global leader in artificial intelligence and robotics could be derailed because youngsters don’t have the right skills or ambition to work in the area, according to a new report.
The warning comes in a new study from enterprise software firm, Sage.
Growing numbers of young people are expressing an interest in technologies such as AI, machine learning, and robotics, says the company. However, in order for new alliances between government and industry to unlock this potential in the future, much more needs to be done now to “inspire and educate a diverse cohort of young people”.
Carried out by YouGov, the Sage research quizzed 1,484 children aged eight to 18 about their perceptions of the technology industry – particularly of AI and robotics.
Building a head of STEAM
While 66 percent of respondents said they enjoy working with technology, only 25 percent of them said they’d like to work with AI, robotics, or other advanced systems in the future. Out of this (significant) minority, 37 percent of youngsters said the idea of working in AI sounds exciting and motivating, while 31 percent said they want to work at the cutting edge of digital research.
Good news, but despite the optimism on display, the Sage report makes it clear that much greater diversity is essential to creating a strong, inclusive AI industry.
While some young people are excited about the future of technology in general, many others are worried, and 56 percent of youngsters said they are unlikely to enter the industry when they leave school or college.
Their reasons are wide-ranging, with 29 percent saying they’d like to pursue “creative careers” instead, and 25 percent saying they don’t have the right qualifications. A further 21 percent believe that they’re not “smart enough” to work in AI or robotics.
These findings suggest that the education system is failing to make technology seem like a creative pursuit and an attainable goal, despite worldwide efforts to align science, technology, engineering, and maths (STEM) careers with the arts (STEAM). Many robotics and AI development programmes now take place within diverse, inter-disciplinary teams, bringing in non-technical experts from the worlds of psychology, design, ethics, and more.
Action is essential
Kriti Sharma, vice president of AI at Sage, said that more action is needed to ensure that youngsters have the skills to survive in the future workplace. “It’s great to see the government starting to assess the importance of AI, evidenced in the comprehensive Sector Deal announced recently, committing extra resources and funding to help grow this promising area,” she said.
“However, there’s still a huge amount of work to be done, particularly when it comes to the elitism problem in the AI industry, as our research confirms.
“It’s no longer the case that you need a Master’s degree to consider a career in emerging tech; yet 24 percent of the young people we surveyed think you do. We need to educate young people about what working in tech really means.”
The research comes as Sage launches a series of AI events for young people. Run in partnership with the charity Tech for Life, the FutureMakers Labs are designed to showcase the opportunities available in the AI, robotics, and digital sectors.
Lyndsey Britton, founder of Tech for Life, said: “It’s encouraging to see how many young people enjoy technology and believe having a career in the sector will be exciting.
“But we need to make sure that the support is there for them to get the right skills to be able to work in future jobs at the cutting edge of digital, like AI. The young people’s events we put on are increasingly popular, and there’s a real thirst from young people to learn, especially from industry experts.
“Working with organisations like Sage means we can help make sure that opportunities and learning are accessible to young people from any background, and ensure there is a future workforce with the right skills and knowledge to do jobs that probably haven’t even been invented yet.”
Internet of Business says
While some might see the quest for diversity in AI and robotics – and technology in general – as a political tick in the CSR box, nothing could be further from the truth.
Figures released at last year’s UK Robotics week revealed that 83 percent of people working in all types of STEM careers in the UK are male, and the numbers for coding and computer science are even worse in diversity terms: just 10 percent of coders are women, with even fewer employees overall coming from ethnic minorities. In general, this is a global problem.
In the West, programmers are overwhelmingly young, straight, white males, and this is a concern, because they are developing technologies that will be used by everyone, in closed teams that are nowhere near representative of society outside the lab. There is growing evidence that this can lead to the creation of biased systems based on limited training data – often unintentionally.
Moreover, as Joichi Ito of MIT's Media Lab observed last year at the World Economic Forum in Davos, some of those coders prefer the binary world of computers to the messy, emotional world of people, in which systems will actually be used. He described his own students as "oddballs".
With figures such as Ada Lovelace and Alan Turing (who was persecuted for his sexual orientation) so central to Britain’s computing heritage, the UK is well placed to push for greater diversity. Both now have institutes named after them, and it is good news that many of the country’s senior AI and robotics policymakers and academics are women.
Among many others, these include: Lucy Martin, head of robotics at EPSRC; Dame Wendy Hall, co-author of the UK’s AI strategy review; and Gila Sacks, director of digital and tech policy at DCMS, and Dr Rannia Leontaridi, director of Business Growth at BEIS, who jointly run the new Office for AI in Whitehall.
And Sage’s Sharma is also emerging as an important voice in the debate.
Driving her work at Sage is her fear that AI and the fourth industrial revolution will entrench inequality rather than provide solutions to it. Instead of emerging technologies easing problems such as gender, race, and age inequality, she believes that they risk perpetuating them by cementing biases that already exist in human society.
Speaking to Internet of Business editor Chris Middleton last year at the Rise of the Machines summit in London, Sharma described herself jokingly as “a token millennial” who had been brought into Sage to shake things up.
Sharma expanded on that view in a recent interview. “Despite the common public perception that algorithms aren’t biased like humans, in reality, they are learning racist and sexist behaviour from existing data and the bias of their creators. AI is even reinforcing human stereotypes,” she told PRI.
According to Sharma, the two key components in developing AIs that reflect social diversity are accountability and transparency. Only by understanding the full end-to-end development processes that any artificial system goes through can we check for inherent bias and keep its designers accountable.
“AI is a fascinating tool to create equality in the world,” she said. “When I’ve worked with people from diverse backgrounds, that’s where we’ve had the most impact. AI needs to be more open, less elite, with people from all kinds of backgrounds: creatives, technologists, and people who understand social policy… getting together to solve real-world problems.”
Meeting these challenges early is critical, and it is good news that Sage – via Sharma’s personal determination – is taking positive steps to push the message to the UK’s young people.
Additional reporting: Chris Middleton, Malek Murison.
Ray Lucchesi of RayOnStorage Blog comments:
At the Google I/O conference this week, they revealed (see Google supercharges machine learning tasks …) that they had been designing and operating their own processor chips in order to optimize machine learning. They called the new chip a Tensor Processing Unit (TPU). According to Google, the TPU delivers an order of magnitude better power efficiency for machine learning than what's achievable with off-the-shelf GPUs/CPUs. TensorFlow is Google's open-sourced machine learning software.
When it comes to machine learning, hardware is still a necessity to do some of the really complicated things fast.
Read more at: TPU and hardware vs. software innovation (round 3) | <urn:uuid:1a3035e3-b162-4e90-b8f0-8592c6605c37> | CC-MAIN-2022-40 | https://gestaltit.com/all/tom/tpu-hardware-vs-software-innovation-round-3/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00702.warc.gz | en | 0.917978 | 151 | 2.671875 | 3 |
This feature first appeared in the Spring 2021 issue of Certification Magazine. Click here to get your own print or digital copy.
The Greek philosopher Plutarch described the process of educating young people "not as a vessel to be filled, but a fire to be kindled." The essence of these wise words is that educating a young person involves much more than pouring bits of knowledge into their minds.
Rather, it should be about helping them gather materials for a fire: encouraging them to ask questions, take risks, and begin thinking for themselves, with the goal that they ignite a learning fire to light their path through life.
The validity of Plutarch's observation is borne out by the thousands of teachers who help students ignite their own "fires of the mind" every day. One such teacher is Eli Cochran of the Delaware Area Career Center in Delaware, Ohio. Based on what is happening in his classroom, Cochran is determined to start as many five-alarm blazes as possible.
A self-professed "geek" who wears his geekdom with pride, Cochran brings an infectious passion and enthusiasm for electronics, robotics, and computers to his young charges. "As long as I can remember," he explained, "I've always been into computers and technology."
As a young teen, Cochran's innate interest in all things tech-related resulted in a parental grounding for disassembling the family computer "just to see what was inside of it." Acknowledging the budding tech wizard's interest in technology, his mother purchased a VHS tape that showed how to build a computer — enthralled, Cochran finished it in one sitting.
A growing appetite for technology (combined with a teen boy's dislike of restrictions on his fun) later enticed Cochran to "hack" the parental time controls on his video games by messing with their BIOS clocks — again without his mother's knowledge. "I told her about what I had done when I was moving out a few years later and she wasn't too happy,'' he said.
As a junior in high school, Cochran happened to observe an engineering class while the students and instructor were discussing robots. Excited by the concept, on his own he designed and constructed a combat robot in a mere two weeks.
Teenage boys, naturally, do not build combat robots purely for love of learning. They do learn electronics and programming, but any young lad's overarching desire is to see his creation fight other robots. Cochran entered his mini-terminator in the National Robotics Challenge (NRC) and quickly ran up against a few tougher and meaner robots. "I came in pretty close to last, but I loved the experience," he said.
As a college freshman, Cochran realized his school did not have a robotics team. He decided to rectify this shortcoming by making himself a team of one. Again, entirely on his own, he built a new-and-improved combat robot — and won first place in his division.
A roundabout route to teaching
The robotic experience made a lasting impression on Cochran — his involvement in robotics continues to this day, as he serves as the NRC's Director of Contest Judging. Participating in NRC events and watching the people in charge, particularly NRC director Tad Douce, sparked Cochran's interest in teaching. "Seeing how Tad Douce connected with students is one of the reasons I became a teacher," he said.
Cochran didn't make that leap right away, however: After high school, he leveraged his tech skills for a position with a small business that repaired ATMs. Before long, he was wearing many different hats on the job and growing in responsibility. "The owner found out that not only could I fix ATMs, but also computers, and I became the IT network guy," he said.
As the company IT guy, Cochran utilized soldering skills he had learned as a Boy Scout to save his employer more than a quarter-million dollars by doing board-level repairs on broken PCBs. To advance his IT career, Cochran moved on to a larger company that provided digital courtroom proceedings and audio-visual integration. Five years in, he was promoted to a regional manager position and began hitting the road to conduct software training events around the country.
Like many a road warrior, however, the rigors of travel eventually led him to consider a job change. "I was burned out from all the travel," explained Cochran. "I would leave on a Sunday night and return a few days later. It was fun while it lasted, and I got to see a lot of the country, but I wanted to stay close to home."
In 2018, Cochran began working as an end-point engineer in the Delaware Area Career Center IT department. Although he liked his job, the thought of being in the classroom was always on his mind. When he heard that the center's network instructor was retiring and that the school wanted to switch the program from networking to cybersecurity, he threw his hat into the ring.
"I thought about the job, a lot," he said. "I had always enjoyed IT and cybersecurity and decided to apply for the instructor position." Cochran made the switch to teaching in the summer of 2019 and has never been happier. "I'm in my second year of teaching and enjoying every day of it," he said.
Unlike most new instructors, Cochran inherited a well-stocked computer lab. "The former instructor of the network lab left behind a lot of equipment. All sorts of things like Raspberry Pis and routers for the students to use."
In addition to an embarrassment of technological riches, Cochran has added a few things of his own, sometimes in an unorthodox manner. Using social media, he asked whether anyone knew of companies that might be getting rid of used equipment. "In December, I found a company that was getting rid of some surplus gear. I asked for anything they had and got three five-year-old servers," he said.
Cochran and his program also receive strong support from DACC's administration. "The support I get is amazing, not just support for the labs and students, but also for myself. Being new to teaching, I've had a lot of unexpected things I've had to deal with," he said.
Nearly all teachers starting out their careers learn the important lesson of humility — they may be the teacher, but they don't know everything. It has been the same for Cochran. "I've learned I have to be humble," he explained. "When I make a mistake, I own up to it and tell the students that we will do things differently from now on."
Keeping students busy
Although Cochran's classes are electives, students do not enroll looking for an easy grade. "Cybersecurity is not an easy program," he said. "There is a lot of work and students are earning the same industry certifications that professionals in the field are getting."
Cochran's expectations for his students are akin to what an employer would like to see in job applicants. "I go at it [teaching] from the industry side and think, 'Coming out of high school what is it that I wish I knew or was taught,' " he said.
DACC's cybersecurity course is a two-year program. Students enter as juniors and continue through their senior year. Year one gives students a healthy introduction into hardware and software, as well as the opportunity to earn certifications for CompTIA A+, TestOut PC Pro and Cisco IT Essentials. "We have to get to the basics before we can really get into cybersecurity," said Cochran.
The pace of learning picks up in senior year as students dive into cybersecurity, earning CompTIA Security+, TestOut Security Pro, and, what Cochran says is "the fun one, the one everyone looks forward to completing," TestOut Ethical Hacker Pro.
Seniors also complete a capstone project on a topic related to IT in general or cybersecurity specifically. "This is a passion project," said Cochran, "something a student really wants to do."
During both years, Cochran soaks his students with IT knowledge and hands-on experience by bringing in guest speakers from the industry, arranging job shadows, and helping students conceive and carry out service projects for the community.
Further establishing his "geek cred," Cochran is an amateur radio operator, call sign KD8RBH. He volunteers his skills to help out the Delaware County Emergency Management Association (EMA) in emergencies. "Shortwave radio fascinates me," he said. "It's glorified walkie talkies, but I really geek out with this stuff because in an emergency, cell phones may not work, but shortwave radio always does."
One community service project the students are currently doing is revamping and programming equipment for the EMA. The project is optional — no grade is given for participating. Cochran sees it as a way for the students to use their soft skills to communicate with others in a real-world setting.
"It's a great experience for them. One of my seniors spec-ed out all the equipment, the kids are thinking out of box and have even come up with some ideas I've never seen in the industry," he said.
While his cybersecurity courses do follow a set curriculum and students are required to meet educational standards, Cochran has accelerated the learning experience by implementing a modified version of an increasingly common practice among tech companies called the "20 Percent Project."
The goal of the 20 Percent Project is to foster creativity and boost productivity among employees by letting them work on personal tech projects for 20 percent of their work time. Most people credit tech behemoth Google with inventing the practice — and many of their services and products, such as Gmail, originated from personal employee projects — but the idea actually began with the manufacturing multinational 3M Corporation back in 1948.
Cybersecurity breeds creativity, competition
Cochran allots approximately five percent of lab time for students to work on projects of personal interest. The practice has proven to be a fruitful inspiration for the students. Working individually and in combination with others, students have developed some interesting innovations utilizing existing tech.
A popular student creation using a Raspberry Pi is a barcode check-in (and check-out) scanner for bathroom breaks. Another impressive item is an Unraid server, a device that saves money and time by enabling the class to run virtual machines and Docker containers.
According to Cochran, the real benefit of his five percent rule is not what happens in class, but the learning that takes place outside of class. "It has been a real motivation for some students to work ahead at home so they can have that time in the lab to collaborate with classmates."
Cochran is liberal in encouraging students to try out new ideas on the standalone computer systems in the classroom. As students experiment, they are not just practicing rote procedures but are developing the ability to think independently and creatively.
"I'm a firm believer in allowing students an outlet to flex their digital muscles, as we like to say in class," he explained. "It's not fair to teach them cybersecurity and not let them see how it actually works."
To stress the importance of proper cybersecurity practices, Cochran lets students build a Pwnagotchi — a spinoff device like the Tamagotchi from the 1990s. Pwnagotchi enables passive sniffing of wireless networks. "It's a fun little toy they can build for $20 or $30 that shows how easy it is for wireless networks to be compromised," he explained. "Of course, it's passive; capturing PCAPs is legal because you're just monitoring RF."
Students also have ample opportunity to run software on the systems. One student used a rubber ducky and wrote a payload that put the computer into a boot loop. "The kids love it," said Cochran. "Anything goes — as long as it doesn't physically damage the device or capture personal information on anyone, it's fair game."
These are cybersecurity classes, however, and Cochran runs a tight ship when it comes to working with the actual production systems. The rules are simple: no jump drives in class (these can be used to inject malware onto a system), and each student must sign a software authorization form prior to using any software outside of the curriculum.
"I remind them regularly that, 'Mr. Cochran is the only one allowed to download software,' " he said.
With leeway to practice their newly learned skills, the students are blossoming under Cochran's direction. Members of his classes enjoy participating in capture the flag (CTF) competitions, and they can hold their own: last year, DACC's team was the first high school team to participate in a nationwide college-level CTF event.
Cochran's kids finished 10th out of 20 teams. This year, the team is competing in an international contest and is currently tied for first place. "I'm very proud of my students. They take responsibility for putting the team together, practicing, and competing. They pretty much run the whole team," he said.
Cochran is also proud of what he calls his "first ace" as one of his young charges recently scored a perfect 1,000 on a Microsoft Office Specialist (MOS) certification exam. "The best part of teaching is my students," he said. "Every day, I'm amazed at the talented group of students I have the opportunity to teach and help grow!"
The COVID-19 shutdown hit Delaware Area Career Center just as hard as every other school, requiring Cochran to quickly modify his teaching methods. "When I signed up to start teaching, I would have never thought of having to switch to full remote teaching at the end of my first year and then switching to a hybrid schedule this year," he said.
Fortunately, his courseware was already fully online via TestOut and Cisco Networking Academy. TestOut's LabSim learning platform proved especially vital for students. "Because students didn't have access to hardware, TestOut simulations allowed my lab to pivot overnight and continue learning remotely," Cochran said.
"Don't misunderstand me: It hasn't been all sunshine and rainbows. But all things considered, students are still learning and getting certifications!"
Cochran also sees a silver lining to the shutdown. "It was definitely a change for students," he explained. "Their whole school career had been in person and now they had to change on a dime. But they are learning how to manage their time and balance everything they have to do.
"They have to plan their projects ahead of time and communicate clearly with others. Those are skills that will pay off in the future and employers will be glad to have them because they have learned it the hard way."
Oftentimes, simple decisions made for convenience result in greater and more positive benefits than anticipated. Three years ago, Eli Cochran left a high-paying job to "be closer to home." While he lives just six miles from school, his influence and personality will eventually spread much farther than he ever imagined, as students use the knowledge and skills he teaches to light their way in successful careers and in life. | <urn:uuid:9fa4a5c7-292f-445a-b6d7-ea3744230087> | CC-MAIN-2022-40 | https://www.certmag.com/articles/ohio-tech-teacher-followed-passion-cybersecurity-classroom | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00702.warc.gz | en | 0.982085 | 3,159 | 2.609375 | 3 |
How the IT and Health industries are much more connected than you may think
Think IT is just for the Tech world? Think again. The fourth industrial revolution is here and it’s opening up a whole world of possibilities all around us, especially when it comes to healthcare.
Devices are connecting faster than ever, with convenience and efficiency at the top of everyone's medical to-do list. Never has there been such a need for sensors and devices that can collect data, offer advice, or take action on a variety of medical needs.
Healthcare industries are staying on top of these trends to see what role the Internet of Things (IoT) will play in health services and just how it can help, from remote care options to data collection to identifying symptoms in real time.
What’s so unique about IT and the health industries?
Put your hands up if you’re wearing a Fitbit, Apple Watch or some other version of step counting, heart monitoring, calorie counting consumer wearable. We’re imagining a sea of digitally enhanced hands shooting up.
Aside from the fact that you’re keen to look after your own personal health, it means that you are already embracing the use of IT in the health arena. Now imagine the possibilities of this on a much greater level.
In 2017, PwC released a report entitled ‘What doctor?’. This report surveyed a whole host of people, over 12,000 across 12 different countries to be exact, to look at their attitudes towards AI and robotics in the health industries. A key finding showed that “The proliferation of consumer wearables and other medical devices combined with AI is also being applied to oversee early-stage heart disease, enabling doctors and other caregivers to better monitor and detect potentially life-threatening episodes at earlier, more treatable stages.”
The report also highlighted the changing attitudes towards IT and health industries with a key finding showing that “there is a growing enthusiasm among consumers to engage in new ways with new technology for their health and wellness needs, and when connected to the internet, ordinary medical devices can collect invaluable additional data.”
From the ability to provide remote care, quicker and more accurate diagnosis, and further insight into symptoms and trends, IT is embedded in every aspect of the health industry. This offers patients much more control over their individual treatment plans and general lives. Wondering how you get involved in such pioneering work? Let’s take a look at the roles…
What kind of digitally advanced healthcare jobs can you get involved in?
These days, nobody has time for laborious and time consuming paper based systems, especially doctors with back to back appointments. So it’s out with the old and in with the digital new. Starting with…
VA Healthcare Content Creator
Voice-based virtual assistants have already set up home in most of our personal and professional lives but in the healthcare industry, such soothing tones like Alexa, Cortana and Siri look set to become the go-to support service for a variety of medical needs. From booking appointments and creating schedules to looking after the needs of elderly patients in care homes, the possibilities are endless.
Currently, those who sit behind content creation for such modern-day VA needs are developers. But as these assistants continue to evolve, the developers will also need to, well, develop, passing the medical baton to specific healthcare specialists like you. Look at it this way: a patient tells Doctor Alexa they're struggling with their mental health, but there's no guarantee any additional at-risk checks will be carried out. This means it will be essential for accurate and relevant content to be penned.
3D Printing Specialist
From the humble eye wash cup to prosthetic limbs and synthetic bladders, 3D printing has gone from strength to strength since the 80s. Equipment is becoming more readily adopted by the healthcare industry, and all sorts of live cells and “organoids” are being created for live tissue transfers. And what about the polypill? A type of multi-layered pill with the ability to hold more than one type of drug for certain patients who require different types of medication all at the same time.
With medical advancements such as these, this means a whole new breed of printing technicians with prototyping and 3D software design skills will need to step up to the printing plate. By getting involved in such an evolving role, you could be responsible for generating tissue and helping burn and accident victims get some new skin.
Medical Documentation Proofreader
Writing up patient notes can take up an awful lot of what could be crucial time spent with patients. This is where advancements in tech come in: AI and voice recognition will see such a time-consuming task automated in the not-too-distant future. But that doesn't mean people like you won't be needed to help. In fact, human proofreaders will be required to check that these robotic hands have created accurate documentation, ensuring only the safest patient outcomes.
Ethical Hacker for Health Data
Hacking happens all over the world, and as it goes, across all sorts of industries. As the healthcare arena ramps up their involvement with AI and the IoT, this means that hackers focused on specific health data will have more data options to choose from.
From connected devices in the home to smart hospitals, individual medical records are valuable to these types of hackers for a variety of reasons, from the usual ones such as identity theft to much darker medical practices such as organ transplants on the black market. This means there's no better time to get ethical hacker certified and fight the good fight against cybercriminals coming for your health records.
The health industry is just one of many exciting industries that IT, coding and cyber security professionals can get their hands on. Are you looking to upskill or expand your tech skills? Get in touch with our expert tech career consultants today for a free, impartial consultation call. | <urn:uuid:e26d7d9a-ba41-485c-bbc7-92bebcfa265c> | CC-MAIN-2022-40 | https://www.learningpeople.com/uk/blog/it/how-the-it-health-industries-are-much-more-connected-than-you-may-realise/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00702.warc.gz | en | 0.950725 | 1,222 | 2.59375 | 3 |
A virus is a potentially damaging computer program capable of reproducing itself and causing great harm to files or other programs, without the permission or knowledge of the user.
Types of viruses :-
1) Boot Sector Virus :- Boot sector viruses infect either the master boot record of the hard disk or the floppy drive. The boot record program responsible for booting the operating system is replaced by the virus. The virus either copies the master boot program to another part of the hard disk or overwrites it. These viruses infect a computer when it boots up or when it accesses the infected floppy disk in the floppy drive; once a system is infected with a boot-sector virus, any non-write-protected disk accessed by this system will become infected.
Examples of boot-sector viruses are Michelangelo and Stoned.
2) File or Program Viruses :- Some files/programs, when executed, load the virus in the memory and perform predefined functions to infect the system. They infect program files with extensions like .EXE, .COM, .BIN, .DRV and .SYS .
Some common file viruses are Sunday, Cascade.
3) Multipartite Viruses :- A multipartite virus is a computer virus that infects multiple different target platforms, and remains recursively infective in each target. It attempts to attack both the boot sector and the executable, or program, files at the same time. When the virus attaches to the boot sector, it will in turn affect the system's files, and when the virus attaches to the files, it will in turn infect the boot sector.
This type of virus can re-infect a system over and over again if all parts of the virus are not eradicated.
Ghostball was the first multipartite virus, discovered by Fridrik Skulason in October 1989.
Other examples are Invader, Flip, etc.
4) Stealth Viruses :- These viruses are stealthy in nature, meaning they use various methods to hide themselves and avoid detection. They sometimes remove themselves from memory temporarily to avoid detection by antivirus software, which makes them somewhat difficult to detect. When an antivirus program tries to detect the virus, the stealth virus feeds the antivirus program a clean image of the file or boot sector.
5) Polymorphic Viruses :- Polymorphic viruses have the ability to mutate, meaning that they change the viral code, known as the signature, each time they spread or infect. Thus an antivirus program which is scanning for specific virus codes is unable to detect their presence.
6) Macro Viruses :- A macro virus is a computer virus that "infects" a Microsoft Word or similar application and causes a sequence of actions to be performed automatically when the application is started or something else triggers it. Macro viruses tend to be surprising but relatively harmless. A macro virus is often spread as an e-mail virus. Well-known examples are the Concept virus and the Melissa worm.
What is 2FA
In the digital era, security is a constant concern. Accounts can be compromised by means of stolen credentials. Two-factor authentication (2FA) utilizes an additional form of identity verification on top of the password to help ensure personal information remains secure. This creates a double barrier of virtual safety nets between the user's information and hackers.
There are many different types of authentication methods that when coupled with a password can provide better security for personal information. Here are some of the most used authentication methods.
CAPTCHA is a security measure referred to as a challenge-response to determine if the user attempting the login is human. In fact, CAPTCHA stands for Completely Automated Public Turing test to tell Computers and Humans Apart. Its main purpose is to prevent spamming software from hijacking websites.
Two-step login is an electronic authentication method that will require you to enable a phone number or email address. Once an attempted login is initiated on the account, it will generate an authentication code sent via email or text for the user to enter into the account sign-in page. Some companies offer a mobile app that the authentication is sent to and the user simply denies or confirms the attempted login. This form of certification is used for many different types of accounts, including college student logins.
Picture-based passwords allow the user to set up a series of images that are later used to sign in.
One-time passwords (OTPs), also referred to as one-time PINs, are dynamic passwords that are only allowed to be used for single login sessions.
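The time-based variant (TOTP, standardized in RFC 6238 and used by most authenticator apps) is small enough to sketch with the Python standard library. This is an illustration of the algorithm, not production authentication code.

```python
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    """RFC 4226 HOTP: HMAC-SHA1 over an 8-byte counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret, at=None, step=30, digits=6):
    """RFC 6238 TOTP: HOTP keyed on the current 30-second time window."""
    now = time.time() if at is None else at
    return hotp(secret, int(now // step), digits)

# RFC 6238's published SHA-1 test vector: at t=59s the 8-digit code is 94287082.
assert totp(b"12345678901234567890", at=59, digits=8) == "94287082"
```

Both the server and the user's app run the same computation; a login succeeds when the codes match for the current time window.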
Access badges, or authorization badges, are cards that can be used to authorize entry. This type of identification authorization typically requires a thin client to serve as a badge reader and is commonly used to access work buildings or secure locations or devices.
Biometric authentication involves a unique biological characteristic to ensure secure login. Some characteristics include fingerprints, face or iris recognition, and voice identification. This type of verification is frequently used by healthcare providers to secure and exchange patient health information and other professions that require similar security.
The Benefits of 2FA
There are many recommendations to consider in order to create strong passwords. Here are some top suggestions:
- The longer the better – use a long string of words
- Use a combination of numbers, upper and lowercase letters, and symbols
- Create distinct passwords for each account
- Use random characters
While these are all great rules of thumb to keep in mind when setting up a new password, how effective are they if the password gets stolen through a phishing campaign? How strong would that nonsensical 25-character-long password be then? These recommendations should still be followed, but it is essential to add an additional layer of security. This makes it far less likely that a hacker can bypass both.
Did you know? Phishing scams are the leading cause of ransomware infection, according to 54% of managed service providers in a 2021 survey.
Multi-factor Authentication (MFA)
While 2FA allows the use of a second method, multi-factor authentication (MFA) supports two or more. Adding more can strengthen the security of the account.
Did you know? Users are still setting up weak passwords like “654321” and “password.” In fact, 753,305 LinkedIn users have “123456” set as their password. The next on their list of weakest passwords is “linkedin.”
How to Enable 2FA
Enabling 2FA can differ from one account to another because it depends on the second authentication strategy being implemented. The first step is to create a username and password for the account. Then, the second authentication type needs to be established. This can require the user to use a mobile device to authenticate via email, text, facial recognition, or other similar means. If the second type is a badge, it will require admin configuration. The setup for each type of personal verification can vary.
2FA: Secure Your Information
Two-factor authentication includes two methods of identification. One includes what you know, which is the password you initially use when setting up the account. The other includes what you have. That can be a cell phone, a badge, or anything else that is physically in your possession. It can also be “what you are.” These types of authentication include something related to the biological characteristics that are unique to you. Considering our digital climate, 2FA is an extra layer that is vital for protecting your personal information.
If you found this useful, why not share it? If there’s a topic you’d like to know more about, reach out and let me know. I’ll do my best to bring you the content you’re looking for!
Here are some more interesting reads: | <urn:uuid:b55dbb7b-520d-40e0-8e3f-47f3961a3248> | CC-MAIN-2022-40 | https://social.dnsmadeeasy.com/blog/resource/why-enable-2fa/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00102.warc.gz | en | 0.927273 | 997 | 3.453125 | 3 |
Wireless networks can leak a treasure trove of information. In this tutorial we will use Snoopy to find wireless access points, as well as the access points a device is probing for; this can help us determine what name to give our malicious SSID for an evil twin network. When a device probes for wireless access points, it will by default connect to the one with the most power, allowing an attacker to create a fake access point and alter its wireless Tx power to establish a connection between the fake access point and the device. This means that all traffic sent from the device can then be intercepted and read by the attacker.
Linux Operating System
Snoopy v2.0 – modular digital terrestrial tracking framework
Wireless Interface with monitor mode capability.
A server is only required if you want to send data to a local or remote machine. (Optional)
In this tutorial we will be working with a tool called Snoopy.
What is Snoopy? Snoopy is a distributed sensor, data collection, interception, analysis, and visualization framework. It is written in a modular format, allowing for the collection of arbitrary data from various sources via Python plug-ins. Each Snoopy instance can run multiple plug-ins simultaneously. A plug-in collects data, which is queried by the main Snoopy process and is written to a local database. Snoopy can sync data between clients (drones) and a server, and clients (drones) can also pull replicas of data from a server. Each Snoopy instance can run plug-ins appropriate for its position in the greater picture. Here's a diagram to depict one possible setup.
You can check out the snoopy project at the following url (Credits Sensepost). https://github.com/sensepost/snoopy-ng
First we need to install Snoopy. We found it quite a challenge to install due to changes in the Linux distribution: we got multiple errors during the install process, so we put together a little install script with the necessary fixes to help you along the way.
Open up a new terminal and paste the following commands.
apt-get update && apt-get upgrade
apt-get install python-libpcap
apt-get install gcc make autoconf git python-pip python-dev build-essential libffi-dev libssl-dev libjpeg-dev libxml2-dev libxslt1-dev python-dev tcpdump libpcap-dev vim postgresql libpq-dev
pip install --upgrade pip
pip install --upgrade virtualenv
git clone https://github.com/sensepost/snoopy-ng.git
virtualenv ./snoopy-ng/venv
source ./snoopy-ng/venv/bin/activate
cd ~/snoopy-ng/
sed -i 's/.*from gps import.*/from gps3 import gps3/' ./plugins/gpsd.py
sed -i 's/from libmproxy/from mitmproxy/' ./includes/mitm.py
pip install BeautifulSoup Pillow cryptography epeg-cffi gps3 httplib2 mitmproxy netifaces netlib psutil pyOpenSSL pyasn1 pyinotify python-dateutil requests scapy sqlalchemy psycopg2
cd ~/snoopy-ng/ && git clone https://github.com/JPaulMora/Pyrit.git
apt-get remove python-scapy
sudo pip install ./setup/scapy-latest-snoopy_patch.tar.gz
cd Pyrit/
python setup.py clean
python setup.py build
python setup.py install
systemctl stop postgresql.service
systemctl disable postgresql.service
systemctl stop exim4.service
systemctl disable exim4.service
git clone http://www.tablix.org/~avian/git/publicsuffix.git
cd publicsuffix
python setup.py install
Our script will remove a Python file called mitm.py from the installation directory, because of an error in the mitm.py add-on during installation. We now need to edit mitmproxy.py: open up mitmproxy.py in nano and comment out the line "from includes.mitm import *".
Once Snoopy has installed successfully, open up a command terminal and set up a new drone user and a new server. (This step is optional; we will not be using any of the drone modules in this tutorial, only the Wifi modules.)
Data can be synchronized to a remote machine by supplying the --server (-s) option. The remote machine should be running the server plugin (--plugin server). A key should be generated for a drone name beforehand. The example below illustrates this.
[email protected]:~# snoopy_auth --create myDrone01 --verbose
[+] Creating new Snoopy server sync account
[+] Key for 'myDrone01' is 'GWWVF'
[+] Use this value in client mode to sync data to a remote server.
[email protected]:~# snoopy --plugin server
[+] Running webserver on '0.0.0.0:9001'
[+] Plugin server caught data for 2 tables.
[email protected]:~# snoopy --plugin example:x=1 --drone myDrone --key GWWVF --server http://<server_ip>:9001/ --verbose
[+] Starting Snoopy with plugins: example
[+] Plugin example created new random number: 21
[+] Snoopy successfully sunc 2 elements over 2 tables.
Once the Snoopy client and server are running on the drone, run:
# snoopy -v -m wifi:wlan1=True -s http://
followed by the add-on modules you will be using.
The data being collected will then be transferred to the server. Your server must have GUI compatibility in order to run Maltego; in this tutorial, however, we will be running Snoopy locally and viewing the data tables locally.
Lab Set Up:
Alfa Networks AWUS036H USB Wireless Interface.
Snoopy is capable of monitoring and analyzing lots of different types of technology, such as wireless, Bluetooth, and 3G. For this tutorial we will be using it to find out what wireless access points wireless clients are looking for.
In this guide we will be using Snoopy for the sole purpose of monitoring Wifi clients in order to reveal the wireless access points each device is probing for.
Why do we need to know what wireless access points a wireless client is probing for?
With mobile data being costly, users will often connect to an open wireless network, and they very rarely forget wireless access points from their devices. This makes it very easy for an attacker to intercept traffic by mimicking a fake wireless access point, known as an evil twin access point.
The Access point will have the same SSID as the Wireless Access point the user has previously connected to.
The next time the client probes for the fake SSID, all the traffic will be sent and received through the attacker's fake network.
First we need to put our wireless interface into monitor mode.
ifconfig wlan1 down
iwconfig wlan1 mode monitor
ifconfig wlan1 up
Now that our wireless interface is in monitor mode, we can start sniffing Wifi traffic in Snoopy. cd into your installation directory and use the following command to start Snoopy. This command tells Snoopy that we want to use the Wifi module and specifies our card that is in monitor mode; change wlan1 to your wireless interface's name.
python snoopy.py -vv -m wifi:wlan1=True
Snoopy will gather the data into a database (.db) file within the installation directory. To open and view the information in our database we will use SQLite Manager.
Now that we have the SQLite Manager Firefox add-on installed, open it from the toolbar.
Locate the Snoopy database and open it up; you can find the Snoopy database in the installation directory for snoopy-ng.
In the screenshot below is our database saved by Snoopy; under the table "wifi_client_ssids" you will be able to view the wireless access points that the client device is probing for.
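If you prefer a shell to a GUI, the same table can be read with Python's built-in sqlite3 module. The column names below are assumptions about Snoopy's schema; confirm yours with `.schema wifi_client_ssids` in the sqlite3 command-line tool.

```python
import sqlite3

def probed_ssids(db_path):
    """Return (client MAC, probed SSID) pairs from the wifi_client_ssids table.

    The column names `mac` and `ssid` are assumptions; adjust them to match
    the schema of your Snoopy version.
    """
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute(
            "SELECT mac, ssid FROM wifi_client_ssids ORDER BY mac, ssid"
        ).fetchall()
    finally:
        conn.close()
```

A one-liner such as `for mac, ssid in probed_ssids("snoopy.db"): print(mac, "probes for", ssid)` then lists every client and the networks it is looking for.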
It should not come as a surprise that quite a few computer users do not pay enough attention to a proper cybersecurity policy despite being aware of various threats, such as malware and viruses.
This negligence is not a good sign. Even if you have the basics down, it does not mean that you can counter some of the more aggressive threats.
In case you feel like you have been putting off this issue for too long and would like to make your computer more secure, this article should come in quite handy.
Proper Password Policy
Let’s start with passwords. From email accounts to various online shops, we need passwords to access our profiles. However, when there are so many different accounts, it feels bothersome to come up with new and complicated passwords and memorize them, right?
Well, if the data for one of your accounts gets leaked and you have been using the same password for everything, there is no need to spell out what can happen.
It might be bothersome to use different passwords, but you should still do it. Also, remember that you can utilize a password manager and keep login details for your profiles in a safe location and access them with a master password when you need to.
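If you would rather generate distinct passwords yourself, Python's secrets module is designed for exactly this kind of security-sensitive randomness. The character policy below (at least one of each class) is a reasonable default, not a universal rule.

```python
import secrets
import string

SYMBOLS = "!@#$%^&*"

def make_password(length=16):
    """Generate a random password containing lower, upper, digit, and symbol."""
    alphabet = string.ascii_letters + string.digits + SYMBOLS
    while True:  # redraw until every character class is present
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw) and any(c in SYMBOLS for c in pw)):
            return pw
```

Store the result in your password manager rather than reusing it across accounts.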
Reliable Antivirus Software
Great antivirus software is one of the cornerstones of a solid cybersecurity strategy. Not only that, but you can also use the tool as a means to improve the computer’s performance by cleaning potentially corrupted files.
If you were to look through tips on improving a computer's performance, one of the items these lists usually include is an antivirus tool, with an emphasis on keeping antivirus software running in the background at all times.
Ad Blocker Extensions for Internet Browser
Even if online ads are not as effective anymore as far as website monetization methods go, some sites still run aggressive ads and try to make money off of them. In some instances, these ads could redirect users to dangerous landing pages.
It is better to avoid such advertisements in the first place, and an ad blocker for your internet browser is a sound solution. An extension like that is quite useful and should be used more often, especially if you spend a lot of time surfing the net.
No Oversharing on Communication Channels
In case you like to use social media or communication platforms like Discord to chat with strangers, be wary of what you share. Personal details should be kept to yourself even if you establish a strong relationship with online strangers.
It is better to be safe than sorry, and if a person seems reliable and trustworthy online, it does not mean that it is the actual case.
Identification of Shady URLs
If you open an email and find a message saying how you won something and need to visit a website to claim the prize, do not fall for this.
Such messages are not as common these days because email services are smarter and identify shady emails and redirect them to the spam folder or block them. However, there is still a chance that you might encounter an email or a message elsewhere that has a shady URL. As a rule of thumb, you should not click on it.
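A common trick in such links is burying a trusted name inside an attacker-controlled domain. The sketch below shows why the exact hostname, not a substring, is what matters; the allow-listed domain here is made up.

```python
from urllib.parse import urlparse

TRUSTED_HOSTS = {"example-bank.com", "www.example-bank.com"}  # hypothetical allow-list

def looks_trusted(url):
    """True only if the URL's exact hostname is on the allow-list.

    A substring check is not enough: "example-bank.com.evil.io" contains the
    trusted name yet is a completely different host.
    """
    host = (urlparse(url).hostname or "").lower()
    return host in TRUSTED_HOSTS
```

For example, `looks_trusted("http://example-bank.com.evil.io/claim-prize")` is False even though the trusted name appears in the URL.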
Updated Operating System
The latest operating system version means new features, performance improvements, and security upgrades. OS developers tend to react to the latest threats and push hotfixes if they believe that they can protect users from potential malware and viruses.
Some updates might be quite large and take a while to download and install, but delaying them is not a sound strategy.
Data backups are not a direct countermeasure to cybersecurity threats, but they are still necessary to feel safer. One of the main reasons for losing important data is encountering malware and other threats that delete it.
You can utilize cloud storage or an external hard drive and create a file copy that you can access later. It works as a safety net and lets you rest easier, knowing that in case something happens to files on your computer, you will have a backup copy.
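For local backups to an external drive, even a short script copying files with a timestamp suffix provides the safety net described above. The naming scheme is just one convention.

```python
import shutil
import time
from pathlib import Path

def backup(src, dest_dir):
    """Copy src into dest_dir as e.g. notes.txt.20240101-120000.bak; returns the copy's path."""
    src = Path(src)
    dest_dir = Path(dest_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    target = dest_dir / ("%s.%s.bak" % (src.name, stamp))
    shutil.copy2(src, target)  # copy2 preserves timestamps and metadata
    return target
```

Run it on a schedule (cron, Task Scheduler) and old copies accumulate as dated files you can restore from.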
Utilization of Virtual Private Networks
A virtual private network is another example of how you can make yourself more secure online. In addition to encrypting data and preventing third parties from accessing your information, VPNs also give access to geo-restricted content because you can change your IP address with it.
Decent VPN services cost a few dollars a month, meaning that you should not have trouble getting one for yourself. | <urn:uuid:1cdede07-41c2-4e5e-a978-b222f3b50db7> | CC-MAIN-2022-40 | https://hackingvision.com/2021/12/07/improving-computers-security-8-valuable-ideas/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00102.warc.gz | en | 0.950403 | 905 | 2.953125 | 3 |
The Expedition 62 crew performed a set of different biology research experiments at the International Space Station and supported preparations for the facility's next crew members.
Astronauts Andrew Morgan, Jessica Meir and Oleg Skripochka explored the areas of genetics, biological manufacturing and ergonomic conditions in experiments aboard ISS, Mark Garcia, a social media consultant supporting NASA, said in a blog posted Tuesday.
Morgan tested how microgravity affects the genetic expression of mice in space. His research may help humans determine how to survive longer space exploration missions.
Meir trialed the use of a bioprinter that NASA intends to use for in-space food and medicine production. The bioprinter is under testing to produce human organs. Lastly, Skripochka supported a study on ISS' ergonomic factors.
The astronauts also read characters on a computer-based vision chart as doctors on Earth supervised a vision test for the three.
Morgan, Meir and Skripochka also prepared life support equipment and computers for a launch in April where astronaut Chris Cassidy and cosmonaut Ivan Vagner will travel to ISS. | <urn:uuid:c61c6cff-aedd-4a17-a7d6-0f0d6fa58ee1> | CC-MAIN-2022-40 | https://executivegov.com/2020/03/iss-astronauts-conduct-experiments-prepare-for-next-crew/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00102.warc.gz | en | 0.859452 | 227 | 3.328125 | 3 |
All Aboard for COP26
COP26, otherwise known as the UN Climate Change Conference of the Parties, starts next week. It will be the largest climate change conference to date, and it is being described as a last chance to get governments worldwide to reach an agreement on each country's contribution to reducing carbon emissions.
The first of a series of climate change tipping points is expected to be reached by 2030, which, according to the Yale School of the Environment, could 'fundamentally disrupt the global climate system'.
On 4 November 2016, building on the 1992 UN Framework Convention on Climate Change, signatories from UN member states agreed in the Paris Agreement to "undertake ambitious efforts to combat climate change and adapt to its effects". Part of that agreement was for each country to put forward nationally determined contributions (NDCs): voluntary targets for reducing emissions and for investment in renewable energy infrastructure, to be increased over time. The regular Conference of the Parties was intended to keep signatories on track to achieve their NDCs and reduce the global carbon footprint.
COP26 was originally scheduled for 2020 but was delayed due to Covid-19. The conference itself lasts two weeks, but the period in which the changes it agrees must be implemented is only just over eight years.
Industrial Sized Contributions
In a previous article IgniteSAP pointed out that although governments are under pressure to meet these targets, most of the work of doing so will fall to private citizens and to changes in the way we run commerce and industry. This is not just an ethical question of how the actions of companies in the developed world affect communities in other regions and in later years: it is a necessary material change in the way we live if we are to survive as a species.
As a result of negotiations at COP26, many countries will introduce laws and regulations that ensure governments can demonstrate they are meeting new climate and environmental targets. To comply with new rules governing how businesses operate, corporations will be legally obliged to reduce their overall carbon footprint and to ensure they are not harming the natural environment through wasted energy, wasted material, and pollution.
End-to-End Sustainability Transparency
IgniteSAP has suggested that ERP systems like SAP provide the means to measure, analyse and adapt business processes to these new conditions. The Industry 4.0 goal of end-to-end transparency, already applied to economic transactions and to the manufacture and distribution of products, can equally be used to ensure compliance with new legislation on climate change and environmental sustainability.
The whole concept of ERP systems is geared toward efficiency as much as productivity, and that emphasis on efficiency is the means by which companies can reduce their carbon footprint across the organisation, as well as cut waste products and wasted energy.
SAP Products for Sustainability Targeting
For their part, SAP has improved existing software products in this regard and provided specific products calibrated to the same ends. S/4HANA Cloud now has embedded sustainability metrics that allow users to compare products and services, along with material flows and usage.
SAP Supply Chain Management provides an excellent way to track the carbon footprint of the extended business network through the oversight of partner organisations and other suppliers.
SAP Business Ecology Management is a new product that allows small to medium-sized businesses to report with confidence on their carbon footprint. This can be a great way to appeal to new customers who are looking to assess their own carbon footprint through the products and services they use. Changing the carbon accounting behaviour of SMBs is particularly important as they make up the vast majority of the economy.
SAP Plastics Cloud collects data across the plastics supply and recycling chain, and allows users to buy recycled plastics using the Ariba network in order, eventually, to eliminate the use of single-use plastics in business processes.
SAP Concur now has the ability to track carbon footprints associated with business travel, and SAP solutions for renewable energy schemes are helping to increase the efficiency and profitability for renewable energy companies worldwide.
The Green Economy
The green economy, according to SAP’s lead for product engineering Thomas Saueressig, is among the fastest growing sectors and is projected to grow 35% each year.
IgniteSAP has pointed out that not only will IT consultancies be looking to provide services that help implement new, greener ways of operating, but a new class of IT consultant will emerge over the next five years. IT-based corporate sustainability consultants working with ERP systems like SAP will have a central role in ensuring that companies can meet their legal and ethical requirements.
In May this year IgniteSAP reported that SAP and Accenture announced they are expanding their collaboration on helping companies through the process of making these changes, which are exemplified by the UN Sustainable Development Goals. By creating a combination of SAP technology with Accenture’s sustainable services, these partners will co-innovate solutions to facilitate companies in achieving the goals of de-carbonisation of their supply chains and the establishment of business practices that promote a circular economy.
Deloitte now offers D-Carb sustainable supply chain planning which leverages SAP IBP (Integrated Business Planning) software in combination with GHG emissions data. This integrates direct and indirect carbon footprint data from purchased materials and energy, production activities and transportation so companies can accurately assess carbon footprints along the value chain.
PwC has recognised the importance of sustainability issues and how they now relate to the core strategy of many businesses. Along with a service designed to help corporations reach net zero, they offer advice and support on developing sustainable business strategies, including: assessments of a company’s sustainability-related risks and opportunities, senior management and board level workshops and learning programmes, sourcing and supply strategies, and the development of progress measurement systems.
Similar initiatives can be found at many of the top IT consultancies, and most of them now offer services aimed at helping commerce and industry meet their existing sustainability targets. These will be updated as legislation comes into effect.
The necessity for every individual and every business to change their behaviours fundamentally if the world is to avoid irreversible climate change means that every industry will be forced to adapt.
Add to this the IT infrastructure required to facilitate the change in energy provision, public transport, international shipping, smart cities, intelligent agriculture, and the circular economy, and it's easy to understand why the change needs an army of highly qualified, highly skilled and creative professionals to implement it.
Sustainable Development vs “Sustainable Development”
Governments (even progressive ones) are naturally conservative and want to preserve the status quo. They are equally prone to confirmation bias and to perpetuating stability, since that is what has worked before. They need to promote stable growth, and the assumption has been that this requires an ever-increasing input of resources. Perhaps some politicians believe that steady growth is the thing to be sustained when they refer to "sustainable development".
At the same time, most of them are now aware of the need for fundamental change, but they don't want to create chaos in the process of bringing it about. This is the reason for their slowness to act.
New Ways of Creating Growth
Commerce and industry also require stability and steady growth, but at the heart of industry is a need to innovate and find new ways of creating growth: not through an ever-increasing consumption of resources, but with new ideas and new processes. This is fuelled by the one infinite resource: human ingenuity.
Information technology and its creative application will be the means for humanity to achieve sustainable development goals: not only by chasing an ever-receding horizon of efficiency, but by creating new horizons.
Reframing the Debate
At the bottom of this contradiction is the phrase "sustainable development" itself, which can be read as meaning both to sustain and to change, and so seems to refute itself. It originally referred to maintaining the supply of resources in a way that does not ultimately cut off that supply (through cataclysmic climate change), but the meaning has become vague through over-use.
After COP26 we will need to reframe the debate, and put the emphasis on the act of creation. The primary way of doing so is to propagate the seeds of change: the education in, access to, and application of technology and information systems in such a way that humans can provide for themselves in perpetuity.
If you are curious to know how you can work with the best placed companies to help others bring about reduction of carbon footprints and reach sustainability goals then get in touch with IgniteSAP to understand more. | <urn:uuid:9a327737-c9d7-4130-9d1f-f06461022bb4> | CC-MAIN-2022-40 | https://ignitesap.com/how-to-reach-cop26-targets-with-it-infrastructure/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00102.warc.gz | en | 0.956775 | 1,796 | 2.71875 | 3 |
Acquiring user IDs has become much easier for cybercriminals in the globalization era. A variety of methods can be used to steal passwords, including spyware, keyloggers, and phishing attacks, and they can lead to the total loss of essential data held in company or private databases. Most of the methods these cybercriminals use involve malware designed to steal user credentials. Depending on a particular cybercriminal's objectives, different malware techniques are applied to achieve them.
A significant proportion of credential-stealing methods rely on malware. Phishing attacks, for example, deliver it through communication channels such as email, where malware-loaded websites are disguised as genuine ones to trap unsuspecting users. Other types of attack include spyware and keylogging, which have continued to grow in both complexity and frequency.
Signs of a Malware Infected PC
One way to tell whether a computer is infected is to watch for random pop-ups and significantly longer boot times. Symptoms like these are associated with spyware configured to steal essential data from users without their noticing.
Spyware on a user's PC is designed to stay hidden while harvesting information stored in browsers and other sensitive areas, including communication channels such as email. Cyber crooks will attempt to acquire your passwords without you noticing that anything is wrong. Though this may seem like a flawed technique that wouldn't work all the time, the truth is that it works exceptionally well. For instance, 158 million social security numbers were stolen in 2017, and that doesn't include all the other types of records and data stolen from individuals and companies.
Malware Injection Techniques
Process injection is a defense-evasion technique in adversary tradecraft in which custom code is run within the address space of another process, allowing malware to masquerade as a legitimate program. Common injection techniques include the following.
Portable Executable Injection
Portable executable (PE) injection copies malicious code into an accessible running process and commands it to execute as if it were the original, typically using shellcode and CreateRemoteThread. With this strategy the malware does not need to write malicious code to disk; instead, it calls WriteProcessMemory on the host process. Because the injected PE image ends up in another process at an unpredictable base address, the malware must re-compute the addresses of its PE after injection.
Process hollowing is a technique in which malware unmaps, or 'hollows out', the legitimate code from the memory of a target process and overwrites that memory with a malicious executable payload. The malware creates a new process to host the malicious code, starting it in a suspended state until ResumeThread is called to execute it.
This switches the original file contents for the malicious payload. Two APIs used to unmap the memory are ZwUnmapViewOfSection and NtUnmapViewOfSection. Having unmapped the target's memory, the malware allocates new memory for its payload with VirtualAllocEx and then writes the payload into the target process with WriteProcessMemory.
Classic DLL Injection via CreateRemoteThread and LoadLibrary
This is among the most popular methods of injecting malware into other processes. The malware forces the target process to load a malicious dynamic-link library (DLL) by creating a remote thread in that process that calls LoadLibrary with the path to the DLL.
The malware first needs to target a process for injection. This is generally done by searching through running processes with a trio of APIs: CreateToolhelp32Snapshot, which returns a snapshot of the current processes; Process32First, which retrieves the first process in the snapshot; and Process32Next, which iterates through the rest. After locating the target process, the malware obtains a handle to it by calling OpenProcess.
This article covered a number of techniques malware authors use to conceal unauthorized activity inside other processes. Two broad approaches make this possible: injecting shellcode directly into another process, or forcing another process to load a malicious library on the malware's behalf. Cyber thieves are constantly updating their attack procedures to stay one step ahead of IT professionals, which makes locating and eliminating malware threats a full-time job.
As technology continues to change, the number of ways your company can be targeted in a malware attack grows. At Five Nines, we put a major emphasis on educating our clients about what potential attacks could do to their operational systems, while also preparing their network to fight these attacks and keep systems secure as the designated IT services provider. While we do install anti-virus software for our clients, it’s only one tool in our belt, given that additional layers of security are needed now that hackers are more sophisticated. Before we get into why you can’t solely depend on anti-virus to stay secure, let’s define terms that are crucial to understand when we’re talking about anti-virus software and security.
Malware is a broad term that really defines any malicious code or program that gives an attacker explicit control over your system. It may refer to all types of malicious programs including viruses, bugs, bots, spyware, etc. and even ransomware.
Anti-virus - Anti-virus software, also known as anti-malware, is a computer program used to prevent, detect, and remove malware. It’s the most commonly used weapon against malware.
Layered Security -- Layered security, also known as layered defense, describes the practice of combining multiple security controls to protect assets, such as resources and data.
Now that we have some context, let's talk about why anti-virus software can't keep up with the increasing number of malware attacks. While there have been thousands of cyber-attacks, one that really called attention to the limits of anti-virus protection happened in 2013. Over the course of three months, attackers installed 45 pieces of custom malware and stole crucial information from The New York Times. The Times, which uses anti-virus products made by Symantec, "found only one instance in which Symantec identified an attacker's software as malicious and quarantined it." The IT services team simply didn't catch the rest.
To get rid of the hackers, The Times, “blocked the compromised outside computers, removed every back door into its network, changed every employee password and wrapped additional security around its systems.” Ultimately, this is just one example of how hackers can create software that surpasses anti-virus software. They’re now able to design a piece of malware, run it on a computer with that anti-virus product to see if it will be detected, and if it is, then they can modify the code until the anti-virus software no longer detects it. What this means is that unless a traditional anti-virus software has seen a particular threat in the past, it won’t necessarily protect your computer. There are other new products that are able to ward off some of these new threats. For example, Cylance Inc. develops anti-virus programs with Artificial Intelligence to prevent, rather than re-actively detect, viruses and malware, this is also referred to as “Next Generation Protection”. So, what else can you do to stay secure?
- Keep Your Systems and Software Up-To-Date: One of the most common ways hackers launch attacks? Exploiting vulnerabilities in operating systems and software that are out of date. Simply put, when technology reaches its End of Life or End of Support date, patches, bug fixes, and security upgrades automatically stop, putting your technology at risk for an attack. Educating your team about when and how to update software and systems can keep you safe. Our IT services team monitors these End of Life/End of Support dates as well.
- Firewall installation: You will want a business firewall to keep your company data protected. You can implement a firewall in either hardware or software form, or a combination of both. Your IT managed services provider can help you set this up and monitor it for success on an ongoing basis. There are next-generation firewalls as well. Unified threat management (UTM) provides multiple security features and services in a single device or service on the network. UTM includes a number of network protections, including intrusion detection/prevention (IDS/IPS), gateway antivirus (AV), gateway anti-spam, VPN, content filtering, and data loss prevention, just to name a few.
- Encrypting Information: If a hacker can infiltrate your system, encrypting your files can make the information useless if it is stolen. Encryption is the most effective way to achieve data security because it turns your crucial information into code. To read an encrypted file, someone would need access to a secret key or password that enables them to decrypt it. BitLocker, Microsoft’s easy-to-use, proprietary encryption program for Windows can encrypt your entire drive, as well as protect against unauthorized changes to your system such as firmware-level malware.
- Password Management: We’ve talked about this before, and we encourage you to create a password protocol for your company. Changing passwords often and ensuring the passwords are difficult to guess are two ways to protect yourself. You can read more about our password tips here.
- Image-Based Backups: It’s important to be in a position to recover your environment with backups if you encounter a breach. At Five Nines, we use image-based backups to keep your business running. Image-based backups are just what the name states: an image of your entire operating system, rather than individual files on your PC.
The purpose of multi-layered security is to stop cyber attacks on different levels, so they never reach the heart of your system and affect essential information. While it’s crucial to use anti-virus software, it cannot be your only line of defense. | <urn:uuid:e3c99dc7-9b4a-425d-990f-8ba8cfb01c11> | CC-MAIN-2022-40 | https://blog.fivenines.com/you-cant-depend-on-anti-virus-to-stay-secure-heres-why | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00102.warc.gz | en | 0.935372 | 1,200 | 2.734375 | 3 |
Overcoming Inconsistent Definitions of Errors and Unreliable Reporting
The various approaches used in healthcare to define and classify near misses, adverse events, and other patient safety concepts have generally been fragmented. The definition of an error or mistake is inconsistent, and the reliability of reporting is also a concern.
Having access to standardized data would make it easier to file patient safety reports and conduct root cause analyses in a consistent fashion. The Joint Commission on Accreditation of Healthcare Organizations (JCAHO) developed a Patient Safety Event Taxonomy that was tested in this study.
Aggregating data into a standardized taxonomy has been used successfully by epidemiologists to detect nosocomial infections and to establish patterns and trends in patient safety.
You probably hadn’t even thrown away the wrapping paper and upgraded from pajamas to pants before you were knee-deep in your new computer (probably a laptop of one brand or another), figuring out its capabilities, playing with its new programs and so on.
But before you get too far, you need to make sure your favorite new toy is protected. Here’s how.
- Download and Update Antivirus Programs: They aren’t sexy, but antivirus programs are essential to protecting computers and laptops. Attackers are constantly evolving their techniques for exploiting the newest devices, so you need a program that keeps up with them. If your new system doesn’t come with one – it probably does – download one immediately. And keep it updated.
- Always Update Applications and Software: Attackers thrive on loopholes in commonly used programs like Adobe, Flash and Java – not to mention all web browsers and operating systems – so whenever new updates for the programs that run on your computer become available, download them right away. These updates include the latest security patches that will keep those programs from being exploited.
- Backup Sensitive Data: Your computer will inevitably be a repository for sensitive information but you need to limit this. Backup and clear sensitive information from your system on a regular basis. It’s more likely than not that eventually your system will be hacked and you don’t want that information to be stolen or lost if your system is crashed.
- Watch What You Download: The best antivirus systems and firewalls may not save you if you download a malicious program. Don’t download email attachments from anyone you don’t know, and be wary of forwarded attachments that originated with people you don’t know – your friends may have unwittingly passed along malicious code from spammers.
- Practice Smart Browsing: Always use complicated passwords for any site you log into online – not simple words, but passphrases that incorporate non-alphanumerical characters. Failing to do so can open the door for hackers to access your email, social media and online banking accounts. And be wary of public WiFi networks. They are insecure and attackers prey on their vulnerabilities to monitor the activity of users to steal their passwords and other sensitive data. | <urn:uuid:7ec96508-48f8-4f41-a016-23a1c9c00f97> | CC-MAIN-2022-40 | https://www.kaspersky.com/blog/5-ways-to-protect-your-new-computer/931/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00303.warc.gz | en | 0.925273 | 473 | 2.78125 | 3 |
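One way to get passphrases that incorporate non-alphanumeric characters, without inventing them yourself, is to generate them. A minimal sketch using Python's standard `secrets` module; the short word list here is a stand-in for a real diceware-style list of several thousand words:

```python
import secrets
import string

# Stand-in word list; a real passphrase generator would draw from a
# diceware-style list of 7,000+ words for adequate entropy.
WORDS = ["correct", "horse", "battery", "staple", "orbit", "mango",
         "quartz", "violet", "summit", "harbor", "falcon", "ember"]

def passphrase(n_words: int = 4) -> str:
    """Join randomly chosen words and append a non-alphanumeric character."""
    words = [secrets.choice(WORDS) for _ in range(n_words)]
    return "-".join(words) + secrets.choice(string.punctuation)

print(passphrase())  # e.g. "violet-summit-mango-ember!"
```

Because `secrets` uses the operating system's cryptographically secure random source, the result is far harder to guess than anything a person would type from memory.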
If you’re vaguely aware of cryptography, you may know that it has something to do with secret messages. While this is true, the field of cryptography has a wider focus, which can be summed up by the question:
- How can we keep our information and communications secure from attackers?
A big part of cryptography involves finding out ways that we can keep our messages secret from adversaries that may be eavesdropping on us. This involves finding mechanisms that can grant us confidentiality. Much of this is accomplished through encryption, which involves encoding information with algorithms so that attackers are unable to read it.
But cryptography is about more than just encryption for keeping our data confidential. If we return to our initial question, we want to keep our information and communications secure from attackers. This can’t be accomplished by encryption alone. Consider the following scenario:
You have a top-secret message you need to send to your friend. You spend months reading up on encryption and all of the state-of-the-art practices so that you can build your own encrypted channel between you and your friend. You’ve checked and double-checked it, and everything is perfect, so you send your friend the top-secret message. Unfortunately, it’s not actually your friend on the other end. Instead, an attacker received your top-secret message, and all of your plans are ruined.
Would you consider the above situation secure? Of course not. Despite using all of the correct encryption protocols, your data ended up right in the hands of an adversary. Sure, your encryption did a good job of keeping other parties out of the channel, but it forgot something incredibly important—to authenticate that the party on the other side of the channel is really who they say they are.
Authentication plays a major role in keeping our communications secure. It doesn’t matter how good your encryption is at keeping third-parties from eavesdropping if you don’t authenticate your communications partner properly. Without authentication, you could be sending data straight to an enemy, just like in our example. In cryptography, authentication is accomplished through certificate systems and mechanisms like digital signatures and public-key encryption.
Other critical aspects of security can include integrity and non-repudiation. Integrity processes allow recipients to verify whether information has been tampered with since it was sent, while non-repudiation removes the sender’s ability to deny that they were responsible for sending something.
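Authentication and integrity can be sketched in a few lines using Python's standard `hmac` module. Note the simplification: HMAC relies on a shared secret rather than the public-key digital signatures mentioned above, but the idea is the same, since a valid tag proves the message came from a key-holder and was not altered. The key and messages below are illustrative.

```python
import hashlib
import hmac

SECRET_KEY = b"shared-secret-key"  # hypothetical key known to both parties

def sign(message: bytes) -> str:
    """Compute an HMAC-SHA256 tag that authenticates the message."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Constant-time check that the tag matches the message."""
    return hmac.compare_digest(sign(message), tag)

message = b"approve the budget"
tag = sign(message)

print(verify(message, tag))             # True: authentic, unmodified
print(verify(b"approve a raise", tag))  # False: tampered message fails
```

The `compare_digest` call matters: comparing tags with `==` can leak timing information that helps an attacker forge a tag byte by byte.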
The mathematical concepts, protocols and other mechanisms that can grant us confidentiality, authenticity, integrity and non-repudiation are all aspects of cryptography. Some of the most common elements of cryptography include:
Hashing transforms a message into an unreadable string, not to hide the message but to verify its contents. This is most commonly used in the transmission of software or large files, where the publisher offers the program and its hash for download. A user downloads the software, runs the downloaded file through the same hashing algorithm, and compares the resulting hash to the one provided by the publisher. If they match, the download is complete and uncorrupted.
In essence, it proves that the file received by the user is an exact copy of the file provided by the publisher. Even the smallest change to the downloaded file, whether from corruption or intentional tampering, changes the resulting hash drastically. Two common hashing algorithms are MD5 (now considered broken for security purposes) and the SHA family.
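The download-verification workflow above can be sketched with Python's standard `hashlib` module. The temporary file stands in for a downloaded installer, and the "published" hash stands in for the value a publisher would list next to the download link:

```python
import hashlib
import tempfile

def sha256_of_file(path: str) -> str:
    """Hash a file in chunks so large downloads fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Simulate a downloaded file and the hash the publisher advertises.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"installer bytes")
    download_path = f.name

published_hash = hashlib.sha256(b"installer bytes").hexdigest()
print(sha256_of_file(download_path) == published_hash)  # True: intact copy
```

If even one byte of the file were different, the two hex strings would differ completely, which is exactly the avalanche behaviour that makes hashes useful for integrity checks.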
Symmetric cryptography uses a single key to encrypt a message and also to then decrypt it after it has been delivered. The trick here is to find a secure way of delivering your crypto key to the recipient for decrypting your message to them. Of course, if you already have a secure way to deliver the key, why not use it for the message as well? Because encryption and decryption with a symmetric key is quicker than with asymmetric key pairs.
It is more commonly used to encrypt hard drives using a single key and a password created by the user. The same key and password combination are then used to decrypt data on the hard drive when needed.
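The single-shared-key idea can be illustrated with a deliberately weak XOR cipher built from the standard library alone. This is a teaching toy, not a real cipher: a repeating-key XOR is trivially breakable, and production systems use vetted algorithms such as AES. The message is illustrative.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with the (repeating) key.

    Illustration only. Applying it twice with the same key restores
    the original data, which is the defining symmetric property.
    """
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = secrets.token_bytes(16)  # the single shared secret

plaintext = b"meet me at noon"
ciphertext = xor_cipher(plaintext, key)  # encrypt...
recovered = xor_cipher(ciphertext, key)  # ...and decrypt with the same key

print(recovered == plaintext)  # True
```

The hard part the toy skips is exactly the one the paragraph above mentions: both sides must already share `key`, and delivering it securely is the central problem that asymmetric cryptography solves.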
Asymmetric cryptography uses two separate keys. The public key is used to encrypt messages and a private key is used to then decrypt them. The magic part is that the public key cannot be used to decrypt an encrypted message. Only the private key can be used for that. Neat, huh?
This is most commonly used in transmitting information via email using SSL, TLS or PGP, in remotely connecting to a server using RSA or SSH, and even in digitally signing PDF files. Whenever you see a URL that starts with “https://”, you are looking at an example of asymmetric cryptography in action.
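The public/private key relationship can be shown with a toy RSA implementation using deliberately tiny primes. This sketches the mathematics only: real RSA uses primes hundreds of digits long, padding schemes, and a vetted library. The numbers here are the classic textbook example.

```python
# Toy RSA with tiny primes (textbook example; insecure by design).
p, q = 61, 53
n = p * q                 # 3233: the modulus, part of both keys
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent: modular inverse (Python 3.8+)

def encrypt(m: int) -> int:
    """Anyone who knows the public key (e, n) can encrypt."""
    return pow(m, e, n)

def decrypt(c: int) -> int:
    """Only the holder of the private exponent d can decrypt."""
    return pow(c, d, n)

message = 42
c = encrypt(message)
print(decrypt(c))  # 42
```

The "magic" described above falls out of the number theory: knowing `e` and `n` is not enough to reverse `encrypt`, because computing `d` requires factoring `n` into `p` and `q`, which is infeasible at real key sizes.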
An extreme example of how all three can be used goes something like this: your company’s accounting officer needs to get budget approval from the CEO. She uses a symmetric key to encrypt the message to the CEO. She then runs a hash on the encrypted message and includes the hash result in the second layer of the overall message, along with the symmetric key. She then encrypts this second layer (the encrypted message, the hash result and the symmetric key) using the CEO’s asymmetric public key, and sends the message to the CEO. Upon receipt, the CEO’s asymmetric private key is used to decrypt the outermost layer of the message. He then runs the encrypted message through the same hashing process to get a hash result, which is compared to the now-decrypted hash result in the message. If they match, showing that the message has not been altered, the symmetric key can be used to decrypt the original message.
Of course, that would all happen automatically, behind the scenes, by the email programs and the email server. Neither party would actually see any of this sort of thing happening on their computer screen.
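The accounting-officer flow can be sketched by combining toy versions of all three primitives: a symmetric cipher for the bulk message, a hash for integrity, and an asymmetric key pair to protect the symmetric key. Everything here is illustrative and insecure (a repeating-key XOR and textbook RSA with tiny primes); real email software uses vetted implementations of AES, SHA-2 and RSA. The sketch also simplifies the layering in that only the symmetric key is wrapped asymmetrically, which is how hybrid encryption typically works in practice.

```python
import hashlib

# Toy primitives (see earlier sections; insecure by design).
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

p, q, e = 61, 53, 17
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)  # CEO's private exponent

# --- Sender (accounting officer) ---
sym_key = b"\x07"                                # toy one-byte symmetric key
inner = xor_cipher(b"budget: 1.2M", sym_key)     # 1. encrypt the message
digest = hashlib.sha256(inner).hexdigest()       # 2. hash the ciphertext
wrapped_key = pow(sym_key[0], e, n)              # 3. wrap key with public key

# --- Receiver (CEO) ---
key_byte = bytes([pow(wrapped_key, d, n)])           # unwrap with private key
assert hashlib.sha256(inner).hexdigest() == digest   # integrity check passes
print(xor_cipher(inner, key_byte))                   # b'budget: 1.2M'
```

Each layer does one job: the hash detects tampering, the symmetric cipher protects the bulk data cheaply, and the asymmetric pair solves the key-delivery problem.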
Obviously, there is a lot of math involved in converting a message, like an email, into an encrypted signal that can be sent over the internet. To fully understand cryptography requires quite a bit of research. Below are some of the most often referenced websites, books and papers on the subject of cryptography. Some of these resources have been in active use for close to 20 years and they are still relevant.
If you are new to cryptography, one of the best ways you can learn is by taking Dan Boneh’s free Cryptography I class on Coursera. Dan Boneh is a professor at the Computer Science Department of Stanford University. His research specializes in the applications of cryptography to computer security.
Cryptography I delves into cryptographic systems and how they can be used in the real world. It shows you how cryptography can solve various problems, such as how two parties can establish a secure communication channel, even if they are being monitored by attackers. The course covers numerous protocols, as well as more advanced concepts like zero-knowledge proofs. It’s a great introduction for those with limited prior knowledge.
Another good resource is David Wong’s videos, which often explain more technical concepts in detail. While his work can be a useful resource, it is neither comprehensive nor the best place to build a foundation.
Newsgroups are community-generated feeds hosted on Usenet. To view them, you’ll need a newsreader app. Read more about how to get set up with Usenet here and see our roundup of the best Usenet providers here.
- sci.crypt – Possibly the first newsgroup dedicated to cryptography. Take it with a grain of salt: anything that has been around as long as sci.crypt is bound to attract cranks, hoaxes and trolls.
- sci.crypt.research – This newsgroup is moderated and not as prone to hoaxes as some others
- sci.crypt.random-numbers – This newsgroup was created to discuss the generation of cryptographically secure random numbers
- talk.politics.crypto – This newsgroup was created to get all the political discussions off of sci.crypt
- alt.security.pgp – And this newsgroup was created to discuss PGP way back in 1992
And a bonus Google group:
- Google Groups sci.crypt – A Google group trying to emulate the original sci.crypt newsgroup
Websites and organizations
- A good explanation of how RSA works
- PGP – A site dedicated to Pretty Good Privacy
- Cryptography World has their “Cryptography made easier” site available
- International Association of Cryptologic Research
- The CrypTool Portal
People of Note
- Bruce Schneier – schneierblog on Twitter
- John Gilmore
- Matt Blaze – @mattblaze on Twitter & flickr/mattblaze
- David Chaum
- Ronald L. Rivest
- Arnold G. Reinhold
- Marcus Ranum
FAQs about cryptography
How does cryptography work?
Cryptography is the practice of secret communication: mathematical algorithms (ciphers) encrypt data so that only someone holding the right key can decrypt it. It’s employed in various applications, including email, file sharing, and secure communications.
What are the benefits of cryptography?
Cryptography has several advantages, including data security and authentication. Data security is one of the key advantages of cryptography. It secures information against unlawful access while also allowing only authorized users to access it. Authentication is another advantage of cryptography. For example, it may be used to verify a sender’s or receiver’s identity. A final benefit of using its algorithms is non-repudiation. This implies that a message’s transmitter cannot deny sending it, and its recipient cannot deny receiving it.
What are the challenges of cryptography?
Cryptography can be vulnerable to attacks, its algorithms can be broken, and keys can be stolen. Cryptography is also computationally intensive, making it difficult to use in some applications. Additionally, it can be subject to government regulations.
- Crypto-Gram by Bruce Schneier
- Cryptobytes – The full archive of RSA Labs newsletter on cryptography – last published in Winter 2007 – Vol 8 No. 1
- Applied Cryptography: Protocols, Algorithms and Source Code in C – Bruce Schneier, 20th Anniversary Edition
- Handbook of Applied Cryptography is now available as a downloadable PDF file
- Building in Big Brother: The Cryptographic Policy Debate is available through several university libraries
- Cryptography Engineering: Design Principles and Practical Applications – Niels Ferguson, Bruce Schneier, Tadayoshi Kohno
- Practical Cryptography – Niels Ferguson, Bruce Schneier
- Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World – Bruce Schneier
- Chaffing and Winnowing: Confidentiality without Encryption by Ron Rivest – CryptoBytes (RSA Laboratories), volume 4, number 1 (summer 1998), 12–17. (1998)
- Computer Generated Random Numbers by David W. Deley
- The Crypto Anarchist Manifesto by Tim C. May
- Diceware for Passphrase Generation and Other Cryptographic Applications by Arnold G. Reinhold
- The Dining Cryptographers Problem: Unconditional Sender and Recipient Untraceability by David Chaum, J. Cryptology (1988)
- The Magic Words are Squeamish Ossifrage by D. Atkins, M. Graff, A. Lenstra, and P. Leyland
- The Mathematical Guts of RSA Encryption by Francis Litterio
- One-Time Pad FAQ by Marcus Ranum
- P=?NP Doesn’t Affect Cryptography by Arnold G. Reinhold
- Survey on PGP Passphrase Usage by Arnold G. Reinhold
- TEMPEST in a Teapot by Grady Ward (1993)
- Untraceable Electronic Mail, Return Addresses, and Digital Pseudonyms by David Chaum, Communications of the ACM
- Why Are One-Time Pads Perfectly Secure? by Fran Litterio
- Why Cryptography is Harder Than It Looks by Bruce Schneier
Malware is malicious software that acts counter to the interests of the computer that hosts it. Viruses, worms, and Trojans are all types of malware. The standard way that a device gets infected by malware is through an executable program copied onto the victim’s machine. That file is often disguised as a different file format such as a PDF or JPEG, or it is hidden inside a carrying file like a compressed zip file.
Is fileless malware a virus?
When malware is executed, the instructions of the file are loaded into memory. It is that active process that causes the damage. Fileless malware is the same as a traditional virus in that it is a process that operates in memory. The difference between this new type of malware and traditional viruses is that the code for fileless malware is not stored in a file nor installed on the victim’s machine. Fileless malware loads directly into memory as system commands and runs immediately. Often it will continue to run until the host device is powered down — putting a computer into standby mode won’t kill off the malware process. The vast majority of fileless malware targets Windows computers.
Fileless malware origins
No one knows who invented the concept of fileless malware, but attacks of this type came to prominence around 2017. Precursors of the technique, such as Frodo, Number of the Beast, and The Dark Avenger, date back decades earlier.
The meaning of “fileless”
Although the fileless model is new, it builds on standard malware techniques that have been in operation since the 1990s. It would be difficult to load a program into memory without any file at all. The actual operation of the installation is managed by a separate program, which does involve a file. The strategy of bundling together specialized malware programs is common practice in Trojan architecture.
However, the fileless system doesn’t involve a second program downloaded by a malware installer. Instead, resident, trusted programs execute in a manner that makes them perform maliciously. The computer service that made fileless malware possible is Microsoft’s PowerShell.
Fileless malware attacks have become more prevalent since 2017 thanks to the creation of attack kits that integrate calls to PowerShell. These kits are virus creation environments. Fileless frameworks include Empire and PowerSploit. These kits create the intrusion phase of an attack. Attack frameworks that create intrusive and damaging PowerShell scripts for delivery later include Metasploit and CobaltStrike.
This off-the-shelf method of creating fileless attacks has produced an explosion in this type of malware. The Ponemon Institute’s “The State of Endpoint Security Risk Report” estimates that 77 percent of detected attacks in 2017 were fileless. The authors of the report believe fileless attacks are ten times more successful than file-based attacks.
A more recent report showed an 888% rise in detection of fileless malware in 2020.
PowerShell is a script interpreter. A script is a collection of commands that could each be typed in and executed individually; writing a series of operating system commands into a plain text file turns them into a script. A script won’t do anything if you just click on it, because in its basic state it is just a plain text file. Instead, you pass the script as a parameter to an interpreter program, which reads through the file and executes each command contained within.
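The interpreter idea is easy to see in miniature. The sketch below is a hypothetical interpreter for a two-command "language"; the command names (`set`, `echo`) are invented purely for illustration:

```python
def run_script(text: str, env: dict) -> list:
    """Interpret a plain-text script: one command per line, space-separated args."""
    commands = {
        # Hypothetical two-command language, invented for illustration.
        "set":  lambda env, name, value: env.__setitem__(name, value),
        "echo": lambda env, *words: " ".join(env.get(w, w) for w in words),
    }
    output = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):   # skip blank lines and comments
            continue
        cmd, *args = line.split()
        result = commands[cmd](env, *args)     # dispatch to the command handler
        if result is not None:
            output.append(result)
    return output
```

PowerShell works on the same principle, just with a vastly larger command vocabulary and full access to the operating system.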
The PowerShell program runs in system memory that cannot be queried or searched, which makes malicious PowerShell activities almost impossible to detect. PowerShell has full access to the core operating system of a Windows computer, so it can wreak total havoc by undermining security features such as user accounts. It can even manipulate the definition of user accounts and password protection.
Should I delete PowerShell?
PowerShell is very useful and it is widely used by a lot of standard programs, particularly Microsoft utilities. The program can be run remotely, and it can also execute commands on other computers to which the host computer is connected via a network or the internet. The remote execution functions of PowerShell are actually managed by another native Windows tool, called WinRM. PowerShell routines are not blocked by firewalls or antivirus programs because they are ubiquitous in modern IT environments and blocking them would shut down a large segment of network activity.
Another native program that can be used for fileless attacks is Windows Management Instrumentation (WMI), which can be used to carry commands to PowerShell. One useful function that WMI can perform for a fileless hacker is waking up WinRM if it has been turned off on a machine. WMI also gives the hacker access to the registry of a computer.
We’ve covered PowerShell in more detail in our PowerShell cheat sheet article.
Executing fileless malware
The installing program that starts up fileless malware does not have to be resident on a computer for long, and it is not expected to endure on the host device. A common delivery method for launching fileless malware is through web pages.
Another common carrier of fileless malware is the Flash video playing system. Macros in Microsoft Office tools can also be used by hackers to deliver fileless malware.
The fact that the damage done by fileless malware is performed by instructions sent to native programs, rather than from malicious code, gives this type of intrusion the name of “non-malware attack.”
Fileless malware persistence
When you turn off a computer, all active processes shut down. Processes that are services of the operating system are started up again when you turn the computer on. You have to wait a while between clicking the power button on your computer and the point at which the Desktop is loaded and you can start opening applications. Even once the Desktop is ready, you will notice that your computer is still very busy as it continues to start up background processes.
Fileless malware writes its script into the Registry of Windows. This is a function of the operating system that launches programs either at system startup or on a schedule. The code that runs the fileless malware is actually a script. A script is a plain text list of commands, rather than a compiled executable file.
Short lists of instructions don’t have to be stored in a file. However, longer and more complicated scripts do get stored for relaunch at system startup. In these cases, although the program is classified as “fileless,” there is actually a file involved.
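Defenders can scan those persistence points. The sketch below applies simple heuristics to a set of autorun command lines; the patterns and sample entries are illustrative assumptions, not a complete detection rule set:

```python
import re

# Illustrative heuristics only; real detection rule sets are far larger.
SUSPICIOUS_PATTERNS = [re.compile(p, re.IGNORECASE) for p in (
    r"powershell.*-enc",               # encoded PowerShell command
    r"powershell.*downloadstring",     # download-and-run in memory
    r"mshta\s+https?://",              # remote HTA execution
    r"wmic.*process\s+call\s+create",  # WMI-driven process launch
)]

def flag_autoruns(entries: dict) -> list:
    """Return the names of autorun values whose command line looks suspicious."""
    return [name for name, cmd in entries.items()
            if any(p.search(cmd) for p in SUSPICIOUS_PATTERNS)]
```

On a real Windows machine the entries would come from the Run/RunOnce registry keys; here they are supplied as a plain dictionary so the heuristic itself is easy to see.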
As the fileless malware launches programs native to the operating system instead of its own program, the operations of those instructions do not show up in Task Manager under the name of the malware program. Instead, anyone examining active processes will see the name of the interface that managed the launch of the script, such as PowerShell, and then see the common processes operating under their own name.
Fileless malware attack examples
The fileless malware phenomenon is relatively recent, and so there have not yet been many examples of them. However, here are some of the attacks that have taken place since 2017.
Although Frodo wasn’t a truly a fileless virus, it is included in this list as it is considered to be one of the forerunners of the genre. The aspect of the virus that marks it out as a precursor to fileless malware is that it loaded into the boot sector of a computer.
Frodo was discovered in October 1989. It was relatively harmless in that it was a prank rather than a destructive piece of code. Its aim was to flash the message “Frodo Lives” on the screen of the infected computer. However, the program was so badly written that it did have the potential to accidentally damage its host.
Number of the Beast is another precursor to the fileless virus methodology, first discovered in September 1989. This virus required a file as a delivery system, but then operated in memory.
Its aim was to infect executable files whenever they were run on an infected computer. It would even infect files whenever they were copied. The creator of this virus became known as Dark Avenger.
Moscow-based Kaspersky Labs has risen to be one of the foremost anti-malware producers in the world. In 2015 they discovered one of the first fileless malware infections attacking their own IT system.
This virus was named Duqu 2.0 by Kaspersky researchers, who calculated that it had gone undetected on the network for at least six months. The characteristics of Duqu 2.0 identified it as a variation on Stuxnet, which is widely attributed to the US and Israeli intelligence services.
Operation Cobalt Kitty was an advanced persistent threat that endured for at least a year before it was detected in May 2017.
The malware was executed on the system of an Asian corporation and the PowerShell scripts involved were able to communicate with an external command and control server. This enabled it to initiate a series of attacks, which included the Cobalt Strike Beacon virus.
The Meterpreter program found its way into the memory of computers at more than 100 banks spread across 40 countries in February 2017. Meterpreter is an onboard element of the malware kit, called Metasploit.
This attack aimed to control ATMs and facilitate a cash robbery. The discovery of the malware thwarted the attempted heist in all but one country. Hackers in Russia managed to control the ATMs of eight bank branches and withdraw $800,000. Kaspersky Labs discovered this stealth attack when it was called in by an infected bank to investigate the intrusion.
The WannaCry ransomware attack in May 2017 attracted a lot of media coverage. UIWIX was rolled out shortly afterwards, but with less success. UIWIX uses the same exploit as WannaCry, which is called Eternal Blue. However, it has a fileless execution system. The publicity that surrounded the rollout of WannaCry prompted Microsoft to come up with a patch that resolved a weakness in the XP version of Windows.
The urgency with which owners of XP machines closed off this exploit by installing the patch meant that there were few vulnerable computers left by the time UIWIX launched. UIWIX is an example of how Eternal Blue is increasingly being used in fileless attacks. This is a ransom attack with demands written in English. The program will not run in Russia, Kazakhstan, or Belarus.
The Eternal Blue exploit is also used by hacker cryptocurrency miners and directed at the large servers of corporations. This fileless malware mines cryptocurrency on the host computer.
Thanks to the stealth of the no-malware model, many of these infections persist for months. WannaMine was first spotted running in memory without any trace of a file-based program in mid-2017, and new infections are ongoing. Its purpose is to mine the Monero cryptocurrency.
Misfox was first identified by the Microsoft Incident Response team in April 2016. Misfox uses the classic fileless techniques of executing commands through PowerShell and achieving persistence through Registry infection.
The creators of Misfox had the misfortune of getting their malware spotted by a key Microsoft security team. This led to Microsoft bundling a solution to this malware in Windows Defender.
Fileless malware trends
Although there was a marked increase in the number of fileless malware attacks at the beginning of 2017, the success of this technique seems to be waning. The 2017 surge was due to the discovery and definition of the technique and its formulation into hacker toolkits, which made the methodology easy to implement.
The marked lack of success of UIWIX in comparison to its immediate predecessor WannaCry shows the most effective hacker techniques are new ones. The speed at which the cybersecurity industry now rushes to close off exploits considerably shortens the attack life of new viruses and infection methods.
Although fileless malware is harder to detect than traditional file-based infections, the specific targeting of Windows services has laid a challenge to Microsoft, and they met that challenge full on. The response to no-malware attacks has come from Microsoft itself rather than from the anti-malware industry.
The system processes used by fileless malware are so essential to Microsoft’s operating systems and Windows software developers that they cannot be turned off without losing most of a business’s IT infrastructure software capabilities.
Therefore, Microsoft upgraded its Windows Defender package to detect irregular activity from PowerShell and WMI. Fine tuning Windows Defender enabled Microsoft to double the number of incidents that the firewall blocked in Q2 2017 compared to the previous quarter. That success against Misfox would also trap other viruses that exploit PowerShell.
Windows defense developments
Microsoft has created a whole range of commercial support products under the umbrella name Microsoft 365. This suite bundles in enhanced security measures alongside an updated version of Microsoft Office.
As fileless malware almost exclusively attacks Windows, this is a Microsoft problem and the company’s response should rapidly reduce the threat of no-malware attacks.
How to stop fileless malware
The main defense against any type of malware is to keep your software up to date. As Microsoft has been very active in taking steps to block the exploitation of PowerShell and WMI, installing any updates from Microsoft should be a priority.
Other measures that you can take to block fileless malware intrusion are:
1. Set email policies
Caution employees about clicking on links in emails. Luring someone within the network to a website containing the malicious code is the easiest way to get fileless malware onto a Windows computer.
Also, caution workers against opening attachments in emails not sent from trusted sources. Two types of documents are particularly hazardous:
- PDFs
- Microsoft Office documents
Although the PDF format is widely used in business, it is also a great medium for spreading malware of all types. Fileless malware is aided by the ability to load PDFs in browsers immediately rather than downloading them. These PDFs can be delivered as email attachments or as “white papers” available from websites.
If a PDF is downloaded before it is opened, your firewall has a chance to spot any malicious code within. Also, one of the problems of tracing a fileless attack is working out where it originated from. If the file is downloaded, it is available for analysis later. That is preferable to the disappearing malicious code that leaves no trace of its existence once the PDF viewing tab in the browser is closed.
Microsoft Office macros
The use of Microsoft Office macros for spreading malware is well known in the cybersecurity community. Microsoft productivity software now ships with macro capabilities turned off. Employees who are sent email attachments in Word or Excel format may be tempted to turn macros on when prompted.
A macro attack is not really a fileless malware conduit. However, the online availability of Microsoft products, including viewers for productivity documents for those who do not possess the Office suite, creates opportunities to run macros in the browser and launch calls to PowerShell or WMI.
Explain to employees that they should never enable Microsoft Office macros in any document. Although macros are great for automating tasks, they have become such a potential hazard that it is better to find other methods to generate forms and documents. It is better that your workers do not become familiar with the possibility of running macros in documents.
2. Disable Flash
Just like its Adobe stablemate, PDF, the web video delivery system Flash has become known as a malware-friendly feature in a web page. Most websites have already stripped out Flash and replaced it with HTML5 for video inclusion. Therefore, it is no great hardship to block this system from appearing in your browsers.
Microsoft Edge will not accept Flash code, so if that is your browser of choice, you don’t need to do anything. Both Firefox and Chrome give you the option to block Flash in their settings screens and Internet Explorer will not load Flash if you disable ActiveX.
Read more here on why Flash is not secure.
3. Ban personal use of company resources
Introduce a policy that bans employees from using company computers to access personal email or leisure sites. This may be unpopular, but you can create a separate wifi network that allows employees to use personal devices so that they can access the internet during break periods.
Such services are popular with workers and they remove the temptation of staff to break the rules and try to access the internet through the main company network. The “own use” network can be kept completely separate from the main company system to reduce the risk of cross-infection. Setting up a separate wifi system and paying for another internet plan costs a lot less than the losses you could face from a malware attack.
4. Protect your browsers
Create an office policy that only allows one browser type on company desktops. Then install browser protection on each computer. Microsoft produces Windows Defender Application Guard. This is part of Office 365 and it was written with specific procedures to protect against fileless malware attacks. This protection only covers Internet Explorer and Microsoft Edge.
If you decide to use Firefox or Chrome as your office browser, then install the Windscribe extension on each browser. This utility will examine each web page that the browser loads and block any page that contains malicious code. The extension is mainly a VPN. However, as long as the extension is installed, the browser protection remains active even if the VPN is turned off. The extension also has an optional feature that will strip out social media “Like” buttons. These buttons are a security risk and they can also harbor code, which potentially could be a vector to load in PowerShell instructions.
5. Strengthen user authentication
Microsoft suggests that the spread of fileless malware is caused not by the existence of PowerShell but by weak user authentication on company networks and servers. A non-malware attack spreads throughout the network if it is installed on a computer whose user has high-level access rights to a large number of resources within the IT system. The Cobalt Kitty malware gained entry to the system by targeting the directors and infrastructure administrators of the victimized company. These superusers tend to have access to all parts of the network.
Employ password vaults and create different authentication points for each piece of equipment or service on the network. You should also consider implementing two-factor authentication, in which a code-generating device or app provides a second layer of proof that malware cannot emulate.
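One widely used form of that second factor is the time-based one-time password (TOTP) defined in RFC 6238, built on the HOTP algorithm of RFC 4226. A minimal sketch fits in a few lines of Python using only the standard library:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA-1 over a counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at=None, step: int = 30) -> str:
    """RFC 6238 TOTP: HOTP with the counter derived from the clock."""
    counter = int((time.time() if at is None else at) // step)
    return hotp(secret, counter)
```

An authenticator app and the server compute the same code from a shared secret, so the code is verified without ever crossing the network in a reusable form.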
Fileless malware issues
Fileless malware is a relatively recent method for hacker intrusion into a network. It is worrying because it is new and there are few people in the cybersecurity community with the knowledge and experience to deal with this problem. In fact, the mainstream anti-malware industry seems to have continued to overlook this methodology.
The entire operating model of antimalware programs is based on checking files, so the move to a fileless system was a very clever move by hackers that blindsided traditional antivirus procedures. While antivirus producers have struggled to reimagine their strategies, Microsoft showed no hesitation in throwing resources at the problem and has contributed to a reduction in successful attacks on Windows computers.
Right now, fileless malware only attacks the Windows operating system. However, hackers seem to have a standard pipeline of writing attacks for Windows first and then adapting those strategies for Mac OS and Linux later on. Android attacks usually come next and iOS gets hit last.
The ability to deliver code through browsers should make fileless malware very easy to adapt to other operating systems. The only issue hackers face when porting this attack strategy is the need to discover an equivalent service on Linux and Macs that will match the usefulness of PowerShell. The fact that they haven’t found that facility yet is probably because they are so busy exploiting PowerShell right now.
The number one defense against fileless malware is to keep all of your software up to date. The second defense is to shut down browser security weakness, and the third action is to educate your employees about the websites they visit and the email attachments that they may encounter.
The good news is that all of the main defense strategies against fileless malware cost nothing to implement. So, there should be nothing stopping you from implementing a policy to tighten up your company’s defenses against fileless malware.
Do you remember the game Jenga? It consists of a tower of wooden blocks stacked on top of each other. The goal is to take out as many blocks as possible without toppling the tower. Whoever takes out the block that makes the tower fall is the loser. Now, if you don’t want to play with people, you could play with a robot at MIT. Members of MIT’s MCube lab have developed a robot that is learning how to play the game, and it’s actually doing quite well.
Artificial Intelligence and Machine Learning are two emerging fields that have played a crucial role since the Covid pandemic hit. Both technologies are being used to study the new virus, test potential medical treatments, analyse the impact on public health, and so on.
Today we take a more detailed look at two important technologies that are changing the way we perceive things and revolutionizing entire industries, not just IT. We examine artificial intelligence and machine learning: the differences between them, the purposes for which they are deployed, how they work, and more.
About Artificial Intelligence
Artificial intelligence is the branch of computer science that mimics human intelligence; as the name suggests, it is human-made thinking power. AI lets us create intelligent systems that simulate human intelligence. These systems are not pre-programmed for specific tasks; instead, they use algorithms such as reinforcement learning and deep neural networks.
IBM’s Deep Blue, which beat chess grandmaster Garry Kasparov in 1997, and Google DeepMind’s AlphaGo, which beat Lee Sedol at Go in 2016, are both examples of narrow AI: systems skilled at one specific task. Based on its capabilities, AI can be classified into the following types: Artificial Narrow Intelligence (ANI), or weak AI; Artificial General Intelligence (AGI), or general AI; and Artificial Super Intelligence (ASI), or strong AI. Currently we are working with weak and general AI. The future of AI is strong AI, which would be more intelligent than humans.
Applications of Artificial Intelligence
- Map services
- Recommendation engines such as Amazon, Spotify, and Netflix
- Robotics such as Drones, Sophia the robot
- Health care industry such as medical diagnosis, prognosis, precision surgery
- Autonomous systems such as autopilot, self-driving cars
- Research – drug discovery
- Financials – Stock market predictions
About Machine Learning
Machine learning is a subset of artificial intelligence: in very simple terms, machines take in data and learn from it for themselves. It is among the most in-demand and promising tools in the AI domain. ML systems can apply knowledge gained from training on large data sets to tasks such as speech recognition, object recognition, and facial recognition. ML allows systems to learn, recognize patterns, and make predictions instead of following hard-coded instructions for completing tasks.
In simple terms, machine learning can be defined as a subset of artificial intelligence that enables systems to learn from past data or experience without being explicitly programmed with a specific set of instructions. Machine learning requires massive amounts of structured and semi-structured data in order to make predictions from that data. ML can be divided into three types: supervised learning, unsupervised learning, and reinforcement learning. ML is used in many places, such as Google’s search ranking algorithms, email spam filtering, and Facebook’s automatic friend-tagging suggestions.
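A minimal example of supervised learning is fitting a straight line to labelled example pairs by gradient descent. The sketch below uses no libraries; the learning rate and epoch count are arbitrary illustrative choices:

```python
def fit_line(xs, ys, lr=0.05, epochs=2000):
    """Learn y = w*x + b from labelled examples by gradient descent."""
    w = b = 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradient of the mean squared error with respect to w and b.
        grad_w = sum((w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum((w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```

The model is never told the rule that generated the data; it recovers the slope and intercept purely from examples, which is the essence of learning from past data rather than being explicitly programmed.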
Applications of Machine Learning
- Regression (Prediction)
- Classification (predicting one of a limited number of discrete classes)
- Control systems – Drones
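The classification task listed above can be illustrated with an equally small sketch: a k-nearest-neighbours classifier that labels a new point by majority vote among its closest training examples. The data points and labels are invented for illustration:

```python
from collections import Counter

def knn_predict(train, point, k=3):
    """Label `point` by majority vote among its k nearest training examples.

    `train` is a list of ((feature, ...), label) pairs; distances are
    squared Euclidean.
    """
    dist = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b))
    nearest = sorted(train, key=lambda example: dist(example[0], point))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]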
Comparison Table: Artificial Intelligence vs Machine Learning
Below table summarizes the differences between the two terms:
|Definition||Artificial intelligence technology enables machine to simulate human behaviour||It is a subset of AI which let a machine to automatically learn from past data without any pre-coded instructions|
|Origin||Origin around year 1950||Origin around year 1960|
|Purpose||Make smart computer systems to solve complex problems like human beings||ML is used to allow systems to learn from data so that we can get accurate output without manual intervention|
|It focuses on maximizing chances of success||It focuses on accuracy and patterns|
|Objective||Learning, reasoning and self-correction||Learning and self-correction when new data is introduced|
|Components||Artificial intelligence is subset of data science||Machine learning is subset of artificial intelligence and data science|
|Scope||Wide range of scope||Limited scope|
|Applications||Siri, customer support using catboats, expert system, online game playing, intelligent humanoid robot etc.||Online recommendation system, Google search algorithms, Facebook Auto friend suggestion, optical character recognition, web security, imitation learning etc.|
|Data types||Deals with structured, semi structured and unstructured data||Deals with structured and semi-structured data|
|Examples of algorithm||Q Learning, Actor critic methods, REINFORCE etc.||Linear regression, Logistics regression, K means clustering, Decision trees etc.|
Download the comparison table: Artificial Intelligence vs Machine Learning
AI and ML are often confused. AI is the simulation of natural intelligence on par with humans, while ML is an application of AI that gives systems the ability to learn and understand without hard-coded programming instructions. ML systems evolve as they learn.
There are two common types of malicious attack: data breaches and ransomware attacks. You may have heard the two terms used interchangeably, but they are not quite the same. A data breach occurs when a hacker gains access to a system and steals unencrypted data, often financial, medical, or other personal information. A ransomware attack occurs when hackers gain access to a system and hold the data hostage in exchange for a ransom, regardless of whether the data is encrypted or unencrypted: the attacker may leave the data inside the enterprise's systems but encrypt it so the rightful owners cannot access it, or may exfiltrate the data and offer to return it for payment. Both types of attack have been around for a while, but ransomware attacks have recently become more prevalent.
Assure sustainability by protecting Southern Italy’s water resources against natural and human hazards.
Harness IoT analytics by leveraging sensor and camera data to monitor the territory, natural springs, and drainage basins, and to detect incidents for real-time response.
Improved visibility of anomalies around springs and watercourses
Enabled greater governance of hydrogeological instability caused by natural factors
Faster response to incidents with direct connection to emergency services
Increased potential for monitoring additional natural and socioeconomic resources
New insights help boost sustainability and combat climate change.
The Apennine mountain range, which extends longitudinally through Italy, holds the water resources for the country's central and southern regions, the areas that are most susceptible to the consequences of climate change. Increasingly unpredictable and violent thunderstorms are impacting the hydrogeological structure of the soil, while long periods of drought are heightening the risk of potential desertification. Intensive use of the land for agriculture and other purposes has also increased exposure to the risk of natural disasters.
In the Southern Apennine region, the District Basin Authority governs the physical environment and protects its water resources. It is responsible for monitoring the appropriate use of these resources, forecasting the region’s water supply, and preventing natural disasters and human-made hazards, such as illegal abstraction, discharges, and spills.
The Authority embarked on a project to continuously monitor water quality and availability throughout the territory. The aim was to build a network of remote sensors that could surveil the water system and analyze the data using a variety of technologies, including big data analytics and data science modeling.
With Hitachi Vantara, the Authority is developing a system for sampling data from the field through specific multiparametric sensors, including video, thermographic, lidar, and others. Integrated with a GIS system and meteorological forecasts, that data flows into a big data analytics and data science system that allows officials to inspect water resources throughout the territory and generate actionable insights in real time to mitigate risks and protect the environment.
Vera Corbelli, Secretary General, Southern Apennine District Basin Authority, recalls: “The solution we identified from Hitachi is proving successful in meeting our needs and expectations—helping us turn the concept of an innovative monitoring system with long-distance sensors into a reality.”
The solution uses Hitachi Lumada Industrial Data Ops to capture data on local water resources at remote sites throughout the region. This data is then blended with enterprise data, GIS and external data such as weather forecasts. Some of the field sensors are cameras and Lumada Video Insights is able to collect, tag, visualize and analyze video data, which is synced with other field and enterprise data in a unified IoT data architecture. Hitachi Content Platform Anywhere syncs the captured video to a central file server, and Hitachi Content Intelligence enriches it with metadata before storing it appropriately in the Authority’s Hitachi Content Platform object storage system. By matching the sensor alerts to the video data and feeding the results into the Authority’s research applications, real-time data modeling, analysis, and monitoring via dashboards and control rooms are made possible.
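At its core, matching sensor alerts to video data is a timestamp join: for each alert, find the footage that overlaps a window around the event. The sketch below illustrates the idea only; the field names and the time window are assumptions for the example, not details of the Authority's actual system.

```python
from datetime import timedelta

def match_alerts_to_video(alerts, video_segments, window_s=60):
    """Pair each sensor alert with video segments overlapping a time window
    around the alert, so operators can review footage for the event.

    alerts:         list of {"id": ..., "time": datetime}
    video_segments: list of {"id": ..., "start": datetime, "end": datetime}
    """
    matches = []
    for alert in alerts:
        t = alert["time"]
        lo, hi = t - timedelta(seconds=window_s), t + timedelta(seconds=window_s)
        # Two intervals overlap when each starts before the other ends.
        clips = [v for v in video_segments
                 if v["start"] <= hi and v["end"] >= lo]
        matches.append({"alert": alert["id"], "clips": [v["id"] for v in clips]})
    return matches
```

In a production pipeline this join would run inside the analytics platform rather than application code, but the logic is the same: synchronized clocks plus an interval-overlap test.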
The solution augments the Authority’s advanced scientific knowledge in forecasting and modeling natural events and resource impairment. Strategies are developed for assessing and managing the risk exposure of the region’s physical-environmental and socio-economic systems. It also makes it possible to conduct experiments with multi-scalar criteria and methods of analysis to estimate and manage risks relating to human hazards such as illegal activity—with positive outcomes for the economy and society.
As a result of this pioneering work, the Authority is able to better protect the springs and watercourses. Using the new dashboard system, officials can react to alarms in an intelligent and coordinated manner and collaborate directly with emergency services, including law enforcement.
Vera comments: “IoT analytics is playing a central role for monitoring our water resources, including forecasting and prevention of both natural hazards and malicious anthropic activities, such as dumping industrial waste. I believe it will also have a central role for environmental impact assessments of climate change as part of Italy’s National Recovery and Resilience Plan.”
“Our digital transformation will enable these IoT and big data analytics tools to be adopted for environmental and natural territory monitoring in other contexts, with huge benefits not only for the protection of environmental resources, but also for the social and economic protection of the areas where it is adopted,” Vera concludes.
Pentaho Data Integration
Data integration platform to ingest, blend, cleanse and prepare diverse data from any source in any environment without code.
The invention of the automobile forever changed how we travel, and vehicles are now an essential component in every citizen’s life. Unfortunately, driving can be incredibly dangerous, with 1.35 million people dying in car accidents yearly. Luckily, innovations in car safety technology are being created every year. As these systems become more commonplace, we can reduce fatal accidents and ensure that more drivers on the road stay safe.
While many new technologies are creating safer roads for citizens worldwide, let’s look a bit closer at seven features that are especially helpful in saving the lives of motorists:
- Rearview Cameras
- Lane Departure Alerts
- Automatic Emergency Braking Systems
- Adaptive Cruise Control
- Electronic Stability Control
- Automatic Parking Systems
- Air Bag Systems
By placing a camera on the back of your vehicle and transmitting real-time video to an in-car monitor, rearview cameras allow you to see any other vehicles or objects behind you. Displaying a 180-degree view, these cameras are often equipped with their own washing systems, using the same fluid that cleans your rear window and windshield. This keeps your view clear, so you can safely reverse out of parking spots or into thoroughfares.
Lane departure alerts use a combination of sensors and cameras to detect when vehicles or other objects are approaching the sides of your car. This is especially useful if you or another car begins to drift out of their lane. The lane departure alerts will warn you with flashing lights or sounds, allowing you to adjust your vehicle to avoid a crash.
Brakes are a safety feature every driver is familiar with, but their biggest flaw is the ability for human error. There isn’t much time to react in the case of a potential collision, and applying the brakes a bit too late can be catastrophic. That’s the issue that automatic emergency braking systems were created to fix: with AEBs, detection systems automatically identify a potential impact and apply the brakes for you.
Of course, there can be issues with these systems. A recent example is the electric vehicle company Tesla, which recalled 11,704 vehicles over issues with their automatic emergency brakes. That’s why it’s crucial to perform a vehicle recall check before purchasing a used car; that way, you can check whether a car has been sent in for necessary repairs.
Building off of the traditional cruise control that helps your vehicle maintain a certain speed, adaptive cruise control adds a further level of safety by establishing a set following distance. Using a combination of radar, cameras, and laser technology, adaptive cruise control will detect the distance between you and the vehicle in front of you and automatically adjust your speed to keep you at a safe range. If your car starts to get too close, lights and noises will alert you as the car either suggests you brake or, in newer models, applies the brakes on its own.
Using computer-assisted braking on each wheel of your vehicle, electronic stability control can help your car retain traction in challenging road conditions. This will keep your vehicle from spinning and assist the driver during complex maneuvers. Electronic stability control helps lower the chances of flipping by ensuring your tires stay on the road and monitoring your steering wheel input. It activates when the systems detect that you are losing control of your steering.
Automatic parking systems, like parallel parking, can help guide your car during specific maneuvers. By activating the automatic parking system near an available spot, your car will use sensor technology to estimate the size of the area and navigate your vehicle into that spot. This can help avoid bumping into other cars in a tight parking situation and get your vehicle out of traffic and into a parking spot faster.
One of the most prominent car safety features, air bags are inflatable pockets of air that deploy during crashes and impacts. Most cars come with front and side airbags, which can help protect both the driver and the passengers. Air bags can drastically reduce the severity of injuries during a crash, cushioning the head and upper body from the brunt of the force of an accident.
Car safety technology has saved countless lives and made safe driving that much easier for everyone on the road. But, even with all the advancements in safety, you’ll still need to be attentive and observant on the road. Make sure to drive at safe speeds, carefully monitor your vehicle’s blind spots, and stay alert while driving. By combining technology and safe driving practices, you can ensure you get to your destination safe and sound. | <urn:uuid:ec884771-6f25-4ae5-9841-8f956982129c> | CC-MAIN-2022-40 | https://coruzant.com/autotech/7-car-safety-technologies-that-can-save-your-life/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00503.warc.gz | en | 0.944714 | 917 | 2.75 | 3 |
Network discovery is the process by which Holm Security VMP automatically scans an IP range to identify each device and adds each one as a separate asset.
When running a discovery scan, Holm Security VMP uses multiple tests to check whether a host is alive. Because only a small percentage of IP addresses on a network may be active at any given time, depending on firewall configurations and other circumstances, several detection methods are needed to reliably find alive hosts.
Here are some of the tests, with short explanations, that the discovery scan uses to detect alive hosts and gather information:
- Internet Control Message Protocol (ICMP)
Sends an echo request to the target IP address to determine whether the host can be reached.
- Transmission Control Protocol - Synchronize (TCP SYN)
Sends a TCP packet to the host requesting a connection to be established.
- Transmission Control Protocol – Synchronize and Acknowledge (TCP SYN and ACK)
Sends a TCP packet to the host requesting a connection to be established, receives an acknowledgment of an established connection.
- 3-way handshake (only during port scanning)
Sends a TCP packet to the host requesting a connection to be established, receives an acknowledgment of an established connection, establish a connection.
After the host discovery is completed and if any alive hosts are found, it will proceed to scan ports using either TCP SYN or 3-way handshake to gather more information.
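The 3-way handshake test above is what an ordinary `connect()` call performs, so a minimal port probe can be sketched with nothing but the standard library. This is an illustration of the concept, not Holm Security's actual scanner, which crafts packets at a lower level for the SYN-only variants.

```python
import socket

def tcp_connect_probe(host, port, timeout=1.0):
    """Attempt a full TCP 3-way handshake with host:port.

    Returns True if the port accepts the connection (host is alive and the
    port is open), False on refusal or timeout.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A SYN-only probe (half-open scan) avoids completing the handshake, which is faster and less visible to the target application, but it requires raw-socket privileges that a plain `connect()` does not.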
There are multiple options to select from when executing a discovery scan and not only to see if a host is dead or alive. You can customize the scan profile to suit your needs by including different types of test categories such as SSL and TLS, Product detection, Databases etc. | <urn:uuid:1330fa7d-bd8a-40b1-9871-8ea599f029ca> | CC-MAIN-2022-40 | https://support.holmsecurity.com/hc/en-us/articles/213863505-How-does-the-network-discovery-work- | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00503.warc.gz | en | 0.882748 | 369 | 2.609375 | 3 |
Google is envisioning a future in which email and online banking accounts are safer thanks to technology that would add a physical component to security protection, according to a report from Wired.
Members of the Google security team will publish a paper in next month’s IEEE Security & Privacy Magazine examining scenarios in which passwords would be a thing of the past. Instead, users would turn to some sort of hardware device to unlock email, online shopping or banking accounts.
The article details one device, a Yubico cryptographic card, that slides into a USB reader to authenticate and log into a user’s account. Google’s vice president of security, Eric Grosse, and engineer Mayank Upadhyay experimented with the device on a modified version of Chrome, but beyond the proper browser, the system did not require a software download or any complex steps to function.
Eventually, the team hopes the technology could go wireless. Then, instead of the key, the technology could be integrated into a mobile device such as smartphone or even a piece of jewelry — something people already carry with them — to access accounts.
“We’re focused on making authentication more secure, and yet easier to manage,” Google spokesperson Jay Nancarrow told TechNewsWorld. “We believe experiments like these can help make login systems better.”
While passwords have been a cheap and easy way to protect online accounts so far, security breaches are becoming too common and egregious to continue on the same system. The past year saw a number of high-profile hacks in federal, enterprise and consumer accounts, including breaches of Microsoft’s online store in India, Sony’s PlayStation network, retail site Zappos and corporate networking site LinkedIn.
A combination of human error and an increase in the availability of advanced hacking technology is to blame for the rise in cyberattacks, said Chiranjeev Bordoloi, CEO of Top Patch.
“The biggest problem with passwords is a simple problem,” Bordoloi told TechNewsWorld. “The average user uses the same password for everything. So when they go to a mom-and-pop website and buy something and that site gets hacked, a hacker has access to their e-mail, and they can reset the password at any site it chooses. Then, with the cost of GPU computing decreasing dramatically, the technology available to hackers to crack passwords is really state-of-the-art. A hacker can do magic with a gaming laptop that is packed with a password cracking system.”
The question then becomes what to do in order to add online protection to user accounts. The idea of carrying around a piece of hardware or other token isn’t out of the question as an answer to that problem, said Michael Murray, managing partner of MAD Security.
“It’s entirely realistic,” Murray told TechNewsWorld. “Users are quite comfortable carrying around physical tokens for authentication to most of their world — their car and house keys fit that bill rather effectively.”
There are also a few layers of security protection that can be added before consumers make the costly jump to a device-driven system, said Bordoloi. Passwords might become sentence-length passphrases rather than six- or eight-character words, and multi-step authentication, which many banking websites already use, will likely become more popular for email accounts and retail sites, Bordoloi noted.
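One widely deployed form of multi-step authentication is the time-based one-time password (TOTP, RFC 6238), in which a short-lived code is derived from a shared secret and the current time. The article does not describe this mechanism specifically; the sketch below is included only to show how small the core algorithm is.

```python
import hashlib
import hmac
import struct
import time

def totp(secret, timestamp=None, step=30, digits=6):
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant).

    secret: shared key as bytes; timestamp: Unix time (defaults to now).
    """
    if timestamp is None:
        timestamp = int(time.time())
    counter = timestamp // step                      # 30-second time window
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Because both sides compute the code independently from the shared secret, no password travels over the network, which is one reason banks adopted schemes like this well before hardware USB keys became mainstream.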
Then, corporations might become the leaders of adding a hardware component to security, said Bordoloi.
“This would be an expensive system, so corporations might take it on a risk-based approach,” he predicted. “A company might execute this only for their finance employees, for instance. They’re going to figure out what pieces of data are most important to protect, and take extra steps for those.”
Still, that system would come with its own security issues, Kapil Raina, security expert and director at Zscaler, pointed out.
“This is a limited set of protections,” Raina told TechNewsWorld. “After all, where is the key going to be? In the same bag as your laptop or iPad, so protection against theft will be marginal. And if the user keeps the key engaged in the system, the defense against remote attacks may also be limited. The financial community tried this years ago for the consumer without success.”
Google the One to Lead the Charge?
While Google’s ideas might be generating the buzz, it might not be anything revolutionary within the security space, Raina pointed out.
“Fundamentally there are game-changing innovations going around all over the authentication space, far beyond what even Google has imagined,” said Raina.
Still, the company has the resources and customer base to lead the way in further consumer protection in ways a smaller, security-focused company couldn’t, said Murray.
“There are really only a few companies that have the reach and the scope to do it,” Murray noted. “Google is one of those. And I’m sure that the others, Microsoft, for example, will attempt to follow suit shortly.” | <urn:uuid:36ed2d5f-cf55-4e8d-ace1-e27ed9430d4c> | CC-MAIN-2022-40 | https://www.ecommercetimes.com/story/google-outlines-plan-to-make-passwords-passe-77132.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00503.warc.gz | en | 0.936319 | 1,089 | 2.5625 | 3 |
With the discovery of a new property in commonly used plastics, researchers from Princeton University and Hewlett-Packard said they have invented a combination of materials that could lower the cost and boost the density of electronic memory.
The research, detailed in this month’s issue of Nature, involves a previously unrecognized property in a widely used polymer plastic coating. Combined with thin-film silicon transistors, the polymer can store data like a CD but would serve as a conventional electronic memory chip, plugging right into an electronic circuit with no moving parts.
“We’re hybridizing,” said Princeton electrical engineering professor and research group leader Stephen Forrest. “We are making a device that is organic in the plastic polymer and inorganic in the thin-film silicon at the same time.”
Researchers believe the invention could be the basis for a grid of memory circuits so small that a megabit, or 1 million bits of information, could fit on a square millimeter of paper-thin material.
When put together in a block, the plastic device could store more than one gigabyte of information, the equivalent of 1,000 high-quality images, in one cubic centimeter about the size of a fingertip.
The discovery, achieved by HP and Princeton researchers in Forrest’s university laboratory, came during work with a polymer material called Pedot — a clear, conductive plastic used as coating on photographic film and as electrical contact on video displays.
Princeton postdoctoral researcher Steven Moller, who is now with HP, found that Pedot conducts electricity at low voltages but permanently loses its conductivity when exposed to higher electrical currents, making it act like a circuit breaker.
In using Pedot as a storage medium, a device would use a grid of circuits in which every connection contains a Pedot fuse. Applying a high voltage blows a fuse; blown fuses represent the zeros, and unblown fuses represent the ones, that make up computerized data and digital images.
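The fuse grid described above is write-once memory: a bit can be flipped from its default state exactly once and never restored. A toy simulation of that constraint (not the researchers' actual device model) makes the behavior concrete:

```python
class FuseMemory:
    """Write-once memory: every fuse starts intact and reads as 1.

    Blowing a fuse (high voltage) stores a permanent 0; a blown fuse can
    never be restored, which is why this memory is limited to one-time use.
    """

    def __init__(self, size):
        self.bits = [1] * size  # intact fuses read as 1

    def write_zero(self, index):
        self.bits[index] = 0  # high current permanently blows the fuse

    def write_one(self, index):
        # Writing a 1 means leaving the fuse intact; if it is already
        # blown, the bit is unrecoverable.
        if self.bits[index] == 0:
            raise ValueError("fuse already blown; write-once memory cannot be reset")

    def read(self, index):
        return self.bits[index]
```

This one-shot property is why the researchers position the material for archival-style storage rather than as a replacement for rewritable RAM.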
Princeton representative Steven Shultz told TechNewsWorld that researchers believe the technology will decrease the size, increase reliability and speed up reading and writing of memory chips.
Aberdeen Group chief research officer Peter Kastner said the plastic memory is related to other thin-film polymer memory approaches from the likes of Intel and Advanced Micro Devices.
However, Kastner said cost and density advantages still do not address larger issues for electronic memory — heat and leakage — that will be tackled in new DDR2 memory due out next year.
“The biggest problem with today’s high-density transistor memory is heat dissipation,” he said. “If a polymer or plastic can’t solve the electronic density issue and leakage, it will be difficult for it to be adopted.”
While he conceded that it might indeed become an important discovery in the next decade, Kastner stressed that any new invention will be slow to be adopted because of the billions of dollars already invested in existing manufacturing and install base.
One Time, Short Time
However, Princeton’s Forrest said the new memory — limited to one-time use — could be ready for market within as little as five years, given additional work on creating a large-scale manufacturing process and ensuring compatibility with existing hardware.
In response to the plastic memory discovery, many predicted it could mean the end of today’s CDs and DVDs as mainstream storage media.
Kastner, however, pointed out that the next generation of DVD-writing technology will use much less metal while still being able to support dual layers, which bodes well for the longevity of the technology. | <urn:uuid:6f892507-5487-4246-8cd3-d12df4f660a2> | CC-MAIN-2022-40 | https://www.ecommercetimes.com/story/plastic-discovery-means-advanced-memory-32136.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00503.warc.gz | en | 0.934831 | 762 | 3.171875 | 3 |
Globaloria: Invent. Build. Share: Advancing Computing Innovation And Digital Citizenship Skills
Globaloria, the leader in computing innovation, is working towards the creation of a talent pipeline of informed and engaged citizens and digital workers effective for the 21st century. To bridge the widening knowledge gap across Science, Technology, Engineering and Mathematics (STEM) streams in education, Globaloria launched Massive Open Online Courses (MOOCs) specifically for computing education in the K-12 market.
We bring MIT-style courses into schools, no matter what their zip code is. Coding is the new writing, and Globaloria sparks students’ imaginations and learning innovation skills
As an award winning K-12 learning platform with MIT-style courses in STEM, computing, game design and coding, Globaloria empowers educators and school systems by enhancing their STEM teaching opportunities, imparts computer coding fundamentals to students in a novel gaming study plan and transforms classrooms into networked game-design studios that motivate students to deeply comprehend key concepts in the curriculum. Through team-based learning of game design, teacher-led coaching and online networking with experts and peers, students are empowered to drive their own design and learning-by-doing processes and to master coding and computer science. And since they all like to play games, making them is fun and engaging too.
Globaloria’s technology tools and content have shown unparalleled success on a large scale among schools in rural and urban communities of varied socioeconomic status.
Globaloria’s interactive project-based blended-learning leads to better comprehension in students and helps them assimilate more information by having a hands-on approach towards learning. Course completion rates and student and teacher satisfaction are high.
Principals or superintendents who want to address their school’s need for computing innovation can purchase the Globaloria platform for student use. Globaloria can be implemented in various formats—as individual courses or as comprehensive cumulative programs, as part of a regular school day for credit, integrated into an existing core curriculum or elective class, or as a stand-alone course. Globaloria is also implemented within extended day programs, afterschool clubs or summer camps.
The platform provides customizable and scalable online courses, which can be easily integrated into classrooms and accessed by thousands of students at one time, with virtually no changes to the school infrastructure. A broadband internet connection is the only necessity as all of the content is hosted by Globaloria. The whole system comes with affordable pricing plans, includes training and ongoing mentoring for teachers and principals, backed by an elaborate learning management system and an online help center for students.
Globaloria’s goal is to spread computing innovation and coding literacy across one million students by 2015. As a recipient of the 2013 Tech Awards’ Microsoft Education Award and 2013 District Administration Top 100 Pick, Globaloria is poised for a national and global distribution, teaching young people how to design and code educational games and interactive simulations and become the next generation of computationally-fluent, engaged citizens and leaders of the global innovation economy. | <urn:uuid:aa1f340c-08c1-4bb5-a53a-75eb996fe6d5> | CC-MAIN-2022-40 | https://education.cioreview.com/vendor/2013/globaloria | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00503.warc.gz | en | 0.929062 | 622 | 2.734375 | 3 |
During 2013, Google released Chromecast which changed the way we think of streaming video and using media devices. The popularity of Chromecast has continued during 2014 as the device celebrates its one year anniversary during July 2014.
Google Chromecast turns one year old in July, and the product has already become widespread during its first year on the market. If you are unfamiliar with Chromecast, it is a small dongle that plugs into your high-definition television's HDMI port and draws power over USB. The purpose of the Chromecast device is to provide an innovative way to stream video.
Although the popularity of Google Chromecast has grown significantly within the last year, you may be among the many people that still do not understand how it works and why it is different than other video streaming devices on the market. In this article we will provide you with a general overview of what Google Chromecast is, how it works, and where it is headed in the near future.
Chromecast Defined and How It Works
If you are familiar with a flash drive, the USB stick used for portable data storage, Chromecast looks much like one. The dongle is about two inches long and provides a gateway for streaming video through your high-definition television's HDMI port. Unlike a flash drive, Chromecast is not for storage; it is equipped with a small amount of memory and acts as a bridge for streaming video from your smartphone or tablet to your big-screen television.
Chromecast connects to your wireless network and uses apps which are installed on your PC or mobile device to stream video to your television. The apps allow you to forward content to the Chromecast dongle which, in turn, processes the information and searches for the content online. Once Chromecast identifies the content location, it streams the content directly from the location on the Internet to your high definition television.
The advantage of the Chromecast design is to free up your PC and mobile devices by shifting the streaming tasks to Chromecast. This also helps to preserve battery life since video streaming from your mobile device can be processing intensive. It also allows you to use your PC or mobile device as a remote control for Chromecast.
Chromecast vs. Other Streaming Devices
When you use other streaming devices, you are required to stream the content directly from your PC or mobile device. This can take its toll on the processing power and your battery.
With Google Chromecast, you are using your device to send instructions to the Chromecast dongle that is connected to your high definition television. The dongle then takes over by searching the Internet for the content you want to watch and streams it directly to your HDTV instead of from your PC or mobile device.
Although you still have the option to stream content directly from your PC or mobile device, Chromecast helps to free up your device so you can use it for other tasks. Additionally, Google owns YouTube and therefore, they have the app setup to enable you to send multiple videos to the Chromecast device. When one video is complete, the next one on the list will automatically start playing.
In terms of device compatibility with Chromecast, you can use your iPhone or iPad to set up video streaming on your high definition television. Chromecast also works with all Android devices and the Google Chrome browser on a Mac or Windows PC.
The Lowdown on Chromecast Apps
Another great feature of Google Chromecast is the capability to use a wide variety of different apps. This extends the capabilities of Chromecast while giving users a wide variety of entertainment options. Google's ultimate goal is to catch up with other video streaming devices such as Apple TV, Roku, and Boxee, and since the device's release in July of last year, Chromecast has built up a nice variety of apps.
If you browse through the Google Play store, you will find more than one hundred apps you can use with Chromecast. This is quite impressive considering the device is only one year old. Google also plans to release more apps during the remainder of 2014. Some of the apps that are currently available include:
- Netflix: The Netflix app works across multiple devices and allows you to easily pause and play video by controlling Chromecast from the app on your mobile device. Additionally, if you pause the app on one device, you can easily pick up where you left off with the app installed on another device.
- YouTube: Of course, Chromecast would not be complete without access to YouTube. The YouTube app allows you to search for content on your device without interrupting the program you are currently watching on your high definition television. The app includes a tool for creating playlists of your favorite YouTube videos. The videos automatically start playing in the order that you enter them.
- Chrome: Chromecast is designed to work with the Google Chrome web browser. This allows you to choose content from more locations on the web and stream it directly from your device. It also gives Windows users another way to access content, since there is no Chromecast app for Windows phones. The Chrome browser can cast a single tab containing the content you want to watch without showing the other tabs you currently have open.
- Plex Media: Plex is an application that allows you to convert your PC or mobile device into a home media server for storing and accessing all of your content. The application helps you to index all your media content for access from any device that has the Plex application installed.
The Plex Media app provides a connection from your television to the Plex Media server which is running on your home PC. To use Plex, you simply download the Plex Media server program from the Plex website and then point the application to the folders where you store your media content. The application will create a catalog that makes accessing content easier and allows Chromecast to access the content as well.
- Hulu Plus: If you are a fan of Hulu Plus, Google provides a Hulu Plus app for use with Chromecast. Hulu Plus allows you to access classic TV and older television programming, as well as catch up on current episodes of the latest television programs. Although this is a smaller app than the more popular ones such as Netflix, it still provides a nice variety of program selections.
- Pandora: Pandora is an app that provides you with access to a wide genre of music, albums, songs, and specific artists. The Pandora app for Chromecast allows you to convert your home theatre system into a powerful music streaming system. If you already have a decent sound system for watching movies, the Pandora app allows you to use the sound system to enjoy superior quality music and sound.
- AllCast: The AllCast app allows you to stream content directly from your PC or mobile device to your television. Since Chromecast is not designed for this purpose, AllCast makes up for the gap by giving you the option to stream directly from your device to an HDTV.
- Google Play: The Google Play app provides you with access to a plethora of content available from the Google Play store. You can choose the content you want to watch and then send it to Chromecast to stream directly from the Google Play store. There is also a variety of third-party apps in the Google Play store, but you have to spend some time searching to find the quality apps for use with Chromecast.
- Crackle: Crackle is much like Hulu Plus in that it offers a variety of older television programs and movies. For die-hard fans of classic television and old movies, the Crackle app will provide you with a way to stream this type of content from Chromecast.
- Vudu: Vudu offers a variety of newer television and movie titles that you can stream on-demand for just a few dollars per program. The Vudu app allows you to send newer programming to Chromecast for streaming directly from Vudu, as opposed to your PC or mobile device.
- HBO GO: If you watch a lot of programming on HBO from your cable TV provider, the HBO GO app allows you to stream HBO movies and programming using Chromecast, so you can watch the HBO content included in your existing subscription on your television without a set-top box.
How to Get Google Chromecast
Whether or not Chromecast is right for your needs depends upon your viewing habits. But if you tend to view a lot of programming on Netflix, YouTube, and other types of streaming media, you may enjoy having Chromecast as an option for entertainment.
Chromecast can be acquired at an affordable price of around $35 depending upon where you make your purchase. You can find the Chromecast HDMI stick on Amazon or in the Google Play store. There is also more information on purchasing on the Google website.
The Future of Google Chromecast
In addition to releasing more apps for use with Chromecast during 2014, Google is also working on more tools and features for the compact media streaming device. One project is the ability to stream media without being connected to the same network: a new pairing method lets Chromecast users simply enter a Personal Identification Number (PIN) on the screen to pair two different devices together.
Another effort underway is support for screen mirroring, which will bring your entire Android screen, and with it your full Android experience, to a high definition television.
Also, Google has not quite forgotten Google TV and is working on the launch of Android TV. The technology giant is currently partnering with a selection of television manufacturers to ship the Android operating system on new high definition TVs, in addition to offering set-top boxes that run Android as well. The set-top boxes will come from partners such as Razer, Asus, and others, in an effort to compete with Amazon Fire TV.
Cloud computing has brought enormous change to the world of applications. It makes long-standing constraints on application development and deployment disappear. It’s no exaggeration that most of the innovation in IT over the past decade has been enabled, catalyzed, or caused by cloud computing.
Lately, a new cloud-based technology has emerged that has the potential to drastically alter the existing tech ecosystem. It’s called serverless computing.
In this article we define serverless computing, and take a look at its benefits.
What is serverless computing
In the early days of the web, anyone who wanted to build a web application had to own the physical hardware required to run a server, a cumbersome and expensive undertaking.
Then came the cloud, where fixed numbers of servers or amounts of server space could be rented remotely. Developers and companies who rent these fixed units of server space generally over-purchase to ensure that a spike in traffic or activity wouldn’t exceed their monthly limits and break their applications. This meant that much of the server space that was paid for usually went to waste. Cloud vendors have introduced auto-scaling models to address the issue, but even with auto-scaling an unwanted spike in activity, such as a DDoS attack, could end up being very expensive.
Serverless is a cloud computing execution model where the cloud provider dynamically manages the allocation and provisioning of servers. A serverless application runs in stateless compute containers that are event-triggered, ephemeral (may last for one invocation), and fully managed by the cloud provider. Pricing is based on the number of executions rather than pre-purchased compute capacity.
Why use serverless
Serverless computing offers a number of advantages over traditional cloud-based or server-centric infrastructure. For many developers, serverless architectures offer greater scalability, more flexibility, and quicker time to release, all at a reduced cost. With serverless architectures, developers do not need to worry about purchasing, provisioning, and managing backend servers. However, serverless computing is not a magic bullet for all web application developers.
Serverless computing can simplify the process of deploying code into production. Scaling, capacity planning and maintenance operations may be hidden from the developer or operator. Serverless code can be used in conjunction with code deployed in traditional styles, such as microservices. Alternatively, applications can be written to be purely serverless and use no provisioned servers at all.
The difference between traditional cloud computing and serverless is that you, the customer, don't pay for underutilized resources. Instead of spinning up a server in AWS, for example, you spin up some code execution time. The serverless computing service takes your functions as input, performs logic, returns your output, and then shuts down. You are only billed for the resources used during the execution of those functions.
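The billing model described above can be sketched with some simple arithmetic. The rates and workload numbers below are illustrative assumptions, not any vendor's actual pricing; the point is only that a lightly used function costs a fraction of an always-on server.

```python
# Hypothetical cost comparison: pay-per-execution vs. an always-on server.
# All rates below are made-up placeholders for illustration.

def serverless_monthly_cost(invocations, avg_seconds, gb_memory,
                            price_per_gb_second=0.0000166667,
                            price_per_million_requests=0.20):
    """Bill only for compute actually consumed, plus a per-request fee."""
    compute = invocations * avg_seconds * gb_memory * price_per_gb_second
    requests = invocations / 1_000_000 * price_per_million_requests
    return compute + requests

def fixed_server_monthly_cost(hourly_rate=0.05, hours=730):
    """A rented instance bills for every hour it is up, idle or not."""
    return hourly_rate * hours

light = serverless_monthly_cost(invocations=100_000, avg_seconds=0.2, gb_memory=0.5)
server = fixed_server_monthly_cost()
print(f"serverless: ${light:.2f}/month, fixed server: ${server:.2f}/month")
```

For a workload with sparse, short invocations the serverless bill stays near zero, while the fixed instance costs the same whether it serves one request or a million.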
What are the advantages of serverless computing?
Let’s take a look at the advantages serverless computing offers to businesses:
Lower costs

Serverless computing is generally very cost-effective, as traditional cloud providers of backend services (server allocation) often leave the user paying for unused space or idle CPU time.
No server management
Although ‘serverless’ computing does still take place on servers, developers never have to deal with them; the vendor manages them. This can reduce the investment necessary in DevOps, which lowers expenses, and it frees up developers to create and expand their applications without being constrained by server capacity.
Simplified scalability

Developers using serverless architecture don't have to worry about policies to scale up their code. The serverless vendor handles all of the scaling on demand. As a result, a serverless application will be able to handle an unusually high number of requests just as well as it can process a single request from a single user. A traditionally structured application with a fixed amount of server space can be overwhelmed by a sudden increase in usage.
Quick deployments and updates
Using a serverless infrastructure, there is no need to upload code to servers or do any backend configuration in order to release a working version of an application. Developers can very quickly upload bits of code and release a new product. They can upload code all at once or one function at a time, since the application is not a single monolithic stack but rather a collection of functions provisioned by the vendor.
Simplified backend code
Because the application is not hosted on an origin server, its code can be run from anywhere. It is therefore possible, depending on the vendor used, to run application functions on servers that are close to the end user. This reduces latency because requests from the user no longer have to travel all the way to an origin server.
Quicker time to market

Serverless architecture can significantly cut time to market. Instead of needing a complicated deployment process to roll out bug fixes and new features, developers can add and modify code on a piecemeal basis.
Serverless vs Containers
Both serverless computing and containers enable developers to build applications with far less overhead and more flexibility than applications hosted on traditional servers or virtual machines. Which style of architecture a developer should use depends on the needs of the application, but serverless applications are more scalable and usually more cost-effective.
Containers provide a lighter-weight execution environment, making instantiation faster and increasing hardware utilization, but they don’t change the fundamental application operations process. Users are still expected to take on the lion’s share of making sure the application remains up and running.
With serverless, the cloud provider takes on the responsibility of making sure that the application code gets loaded and executed, and it ensures that sufficient computing resources are available to run your code, no matter how much processing it requires.
Serverless is a cloud computing execution model in which the cloud provider dynamically manages the allocation and provisioning of servers. Serverless computing offers many benefits over the standard cloud model, including lower costs, simplified scalability, and quicker deployments.
If you have any questions about serverless computing and how it can help you achieve your business goals, contact us today to help you out with your performance and security needs. | <urn:uuid:99b519de-f3ae-47a6-b291-1ea53e319b44> | CC-MAIN-2022-40 | https://www.globaldots.com/resources/blog/what-is-serverless-computing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00503.warc.gz | en | 0.929124 | 1,283 | 2.96875 | 3 |
Getting a Baseline for Improvement: Business Process Modeling (BP Modeling) was born in the mechanical industry. It was first introduced in 1921, when Frank Gilbreth presented a paper to the American Society of Mechanical Engineers entitled "Process Charts: First Steps in Finding the One Best Way to Do Work".
It was this paper that turned a lot of businesses onto the idea of modeling their processes in order to optimize them.
Gilbreth, the paper's author, is probably better known as the central figure of the memoir Cheaper by the Dozen, written by two of his children. Gilbreth was a fascinating character with a passion for the study of time and motion, but he was also a man laser-focused on the true purpose of processes: finding the one best way to do work.
The basic theory of business process mapping
Why map business processes? Gilbreth told us that it’s because you need to take stock of your processes before you can begin to improve them. Looking at the big picture allows you to see the cause and effect of each and every step and to be able to understand the process flow clearly.
“Every detail of a process is more or less affected by every other detail; therefore, the entire process must be presented in such form that it can be visualized all at once before any changes are made in any of its subdivisions.” — Gilbreth
Gilbreth ended his presentation with two important notes:
- Visualizing processes does not necessarily mean changing the processes
- Process charts pay!
What is business process Modeling?
Business Process Modeling (BP Modeling) is used by organizations to visually document, understand, and improve their processes. It is a part of Business Process Management (BPM) and it can be used as an organizational tool to map out what is (or “as-is”) as a baseline and to determine the future (or “to-be”). However, getting a baseline to measure the effectiveness of your process improvement is critical and this is the first place where the practice of business process modeling comes into play.
A visual representation shows all the connecting activities, events, and resources a process employs. BP Modeling combines process mapping, process discovery, process simulation, process analysis, and process improvement. Text-based documentation is often not enough for employees to truly understand how a process is performed. Using a visual representation can provide a comprehensive picture that is easier to comprehend and follow.
I hope this post has given you some insight into the value and purpose of business process mapping. In my next post I will discuss the difference between business process mapping and business process modeling and look at some modeling techniques you can use.
Standard economic theory assumes that investors and consumers are rational, "efficient machines" that make the best decisions for themselves. Laboratory tests, however, demonstrate that actual investor behavior is far more sophisticated than the conduct most economic theories predict.
As a result, an increasing number of economists and psychologists are studying real (or laboratory) economic behavior rather than the normative behavior predicted by various models, and economic theory itself has been revised repeatedly over time. Let's learn more about economic theory and its types.
What is an economic theory?
An economic theory is a set of concepts and principles that define how various economies work. An economist may use theories for a variety of goals, depending on their specific function. Some theories, for example, seek to explain why certain economic events, such as inflation or supply and demand, occur.
Other economic theories may give a framework of thinking that helps economists to evaluate, understand, and forecast the behavior of financial markets, companies, and governments.
However, economists frequently apply theories to the difficulties or events they see to gain helpful insight, give explanations, and develop viable solutions to problems.
(Suggested read: Economic Institutions: Formation and Classification)
Types of economic theory
Market socialism

Market socialism is a theoretical notion (model) of an economic system in which the means of production (capital) are held by the public or collectively, and resource distribution follows market norms (product-, labor-, capital-markets).
When it comes to current socialist economies, the phrase is usually used more loosely to encompass both systems that come close to it in the strictest sense (as in the Yugoslav system after the 1965 reform) and those that replace commands and physical distribution of producer goods with financial controls and incentives as instruments of central planning (regulated market, as in the Hungarian 'new economic mechanism' after the 1968 reform).
Classical economics

Classical economics is a broad term that refers to the dominant school of economic theory in the 18th and 19th centuries. The originator of classical economic theory, according to most, is Scottish economist Adam Smith. However, earlier contributions were made by Spanish scholastics and French physiocrats.
David Ricardo, Thomas Malthus, Anne Robert Jacques Turgot, John Stuart Mill, Jean-Baptiste Say, and Eugen Böhm von Bawerk were other important contributors to classical economics.
Most national economies were administered by a top-down, command-and-control, monarchic government policy framework before the emergence of classical economics. Many of the best-known early classical thinkers, such as Smith and Turgot, developed their ideas as alternatives to the protectionist and inflationary policies of mercantilist Europe.
Economic and, eventually, political liberty were intricately tied to classical economics.
Malthusian theory

The Malthusian Theory of Population holds that population expands geometrically while the food supply grows only arithmetically. Thomas Robert Malthus proposed the hypothesis. He felt that a balance between population expansion and food supply could be maintained through preventative and positive checks.
The Malthusian theory is the most well-known population theory. In 1798, Thomas Robert Malthus published his essay "Principle of Population," then in 1803, he revised several of his conclusions. The rapidly rising population of England, aided by a faulty Poor Law, severely upset him.
He believed England was on the edge of ruin, and he viewed it as his moral duty to warn his fellow people of the impending doom. He highlighted "the weird disparity between over-care in breeding animals and carelessness in breeding men."
Marxism

According to Marxism, the conflict between social classes, notably the bourgeoisie, or capitalists, and the proletariat, or workers, determines economic relations in a capitalist economy and will eventually lead to revolutionary communism.
Karl Marx established Marxism, a social, political, and economic philosophy that emphasizes the struggle between capitalists and workers. Marxism is a social and political worldview that encompasses both Marxist class conflict theory and Marxist economics.
Karl Marx and Friedrich Engels' publication The Communist Manifesto, which lays out the idea of class struggle and revolution, was the first to openly define Marxism in 1848. Marxian economics focuses on capitalism's flaws.
Karl Marx and Friedrich Engels established Marxism as an ideology and socioeconomic theory. It is the underlying concept of communism, holding that all people are entitled to enjoy the rewards of their labor but are unable to do so in a capitalist economic system that separates society into two classes: working employees and non-working owners.
Marx referred to the ensuing position as "alienation," and he predicted that when workers reclaimed the results of their labor, alienation would be resolved and class distinctions would be eliminated.
The power relations between capitalists and labor, according to Marx, were essentially exploitative, culminating in class conflict. He predicted that the conflict would eventually culminate in a revolution in which the working class will topple the capitalist class and gain control of the economy.
(Also read: Introduction to Unit Economics)
Supply and demand
The economic connection between sellers and purchasers of various commodities is defined by the law of supply and demand.
According to supply and demand theory, the price of a product is determined by its availability and customer demand. The law of demand and the law of supply are the foundations of the theory, and the interaction of the two laws determines the actual market price and volume of goods.
Keeping the price high, on the other hand, might harm how purchasers perceive the product. If clients do not believe the product is worth the high price, they may choose a less expensive alternative. A higher price may result in decreased demand, which may result in a decrease in supply.
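The interaction of the two laws can be made concrete with a toy linear model. The curve coefficients below are invented for illustration; the point is only that the equilibrium price is where the quantity demanded equals the quantity supplied.

```python
# Toy linear market: demand Qd = a - b*p falls as price rises,
# supply Qs = c + d*p rises with price. Equilibrium where Qd == Qs.
# The coefficients are made up for illustration.

def equilibrium(a, b, c, d):
    """Solve a - b*p = c + d*p for price p; return (price, quantity)."""
    price = (a - c) / (b + d)
    quantity = a - b * price
    return price, quantity

p, q = equilibrium(a=100, b=2, c=10, d=4)   # -> p = 15.0, q = 70.0
print(f"equilibrium price={p}, quantity={q}")
```

Shifting either curve (say, raising demand by increasing `a`) moves the equilibrium point, which is exactly the adjustment the surrounding text describes.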
Monetarism

The Quantity Theory of Money is the cornerstone of monetarism. The theory is an accounting identity, which means it must be true by definition. It states that the money supply multiplied by velocity (the rate at which money changes hands) equals the economy's nominal expenditures (the number of goods and services sold multiplied by the average price paid for them).
This equation is uncontroversial as an accounting identity. What is debatable is velocity. According to monetarist theory, velocity is typically steady, implying that nominal income is essentially a function of the money supply.
Nominal income fluctuations reflect changes in actual economic activity (the volume of products and services sold) as well as inflation (the average price paid for them).
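The identity described above is conventionally written as the equation of exchange; the statement below is the standard textbook form, not taken from this article:

```latex
% Equation of exchange: money supply M times velocity V equals
% the price level P times real output Q (nominal spending).
\[
  M \times V = P \times Q
\]
% Monetarists treat V as roughly stable, so nominal income P Q
% moves with the money supply M.
```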
Keynesian economics

Keynesian economics is a collection of theories advanced by John Maynard Keynes in his General Theory of Employment, Interest, and Money (1936) and other publications to give a theoretical foundation for government full-employment programs. It was the predominant school of macroeconomics and the dominant approach to economic policy in most Western countries until the 1970s.
While some economists think that allowing wages to fall to lower levels can restore full employment, Keynesians contend that firms will not hire people to manufacture items that cannot be sold.
Keynesianism is a "demand-side" theory that focuses on short-run economic fluctuations because it believes unemployment is caused by insufficient demand for goods and services.
New growth theory
The New Growth Theory (NGT) is focused on individuals' desires and needs as the driving element behind economic growth; people purchase, sell, and invest depending on their wants and needs, leading real GDP statistics to climb.
The theory is a novel take on its predecessor, neoclassical economics; although the latter is more concerned with external causes, NGT is primarily concerned with internal (human) aspects.
The New Growth Theory perhaps lays the most emphasis on the crucial role of knowledge: informed people purchase, sell, and invest effectively, accelerating economic growth in a wiser and more meaningful manner. Knowledge, according to the NGT, is an (intangible) asset with the potential for exponential expansion.
Moral hazards theory
A moral hazard is an economic phenomenon, observed throughout history, in which parties enter into contracts in bad faith.
Moral hazards frequently develop when an organization, such as a business, raises its risk exposure during a transaction to maximize profit because the entity may not have to bear the repercussions of taking on that risk.
In such a case, the risk is normally borne by the opposite party to the transaction. The phrase moral hazard refers to a party taking on such risk in order to gain the greatest possible benefit without regard for the moral implications.
Tragedy of commons
The tragedy of the commons is an economic situation in which every individual has an incentive to use a resource, but at the expense of every other individual – and there is no means to prevent anybody from doing so. It began with the question of what would happen if each shepherd, acting in their self-interest, permitted their flock to graze on the common field.
If everyone acts in their seeming best interests, it leads to destructive over-consumption (all the grass is eaten, to the cost of everyone). The dilemma can also lead to underinvestment (since who is going to pay to sow fresh seed?) and, eventually, total resource depletion.
As the demand for the resource exceeds the supply, each person who consumes an additional unit immediately damages others — as well as themselves — by preventing others from reaping the advantages. In general, all persons can access the resource of interest without difficulty.
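The over-consumption dynamic can be sketched as a tiny simulation. The numbers (pasture capacity, regrowth rate, per-herder demand) are made-up assumptions; the sketch only illustrates that restrained use is sustainable while self-interested over-grazing depletes the shared resource for everyone.

```python
# Minimal commons sketch: a pasture regrows in proportion to the grass
# left standing, and each herder's flock eats a chosen amount per season.

def graze(seasons, herders, demand_per_herder, stock=100.0,
          growth=1.25, capacity=100.0):
    """Return the remaining pasture stock after the given seasons."""
    for _ in range(seasons):
        consumed = min(stock, herders * demand_per_herder)
        stock = min((stock - consumed) * growth, capacity)  # regrowth, capped
        if stock <= 0:
            return 0.0                                      # fully depleted
    return stock

print(graze(seasons=10, herders=5, demand_per_herder=3))  # restrained: stable
print(graze(seasons=10, herders=5, demand_per_herder=8))  # greedy: collapse
```

Each individual herder gains by grazing one more unit, but past the regrowth rate the aggregate choice drives the stock to zero, which is the tragedy the section describes.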
(Also read: 10 Factors Affecting Supply of a Product) | <urn:uuid:11fd6830-a796-42bc-b7b5-6a61f9244d84> | CC-MAIN-2022-40 | https://www.analyticssteps.com/blogs/types-economic-theory | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00503.warc.gz | en | 0.953656 | 2,039 | 3.28125 | 3 |
Anti-virus company Panda Software will shortly be publishing a new report on the evolution of malicious code. The report will reveal, among other developments, how the number of Trojans detected by the laboratory has increased.
A Trojan is a particularly dangerous type of malicious code as it stays on an operating system and carries out a series of actions without users realizing. At the moment, and due to the new malware dynamic, the majority of these Trojans are used for stealing passwords used to log into financial services.
The next group of malicious code most frequently detected by PandaLabs are bots (from the word “robot”) and then backdoor Trojans, which can allow users’ systems to be used fraudulently. These two types of malware, even though they are also numerous, are far less prevalent than Trojans and have either remained stable or decreased in 2006.
According to Luis Corrons, Director of PandaLabs, this increase in the number of Trojans "underlines how the sole aim of malware creators is to profit from their efforts, rather than to gain notoriety as was the case in the past." Panda Software's systems for detecting malicious code, with their proprietary TruPrevent technologies, can also detect previously unidentified threats, whether they are viruses, worms, or, as in this case, any of the numerous new Trojans that are continually appearing.
What Is A CDN & Where Does It Shine?
A Content Delivery Network (CDN) is a geographically distributed network of servers and their data centers that help in content distribution to users with minimal delay.
It does this by bringing the content closer to the geographical location of users through strategically located data centres called Points of Presence (PoPs). CDNs also involve caching servers which store and deliver cached files to accelerate webpage loading times and reduce bandwidth consumption. We will go into more details of exactly how CDNs work below.
CDN services are essential for businesses which rely on delivering content to users.
Consider the following:
- Large news publications with readers in many countries
- Social media sites that need to deliver multimedia content on users’ feeds
- Entertainment websites like Netflix delivering high-definition web content in real-time
- E-commerce platforms with millions of customers
- Gaming companies with graphics-heavy content being accessed by geographically distributed users
All these businesses need to ensure acceleration of their content delivery, availability of services, scalability of resources, and security of web applications. This is where CDN services shine as a unique advantage.
A Brief History of CDNs
CDNs were created almost twenty years ago to address the challenge of pushing massive amounts of data rapidly to end users on the internet. Today, they have become the driving force behind website content delivery and continue to be researched and improved by academia and commercial developers.
The first Content Delivery Networks were built in the late 90s and these are still responsible for 15-30 percent of global internet traffic. Following that, the growth of broadband content and streaming of audio, video and associated data over the internet has seen more CDNs being developed. Broadly speaking, the evolution of CDNs can be categorized into four generations:
Pre-formation Period: Before the actual creation of CDNs, the technologies and infrastructure needed were being developed. This period was characterized by the rise of server farms, hierarchical caching, improvements in web servers and caching proxy deployment. Mirroring, caching and multihoming were also technologies that paved the way for the creation and growth of CDNs.
First Generation: The first iterations of CDNs focused primarily on dynamic and static content delivery, as these were the only two content types on the web. The principal mechanisms were the creation and placement of replicas, intelligent routing, and edge computing methods, with applications and data split across the servers.
Second Generation: Next came CDNs focused on streaming video and audio content and Video-on-Demand services like Netflix, as well as news services. This generation also cleared a path for delivering website content to mobile users and saw the use of P2P and cloud computing techniques.
Third Generation: The third generation of CDNs is where we are now, and it is still evolving with new research and development. We can expect future CDNs to be increasingly community-driven, meaning that the systems will be shaped by average users and regular individuals. Self-configuring, self-managing, and autonomic content delivery are expected to be the new technological mechanisms. Quality of experience for end users is expected to be the primary driver going forward.
CDNs initially evolved to deal with extreme bandwidth pressures, as video streaming was growing in demand along with the number of CDN service providers. With connectivity advancements and new consumption trends in each generation, the pricing of CDN services dropped, allowing them to become a mass-market technology. And as cloud computing became widely adopted, CDNs have played a key role in all layers of business operations. They are key to models such as SaaS (Software as a Service), IaaS (Infrastructure as a Service), PaaS (Platform as a Service) and BPaaS (Business Process as a Service).
How Does a CDN Work?
CDNs work by reducing the physical distance between a user and the origin (a web or an application server). A CDN is a globally distributed network of servers that stores content much closer to the user than the origin does. To understand this better, it helps to examine how a user accesses web content from a website with and without a CDN.
Without a CDN
When a user enters a website address into the browser, the browser establishes a connection similar to the one in the following figure. The website name resolves to an IP address using the Local DNS or LDNS (such as the DNS server provided by the ISP or a public DNS resolution server). If the LDNS cannot resolve the IP address, it recursively asks upstream DNS servers for resolution. Ultimately, the request may pass to the authoritative DNS server where the zone is hosted. This DNS server resolves the address and returns it to the user.
Then the user’s browser connects directly to the origin and downloads the website content. Each subsequent request is served directly by the origin, and static assets are cached locally on the user’s machine. If another user, from the same or a different location, tries to access the same site, they will go through the same sequence. Every time, user requests will hit the origin, and the origin will reply with content. Each step along the way adds a delay, or “latency”. If the origin is located far from the user, response times will suffer from significant latency, delivering a poor user experience.
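The cost of distance can be sketched with a rough back-of-the-envelope model. The Python snippet below is purely illustrative: the fiber speed, hop counts, per-hop overhead, and distances are assumed round numbers, not measurements of any real network.

```python
# Back-of-the-envelope latency model. Light in optical fiber travels at
# roughly 200,000 km/s, so physical distance puts a hard floor under
# round-trip time (RTT) that no server tuning can remove.

FIBER_SPEED_KM_PER_MS = 200.0  # ~200,000 km/s, expressed per millisecond

def min_rtt_ms(distance_km, per_hop_ms=0.5, hops=10):
    """Lower bound on RTT: propagation both ways plus a fixed
    (illustrative) processing cost per router hop."""
    propagation_ms = 2 * distance_km / FIBER_SPEED_KM_PER_MS
    return propagation_ms + hops * per_hop_ms

# A user ~15,500 km from the origin versus ~50 km from a nearby edge:
print(round(min_rtt_ms(15_500)))          # 160 (milliseconds)
print(round(min_rtt_ms(50, hops=3), 1))   # 2.0 (milliseconds)
```

Even with generous assumptions, a user half a world away from the origin cannot see round trips much faster than about 160 ms, while a nearby edge can answer in a couple of milliseconds.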
With a CDN
In the presence of a CDN, however, the process is slightly different. When the user-initiated DNS requests are received by the user’s LDNS, it forwards them to one of the CDN’s DNS servers. These servers are part of the Global Server Load Balancer (or “GSLB”) infrastructure. The GSLB provides load-balancing functionality that continuously measures the Internet, tracking information about all available resources and their performance. With this knowledge, the GSLB resolves the DNS request using the best-performing edge address (usually one in proximity to the user). An “edge” is a set of servers that caches and delivers the web content.
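A toy sketch of the GSLB's core decision in Python: given performance measurements from a user's region to each edge PoP, answer the DNS query with the best-performing one. The PoP names and latency figures are invented for the example.

```python
# Toy version of the GSLB's decision: resolve the DNS query to the edge
# PoP with the best measured performance for the user's region.
# PoP names and latencies are invented.

def pick_edge(measured_rtt_ms):
    """Return the edge PoP with the lowest measured round-trip time."""
    return min(measured_rtt_ms, key=measured_rtt_ms.get)

rtt_from_berlin = {
    "edge-frankfurt": 9.0,   # milliseconds
    "edge-london": 22.0,
    "edge-ashburn": 95.0,
}
print(pick_edge(rtt_from_berlin))  # edge-frankfurt
```

A real GSLB also weighs current load, capacity, and cost, but proximity-driven latency is usually the dominant signal.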
After DNS resolution is completed, the user makes an HTTPS request to the edge. When the edge receives the request, the GSLB servers help the edge servers forward it along the optimal route to the origin. The edge servers then fetch the requested data, deliver it to the end user who requested it, and store that data locally. All subsequent user requests will be served from the local dataset without having to query the origin server again. Content stored on the edge can be delivered even if the origin becomes unavailable for any reason.
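The edge's caching behaviour described above can be sketched in a few lines of Python. The `Origin` and `EdgeServer` classes below are illustrative stand-ins, not a real CDN implementation: dicts play the role of the origin's content and the edge's local store.

```python
# Illustrative edge-cache behaviour: serve hits locally, fetch misses
# from the origin once, and keep serving cached objects even through an
# origin outage. Dicts stand in for real HTTP fetches and storage.

class Origin:
    def __init__(self, content):
        self.content = content
        self.up = True

    def fetch(self, path):
        if not self.up:
            raise ConnectionError("origin unreachable")
        return self.content[path]

class EdgeServer:
    def __init__(self, origin):
        self.origin = origin
        self.cache = {}

    def get(self, path):
        if path in self.cache:             # hit: the origin is not contacted
            return self.cache[path]
        content = self.origin.fetch(path)  # miss: one trip to the origin
        self.cache[path] = content
        return content

origin = Origin({"/logo.png": "PNG bytes"})
edge = EdgeServer(origin)
edge.get("/logo.png")          # first request: fetched and cached
origin.up = False              # origin outage
print(edge.get("/logo.png"))   # PNG bytes -- still served from the edge
```

Note the last line: once the object is cached, the edge keeps serving it even though the origin is down.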
Why Use a CDN?
CDNs help businesses deliver content to end users effectively by minimizing latency, improving website performance and reducing bandwidth costs.
Another useful feature of CDNs is that they allow edge servers to prefetch content in advance. This ensures that the data you are going to deliver is already stored in all CDN data centers. In CDN parlance, these data centers are called Points of Presence (or “PoPs”). PoPs help minimize the round-trip time by bringing web content closer to the website visitor.
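Prefetching can be pictured as warming every PoP's cache before the first visitor arrives. A minimal sketch, with invented PoP names and content:

```python
# Prefetching sketch: push the content into every PoP's cache before
# the first visitor in each region asks for it. PoP names are invented.

pops = {name: {} for name in ("frankfurt", "singapore", "ashburn")}

def prefetch(path, content):
    """Warm every PoP so the first request anywhere is already a hit."""
    for cache in pops.values():
        cache[path] = content

prefetch("/promo/banner.jpg", "JPEG bytes")
print(all("/promo/banner.jpg" in cache for cache in pops.values()))  # True
```

Without prefetching, the first visitor in each region would pay the full round trip to the origin; with it, even the very first request is a cache hit.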
For example, assume that you run an ad campaign and advertise your service or product among millions of potential customers. You may expect a large number of customers to rush to your site after reading the post. If you deal with influencers who have good audience engagement rates, the volume of traffic can see an even bigger spike. Can you be sure that your origin server will be able to handle this spike in volume all at once?
In such a scenario, a CDN can help distribute the load between the edge servers so that every user gets a response. Because only a small fraction of requests will reach the origin, your servers will not experience massive traffic spikes, 502 errors, or overloaded upstream network channels.
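One common way to spread requests across a pool of edge servers is hash-based routing: a stable hash of the URL keeps each object on a consistent server (so its cache stays warm) while the full set of objects spreads across the pool. The sketch below uses SHA-256 and invented server names; production CDNs typically use consistent hashing so that adding or removing an edge moves only a small fraction of objects.

```python
import hashlib

# Spread objects across a pool of edge servers with a stable URL hash:
# one object -> one server (warm cache), many objects -> whole pool.
# Server names are invented; real CDNs favour consistent hashing.

EDGES = ["edge-1", "edge-2", "edge-3", "edge-4"]

def route(url):
    """Map a URL to one edge server deterministically."""
    digest = hashlib.sha256(url.encode()).digest()
    return EDGES[int.from_bytes(digest[:8], "big") % len(EDGES)]

# The same URL always lands on the same edge...
print(route("/img/hero.jpg") == route("/img/hero.jpg"))  # True
# ...while many distinct URLs spread over the whole pool.
print(len({route(f"/asset/{i}") for i in range(1000)}))  # 4
```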
Benefits of CDNs
Depending on the size and needs of your business, the benefits of CDNs can be broken down into four areas:
Improving website page load times
By distributing web content closer to website visitors via a nearby CDN server (among other optimizations), a CDN delivers faster webpage load times. Visitors are quick to bounce away from a website with a high page load time, and slow pages can also hurt a webpage’s ranking on search engines. Having a CDN can therefore reduce bounce rates and increase the amount of time that people spend on the site. In other words, a website that loads quickly will keep more visitors around longer.
Reducing bandwidth costs
Every time an origin server responds to a request, bandwidth is consumed, and the cost of bandwidth consumption is a major expense for businesses. Through caching and other optimizations, CDNs reduce the amount of data an origin server must provide, thus reducing hosting costs for website owners.
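The saving is easy to quantify with back-of-the-envelope arithmetic: if the CDN answers 90 percent of requests from cache, the origin's egress, and the bill attached to it, shrinks to 10 percent. The traffic figure and hit ratio below are illustrative assumptions.

```python
# Illustrative bandwidth arithmetic: with a 90% cache hit ratio, only
# 10% of delivered bytes are fetched from (and billed to) the origin.

def origin_egress_gb(total_gb, cache_hit_ratio):
    """GB the origin actually serves after the cache absorbs the hits."""
    return total_gb * (1 - cache_hit_ratio)

monthly_traffic_gb = 10_000   # delivered to visitors (hypothetical)
print(round(origin_egress_gb(monthly_traffic_gb, 0.90)))  # 1000
```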
Increasing content availability and redundancy
Large amounts of web traffic or hardware failures can interrupt normal website function and lead to downtime. Thanks to its distributed nature, a CDN can handle more web traffic and withstand hardware failure better than many origin servers. Moreover, if one or more of the CDN servers go offline for some reason, the remaining operational servers can pick up the web traffic and keep the service uninterrupted.
Improving website security
The same process by which CDNs handle traffic spikes makes them ideal for mitigating DDoS attacks. These are attacks where malicious actors overwhelm your application or origin servers by sending a massive number of requests. When a server goes down under the volume, the downtime affects the website’s availability for customers. A CDN essentially acts as a DDoS protection and mitigation platform, with the GSLB and edge servers distributing the load across the entire capacity of the network. CDNs can also provide certificate management along with automatic certificate generation and renewal.
How Else Can A CDN Be Helpful?
The CDN is not limited to the benefits explained above. A modern CDN platform delivers many more advantages to your business and engineering teams.
It can be used to manage access from different regions of the world: you can allow access for some regions while denying it for others.
You can easily offload application logic to the edge and close to your customers. You can process and transform the request/response headers and body, route requests between different origins based on request attributes, or delegate authentication tasks to the edge.
Large amounts of traffic require an infrastructure for log collection and processing for further analysis. CDNs collect the logs and provide an interface to conveniently analyze the data generated by the visitors.
Something is naturally easier to use when you are already familiar with it. For that reason, CDN Pro edges are NGINX based, which means you can perform tasks using standard NGINX directives. Our engineering team has spent thousands of hours extending NGINX.
Data Security & CDNs
Information security is an integral part of a CDN. CDNs help protect a website’s data in the following ways.
By providing TLS/SSL certificates
A CDN can help protect a site by providing Transport Layer Security (TLS)/Secure Sockets Layer (SSL) certificates, which ensure a high standard of authentication, encryption, and integrity. These certificates ensure that certain protocols are followed in the transfer of data between a user and a website.
When data is transferred across the internet, it becomes vulnerable to interception by malicious actors. This is addressed by encrypting the data with a protocol such that only the intended recipient can decode and read the information. TLS and SSL are such protocols; TLS is the more modern successor to SSL. You can tell that a website uses a TLS/SSL certificate if its address starts with https:// rather than http://, indicating that communication between the browser and the server is secured.
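From a client's point of view, "following the protocol" means validating the server's certificate and refusing legacy protocol versions. Python's standard `ssl` module shows what sensible defaults look like; the snippet only builds and inspects a client context and makes no network connection.

```python
import ssl

# A client context with modern defaults: certificates are validated,
# hostnames are checked, and the protocol floor is raised to TLS 1.2.
# This only configures and inspects the context; no connection is made.

ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: peer cert is mandatory
print(ctx.check_hostname)                    # True: cert must match the host

ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSL 3.0 / TLS 1.0 / 1.1
```

A CDN performs the server side of the same handshake at the edge, which is why certificate management and renewal are natural CDN features.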
Mitigating DDoS attacks
Since the CDN is deployed at the edge of the network, it acts as a virtual high-security fence against attacks on your website and web application. The scattered infrastructure and the on-edge position also make a CDN ideal for blocking DDoS floods. Since these floods need to be mitigated outside of your core network infrastructure, the CDN will process them on different PoPs according to their origin, preventing server saturation.
Blocking bots and crawlers
CDNs are also capable of blocking threats and limiting abusive bots and crawlers from using up your bandwidth and server resources. This helps limit other spam and hack attacks and keeps your bandwidth costs down.
Static & Dynamic Acceleration
Dynamic acceleration applies to content that cannot be cached on the edge due to its dynamic nature. Imagine a WebSocket application that listens for events from a server, or an API endpoint whose response differs depending on credentials, geographic location, or other parameters. It is hard to leverage the cache machinery on the edge the way you would for static content. In some cases, tighter integration between the app and the CDN may help; in others, something other than caching is needed. For dynamic acceleration, a CDN’s optimized network infrastructure and advanced request/response routing algorithms are used.
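The static-versus-dynamic split can be made concrete with a simplified cacheability check. The header names below are standard HTTP, but the policy itself is a deliberately reduced sketch of what a real edge applies, not any particular CDN's actual rules.

```python
# Reduced sketch of an edge's cacheability decision: anything
# user-specific or explicitly marked private must bypass the cache.

def is_cacheable(method, request_headers, response_headers):
    if method != "GET":
        return False                  # writes are inherently dynamic
    if "authorization" in {h.lower() for h in request_headers}:
        return False                  # response depends on credentials
    cache_control = response_headers.get("Cache-Control", "").lower()
    if "no-store" in cache_control or "private" in cache_control:
        return False                  # origin forbids shared caching
    return True

print(is_cacheable("GET", [], {"Cache-Control": "public, max-age=3600"}))  # True
print(is_cacheable("GET", ["Authorization"], {}))                          # False
```

Requests that fail this kind of check are the ones that rely on dynamic acceleration, optimized routing rather than caching, for their speedup.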
Billing model or “What do I pay for?”
Conventionally in a CDN, you pay for the traffic consumed by your end users and for the number of requests. Additionally, HTTPS requests require more computing resources than HTTP requests, which creates more load on the CDN provider’s equipment. For this reason, you may pay additional costs for HTTPS requests, while HTTP requests are not billed at an additional cost.
As computation moves to the edge, CPU time becomes an object of billing. Requests may have various processing pipelines and, as a result, require different amounts of CPU time. It is impractical to bill by request count alone; it is more practical to bill by traffic volume plus CPU time used.
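Under such a model, a monthly bill is a simple linear function of the two metered quantities. The unit prices below are invented for illustration and do not reflect any provider's actual rates.

```python
# Hypothetical "traffic + CPU time" bill. Unit prices are invented.

PRICE_PER_GB = 0.04         # USD per GB delivered (assumed)
PRICE_PER_CPU_SEC = 0.0001  # USD per CPU-second of edge logic (assumed)

def monthly_bill(traffic_gb, cpu_seconds):
    return traffic_gb * PRICE_PER_GB + cpu_seconds * PRICE_PER_CPU_SEC

# 20 TB of delivery plus 500,000 CPU-seconds of edge processing:
print(round(monthly_bill(20_000, 500_000), 2))  # 850.0
```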
Who Uses CDN?
CDNs are used by businesses of various sizes to optimize their network presence and availability and to provide a superior user experience for customers. CDNs are particularly popular in the following industries:
- Digital Publishing
- Online Video & Audio
- Gaming
- Online Education
- Public Sector
- Financial Services | <urn:uuid:578aec53-697d-40f7-92da-e84b1e0135f1> | CC-MAIN-2022-40 | https://www.cdnetworks.com/what-is-a-cdn/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00503.warc.gz | en | 0.93516 | 3,062 | 3.234375 | 3 |
Focusing primarily on Bitcoin (the most widely used cryptocurrency), this five-day course will give you a rounded understanding of how the currency is used, both legally and illegally.
Beginning with a basic view of how it is accessed, acquired and exchanged, you will then have the opportunity to interactively work with other students to create your own Bitcoin wallet and engage in small transactions. Our industry expert will guide you through the experience and will draw from real-world scenarios to illustrate the usage.
Combined, this will give you real-life experience and understanding of both tracing transactions through the blockchain and, importantly, learning how to draw accurate conclusions that can potentially be used as intelligence or evidence.
Until fairly recently, cryptocurrencies were a concept very much confined to the realms of hackers, criminals, and other digital enthusiasts. Then, throughout 2016 – 2017, many ransomware cyber attack cases were publicized in the media, and cryptocurrency became a buzzword amid the attention.
This only intensified with the value of Bitcoin reaching an all-time high in mid-December 2017 and suddenly, it seems that everyone is crypto-crazy.
No longer is the idea of virtual, digital money considered a hare-brained notion dreamed up by somebody on the internet. Whilst previously viewed across the investigative community primarily as a tool for criminals, its mainstream use is far wider, and it also claims many victims.
To effectively utilize the investigative opportunities surrounding cryptocurrency, it is imperative that both cyber and non-cyber investigators have an understanding of the subject – it could be invaluable to the success of a case.
When you read or listen to the news, especially in the business world, you hear a lot of buzz about the concept of cloud computing. Although most people have a general idea of cloud computing, there is still a lot of confusion surrounding its real purpose and some of the functions it serves.
Without realizing it you may have been using cloud computing services for some time now. If you have ever used Google documents or web-based email such as Gmail or Hotmail, then you have been using a form of cloud computing. Google documents (Google Apps) is a platform that can be accessed from anywhere and at any time, as is also the case with web-based email. This high availability of access to data and applications forms the foundation for the cloud computing concept.
If you are one of the many people puzzled by cloud computing, you are not alone. The following information will provide you with a solid understanding of what cloud computing is and some of the functions it serves.
Cloud Computing Defined
The term “cloud” refers to the Internet, and adding the term “computing” into the mix is what causes much of the confusion about what cloud computing actually means. To keep the definition simple, think of it in terms of the web-based email you use or Google’s app service.
When you open your Inbox in a web-based email account you have immediate access to your folders and email messages. When you access your documents on Google you can access them from anywhere by logging in with a username and password. So, where is this data stored?
The data in cloud applications such as your web-based email or Google Docs is stored on a remote server. You access the data using your Internet connection and by entering a username and password. This type of infrastructure defines the term cloud computing where your data is stored remotely and securely and accessed using an Internet connection. This places the storage responsibilities on the cloud service provider (CSP). The CSP such as Hotmail or Google maintains the technology required to store data and enforce data security while you have the convenience of accessing it from anywhere using an Internet connection.
Cloud Computing Services for Individuals
If you have ever heard of online backup services such as Mozy, Carbonite, and others, these services are actually cloud computing services which allow you to store your files and data on a remote server. Then you access it at your convenience from any PC or device that has an Internet connection. Most of these services also allow you to share data and multimedia with your friends using the cloud platform provided by the online backup service.
When you subscribe to an online backup service you are provided with a user interface which allows you to control your data and usage options. By adjusting the settings in the user panel you can control how often backups occur, schedule incremental backups, and set encryption options for data security, to name a few features. Once you have backed up your data with the online backup service you also have access to settings which allow you to share specific folders and files with certain individuals for the purpose of sharing and collaboration.
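The incremental-backup idea, uploading only what changed since the last run, can be sketched with content hashes. Real services layer encryption and chunking on top; this Python snippet shows just the change detection, with invented file names and contents.

```python
import hashlib

# Change detection for an incremental backup: hash each file and upload
# only those whose hash differs from the previous run. File names and
# contents are invented; real services add encryption and chunking.

def fingerprint(data):
    return hashlib.sha256(data).hexdigest()

def incremental_upload(files, already_stored):
    """files: {path: bytes}; already_stored: {path: hash} from last run.
    Returns the paths that actually need uploading."""
    return [path for path, data in files.items()
            if already_stored.get(path) != fingerprint(data)]

last_run = {"notes.txt": fingerprint(b"v1"), "pic.jpg": fingerprint(b"img")}
today = {"notes.txt": b"v2", "pic.jpg": b"img", "new.doc": b"hello"}
print(incremental_upload(today, last_run))  # ['notes.txt', 'new.doc']
```

Only the changed file and the new file are uploaded; the unchanged photo is skipped, which is what keeps scheduled incremental backups fast and cheap.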
A service such as Google Apps (Google Docs) works the same way. You simply log in with your username and password and then you can adjust the settings so only specific users have access to the documents you make available for collaboration. You can also use the Google Apps cloud platform for convenience if you do not want to share and collaborate. When you create documents using this platform you can access them from anywhere and edit them at any time. This prevents you from forgetting to bring your storage device along and provides a way for you to retrieve data in the event of a system crash on your PC.
Cloud Computing for Businesses
In the current economy, many businesses are opting to use cloud computing services due to the capability to reduce IT costs while increasing productivity and scalability. Many businesses do not have the budget to continually upgrade and expand IT infrastructures. Instead, they leave this concern to the cloud service provider (CSP) so company personnel can focus on important business initiatives.
Cloud computing for businesses provides a way to continually expand capacity without having to invest in costly infrastructure. Additionally, most CSPs that provide services to businesses are in compliance with industry standards and guidelines which further reduces costs associated with meeting these standards.
CSPs also take care of software licensing, IT procurement and maintenance, and expansion projects while the business accesses the services they need on a pay-as-you-go basis. This means you only pay for the services you use and you can add services on the fly at any time due to the flexibility and scalability that cloud infrastructures provide. This allows businesses with budgetary restraints to compete in today’s marketplace using the latest technologies via IaaS (Infrastructure as a Service), PaaS (Platform as a Service), and SaaS (Software as a Service).
A Word about Cloud Computing Security
Since cloud computing is very new to both individuals and businesses, there is much controversy surrounding the security of cloud computing services. Although this is a legitimate concern, and one that must be examined before choosing a cloud service provider, most quality cloud computing services have gone to great lengths to achieve compliance. Without investing the time to achieve security compliance, CSPs would be unable to win the confidence of individuals and businesses or meet their requirements.
This is not to say you do not have to do your homework ahead of time. Cloud computing initiatives involve a lot of planning on the part of businesses. Even if you are an individual, you still have to invest the time to do your research to ensure the cloud computing service is secure before you trust your files and data to the service.
Private vs. Public Cloud
Further on the topic of security, there are private clouds and there are public clouds. If you are an individual accessing an online backup service, this type of service typically operates on a public cloud, where other users are accessing the same service at any given time.
For businesses that must ensure access to data is controlled a private cloud is used for added security and data protection. This means that only specified users have access to the cloud service and the server is unavailable to public users.
Many organizations such as government agencies use a private cloud to secure access to sensitive data. This means that the cloud service providers must pass strict compliance standards in order to provide cloud services to these organizations.
The bottom line is that cloud computing has grown substantially within the last few years. Individuals and businesses alike are gradually recognizing the benefits of cloud computing and the conveniences the concept has to offer. So far, it has created flexible work environments and provided individuals with new and innovative ways to access data and collaborate with friends and colleagues. Cloud computing is a trend worth following in the news, since many of the benefits are yet to be realized.
The mining industry is utilizing modern mining technology to control the increasing demand for mined materials.
FREMONT, CA: The new technological trends in mining point to a captivating shift towards sustainability. Digital technology is continuously working to make mining more productive, modern, and safe, which will help meet the increasing demand for mined materials, exceed customer expectations, and support worldwide sustainability initiatives. Here are some of the new technological innovations utilized in the mining industry.
1. Spatial data visualisation
The mining industry is witnessing a tremendous shift due to spatial or geospatial data. With the help of technologies like three-dimensional (3D) modeling, virtual reality (VR), and augmented reality (AR), spatial data has become even more detailed and transparent than it was ever before.
By utilizing new technologies like spatial data efficiently, the mining industry can obtain more information about mine systems at a lower cost and with less impact on the environment. The industry is moving rapidly towards a future where it can virtually construct and deconstruct mines, buildings, plants, and every other related structure even before breaking ground to develop an intelligent mine.
2. Geographic information systems
Geographic information systems (GIS) are essential tools that offer a better look at how geographic relationships affect outcomes worldwide. With the help of GIS, miners can solve real-life problems in situations where location and accessibility are crucial.
Geospatial data represents the shape, location, and size of an object. By obtaining such data, miners gain more information about the mine environment or the designated system.
3. Artificial intelligence
In insight-driven firms, artificial intelligence helps to make decisions. The technology uses smart data and machine learning to enhance production workflow, operational efficiency, and mine safety. Applying artificial intelligence will help process day-to-day data in less time than previously possible.
The mining industry is evolving rapidly because machine learning and AI can help the mines make choices for the future.
The author of this blog is Kat McDivitt. Kat is a strategy and planning manager for Cisco DevCX.
We’ve all heard about the learning curve – showing the relationship between time spent studying material and proficiency.
Research indicates that there are multiple factors that impact the technology learning curve – or how quickly people can learn new technologies.
Some of them include:
- Motivation – this can come from internal motivation, work related motivation, or even the accountability provided by a good instructor
- Relevancy – training that is immediately relevant to your situation is more engaging
- Training modality – training that is multi-modal – visual, aural, kinesthetic (or see, hear, do) focused is, all else being equal, more successful than single modal
- Complexity of topic – more complex topics (ACI) are more difficult than simple topics (making a cheese sandwich).
- Repetition – repetition over several days
What many people don’t realize is that there is an analogous “forgetting curve,” which is surprisingly steep. Students sometimes leave a 5-day training and get back to their home environments and feel frustrated because they can’t do what they learned.
DevNet Automation Bootcamps help your team focus, learn, and retain
The same factors that make it easier to learn new ideas, also help combat the forgetting curve. The newly launched DevNet Automation Bootcamp is based on recent andragogy research about not just how adults learn, but how to help them retain their knowledge.
The goal of a bootcamp is to maximize your team’s ability to leave with the knowledge, skills, and confidence to implement what you’ve learned.
Automation Bootcamps flatten the forgetting curve by providing an immersive training experience where we:
- Focus on what you are most motivated to learn – the ideal customer for the Bootcamp has a goal, a big change, for example an implementation, taking over work from a vendor, building a new team, or implementing a new technology. The students generally need the skills, and they need them soon. They know that they will be using what they learn within the next few months. They are motivated.
- Provide just the training you need and remove training you don’t – Every lecture and lab is relevant – the Cisco team starts by interviewing the customer and learning exactly what they need to know to succeed, and creating a tailored outline – focusing on the areas the customer needs and taking out any training they don’t need. This maximizes classroom time as all topics are identified as important for the success of their project.
- Deliver training based on technical learning research and styles – the bootcamp brings together the best of instructor led training and self-paced training in a robust format that maximizes student learning and retention.
- Reduce complexity – the mix of tailored content, multi-modal learning, repetition, and bootcamp style labs helps students internalize complex topics and take the learning back to their home environments and succeed.
- Repeat concepts at different levels to lock in knowledge and build confidence – The training is broken up into three portions, and each portion reiterates and deepens the core knowledge of the topics:
- 5-days of instructor led training –This is foundational training focusing on the concepts, paradigms, and a deep introduction to the core concepts reinforced with over 40% labs
- 2-3 weeks of self-paced training with regular instructor led review sessions, leveraging the “flipped classroom” model and using repetition to settle the learning into the retention parts of the brain
- 4-days of hands-on deep dive lab where the students are taking their knowledge and applying it in hands on labs and building end-to-end solutions and/or troubleshooting real issues
In the 5-day instructor-led training, you’ll work with Cisco experts to identify topics specific to your situation. In the 4-day deep dive lab, you’ll extend the skills you gained in the 5-day class.
Examples of Automation Bootcamps Topics
ACI Troubleshooting and Operations Immersion Bootcamp
- 5-Day Instructor-led training – May include an introduction to ACI (fabric infrastructure, configuration) and ACI Operations (configuration management, monitoring, and troubleshooting)
- 4-day Deep Dive Lab – May include building an ACI fabric and troubleshooting issues, then exploring ACI Multisite, ACI Multipod, hypervisor integration, and complex ACI configuration problems.
ACI Automation Bootcamp
- 5-Day Instructor-led training – May include an introduction to automation (Python, APIs, and more) and then dive into ACI automation with Python, SDKs, tools, and more.
- 4-day Deep Dive Lab – May include exploring automation actions, tools, toolkits, SDKs and doing deep labs on ACI REST API interface, ACI API Inspector, ACIToolkit, Ansible, pyATS, and CI/CD pipelines using GitLab-CI.
NSO Automation Bootcamp
- 5-Day Instructor-led training – May include intro to NSO (components, use cases, installation, NETCONF/YANG, manage devices) NSO services, administration, and DevOps.
- 4-day Deep Dive Lab – May include exploring introductory techniques such as installation, the network CLI and config database, and more. Or choose to practice advanced topics such as Python-powered services, advanced YANG constructs, northbound integrations, and more.
NX-OS Automation Bootcamp
- 5-Day Instructor-led training – May include an intro to automation (Python, APIs, and more) and exploration of NX-OS automation (day-zero provisioning, on-box / off-box programmability, and telemetry).
- 4-day Deep Dive Lab – May include performing complete automation actions and learning the roles of tools, toolkits, and SDKs. You will gain immersive experience with technologies such as NX-API CLI and NX-API REST, model-driven programmability, GuestShell, Ansible, NX-SDK, pyATS, CML, and CI/CD pipelines using GitLab-CI.
Meraki Automation Bootcamp
- 5-Day Instructor-led training – May include intro to automation (Python, APIs, and more) and Meraki automation (workflows, APIs, and more)
- 4-day Deep Dive Lab – May include exploring Meraki Dashboard and REST, Meraki action batches, and troubleshoot common Meraki issues, or focus on automation provisioning and configuration.
SDA Product Immersion Bootcamp
- SDA 5-Day Instructor-led training – May include an introduction to SDA (Cisco SDA fundamentals, provisioning, policies, wireless integration, and border operations) and SDA operations (operating, managing, and integrating Cisco DNA Center, and understanding programmable network infrastructure)
- SDA 4-day Deep Dive Lab – May include performing complete SDA implementation tasks. You will build an SD-Access fabric network and nodes and then solve common issues using troubleshooting use cases.
If you’d like to find out how a DevNet Automation Bootcamp can work for your team,
Click here to start that conversation
We’d love to hear what you think. Ask a question or leave a comment below.
Globally, the HealthTech innovation ecosystem has many startups working at the intersection of IoT and AI to solve some of the tough problems faced by patients and other stakeholders in the healthcare space. Several factors have led to a proliferation of a class of devices in the health, fitness and wellness space – such as the miniaturisation of sensors, advancements in microcontroller design and power management, and low-cost, flexible circuit design. The ubiquitous availability of smartphones and always-on connectivity has enabled real-time access to cloud-based AI capabilities to transform IoT data into meaningful insights that can be used anywhere and anytime.
The key objective of healthcare provision is to engage the multiple stakeholders involved within an ecosystem to deliver better and holistic clinical outcomes.
The Key Challenges to Widespread Stakeholder Acceptance
Despite the promises and great potential of IoT and AI in the healthcare industry, there are major gaps in adoption by stakeholders, especially clinicians. I believe there are three main reasons for the gap:
The Principle of Conservatism in Medicine
The basic premise of medicine is “Do NO Harm”. Due to the relative newness of both devices and apps, there is only a small body of knowledge and precedent for a clinician to draw on. That body is too small to justify the risk of using “new” data to make decisions, even though the new data may be better than other available “time-tested” options on hand.
Clinical diagnosis is based on years of medical training, time-tested knowledge and practice, and clinician “Gestalt”. More often than not, the way an AI system arrives at its predictions is not fully understood by the clinician. Because the field is relatively new, any AI-based prediction system will need to establish its credibility that it will NOT do any harm to the patient. Clinicians value their ability to do the right thing for the patient more than anything else.
Establishing the trust between the clinician and the black-box algorithm is critical for any successful adoption.
Features vs. Function
There is a lack of a common language and understanding between clinicians and technologists, especially when it comes to what “innovation” means for the other party. For a technologist, innovation lies mostly in the features and capabilities of the technology. Their mindset is that innovative technology can be easily adapted to healthcare, similar to the way mobile phones have been adopted by consumers. Their belief is that people and practices will change quickly when presented with better technology and insights. For a technologist, innovation is what transforms a practice in leaps and bounds.
For a clinician, innovation is incremental to start with, and it evolves with time. There is a widely-held belief that if a new technology is not easy to understand, then it probably will not be good. Features and capabilities do not mean much to them. Innovation MUST be simple to understand, reliable, repeatable and MUST solve a problem that they cannot solve by themselves. The arguments in the public domain whether or not AI will replace clinicians adds to the skepticism.
Evidence through Clinical Trials
Perhaps the biggest factor of all is that clinicians demand evidence of safety and efficacy of new technology through the lens of time-tested processes of randomised clinical trials. Unless there is evidence created by the technologists through well-designed and robust clinical trials, adoption of new technology will be a hit and miss. Technology companies interested in transforming healthcare should have a solid understanding of the clinical trials process and should create adequate scientific evidence to positively influence clinicians.
The most important aspect of gathering clinical evidence is to identify the relevant data for decision making. Many clinicians do not readily utilize the data collected from connected healthcare devices into their diagnosis and decision processes due to the lack of connections between data and clinical practice. While 10,000 steps a day might be a good benchmark for exercising right, that number means nothing if it is not connected to a clinical decision framework.
The Way Forward for Stakeholder Adoption
Healthcare leaders predict that the implementation of healthcare IoT and AI solutions on a scale will transform their industry. The next few years will see more interconnected IoT devices and reliable applications based on deep learning. To achieve adoption and impact of new technology, the innovators and healthcare stakeholder ecosystem leaders should address the need for trust and evidence. Real World Evidence and Randomised Clinical trials are effective ways to bridge the gap and to establish a common framework to address the user adoption issue.
Arun Sethuraman, Principal Advisor MedTech, Ecosystm is also the founder and CEO of Crely Healthcare, a MedTech startup based in Boston and Singapore. Infection of the surgical site, post-surgery, if not detected and treated early, leads to high incidence of mortality in patients, poor health outcomes, poor patient experience, higher healthcare costs, and loss of reputation and reduced profitability for healthcare providers. Crely’s mission is to provide an early warning and clinical decision support system for surgical site infections (SSI), post-surgery. Crely generates an early warning of SSI by algorithms based on biomarker data collected from patients using an IP-protected, secure, non-invasive, continuously wearable, clinical grade medical device.
Safety and security are fundamental needs according to Maslow's Hierarchy of Needs. People should feel safe wherever they are and whatever they do, and school is no exception. Students and teachers must feel safe before focusing on their growth needs, like teaching and learning.
Let's take a step back before we dive deeper into the subject. What are safety and security? Safety and security are not the same concept, although they are strongly connected.
Safety typically means feeling protected from harm or other undesirable outcomes. The term security is related to the presence of peace, safety, and the protection of people and their resources. It generally refers to the freedom from or resilience against potential harm or other unwanted change caused by others.
What do safety and security mean related to the education system? What are school safety and security? Simply put, safe schools are schools where students, employees, and visitors are free to interact in a supportive environment that fosters teaching and learning without feeling afraid or threatened. A secure school covers all the measures taken to fight threats to learners, school staff, and property in education settings.
It goes without saying that all schools looking to stimulate academic performance should work on creating a safe and secure school environment first. And implementing a cloud-based access control system like Kisi is a crucial first step.
The harsh truth is that both private and public schools are still battling serious safety and security issues. Yet, it’s up to the school management to protect the people they’re responsible for.
The state of school safety and security
To understand and support school safety and security, we first need to comprehend its state and development over the past few years. These subjects cover a variety of campus crime-related topics that can be measured and analyzed.
The National Center for Education Statistics (NCES) and the Bureau of Justice Statistics (BJS) collaborate on the “Report on Indicators of School Crime and Safety” – a comprehensive annual publication. Let’s take a look at some of their findings, keeping in mind that students might have spent less time at school in 2020 and 2021 due to the pandemic.
Even though nonfatal criminal victimization decreased, there were still almost a million violent and half a million nonviolent incidents in U.S. public schools during the 2019–20 school year.
Unfortunately, in 2020–21 we also experienced the highest number of school shooting casualties in the last 20 years. And this is just scratching the surface.
Almost 3% of grade 9–12 students reported carrying a weapon on school property at least once in the past month, and 7.4% reported being threatened or injured with a weapon on school property during the last year. It’s hardly surprising that almost twice as many students reported being afraid of attack or harm at school as away from school.
It’s more important than ever for schools to consider access control solutions like Kisi to help children feel safer and keep them away from harm. Kisi can help keep out external intruders who shouldn’t have access to the school grounds in the first place. Thanks to its many camera integrations, it can assist personnel in anticipating incidents and empower quicker response with alerts and alarms.
Interested in more data? Click here for a more thorough stats overview.
These indicators and stats help inform policymakers and stakeholders of the nature, extent, and scope of the school safety and security issues that should be addressed. They often contribute to developing programs aimed at violence and school crime prevention.
Schools across the United States are also implementing preventive and responsive measures to minimize these issues that their students and teachers can potentially experience.
- In 2019–20, public schools reported the use of the following safety and security measures: 97% control access to school buildings, 91% use security cameras, and 77% require faculty and staff to wear badges or picture IDs. A comprehensive access control solution like Kisi synergizes these 3 essential safety areas.
- While 97% of public schools controlled access to school buildings, 59% controlled access to their school grounds, and 73% had classrooms that could be locked from the inside. Locking all your classrooms remotely is possible when choosing Kisi.
- Only 40% of public schools had installed panic buttons or silent alarms, while 70% had an electronic notification system, and 66% had a structured anonymous threat reporting system. Kisi’s intrusion detection and alarm integrations can alert stakeholders of potential incidents.
- In 2019–20, about 52% of public schools reported having a written plan for procedures to be performed in the event of a pandemic. Ensuring optimal school occupancy is easier with Kisi’s capacity management.
- ≈51% of traditional public schools had a School Resource Officer, while only 25% of the charter schools had one. Access control systems like Kisi can make their job more efficient and be a solid replacement for the schools that haven’t hired an SRO.
School physical security strategies and elements
Tackling complex phenomena like school crime and violence requires education institutions to approach school safety and security methodically. Applying a systems-based approach to physical security can help create safer school environments that facilitate teaching and learning without asking already strained staff members to become security experts.
Given that each school is different, there can’t be a completely uniform approach to physical security. Still, choosing a systems-based approach to physical security can help schools address their unique circumstances while ensuring protection and mitigation measures complement measures to prevent violence and respond to and recover from violent incidents. Taking a layered approach to physical security assures that the system works in an integrated way to detect, delay, and respond to threats. It also helps to prevent single points of failure.
A layered, systems-based approach to physical security ensures that all physical security elements work cohesively to provide security benefits. Before we dive deeper into the physical security elements, let’s explore the three physical security strategies.
School physical security strategies
Detection
The ability of a physical security measure to communicate that a safety-related incident is currently occurring or about to occur. Installing monitored closed-circuit TV (CCTV) in the school, security guard patrols at the premises, and open-plan designs that allow natural surveillance can enhance the detection strategy.
Delay
All physical security measures that delay a safety-related incident, often by increasing the level of effort, resources, and time needed for an incident to occur. Examples of this type of measure are access control solutions at the school entrances, fencing, reinforcing the school’s windows or doors, regular staff patrols, etc.
Response
The response strategy encompasses the physical security measures that limit the potential damage caused by the threat or contribute to overcoming the threat. Access control solutions that communicate and notify of a possible school threat, first aid kits, or hiring security guards are examples of adequate response.
School physical security elements
- Physical security equipment and technology
Given the school safety and security state and the repercussions when physical security is poorly implemented, modern technology companies, like Kisi, offer a multitude of increasingly sophisticated security equipment. Security measures range from the least costly and modern, like door bolts, to more advanced – access control systems integrated with surveillance cameras and alarm systems.
The ability of the newly purchased security equipment to integrate with the existing technology is a valuable cost-saving strategy and essential for avoiding single points of failure. Let’s take installing Kisi’s access control solution in a school context, for instance. Our tech integrations support linking multiple technologies together. So, attempted unauthorized access can be seen on the video camera surveillance and trigger an alarm. The Kisi access control system can also send a notification of the identified suspicious activity to relevant decision-makers on their mobile, so they can promptly coordinate further action.
- School site and building design features
Unlike some types of facilities, such as military or correctional facilities, schools need to present a welcoming environment while securing their spaces. Striving to maintain an open yet secure climate is quite challenging.
When planning school construction, there are choices around the overall layout of the school, like whether school functions will be clustered or dispersed. Dispersing school functions by installing portable instructional buildings, for example, can mitigate single-point vulnerabilities but increase the complexity of emergency response mechanisms and diminish the effectiveness of specific technologies, like video surveillance.
The orientation of the new school building, the incorporation of open space into school site design, and the number and location of points of entry are some of the site or building design issues that need attention.
Crime Prevention Through Environmental Design (CPTED) is a school safety and security design approach. Its multidisciplinary nature deters crime through the built, social, and administrative environments. It offers various options that address site, building, and interior space design, installation of accompanying systems and equipment, and the broader community context. When properly integrated, CPTED approaches can offer a school multiple security and safety benefits without making the school look like a fortress.
Unfortunately, measures that fall under the site and building element scope will likely be a significant financial burden for schools, whether it comes to incorporating new site and building design features into an altogether new school or campus or fitting them into existing structures.
- School security personnel
The human component of physical security is inevitable, and decisions in this area can be pretty sensitive and costly. People operate the school security equipment and technologies while being responsible for designing policies and procedures to use that equipment in favor of increased school safety.
Even though most schools maintain only small numbers of personnel assigned exclusively to security work, there are many options for hiring security personnel or assigning security-specific roles and responsibilities to school staff. Schools can decide to rely solely on local law enforcement, hire off-duty police officers or private security personnel, and assign school staff or volunteers to monitor hallways and common areas.
Often, schools hire school resource officers while coordinating with local police departments. School resource officers (SRO) are sworn police officers assigned to a school district or individual school as part of their efforts to improve physical security.
SROs not only enforce the law within school boundaries but also provide expertise on safety. They educate students about broad safety and security issues and can also serve as informal counselors and mentors to them.
- Security policies and procedures
A school’s policies and procedures for safety and security, along with security training and exercises, are the glue that holds the physical security system together. Without them, the effectiveness of the tech and equipment, the hired security personnel, and the school safety-related design features could decline significantly. For instance, students, teachers, and staff need to know what to do if an emergency is bound to happen or how to report a criminal or violent act if they witness one.
Unfortunately, there can’t be a uniform policy to protect against and mitigate a specific type of threat or set of threats. Some states or localities mandate certain safety- and security-related policies to be implemented by the schools. Beyond these mandates, each school must determine the additional policies, rules, and procedures that best fit their individual needs considering its distinct school community.
Even though every school is unique, each one should implement the necessary policies and rules to ensure that any installed technology works as planned. Schools should also consider how the protection and mitigation policies and procedures interact with the rest of the school safety system’s elements. Another important thing is how all of this could potentially affect the students’ and staff’s privacy and other rights.
The integrated school policies and procedures should not end when students leave school. The practices and procedures addressing building access, emergency evacuation, and security personnel, for instance, should also account for after-school hours. They should also be relevant to athletic events, school board meetings, student performances, and other events that occur in the later hours or when school is off, like on weekends or in the summer.
Unlike the elements we already discussed, well-planned and consistent physical security school policies, rules, and procedures aren’t that big of an investment.
- Training, exercises, and drills
Regular training increases the effectiveness of the other school security elements, like how to use the technology or apply the policies. Training is valuable in increasing knowledge and awareness since everyone should know their respective roles and responsibilities.
Each school's safety and security training program should match the particular technologies it has installed to protect its spaces. The installed tech won't do much good if the parties don't know how to properly use, maintain, and update the school's physical security equipment.
Even though school safety and security training can be delivered in various ways, the entire school community should be involved. Teachers, administrators, students, security personnel, and external stakeholders, like the local police and fire first responders, all play their part.
Some schools choose a series of short online courses that prepare teachers and staff for emergencies, while others schedule practical drills and exercises periodically throughout the school year. Some states even mandate that schools implement these types of drills regularly. They state that giving students and staff regular opportunities to practice their responsibilities in emergencies is vital to ensure the overall system's functionality.
Still, the training won't be that effective if it is perceived as another wearisome task placed on the school community. It should be based on strategies that are easy to learn, remember, and use. When schools create too complex training regimens or don't prioritize needs, running security exercises and drills can become costly and time-consuming.
Evaluating school security
Considering the previously discussed physical security strategies and elements, it's time to evaluate your school's security. Here are the 4 crucial things you need to assess:
- Identify your school's key security issues, define expectations, and together with the stakeholders, reach a consensus on classroom security, lockdown strategies, and door access control functions.
- Estimate current perimeter and classroom security needs regarding key control or other keyless credentials, building access and time schedules, potential visitors and critical entry points, and lockdown requirements. You should also discuss the estimated timeline to complete security upgrades and agree on the security versus convenience issue.
- Now, you should be ready to recommend upgrades when it comes to perimeter security and the following openings: exterior, entrance, access, interior, and classroom.
- The last part is about expense planning, like reviewing the security upgrade budget and discovering if grant funding is available.
What is school access control?
Cloud-based access control systems, like Kisi, are the go-to solution for enhancing school safety and security. The system consists of software and 3 hardware parts:
The entry barrier
Physically blocks entry by people who shouldn’t access the school. The usual setup consists of a door secured by a magnetic or electric lock and a door reader that interacts with the users.
The credentials
Credentials are what users present to identify themselves as someone who is granted access. Cards, fobs, and keypads are the most common ones. Given that many students can easily lose, forget, or misplace them, Kisi also allows users to enter schools using their mobile phones.
The controller
The brain of the access control system. It connects all the parts and ensures that the access control software and hardware communicate seamlessly to provide an exquisite user experience.
The management software
Administrators use this to decide who can gain access, where, and when. Advanced systems like Kisi provide numerous options and benefits, like remote management, intrusion alerts, lockdown, and even customizing group access. The user-friendly software is essential for both admins and end-users to have a positive experience.
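To make the who/where/when idea concrete, here is a toy sketch of the kind of rule a management layer evaluates. The groups, doors, and time windows below are invented for illustration and have nothing to do with Kisi's actual data model or API:

```python
from datetime import time

# Illustrative rules: (group, door) -> allowed time window
ACCESS_RULES = {
    ("basketball_team", "gym"): (time(16, 0), time(20, 0)),
    ("teachers", "teachers_lounge"): (time(7, 0), time(18, 0)),
}

def is_access_allowed(group: str, door: str, at: time) -> bool:
    """Grant access only if a rule exists and the time falls in its window."""
    window = ACCESS_RULES.get((group, door))
    if window is None:
        return False
    start, end = window
    return start <= at <= end

print(is_access_allowed("basketball_team", "gym", time(17, 30)))             # True
print(is_access_allowed("basketball_team", "teachers_lounge", time(17, 30)))  # False
```

The value of a cloud-based system is that an administrator edits rules like these in a dashboard once, and every door controller enforces them immediately.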
How can access control help strengthen school safety and security?
- Manage all locations and entry points from a single dashboard
Managing a school with dozens of classrooms, offices, storage rooms, laboratories, and other spaces that require limited access is practical and secure only with cloud-based access control.
A modern access control system will also integrate with your surveillance system and provide options like lockdown, capacity management, alarms, and reporting on the entry events. Admins get to do all this from one screen and don't even have to be on site.
- Improve entry point management
Modern access control solutions allow admins to manage access more effectively while improving entry point security. This way, schools can do the once impossible: monitor and manage all the entries without having dedicated security personnel at every door. The access system can even communicate security issues, like breaches, empowering a quick response.
Admins can customize access permission, so people will only have access to the needed places. The specific pupil groups, maintenance staff, or teachers don’t have the same access needs. For instance, only the teachers need access to the teacher’s lounge.
- Lock spaces with a click
Schools often encourage faculty and students to shelter in place (remain in their classrooms or offices) in the case of an incident. By implementing modern access control, staff members don’t have to rush to lock the doors but can focus on keeping an eye on their students. Even the unoccupied rooms can get locked remotely, easing any potential search efforts.
With systems like Kisi, you can put your entire facility on lockdown or lock individual doors with a simple click. You can rest assured that the classrooms, offices, or other rooms where your people are sheltering will be locked.
- Integrate with other systems
Moving your access control system to the cloud means you get to synchronize it with the rest of your systems. Integrating your security cameras allows you to monitor and analyze the crucial entry points to better respond to potential threats. The result – improved response times and prevented attacks, especially when combined with the alerts your system will send.
Integrating with other directories your school uses makes your staff more efficient as well. You also get comprehensive data to analyze and add to the hours of manual labor you get to save.
- Secure future
The school access control should be flexible and scalable to ensure longevity. Cloud-based access control, like Kisi, scales with you. Adding more physical locations, like campuses, classrooms, or offices, and keeping up with the ever-changing staff and students is a breeze.
As a product manufactured in the USA and engineered in Sweden, Kisi balances security, stability, and tech advancements. The dedicated product team works on constant product updates, delivered to our client as over-the-air updates. This way, you won't experience any compliance issues. And in case you need access control support, you won't have to budget for expensive contractors. The support team is here to address any concerns and issues.
Students, staff, and school access control
Technology makes our lives not only safer but more convenient as well. Keyless and mobile access control solutions can make life easier for students and staff. Given we don’t leave the house without our mobile phones, it can provide the most convenient and secure access and an extra layer of control for the school stakeholders.
Access control systems like Kisi empower school admins to control who, when, and where gains access and manage it remotely from a single pane of glass. The school is more flexible, people feel safer, and costs drop, thanks to streamlined school access management.
Your school can easily differentiate and grant specific entry privileges to students, teachers, support staff, and visitors. For instance, the basketball team can have access to the gym during their training sessions even after regular school hours without the need for additional staff.
Your next one-off events won’t be a security concern and will be a breeze to organize. Instead of hiring extra staff, sharing keys, or issuing extra cards or fobs, you can send visitors a link that grants them access during specific hours and only at the event’s location.
School safety and security funding and grants
The schools' need to further improve their safety and security with access control systems like Kisi is evident. Some schools, unfortunately, might struggle with their budget.
Many states, recognizing the importance of school security, are allocating millions of dollars so schools can equip themselves better with security cameras or access control systems. Lately, even federal Covid-19 relief funds are redistributed for security, emphasizing its significance.
The funds' availability and grant programs tend to change yearly. So schools need to be diligent if searching for private, federal, state, or local funds. Using the Grants Finder Tool and following some best practices can take you closer to securing a security grant. You can also ask your state's law enforcement agencies and Department of Education for directions.
Some agencies offer a way to register so you can get notified of grant programs via email. Your Chamber of Commerce might also have information about potential grant programs, foundations, or other funding opportunities. All these places will help you identify suitable programs. What's left is for you to write and submit grant proposals.
Open the door to a safer and more secure school
There's nothing more important than the safety and security of school students and staff. Even the most talented and motivated people can't reach their potential when this basic need isn't met.
Modern access control can positively reinforce all school physical security elements and strategies while improving the user experience. It can bring back the feeling of a free and stimulating environment that doesn't feel like a fortress.
If you already have a legacy access control system, you don’t have to invest in all new tech. With Kisi's Controller Pro 2, you get to move to the cloud and enjoy all its benefits without having to replace all your legacy hardware. Contact our team for more details and open the door to a more secure school and happier, motivated students and staff.
Learn about the inner workings of AES 256 encryption, symmetric cryptography, and the most effective encryption algorithm.
Before we get to AES 256 encryption, have you ever been curious about how the US government stores its nuclear codes?
It could be on a document in an Oval Office vault with the warning “EXTREMELY TOP SECRET.” Who knows? Maybe it’s tattooed on the president’s—never mind.
One thing that’s certain is that government secrets and military-grade information are encrypted using a variety of encryption protocols—AES 256 being one of them.
And the best part about it is that AES 256 isn’t a privilege of the state alone; it’s public software that you can use to reinforce your data, OS, and firmware integrity.
This article will tell you everything you need to know about your data, AES 256 and everything in between.
It will also explain why AES 256 is the closest your organization will ever get to a data security magic wand (and why it’s not one).
What is AES 256?
Advanced Encryption Standard (AES) 256 is a virtually impenetrable symmetric encryption algorithm that uses a 256-bit key to convert your plain text or data into a cipher.
That’s a lot of jargon but don’t despair—it gets a lot easier from here.
How Does the AES 256 Encryption Work?
To understand the intricacies of AES 256 encryption, it helps to take a detour through the operation of basic encryption protocols like DES.
Encryption is an excellent option for mitigating file sharing security risks. It works by taking plain text or data and using a key to convert it into a code called a cipher. Cipher code is unreadable and effectively indecipherable text that neither humans nor computers can make sense of without the key.
With that out of the way, let’s delve into the complicated workings of AES 256 encryption. Hold your hats because this is where things get interesting. AES works in the following steps:
- Divide Information Into Blocks
The first step of AES 256 encryption is dividing the information into blocks. Because AES has a 128-bit block size, it divides the information into 4x4 arrays of 16 bytes each.
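A short sketch of this first step, mapping a 16-byte block onto the 4x4 state that the later steps operate on (AES fills the state column by column):

```python
def bytes_to_state(block: bytes):
    """Arrange a 16-byte block into the 4x4 AES state, column by column."""
    assert len(block) == 16, "AES works on 128-bit (16-byte) blocks"
    return [[block[r + 4 * c] for c in range(4)] for r in range(4)]

for row in bytes_to_state(bytes(range(16))):
    print(row)
# → [0, 4, 8, 12]
#   [1, 5, 9, 13]
#   [2, 6, 10, 14]
#   [3, 7, 11, 15]
```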
- Key Expansion
The next step of AES 256 encryption involves the AES algorithm recreating multiple round keys from the first key using Rijndael’s key schedule.
- Adding the Round Key
In round key addition, the AES algorithm adds the initial round key to the data that has been subdivided into 4x4 blocks.
- Byte Substitution
In this step, each byte of data is substituted with another byte according to a predefined substitution table (the S-box).
- Shifting Rows
The AES algorithm then proceeds to shift rows of the 4x4 arrays. Bytes on the 2nd row are shifted one space to the left, those on the third are shifted two spaces, and so on.
- Mixing Columns
Still with us? The AES algorithm uses a pre-established matrix to mix the 4x4 columns of the data array.
- Another Round Key Addition
The AES algorithm then repeats the second step, adding a round key once again, and then runs through the whole cycle for the required number of rounds.
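The block layout and row-shifting steps above can be sketched in a few lines of Python. This is an educational toy only, not a secure or complete AES implementation; real systems should always use a vetted cryptography library.

```python
# Educational sketch of two AES building blocks: laying out a 16-byte
# block as a 4x4 state and applying the ShiftRows transformation.
# This is NOT a secure implementation -- use a vetted library in practice.

def to_state(block: bytes) -> list:
    """Arrange 16 bytes column-major into a 4x4 state, as AES does."""
    assert len(block) == 16
    return [[block[row + 4 * col] for col in range(4)] for row in range(4)]

def shift_rows(state: list) -> list:
    """Rotate row r of the state left by r positions (AES ShiftRows)."""
    return [row[r:] + row[:r] for r, row in enumerate(state)]

block = bytes(range(16))          # bytes 00 01 02 ... 0f
state = to_state(block)
print(state[1])                   # row 1 before: [1, 5, 9, 13]
print(shift_rows(state)[1])       # row 1 after:  [5, 9, 13, 1]
```

Running this shows row 0 untouched and each lower row rotated one step further, exactly as the step description above says.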
What Makes AES 256 Special and Why Should You Use It
That’s enough blabber and technical jargon for today; let’s get to what brought you here in the first place.
Presumably, you want to know what makes AES 256 special, what distinguishes it from the rest and what it brings to your table.
AES 256 brings a lot to your cyber security strategy, including:
1. AES 256 is Unbreakable by Brute Force
Saying that it’s impossible to crack AES encryption is an overstatement rather than a fact. In theory, a combination of the brightest minds, the most powerful computers, and sheer hacking talent could brute-force AES encryption.
But it would take, get this, on the order of 10^18 years to do that.
This makes AES 256, and the data you protect with it, unbreakable for the foreseeable future. Take that, hackers.
However, this is on the condition that you don’t share your cryptographic keys with anyone, your dog included.
2. AES 256 Uses Symmetric Keys
As you’ve seen, encryption uses a cryptographic key to turn your plain text and data into indecipherable and unreadable text.
It then uses a corresponding key to decrypt that ciphertext back into readable plain text. There are two types of keys in encryption:
- Symmetric keys.
- Asymmetric keys.
A symmetric key is a type of encryption where you use the same key for encrypting and decrypting data.
On the other hand, asymmetric keys use different keys for encrypting and decrypting data. If you’re wondering which one of two is better, there isn’t—both have their uses.
AES 256 is symmetric-based encryption. Not just that, it’s the most capable symmetric encryption available today. Some of the benefits of using symmetric keys are:
- Faster encryption speed.
- Well suited to internal or organizational data.
- Excellent for encrypting large volumes of data.
- Lower computational power requirements.
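The defining symmetric property, one shared key for both directions, can be illustrated with a stdlib-only toy cipher. To be clear, this is not AES and offers no real security; it only demonstrates that the very same key both encrypts and decrypts.

```python
import hashlib

# Toy demonstration that a symmetric cipher uses ONE shared key for both
# directions. The keystream here is just SHA-256 counter output -- it
# illustrates the symmetric property only and is not AES or secure.

def keystream(key: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR is its own inverse, so one function handles both directions.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = b"shared-secret-key"
ciphertext = xor_cipher(key, b"internal payroll data")
print(ciphertext != b"internal payroll data")   # True: unreadable
print(xor_cipher(key, ciphertext))              # the same key decrypts
```

Note that decrypting with any other key yields garbage, which is exactly why key management matters so much.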
3. Stopping a Security Breach from Turning into a Data Breach
If you go around reading breach blogs and reports, you might get the impression that a breach is the end of the world for any business.
You’re not entirely wrong. According to statistics, 60% of small businesses that face a cyber-attack are out of business within six months.
Nonetheless, there is a lot that goes on between your systems getting breached and you going out of business. It all comes down to:
- How soon you identify the security breach.
- Your ability to contain the breach and prevent its spread.
- The contingencies you have in place.
AES 256 encryption helps you stop a security breach from spreading to your data. Take the worst-case scenario and assume that hackers compromise your infrastructure.
With encryption, the chances of this security breach turning into a data breach are significantly reduced.
That’s one less thing to worry about: your systems may be on fire, but your data remains in safe hands. This reduces the chances of:
- Compliance issues.
- Data theft.
- Ransomware attacks.
4. AES 256 is the Most Secure AES Encryption Layer
Remember the complex encryption process you read about earlier? It doesn’t happen in just a single round.
It happens 10, 12, or 14 times, depending on the key length.
This is because we haven’t mentioned two other layers in the AES protocol. They are AES 128 and AES 192.
Both AES 128 and AES 192 are extremely capable encryption layers. So capable that back in 2012, there was an argument about whether AES 256 was necessary given the capability of AES 128.
It’s crazy how fast things change.
In 2022, there is no longer much of a discussion. It’s clear that quantum computers are on the horizon, and AES 256 is the only way to base your secure file transfer infrastructure on a future-proof framework.
By choosing AES 256, you’re going for the gold standard, the best in the game, military-grade and future-proof encryption layer.
What It Will Take for a Hacker to Crack Your AES Encryption
For a hacker to gain access to your data protected with AES 256 encryption, they would have to try up to 2^256 combinations, even with a pool of the most powerful computers in the world.
To put this into perspective, that is a number comparable to the estimated count of atoms in the observable universe.
And if by some miracle, a hacker is able to decrypt an AES 256 and wreak havoc on your systems, that will be the second most impressive feat they achieve in their lifetime.
Why? Because they’ll have to live a billion years first to get even close.
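The arithmetic behind these claims is easy to check. The guess rates below are illustrative assumptions (a billion machines, each trying a trillion keys per second), not benchmarks of real hardware:

```python
# Back-of-envelope arithmetic behind the brute-force claim. The guess
# rates are illustrative assumptions, not measurements of real hardware.

keyspace = 2 ** 256
guesses_per_second = 10 ** 12          # assume a trillion guesses/sec/machine
machines = 10 ** 9                     # assume a billion such machines
seconds_per_year = 60 * 60 * 24 * 365

# On average, a brute-force search finds the key after half the keyspace.
years = keyspace / (guesses_per_second * machines * seconds_per_year) / 2
print(f"keyspace: {keyspace:.3e}")     # ~1.158e+77 keys
print(f"expected years: {years:.3e}")  # astronomically large
```

Even with those generous assumptions, the expected search time dwarfs the age of the universe by dozens of orders of magnitude.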
Can AES Work in Isolation? No, and Why You Need Managed File Transfer (MFT)
This is one of those few data security pieces that don’t warn of impending doom. It might even have left you with a little hope and the feeling that the good guys are winning for once.
You’re not wrong. AES encryption is probably the best thing to happen to file security since the firewall.
But there’s a bigger picture; AES encryption cannot exist in isolation. In fact, your AES system encryption is only as strong as its environment and the infrastructure surrounding it.
Hackers may not be able to brute force your AES 256 algorithm, but they don’t give up that fast. They can (and will) still try to:
- Gain access to your AES 256 cryptographic keys.
- Leverage side-channel attacks such as mining leaked information.
- Access your data right before and after encryption.
That being said, you need a data security ecosystem around your AES-256 encryption, and for that, look no further than Managed File Transfer (MFT).
The MFT and AES 256 combination is akin to the Brady-Gronkowski duo. In addition to the strength of your encryption, MFT will bring:
- Strict access control so that no one gets hold of your cryptographic keys.
- Multi-Factor Authentication to prevent unauthorized access to your AES infrastructure.
- Real-time visibility and reports into file access.
To protect your cloud data in transit and at rest, you need both AES 256 encryption and Managed File Transfer (MFT). You need a system that brings you the best of two worlds, and this is where MOVEit comes in.
With MOVEit, you get AES 256 encryption, multi-factor authentication (MFA), strict access controls, and much more.
For more information, view our MOVEit Transfer Datasheet. | <urn:uuid:67d56f36-ac94-4efb-a7f6-84256eecdf8d> | CC-MAIN-2022-40 | https://www.ipswitch.com/blog/use-aes-256-encryption-secure-data | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00103.warc.gz | en | 0.901349 | 2,113 | 3.65625 | 4 |
Keeping pace with today’s complex cybersecurity domain, security teams leverage different tools and technologies to build a dynamic security posture and gain deeper visibility into the threat landscape. While some organizations rely on security operations centers (SOCs), others build cyber fusion centers, taking a strategic approach to integrating teams, technologies, and processes.
The Rise of Cyber Fusion
Even though organizations today leverage several innovative technologies to safeguard their networks and systems, many still struggle to get a hold of the security-related information within their own four walls. About 30 years ago, military intelligence agencies came up with the concept of cyber fusion. They built physical cyber fusion centers to collaborate with different intelligence communities and gain a deeper understanding of the threat ecosystem. Now, cyber fusion has started gaining momentum in the cybersecurity domain, and modern-day organizations have started embracing this concept.
What is a Cyber Fusion Center?
In the cybersecurity landscape, cyber fusion centers have evolved to be a next-generation approach that combines all security functions such as threat intelligence, security orchestration, security automation, incident response, threat response, and others into a single unit in a collaborative manner. This proactive approach bridges the gap between discrete teams through intelligence synthesis and inter-team collaboration.
Certainly, the concept of cyber fusion is not new. What is new is the approach to building a virtual cyber fusion center (vCFC), and it all depends on how effectively organizations can integrate their technologies, processes, and people to defend their applications and systems against threats.
Why is Cyber Fusion Necessary?
For faster incident and threat response, organizations today strive for real-time threat intelligence sharing and robust communication with different security teams. All this can be achieved by cyber fusion-driven security that automatically ingests machine- and human-readable threat intelligence from multiple sources and brings discrete security teams together to rapidly detect, prioritize, and respond to threats. As a result, security teams can make informed decisions and take appropriate actions, such as automatically responding to a sudden incident in real time.
Cyber fusion has the ability to combine threat intelligence with other security functions such as threat hunting, vulnerability management, and incident response to detect, manage, and respond to threats. This allows incessant sharing and exchange of threat intelligence among different teams and strengthens several security processes, creating collaboration and visibility across security teams.
Several aspects make cyber fusion necessary in today’s complex threat scenario. One of them is its capability to leverage avant-garde technologies such as artificial intelligence and machine learning to work on threat data collected from internal and external sources. Internal sources include SIEM, UEBA, antivirus, IDS/IPS, and EDR tools, while external sources comprise RSS feeds, threat intel reports, research reports, regulatory advisories, and more. Powered by state-of-the-art security orchestration, automation, and response (SOAR) capabilities, cyber fusion helps security teams with threat intelligence automation, incident response management, vulnerability management, triage and case management, and malware management. Overall, SOAR capabilities enhance the operational efficiency and effectiveness of security teams, keeping them ahead of threat actors.
How can a Cyber Fusion Center be Helpful within your Organization?
On a single day, security teams collect heaps of threat data from disparate sources that need to be correlated in order to make strategic decisions. Cyber fusion makes this correlation possible. The unique capability of a cyber fusion center is to connect the dots between the threat information gathered from multiple sources and gain insights into threat actors’ tactics, techniques, and procedures (TTPs). By connecting the dots, security teams can proactively examine threats, develop contextual links, and comprehend adversary behavior. Thus, organizations need to move beyond their theoretical knowledge and build cyber fusion centers to understand and respond to the prevailing threat landscape in real time.
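As a minimal illustration of this "connecting the dots," the sketch below correlates indicators of compromise across feeds and escalates those seen by more than one source. The feed names and IP addresses are made up for the example:

```python
from collections import defaultdict

# Minimal sketch of cross-source correlation: indicators reported by
# multiple independent feeds earn higher confidence. Feed data is invented.

feeds = {
    "internal_siem": ["198.51.100.7", "203.0.113.9"],
    "vendor_report": ["198.51.100.7", "192.0.2.44"],
    "isac_sharing":  ["198.51.100.7", "203.0.113.9"],
}

sources = defaultdict(set)
for feed, indicators in feeds.items():
    for ioc in indicators:
        sources[ioc].add(feed)

# Indicators seen by 2+ sources are escalated for analyst review.
escalated = sorted(ioc for ioc, seen in sources.items() if len(seen) >= 2)
print(escalated)   # ['198.51.100.7', '203.0.113.9']
```

Real platforms add context such as TTPs, timestamps, and confidence scores, but the core move is the same: fuse observations from many sources into one picture.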
Once an organization builds a cyber fusion center, its security teams can collaborate remotely and take a collective defense approach to tackling common threats. The collective defense approach enables all the security teams to collaborate on a single integrated, modular platform and drive improved decision-making in incident response. It also enables security teams from different organizations to collaborate with each other through automated threat intelligence sharing. Unlike traditional, big-budget SOCs, which can stagger in unforeseeable black-swan events like the COVID-19 pandemic, cyber fusion centers are not only cost-effective but efficient in addressing the complex cybersecurity landscape. In a nutshell, upgrading your security operations center to a cyber fusion center can significantly enhance your organization’s security posture and speed up your response to threats.
What are the main goals of a Cyber Fusion Center?
In an organization, each security team employs different tools and processes, which leads to siloization of security operations. The goal of a cyber fusion center is to eliminate the siloization of independently working teams and bring them together under one roof. Cyber fusion breaks down these silos through automatic, organized, real-time information sharing among teams working in a collaborative manner.
The purpose of a cyber fusion center is to allow organizations to collaborate through strategic and technical threat intelligence sharing in real-time and render a collective defense approach to threat response. This strengthens the collaboration between large enterprises, government agencies, CERTs, MSSPs, information sharing communities such as ISACs/ISAOs, and other stakeholders.
Cyber fusion centers aim to gather contextual intelligence on complex threat campaigns, identify potential attackers’ trajectories, and determine latent threat patterns by connecting the dots between isolated threats, incidents, vulnerabilities, malware, and other historical threat information. A cyber fusion center helps security teams generate relevant, consistent, and actionable threat intelligence that helps in accelerating the threat response process and break the cyber kill chain in a timely manner.
Automated Security Operations
Orchestrating and automating workflows across the security tools deployed in different environments, cloud or on-premise, becomes a daunting challenge for security teams. Modern-day SOAR platforms powered with cyber fusion capabilities support orchestration across different environments without requiring security teams to expose their networks to external traffic. Cyber fusion facilitates cross-functional and cross-environment orchestration, offering the scalability and flexibility required to connect all the security processes across an organization. This allows security teams to track and manage all their environments on a single platform.
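At its simplest, cross-tool orchestration is a pipeline of steps that an incident record flows through. The sketch below shows the idea with made-up step names and fields; it is not any vendor’s playbook API:

```python
# Minimal sketch of a cross-tool playbook: each step is a plain function,
# and the orchestrator threads an incident record through them in order.
# Step names, fields, and the lookup logic are illustrative only.

def enrich(incident):
    # Pretend threat-intel lookup keyed on the source address.
    bad = incident["src_ip"].startswith("203.0.113.")
    incident["geo"] = "known-bad ASN" if bad else "clean"
    return incident

def triage(incident):
    incident["severity"] = "high" if incident["geo"] == "known-bad ASN" else "low"
    return incident

def respond(incident):
    incident["action"] = "isolate-host" if incident["severity"] == "high" else "log-only"
    return incident

def run_playbook(incident, steps):
    for step in steps:
        incident = step(incident)
    return incident

result = run_playbook({"src_ip": "203.0.113.9"}, [enrich, triage, respond])
print(result["action"])   # isolate-host
```

Swapping a step here is one line, which is the flexibility that cross-environment orchestration is meant to deliver at scale.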
How does a Security Operations Center (SOC) function?
The present threat landscape is marked by a continually growing number of cybercriminals leveraging new and diverse techniques to exploit both organizations and individuals. Many companies adopt the monitor and response cybersecurity strategy to tackle these threats. The SOCs are primarily responsible for this strategy within an enterprise.
The SOC team’s role is to detect, identify, investigate, and respond to security incidents that could impact an organization’s infrastructure, services, and customers. Such teams detect and contain attacks or intrusions in the shortest time frame possible and reduce the impact, damage, and recovery costs of the incident. This is achieved by using a combination of technologies and streamlined processes for real-time monitoring and analysis of potentially suspicious behavior across networks and systems that could indicate a security incident or compromise. The SOC team generally works closely with an organization’s incident response team to address potential security risks or issues without delay. This remotely located, multi-disciplined workforce focuses on incident detection and response, monitors security operations, and handles the tactical and operational analysis of potential threats.
How does a Cyber Fusion Center Work?
A cyber fusion center is an advanced version of the SOC model that embodies detection, response, threat hunting, threat intelligence sharing, and data science. This entity is built to unify disparate teams within an organization, such as SecOps, IT operations, physical security, product development, and fraud, to boost overall threat intelligence, accelerate incident response, and reduce organizational costs and risks.
Essentially, a cyber fusion center focuses on developing coordination between several different but related teams to increase operational effectiveness, readiness, and response to cyber threats. This is accomplished through the collaborative and streamlined communication of tactical threat intelligence, relevant indicators of compromise (IOC), and analysis of potential threats/risks before they impact.
With teams working together, information and actions can be exchanged and shared among different teams in a multidirectional manner. As a result, an organization can witness better collaboration between teams and quickly identify and address pitfalls in the existing processes.
A cyber fusion model acts as a single source of truth for key decision-makers and stakeholders, enabling them to track all the vital metrics and build a shared goal concerning their security functions. With this model, organizations can leverage security orchestration and automation to support integrations between multiple tools. This helps security teams eliminate the loopholes in their existing processes and respond to threats quickly. Furthermore, this approach combines and examines all the threat data generated from disparate security tools in one place to derive high-confidence, actionable threat intelligence.
Cyber Fusion Center vs. Security Operations Center (SOC)
Both SOC and cyber fusion center models are designed to effectively improve an organization’s security incident detection and response capabilities. The monitoring capabilities of a SOC team give organizations the ability to better defend against incidents and intrusions, reduce mean time to response (MTTR), and stay on top of threats that could target their environments.
However, the cyber fusion centers offer a more proactive and unified approach to dealing with potential threats by bridging the gap between multiple teams through intelligence synthesis and inter-team collaboration. Moreover, they facilitate the fusion of strategic, tactical, and operational threat intelligence for rapid threat prediction, detection, analysis, and incident response.
While both SOCs and cyber fusion centers provide incident detection and response capabilities, the latter connects disparate teams and renders faster threat detection, analysis, and incident response. Contrary to SOCs, cyber fusion centers bring together multiple teams to work as a single entity with shared goals and real-time information on vulnerabilities, malware, and threat actors. Apart from containing all of the same features of a SOC, the cyber fusion centers are more cost-effective and adept at addressing today’s cybersecurity landscape.
When dealing with evolving cybercriminals and security threats, pervasive visibility enables organizations to identify suspicious patterns, quickly respond to them, and mitigate them more effectively. In a nutshell, cyber fusion centers and SOCs are closely connected entities of the incident response chain vital for an organization to gain greater visibility into its networks, systems, and posture against threats.
How are Cyware’s Cyber Fusion Solutions Different from Traditional SOCs?
With evolving threats and attackers’ TTPs, organizations need to embrace a proactive cybersecurity approach to stay ahead of the cybercriminals. By leveraging cyber fusion capabilities, organizations can reinforce their security posture to address the rising threats. Unlike its competitors, Cyware equips its customers with virtual cyber fusion solutions that allow them to build a cyber fusion center without replacing their existing SOC infrastructure. Cyware’s cyber fusion suite consists of modular integrated platforms:
- A mobile-enabled, automated threat alert aggregation and information sharing platform for real-time alert dissemination and improved situational awareness.
- A threat response automation platform that amalgamates cyber fusion with advanced orchestration and automation capabilities to address evolving threats in real time.
- An innovative threat intelligence platform (TIP) that automatically aggregates, enriches, and analyzes threat indicators.
- A TIP for growing security teams that comes with premium feeds, enrichment, and automation capabilities at a fraction of the cost of other TIPs.
- An exclusive threat intelligence processing and collaboration platform for ISAC/ISAO members to operationalize threat intelligence in a trusted sharing environment.
- An any-to-any, vendor-agnostic orchestration platform for connecting and automating cyber, IT, and DevOps workflows across cloud, on-premise, and hybrid environments.
Authentication as a service (AaaS) is becoming the extra level of security every company needs in the ever-evolving business landscape of remote work.
What is authentication as a service?
Authentication as a service (AaaS) is an emerging way for businesses to handle identity and access management (IAM) by offloading the complexities of management to a dedicated provider. It uses strong authentication methods and cloud computing and provides an overall better user experience.
What Is Authentication and How Can It Work as a Service?
Authentication is a complex process that combines several layers of functionality for a single purpose: to verify user credentials and allow access to system resources. Traditionally, authentication services are embedded into the systems they protect and require a large investment from either the system administrator or the company in terms of money, maintenance, time, and expertise.
That’s because implementing and using an authentication system calls for a deep knowledge of how to use that system correctly, how to manage and secure digital identities as part of that solution, and what to do when something goes wrong that could expose the private information of users and their accounts.
With the onset of widespread cloud infrastructure and computing, the notion of traditionally centralized functionality being offered “as a service” has become a model for more effective and efficient software and hardware infrastructure.
Software, infrastructure, operating systems, security, and storage: all of these features are being rebuilt and repackaged as cloud subscription services that businesses can essentially integrate into their existing operations. Authentication is no different, and providers are now offering authentication as a service (AaaS) to customers.
What is AaaS?
So, what is AaaS? Essentially, AaaS outsources identity management, authentication, and authorization to a cloud platform managed by a third-party vendor, yielding more optimal and secure systems. As such, it includes several different functions as part of its architecture:
- Identity Management (IDM)
- Authentication Management and Strategies
- Authorization and Access Control Systems and Policies
- Key and Certificate Management
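To make the division of labor concrete, here is a hedged sketch of one common AaaS pattern: the provider verifies credentials and issues a signed token, and the application only validates the signature and expiry. The token format, field names, and shared secret here are illustrative, not any particular vendor’s API:

```python
import base64
import hashlib
import hmac
import json
import time

# Sketch of one AaaS building block: the provider issues a signed token
# after verifying credentials, so the application only checks the
# signature and expiry. Format and names here are illustrative only.

SECRET = b"shared-with-provider"   # in practice, provisioned by the vendor

def issue_token(user, ttl=3600, now=None):
    now = time.time() if now is None else now
    payload = base64.urlsafe_b64encode(
        json.dumps({"sub": user, "exp": now + ttl}).encode())
    sig = base64.urlsafe_b64encode(
        hmac.new(SECRET, payload, hashlib.sha256).digest())
    return (payload + b"." + sig).decode()

def verify_token(token, now=None):
    now = time.time() if now is None else now
    payload, sig = token.encode().rsplit(b".", 1)
    expected = base64.urlsafe_b64encode(
        hmac.new(SECRET, payload, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None                       # tampered token
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return claims["sub"] if claims["exp"] > now else None

token = issue_token("alice")
print(verify_token(token))                # alice
```

Production deployments use standardized formats such as JWTs with asymmetric signatures, but the client-side responsibility is the same: verify, don’t re-implement authentication.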
Benefits of AaaS
Why are AaaS solutions more optimal? There are several reasons:
- Security: Authentication is intimately tied to compliance and security. Deploying and maintaining a system can be challenging, if not prohibitive, for many organizations who don’t have the expertise or resources to do so.
A managed solution allows client organizations to use strong authentication while a vendor dedicated to security and compliance can handle those complexities without distraction.
- Scalability: Cloud platforms scale, and cloud-based services are no exception. Client organizations using AaaS solutions can rely on that scalability as their businesses grow and shrink, with cloud-allocated resources matching their needs.
- Compliance: A dedicated AaaS provider will most likely work within one or several different industries, and as such, work to build solutions that meet regulations in those jurisdictions.
An authentication vendor handling IAM compliance is able to relieve their clients from managing compliance in specific industries (HIPAA, PCI DSS), for specific government jurisdictions (GDPR, SCA, PSD2) or for different frameworks (IAL2, AAL2, FIDO2).
- Variety: Since an AaaS vendor will dedicate significant resources to managing their solution, they will most likely have implemented several options for security. This can include multiple forms of multi-factor authentication (MFA), biometrics, identity proofing, and compliant physical identification measures.
One of the more vital implications of these benefits is that having an expert third party dedicate their efforts to authentication and identity management makes that aspect of your system more robust, scalable, and responsive to your needs. That being said, it is still important to vet your AaaS vendors before implementing their services and conduct audits of vendor relationships annually as a part of doing business.
How Does AaaS Support Security and Anti-Fraud Efforts?
In our modern cloud-driven society, IAM is a critical aspect of cybersecurity and data theft prevention. Hackers are regularly attacking cloud applications and resources through their systems to find that one weakness that could compromise the entire system.
Of course, this reality makes managing authentication and identity security even more critical than ever before. It isn’t enough, then, to state that your business doesn’t have the capacity to properly audit, implement, and maintain an IAM solution because too much relies on that solution protecting user data.
What Are the Consequences of Poor IAM Maintenance?
Consequences like identity theft, vulnerabilities due to shared hardware resources, and outright system destruction can all follow from one ill-managed weakness in an identity verification system.
Because an AaaS provider is, ideally, dedicated to developing secure technologies, they are uniquely situated to combat attacks. With a combination of advanced security measures, MFA features, and security specialists, an AaaS provider can prevent security issues better and more comprehensively than an organization managing such situations in-house.
Some of the advanced measures a third-party vendor and specialist can provide include the latest security technology, dedicated security consultants, and anomaly detection. The last measure, which involves developing risk profiles from user activities, can identify fraud issues before fraud occurs.
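A toy version of that anomaly detection idea: build a baseline from a user’s historical login hours and flag logins that deviate sharply. The history and threshold below are illustrative:

```python
import statistics

# Toy risk-profile anomaly detection: learn a baseline of a user's usual
# login hours, then flag logins far outside it. Data is illustrative.

history = [9, 10, 9, 11, 10, 9, 10, 11, 10, 9]   # usual login hours (24h clock)
mean = statistics.mean(history)
stdev = statistics.stdev(history)

def is_anomalous(hour, threshold=3.0):
    # Flag logins more than `threshold` standard deviations from the mean.
    return abs(hour - mean) / stdev > threshold

print(is_anomalous(10))   # False: business as usual
print(is_anomalous(3))    # True: a 3 a.m. login triggers review
```

Real providers score many signals at once (device, geolocation, velocity), but the principle of comparing behavior to a learned profile is the same.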
What Should I Look for in an Authentication Service Provider?
Whether it’s security, scalability, or fraud prevention, AaaS solutions bring a lot to the table for small businesses and enterprise companies. However, not all providers are created equal.
Here are some services and features you should consider in your next AaaS provider:
- Advanced MFA: Any AaaS provider should include a selection of authentication methods to build an identity authentication scheme that meets your business and compliance needs.
- Relevant Compliance: Many providers offer different types of compliance. This can include industry-specific compliance for frameworks like HIPAA or PCI DSS or identity-based compliance, like IAL2, AAL2 and FIDO2, to meet high-level identity proofing requirements. Select a provider that meets, and if possible, exceeds minimum requirements.
- Advanced Biometrics: Fingerprint scans and facial scans are pretty common in consumer and business technology. A solid AaaS solution provider should also offer more advanced biometrics like LiveID and VoiceID which leverage facial features and voice recognition (respectively) to prove identity.
- Identity Proofing: Identity proofing, or the practice of using live and document-based verification, helps solve identity theft and fraud issues. With proofing, a solution can verify that a user attempting to access a specific account is who they say they are.
This approach helps address gaps in some common forms of authentication, namely those that allow individuals not associated with an identity to enter stolen credentials. A robust AaaS provider will have many options to support the onboarding of citizen identity from any country and with multiple workflow options.
- Secure Identity Management and User Ownership: Centralized identity management can serve as a honeypot for hackers while also presenting additional ethical issues regarding how users access and own their digital identities. Many identity management and AaaS providers are turning to decentralized storage using blockchain technology to address these challenges.
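For a concrete taste of the MFA methods mentioned above, the one-time passcodes used by most authenticator apps follow the HOTP/TOTP standards (RFC 4226 / RFC 6238), which can be sketched in a few lines. Production code should use a maintained library with constant-time checks; this is for illustration:

```python
import hashlib
import hmac
import struct
import time

# Minimal HOTP/TOTP (RFC 4226 / RFC 6238) sketch -- the one-time-passcode
# math behind many MFA apps. Illustrative only; use a maintained library.

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, period: int = 30, now: float = None) -> str:
    # TOTP is just HOTP with the counter derived from the current time.
    now = time.time() if now is None else now
    return hotp(key, int(now) // period)

secret = b"12345678901234567890"                  # RFC 4226 test key
print(hotp(secret, 0))                            # 755224 (RFC 4226 vector)
```

Because the code depends only on the shared secret and the clock, the server and the app compute it independently, with nothing secret sent over the wire.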
Third-party authentication and identity management are revolutionizing how we secure our systems and protect user data. But many providers aren’t taking the necessary steps to address challenges now and in the future.
That’s because our modern conception of security has to change. While a step in the right direction, MFA and biometrics aren’t enough to address the increasingly sophisticated attacks we face every day.
1Kosmos is changing how we approach authentication and ID management. We combine advanced biometrics, identity proofing, and frictionless user experiences to mitigate the primary contributors to unsecure systems: poor technology and poor user cyber hygiene.
Furthermore, we leverage blockchain technology and mobile networks to provide secure, private, and decentralized identity management that resists cyberattacks and places digital ID ownership back in the user’s hands.
Towards this goal, 1Kosmos provides the following features:
- Private Blockchain: 1Kosmos protects personally identifiable information (PII) in a private blockchain and encrypts digital identities in secure enclaves only accessible through advanced biometric verification. Our ledger is immutable, secure, and private, so there are no databases to breach or honeypots for hackers to target.
- Identity Proofing: BlockID includes Identity Assurance Level 2 (NIST 800-63A IAL2), detects fraudulent or duplicate identities, and establishes or reestablishes credential verification.
- Streamlined User Experience: The distributed ledger makes it easier for users to onboard digital IDs. It’s as simple as installing the app, providing biometric information and any required identity proofing documents, and entering any information required under ID creation. The blockchain allows these users more control over their digital identity while making authentication more straightforward.
- Identity-Based Authentication Orchestration: We push biometrics and authentication into a new “who you are” paradigm. BlockID uses biometrics to identify individuals, not devices, through credential triangulation and validation.
- Integration with Secure MFA: BlockID and its distributed ledger readily integrate with a standard-based API to operating systems, applications, and MFA infrastructure at AAL2. BlockID is also FIDO2 certified, protecting against attacks that attempt to circumvent multi-factor authentication.
- Cloud-Native Architecture: Flexible and scalable cloud architecture makes it simple to build applications using our standard API, including private blockchains.
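The immutability claimed for ledger-based identity stores comes from hash chaining: each record commits to the hash of the previous one, so tampering with history is detectable. A minimal, vendor-neutral illustration:

```python
import hashlib
import json

# Minimal hash-chain sketch: each record commits to the previous record's
# hash, so editing any earlier entry invalidates everything after it.
# This illustrates the immutability idea only, not any vendor's ledger.

def add_block(chain, data):
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"data": data, "prev": prev}, sort_keys=True)
    chain.append({"data": data, "prev": prev,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})
    return chain

def verify(chain):
    prev = "0" * 64
    for block in chain:
        body = json.dumps({"data": block["data"], "prev": prev}, sort_keys=True)
        if block["prev"] != prev or hashlib.sha256(body.encode()).hexdigest() != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = []
for event in ["enroll:alice", "verify:alice", "login:alice"]:
    add_block(chain, event)
print(verify(chain))                   # True
chain[0]["data"] = "enroll:mallory"    # tamper with history
print(verify(chain))                   # False: the chain no longer checks out
```

Production ledgers add consensus, access control, and encryption on top, but this chained-hash check is the mechanism that makes silent edits detectable.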
If you are ready to discover how digital identity and authentication innovations are changing security, watch our webinar on Authentication: Hope-Based vs. Identity-Based. Also, make sure to sign up for our newsletter to stay on top of 1Kosmos products and services. | <urn:uuid:3bb1a5ac-9449-48c3-b330-749c74dee75b> | CC-MAIN-2022-40 | https://www.1kosmos.com/authentication/authentication-as-service/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00103.warc.gz | en | 0.920627 | 2,041 | 2.71875 | 3 |
AI - How to Stop Hackers and Protect Yourself
Like most new technology innovations and developments, artificial intelligence (AI) is a double-edged sword.
On one hand, AI has led to smarter, more robust security for your network.
But in the wrong hands, AI can be used as a powerful tool for cyber crime.
Read on, you’ll learn how AI is stopping hacking and about adversarial AI – how the criminals are using it.
Will AI stop hacker attacks?
Analysing and improving your cyber security, and calculating risk, is now too large a job for human monitoring alone.
AI and machine learning (ML) are now critical within the information security landscape.
As organisations move away from traditional castle-and-moat security models and more towards edge computing (accelerated by the increase in remote working), cyber security is focusing less on perimeter security and more on detecting suspicious behaviour within systems.
AI cyber response
The problem with traditional cyber security is that once defences are breached, it can take an age for anyone to notice something has happened. In the meantime, data can be collected and stolen, and viruses passed on.
AI software works and learns within your systems. It learns how your various software works, how your data is normally managed and about user behaviour.
It then snowballs.
The more AI learns about your system, the better it protects you by creating algorithms that assess and alert your IT staff to any vulnerabilities and violations.
The best bit? AI can continually learn and adapt. It’s far more proactive than traditional methods of cyber security.
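The learn-then-alert loop described above can be made concrete with a toy example. This is only an illustrative sketch, not any vendor's detection engine: it learns a baseline from past daily login counts and flags values that drift more than three standard deviations from it.

```python
from statistics import mean, stdev

def build_baseline(history):
    """Learn what 'normal' looks like from past daily login counts."""
    return mean(history), stdev(history)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag any value more than `threshold` standard deviations from the mean."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma

# A week of ordinary activity, then one suspicious spike.
history = [98, 102, 97, 105, 99, 101, 100]
baseline = build_baseline(history)
print(is_anomalous(103, baseline))   # an ordinary day -> False
print(is_anomalous(450, baseline))   # a possible attack burst -> True
```

Real systems model many more signals (time of day, location, device), but the principle is the same: the more behaviour the model sees, the sharper its idea of normal becomes.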
Unfortunately, hackers are a clever bunch too.
They’ve also started using AI, but to learn about and infiltrate the networks and systems involved – this is referred to as adversarial AI. In addition, AI can teach malware how to behave in a way that appears normal, making it much harder to flag.
What is adversarial AI?
As noted above, adversarial AI is AI technology that’s used to cause harm, e.g., automated attacks and breaches.
For example, AI can create and use more convincing, natural language fitting the corporate culture of the target.
It can be used in the phone system, crawl social media accounts, attack chatbots and text messaging systems.
It can be used to mimic system components to gain access to sensitive data.
And, of course, it’s AI, so it can automatically learn and adapt, resulting in more successful attacks.
How AI is stopping criminal hacking.
You can see why adversarial AI is a growing concern.
Hackers use the same techniques as those defending against hacking – intelligent phishing, analysing behaviour, and the malware used can know when to hide.
Although AI can be used in the battle against cyber crime, it must also contend with the challenges facing the whole cyber security space:
- A huge attack surface
- The sheer number of devices per organisation
- Multiple endpoints and remote perimeters
- Lack of skilled security professionals
- Sheer volume of data
- Remote working
- Internet of Things
Even though the scale of challenges facing cyber security is huge, AI should be able to help with many of these issues.
It can provide benefits and new levels of intelligence to IT teams across the whole spectrum of cyber security, including:
- Threat exposure. AI can provide up-to-date knowledge of specific threats to help prioritise threats and risks.
- Manages effectiveness. AI can tell you where your security systems have strengths and vulnerabilities.
- Breach risk prediction. Using AI to predict how and where your network and systems are most likely to be breached, means you can put in place measures and processes to improve resilience.
- Incident response. AI can help you understand the causes of vulnerabilities to avoid future issues. It can help with fast responses and prioritisation.
Relying purely on human-based monitoring is a weakness in your cyber security.
AI is far more efficient and effective at monitoring, detecting, identifying and responding to threats automatically.
Hackers are becoming more sophisticated, and commonly used cyber security methods need a boost. It’s still early days, but AI looks like it will allow your cyber security teams to form powerful human-machine partnerships to protect your company now and in the future.
BlueFort Security can advise and guide you. Let us know your particular requirements and challenges. Just call 01252 917000, email firstname.lastname@example.org or use our contact form.
Learn how AI can help overcome the current cybersecurity skills shortage that many businesses are facing.
Find out more about both manual and automated penetration testing including the advantages and disadvantages of each.
An Access Point Name (APN) is a link between a mobile network and the internet. The device trying to connect to the internet needs to have this parameter configured and presented to the carrier, who can then decide which IP address to assign the device and which security method to use. The carrier is thus responsible for creating the network connection using the APN information.
Furthermore, it is good to know that APN is not only used for internet or private network connectivity but also for Multimedia Messaging Service (MMS).
An Access Point Name consists of two parts:
the network identifier
the operator identifier
The operator identifier in turn consists of two other parts:
Mobile Network Code (MNC)
Mobile Country Code (MCC)
As MNCs and MCCs are a whole different topic, we will not dive deeper into them right now. For everyone interested in knowing more, the GSMA Mobile Network Codes and Names Guidelines and Application Form gives a good overview.
APN itself can look like this: “internet.mnc012.mcc345.gprs”. But, it can also use a customized domain name from a Domain Name Service (DNS) operator. This will give you an APN like “terminal”, which for example is the most common APN used for 1oT SIM cards. A custom APN is linked with the same network and operator identifier parameters but is just translated by the DNS.
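The full-form APN above follows a fixed pattern: the network identifier, then the operator identifier built from the MNC and MCC, each zero-padded to three digits. A small Python sketch (the helper name is an illustrative assumption, not an operator API) shows how the parts combine:

```python
def full_apn(network_id: str, mnc: int, mcc: int) -> str:
    """Build a full-form APN from its two parts: the network identifier
    and the operator identifier (MNC and MCC, zero-padded to three digits)."""
    return f"{network_id}.mnc{mnc:03d}.mcc{mcc:03d}.gprs"

print(full_apn("internet", 12, 345))  # -> internet.mnc012.mcc345.gprs
```

A customized APN such as "terminal" hides this structure behind a DNS name, but it still maps to the same network and operator identifiers underneath.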
There are a couple of APN types depending on functionality, and several abbreviations that might confuse you. Let’s clear things up a bit. We can divide APNs based on whether they connect to a public or private network, and also based on how the IP address is assigned. Taking this into account, we get four different types:
Public APN - this is mostly referred to as just “APN”. A device connecting to the gateway with a public APN is given an IP address dynamically to mostly access the internet.
Public APN with static IP - the gateway assigns a static IP address to the device based on the available IP pool of the public network.
Private APN - this is also often mistaken for “APN with VPN”. A device that has a private APN configuration can be connected by the gateway to a company’s own internal network.
Private APN with static IP - the gateway assigns a static IP address to the device based on the available IP pool of the private network.
When we compare the Public APN and the Private APN, we can see that the latter doesn’t necessarily even need an internet connection. Private APNs ensure secure data handling by never allowing traffic to reach the public internet while staying on cellular network infrastructure.
A Private APN and a Virtual Private Network (VPN) are often mistaken for the same thing, but a VPN is an extra layer on top of the public internet that creates a secure communication channel.
A dynamic IP is not accessible to inbound connections, and therefore it cannot be used to initiate communication with the IoT or M2M device, as this IP address is not known until the device initiates the connection itself.
APNs with static IP are fully routable and it is possible to initiate communication externally.
Do I need to configure APN myself?
If you think about using your phone, you might not recall having to enter APN settings to establish a network connection or to use MMS. That’s because these settings are sent via SMS once the device logs onto a network. Some phones ask you to confirm these settings; others do it in the background without you noticing.
For IoT/M2M devices APN needs to be configured manually. This is because the software feature for automatic APN recognition usually has not been implemented by IoT/M2M cellular hardware manufacturers. There simply is not a need for it. That said, this is not a rule and you can find few exceptions like Telit 4G Cat-M and NB-IoT modules for example.
Can I get a custom APN for my business?
Telecoms can create custom APNs, but they typically only offer this service to huge multinational companies. 1oT also takes custom APN requests case by case, so make sure to contact us with any inquiries.
Need help with deciding on the right APN type?
We at 1oT can help you decide on the right APN type to use for your IoT/M2M project. Make sure to contact us with technical questions at email@example.com.
In order to learn more about security in IoT in general, we have written a thorough article about it, available here.
If you’re interested in how to get your IoT devices connected to cellular networks, contact our sales team and we’ll find the best solution for you.
People are becoming increasingly attached to device screens, with the average person in the UK spending six hours a day looking at some form of screen, research has found.
Almost a quarter spend more than 10 hours a day attached to a screen and 60% even look at two at once, according to research of more than 2,000 adults by OnePoll on behalf of Encore Tickets. Screens include smartphones, tablets, work computers and TV.
But these mobile-first lifestyles could have damaging long-term side-effects. Analysis of the wider research findings by University College London academic Kiki Leutner found that screen use is affecting people’s behaviour within relationships, harming concentration levels, and causing people to lose touch with the real world.
“The research shows that nine out of 10 of us say screens are a necessary part of everyday life and more than a third of people say they couldn’t live without screens,” she said. “This shows our increasing reliance on digital devices, but also supports the idea that people develop attachments towards devices such as their phone, which can be damaging.”
The research found that people even experience distress on separation from their devices. “Attributing this kind of attachment to an object can be damaging in the long term,” said Leutner. “People who have constant contact and validation from mobile devices may deepen their dependence on others, affecting both their behaviour and relationships.”
And it seems one screen is not enough, with three out of five people admitting to using a device while watching TV. Browsing the web, scrolling through social media and keeping up to date with work emails are the usual second-screen activities.
This is bad for concentration, said Leutner. “By using multiple devices, we may like to think we are multitasking, but actually we could be concentrating less,” she said.
As more and more everyday activities go digital, such as banking and retail, people will need to make the effort to spend time away from screens. According to Leutner, attending events such as theatre performances can help to reduce addiction to devices and its negative effects.
Important Places in India - Historical Places to visit in India (Part 1)
This article provides information on Important Historical Places in India along with detailed information about those important historical places of India.
List of Important Historical Places in India from Alphabet A - D
Abu, Mount (Rajasthan) : Hill station in Rajasthan; contains famous Dilwara Jain Temple and Training College for the Central Reserve Police.
Adam’s Bridge: A line of sandbanks and rocks, called Adam’s Bridge, very nearly joins Sri Lanka to India between two points, viz. the Mannar Peninsula and Dhanushkodi.
Adyar (Tamil Nadu) : A Suburb of Chennai, headquarters of the Theosophical Society.
Afghan Church (Mumbai) : It is built in 1847 known as St. John’s Church. It is dedicated to the British soldiers who died in the Sindh and Afghan campaigns of 1838 and 1843.
Aga Khan Palace : In Pune where Mahatma Gandhi was kept interned with his wife Kasturba Gandhi. Kasturba died in this palace.
Agra (Uttar Pradesh) : Famous for Taj Mahal, Fort, and Pearl mosque. Sikandra, the tomb of Akbar, is situated here. It is also a center of the leather industry.
Ahmednagar (Maharashtra) : It was founded by Ahmed Nizam Shahi. It is the district headquarters of Ahmednagar district. It is an industrial town well known for its handloom and small-scale industries.
Ahmadabad (Gujarat) : Once the capital of Gujarat. A great cotton textile center of India. The anti-reservation riots rocked the city in April 1985.
Ajmer (Rajasthan) : It has Mayo College and the tomb of Khwaja Moinuddin Chishti, which is a pilgrim center for Muslims; Pushkar Lake, a place of Hindu pilgrimage, is about two miles from here.
Aliabet : Is the site of India’s first off-shore oil well-nearly 45 km from Bhavnagar in Gujarat State. On March 19, 1970, the Prime Minister of India set a 500-tonne rig in motion to inaugurate “Operation Leap Frog” at Aliabet.
Aligarh (Uttar Pradesh) : Seat of Muslim University, manufacture locks, scissors, knives, and dairy products.
Allahabad (Uttar Pradesh) : A famous and important place of pilgrimage for Hindus, the confluence of three rivers - Ganges, Yamuna, and the invisible Saraswati. It is the seat of a University and trading center.
Alandi (Maharashtra) : Popularly called ’Devachi Alandi’, it is hallowed by its association with the saint Dhyaneshwar, the author of ’Dhyaneshwari’, who lived and attained Samadhi here at the age of twenty-one. Two fairs are held annually, one on Ashadha Ekadasi and the other on Karthikai Ekadasi.
Amber Palace : Deserted capital near Jaipur (Rajasthan) containing the finest specimens of Rajput architecture.
Almora (Uttaranchal) : This city is on Kashaya hill. The clear and majestic view of the Himalayan peaks is breathtaking. The woolen shawls of Almora are very famous in the region. It is a good hill resort.
Amarnath (Kashmir) : 28 miles from Pahalgam, and is a famous pilgrim center of Hindus.
Amboli (Maharashtra) : Nestling in the ranges of Sahyadri, Amboli is a beautiful mountain resort in the Ratnagiri district. The climate is cool and refreshing; and an ideal place for a holiday.
Amritsar (Punjab) : A border town in the Punjab, sacred place for Sikhs (Golden Temple), scene of Jallianwala Bagh tragedy in April 1919. The 400th anniversary of Amritsar was celebrated with great gusto in October 1977. The city was founded by Shri Guru Ram Dass, the 4th Guru of Sikhs.
Arikkamedu (Puducherry) : It is one of the archaeological sites. It attests to the trade relationship between the Tamils and the Romans (Yavanas).
Arvi (Maharashtra) : Near Pune, India’s first satellite communication center has been located here.
Ashoka Pillar (Madhya Pradesh) : It was erected by Emperor Ashoka. It is now the official symbol of Modern India and the symbol is four back-to-back lions. In the lower portion of the column are representations of a lion, elephant, horse, and bull. The pillar stands about 20 m high.
Aurangabad (Maharashtra) : It is one of the important towns in Maharashtra. The tomb of Emperor Aurangzeb here attracts many tourists. The Ellora and Ajanta caves are reached from here.
Auroville (Puducherry) : It is an international township constructed near Pondicherry with the help of UNESCO.
Avadi : Situated at Chennai in Tamil Nadu, it is known for the government-owned Heavy Vehicles Factory. Vijayanta and Ajit tanks are manufactured here.
Ayodhya (Uttar Pradesh) : The birthplace of Rama is situated on the banks of the holy river Saryu. The famous ’Babri Masjid’, built at the birthplace of Rama by the Mughal rulers in the 16th century, was taken over by the Hindus after 400 years.
Badrinath (Uttarakhand) : It is a place of pilgrimage noted for the temple of Lord Vishnu for the Hindus, near Gangotri Glacier in the Himalayas.
Bahubali (Maharashtra) : A pilgrim center for jains, of both Svetambar and Digambar Jains; there is a giant idol of Shree Bahubali the son of Bhagwan Adinath, the first Tirthankar.
Bangalore (Karnataka) : It is the capital city of Karnataka State and an important industrial center. The places worth seeing are Vidhana Saudha, Lal Bagh gardens, etc. The BHEL, HAL, IIM are situated here.
Barauni (North Bihar) : Famous for a big oil refinery.
Bardoli (Gujarat) : Bardoli in Gujarat State has occupied a permanent place in Indian History for a no-tax payment campaign launched by Sardar Vallabhbhai Patel against British rule.
Baroda (Gujarat) : Baroda, (Vadodara) the capital of former Baroda State is one of the main towns in Gujarat State. Laxmi Vilas Palace is a tourist attraction.
Belur (West Bengal) : Near Calcutta, famous for a monastery founded by Swami Vivekananda; a beautiful temple dedicated to Shri Ramakrishna Paramhansa. It is also known for the paper industry. There is another place of the same name in Karnataka, it is a famous pilgrim center known for Channa Keshava Temple.
Belgaum (Karnataka) : It is a border town in Karnataka State. It has remained a place of a dispute between Maharashtra and the Karnataka States.
Bhakhra (Punjab) : It is a village in Punjab State where the Bhakra Dam has been constructed across the river Sutlej in a natural gorge just before the river enters the plains 80 km upstream Ropar.
Bhilai (Chhattisgarh) : It is known for the gigantic steel plants set up with the help of Russian Engineers.
Bhimashankar (Maharashtra) : One of the five Jyotirlingas in Maharashtra is at Bhimashankar. The beautiful Shiva temple here was constructed by Nana Parnavis the ancient statesman of the Peshwas.
Bhopal (Madhya Pradesh) : Capital of Madhya Pradesh. MIC gas leaked out from the Union Carbide factory in December 1984, and more than 3000 persons died. It was the worst industrial disaster in the world.
Bhubaneswar (Orissa) : It is the capital city of Orissa. Lingaraja Temple is worth seeing.
Bijapur (Karnataka) : It was the capital of the old Adil Shahi Sultans of Bijapur. Gol Gumbaz, the biggest tomb in India, constructed here, contains the famous whispering gallery. The town is rich with the remains of palaces, mosques and tombs.
Bodh Gaya (Bihar) : It is situated six miles south of Gaya in Bihar State. Gautama Buddha attained enlightenment here under the peepal tree on a full-moon night in the month of Vaisakha.
Bokaro (Jharkhand) : The fourth and the biggest steel plant is here.
Buland Darwaza (Uttar Pradesh) : It is the Gateway of Fatehpur-Sikri built by Akbar. This is the highest and the greatest gateway in India. It was erected to commemorate the victorious campaign of Akbar in the Deccan in 1602 A.D.
Bull Temple (Karnataka) : It is situated near Bugle Hill and houses a 6.2 m (20 ft) high stone monolith of the Nandi Bull. The Bull is carved out of a single stone.
Chandernagore (West Bengal) : Situated on the river Hooghly. It was previously a French settlement. Now it has been merged with the Indian Union.
Chennai (capital of Tamil Nadu) : It is the third-largest city in India. Known for Fort St. George, the Light-house, St Thomas Mount, and the Integral Coach Factory.
Chandigarh (Punjab & Haryana) : Chandigarh, the joint capital of the States of Punjab and Haryana, is a planned and beautiful city. It is situated at the foot of the Himalayas. It was designed by Le Corbusier.
Cherrapunji (Meghalaya) : It is the place of the heaviest rainfall. It receives 426” of rain yearly.
Chidambaram (Tamil Nadu) : It is a town in the South Arcot district of Tamil Nadu. It is famous for its great Hindu Siva Temple dedicated to Lord ’Nataraja’, the cosmic dancer. It is the seat of ’Annamalai University’ founded in 1929. The name of the town comes from Tamil ’Chit’ plus ’Ambalam’ - the atmosphere of wisdom.
Chilka Lake (Orissa) : It is the Queen of Natural Scenery in Orissa, though separated from the Bay of Bengal by a long strip of sandy ridge, exchanges water with the sea. It is an excellent place for fishing and duck shooting.
Chittaranjan (West Bengal) : It is famous for locomotive works. Railway engines are manufactured here.
Chittorgarh (Rajasthan) : It was once the capital of Udaipur. It is known for the Tower of Victory built by Rana Kumbha and Mira Bai Temple.
Chowpathy Beach (Mumbai) : A popular beach with Lokmanya Tilak and Vallabhbhai Patel statues where the political meetings for freedom struggle took place, now the coconut day celebration and Ganesh immersion take place.
Chusul (Ladakh) : It is situated in Ladakh at a height of about 14,000 feet. Chusul is perhaps the highest aerodrome in India.
Coimbatore (Tamil Nadu) : It is famous for Textile Industry. The Government of India Forest College is situated here.
Courtallam (Tamil Nadu) : Adjoining Tenkasi, three miles to its south, it is a common man’s health resort, famous for its waterfall and a good summer resort.
Cuttack (Odisha) : It is the oldest town and was the capital of Odisha from the medieval period to the end of British rule. The city is noted for fine ornamental work in gold and silver.
Dakshineswar (Kolkata) : It is at a distance of about five miles from Calcutta where Swami Vivekananda was initiated into religious life by Swami Ramakrishna Paramhansa.
Dalal Street : Stock exchange Market in Mumbai.
Dalmianagar (Bihar) : Cement manufacturing.
Dandi (Gujarat) : It is famous for Salt Satyagraha (Dandi March) staged by Mahatma Gandhi in 1930.
Darjeeling (West Bengal) : Famous for tea, orange, and cinchona, fine hill station, famous for its scenic beauty.
Daulatabad (Maharashtra) : The fort, previously called Devagiri, is believed to have been constructed by the Yadava Kings in 1338. The fort is virtually impregnable.
Dayalbagh (Uttar Pradesh) : Near Agra; known for Dayalbagh Industrial Institute, shoe manufacture. The religious and cultural seat of a section of the Hindus.
Dehu (Maharashtra) : Dehu, a town on the banks of the river Indrayani is the birthplace of the famous saint-poet Tukaram whose ’Abhangas’ have a pride of place in Marathi literature.
Dehradun (Uttarakhand) : It is the gateway to the Garhwal Himachal such as Badrinath and Joshimath. The Forest Research Institute is situated here.
Delhi : India’s capital. The Red Fort, the Jama Masjid, The Qutub Minar, the Rajghat (Mahatma Gandhi’s Samadhi), the Humayun’s tomb, Shanti Van (where Prime Minister Nehru was cremated), are located here. It was established by Tomaras in 736 A.D.
Dhanbad (Jharkhand) : Famous for coal mines and the Indian School of Mines, National Fuel Research Institute.
Dhariwal (Punjab) : It is famous for woolen goods.
Dibrugarh (Assam) : It is a town in Assam and the Terminus of rail and river communications along the Brahmaputra from Calcutta.
Digboi (Assam) : It is known for its oil fields and oil refinery. It is one of the oldest oil refineries which is still operative in the world.
Dilwara Temples (Rajasthan) : They are near Mount Abu. There are five Jain temples constructed here between the 11th and 13th centuries A.D.
Dindigul (Tamil Nadu) : It is famous for cigars, tobacco, and locks.
Dum Dum (Kolkata) : It is a famous AirPort and Government Arsenal.
Durgapur : In West Bengal is known for a gigantic steel plant set up here with the help of British Engineers.
Dwaraka (Gujarat) : It is one of the seven most important places of Hindu pilgrimage. Krishna, the eighth incarnation of Lord Vishnu, made Dwaraka his center to recapture Mathura.
Powershell has its own variables, but you can also tap Windows' environment variables within your scripts
- By Jeffery Hicks
If you have any experience with traditional batch files in the CMD shell, you may have used environment variables like %username%. These variables are likely to be available everywhere, making your script more portable. Even though a PowerShell script typically uses its own variables, you can still access the environment variables.
The Environment PSDrive provider creates a PSDrive called ENV. If you do a directory of ENV: you should see the same environmental variables that you see when running SET in a CMD shell. These variables are completely separate from their CMD counterpart. You can delete, modify or create new variables in the ENV PSDrive and it will have no effect on Windows.
If you need to change an environmental variable that will affect the rest of your CMD and Windows sessions, you'll need to edit the registry (but let's save that topic for a future lesson).
More likely, you'll want to use the environmental variables in your script. To retrieve a value use the environmental variable name prefixed by $env: like this:
PS C:\> $env:username
The $ character acts as a delimiter that gives us only the requested item in the ENV PSDrive. Let's check out some other examples that use environmental variables.
Perhaps you want to find out the size of all files in %temp%:
PS C:\> (dir $env:temp -recurse | measure-object -sum length).sum/1MB
The Get-ChildItem cmdlet (I'm using the DIR alias) returns all files in the %temp% directory and pipes them to Measure-Object, which calculates the sum of all the files' lengths (size) property. The default value is in bytes, but I'm dividing it by 1MB to return a more meaningful number.
Here's one more:
PS C:\> $(([ADSI]"WinNT://$env:computername/administrator,user").passwordage)/86400 -as [int]
This one- liner uses ADSI to retrieve the local administrator account from the local computer. It then takes the PasswordAge property and divides it by 86400 to return the password age in days. I cast the value as an integer [int] merely to round it off. And yes, I know I could have used localhost in this expression, but sometimes it won't work in other ADSI expressions. That's why I've started using the "real" computername for consistency.
As your PowerShell experience grows, I'm sure you’ll find other ways to take advantage of these environmental variables.
Jeffery Hicks is an IT veteran with over 25 years of experience, much of it spent as an IT infrastructure consultant specializing in Microsoft server technologies with an emphasis in automation and efficiency. He is a multi-year recipient of the Microsoft MVP Award in Windows PowerShell. He works today as an independent author, trainer and consultant. Jeff has written for numerous online sites and print publications, is a contributing editor at Petri.com, and a frequent speaker at technology conferences and user groups.
“So simple a child could do it” — I have heard that expression abused often in advertising. Yet it aptly applies to the Kano computer kit. Kano is a computer and coding kit that is suitable for all ages. Well, to be truthful, Kano’s step-by-step instructions in the included booklets and its simplified Linux-based operating system target kids aged 6 to 14.
That said, the hands-on method it uses to teach basic computer structure and coding principles will work for kids of all ages. Even older folks can assuage their curiosity about computers by playing around with this innovative real computer.
I spent years in the classroom pounding out lessons on writing, media and language that were so simple a child could do them. Kano’s instructional philosophy is sound, and the childlike ease of instruction might be just what older learners need.
After all, coding is not rocket science, but it might seem like it is for some people, regardless of age. Kano’s hands-on approach makes learning code — and for that matter Linux — a very enjoyable experience.
The Kano Kit is not your typical low-end computer. It is really the latest advance in small computer hardware. It is a Raspberry Pi running the Raspbian operating system.
The Inside Story
The Raspbian OS is based on Debian Linux optimized to run on the Raspberry Pi Model B. Raspbian comes preinstalled and configured on the Kano. The kit includes all the basic hardware.
For instance, the Kano keyboard has a built-in touchpad. You also get a speaker. The case is included too.
The OS is preinstalled, sort of. It is an SD card with the OS embedded. This card, like all the other components, quickly and easily fits into place in the card socket in the clear plastic case that holds the printed circuit board.
Display By Luck
You provide your own TV to serve as the monitor connected via the HDMI cable. The 4×2-3/4-inch computer and 10×3-1/4-inch keyboard easily can travel anywhere to plug into a waiting HDMI port.
I lucked out in that regard. I remembered an old 20-inch LCD TV sitting on my spare parts shelf in the office just waiting for something to do again.
However, not all users will know that they must access the source setting on the TV to select the HDMI connection. Until that happens, the screen output will show no sign of connection. The guidebook should say this.
I have spent considerable time inside computer boxes, swapping out bad parts and performing hardware upgrades. Assembling the half dozen snap-in parts and components was child’s play.
It should be easy for any teen or older. All that is required is the ability to attentively read the accompanying step-by-step color guidebook.
The instructions come in short word groups. One or two phrases per page. Very illustrative images show what the words mean. Really, I have read more complicated pre-K readers to my children a long time ago.
Here is an example: “Grab the memory card” on the top of the image. “Turn the brain over, and slide it in” under the image. So much for that page of instructions.
Not So Fast
Despite the elementary simplicity of the assembly process, very young computer makers will have some logical stumbling blocks. So will older users with first-time entry inside a computer.
These stumbling blocks are merely delays. Most will figure out solutions by trial and error — but a few descriptive tips added to the pages would eliminate this problem.
For instance, the keyboard has a USB cable that can plug into an available USB port on the PCB. It also has a USB WiFi dongle. The directions say to plug in the dongle.
The USB keyboard cable does not work. If the dongle is not used, the keyboard does not communicate with the CPU or central processing unit. Of course, I assumed the keyboard cable was a better option. Figuring out that it wasn’t slowed me down.
Three more flaws with the directions also could be remedied with some additional verbiage in the guidebook. One concerns the WiFi dongle.
Some users will not have a wireless router. The Kano kit comes with a standard modem cable socket, but the directions say only to plug in the WiFi dongle. There's no mention of the optional cable connection, which does work.
The second how-to flaw involves the keyboard. It has a power on/off button on the underside. Pressing it will turn the green “on” light off/on — but the guidebook fails to mention this at all. That also slowed me down a minute or two.
The third flaw is an undocumented Bluetooth button on the back edge of the keyboard. It turns a blue indicator light on the keyboard on and off. However, the guidebook fails to mention anything about the Bluetooth functionality.
Ready, Set, Go
The assembly process takes a scant minute or two if you have no glitches to resolve. Plug the Kano computer into a power outlet. You will see text scrolling down the screen while the Kano operating system initiates. It will perform an extensive software update.
Then follow the white rabbit jumping across the screen along with some Matrix-like graphics. The screen prompt tells you that the rabbit is hiding in memory. To find the rabbit, enter the command displayed on the screen. It is this simple: >CD rabbithole
That changes the black screen to a pleasant shade of blue with white lettering as the keyboard and mouse activation occurs. Then you arrive at the Kano OS desktop.
The screen turns into a display reminiscent of early Atari displays on sketchy TV screens. The user interface is classic.
What You See
The display is colorful. You will not enjoy eye-popping visual effects. The screen output is much like that of early cathode ray tube computer monitors.
You will be able to use the typical functionality of point-and-click responses — and yes, Kano includes a fully serviceable LX window for the Raspbian terminal emulator.
Above the panel is a row of square launchers for the built-in accessories. These include icons for three learning-to-code game apps: Snake, Pong and Minecraft.
The included Kano books are color coded. They replace computer jargon with normal language. For example, Kano calls computer parts like the motherboard “the computer’s brain.” It describes concepts like network or bus connectivity as the computer talking to its parts or thinking.
Kano takes a similar approach in teaching users how to code in the second Kano booklet. It follows a step-by-step approach.
Coding is accomplished by slowly learning how to manipulate Kano Blocks, which transforms coding from a text-only infrastructure into a puzzle-piece-like exercise that lets you experiment and alter the included games.
Learn by Doing
Learning to code involves placing a series of color-coded blocks in a tray, observing how the game attributes change, and recognizing what the displayed code means.
Based on my earlier interview with Alejandro Simon, Kano’s head of software and leader of the Kano OS, I pictured this process as involving manually playing with miniature alphabet-like squares. The process is actually non-physical. The blocks are displayed on the screen. You drag them from a supply area to the programming row.
A long time ago in a faraway place, I learned basic BASIC. Relearning programming skills the Kano way is so much more fun and more effective. Plus, I get to play computer games while I learn.
The first game, Snake, is the easiest to master. Pong picks up the learning process where Snake leaves off.
You learn a bit differently when you reach the Minecraft level. You start to recognize the connection between commands that customize the game environment using Kano Blocks, a graphical programming language.
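Kano does not publish the exact code its blocks emit, so the following is a purely illustrative sketch rather than Kano's API: hypothetical Python "block" functions showing how snapping puzzle pieces together can map one-to-one onto a short program, with each block changing one attribute of the game.

```python
# Hypothetical sketch of how puzzle-piece "blocks" can map onto code.
# Each block is a small function that edits one game setting; snapping
# blocks together is just calling them in order.

def set_ball_speed(game, speed):
    game["ball_speed"] = speed
    return game

def set_paddle_size(game, size):
    game["paddle_size"] = size
    return game

def on_bounce_play_sound(game, sound):
    game.setdefault("events", {})["bounce"] = sound
    return game

def run_blocks(blocks):
    """Apply a stack of blocks, top to bottom, to a fresh game."""
    game = {}
    for block in blocks:
        game = block(game)
    return game

# A three-block "program", read top to bottom like a stack of Lego bricks:
program = [
    lambda g: set_ball_speed(g, 8),
    lambda g: set_paddle_size(g, "wide"),
    lambda g: on_bounce_play_sound(g, "pop"),
]
print(run_blocks(program))
```

Reordering or swapping blocks changes the resulting game, which is exactly the experiment-and-observe loop the booklets encourage.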
A tiny crowdfunded startup borrowed the idea behind Lego to teach computer programming by playing first-generation computer games. Kano launched on Kickstarter in November 2013. More than 13,000 people from some 50 countries raised US$1.5 million in 30 days. Barely one year later, in October, Kano began delivering 18,000 preordered kits.
The idea behind Kano Blocks, a collection of Lego-like shapes with embedded programming code, came from the then 6-year-old son of cofounder Saul Klein. Two other cofounders, Yonatan Raz-Fridman and Alex Klein, head the team of software developers at the company’s headquarters based in London.
Kano is a real computer, not a toy or demo model. You can use it to browse the Internet using the Chromium browser, play music, create documents and much more.
The Kano kit comes with all of the basics needed to build a computer and learn to code for $149. It is expandable by adding games, apps and additional programming projects. | <urn:uuid:2ad4f316-aa70-45f0-b848-73c2ece4edfa> | CC-MAIN-2022-40 | https://www.ecommercetimes.com/story/kano-the-can-do-coding-kit-for-kids-of-all-ages-81305.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00303.warc.gz | en | 0.923978 | 1,917 | 2.625 | 3 |
A key element in government plans for artificial intelligence is building trust between humans and machines so they can work together as teams. And while a lot of the focus, understandably, has been on whether AI systems can prove trustworthiness to their human counterparts, research also has found situations where humans can be a weak spot in the equation.
Trust is seen as essential to the future of human-machine teaming, whether that job is human resources, medical diagnosis, emergency response or military applications involving drones and weapons systems. As the stakes get higher, trust becomes more important, but those might be the situations where it’s most lacking, particularly when humans are already familiar with the task at hand.
Recent tests by the Army Research Laboratory, for instance, found that people were more likely to trust an AI’s recommendations when they didn’t know what the AI was talking about, while participants who were well versed in the scenario tended to disregard the AI’s advice. In that case, familiarity bred, if not outright contempt, at least a level of indifference.
The same recommendation process that works so well in the commercial sector — for choosing a restaurant or getting directions — doesn’t seem to apply in military settings, ARL scientist Dr. James Schaffer, said in an Army report. "Unfortunately, there are big gaps between the assumptions in these low-risk domains and military practice," Schaffer said.
Trusting the Force Isn’t Enough
ARL researchers, working with a team from the University of California, Santa Barbara, created a variation of the Iterated Prisoner’s Dilemma game. During the tests, players could turn on the AI system, which would appear next to the game interface, and then decide whether to take its advice. There were different versions of the AI, including one that always gave the optimal advice based on the situation, some that were inaccurate, some that required game information to be entered manually and some that gave rational arguments for their suggestions.
In the original Prisoner’s Dilemma — created at the RAND Corp. in 1950 and applied since to sociological and biological sciences — two prisoners weigh the pros and cons of sticking together or agreeing to testify against each other. The iterated game continues through a series of rounds, with players learning from past events.
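The mechanics of the iterated game are easy to sketch. Assuming the standard payoff matrix (3 points each for mutual cooperation, 1 each for mutual defection, 5 and 0 for a lone defector and the betrayed cooperator), a minimal simulation of two classic strategies over several rounds looks like this; the strategies and round count are illustrative, not the ARL study's configuration:

```python
# Minimal iterated prisoner's dilemma with the standard payoff matrix.
PAYOFF = {  # (my move, their move) -> my points; "C" = cooperate, "D" = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not history else history[-1]

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []   # each side's record of the *opponent's* moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a)
        move_b = strategy_b(hist_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # -> (9, 14): tit-for-tat loses only round one
```

Because players learn from past rounds, an AI adviser that tracks the opponent's history has real information to offer, which is what made the game a useful testbed for trust.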
But in ARL’s scenarios, the AI agent, which would assess the situation and suggest a course of action, was often left out of the loop. Players who were the most familiar with the game tended to disregard the AI, choosing instead to trust their own knowledge. The results weren’t promising. Turning off his computer and trusting his instincts may have worked out for Luke Skywalker against the Death Star, but the Force wasn’t completely with the participants in ARL’s game. When the more knowledgeable players ignored the AI’s advice — in some cases not even bothering to check with it — they performed poorly. Novice players who consulted the AI and took its advice actually did better.
"This might be a harmless outcome if these players were really doing better, but they were in fact performing significantly worse than their humbler peers, who reported knowing less about the game beforehand," Schaffer said. "When the AI attempted to justify its suggestions to players who reported high familiarity with the game, reduced awareness of gameplay elements was observed, a symptom of over-trusting and complacency."
The results also raised another question about how users see themselves in relation to AI. In a post-game questionnaire, the players who did poorly — those who were loath to heed the AI’s rational justifications — were the most likely to say they trusted AI.
"This contrasts sharply with their observed neglect of the AI's suggestions,” Schaffer said, “demonstrating that people are not always honest, or may not always be aware of their own behavior."
The research reveals another element to consider in working with complex AI systems, which, for all their impressive capabilities, often remain inscrutable. And it showed where humans can have blind spots as well. "Rational arguments have been demonstrated to be ineffective on some people, so designers may need to be more creative in designing interfaces for these systems," Schaffer said.
Solving AI’s Riddles
But the biggest gaps in building a trust factor still lie with the machines. Among the key factors is establishing that an AI’s programming is unbiased, which could avoid missteps in processes such as facial recognition. Ensuring that systems are robust enough to avoid cyber exploits such as Trojans is also critical.
One problem in establishing trust is AI’s current inability to explain itself. Using complex algorithms and layered processes to ingest and analyze vast amounts of data doesn’t lend itself to an easy debriefing or explanation, and machines currently can’t discuss, in easily understood human terms, how they reached a particular conclusion.
AI researchers in industry, academia and government have been working on getting AI systems to describe their thought processes in several ways. The OpenAI research consortium, for instance, is studying how the systems think by having them debate each other. The Defense Advanced Research Projects Agency’s Explainable Artificial Intelligence (XAI) project is looking to find ways an AI could use natural language in recounting its processes.
A new DARPA project is looking for machines to offer a kind of running commentary on their processes, rather than waiting until after a conclusion is reached. The Competency-Aware Machine Learning program aims to have machines continuously assess their performance and keep humans apprised.
“If the machine can say, ‘I do well in these conditions, but I don’t have a lot of experience in those conditions,’ that will allow a better human-machine teaming,” said Jiangying Zhou, a program manager in DARPA’s Defense Sciences Office. “The partner then can make a more informed choice.”
Zhou offered a simple analogy involving someone deciding which of two self-driving cars would do better driving at night in the rain. The first car says it can distinguish between a pedestrian and an inanimate object 90 percent of the time, having tested itself 1,000 times. The second car might claim 99 percent accuracy, but disclose that it has tried the task less than 100 times. The rider can then make a more informed choice.
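The trade-off in that analogy can be made concrete with basic statistics. The sketch below uses the Wilson score interval (a standard way to put error bars on a success rate, and my own framing rather than anything from DARPA) with hypothetical trial counts in the spirit of the example:

```python
import math

def wilson_interval(successes, trials, z=1.96):
    """95% Wilson score interval for a reported success rate."""
    p = successes / trials
    denom = 1 + z * z / trials
    center = (p + z * z / (2 * trials)) / denom
    half = z * math.sqrt(p * (1 - p) / trials + z * z / (4 * trials * trials)) / denom
    return center - half, center + half

# Hypothetical numbers matching the spirit of the two-car analogy:
car_1 = wilson_interval(900, 1000)  # 90% accuracy over 1,000 self-tests
car_2 = wilson_interval(89, 90)     # ~99% accuracy over fewer than 100 tests

print("car 1:", car_1)
print("car 2:", car_2)
```

The second car's interval comes out noticeably wider: its headline accuracy is higher, but the smaller sample makes the claim fuzzier, which is exactly the kind of disclosure CAML wants machines to make.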
That’s a basic example (and seems like a pretty tough call), but it describes the process researchers are going for. “Under what conditions do you let the machine do its job? Under what conditions should you put supervision on it?” Zhou said. “Which assets, or combination of assets, are best for your task?”
The ultimate goal with CAML and other research projects, inside and outside government, is building trust between humans and machines. But the recent Prisoner’s Dilemma exercise proves that we still have a long way to go. | <urn:uuid:72e39744-d09c-402f-a2a1-9684c72532fd> | CC-MAIN-2022-40 | https://governmentciomedia.com/stakes-rise-humans-expect-ai-lies?page=1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00303.warc.gz | en | 0.971914 | 1,449 | 2.953125 | 3 |
Keith Bogart's Newest Course is Now Available in the All Access Pass!
Head over to your All Access Pass members account and login to watch Public Key Infrastructure (PKI) & Digital Certificates.
About The Course:
This course will explain the components of the Public Key Infrastructure (PKI) and introduce you to several of its critical elements, such as Asymmetric Key Encryption, Hash Algorithms, Trusted Root Certificate Authorities, how to view Digital Certificates and understand the fields within them, Certificate Revocation methods, and much, much more.
Topics covered include:
- What is PKI?
- Overview of asymmetric encryption and Hash algorithms
- How do Digital Signatures work and what are their benefits?
- Components of, and types of, Digital Certificates
- Certificate revocation methods
- Configuring Cisco IOS devices as Certificate Servers and PKI Clients
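One of those building blocks is easy to try without any Cisco gear. A hash algorithm such as SHA-256 produces a fixed-length digest that is deterministic yet changes completely when even one character of the input changes, which is what makes hashes useful inside digital signatures. A minimal sketch using Python's standard hashlib (my own example, not course material):

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 digest of the input, as a 64-character hex string."""
    return hashlib.sha256(data).hexdigest()

a = digest(b"Transfer $100 to Alice")
b = digest(b"Transfer $900 to Alice")  # one character changed

print(a)
print(b)
print(a == digest(b"Transfer $100 to Alice"))  # deterministic: True
```

The two digests share no obvious resemblance, so any tampering with a signed message shows up as a digest mismatch when the signature is verified.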
Why You Should Watch:
Understanding the components of a PKI and how Digital Certificates provide security will help you whether you are the administrator of a secure website, studying for a Security-related certification exam, or just curious about the process. Without a Public Key Infrastructure (PKI), we wouldn't have the Internet that we have today. Over 50% of all websites utilize Secure HTTP (HTTPS), which relies on a PKI and the exchange of Digital Certificates. This course can, and will, assist anyone working towards the CCNA, CCNP, or CCIE Security exams, or anyone who wants to learn more about securing their network, in grasping the inner workings of PKI. In this course Keith covers essential topics such as how to recognize secure (and insecure) websites and how to make your websites more secure (or help your web developers do so).
The good news is, in order to gain something from this course, you don't need to be a security or IT expert. If you know what the Internet is and you know how to browse websites, you're on the right track!
TCP is the abbreviation of "Transmission Control Protocol" whereas UDP is the abbreviation of "User Datagram Protocol". TCP and UDP are both main protocols used at the Transport layer of the TCP/IP Model. Both of these protocols are involved in the process of transmission of data. While UDP is used in situations where the volume of data is large and the security of data is not of much significance, TCP is used in those situations where the security of data is one of the main issues.
During the transfer of data, the existence of ports is a matter of high significance. Each data packet comes with a port number associated with it. This enables the protocols to determine the requirements of the data packets and the ports to which they are supposed to be directed. In fact, the existence of ports is crucial to making sure that data packets reach their desired destinations accurately. In addition to this, there are many other features, such as the security of data packets, which are catered to by the different types of ports. The versatility of the TCP and UDP ports available enables you to select the most appropriate one for your task according to your requirements.
Following are some of the common TCP and UDP default ports.
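Most operating systems ship a table of these well-known assignments, and Python's socket module can query it by service name. A small sketch (the hand-written table is mine, used only as a fallback in case the OS services database is unavailable):

```python
import socket

# Well-known defaults from the list below, used as a fallback.
FALLBACK = {"ftp": 21, "ssh": 22, "telnet": 23, "smtp": 25,
            "domain": 53, "http": 80, "pop3": 110, "https": 443}

def default_port(service: str, proto: str = "tcp") -> int:
    """Look up a service's default port, preferring the OS services database."""
    try:
        return socket.getservbyname(service, proto)
    except OSError:
        return FALLBACK[service]

for name in ("smtp", "http", "https", "domain"):
    print(f"{name:7s} -> {default_port(name)}")
```

This is the same name-to-port mapping an application consults when you type a URL without an explicit port number.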
SMTP is known as the Simple Mail Transfer Protocol. It is associated with TCP port number 25. The primary purpose of this protocol is to make sure that email messages are communicated over the network securely. This protocol operates at the Application layer. Not only does it carry out the task of delivering messages within networks, it can also successfully deliver messages between different networks. This makes it one of the most important ports for the communication of messages over the network due to the security it provides along with other features. However, you do not have the privilege to download the emails in order to read them; it is just intended for the purpose of transferring them over the network.
Port 80 is associated with HTTP, Hypertext Transfer Protocol. It comes under the category of a TCP protocol. It is one of the most famous and widely used ports in the world. The main purpose of port 80 is to allow the browser to connect to the web pages on the internet. Port 80 basically expects or waits for the web client to ask for a connection. Once this connection has been made, you will get the privilege to connect to the World Wide Web and get access to various web pages out there. In fact, HTTP - 80 is one of the most important ports associated with the TCP protocol. Moreover, this port is generally used during the application layer of the TCP/IP Model.
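The request a browser sends to port 80 is plain text defined by the HTTP standard. The sketch below builds a minimal HTTP/1.1 GET request by hand; the actual network connection is shown but commented out so the snippet itself has no network dependency:

```python
def build_get_request(host: str, path: str = "/") -> bytes:
    """Assemble a minimal HTTP/1.1 GET request by hand."""
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: close\r\n"
            "\r\n").encode("ascii")

request = build_get_request("example.com")
print(request)

# To actually send it over TCP port 80:
# import socket
# with socket.create_connection(("example.com", 80)) as s:
#     s.sendall(request)
#     print(s.recv(1024).decode("ascii", "replace"))
```

The blank line (the final `\r\n\r\n`) is what tells the server the request headers are complete.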
HTTPS - 443 is also associated with the TCP protocol. HTTPS port 443 also lets you connect to the internet by establishing a connection between the web pages and the browser, giving you access to the World Wide Web. However, this port has an added feature of security, which HTTP port 80 does not have. This port is intended for establishing secure connections to make sure that the data is transmitted over a secure network. The user receives a warning if the browser is trying to access a web page which is not secure. This port operates at the Application layer. It basically encrypts and authenticates the network packets before transferring them over the network to increase security. This feature of security is provided by the use of SSL, which stands for Secure Sockets Layer.
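Python's ssl module reflects the behavior described above: a default client context refuses insecure connections by requiring a valid server certificate and a matching hostname before any application data is exchanged. A minimal sketch (the commented-out part shows how the context would wrap a real connection to port 443):

```python
import ssl

# A client-side context with secure defaults (certificate + hostname checks).
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: server must present a valid cert
print(ctx.check_hostname)                    # True: cert must match the hostname

# Wrapping a TCP socket to example.com:443 would then look like:
# import socket
# with socket.create_connection(("example.com", 443)) as raw:
#     with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
#         print(tls.version())  # e.g. "TLSv1.3"
```

If either check fails, the handshake is aborted, which is the programmatic equivalent of the browser warning described above.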
FTP is the abbreviation of "File Transfer Protocol". The purpose of FTP is to transfer files over the internet. It basically lays down all the rules which are to be followed during the transfer of data. Due to the concern of security, it also asks for authentication by the user before the transfer of data. It is associated with the TCP protocol and corresponds to two ports, ports 20 and 21. Both of these ports function at the Application layer.
Port 20 performs the task of forwarding and transferring of data. It takes over the task of transferring FTP data when it is in active mode.
Port 21 performs the task of signaling for FTP. It listens to all of the commands and provides a flow control for data. It is quite essential for maintaining the flow of data.
TELNET port 23 comes under the category of TCP Protocols. Its main function is to establish a connection between a server and a remote computer. It establishes a connection once the authentication method has been approved. However, this port is not suitable to establish secure connections and does not cater to the concern of security. It enables the remote connection of a computer to be established with routers and switches as well. It makes use of a virtual terminal protocol to make a connection with the server. It comes into existence during the application layer of the TCP/IP protocol.
IMAP is the abbreviation of "Internet Message Access Protocol". IMAP port 143 lies under the category of the TCP protocol. The primary purpose of this port is to retrieve emails from a remote server without having the need to download them. You have the liberty to access your emails from anywhere by connecting to the server and viewing them after providing authentication. This opportunity is provided to you because of the existence of this port. The messages remain stored on the server, which enables you to read them simply by connecting to it. However, you may also download the mail if you wish to. It also provides you the ability to search through your messages to get to your desired one. IMAP port 143 generally operates at the Application Layer of a TCP/IP Model. In addition to this, it also makes sure that the data remains secure during this connection.
RDP is also known as the 'Remote Desktop Protocol'. It operates on the port 3389 of the TCP protocol. This port has been developed by Microsoft. It enables you to establish a connection with a remote computer. With the help of this connection, you get the liberty to control the desktop of this remote computer. This will provide you the ease to access you home desktop system from anywhere in the world just by proper authentication. In order to connect to your remote computer, you will have to forward the connection to the TCP Port 3389 which will then make available to you all the files which you have kept on your remote computer. However, since this port have been developed by Microsoft, it is essential to have a Windows operating system running on your computer in order to access it remotely. Please keep in mind that you might have to do manual settings in order to remotely access your desktop using this port. It operates on the Application layer of the TCP/IP Model. It is used worldwide for the purpose of accessing your desktop remotely.
SSH is also referred to as 'Secure Shell'. It operates on the port number 22 of the TCP protocol. It carries out the task of remotely connecting to a remote server or host. It allows you to execute a number of commands and move your files remotely as well. However, it is one of the most secure ways of accessing your files remotely. Using this port, you can remotely connect to a computer and move your files with ease. This port sends the data over the network in an encrypted form which adds an extra layer of security on it. In addition to this, only authorized people will be able to remotely log on to their systems using the Port 22 which makes sure that the information does not get into unauthorized hands. It provides the chance to move files within networks as well as gives the privilege to move files between different networks securely. It operates at the Application Layer of the TCP/IP Model and is considered as one of the most secure and reliable ports for accessing files remotely.
DNS is referred to as 'Domain Name System'. It operates on the port 53 of TCP and UDP protocols. DNS makes use of relational databases to link the host names of the computers or networks to their respective IP Addresses. The port 53 waits for requests from DHCP to transfer the data over the network. It operates on the Application Layer of the TCP/IP Model.
TCP protocol is used by the Zone Transfer function of the DNS server. Once the connection is established, the zone data will be sent by the server using the TCP 53 port. However, when the query has to be transferred from the client computer, it will be sent using the port 53 on UDP protocol. However, if no response is received from the server within 5 seconds, the DNS query will be sent using the port 53 of TCP Protocol.
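The DNS query format itself is compact enough to assemble by hand, following the standard wire format: a 12-byte header followed by a question section in which each name label is prefixed by its length. A sketch building an A-record query (the query ID is an arbitrary example value):

```python
import struct

def encode_qname(name: str) -> bytes:
    """DNS name encoding: each label prefixed by its length, ending in 0x00."""
    out = b""
    for label in name.split("."):
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"

def build_query(name: str, query_id: int = 0x1234) -> bytes:
    # Header: ID, flags (recursion desired), 1 question, 0 other records
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    question = encode_qname(name) + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

packet = build_query("example.com")
print(len(packet), packet[:4].hex())  # -> 29 12340100
```

This 29-byte packet is small enough to fit comfortably in a single UDP datagram to port 53, which is why UDP handles ordinary queries and TCP is reserved for larger transfers such as zone data.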
DHCP is also known as 'Dynamic Host Configuration Protocol'. It basically runs on the UDP protocol. The basic purpose of DHCP is to assign IP Address related information to the clients on a network automatically. This information may comprise of subnet mask, IP Address etc. Many of the devices are automatically configured to look for IP Addresses using DHCP when they connect on a network. It makes it quite reliable to assign all the devices on a network with automatically produced IP Addresses. It generally operates on the Application layer of the TCP/IP Model. DHCP basically makes use of 2 ports; Port 67 and Port 68.
UDP Port 67 performs the task of accepting address requests from DHCP and sending the data to the server. On the other hand, UDP Port 68 performs the task of responding to all the requests of DHCP and forwarding data to the client.
POP3 is also referred to as Post Office Protocol Version 3. It operates on the port 110 of TCP Protocol. It allows the email messages to be retrieved from the SMTP servers. Using this port, you can download the messages from the server and then read them. However, this means that you will not be able to access the messages and read them without downloading them. Furthermore, the messages are also deleted from the server once they are downloaded. However, this port does not cater to the issue of security. The authentication details transferred over the network are not encrypted and sent in plain text. This means that any hacker can easily intercept this information and misuse it. Port 110 generally operates on the Application layer of the TCP/IP Model.
We have discussed some of the most common and widely used ports above. We have seen how each of these ports is related to either the UDP protocol or the TCP protocol and is used at the Transport or Application layer. All of these ports perform different tasks and support different processes. While we have some ports where our data can be sent securely, there are others where the transfer of data is of more significance than its security. We can also combine different protocols to add the feature of security. For example, layering SSL on top of HTTP produces the secure HTTPS service. Considering the uses and applications of these ports, it is important to realize their significance in the process of transmission of data over a network. Not only do they help you to transfer data, they also let you enjoy some other facilities as well. In fact, it is not wrong to say that networking would not be complete without the existence of these TCP and UDP ports.
Quantum computing is a new kind of computing. Unlike classical computers, which use bits that can be either 1s or 0s, quantum computers use qubits. Qubits can represent numerous possible states of 1 and 0 at the same time. They can also influence one another at a distance. Thanks to superposition and entanglement, a quantum computer can process a vast number of calculations simultaneously. This ability would enable quantum computers to break the existing encryption systems. If these encryption methods are broken, the internet and several industries will be adversely impacted. Financial data, national security information … would be vulnerable.
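Superposition is, at bottom, linear algebra over complex amplitudes. A minimal sketch of a single qubit passing through a Hadamard gate, using nothing but the standard two-amplitude state representation (this is textbook math, not any particular quantum SDK):

```python
import math

# A qubit state is a pair of amplitudes (a, b) with |a|^2 + |b|^2 = 1.
ZERO = (1.0, 0.0)  # the classical state |0>

def hadamard(state):
    """Apply the Hadamard gate: puts |0> into an equal superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    """Measurement probabilities for outcomes 0 and 1."""
    a, b = state
    return abs(a) ** 2, abs(b) ** 2

plus = hadamard(ZERO)
print(probabilities(plus))            # ~ (0.5, 0.5): both outcomes equally likely
print(probabilities(hadamard(plus)))  # H is its own inverse: back to ~|0>
```

One qubit doubles the state space; n qubits require 2^n amplitudes, which is why simulating even modest quantum machines classically becomes intractable and why their computational power is taken seriously.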
Researchers are working to develop quantum-resistant cryptography while existing quantum computers are not yet powerful enough. In the wrong hands, quantum computers may soon pose a major threat to society.
Google July 12 launched App Inventor for Android, a tool people without programming knowledge can use to build applications for smartphones based on the company’s open-source Android platform.
While Android was designed for software programmers who speak geek, App Inventor is a sort of software Lego set for amateur programmers who can sign up to use the tool here with a Gmail account.
Instead of writing code, users will drag and drop blocks, which are ready-made code sets, on a programming palette to construct their applications. These blocks include images, sound, text and screen arrangement.
See Google’s demo video here, in which an amateur programmer connects her Google Nexus One to her desktop PC to build an application with App Inventor.
The App Inventor Web page in Google Labs states that the tool provides building blocks for “just about everything you can do with an Android phone,” as well as blocks for storing information, repeating actions and communicating with Web services.
While the Web page said users can use App Inventor to construct games or draw pictures, users may also do more useful things such as creating a quiz application to help classmates study for a test. Users may even take advantage of Android’s text-to-speech capabilities, for example to make the phone ask the test questions aloud.
App Inventor also features a GPS-location sensor to let users build applications that know their location. Those who already command some Web programming knowledge can use App Inventor to write Android applications that talk to Twitter, Amazon.com and other Websites and services.
However, Google’s intent is to let average consumers build their own applications for the smartphones they use every day. This is something that has yet to catch on among desktop computer users, despite tools such as Basic, Logo and Scratch.
Harold Abelson, a computer scientist at the Massachusetts Institute of Technology who led the project as a visiting faculty member at Google, said more than a year ago on Google’s Research Blog that several major universities, including Harvard, MIT, University of California at Berkeley and the University of Michigan, were testing App Inventor.
Abelson told the New York Times that Google tested the tool with “sixth graders, high school girls, nursing students and university undergraduates who are not computer science majors.”
Developers have written close to 100,000 applications for the Android platform. If App Inventor catches on among nonprogrammer Android phone users, it could boost that number considerably.
At the least, App Inventor could increase awareness of Android as an alternative to proprietary platforms such as Apple’s iPhone.
The next logical leap for App Inventor would be an App Inventor Mashup Maker. In such an instance of classic crowdsourcing, Google would provide tools allowing users to build mashups, or application chimeras. | <urn:uuid:86d63c47-86b3-4ba4-bfc6-2b612361274b> | CC-MAIN-2022-40 | https://www.eweek.com/development/google-app-inventor-for-android-lets-amateurs-write-apps/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00503.warc.gz | en | 0.936644 | 599 | 3.15625 | 3 |
This article was written to describe Mimosa’s TDMA protocol selection, explain the importance of GPS and timing accuracy, review what controls are available for tuning performance, and to recommend best practices for collocating radios in mixed RF environments and in various network configurations.
Time Division Multiple Access (TDMA) is a deterministic protocol where each device is assigned a time slot during which it is allowed to transmit. This allows for collocated radios to utilize the same channels and avoid Tx/Rx collisions and interference.
Mimosa’s approach for maintaining high link reliability in 5 GHz-dense environments is to load balance across two independent channels using TDMA (Figure 1). If one of the two channels experiences interference, the other channel is used to both transmit and receive all of the data.
Figure 1 – Mimosa TDMA link with two separate channels (blue and green)
Frequency-Division Duplex (FDD) is another protocol that also uses two channels. The first channel is used to send and the second channel to receive (Figure 2). If interference is experienced on either channel, the entire transmission is interrupted, and must be repeated. The reason for this is because both channels must remain free from interference for successful data transfer.
Figure 2 – Non-Mimosa FDD link with two separate channels
Because of its superior efficiency and enhanced resiliency compared with FDD, TDMA was selected for Mimosa’s backhaul products.
A highly accurate and precise clock is needed for time-based protocols like TDMA to work reliably, especially across back-to-back links. This is because radios at the same site must synchronize their communication to avoid interfering with one another.
While there are several ways to synchronize radios, including out of band wireless or wired interfaces that tie to a single GPS receiver, the most flexible approach is to build an internal GPS receiver into each radio.
Mimosa’s engineers designed the B5 and B5c backhaul products with an integrated GPS receiver that receives signals from 24 GPS and 24 GLONASS satellites, an industry first. Having access to 48 potential satellites, twice as many as a pure GPS receiver, increases clock accuracy especially in areas where the view of the sky is limited.
GPS timing alone is insufficient, however, because of clock drift within each device. To ensure reliable time synchronization, the B5/B5c include a GPS-Disciplined Oscillator (GPS-DO) with 3 ppb (parts per billion), or 40 ns accuracy.
The GPS+GLONASS timing source and onboard GPS-DO enable synchronization between radios without requiring any form of communication between them. It is also possible to synchronize with other Mimosa radios on the same tower that are not under your control. The radio’s site-survey feature shows the TDMA configuration of other access points so the same configuration options can be selected for forward compatibility.
Once radios are synchronized, they can share the same frequencies at the same site. This is because both radios are either transmitting or receiving at the same time. Synchronization ensures that Radio 1 will not transmit while Radio 2 is receiving. Otherwise, Radio 2 would detect the signal from Radio 1 as noise.
The Media Access Control (MAC) layer, within Layer 2 of the seven-layer OSI model, provides channel control mechanisms for servicing multiple requests to use the physical layer (or PHY, Layer 1).
Some of these controls are available within the B5/B5c embedded user interface (Figure 3) and allow the user to fine-tune TDMA performance.
Figure 3 – MAC Configuration Controls
When creating a single hop point-to-point (PTP) link between two devices, one device is set as an Access Point (AP) and the other as a Station (STA) (Figure 4).
Figure 4 – Example PTP Link (Single Hop)
Note that once the channel(s), power, and TDMA settings are set on the Access Point, they are communicated to the Station automatically via 802.11 beacons.
An important concept within the Mimosa TDMA protocol is that each radio in a link is assigned a gender: A or B. The Station’s gender is set automatically to the opposite of the Access Point. All devices with gender A will transmit at the same time, while those devices with gender B are receiving and vice versa. The gender should be set to the same value on all radios at the same site. For example, suppose you want to connect two sites with a relay in between. In Figure 5 below, note how Radio 2 and Radio 3 are assigned the same gender “B” since they are located at the same site.
Figure 5 – Example PTP Link (Multi-hop)
Both the gender and traffic split selection are combined within one control on the GUI. Therefore, you must also decide the TDMA Traffic Balance, which determines the percentage of time allocated for each side to transmit. There are two fixed options: 50/50 or 75/25 (Figures 6 and 7). Note that the slash notation in the GUI follows the convention (local/remote).
Figure 6 - 50/50 Traffic Split
Figure 7 - 75/25 Traffic Split
A third option called, “Auto” dynamically changes the duty cycle (ratio of Transmit to Receive) based on utilization of transmit slots by both the Access Point and Station. This option was designed to improve throughput efficiency and thus maximize the aggregate throughput over a single link. It is capable of changing the traffic split between Gender A and B to 25/75.
Choose 50/50 for multi-hop links, for ring topologies, or if you expect that downloads and uploads will be balanced, which may be appropriate for many business users. Figure 8 below shows a ring topology where a balanced traffic split is desired since traffic could move in any direction.
OSPF - Open Shortest Path First
BGP - Border Gateway Protocol
Figure 8 – Ring Topology (50/50 Traffic Split)
Building on the relay example from Figure 5, we’ll set a 50/50 traffic split (Figure 9). Multi-hop links cannot take advantage of uneven traffic splits because the relay transmissions are synchronized.
Figure 9 – Example PTP Link (Multi-hop) with 50/50 Traffic Split
Choose 75/25 for single hop links (Figure 10) or star networks (Figure 11) if you expect that users will be downloading the majority of the time. These settings are useful for serving bandwidth-hungry, one-way applications such as video streaming services. Note that only radios with gender A may be set to 75% with the fixed 75/25 traffic split option, therefore when planning your network, the gender A radios should be placed closer to your bandwidth source (i.e. datacenter), with the gender B radios closer to the customer.
Figure 10 – Example PTP Link (Single Hop) with 75/25 Traffic Split
Figure 11 – Star Network with 75/25 Traffic Split
If the Auto option is chosen, the traffic split shifts dynamically among 75/25, 50/50, and 25/75 based on the transmit window utilization by each side of the link. Like the 75/25 option, the Auto option only benefits a single PTP link.
The TDMA window describes the amount of time allocated for each radio to transmit. Adjust this value to optimize for latency or efficiency. Choose eight milliseconds to maximize raw throughput, two milliseconds to minimize latency, or four milliseconds to balance between the two.
Table 1 – TDMA Window Options
Each step in TDMA window size down from eight milliseconds (the reference value) incurs an approximate 10% throughput performance impact. This is due to the fact that smaller TDMA windows require more management overhead as a percentage of total data transferred.
There are some distance limitations to these options because propagation delay between a transmitter and receiver over a long distance may prevent data from arriving at the receiver before its transmit window. Free Space Path Loss (FSPL) limits distance with the 8 ms option.
Operators should also take into account the number of hops when setting TDMA window size to ensure that total latency meets requirements. For example, 5 hops and a 2 ms TDMA window size would result in 25 ms average round trip latency (5 hops * 2 ms * 2.5 = 25 ms).
All links in a multi-hop network, and all radios on the same tower for that matter, should be configured with the same TDMA window to avoid interference.
When installing more than one radio at a site, you will want to ensure adequate physical separation and pay special attention to antenna direction. This is especially true for non-synchronized radios. This section will cover collocation with Mimosa and Non-Mimosa radios.
One frequently asked question is, “How many GPS-synchronized Mimosa radios can be installed on the same tower”? The answer depends on several factors including the angular separation between antennas and differences in Rx signal strength. A better question might be, “What is the expected MCS index for two antennas at a given angle and with a difference in power input”?
To further explain, every antenna has a characteristic gain pattern within which signals are amplified in both directions (i.e. when sent from the local transmitter, and when received from the remote transmitter). By design, the main lobe of the antenna pattern has higher gain than the side and rear lobes. Note: In the forthcoming examples, we’ll assume that Rx signal strength is the same for all radios on the tower, and cover the effect of the imbalance later.
In Figure 12 below, a monotonic envelope (green outline) was applied to a Mimosa B5 antenna pattern. The pattern was then overlaid on a 360° compass representing all possible angles between collocated antennas. The blue concentric rings represent gain in dBi from -25 dBi to +25 dBi. The dotted line in the center of the antenna pattern represents the ideal Signal to Noise Ratio (SNR).
Figure 12 – B5 Antenna Pattern on 360° Compass
When a second antenna pattern is overlaid (Figure 13), we can imagine how the neighboring antenna might amplify an incoming signal intended for the other radio, thus reducing SNR. Note how the dotted blue line is partially obscured in both patterns. Adequate angular separation between the main lobes is necessary to avoid the situation in the image below.
Figure 13 – Two Overlapping B5 Antenna Patterns
By rotating the second antenna away from the first antenna enough such that the main lobe is not obscured, we can start to see what options are available for achieving maximum performance on both links. With two antennas, the second antenna can be anywhere within either of the two green zones which both allow 80 degrees of angular distance (Figure 14).
Figure 14 – Two Non-Overlapping B5 Antenna Patterns
If three radios are installed on a tower, the green zones become narrower. It is possible to place the second and third radios within the blue shaded zones without affecting the first radio, but care must be taken to prevent the second and third radios from obscuring each other (Figure 15).
Figure 15 – Three Non-Overlapping B5 Antenna Patterns
Up to four radios using the same channel can be installed per tower following this methodology (at 0, 85, 165, and 245 degrees), while still achieving maximum performance as shown in Figure 16 below.
Figure 16 – Four Non-Overlapping B5 Antenna Patterns
In the real world, geography and tower availability often dictate the possible angles. If your design tolerates lower SNR and resulting throughput, additional radios may be added and/or they may be installed at closer angles.
Power imbalance between collocated links could also reduce SNR and affect performance. In this case, the objective is to choose a separation angle that results in a small enough gain applied to the incoming “noise” signal such that a desired SNR can still be achieved.
The radio’s ability to receive at a particular MCS index depends on the SNR as shown in Table 2 below. First, choose an MCS index that meets the throughput needs of your customers. For this example we’ll choose 28.5 dB, which represents MCS 9.
Table 2 – SNR Requirements for each MCS Index
Second, calculate the target gain by adding the required SNR from the table above to the Rx signal strength imbalance, and then subtract both from the antenna gain:
Antenna Gain – (Desired SNR + Rx signal strength imbalance) = Target Gain
Example: 25 dBi – (28.50 dB + 10 dBm) = -13.5 dBi
Note: The 25 dBi gain comes from the antenna datasheet. The 10 dBm Rx signal strength imbalance is just an example value.
Third, compare the target gain to the antenna pattern to determine the appropriate angle. Ideally, this is done with antenna pattern data, like in Table 3 below, but it can also be determined manually with a sufficiently detailed pattern diagram. Using the -13.5 dBi gain calculated above, find the next smallest gain value and angle (in this case, 80 degrees).
Table 3 – Antenna Patterns Data (Partial)
The table below summarizes MCS index that two radios can achieve for various angles (0-180 degrees) with Rx signal strength imbalance (0-10 dBm).
Table 4 – MCS Index Achievable with two B5’s by Azimuth and Power Imbalance
Mimosa radios do not synchronize with non-Mimosa radios, although some general rules of thumb apply to avoid interference that non-Mimosa radios may cause.
Ensure at least three meters (ten feet) of physical separation in both vertical and horizontal directions, greater than 30° angular separation, and 20-30 MHz of frequency separation depending on the Power Spectrum Density (PSD) mask of the neighboring radio.
The built-in Spectrum Analyzer in Mimosa’s Backhaul products can be used to select channels with the lowest amount of noise.
Mimosa chose the TDMA protocol because of its superior efficiency and enhanced resiliency in dual-channel modes. The B5/B5c are equipped with a high accuracy, high precision GPS-DO circuit that enables shared spectrum between collocated radios.
The Mimosa user interface provides options for controlling TDMA parameters for performance tuning.
Angular separation and power imbalance can both affect link performance even with TDMA, so consider these parameters as part of best practice in link design.
Please visit cloud.mimosa.co to access RF planning tools that can aid in this process. | <urn:uuid:2b446135-7a13-4e72-b9fe-f89bf4cf33fe> | CC-MAIN-2022-40 | https://mimosa.co/white-papers/tdma | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00503.warc.gz | en | 0.898941 | 3,076 | 2.671875 | 3 |
The IT security industry is bracing itself for an onslaught of new wave hacking exploits which are powered by smart technology, machine learning, and artificial intelligence. Black Hat computer experts are expected to unleash a number of unethical tactics to target and manipulate individuals and organizations who are not primed to counter it.
The era of artificial intelligence is happening now, harnessing the power of AI is arguably one of the pivotal agenda items within many business organizations throughout the world. Controlling enterprise data and using machine learning to understand business trends is commonplace.
Hackers are also exploring this technology to create AI-powered malware which can deploy untraceable malicious applications within a benign data payload. AI techniques can conceal the conditions needed to unlock the malicious payload making it almost impossible to reverse-engineer the threat, potentially bypassing modern anti-virus and malware intrusion detection systems.
AI-powered malware can be trained to wait until a specific action occurs which triggers the hostile payload. This might be actuated by voice or facial recognition, or even by geo-location properties. It can be argued that AI malware can be trained to listen for specific words or a targeted person’s voice, advanced image APIs can also be used for face recognition on webcams or security cameras.
IBM research scientists have already created a proof-of-concept AI-powered malware called “DeepLocker”. The malware contained hidden code to generate keys which could unlock malicious payloads if certain conditions were met. They recently demoed this technology at the “Black Hat” technology conference in Las Vegas, 2018 and it used a genuine webcam application with embedded code to deploy ransomware when a person looked at the laptop webcam.
Smart Phishing is another approach being rapidly utilized by unethical hacking experts to attempt to exploit sensitive information from victims. The scam is rolled out using a baseline of intelligent data which is exclusive to the target. Essentially with the aim to fool the victim that the phishing methodology is legitimate.
What makes this assault dangerous is the smart methods used as part of the exploitation. Hacking groups trade exploited personal information on the dark web, such as where you shop, what online services you subscribe to, or who you bank with. This information alone may not be significantly exploitable, however, when you introduce artificial intelligence and machine learning, trends and patterns can be predicted when the data is ingested and transformed.
As a result, smart phishing can target specific victims where the hackers already know relevant information about you. You may receive a malevolent phone call from people impersonating your bank or credit card provider. These people may already know certain information about you such as your address, date of birth and use that to exploit pin numbers and bank account information from you with the aim of defrauding you.
More commonly, smart phishing results in intelligently targeting digital attacks in the form of emails and fake email attachments. These scams try to persuade you to click on a fake URL link and are used to mine data and potentially exploit you by injecting malware onto your system, most likely for financial gains.
Also, there has been an ever growing trend of technology savvy individuals learning and developing open source solutions to assist in hacking activities. This is nothing new, however, the proliferating use and abuse are expanding exponentially. Open Source toolsets and Linux distributions such as Kali Linux contains a suite of white hat tools which can be used acrimoniously.
Open Source tools can be used to exploit websites, servers and cloud infrastructure as well as inject packets into wireless network traffic with the aim of intercepting and decrypting traffic. Password crackers and dictionary attack tools can use machine learning to break complex passwords.
To help prevent the influx of smart hacking, the security community needs to be prepared for AI-powered threats. To combat the threat, security defenders can also use AI-powered intelligence creating trend-based detection systems instead of rule-based.
Greater investment in monitoring and data analysis solutions are also a proven deterrent to undertake. Such solutions can intelligently track and log network and server activity, using AI and machine learning the solutions can learn patterns and trends and help detect weaknesses on a platform.
Above all else, it is imperative to ensure traditional defenses such as ensuring you are on the latest firmware and operating system patch levels and keeping employees engaged on security topics and trained on security fundamentals such as not executing untrusted applications or email attachments remain significant barriers to the hacking community. | <urn:uuid:d6bef970-cf9a-467f-b07e-a95157771a73> | CC-MAIN-2022-40 | https://www.cpomagazine.com/cyber-security/ai-powered-malware-smart-phishing-and-open-source-attacks-oh-my-the-new-wave-of-hacking-in-2019-and-how-to-prevent/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00503.warc.gz | en | 0.929807 | 901 | 2.65625 | 3 |
As businesses become more and more connected to the internet, the threat of a data breach only increases. A study conducting by the University of Maryland in 2017 discovered that, on average, computers with internet access are attacked every 39 seconds. While this figure may be worrisome, this should come as no surprise to most. As technology continues to progress, becoming more and more a part of not only businesses but individual’s lives as well, criminals will try harder and faster to obtain access to confidential information.
These attacks are not cheap either. A study conducted by IBM found that on average, a cyberattack can cost $3.86 million. This number does not just reflect the damage the breach cost, but it also factors in loss of business, time spent on recovering, and damage to reputation. Taking steps to prevent an attack from happening is imperative. One must have the proper equipment and policies set in place in order to counter cyberattacks.
However, attackers are becoming smarter, more resourceful, faster, more aggressive. Many of them are also playing the long game as well. Lying dormant in a companies, or individual’s computers or server, waiting for the perfect moment to attack. While cyber security specialists are doing there best to stay 1-step ahead of criminals, there is only so much that can be done. Therefore, instead of playing a game of cat and mouse with attackers, cyber security specialists should be turning their attention towards using machine learning and AI to aid them in this constantly evolving battle.
Why Machine Learning and AI Should Be Recruited
Cybersecurity usually relies on methods of created static rules and policies that act as barriers to attackers. These barriers, regardless of how strongly built, are susceptible to cracks and leaks, allowing for unwanted guests to enter. This creates a constant game of catch up, rather than enforcing constant protection. This is especially true since cyber criminals are constantly evolving their viruses, making them stronger and harder to detect. If rules are not kept up to date, and scheduled maintenance is not regularly done, disaster can strike at any moment. Machine learning and AI can help level the playing field.
Even though cyber security specialists will remain as the last line of defense against attacks, AI and machine learning can be used as the first line of defense. AI and machine learning are constantly updating and learning, feeding off information from databases about cybersecurity and networking, as well as information from its experiences while deployed. AI and machine learning add automation to your cybersecurity team, aiding them in evolving and keeping your data safe from criminals.
Hammett Technologies is specialized in cybersecurity, using only the latest cybersecurity software and hardware to keep your data safe. When you partner with Hammett Technologies, you hire a partner who learns your employees, your business, and your process. | <urn:uuid:4c6b940c-aadc-496c-ab62-2d09ee1a02d6> | CC-MAIN-2022-40 | https://www.hammett-tech.com/its-time-to-upgrade-your-cybersecurity-defense/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00503.warc.gz | en | 0.963128 | 564 | 3.078125 | 3 |
People make choices online every day for virtually every step of their lives; whether it’s as simple as ordering pizza or more complicated, such as buying a car, applying for a passport or taking out a loan. So why when it comes to voting are we still using the archaic methods of pencil and paper?
It’s easy to question this, especially when you consider the poor turnouts at the polling stations - due to everything from bad weather to those who are abroad and didn’t register for a postal vote in time. In the 2015 UK general election, a huge 21 per cent of voters didn’t vote because they didn’t have time to get to the polling station.
The younger vote is even more affected by the lack of online provisions, whether it’s due to issues such as living in short-term, rented accommodation - which makes them less likely to appear on the electoral register – or not having the time to get to the polls. In 2015, just 43 per cent of those aged between 18 and 24 voted, compared with 76 per cent of those aged 65 and over.
How much easier would it be to vote online rather than getting up for work extra early, queuing through your lunch hour or adding to your commute home? WebRoots Democracy (opens in new tab), says online voting could increase the turnout of 18 to 24-year-olds by up to 70 per cent, and the overall turnout would increase by up to 79 per cent. The organisation also claims online voting would save £12.8 million, as it would replace the need to spend the whole night counting votes by hand.
What role does technology play?
Some countries already offer an element of online voting, especially those with rural communities such as Australia and Canada. Computers play a role in virtually all major U.S. elections. In some polling booths, people may touch a computer screen to select their chosen candidate. In other states, there might be machines in the back room that read the paper ballots.
In the UK, however, technology plays a small role. Every local authority has an electoral register, but there’s no online electoral roll; so to check you’re registered to vote, you have to contact the local electoral registration office. But you can register to vote via an online form. Then once registered, your local authority sends a polling card telling you where and when to vote. If you haven’t received or lost your polling card, you can still vote in person on election day - you’ll be given a ballot paper to mark with a cross your party of choice.
So why aren’t we digitising our elections more?
These methods seem somewhat low-tech in our internet age, but online voting is a far bigger and more complicated step than we realise. It’s not that it hasn’t been considered - the UK has run trials in the past, with positive results – but security experts say there’s no guarantee that online voting will be both secret and secure. It just isn’t simply possible now due to issues around security and voting machines. There are too many questions: how safe and reliable are computers - both those used by people to cast their vote and to process them afterwards? Could the machines get hacked? And can computerised systems fully protect voters’ rights?
For example, Estonia adopted online voting in 2007 and by the time of their 2015 election just over 30 per cent voted online. However, an independent report into how the system worked found massive technological problems and gaps in security. Norway trialled online voting in its 2011 and 2013 elections, but voter concern over security ended those trials. Meanwhile, cyber security concerns put a stop to France allowing citizens living abroad to vote online.
There are many things that could go wrong from cyberattacks influencing voting to system privacy issues and websites crashing through everyone trying to vote at once. Graham Cluley, security expert, commented in IBTimes UK, "Computers need more maintenance than a pencil and paper. Computers are more expensive than a pencil and paper. Software can contain bugs and vulnerabilities."
Caitriona Fitzgerald, chief technology officer and state policy coordinator for the Electronic Privacy Information Center in Washington, D.C., told ScienceNews that the online systems, “…are not necessarily 100 per cent secure. And with our elections we need to make sure it’s 100 per cent secure.”
But what about the problems with our current voting system?
Our current system isn’t necessarily robust either. In the last General Election in 2015, a proportion of people who were registered to vote couldn’t get access to polling stations as their records weren’t up to date. The London Borough of Hackney, experienced problems the week before polling day that affected many electors who had applied to register online and subsequently didn’t get their registration confirmed or receive either poll cards or postal votes.
Approximately 1,300 applications had either not been added to the electoral management system for Hackney, or had been added but not processed even though these applications had been made prior to the registration deadline. Following subsequent investigations, it was concluded that action needed to be taken on the electoral management system. Processes needed to be significantly improved to ensure that all application data is tracked throughout.
In the 2016 Mayoral elections, voters were turned away from 155 polling stations because their names were missing from the poll list. One station said that of the first 30 voters to show up, only three were on the register.
Could similar things happen on 8 June 2017? There are 45,766,000 voters in the UK but getting all of them on to the electoral register isn’t easy.
In June 2014, a new system was introduced allowing every person to register their vote individually – and for the first time, they could register online. Previously, one person – often the head of the household – was responsible for registering the votes of everyone else who lived at the address. The shift to individual registration is the biggest change to the electoral registration system in 100 years. However, it did mean that universities and colleges could no longer block-register students living in halls of residences to vote, which is another potential reason for the recent drops in younger voters.
Another challenge was transferring all the voter details to the new register. Existing voter details had to be checked against the Department of Work and Pensions (DWP) database or local data. Then if there was a match, the voter details were “confirmed” – and automatically transferred to the new Individual Electoral System (IES) and sent a letter.
However, a total of 42.4 million electoral register entries were sent for matching against the DWP database, and 5.5 million of those existing electors got lost in the system, having not been automatically transferred. And between the matched and non-matched groups, there were 7.5 million people who weren’t registered at their current address and may have been on the register at all. Voters whose details could not be transferred automatically needed to re-register but unconfirmed voters could easily have dropped off the register if their new applications weren’t made by December 2015.
Some Government systems, instead of missing people, have an excess. Conversely, figures from March 2016 identified 57 million patients held within GPs’ records, but census data suggests this should have only been 54 million. Who were these extra three million people, and how can there be more people on GPs’ books than exist?
In the latest election, problems have already started as numerous Plymouth voters were sent the wrong polling cards a few weeks ago. Some people who registered for a postal vote received both a postal poll card and a polling station card, which would be used to vote in person. Apparently, this was due to a printing error. The city council is investigating - and has reassured concerned voters that poll cards alone do not give a right to vote and postal voters won’t be issued with a ballot paper. Meanwhile, voters in Staines have been sent polling cards printed with the wrong polling station locations – three miles away - and cards sent out for the Stratfield Mortimer referendum incorrectly stated the details for the 2017 general election.
Could technology help avoid these problems and make the voting process easier to handle?
The biggest challenge with the current voting system is that records can be held in multiple places. Alongside any central electoral system, data on voters can be within specific and local applications, databases and filing systems.
Building a ‘Single Citizen view’ (SCV) could help to manage the records of voters more easily. These silos of data should be used to build up a personalised and accurate view of each individual. By putting SCV in place, the electoral role would be able to connect more effectively with the people they serve (the voters), maintain real-time insights and improve the performance of the voting system.
The SCV can also be used for compliance too. If a request for data deletion comes in (GDPR’s Article 17 – Right to Erasure), the SCV can provide a complete set of records that can be used for managing the request and dramatically reduce the amount of work required to comply with data protection. It’s also possible to estimate how many sources of individual data exists and the amount of time to process a request for data deletion.
Depending on the level of manual intervention required, the current cost for the Government to process each data deletion request properly within each system could be hours. Automating these steps across each system should save time – and money - on top of not needing each system to be checked for records in the first place.
The thought of online voting in 2017 may have seemed plausible. However, in light of today’s increasingly sophisticated and numerous cyberattacks, the threats posed by a digital vote outweigh the gains. Instead, taking a single-minded approach to data and getting a single customer view in place should make it easier to manage the voting registration process. This not only helps everyone to make their voice heard on important issues but should reduce the cost to manage such a critical set of data over time.
Martin James, Regional Vice President, Northern Europe at DataStax
What is Format?
Proximity cards contain chips that store data. The way that data is structured is called the "format" of the card. The number of ones and zeros, and how they’re put together, determines the format and ultimately the credential number.
For example, US phone numbers follow a well-known format: 9395981699 is recognized as 939-598-1699. Knowledge of the format allows proper reading. The format defines the bit length and fields of the credential number.
Every format has a fixed number of bits – the count of the binary digits (zeros and ones) that make up the credential. Common sizes are 26, 33, 37, 48 and 50 bits. The 26 bit card is the most common and the industry standard, and the 26 bit format is recognized by virtually all access hardware. The higher bit counts (33, 37, 48, 50) can increase card security.
Some of the higher bit formats are "proprietary", and usually carry a higher price tag. One exception is the HID 37 bit proprietary format, priced similarly to a 26 bit card.
Serial number and facility code
Every standard 26-bit card has a sequential card number programmed and assigned to a cardholder. For 26 bit cards, it can range from 0 to 65,535. To reduce the risk of duplication, a second number, known as a facility or site code, is encoded into each card. This number can range from 0 to 255 on a 26 bit format card, making it far less likely that two companies share the same card numbers.
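The open 26 bit layout is widely documented: a leading even-parity bit computed over the first 12 data bits, an 8 bit facility code, a 16 bit card number, and a trailing odd-parity bit computed over the last 12 data bits. A minimal sketch of packing and unpacking that frame (illustrative only, not vendor code):

```python
def encode_h10301(facility: int, card: int) -> int:
    """Pack a facility code and card number into a 26-bit H10301 frame."""
    assert 0 <= facility <= 0xFF and 0 <= card <= 0xFFFF
    data = (facility << 16) | card               # 24 data bits
    first12 = data >> 12
    last12 = data & 0xFFF
    even = bin(first12).count("1") % 2           # even parity over first 12 bits
    odd = (bin(last12).count("1") + 1) % 2       # odd parity over last 12 bits
    return (even << 25) | (data << 1) | odd


def decode_h10301(frame: int) -> tuple[int, int]:
    """Recover (facility, card) from a 26-bit frame, checking both parity bits."""
    data = (frame >> 1) & 0xFFFFFF
    even = (frame >> 25) & 1
    odd = frame & 1
    if bin(data >> 12).count("1") % 2 != even:
        raise ValueError("even-parity check failed")
    if (bin(data & 0xFFF).count("1") + odd) % 2 != 1:
        raise ValueError("odd-parity check failed")
    return data >> 16, data & 0xFFFF
```

Round-tripping a credential (e.g. facility 18, card 1234) recovers the same pair, and flipping any single bit in the frame fails one of the two parity checks – which is exactly what the reader hardware relies on to reject misreads.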
Other formats have a greater number of bits, and may not need facility codes, because the card serial number (like the serial number on currency bills) is never duplicated at the factory. If your company has more employees or if you want increased security, you may want to consider HID’s Corporate 1000 programming option.
Every format has a reference number. Current formats are H10301, H10302, H10304, H200xxxx and H5xxxx (Corporate 1000). Other open formats exist, such as 40134 and C10106.
Examples of Formats
Standard 26 Bit (Format: H10301)
An unmanaged format available for purchase by anyone.
- General: Available Facility codes from 0-255
- Card numbers start and stop range: 0 to 65,535 unique card numbers
- Sales Policy: This is an ‘open format’ and is available to anyone
- *Mandatory Fields: Format, Facility Code and Starting Card Number
Example: H10301 part numbers and programming information.
Managed 37 Bit (Format: H10302)
Customer can authorize a provider/ partner such as J O’Brien to purchase (and manage) on customer’s behalf.
- General: No facility code required
- Card number start and stop range: HID controls the numbers generated for each order.
- Sales Policy: 37-bit H10302 format can be sold to any provider/ partner.
- Note: confirm that the system is capable of using a 37-bit number with no facility code.
- *Mandatory Fields: Format
Example: H10302 part numbers and programming information.
Proprietary 37 Bit (Format: H10304)
The provider/ partner retains control of the numbers so the cards may only be ordered from that partner exclusively.
- General: The 37 bit format has reserved facility codes managed by HID.
- Sales Policy: The H10304 (37 bit) is ideal for partners who would like to have their own format.
- *Mandatory Fields: Format, Facility Code (unique to partner)
Corporate 1000 (Format: H200xxxx or H5xxxx)
Customers/ end users purchase and control the format and can choose the provider/ partner to manage and order. The Corporate 1000 (C1K) format is a tracked and managed format that is unique to each end user organization that joins the program. Both 35 bit and 48 bit lengths are in use; however, 35 bit formats are no longer available for new customers.
- *1st generation C1K formats are 35 bits. H5XXXX where XXXX is unique to the end-user organization – over 1,000,000 available credential numbers
- *2nd generation C1K formats are 48 bits. H200XXXX where XXXX is unique to the end-user organization – over 8,000,000 available credential numbers
- Additional programming service fee of $0.30 USD per card (not applicable on Mobile IDs)
- New (not repeat) orders:
- C1K Request/Authorization form must be completed and signed by end user (format owner)
- C1K Licensing Agreement must be signed by end user (format owner)
- Reorders: Fill out C1K Change Authorization form
- *Mandatory Fields: Format, Company ID Code or End User Name
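The credential capacities quoted above follow directly from the bit budgets. Assuming the commonly documented C1K layouts (12 bit company ID + 20 bit card number + 3 parity bits for the 35 bit generation; 22 bit company ID + 23 bit card number + 3 parity bits for the 48 bit generation), a quick sanity check:

```python
# Assumed Corporate 1000 bit budgets (publicly documented layouts):
#   35-bit: 12-bit company ID + 20-bit card number + 3 parity bits
#   48-bit: 22-bit company ID + 23-bit card number + 3 parity bits
gen1_card_bits = 20
gen2_card_bits = 23

gen1_capacity = 2 ** gen1_card_bits  # 1,048,576 -> "over 1,000,000"
gen2_capacity = 2 ** gen2_card_bits  # 8,388,608 -> "over 8,000,000"

print(gen1_capacity, gen2_capacity)
```

This is why the two generations offer "over 1,000,000" and "over 8,000,000" credential numbers respectively: the card-number field simply grew by three bits.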
Need More Information about card formatting? | <urn:uuid:79f9ab87-9189-4840-b272-88285467142d> | CC-MAIN-2022-40 | https://info.jobrien.com/beginners-guide-to-access-card-formatting | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00703.warc.gz | en | 0.867881 | 1,043 | 2.765625 | 3 |
The physical and mental burden of prolonged exposure to bushfire smoke is vastly underestimated by official statistics based on admissions into the health system, finds a large survey of residents affected by the widespread bushfires near Canberra, Australia’s capital city, in the summer of late 2019 and early 2020.
Prolonged exposure to bushfire smoke creates a heavy mental and physical burden not portrayed by official statistics of admissions into the health system, finds a new study, published in Frontiers in Public Health.
It highlights the urgent need for improved knowledge in this area to support and build resilience among individuals and communities that are impacted by these catastrophic events.
“We found that almost every single respondent to our survey experienced at least one physical health symptom that they attributed to the smoke,” explains Professor Iain Walker, co-author of the study, and based at the Research School of Psychology, Australian National University, Australia.
“Just less than one-fifth of respondents sought help from a medical practitioner, despite the widespread reporting of negative health burdens in our survey,” he continues. “This shows that the rate of people presenting into the health system is much less than the number of people who experience health symptoms when exposed to bushfire smoke.”
Air pollution from bushfires is known to increase the likelihood of death in humans and it especially affects people with pre-existing respiratory and cardiovascular problems. Its impacts on mental health are not so well understood.
The Australian summer of late 2019 to early 2020 saw fires burn 10m hectares (nearly 25m acres), which spread blanketing, choking smoke over an even larger area. As soon as was possible after this event, the researchers of this study conducted a survey of residents in the area to get a more thorough assessment of the impact of bushfire smoke on the community.
“Our survey of residents living in and around Canberra, Australia’s capital city, asked a wide range of questions about the physical and mental health symptoms the participants had experienced, as well as how their behaviors changed during the bushfire period. For example, staying indoors to avoid the smoke, which in turn reduces physical activity,” explains Walker.
Over 2,000 people responded to the survey, which was administered via post, door-to-door visits, social media, and telephone contact to maximize the number and range of individuals reached.
Widespread negative health burden
“The survey showed that physical health effects were more extensive than previously thought and that there were very high levels of anxiety and depression,” reports Walker.
“It is likely that official statistics greatly underestimate the prevalence of health problems because of the major hurdles in the way of anyone presenting into the system, and we think many residents were motivated to avoid overburdening the health system at a time when it was stretched.”
Walker explains that improving our understanding in this area is important to help design better public health communications and service delivery.
“There is a real need for improved knowledge on the long-term effects of exposure to bushfire smoke, and how these effects vary across different segments of the population. We are conducting further studies to understand how bushfires continue to affect the mental health of people impacted by these fires and the smoke, and how we can build resilience among individuals and communities.”
Wildfires are a global occurrence. Changes in temperature and precipitation patterns from climate change are increasing wildfire prevalence and severity (Westerling et al. 2006; Settele et al. 2014) resulting in longer fire seasons (Flannigan et al. 2013; Westerling et al. 2006) and larger geographic area burned (Gillett et al. 2004).
Wildfire smoke contains many air pollutants of concern for public health, such as carbon monoxide (CO), nitrogen dioxide, ozone, particulate matter (PM), polycyclic aromatic hydrocarbons (PAHs), and volatile organic compounds (Naeher et al. 2007). Current estimated annual global premature mortality attributed to wildfire smoke is 339,000 (interquartile range of sensitivity analyses: 260,000–600,000) (Johnston et al. 2012), but the overall impact on public health in terms of respiratory, cardiovascular, and other morbidity effects is unknown. A better synthesis of current knowledge on the health effects of wildfire smoke is needed to guide public health responses.
Wildfire smoke epidemiology is an active area of research (Henderson and Johnston 2012) with new methods uncovering associations that were previously undetectable. Studies of health outcomes associated with wildfire smoke exposure tend to be retrospective and researchers have to rely on administrative health outcome data such as mortality or hospitalization records.
Achieving adequate statistical power has been challenging because such severe outcomes are less common, fires tend to be episodic and short in duration, and exposed populations from individual events are often small. Many recent studies have increased statistical power by investigating very high exposure events that last for longer periods, large populations over many years in regions with frequent fires, more common health outcomes such as medication dispensations, or a combination of these methods.
Previous reviews of wildfire health impacts have either not included the full range of health end points associated with community exposure to wildfire smoke (Dennekamp and Abramson 2011; Henderson and Johnston 2012) or have summarized the literature without critical analysis of specific studies (Finlay et al. 2011; Liu et al. 2015; Youssouf et al. 2014). Our review follows a modified version of the systematic review methodology outlined in Woodruff and Sutton (2014) to analyze studies critically and to only evaluate the strongest evidence.
Our critical review demonstrated consistent evidence of associations between wildfire smoke exposure with general respiratory morbidity and with exacerbations of asthma and COPD (Table 1). Mounting epidemiological evidence and plausible toxicological mechanisms suggest an association between wildfire smoke exposure and respiratory infections, but inconsistencies remain. Increasing evidence suggests an association between wildfire smoke exposure and all-cause mortality, especially from more recent, higher-powered studies (e.g., Johnston et al. 2011; Morgan et al. 2010; Faustini et al. 2015).
The current evidence for cardiovascular morbidity from wildfire smoke exposure remains mixed; many studies are inconclusive or negative, but some have demonstrated significant increases for specific cardiovascular outcomes, such as cardiac arrests.
Toxicological findings are consistent with cardiac effects through evidence of systemic inflammation and increased coagulability. Most of the other end points of interest, including birth outcomes, mental health, and cancer have not been sufficiently studied.
Our review highlights the lack of information about which populations are most susceptible to wildfire smoke exposure. People already diagnosed with asthma or COPD are more susceptible. We found inconsistent evidence of differential effects by age or SES. Two studies have suggested differential effects by Australian indigenous status with no investigation of other ethnic groups.
Many gaps exist in understanding the public health implications of exposure to wildfire smoke. Larger studies with greater statistical power and more spatially refined exposure assessments are needed to better characterize impacts on mortality, cardiovascular disease, birth outcomes, and mental health effects.
Currently, evidence exists of exacerbation, but not incidence, of asthma and COPD from wildfire smoke exposure. In temperate parts of the world, where wildfire smoke exposure is episodic, it is unlikely that changes in asthma incidence would be observed. Studies have not been conducted in populations more chronically exposed to wildfire smoke.
Additionally, other health outcomes associated with wildfire smoke exposure have not yet been sufficiently studied, such as otitis media, which has been associated with exposure to secondhand tobacco smoke (Kong and Coates 2009), air pollution from woodsmoke (MacIntyre et al. 2011) and recently wildfire smoke (Yao et al. 2016).
Human experimental studies of exposures to wildfire smoke could help clarify biological mechanisms. Very little information exists on health effects associated with measures of pollutants in wildfire smoke other than PM, such as ozone or PAHs. Although this review combined results from studies of various types of fires, it is possible that smoke originating from peat fires, forest fires, grassland fires, and agricultural burning could lead to differential health effects due to different constituents in the smoke.
To our knowledge, no studies have yet investigated chronic exposure to wildfire smoke, but many populations in Southeast Asia, Africa, and Latin America are exposed regularly for extended periods (Johnston et al. 2012).
Characterization of the exposure–response function is critical for setting smoke levels for public health warnings or interventions, and it is not yet known whether current levels based on undifferentiated PM sufficiently characterize the effects of wildfire smoke.
Four studies (Arbex et al. 2010; Chen et al. 2006; Johnston et al. 2002; Sastry 2002) have attempted to identify effects at different exposure levels, but these studies are hard to compare because of differences in exposure assessment methods, health outcomes, types of fires, and population susceptibilities.
Reference: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5010409/
More information: Physical and Mental Health Effects of Bushfire and Smoke in the Australian Capital Territory 2019–20, Frontiers in Public Health (2021). www.frontiersin.org/articles/1 … 89/fpubh.2021.682402 | <urn:uuid:00dc8ac3-a944-448c-9681-f288955c88de> | CC-MAIN-2022-40 | https://debuglies.com/2021/10/14/prolonged-exposure-to-smoke-from-bushfire-creates-severe-physical-and-mental-consequences/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00703.warc.gz | en | 0.938567 | 1,917 | 3.40625 | 3 |
When asked the other day about a bakery near my home, I responded that I’d recently eaten its mouth-watering chocolate chip cookies. My wife corrected me, noting that the cookies I ate were actually oatmeal raisin.
Is this an early sign of impending dementia? Should I call my doctor?
Or is forgetting the details of a dessert a good thing, given that everyday life is filled with an enormous number of details, too many for a finite human brain to remember accurately?
I am a cognitive scientist and have been studying human perception and cognition for more than 30 years. My colleagues and I have been developing new theoretical and experimental ways to explore this kind of error.
Are these memory mistakes a bad thing, resulting from faulty mental processing? Or are they the natural consequence of a system working as well as it can?
We’re leaning toward the latter – that memory errors may actually indicate a way in which the human cognitive system is “optimal” or “rational.”
Are people rational?
For decades, cognitive scientists have thought about whether human cognition is strictly rational. Starting in the 1960s, psychologists Daniel Kahneman and Amos Tversky conducted pioneering research on this topic. They concluded that people often use “quick and dirty” mental strategies, also known as heuristics.
For example, when asked whether the English language has more words starting with the letter “k” or with “k” as the third letter, most people say there are more words starting with “k.” Kahneman and Tversky argued that people reach this conclusion by quickly thinking of words that start with “k” and with “k” in the third position, and noticing that they can think of more words with that initial “k.” Kahneman and Tversky referred to this strategy as the “availability heuristic” – what comes most easily to mind influences your conclusion.
Although heuristics often yield good results, they sometimes do not. Therefore, Kahneman and Tversky argued that, no, human cognition is not optimal. Indeed, the English language has many more words with “k” in the third position than words starting with “k.”
Suboptimal or the best it can be?
In the 1980s, however, research started appearing in the scientific literature suggesting that human perception and cognition might often be optimal. For instance, several studies found that people combine information from multiple senses – such as vision and hearing, or vision and touch – in a manner that is statistically optimal, despite noise in the sensory signals.
Perhaps most important, research showed that at least some instances of seemingly suboptimal behavior are actually the opposite. For example, it was well known that people sometimes underestimate the speed of a moving object. So scientists hypothesized that human visual motion perception is suboptimal.
But more recent research showed that the statistically optimal sensory interpretation or percept is one that combines visual information about the speed of an object with general knowledge that most objects in the world tend to be stationary or slow moving. Moreover, this optimal interpretation underestimates the speed of an object when visual information is noisy or low quality.
Because the theoretically optimal interpretation and people’s actual interpretation make similar errors in similar circumstances, it may be that these errors are inevitable when visual information is imperfect, and that people are actually perceiving motion speeds as well as they can be perceived.
Scientists found related results when studying human cognition. People often make errors when remembering, reasoning, deciding, planning or acting, especially in situations when information is ambiguous or uncertain. As in the perceptual example on visual speed estimation, the statistically optimal strategy when performing cognitive tasks is to combine information from data, such as things one has observed or experienced, with general knowledge about how the world typically works.
Researchers found that the errors made by optimal strategies – inevitable errors due to ambiguity and uncertainty – resemble the errors people really make, suggesting that people may be performing cognitive tasks as well as they can be performed.
Evidence has been mounting that errors are inevitable when perceiving and reasoning with ambiguous inputs and uncertain information. If so, then errors are not necessarily indicators of faulty mental processing. In fact, people’s perceptual and cognitive systems may actually be working quite well.
Your brain, under constraints
There are often constraints on human mental behavior. Some constraints are internal: People have limited capacity for paying attention – you can’t attend to everything simultaneously. And people have limited memory capacity – you can’t remember everything in full detail. Other constraints are external, such as the need to decide and act in a timely manner. Given these constraints, it may be that people cannot always perform optimal perception or cognition.
But – and this is the key point – although your perception and cognition might not be as good as they could be if there were no constraints, they might be as good as they could be given the presence of these constraints.
Consider a problem whose solution requires you to think simultaneously about many factors. If, because of capacity limits on attention, you cannot think about all factors at once, then you will not be able to think of the optimal solution. But if you think about as many factors as you can hold in your mind at the same time, and if these are the most informative factors for the problem, then you’ll be able to think of a solution that is as good as possible given your limited attention.
The limits of memory
This approach, emphasizing “constrained optimality,” is sometimes known as the “resource-rational” approach. My colleagues and I have developed a resource-rational approach to human memory. Our framework thinks of memory as a type of communication channel.
When you place an item in memory, it’s as if you’re sending a message to your future self. However, this channel has limited capacity, and thus it cannot transmit all details of a message. Consequently, a message retrieved from memory at a later time may not be the same as the message placed into memory at the earlier time. That is why memory errors occur.
If your memory store cannot faithfully maintain all details of stored items because of its limited capacity, then it would be wise to make sure that whatever details it can maintain are the important ones. That is, memory should be the best it can be within limited circumstances.
Indeed, researchers have found that people tend to remember task-relevant details and to forget task-irrelevant details. In addition, people tend to remember the general gist of an item placed in memory, while forgetting its fine details. When this occurs, people tend to mentally “fill in” the missing details with the most frequent or commonplace properties. In a sense, the use of commonplace properties when details are missing is a type of heuristic – it is a quick-and-dirty strategy that will often work well but sometimes fail.
Why did I recall eating chocolate chip cookies when, in fact, I had eaten oatmeal raisin cookies?
Because I remembered the gist of my experience – eating cookies – but I forgot the fine details, and thus filled in these details with the most common properties, namely cookies with chocolate chips.
In other words, this error demonstrates that my memory is working as well as possible under its constraints. And that’s a good thing.
Funding: Robert Jacobs receives funding from the National Science Foundation.
Source: The Conversation
As we have seen, our memories are not perfect. They fail in part due to our inadequate encoding and storage, and in part due to our inability to accurately retrieve stored information. But memory is also influenced by the setting in which it occurs, by the events that occur to us after we have experienced an event, and by the cognitive processes that we use to help us remember. Although our cognition allows us to attend to, rehearse, and organize information, cognition may also lead to distortions and errors in our judgments and our behaviours.
Why do we forget? Some memories simply fade with the passage of time; they decay as the structural changes learning produces in the brain simply go away. But most forgetting is due to interference; as we learn additional information, it displaces the earlier information. Because we store pieces of information in associative networks, we are more likely to retrieve a concept when it is connected by a larger number of links. As we integrate new concepts, a stimulus is no longer as effective at retrieving the old response. These interference effects help to explain why we have trouble remembering brand information. Since we tend to organize attribute information by brand, when we learn additional attribute information about the brand or about similar brands, this limits our ability to activate the older information (Meyers-Levy, 1989).
The Cost of Failing to be Memorable
Marketers obviously hope that consumers will not forget about their products. However, in a poll of more than 13,000 adults, more than half were unable to remember any specific ad they had seen, heard, or read in the past thirty days (Burke & Srull, 1988).
How many can you remember right now? Quick, make a list! Clearly, forgetting by consumers is a big headache for marketers.
Cognitive biases are errors in memory or judgment that are caused by the inappropriate use of cognitive processes. The study of cognitive biases is important both because it relates to the important psychological theme of accuracy versus inaccuracy in perception, and because being aware of the types of errors that we may make can help us avoid them and therefore improve our decision-making skills.
Identifying Cognitive Biases
Our perceptions about value and loss are formed by our experiences, culture, upbringing, and messages received through mass media. These are innate biases we learn over time, mostly unconsciously. These unconscious decisions distort our judgment and can lead to stereotyping and bad decision making.
There are numerous cognitive biases that impact our decisions: here are 10 to watch out for:
- Anchoring Bias – Over-relying on the first piece of information obtained and using this as a baseline for comparisons.
- Availability Bias – Making decisions based on immediate information or examples that come to mind.
- Bandwagon Effect – Making a decision if there are others that also hold that belief or opinion. People tend to divide themselves into groups, and then attribute positive attributes to their own group. (Similar to “group think” and “herd mentality”.)
- Choice-Supportive Bias – Once a decision is made, focusing on the benefits and ignoring or minimizing flaws.
- Confirmation Bias – Paying more attention to information that reinforces previously held beliefs and ignoring evidence to the contrary.
- False-Consensus Effect – Overestimating how much other people agree with their own beliefs, behaviors, attitudes, and values. This leads people not only to incorrectly think that everyone else agrees with them, but can also lead them to overvalue their own opinions.
- Halo Effect – Tendency for an initial impression of a person to influence what we think of them overall. Assuming that because someone is good or bad at one thing they will be equally good or bad at another.
- Self-Serving Bias – Tendency for people to give themselves credit for successes but lay the blame for failures on outside causes. This plays a role in protecting your self-esteem.
- Hindsight Bias – Tendency to see events, even random ones, as more predictable than they are. (Similar to the “I knew it all along” phenomenon.)
- Misinformation Effect – Tendency for memories to be heavily influenced by things that happened after the actual event itself. These memories may be incorrect or misremembered.
Perhaps one of the most relatable cognitive biases is the IKEA Effect, which is described as the disproportionately high value we place on items that we have had a hand in creating ourselves. This cognitive bias is more commonly expressed as “a labour of love,” showing our affinity for the blood, sweat, and tears that we put into building a bookshelf; sewing a quilt; assembling a large-scale LEGO project; or simply preparing a delicious dinner from a meal-delivery service. Not only do consumers value the end product more (than if they’d purchased one pre-assembled), they are even willing to pay more to put in the work!
Specifically, consumers exhibit greater willingness to pay for self-made products than for identical products that have been produced by someone else, even if the self-crafted product is of inferior quality (e.g., Franke et al. 2010; Norton et al. 2012). Since effort is considered costly, preferring self-made products over ready-made products incurs extra costs, which should intuitively lower the willingness to pay.
Remarkably, the higher willingness to pay as demonstrated by the literature suggests the opposite and points to an overvaluation of the self-made product. For example, in an experimental study, Norton et al. (2012) asked participants to assemble a standardized IKEA storage box or hedonistic items such as a Lego car or an origami model. Regardless of the product type, the participants exhibited greater willingness to pay compared with the willingness to pay of third-parties and also compared with their own willingness to pay for identical but pre-assembled products.
One potential error in memory involves mistakes in differentiating the sources of information.
Source monitoring refers to the ability to accurately identify the source of a memory. Perhaps you’ve had the experience of wondering whether you really experienced an event or only dreamed or imagined it. If so, you wouldn’t be alone. Rassin, Merkelbach, and Spaan (2001) reported that up to 25 per cent of undergraduate students reported being confused about real versus dreamed events. Studies suggest that people who are fantasy-prone are more likely to experience source monitoring errors (Winograd, Peluso, & Glover, 1998), and such errors also occur more often for both children and the elderly than for adolescents and younger adults (Jacoby & Rhodes, 2006).
The Sleeper Effect
In other cases we may be sure that we remembered the information from real life but be uncertain about exactly where we heard it. Imagine for a moment that you have just read some gossip online. Probably you would have discounted the information because you know that its source is unreliable.
But what if later you were to remember the story but forgot the source of the information? If this happens, you might become convinced that the news story is true because you forget to discount it. The sleeper effect refers to attitude change that occurs over time when we forget the source of information (Pratkanis, Greenwald, Leippe, & Baumgardner, 1988).
In still other cases we may forget where we learned information and mistakenly assume that we created the memory ourselves. Canadian authors Wayson Choy, Sky Lee, and Paul Yee launched a $6 million copyright infringement lawsuit against the parent company of Penguin Group Canada, claiming that the novel Gold Mountain Blues contained “substantial elements” of certain works by the plaintiffs (“Authors Sue Gold Mountain…,” 2011).
The suit was filed against Pearson Canada Inc., author Ling Zhang, and the novel’s U.K.-based translator Nicky Harman. Zhang claimed that the book shared a few general plot similarities with the other works but that those similarities reflect common events and experiences in the Chinese immigrant community. She argued that the novel was “the result of years of research and several field trips to China and Western Canada,” and that she had not read the other works. Nothing was proven in court.
Finally, the musician George Harrison claimed that he was unaware that the melody of his song My Sweet Lord was almost identical to an earlier song by another composer. The judge in the copyright suit that followed ruled that Harrison did not intentionally commit the plagiarism. (Please use this knowledge to become extra vigilant about source attributions in your written work, not to try to excuse yourself if you are accused of plagiarism.)
The Sleeper Effect, Cognitive Bias, & MSG
I can remember back to some time in my youth when there was a public fear of a flavour-enhancing ingredient commonly used in Asian cooking. Monosodium glutamate (better known as “MSG”) was the target of widespread criticism and public outcry because someone somewhere claimed it caused adverse side effects, ranging from headaches to numbness.
Thinking back now, I wonder: where did I first hear this?
Who was responsible for this claim and was the source reliable?
I was young and easily influenced by herd mentality (bandwagon effect) and never bothered to ask. I do remember, however, that every Asian restaurant I went to marketed themselves as being “MSG-free” even to the extent that they would advertise a large image of “MSG” in a circle with a line across it, like a no-smoking sign.
Today I’m intrigued to know how this all happened: how did consumers all over the world develop a negative and unfavourable attitude towards MSG and, presumably like me, have no idea of the source or validity of the panic in the first place?
Where it all started
In 1968, MSG’s death knell rang in the form of a letter written to the New England Journal of Medicine by Dr. Robert Ho Man Kwok, a Chinese-American doctor from Maryland. Kwok (1968) claimed that after eating at Chinese restaurants, he often came down with certain unpleasant symptoms, namely “numbness at the back of the neck, gradually radiating to both arms and the back” and “general weakness and palpitation” (Geiling, 2013).
After Kwok’s letter was published, the New England Journal of Medicine received many more from readers who claimed to experience similar effects after eating Chinese food.
Debunking the MSG myth
Countless scientific and medical experiments were conducted in the two decades following Kwok’s 1968 claim — from the FDA to the United Nations — and extensive examinations were completed by many governments (Australia, Britain, Japan), all of which concluded that MSG was safe to use and consume as a food additive (Geiling, 2013). The U.S. Food & Drug Administration (“FDA”) states the following on its website about MSG:
FDA considers the addition of MSG to foods to be “generally recognized as safe” (GRAS). Although many people identify themselves as sensitive to MSG, in studies with such individuals given MSG or a placebo, scientists have not been able to consistently trigger reactions (“Questions & Answers”…2018).
This leads me to wonder, what was really behind all this panic about MSG?
Confronting consumer bias
Ian Mosby, a food historian who has researched what became known as the “Chinese restaurant syndrome,” examined the topic more closely in his research publication, “‘That Won-Ton Soup Headache’: The Chinese Restaurant Syndrome, MSG and the Making of American Food, 1968–1980.” Mosby (2009) argued that the Chinese restaurant syndrome was, “at its core, a product of racialized discourse that framed much of the scientific, medical and popular discussion surrounding the condition.”
For example, Mosby points out that while Asian restaurants in the Western world had to advertise loudly “No MSG,” junk food and packaged-goods companies selling potato chips and cans of soup didn’t (Mosby, 2012). As it turns out, MSG is not unique to Asian cuisine and is used in a host of other packaged and frozen products as well as freshly prepared non-Asian dishes.
Bias and prejudice towards Asian people and cuisine fueled an unnecessary marketing movement and anti-MSG activism in many parts of the world, including in Canada, where it is often believed that racism has no home. Mosby reminds us of why that isn’t the case:
The story of the ‘discovery’ and ‘spread’ of the Chinese restaurant syndrome – and its central idea that you were more likely to suffer an adverse reaction to MSG after eating Chinese food – therefore provides an instructive example of the ways in which ideas about supposedly ‘foreign’ food and food cultures can often bring to the surface a range of prejudices and assumptions grounded in ideas about race and ethnicity that, even in supposedly pluralistic and multicultural societies like Canada, continue to inform perceptions of the culinary ‘other’.
Schema & Confirmation Bias
Schemas help us remember information by organizing material into coherent representations. However, although schemas can improve our memories, they may also lead to cognitive biases. Using schemas may lead us to falsely remember things that never happened to us and to distort or misremember things that did. For one, schemas lead to confirmation bias, which is the tendency to verify and confirm our existing memories rather than to challenge and disconfirm them. The confirmation bias occurs because once we have schemas, they influence how we seek out and interpret new information. The confirmation bias leads us to remember information that fits our schemas better than we remember information that disconfirms them (Stangor & McMillan, 1992), a process that makes our stereotypes very difficult to change. And we ask questions in ways that confirm our schemas (Trope & Thompson, 1997).
If we think that a person is an extrovert, we might ask her about ways that she likes to have fun, thereby making it more likely that we will confirm our beliefs. In short, once we begin to believe in something — for instance, a stereotype about a group of people — it becomes very difficult to later convince us that these beliefs are not true; the beliefs become self-confirming.
Darley & Gross (1983) demonstrated how schemas about social class could influence memory. In their research they gave participants a picture and some information about a Grade 4 girl named Hannah. To activate a schema about her social class, Hannah was pictured sitting in front of a nice suburban house for one-half of the participants and pictured in front of an impoverished house in an urban area for the other half.
Then the participants watched a video that showed Hannah taking an intelligence test. As the test went on, Hannah got some of the questions right and some of them wrong, but the number of correct and incorrect answers was the same in both conditions. Then the participants were asked to remember how many questions Hannah got right and wrong.
Demonstrating that stereotypes had influenced memory, the participants who thought that Hannah had come from an upper-class background remembered that she had gotten more correct answers than those who thought she was from a lower-class background.
Schemas & Pro-Environmental Consumers
How have consumers’ existing expectations about brands and retailers shaped their decision making? What actions can brands undertake to influence how consumers evaluate them? Consider the fashion and apparel industries and today’s highly-involved and conscientious consumer.
Many consumers today are becoming increasingly vocal about pro-environmental issues, making it imperative for retailers to be transparent about how their products are made (“Fashion with a conscience,” 2017). Bhaduri (2019) tells us that several apparel brands have started to not only undertake “pro-environmental” initiatives, but also communicate their efforts through marketing communications, because research shows that consumers often evaluate brands and their communications based on their existing expectations (schemas). Research also suggests that brand messages that are congruent with (aligned to) consumers’ schemas will reinforce their existing expectations and be evaluated positively. A winning outcome for the brand!
Salience & Cognitive Accessibility
Another potential for bias in memory occurs because we are more likely to attend to, and thus make use of and remember, some information more than other information. For one, we tend to attend to and remember things that are highly salient, meaning that they attract our attention.
Does Salience Fool Us?
Things that are unique, colourful, bright, moving, and unexpected are more salient (McArthur & Post, 1977; Taylor & Fiske, 1978). In one relevant study, Loftus, Loftus, and Messo (1987) showed people images of a customer walking up to a bank teller and pulling out either a pistol or a chequebook. By tracking eye movements, the researchers determined that people were more likely to look at the gun than at the chequebook, and that this reduced their ability to accurately identify the criminal in a lineup that was given later.
The salience of the gun drew people’s attention away from the face of the criminal.
In a consumer behaviour context, we see that a consumer’s attention is attracted to and influenced by the most salient features they are faced with in the moment of buying (Bordalo, Gennaioli, & Shleifer, 2013). When deciding between two items—gym memberships, for example—the consumer might feel location is the most salient feature upon which they make their decision. Imagine if a consumer was considering purchasing a gym membership from one of two gyms.
Gym A is close enough to the consumer’s home that they could walk to it. The price, however, is 25% higher than Gym B, which is only a 10-minute drive away. Despite the price difference and the money that could be saved, our consumer selects the more expensive gym nearby because they are more attracted to “location”; in fact, their attraction to this feature outweighs the difference in price, which means that location has more salience to our (soon-to-be-fit) consumer.
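The trade-off can be sketched as a simple weighted-utility choice. Everything below is hypothetical: the attribute scores, the weights, and the scoring function are invented purely to illustrate how overweighting a salient attribute can flip a decision.

```python
def choose_gym(options, weights):
    """Pick the option with the highest weighted sum of attribute scores."""
    def score(attrs):
        return sum(weights[k] * v for k, v in attrs.items())
    return max(options, key=lambda name: score(options[name]))

# Attribute scores on a 0-1 scale, where higher is better
# (so the cheaper gym gets the higher "price" score).
gyms = {
    "Gym A (walkable, pricey)": {"location": 1.0, "price": 0.2},
    "Gym B (10-min drive)":     {"location": 0.5, "price": 0.8},
}

# Weighing price and location equally favours the cheaper gym...
balanced = choose_gym(gyms, {"location": 0.5, "price": 0.5})
# ...but when "location" is salient enough to dominate, the choice flips.
salient = choose_gym(gyms, {"location": 0.9, "price": 0.1})
```

With balanced weights the cheaper gym wins; once “location” carries the outsized weight a salient feature tends to get, the nearby, pricier gym wins instead.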
The salience of the stimuli in our social worlds has a big influence on our judgment, and in some cases may lead us to behave in ways that might not benefit us. Imagine, for instance, that you wanted to buy a new mobile device for yourself. You checked Consumer Reports online and found that, although most of the leading devices differed on many dimensions, including price, battery life, weight, camera size, and so forth, one particular device was nevertheless rated significantly higher by owners than others. As a result, you decide that that is the one you are going to purchase the next day…
That night, however, you go to a party, and a friend shows you their brand new mobile device and after checking it out, you decide it’s perfect for your needs. You tell your friend that you were thinking of buying the other brand and they convince you not to, saying it didn’t download music correctly, the battery died right after the warranty expired, and so forth — and that they would never buy one. Would you still plan to buy it, or would you switch your plans?
If you think about this question logically, the information that you just got from your friend isn’t really all that important. You now know the opinion of one more person, but that can’t change the overall rating of the two devices very much. On the other hand, the information your friend gives you, and the chance to use their device, are highly salient.
The information is right there in front of you, in your hand, whereas the statistical information from Consumer Reports is only in the form of a table that you saw on your computer. The outcome in cases such as this is that people frequently ignore the less salient (more important) information, such as the likelihood that events occur across a large population (these statistics are known as base rates), in favour of the less important but nevertheless more salient information.
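The neglected base rate can be made concrete with a toy calculation (all numbers invented): pooling one friend's experience with a large owner survey barely changes the estimated failure rate, even though the single vivid report often dominates the decision.

```python
def pooled_failure_rate(base_failures, base_total, new_failures, new_total):
    """Combine a large-sample base rate with a small new sample by pooling counts."""
    return (base_failures + new_failures) / (base_total + new_total)

# A survey-style base rate: 50 failures reported among 1,000 owners (5%).
survey_only = 50 / 1000
# Adding one friend's single bad experience barely moves the estimate...
with_friend = pooled_failure_rate(50, 1000, 1, 1)
# ...yet the salient report frequently outweighs the statistics in our minds.
```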
People also vary in the schemas that they find important to use when judging others and when thinking about themselves. Cognitive accessibility refers to the extent to which knowledge is activated in memory, and thus likely to be used in cognition and behaviour. For instance, you probably know a person who is a golf nut (or fanatic of another sport). All they can talk about is golf.
For them, we would say that golf is a highly accessible construct. Because they love golf, it is important to their self-concept, they set many of their goals in terms of the sport, and they tend to think about things and people in terms of it (“if they play golf, they must be a good person!”).
Other people have highly accessible schemas about environmental issues, eating healthy food, or drinking really good coffee. When schemas are highly accessible, we are likely to use them to make judgments of ourselves and others, and this overuse may inappropriately colour our judgments.
Authors sue Gold Mountain Blues writer for copyright infringement. (2011, October 28). CBC News. http://www.cbc.ca/news/arts/authors-sue-gold-mountain-blues-writer-for-copyright-infringement-1.1024879.
Bordalo, P., Gennaioli, N., & Shleifer, A. (2013, October 5). Salience and Consumer Choice. Journal of Political Economy, 121: 803-843. https://dash.harvard.edu/handle/1/27814563.
Burke, R., Srull, R., & Thomas, K. (1988, June). Competitive Interference and Consumer Memory for Advertising. Journal of Consumer Research, 15, 55–68.
Darley, J. M., & Gross, P. H. (1983). A hypothesis-confirming bias in labeling effects. Journal of Personality and Social Psychology, 44, 20–33.
Fashion with a conscience: Why sustainable fashion is the next retail frontier. (2017). WGSN [Blog post]. https://www.wgsn.com/blogs/fashion-with-conscience-why-sustainable-fashion-is-thenext-retail-frontier/.
Franke, N., Schreier, M. & Kaiser, U. (2010). The “I designed it myself” effect in mass customization. Management Science, 56(1):125–140. https://doi.org/10.1287/mnsc.1090.1077.
Geiling, N. (2013, November 8). It’s the Umami, Stupid. Why the Truth About MSG is So Easy to Swallow. Smithsonian Magazine. https://www.smithsonianmag.com/arts-culture/its-the-umami-stupid-why-the-truth-about-msg-is-so-easy-to-swallow-180947626/.
Jacoby, L. L., & Rhodes, M. G. (2006). False remembering in the aged. Current Directions in Psychological Science, 15(2), 49–53.
Kwok, R.H.M. (1968, April 4). Chinese-Restaurant Syndrome. New England Journal of Medicine, 796.
Loftus, E. F., Loftus, G. R., & Messo, J. (1987). Some facts about “weapon focus.” Law and Human Behaviour, 11(1), 55–62.
McArthur, L. Z., & Post, D. L. (1977). Figural emphasis and person perception. Journal of Experimental Social Psychology, 13(6), 520–535.
Meyers-Levy, J. (1989, September). The Influence of Brand Name’s Association Set Size and Word Frequency on Brand Memory. Journal of Consumer Research, 16, 197–208.
Mosby, I. (2009, February 2). ‘That Won-Ton Soup Headache’: The Chinese Restaurant Syndrome, MSG and the Making of American Food, 1968-1980. Social History of Medicine, 22 (1), 133-151. https://doi.org/10.1093/shm/hkn098.
Mosby, I. (2012, December 7). Revisiting the ‘Chinese Restaurant Syndrome‘ [Blog post]. http://www.ianmosby.ca/revisiting-the-chinese-restaurant-syndrome/.
Norton, M. I., Mochon, D., & Ariely, D. (2012). The IKEA effect: when labor leads to love. Journal of Consumer Psychology, 22(3), 453–460. https://doi.org/10.1016/j.jcps.2011.08.002.
Pratkanis, A. R., Greenwald, A. G., Leippe, M. R., & Baumgardner, M. H. (1988). In search of reliable persuasion effects: III. The sleeper effect is dead: Long live the sleeper effect. Journal of Personality and Social Psychology, 54(2), 203–218.
Questions and Answers on Monosodium Glutamate (MSG). (2012, November 19). U.S. Food and Drug Administration. https://www.fda.gov/food/food-additives-petitions/questions-and-answers-monosodium-glutamate-msg.
Rassin, E., Merckelbach, H., & Spaan, V. (2001). When dreams become a royal road to confusion: Realistic dreams, dissociation, and fantasy proneness. Journal of Nervous and Mental Disease, 189(7), 478–481.
Stangor, C., & McMillan, D. (1992). Memory for expectancy-congruent and expectancy-incongruent information: A review of the social and social developmental literatures. Psychological Bulletin, 111(1), 42–61.
Trope, Y., & Thompson, E. (1997). Looking for truth in all the wrong places? Asymmetric search of individuating information about stereotyped group members. Journal of Personality and Social Psychology, 73, 229–241.
Winograd, E., Peluso, J. P., & Glover, T. A. (1998). Individual differences in susceptibility to memory illusions. Applied Cognitive Psychology, 12 (Spec. Issue), S5–S27. | <urn:uuid:ef605c00-91e4-4d2b-8df0-2152e95bb791> | CC-MAIN-2022-40 | https://debuglies.com/2021/11/21/memory-errors-may-indicate-a-way-in-which-the-human-cognitive-system-is-optimally-running/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00703.warc.gz | en | 0.948277 | 7,115 | 3.15625 | 3 |
UPDATE: Graham recently contributed to this article in The Economist on Post-Quantum Cryptography.
Computers that exploit quantum mechanical properties offer the promise of (supposedly) unbreakable cryptography and other exciting applications, but they will also cause a huge, immediate problem: the day a large, practical quantum computer is developed, all existing widely-used asymmetric cryptography will be broken.
This will have serious consequences: massive amounts of secret information exchanged over the internet under public-key or hybrid cryptography could be revealed. Computers will no longer be able to ensure the updates they are downloading and installing are legitimate, since code signing will be broken. The owner of the first powerful quantum computer (which will probably be a large state organisation, who will keep it a secret) could have the power to take over almost every computer and mobile phone connected to the Internet.
Even though nobody knows when this will occur, it makes sense to start preparing. The solution is not quantum cryptography, but post-quantum cryptography: algorithms that run on classical computers now, and resist attack by future quantum computers. The trouble is, they don't exist yet, or at least not in a practical form. NIST has launched a competition to find and standardise practical quantum-resistant algorithms. The first round of the competition closed at a conference this April. Researchers around the world are racing to break and/or improve the submissions. You can see them all here - there is plenty of exotic and infrequently used mathematics being employed, like supersingular isogenies, lattices and quasi-cyclic codes. In the end, unlike for previous NIST competitions such as that which fixed the standard algorithm for symmetric cryptography (the AES standard), there will be more than one winner. That's because there are going to be some painful trade-offs around performance, size of key, size of ciphertext or signature, etc. that will mean difficult decisions for future application developers.
Meanwhile, large companies are already starting their post-quantum crypto migration, without waiting for the results. This is because the job is so huge: imagine you're a big software company with more than 5000 applications. Every one of these applications employs cryptography in multiple ways, in some cases hidden in legacy libraries and components without up-to-date documentation. Finding each use, deciding what to replace it with and making the changes to the application is a huge undertaking. A good first step is to complete your cryptography inventory, finding out what is used and where, something which Cryptosense Analyzer can help you do. | <urn:uuid:1c6f5d55-f2bd-4ef4-a7c2-29ddd80f0257> | CC-MAIN-2022-40 | https://cryptosense.com/blog/post-quantum-crypto-how-to-start-preparing | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00703.warc.gz | en | 0.937492 | 525 | 2.59375 | 3 |
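As a first pass at such an inventory, one could simply scan source trees for names of asymmetric algorithms that a quantum computer would break. The sketch below is only illustrative: the pattern list is far from complete, and a real inventory tool would need to go far deeper than text matching.

```python
import re
from pathlib import Path

# Hypothetical, non-exhaustive list of asymmetric-algorithm names worth flagging.
CRYPTO_PATTERNS = re.compile(
    r"\b(RSA|ECDSA|ECDH|DSA|X25519|Ed25519|Diffie-Hellman)\b", re.IGNORECASE
)

def crypto_inventory(root):
    """Return {file: sorted algorithm names found} for Python files under root."""
    findings = {}
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        hits = {m.group(1).upper() for m in CRYPTO_PATTERNS.finditer(text)}
        if hits:
            findings[str(path)] = sorted(hits)
    return findings
```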
The First Thinking Sculpture: Inspired by Gaudi, created with Watson
For Mobile World Congress 2017, architectural designers worked with Watson to create something they’ve never done before. The result is The First Thinking Sculpture. It’s the first sculpture that helped pick its own materials, shapes and colors. A sculpture that’s quite literally shaped by the emotions of the moment.
Figure 1: The Living Sculpture Installation
Creativity is the path to discovering new things
Creativity means always wanting to discover something new. Gaudi understood that the architecture of the past was outdated architecture – that the city, modernity and new ways of life demand something different. Watson is the natural extension of Gaudi’s thinking, and IBM’s proposal of The First Thinking Sculpture takes the thinking of Antoni Gaudí to the maximum, fully understanding and integrating his philosophy.
Michael Szivos, Cloud Architect and Designer at SOFTlab said: ‘What made us so excited about producing an installation that was not only inspired by Gaudi, but would actually be in Barcelona, is the idea of continuing his legacy in a way that is based around the technology we have today.’
Figure 2: Taking the thinking of Gaudi to the maximum
A unique installation that embodies meaning and insight
By feeding Watson hundreds of images, song lyrics, and articles about Gaudi and Barcelona, we were able to choose many different unique shapes, colors and materials. For example, the triangle shape is something that appears in one of Gaudi’s first commissioned projects: Casa Vicens (1888). In his design, Gaudi paid particular attention to the corners of the building, which were ridged in order to avoid the austere appearance of classical architecture. As a result, Casa Vicens appears as a unique oasis of calm with an Oriental and Moorish flavour.
‘Watson is a cognitive system – a collection of technologies that focus on trying to extract meaning and insight out of unstructured forms of data. We looked at using Visual Recognition and Alchemy Language to identify trends in the architecture that the architects could use as part of their creative process.’ – Jeff Arn, Manager, IBM Watson
Standing out against the existing architecture of the time
Overall, Gaudi took inspiration from nature – his use of color and shape stood out against the existing architecture of the time. There are forms like trees, turtle shells, the use of light – all of it represents the lessons of nature transported and translated into architecture.
According to Michael Szivos, Cloud Architect and Designer, SOFTlab, the installation was designed in the same way Gaudi turned to natural forms and structures, incorporating how they’re made and how they’re built; essentially using natural forces like gravity to engineer a shape.
Watson and Gaudi: a unique partnership that transcends time
In many ways, Gaudi was capturing the mood in the city. In the same way, the reactive nature of the installation is also capturing the mood, but in a way that makes it easy for people to engage with it. Gaudi’s work exists in Barcelona.
The Living Sculpture installation exists on two levels – embodying the beauty and exoticism of Gaudi’s work, while infusing it with present day life, which transforms it into a living, organic, changing, thinking thing. The installation is the first sculpture to keep Gaudí’s legacy alive with data.
‘The history of mankind is to step forward. Watson is a step forward. It’s a revolution.’ – Professor Daniel Giralt-Miracle, Art Historian.
The sculpture reacts in real time
The objective of the project was to ensure Watson was an integrated part of the sculpture itself. As Szivos explains, ‘by taking what people around the world and at the event are saying, and using the Tone Analyzer API, we are able to extract certain personality traits and use the shifts in those traits to actually manipulate the sculpture itself.’
At the IBM Booth, where the installation is suspended from the ceiling, there is also a touch screen experience which lets event attendees interact with the installation, providing them with a deeper insight into how the technology works.
Figure 3: A Touch Screen Experience
As an interactive sculpture, the installation embodies movement, changing based on real-time data analysis and input. Using lights, rings and shapes, the sculpture interprets the sentiment of particular topics being discussed during the MWC 2017 event – topics such as Artificial Intelligence (AI), Internet of Things (IoT), and Security. In real time, Watson analyses Twitter hashtags and feeds using the Tone Analyzer API.
Figure 4: Discover the meaning behind the data
The rings of the sculpture represent three topics of conversation: AI, IoT and Security. Watson analyzes the levels of optimism and openness around these topics in real time and feeds that information to the sculpture. Depending on the sentiment associated with the conversations, the rings move up or down: a raised ring indicates that the prevailing sentiment around its topic is negative, while a lowered ring indicates high optimism and openness.
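The mapping from tone scores to ring movement might look something like the sketch below. The score scale, the combination rule, and the travel distance are all invented stand-ins, since the installation's actual mapping isn't documented here.

```python
def ring_position(optimism, openness, travel_cm=50):
    """Map tone scores in [0, 1] to a vertical offset in cm:
    positive sentiment lowers the ring, negative sentiment raises it."""
    sentiment = (optimism + openness) / 2           # combined score in [0, 1]
    return round((0.5 - sentiment) * 2 * travel_cm, 1)

# An optimistic, open conversation about a topic lowers its ring...
low = ring_position(optimism=0.9, openness=0.8)     # 35 cm below neutral
# ...while a pessimistic, closed one raises it.
high = ring_position(optimism=0.2, openness=0.1)    # 35 cm above neutral
```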
Figure 5: A look at how IoT is trending
We’ll be on site at the Mobile World Congress this week and sharing our findings on the blog. You can bookmark our Mobile World Congress 2017 landing page or visit the MWC2017 website to find out more about the event.
Learn more about our cognitive solutions by visiting the IoT website, or speak to a representative to find out how we can help you take the next step in bringing the IoT to your business. | <urn:uuid:631ae8be-06f7-4f2c-b34e-da08315d1c10> | CC-MAIN-2022-40 | https://www.ibm.com/blogs/internet-of-things/first-thinking-sculpture/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00703.warc.gz | en | 0.944332 | 1,185 | 2.640625 | 3 |
New Training: Administer Group Policy
In this 6-video skill, CBT Nuggets trainer Garth Schulte covers the purpose of Group Policy and shows you how to configure, deploy, and manage it in a Windows Server environment. Learn how to create a Group Policy Object (GPO), how to troubleshoot a GPO, and how to assign a GPO to an Organizational Unit (OU). Watch this new Microsoft Windows Server training.
Watch the full course: Microsoft Server 2019 Essentials
This training includes:
47 minutes of training
You’ll learn these topics in this skill:
Introduction to Administer Group Policy
Group Policy Overview
Configuring Group Policy Objects
Deploying and Troubleshooting Group Policy Objects
Managing Group Policy Objects
Group Policy Best Practices
Why Group Policy is the Ultimate Security Tool
Group Policy can be used for many things, but Microsoft primarily designed it for network administrators to use as a security tool. Rather than having to individually configure each user profile and computer, net admins can define a series of security settings for each and then apply them as needed.
Policies, which are a collection of individual Group Policy settings, are known as Group Policy Objects, or GPOs. GPOs are managed from the Group Policy Management Console, a central interface that further simplifies network admin responsibilities. In addition to this graphical user interface, Group Policy can be managed from the command line interface.
GPOs are applied in a hierarchical fashion: local policies are processed first, followed by those linked at the site, domain, and organizational unit (OU) levels, with settings applied later taking precedence over earlier ones. Some of the simpler applications of Group Policy include enforcing minimum password lengths, forcefully installing security patches, and hiding Windows Control Panel from users to prevent them from adjusting any settings on their PCs.
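That precedence order can be illustrated with a simple merge, where each later scope overrides the earlier ones. This is only a toy model (real deployments add wrinkles such as Enforced links and Block Inheritance), and the setting names are invented.

```python
def effective_policy(local, site, domain, ou):
    """Merge GPO settings in LSDOU order (local, site, domain, OU);
    settings applied later override earlier ones."""
    merged = {}
    for scope in (local, site, domain, ou):
        merged.update(scope)
    return merged

policy = effective_policy(
    local={"min_password_length": 8},
    site={},
    domain={"min_password_length": 12, "hide_control_panel": True},
    ou={"min_password_length": 14},   # the OU-level setting wins
)
```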
Artificial Intelligence poses an ambiguous question for society: is it “a bane” or “a boon”? Some of us might believe that in the coming years many existing jobs will become automated, resulting in escalating unemployment. However, this is a myth, like other AI misconceptions. (Read here the top 5 myths about Artificial Intelligence)
Now, the questions that arise are: “Will we reach a stage where human beings are eliminated from political decision making?” and “Should human decision making be rejected and replaced with data-driven decision making, powered by artificial intelligence?” The answer might be yes, or it might be no; let’s learn through this blog.
AI in Politics
We have seen many politicians studying people’s perspectives by employing AI techniques and then modifying their views accordingly to gain more supporters. Barack Obama’s 2012 presidential campaign, Narendra Modi’s in 2014, and Donald Trump’s in 2016 are all prime examples of how artificial intelligence can be a successful tool in politics.
In the current situation, there is growing interest in both the applications of AI technology as well as its potential threats to individuals or society. Policymakers have to choose a method to govern a wide variety of AI technologies and applications which will have a dramatic impact on society and create lots of opportunities and benefits in general. According to this research paper, policymakers generally have two main approaches which are listed below :
Preemptive approach - In this approach, policymakers ban certain applications that pose potential threats to society; this is also known as the precautionary approach.
Permissionless innovation - In this approach, they can prioritize experimentation and collaboration as a default measure, while addressing issues as they arise.
Can policymakers adopt data-driven policies?
Artificial intelligence could be one of the most powerful tools for policymakers pursuing a data-driven policy approach: with machine learning and predictive analytics techniques, it can provide a precise picture of what the country needs and how its problems could be solved.
Let’s take an example: suppose that with AI we can predict certain vulnerabilities in the economy that could lead to more unemployment; we could then address them beforehand.
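A minimal version of such an early-warning signal is just a trend check over recent indicator readings; the monthly figures and the threshold below are made up for illustration.

```python
def rising_trend(values, window=3, threshold=0.5):
    """Flag a risk when the average rise across the last `window` changes
    exceeds `threshold` percentage points per period."""
    recent = values[-(window + 1):]
    deltas = [b - a for a, b in zip(recent, recent[1:])]
    return sum(deltas) / len(deltas) > threshold

# Hypothetical monthly unemployment rates (%).
stable = [5.1, 5.0, 5.2, 5.1, 5.0]    # no sustained rise: no flag
rising = [5.0, 5.6, 6.3, 7.1, 8.0]    # sustained climb: flag raised
```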
“By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.” ― Eliezer Yudkowsky”
Conventionally, decision making was heavily influenced by many factors apart from evidence, such as personal experience, instinct, hype, and belief.
But today, technological advancements make it possible to analyze large amounts of data within seconds, enabling decision-makers to cut through these distortions.
Data-driven policies will empower the government to be more responsive towards its citizens, resulting in a better democratic system.
For another example of data-driven techniques impacting the sports industry, read our blog on the Role of Business Intelligence in the Sports Industry, where we discussed how Billy Beane employed statistical analysis tools to compile one of the best winning baseball teams while maintaining a minimum payroll. (Willing to learn about some business intelligence tools and techniques in 2020? Click here.)
What are the benefits of AI in politics?
It can detect any corruption made in the system instantly.
With AI-enabled systems, every candidate will be analyzed by his/her past work experience, records, behaviour, leadership skills which can provide the audience with a better idea before voting for any candidate. ( Similar approach of AI-driven system in Digital marketing, grab here)
AI can improve a system's productivity while analyzing the loopholes in the system.
If AI is implemented by independent organizations, we can remove all the fake news, false agendas within minutes, which would be highly beneficial considering the current scenario. (Related blog: Detection of Fake and False News by CNN and deep models)
AI has the potential to reduce the cost of any political campaigns.
Many countries celebrate advancements in robotics, computation, and AI-driven automation, expecting them to boost their economies. But a recent survey named European Tech Insights 2019 highlights a different phenomenon in European thinking: it reveals that people do not want machines to perform most of the tasks currently done by humans.
Data Source: European Tech Insights 2019
These percentages also shed light on some other insights like one in every four European citizens would prefer AI to take all important decisions while running the country.
How AI is impacting the public sector?
Many private companies are using emerging technologies such as artificial intelligence, robotics and cloud computing to improve their efficiency and productivity. There are many areas where artificial intelligence applications can be used in the public sector domain. Artificial intelligence services can make any Government faster and tailored.
Benefits of AI services in different public sectors
1. In the Education Sector
A public school teacher grading system is a small example of wide-ranging benefits where we can use an automatic grading system, using artificial intelligence, learning from previous answers, and getting better as it goes.
For example, you have heard about the famous education app “BYJU that is leveraging the AI for providing a better education interface.
Many universities and MOOCs platforms are using this technology to grade thousands of students and only some of them need a bit of human oversight. AI would help teachers to plan new ways of teaching for struggling students, create extra courses, read more, or simply get more time to spend on extracurricular activities.
Speaking of Education you can also check out our blog on Major applications of AI in the education sector here.
2. In the Health Sector
In our recent blog we have elaborated AI applications in healthcare like Support clinical decisions, Enhance primary care, Robotic surgeries, and Virtual nursing assistance, etc.
The government recently launched an app named “Aarogya Setu” which leverages AI to trace all the close contacts or whoever has been in contact with a COVID19 positive patient. It notifies the user if they were or are in close contact with any COVID positive patient. You can learn more about this topic in our blog on how ai is being used in the fight of covid19?
3. In the Banking Sector
AI can assist in many banking problems such as fraud detection, credit risk assessment, reduction of costs as well as risk management. Alongside these aspects, the sector is also leveraging AI for battling with frauds and hacks while simultaneously abiding with KYC and AML compliance regulations. Read more about how Artificial Intelligence is being leveraged by the banking sector.
Apart from these applications, public servants can leverage this technology to make welfare decisions, immigration decisions, fraud detection, new infrastructure projects, traffic control, citizens query solving platform, adjudicate bail applications, triage healthcare cases, etc.
What should the Government do to improve its efficiency?
The government can decide which tasks will be handed to machines and which require human intervention but by employing AI applications we can save huge amounts of labour time to use them efficiently in other things.
Using AI techniques, government agencies can become more efficient and provide public servants with more quality work to increase their job satisfaction level.
We could save the time spent on carrying out repetitive tasks and use that in doing more creative and productive work.
Using AI techniques, the Government could let go of half of the staff and save taxpayers money or instead re-employ staff and upskill them for better functioning the government.
The Government can offer more jobs in those areas which are much more rewarding and require lateral thinking, creativity, and empathy, skills that cannot be replaced by any artificial intelligence application.
For instance, if any citizen wants more time from their government and requires extra coaching, guidance then the government can use these public servants.
In conclusion, various countries have already begun experimenting with AI technologies in the public sector, many instances of which have been highlighted in the blog.
I am of the belief that there is a grave need to set limits to some extent to the employment of machines and develop a balance between the application of human power and machine power. (Read a combined view “Emotional Artificial intelligence”)
Machines absorb their knowledge through the data fed by humans, making human intelligence irreplaceable in the development of both machines and society. Follow our social media pages on Facebook, Twitter, and LinkedIn. | <urn:uuid:b8f56993-9d76-4ad8-b7b8-1af2b1f0463e> | CC-MAIN-2022-40 | https://www.analyticssteps.com/blogs/how-artificial-intelligence-ai-can-be-used-politics-government | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00703.warc.gz | en | 0.944538 | 1,719 | 2.78125 | 3 |
What is the role of artificial intelligence in manufacturing? Well, there are a lot of use cases for artificial intelligence in everyday life, but what about AI in manufacturing? The effects of artificial intelligence in business heavily include manufacturing.
Are you scared of AI jargon? We have already created a detailed AI glossary for the most commonly used artificial intelligence terms and explained the basics of artificial intelligence as well as the risks and benefits of artificial intelligence for organizations and others. So, it’s time to explore the relationship between artificial intelligence and manufacturing.
Table of Contents
What is artificial intelligence in manufacturing?
Artificial intelligence (AI) in manufacturing refers to a machine’s ability to think like a human, respond independently to internal and external events, and anticipate future occurrences. When a tool wears out or something unexpected—or perhaps even something unexpected—happens, the robots can recognize it and take action to fix the issue.
Artificial intelligence in manufacturing entails automating difficult operations and spotting hidden patterns in workflows or production processes.
Manufacturers now have the unmatched potential to boost throughput, manage their supply chain, and quicken research and development thanks to AI and machine learning.
According to The AspenTech Industrial AI Research, only 20% of large industrial organizations have adopted AI, despite 83% believing it generates superior results.
For artificial intelligence to be successfully implemented in manufacturing, domain expertise is crucial. Because of that, artificial intelligence careers are hot and on the rise, along with data architects, cloud computing jobs, data engineer jobs, and machine learning engineers.
Artificial intelligence (AI) has the potential to transform the manufacturing industry. The potential advantages include enhanced quality, decreased downtime, lower costs, and higher efficiency. This technology is accessible to smaller firms as well. AI solutions with high value and low cost are more available than many smaller manufacturers believe. So, let’s take a closer look at them.
Impact of AI on the manufacturing industry
Markets and Markets estimates that by 2027, artificial intelligence in the manufacturing market will be worth USD 16.3 billion, increasing at a CAGR of 47.9% from 2022 to 2027. The market is currently valued at USD 2.3 billion. Did the precursors of artificial intelligence dream of it?
Capgemini’s research demonstrates how the most common AI application cases in manufacturing are progressing:
- Maintenance (29% of manufacturing AI use cases)
- Quality (27%)
Manufacturing data’s prominence is fueled by AI and machine learning work well with it. Machines can more easily analyze the analytical data that is abundant in manufacturing. Hundreds of variables impact the production process, and while these are challenging for humans to examine, machine learning models can forecast the effects of individual variables in these challenging circumstances.
Check out the 15 real-life examples of machine learning
AI’s impact on manufacturing is revolutionary. Danone Group, a French food business, employs machine learning to increase the precision of its demand forecasts. This resulted in:
- 20% decrease in forecasting errors.
- 30% decrease in lost sales.
- 50% reduction in demand planners’ workload.
Due to these statistics, have you begun to wonder about all the advantages of artificial intelligence in manufacturing? We’ve gathered them all for you, so don’t worry.
Benefits of AI in manufacturing
What are the main benefits AI brings to the manufacturing industry? Industrial manufacturing has the greatest rate of AI functionality among all industries, with 93% of business leaders reporting that it is at least fairly functional in their company. The reasons are:
- Better quality
- Quick decision making
- Lower operational costs
- Preventative maintenance
- Enhanced production designs
- 24/7 production
- Quicker adaptation to the market changes
Check out the effect of artificial intelligence in developing countries
So, let’s explore them.
Quality assurance may be the main benefit of artificial intelligence in manufacturing. Businesses can employ machine learning models to spot deviations from typical design criteria, flaws, or consistency issues that a normal person might miss.
Machine learning techniques improve product quality while reducing costs and time spent on quality assurance.
Quick decision making
When IIoT is linked with cloud computing and virtual or augmented reality, businesses can communicate about industrial activities, share simulations, and send vital or relevant information in real-time, independent of location.
Sensor and beacon data helps businesses estimate future demand, make quick manufacturing decisions and speed up communication between manufacturers and suppliers. It also helps organizations understand customer behavior.
Lower operational costs
Given the significant capital commitment required, many businesses are wary of applying AI to the manufacturing sector. However, the ROI is substantial and gets better over time. Businesses will profit from significantly lower operating expenses as intelligent machines take over a factory floor’s everyday tasks, and predictive maintenance will also help decrease machine downtime.
Consumers anticipate the best value while growing their need for distinctive, customized, or personalized products. It is becoming easier and less expensive to address these needs thanks to technological advancements like 3D printing and IIoT-connected devices. Adopting virtual or augmented reality design approaches implies that the production process will be more affordable.
Systems can be created and tested in a virtual model before being put into production, thanks to machine learning and CAD integration, which lowers the cost of manual machine testing.
Preventive maintenance is another benefit of artificial intelligence in manufacturing. You may spot problems before they arise and ensure that production won’t have to stop due to equipment failure when the AI platform can predict which components need to be updated before an outage occurs.
Enhanced production designs
AI is bringing about positive changes in product design.
One approach, for instance, is for engineers and designers to create a brief fed into an AI system.
Data from the brief might include limitations and guidelines for the kinds of materials that can be used, production techniques that can be used, time restraints, and financial restrictions.
The program would then investigate every scenario before presenting a list of the top options. Testing those solutions with machine learning can determine the most effective approach.
People are flawed and prone to error, especially if they are fatigued or preoccupied. On the factory floor and in any building or processing setting, errors and accidents do happen, but AI and robotic aid can all but eliminate this propensity.
When the work is hazardous or demands superhuman effort, the remote access control reduces human resources. Even routine working conditions will reduce the frequency of industrial accidents and increase safety overall. A simpler and more efficient way to preserve human lives is to create safety guards and barriers thanks to increasingly sophisticated sensory equipment coupled with IIoT devices.
Because we are biological beings, humans require regular upkeep, like food and rest. Any production plant must implement shifts, using three human workers for each 24-hour period, to continue operating around the clock.
AI-powered robots can operate on the production line around the clock and don’t get hungry or fatigued. This makes it possible to increase production capacity, which is increasingly important to satisfy the demands of clients worldwide.
Additionally, robots are more effective in many areas, including the assembly line, the picking and packing departments, and many other areas. Several aspects of the business operation can significantly shorten turnaround times.
Quicker adaptation to the market changes
AI applications in manufacturing go beyond just boosting production and design processes. Additionally, it can spot market shifts and improve manufacturing supply chains.
A manufacturing company can then transition from a responsive attitude to a strategic mindset, which gives it a significant edge.
By seeing connections between variables like location, political status, socioeconomic and macroeconomic factors, and consumer behavior, AI systems create projections about market demands.
When equipped with such data, manufacturing businesses can far more effectively optimize things like inventory control, workforce, the availability of raw materials, and energy consumption.
Is artificial intelligence better than human intelligence? Before you decide, let’s analyze the disadvantages of artificial intelligence in manufacturing.
Disadvantages of AI in manufacturing
Like everything else in the world, artificial intelligence in manufacturing has some disadvantages like:
- High implementation costs
- Lack of skillful experts
Let’s examine them carefully before you decide.
High implementation costs
Although implementing AI in the industrial industry can reduce labor costs, doing so can be quite expensive, especially in startups and small businesses. Initial expenditures will include continuous maintenance and charges to defend systems against assaults because maintaining cybersecurity is equally crucial.
Lack of skillful experts
There aren’t many AI specialists with the necessary expertise because the subject is still developing. Considering expert availability is critical because this toolset frequently requires sophisticated programming.
Additionally, because of their high demand, the cost of hiring is quite high too.
Check out the best master’s in artificial intelligence
Cybercriminals will try to develop new hacking techniques as AI gets more advanced and prevalent since it is susceptible to cyberattacks.
Do you know how employees ignore cybersecurity training? Even a tiny gap can disrupt the production line. In fact, even a little breach could force the closure of an entire manufacturing company. Therefore, staying current on security measures and being mindful of the possibility of costly cyberattacks is important.
Check out the cybersecurity best practices in 2022
Artificial intelligence in manufacturing use cases
Manufacturing companies must change to a more data-driven business strategy to remain competitive. This frequently involves personnel reorganization, hardware, and software updates related to AI like:
- Predictive maintenance
- Supply chain optimization
- Generative design
- Production optimization
- Price forecasting
- Predictive yield
- Energy management
- Quality assurance
- Inventory management
- Process optimization
- Creating digital twins
Artificial intelligence is already a reality and can be used in your factory immediately, like in the scenarios explained below. So, what are the common AI use cases in manufacturing?
Manufacturers use AI technology to spot potential downtime and mishaps by examining sensor data. Manufacturers can schedule maintenance and repairs before functional equipment fails by using AI algorithms to estimate when or if it will malfunction.
Manufacturers may increase productivity while lowering the cost of equipment failure with the help of AI-powered predictive maintenance. It is one of the most important use cases of artificial intelligence in manufacturing.
Supply chain optimization
Managing today’s supply chains, which have thousands of parts and locations, is extremely difficult. AI is quickly becoming a required technology to deliver items from manufacturing to customers quickly.
Manufacturers can specify each product’s optimal supply chain solution using machine learning techniques. It is now possible to answer questions like “How many resistors should be ordered for the upcoming quarter?” and “What’s the optimum shipping route for product A?” without making assumptions or using best guesses.
Managing internal inventories can be quite difficult. The production line primarily relies on inventory to keep the lines supplied and turning out items. Each process step needs a specific number of components to work; once used up, they must be replaced promptly to keep the process moving.
AI can help handle the difficulty of filling the production floor with the necessary inventory. AI can analyze component numbers, expiration dates, and factory floor distribution to make it more efficient.
Machine learning algorithms are used in generative design to simulate an engineer’s design method.
Design criteria (such as materials, size, weight, strength, manufacturing processes, and cost limits) are entered by designers or engineers into generative design software, which then generates every potential result. Manufacturers may swiftly create thousands of design choices for a single product using this technology.
Data-intensive tasks requiring innumerable historical data sets can be involved in process optimization. It is difficult to determine which process variables result in the best product quality. Numerous Designs of Experiments are often conducted by manufacturing and quality experts to optimize process parameters, but they are frequently expensive and time-consuming.
Engineers can discover the best process recipe for various items using the quick data-crunching speed of AI. Such as “What machine should I use for this high pitch emerging technology circuit board?” or “What conveyor speed or temperature should I input for the maximum yield?” AI will continuously enhance process parameters by learning from all production data points.
Industrial robots, often known as production robots, automate monotonous operations, eliminate or drastically reduce human error, and refocus human workers’ attention on more profitable parts of the business.
Assembly, welding, painting, product inspection, picking and putting, die casting, drilling, glass manufacturing, and grinding are a few applications.
Raw material price volatility has long been a problem for producers. Businesses must adjust to the unpredictable pricing of raw resources to remain competitive in the market. More correctly than humans, AI-powered software can anticipate the price of commodities and improve with time.
Conversations about yield prediction often come up when AI in manufacturing is brought up. A high accuracy prediction AI model has an unlimited return on investment.
Supply chain and inventory management can better prepare for future component needs by forecasting yield. Production managers can be warned to extend production time to meet demand if the yield is predicted to be lower than projected. Yield prediction will require AI to solve its vast data problem.
AI can assist in the frequently undervalued field of energy management. Most engineers lack the time necessary to evaluate the cost of plant energy use.
The cost of running a production process can greatly decrease by using AI to analyze energy usage. Additionally, lower costs allow more cash to be set aside for resources for process innovation, improving quality and production.
As most flaws are observable, AI systems can use machine vision technology to identify variations from the typical outputs. AI technologies warn users when a product’s quality is below expectations so they can take action and make corrections.
Since AI-powered machine learning systems can encourage inventory planning activities, they excel at handling demand forecasting and supply planning.
Compared to conventional demand forecasting techniques used by engineers in manufacturing facilities, AI-powered solutions produce more accurate findings. These solutions help organizations better control inventory levels, reducing the likelihood of cash-in-stock and out-of-stock situations.
Organizations can attain sustainable production levels by optimizing processes using AI-powered software. Manufacturers can select AI-powered process mining solutions to locate and eliminate process bottlenecks.
Creating digital twins
A virtual replica of a physical good or asset is called a “digital twin.” Manufacturers can increase their understanding of the product and enable organizations to experiment with future activities that might improve asset performance by combining AI techniques with digital twins.
Artificial intelligence in manufacturing industry examples
There are a lot of use cases of artificial intelligence in manufacturing. However, how are they used, and what are the effects of AI in manufacturing in real life?
Robotic employees are used by the Japanese automation manufacturer Fanuc to run its operations around the clock. The robots can manufacture crucial parts for CNCs and motors, continuously run all factory floor equipment, and enable continuous operation monitoring. It is a good example of artificial intelligence in manufacturing.
The BMW Group employs computerized image recognition for quality assurance, inspections, and eradicating phony problems (deviations from target despite no actual faults). They’ve succeeded in manufacturing with a great degree of precision.
Porsche is another business that has profited from AI in manufacturing. They automate a sizable component of the automotive manufacturing process using autonomous guided vehicles (AGVs). The plant is more resistant to disturbances like pandemics thanks to the AGVs’ ability to transport car body parts from one processing station to the next without requiring human intervention.
Future of AI in manufacturing
Industrial Revolution 4.0 is altering and redefining the manufacturing sector thanks to artificial intelligence (AI). AI has significantly aided the advancement of the manufacturing industry’s growth. You can explore the effect of artificial intelligence in Industry 4.0 with this article.
Businesses already utilize it to streamline operations, increase safety, help manual workers put their abilities to greater use elsewhere, and ultimately boost their bottom line.
Companies will be able to recognize issues before they occur, enhance their product assembly line, and employ computer vision-based techniques to help grow their business, adding to the benefits of AI in manufacturing, which currently include lower costs and saved time.
Even in the face of ongoing change, AI can significantly help keep your manufacturing business running. It offers predictive analytics that can assist manufacturers in making better choices. Artificial intelligence has many advantages, from product design to customer management. These include improving process quality, streamlined supply chain, adaptability, etc.
However, there are several drawbacks to AI technology. Including high costs and susceptibility to cyberattacks. But AI’s benefits outweigh these drawbacks. | <urn:uuid:85941f38-93c9-406b-851c-ce0f66c31ea5> | CC-MAIN-2022-40 | https://dataconomy.com/2022/08/artificial-intelligence-in-manufacturing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00703.warc.gz | en | 0.925079 | 3,543 | 2.90625 | 3 |
Although many people still associate AI with science-fiction films, AI technologies have been around for more than 50 years, and with each new development they become more commonplace in our daily lives. Artificial intelligence, or AI, is everywhere, and it seems to be part of pretty much everything we do, use, or buy nowadays. Even in cybersecurity, AI is becoming essential, as it is particularly well suited to finding patterns in huge amounts of data. Cyberattacks affect organizations, governments, and people everywhere, and these technologies have emerged as a much-needed tool for increasing the efficiency of cyber defenses. With AI blooming so fast, some questions also appear: Do we really understand what AI is and why it matters? What about machine learning? Has technology started reshaping the future? Whether you are new to AI or just want to clarify some ideas, this article will do the job.
How did it all start
Artificial intelligence has grown very popular in today's world, but to trace its roots we must travel back to the 20th century. The term artificial intelligence was coined in 1956 at a conference at Dartmouth College by John McCarthy, a computer and cognitive scientist, and early AI research in the 1950s explored topics like problem-solving and symbolic methods.
But the journey to understand whether machines can truly think began much earlier. In his seminal 1945 essay "As We May Think," Vannevar Bush proposed a system that amplifies people's own knowledge and understanding. Five years later, Alan Turing wrote a paper on the notion of machines being able to simulate human beings and do intelligent things, such as play chess. In the 1960s, the US Department of Defense took interest in this new field and began training computers to mimic basic human reasoning. The Defense Advanced Research Projects Agency (DARPA), for example, completed street-mapping projects in the 1970s and created intelligent personal assistants in 2003, long before Siri and Alexa became popular.
All this early work laid the groundwork for the automated reasoning and language processing we see in computers today, designed to complement and enhance human abilities. AI applications now appear in everyday scenarios such as financial services and fraud detection, retail purchase predictions, and online customer support. But what exactly falls under the term artificial intelligence?
What’s Artificial Intelligence?
Artificial intelligence, or AI, is a broad term for a range of techniques that allow machines to imitate human intelligence and behave like humans in tasks such as decision making, text processing, or translation. Put simply, AI is about making computers think and act like humans. For this to happen, machines are fed data from a wide range of sources that has been designed, collected, and chosen by humans. AI systems get smarter with each round of data processing, since each interaction lets the system test and measure solutions and develop expertise in the task it has been set to accomplish.
This technology is built by studying the patterns of the human brain and analyzing its cognitive processes. Rather than replacing human intelligence, artificial intelligence acts as a supporting tool. For many, though, this raises a more fundamental question: are humans intelligent enough to serve as the model?
Why is AI important?
Artificial intelligence offers a handful of benefits that help make our lives easier, safer, and more efficient. One of them is automation: AI lets humans hand off redundant, repetitive actions to a device that can perform them reliably. And for people with vision problems, the ability to carry out a task or give instructions to a device using only their voice is priceless.
Another important asset is accuracy. AI can be trained to be more accurate than humans, making it possible for computers to find, analyze, and categorize images without additional human programming. A clear example is the facial matching used at smart borders: AI can help decide whether to admit a traveler into a country and enhance the efficiency of the government agencies involved in these tasks.
In short, this not-so-new technology allows us to interact with devices, and with each other, in more convenient ways, serving as an extra pair of eyes for human management and supervision.
What is the difference between artificial intelligence, deep learning, and machine learning?
AI, deep learning, and machine learning are connected, but they are not synonyms: machine learning and deep learning are subsets of artificial intelligence.
Machine learning refers to a machine's ability to reason without being explicitly programmed: as the name suggests, machines learn by themselves, without human intervention, from the datasets they are given. Traditional devices are programmed with a fixed set of rules telling them how to carry out an action; machine learning gives devices the ability to continuously decide how to act based on the data they have taken in.
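To make that contrast concrete, here is a deliberately tiny sketch (not a real ML library, and the data values are invented for illustration): instead of a programmer hard-coding a cut-off rule, the program derives the rule from labeled examples.

```python
def learn_threshold(examples):
    """Learn a cut-off separating two labeled classes of numbers.

    examples: list of (value, label) pairs, where label is 0 or 1.
    Returns the midpoint between the largest class-0 value and the
    smallest class-1 value.
    """
    zeros = [v for v, label in examples if label == 0]
    ones = [v for v, label in examples if label == 1]
    return (max(zeros) + min(ones)) / 2

# Labeled "training data": small values belong to class 0, large to class 1.
data = [(1.0, 0), (2.0, 0), (3.0, 0), (7.0, 1), (8.0, 1), (9.0, 1)]
threshold = learn_threshold(data)

def predict(value):
    # The decision rule was learned from the data, not written by hand.
    return 1 if value > threshold else 0

print(threshold)     # 5.0
print(predict(4.0))  # 0
print(predict(6.5))  # 1
```

Feed the same program different examples and it produces a different rule, with no reprogramming, which is the essence of the definition above.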
Deep learning aims to imitate the way the human brain learns. In machine learning, humans tell an algorithm which features to learn; in deep learning, the algorithm extracts the features automatically. In other words, deep learning is a form of machine learning, but while classic machine learning algorithms have a relatively simple structure, deep learning is based on artificial neural networks.
Such a neural network needs less human intervention but requires far more data than traditional machine learning.
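The building block of such a network is the artificial neuron. The toy sketch below trains a single neuron on the logical OR function using the classic perceptron update rule; real deep learning stacks many such units into layers, but the core "adjust the weights from examples" loop looks like this (weights, learning rate, and epoch count here are illustrative choices, not fixed conventions):

```python
def step(x):
    """Activation function: fire (1) if the weighted input is non-negative."""
    return 1 if x >= 0 else 0

def train_neuron(samples, epochs=20, lr=0.1):
    """Perceptron learning rule: nudge weights toward reducing each error."""
    w1, w2, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (a, b), target in samples:
            out = step(w1 * a + w2 * b + bias)
            err = target - out
            w1 += lr * err * a
            w2 += lr * err * b
            bias += lr * err
    return w1, w2, bias

# Truth table for OR: the only "specification" the neuron ever sees.
or_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w1, w2, bias = train_neuron(or_data)

predictions = [step(w1 * a + w2 * b + bias) for (a, b), _ in or_data]
print(predictions)  # [0, 1, 1, 1], matching the OR truth table
```

Nobody told the neuron what OR means; it found weights that reproduce the examples, which is exactly the automatic feature extraction described above, in miniature.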
Why is Machine Learning so popular?
Machine learning is definitely not new, in case you were wondering. A machine learning model based on brain-cell interaction was proposed in 1949 by Donald Hebb in his book The Organization of Behavior, which presents his theories on neuron communication. From the late 1970s onward, machine learning evolved on its own and became an important tool for cloud computing, e-commerce, and a variety of cutting-edge technologies. Recent progress in processing power has made it possible to apply machine learning on many more devices: smartphones and tablets now have capacity not seen before, and the more deep learning is applied, the more we learn from it and the wider its uses become. Voice assistants such as Alexa and Google Assistant were made possible this way, and the list does not stop there: AI is present in many more places.
What about AI uses?
AI powers a great many applications. One of this technology's strengths is its ability to go through large amounts of data and identify patterns that can become solutions, a job that would take humans far longer. It also allows machines to be smarter, that is, to simplify processes so they can be carried out more intuitively, faster, and more easily, making human life less complicated and more enjoyable. Interacting with a phone using just your voice is AI. Unlocking an app with a camera that identifies your face is also AI. And that is not all.
Artificial intelligence can also be an important ally in cybersecurity. AI systems can not only recognize cyber threats but also prevent them from happening. Guacamole ID, for example, is a zero-friction, passwordless identity-authentication app that ensures the right person is always behind the device. For a remote workforce, it works as an MFA that can also send signals so administrators can react remotely to third-party vulnerabilities. If a potential threat is detected, the app not only locks the screen but also records footage of the incident, encrypts it, and sends it to the administrator.
What about privacy in an AI-driven world?
In this information era, people have changed the way they interact with technology and devices. Today a simple smartphone can collect and transmit data over high-speed global networks, store it in huge data centers, and analyze it. From buying food to Zoom meetings and telemedicine, everything takes place online. But with the rise of these online resources, breaches, fraud and even identity theft have risen too. As working from home became the new norm, employees became prime targets for hackers, opening many doors to data theft. According to an IBM security report, data breaches during the pandemic cost companies an average of $4.24 million per incident – the highest figure in the report's 17-year history.
Furthermore, as this internet-based world advances, privacy has become a significant issue for small and large companies and even governments. Cyberattacks are increasing rapidly, affecting thousands of organizations and millions of people around the world, and this won't stop unless enterprises start acting. According to data gathered by Anomali and The Harris Poll in 2019, one in five Americans had been the victim of a ransomware attack. In discussions of privacy, AI is often singled out as part of the problem, but the fact is that AI can also offer solutions to these privacy issues.
One recent branch of AI research, adversarial learning, seeks to make these technologies less susceptible to such attacks. A related, privacy-preserving approach is federated learning, which keeps data stored on individual devices: Google uses it in its Gboard smart keyboard to predict which word you will type next. Federated learning builds the final deep neural network from data stored on many different devices rather than in a single data repository, so its paramount benefit is that the original data never leaves the local devices.
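The federated idea can be sketched in a few lines of Python. This is a toy illustration of the averaging step only: the "training" here is a simple nudge toward each device's local mean rather than a real neural network, and none of the names correspond to Google's actual implementation.

```python
# Toy sketch of federated learning: each device updates the model on its
# own private data, and only the model weights are shared and averaged.
# The raw data never leaves the device.

def local_update(weights, local_data, lr=0.1):
    """One device's training step on its private data; returns weights only."""
    # Toy 'training': nudge every weight toward the mean of the local data.
    target = sum(local_data) / len(local_data)
    return [w + lr * (target - w) for w in weights]

def federated_average(weight_sets):
    """Server step: average the weight updates contributed by each device."""
    n = len(weight_sets)
    return [sum(ws[i] for ws in weight_sets) / n
            for i in range(len(weight_sets[0]))]

# Three devices, each holding data that stays local.
device_data = [[1.0, 2.0], [3.0, 5.0], [2.0, 2.0]]
global_weights = [0.0, 0.0]

for _ in range(50):  # a few federated rounds
    updates = [local_update(global_weights, d) for d in device_data]
    global_weights = federated_average(updates)

# Both weights converge toward 2.5, the mean across all devices' data,
# even though the server never saw any raw data point.
print(global_weights)
```

The design point matches the article: the server only ever sees weights, so a breach of the central repository exposes no user data.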
The same idea applies to Guacamole ID, an intelligent application that protects computers against unauthorized access through continuous authentication. If an unauthorized person tries to view your information or look over your shoulder, the system locks the computer and keeps prying eyes away from confidential information. The system is cloud-independent and can work without the internet; all processing happens on the local computer.
The future of AI is not that far away
It's hard to say how the technology will develop, but the first step toward the future is understanding it. As computers and technology evolve, the prospect of artificial intelligence replacing human workers has become a common fear, but robots won't start ruling the world tomorrow, or anytime soon.
It is true that, given the technology's progress in the last few years, the fear of AI replacing humans in the workforce is not ungrounded: many tasks once executed by people have become automated. The fear is natural and expected, but for some peace of mind, consider a paper published by the MIT Task Force on the Work of the Future entitled "Artificial Intelligence and the Future of Work": the future looks promising.
Is it true that only big companies are benefiting from artificial intelligence?
Artificial intelligence has made its way into diverse industries, changing the way businesses operate. These technologies have the potential to further optimize and revolutionize a great number of businesses, regardless of company size. With the world more digital than ever, algorithms and AI models are becoming more sophisticated, and the volumes of data generated are increasing enormously. Not only big enterprises but also small and medium ones need to start diving into the AI world and finding new ways to improve their operations.
Using AI, businesses can manage products better, automate services and be proactive with customer data. Adopting AI solutions can, for example, help companies automate systems that were previously run manually, achieving greater efficiency, reducing errors and, last but not least, increasing profits.
But automation is not the only way AI can boost an enterprise. AI can also help with worker scheduling, security, customer appointments and marketing, among other things. There are many AI solutions available on the market today, so enterprises should look for those aligned with their business needs, culture and overall mission in order to achieve a competitive advantage and change the business for the better. There are many paths to AI in each industry and enterprise; what's important is building a strong foundation for an AI-powered future.
Which countries are leading the way in AI?
Although you may think the US is leading the way in AI development, it is not alone: Europe and China are playing hard as well. China's active involvement in this field is an interesting case to examine. Its output of AI-related research increased by just over 120%, whereas output in the US increased by almost 70%. Chinese enterprises such as Alibaba, Baidu and Lenovo are investing heavily in AI. China has laid out a three-step plan to turn AI into a core industry for the country; its AI market was valued at almost 50 billion U.S. dollars in 2020, and the country seeks to become the world's leading AI power by 2030.
As for investment in this new technology, the 2021 AI Index Report, published by the Stanford Institute for Human-Centered Artificial Intelligence in California, showed that private investment in AI reached $93.5 billion in 2021, more than double the total for 2020, and that New Zealand, Hong Kong, Ireland, Luxembourg and Sweden were the countries or regions with the highest growth in AI hiring from 2016 to 2021.
The future is coming quickly, and artificial intelligence will certainly be part of it. AI has enormous potential to make the world a better place and will leave people better off in the near future. Computers are taking on an ever-greater share of the heavy lifting for us, their human partners, and will make us better at what we do, expanding the scope of our jobs, enabling us to work faster, and raising our abilities and insight along the way.
Developments in artificial intelligence are sweeping the globe and influencing every business, so countries must work on and invest in building robust AI technologies that can help grow economies and even improve the environment. What's important at this point is that nations keep developing AI strategies to advance their capabilities through investment, incentives and talent development. More AI is what the future looks like: the change has come, and we must all embrace this new reality.
As the modern world evolves, more threats arise: as of the first quarter of 2022, more than 90% of data breaches were cyberattack-related. Because of this, knowledge of cybersecurity and artificial intelligence has become crucial to a successful business. Businesses everywhere need real expertise to protect their data and keep from falling behind their competitors.
We spoke with Taylor Hersom, founder and CEO of Eden Data -- a cybersecurity company offering startups retainer-based security leadership covering security, compliance, and data privacy -- about the current state of AI in cybersecurity.
Are we truly in the era of AI in cyber, or is it just machine learning?
It's actually both, since machine learning is a part of AI, and they share a similar goal: using computers to streamline processes that would normally take humans a lot of time. With technology, those tasks can be done much faster.
How does AI help in minimizing human error in cyber?
Every company is collecting massive amounts of data, and humans simply cannot keep up with it. AI and machine-learning algorithms can be leveraged specifically in the security space. Beyond that, AI can help minimize errors through automated checks and balances, by catching security vulnerabilities in the code developers write, and through broader alerting functionality.
How do AI and cybersecurity complement each other to prevent phishing attacks?
AI algorithms can compare threat databases against existing data points, determine that something is harmful, and prevent it from ever reaching your inbox. A tool can also scan a link in real time and report on any "sketchy" behavior.
How does AI help in minimizing cybersecurity threats such as ransomware and phishing?
One of the big ones here is threat hunting. Leveraging AI, essentially educating the algorithms, to identify threats sooner and pick up on more advanced threats is a huge element of all of this. Another fascinating capability is AI's ability to isolate a malicious package before it is downloaded into the system, preventing ransomware from spreading.
What should we expect from the future of AI in cyber?
The future will be about automating controls and procedures: what will my organization do to address a given risk? We will be able to automate many menial tasks and reduce the burden on security professionals and on the business as a whole, because security isn't limited to the security team.
With AI, we're getting much better at identifying threats, preventing threats, and shutting things down at the source. AI is making jobs easier on the good-guy side and harder on the bad-guy side.
Building a mobile application involves many steps, from the basic idea through detailed planning, design, development and testing to final deployment on the target devices. All of this has to be decided well before you actually start working on your app idea. When you decide to create and deploy your app, two major options come into the picture: you can develop either a native mobile app or a web app.
The question that arises is: what is the basic difference between native mobile apps and web apps? It is also worth knowing which of the two alternatives suits you better as an app developer, depending on your skills, audience and nature of work.
Native Mobile App
A native mobile app is an application that has been designed specifically for one particular mobile platform and is installed directly onto the device itself. Users download native mobile apps from online app stores and marketplaces such as the Apple App Store and the Google Play Store.
One such app is the Camera+ app that has been developed for Apple’s iOS devices.
A web app is an internet-enabled application that users access through the web browser on their mobile devices. You do not need to download a web app onto your device in order to use it.
Any application used through a browser such as Safari, browser-based email for instance, is a good example of a web app.
Many companies widen their horizons by designing and developing both native mobile apps and web apps, which lets more users reach them and provides a good overall user experience. To users, many native mobile apps and web apps look and work almost the same. To know which of the two better suits your needs, consider the following points about native mobile apps and web apps.
App Development Process
The development process is different for the two types of app. A native mobile app must be developed using the process its target mobile platform requires. Web apps, which run in the web browsers of mobile devices, face a different difficulty: each mobile device comes with its own unique features, and a single web app has to accommodate them all.
While a native app is completely compatible with the hardware and features of its device, web apps can utilize only a limited amount of a device’s native features.
A native mobile app is updated by the user downloading the latest version, whereas a web app updates itself without user intervention (though you still have to access it through the device's web browser).
Because of these and other differences, such as app monetization and efficiency, many developers feel that native apps are better than web apps.
Native Apps and Web Apps: The Pros and Cons
Now that we understand the basic differences between native and web apps, we can discuss their pros and cons.
Native Mobile Apps: Pros and Cons
Pros:

- They run faster than web apps
- They can access system resources and thus offer better functionality
- They can be used offline as well
- Once approved by the app store, these apps are generally more safe and secure
- They are also easier to develop, as the tools, SDKs and interface elements are now easily available

Cons:

- Developing a native app is generally much more expensive than developing a web app
- As they are compatible only with specific platforms, they have to be designed and built from scratch for each one
- It is more expensive to update and maintain native apps
- Getting such apps approved by the app stores can also be a concern
Web Apps: Pros and Cons
Pros:

- These apps do not need to be installed or downloaded, as they run in internet browsers
- They are easy to maintain and share a common code structure across different mobile platforms
- They are easier to build than native apps and update themselves
- They can be launched without any hassle, as they don't require app store approvals

Cons:

- These apps can't work offline
- They are generally slower than native apps and have relatively fewer features
- They are not easily discoverable, as they are not listed in app stores
- As no store vets them, their quality and security are harder to guarantee
Native Apps vs Web Apps
Native apps, as the name suggests, are purpose-built for specific platforms, such as iOS or Android, and for specific devices, like an iPhone. They are downloaded via the phone's app store and have access to built-in system resources such as the camera and GPS. Such apps reside and run solely on the native device. Examples of native apps include Twitter, Google Maps and Facebook.
Web apps can be accessed from any device via an internet browser and are not native to any particular system. They don't need to be downloaded locally and simply adapt to whatever device they run on. Their responsive nature makes them look similar to native apps, which is why people often confuse the two.
WhatsApp makes this clearer. If you have WhatsApp installed on your device and at the same time access it in a browser via WhatsApp Web, you won't notice much of a difference. Both the native and web versions of the app are designed so that the user experience feels the same. Despite the similar designs and color schemes, however, the two products are entirely different.
While native apps can also run offline, web apps always need an active internet connection. Native mobile apps are generally faster and more responsive than web apps, but they need to be updated regularly, which is not the case with web apps.
Social Media Ban: Lessons Learned

Harrisburg University Students Felt Less Stress During Blackout
Well, in a survey conducted by the school in the wake of the blackout, 33% of students said they felt less stressed during that week, without having to compulsively check updates and posts on social networks.
In fact, one student told university provost Eric Darr that she felt she was on vacation during the blackout week.
Harrisburg made headlines during the week of September 13–17 for an on-campus ban of social media sites such as Facebook, MySpace and Twitter. The blackout affected all students, faculty and staff at the university.
The goal of the social media blackout was to raise awareness about the strengths and weaknesses of social media, says Darr. "Make people reflect on the power of social media and its potential to take over one's life."
Key Survey Findings

In an attempt to understand basic usage patterns and opinions, 221 students and 62 faculty and staff were surveyed about their social media habits and reactions to the blackout, both during and after the experiment.
The survey reveals that the majority of respondents are regular users of social media. Specifically, two-thirds report using Facebook on a daily basis, for mainly social purposes. In fact, 13% of student responders say they rely on Facebook to combat boredom between classes. Exactly half of student responses cite the use of YouTube regularly for academic and social purposes.
The survey results also suggest that a healthier, more productive life-style was practiced by students during the ban:
- 25% report better concentration in the classroom during the blackout week;
- 23% found lectures more interesting;
- 6% report eating better and exercising more.
Overall, school work was given higher priority when social media was unavailable.
On the flip side, the study also raises concerns about heavy use of social media by both students and faculty. A full 40% of the student respondents report spending between 11 and 20 hours per week on social media sites. Equally disturbing, several faculty members and staff report spending more than 20 hours a week on social networking sites.
Lessons Learned

Survey results show that 44% of students report learning something from the ban. During the blackout, these students were forced to use another tool to work on their business plans and discovered that it was easier than Facebook. Students learned that document management is not one of Facebook's core strengths.
In all, 76% of faculty and staff also report learning from the blackout, mainly about the power of face-to-face dialogue over online interaction. They realized that teaching complex concepts is much easier through face-to-face communication and meetings.
Darr's personal takeaway is about understanding the impact of social media and implementing appropriate education and guidelines within the institution to mitigate some of the risks these channels pose.
Darr says the school must emphasize "the pros and cons, so people know that social media is not all bad or good, but continues to have a powerful presence in our lives."
Asked if he would do this experiment again, he says "Yes. We will do a much more focused and longer ban next September."
Next time he plans to extend the ban to several weeks, focus the study on one specific social media channel, and adopt a more comprehensive approach to assessing the impact.
"I am pleased in making habits on social media use more balanced and understandable in our university," Darr says. "It will be interesting to see what happens next." | <urn:uuid:9ab7bd07-7c40-4c0e-979e-f9da986afd16> | CC-MAIN-2022-40 | https://www.govinfosecurity.com/social-media-ban-lessons-learned-a-3181 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00703.warc.gz | en | 0.969276 | 715 | 2.53125 | 3 |
Compared to solar and wind, geothermal energy is still a small fraction of the renewable energy market. It’s holding its own, however, and it’s likely to become a much bigger player in the not-so-distant future.
“The resource is enormous,” Karl Gawell, executive director of the Geothermal Energy Association (GEA), told the E-Commerce Times. “We’re looking for how to solve our energy problems, and this is a huge resource we’re only starting to tap.”
Geothermal energy is based on a primary fact about the earth: It’s hot at the core. The farther you drill down, the hotter things get. This heat energy can be used to create electricity, primarily by accessing water that is heated by rock layers far below the surface. This hot water is then used to turn turbines to create power.
Older geothermal plants relied on very hot water — around 300 degrees Fahrenheit — which is generated in regions with hot rocks relatively close to the surface, particularly those located along active tectonic faults. For this reason, the first geothermal plants in the U.S. were located in Hawaii, Utah, Nevada and California.
Newer geothermal technology, however, allows for what are called “binary power plants.” These can be located in regions with water that is not nearly as hot, usually below boiling, at around 100 degrees. Binary plants bring up this medium-hot water, which stays in pipes that run past a heat exchanger, which in turn heats a working fluid with a much lower boiling point. This heated fluid then turns the turbines, and the water is sent, uncontaminated, back into the earth.
“These plants can be built anywhere there is hot-enough rock,” explained Gawell.
There has, in fact, been something of a boom in the geothermal industry, based on this new technology. Whereas in 2005 there were only four states with active geothermal plants, now there are nine — and 16 more have plants under development, including Louisiana, Texas and Mississippi.
There is ongoing geothermal research, both in the U.S. and worldwide, and this research is leading to other applications and ways of using the heat from the earth. In addition to using the hot water or heated liquid to turn turbines, there is a growing trend toward “direct use” geothermal, which uses hot water from the earth to provide heat, particularly for commercial buildings.
An even newer technology under development, called “enhanced geothermal,” involves pumping water into the ground to be heated. All of this research and development promises to lead to even more geothermal power production in the future.
These new technologies and plants have begun to generate more jobs in the field. In 2008, the GEA reported about 18,000 employed in the industry in the U.S. That number is now around 25,000, Gawell estimated — still relatively small, but continuing to grow despite tough economic times. These jobs include work for designers, architects, geologists, construction workers, mechanics and many others.
“The jobs in this industry are definitely going to grow,” Gawell predicted. “There are a lot of projects in the future. We’ve got this investment in technology that will pay off in the next several years.”
Those that work in the industry, in fact, have confidence in energy that comes from the earth — and its potential for becoming a much bigger player in the energy market.
“I’ve been doing this for 30 years, and I’ve never missed a house payment,” said Bill Rickard, president of California’s Geothermal Resource Group, which specializes in drilling geothermal wells.
In addition to all those new jobs, geothermal energy just makes plain environmental sense.
“The benefits [of geothermal] are many and obvious,” renewable energy consultant Fred Feige told the E-Commerce Times. “There are no fuel costs. There are few CO2 emissions, except a small amount during construction. If the system produces steam, it is usually condensed, and the water is re-injected into the ground. The economic benefits will improve as the technology grows.”
Heating and Cooling
The other realm of the geothermal industry that has seen significant growth in recent years is the use of geothermal HVAC systems, or ground source heat pumps, for heating and cooling both residential and commercial buildings.
These systems are based on the fact that the earth’s temperature not far below the surface is relatively steady, in any climate, throughout the year. They’re not technically “geothermal,” in the sense of using heat coming from the earth — since the heat in this instance comes from the sun and is stored in the surface of the earth. Still, they merit mention in any discussion of earth-based, renewable energy systems.
In ground-source HVAC systems, loops are buried in the ground and then used to regulate the temperature of a building. These systems cost much more to install than traditional electric heat pumps, as much as three times more, but they eventually pay for themselves, since they offer electricity savings of one-half to two-thirds compared with regular heat pumps.
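Those two figures, roughly three times the installation cost and one-half to two-thirds savings on heating and cooling electricity, are enough for a back-of-the-envelope payback estimate. The dollar amounts below are illustrative assumptions, not real quotes:

```python
# Back-of-the-envelope payback estimate for a ground-source heat pump.
# All dollar figures below are illustrative assumptions, not real prices.

conventional_install = 6_000                   # assumed cost of a standard heat pump ($)
geothermal_install = 3 * conventional_install  # roughly 3x the install cost
annual_hvac_electricity = 1_800                # assumed yearly heating/cooling bill ($)

extra_upfront = geothermal_install - conventional_install

for savings_rate in (0.5, 2 / 3):  # one-half to two-thirds savings
    annual_savings = annual_hvac_electricity * savings_rate
    payback_years = extra_upfront / annual_savings
    print(f"{savings_rate:.0%} savings -> payback in {payback_years:.1f} years")
```

Under these assumed numbers the extra upfront cost pays for itself in roughly 10 to 13 years; with higher electricity prices or heavier HVAC use, the payback period shrinks accordingly.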
Increasingly, ground source heat pump businesses are springing up around the country, and like geothermal energy production, this industry promises to create new jobs in the coming decades.
“The market is expanding at a remarkable rate,” Jay Egg, a renewable energy expert and owner of Egg Commercial Systems, told the E-Commerce Times. “We expect annual sales to double in the next four years to more than 400,000 units per year.”
And though it’s still a relatively small piece of the residential and commercial renewable energy pie, geothermal heating and cooling has its own particular advantages.
“Wind and solar are good, but they are dependent upon climatic conditions,” explained Egg. “Geothermal heating and air conditioning works 100 percent of the time, reducing energy consumption by about 50 percent or more. There is no other claim such as this.”
State-sponsored hacking has yet again reared its ugly head in our news cycle, as fresh articles highlight how Russian hackers were able to influence various social media users through online campaigns. But what exactly is state-sponsored hacking? What are the goals, methods and dangers?
Below, CyberPolicy looks at this phenomenon, its effects on culture and its effects on private industries. While it might be impossible to stop the most sophisticated hacker collectives in the world from breaching your network, it is possible to keep your company afloat with cyberattack insurance. Keep reading to find out how.
Government-Backed Hacking 101
To put it simply, government-backed hacking is a form of digital incursion that works to promote a nation’s interest at home or abroad. This could take the form of crashing a website critical of the state, or crippling the financial systems of an entire country.
By now, almost everybody is familiar with the tactics Russian hackers used to hack and leak emails from the Democratic National Committee in order to damage former presidential candidate Hillary Clinton. What is new, however, is the revelation that Russian infiltrators also used thousands of fake Facebook and Twitter accounts to pose as average Americans.
For example, an entirely invented Facebook user named Melvin Redick posted a link to a newly created website promising leaks critical of Clinton and George Soros.
“Mr. Redick turned out to be a remarkably elusive character. No Melvin Redick appears in Pennsylvania records, and his photos seem to be borrowed from an unsuspecting Brazilian,” reports the New York Times. This wasn’t the only phony account, either: “On Election Day, for instance, they found that one group of Twitter bots sent out the hashtag #WarAgainstDemocrats more than 1,700 times.”
More revelations will undoubtedly pour out in the future, but we are already seeing a sophisticated combination of weaponized data breaches and modified social engineering campaigns to influence public opinion.
But for all the news space it garners, Russia's is not the only government bending the rules. State-sponsored hackers are also suspected in the ransomware that infected devices across more than 60 countries earlier this year; China is known to have spied on companies in the U.S. technology and pharmaceutical industries; and North Korea is suspected of having attempted to infiltrate electrical grids.
As you can see, the goals and methods behind state-sponsored hacking are varied. Even so, one thing is certain – the consequences of these kinds of attacks are dire.
Cyber Protections for Private Businesses
In most cases, cyber crooks are looking for an easy target to exploit. Therefore, the solution is rather straightforward – if you make yourself a tougher nut to crack, hackers will likely leave you be. After all, a cybercriminal wants money and data. It doesn’t have to be your money and data, just so long as they can steal it.
This isn’t the case for the most sophisticated and well-funded hackers on earth. Government-backed hackers have almost unlimited resources and well-defined targets. So stopping them dead in their tracks is about as likely as catching two identical snowflakes on a warm summer’s day.
That being said, there's no need to abandon hope. You can still block a number of attacks by training your staff to sniff out fraudulent emails, close software security gaps, recognize malware and more.
And if a hacker does get through, cyberattack insurance can help assist you with financial damages, public disclosures and potential lawsuits. You might not be able to thwart the bad guys, but you can protect your own digital assets nonetheless with CyberPolicy. | <urn:uuid:b603fbeb-429c-4f9b-acdc-69b1b563522a> | CC-MAIN-2022-40 | https://www.cyberpolicy.com/cybersecurity-education/state-sponsored-hacking-explained | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00103.warc.gz | en | 0.946083 | 744 | 2.765625 | 3 |
In some ways, humans and routers are exactly alike. After all, we both need “keepalives” to keep going.
Having said that, if you were being super-picky and precise about it, I suppose you could say that humans and routers are also different, in that human keepalives include air, water and food, whereas router keepalives involve sending very small electrical signals to each other over some kind of wired or wireless media.
Actually, now I think about it, I see that humans and routers are completely different, so please forget absolutely everything I’ve said so far. Instead, read this:
WHAT IS BFD?
Many networking protocols are slow to detect failure. For example, IS-IS often has a default hold-time of 27 seconds. OSPF has a dead interval of 40 seconds. Now, these default times can be altered to be quicker. And depending on the kind of network it’s connected to, there may be other default timers that are even quicker.
Unfortunately though, it’s often the case that the smallest Hello time configurable on a protocol is 1 second, which means the hold time needs to be a minimum of 3 seconds. And if you’re running a lot of protocols on the box, that means you’re adjusting lots of protocols to each send more hello packets. That might not sound like a lot of strain – but scale it out to a lot of protocols and a lot of interfaces, and it can be a genuine problem.
If only there was a way to send sub-second hellos – and have one single stream of hellos that can be shared by all protocols. IF ONLY. IF ONLY THIS WAS POSSIBLE.
Well, guess what: it is! Please say hello to BFD, or Bidirectional Forwarding Detection.
This protocol has the ability to send multiple “hello” messages a second, and a single BFD session can then be used by almost any other protocol running on the box. If the BFD session goes down, all the protocols using the session can also tear themselves down immediately. On more powerful boxes, you can even configure sub-second failure detection!
There are two reasons that BFD is able to act so quickly. The first is that it has a very low overhead. A BFD message is basically just a very small UDP message sent in one direction, from Router A to Router B. As long as both boxes are sending BFD messages to each other, the protocols relying on BFD will stay up.
The other reason is that a single BFD session can be used by multiple protocols. If a box is running IS-IS, OSPF, BGP, LDP, RSVP and more, all of those protocols can share the one single BFD session. The protocols still send their hello messages, but those protocol hello messages can be left at their default slow timers. Instead, all the requirements for quicker messages can be bundled up into the one single session. This is far more efficient than tuning a dozen protocols on a box to all suddenly send their hello messages once a second.
By the way, this blog post uses Juniper configuration and terminology, and we’re going to look at Juniper output. But the underlying protocol is an open standard, and the actual behaviour of BFD is the same regardless of who you’re using. So if you’re not familiar with Juniper but you still want to know how BFD works, don’t worry: this post has you covered.
HOW DOES BFD WORK?
BFD is very easy to configure. There are just two essential pieces to know about:
- The “minimum-interval“, which specifies the smallest interval at which a box is able to send and receive BFD packets.
- The “multiplier“, which tells the neighbour how many consecutive BFD packets it can miss before it should assume the session is down.
Neither of these numbers actually has to match at both ends. For example, if Router A is set to a minimum-interval of 300ms, and Router B is set to 500ms, then both boxes will agree to send BFD to each other using the longer, slower timer. This makes sense when you think that Router B has probably been deliberately set to a slower time because it can’t handle 300ms. Router A can send faster, but it sees that Router B can’t, so Router A slows itself down.
Similarly, if Router A is configured with a multiplier of 3, and Router B is configured with a multiplier of 4, then that’s totally fine. However, this piece is a bit more complicated, so let’s look at an example.
If the minimum-interval is 300ms on both boxes, and the multiplier configured on Router A is 3, then Router B will wait 900ms (300 x 3) before it declares an adjacency down. If the multiplier configured on Router B is 4, then Router A will wait 1200ms (300 x 4) before it tears things down.
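That arithmetic generalises neatly. Here's a small illustrative sketch (not vendor code) of how the numbers combine: both sides agree on the slower of the two intervals, and each side's detection time uses the *remote* side's advertised multiplier.

```python
def negotiated_interval(local_min_ms: int, remote_min_ms: int) -> int:
    """Both sides converge on the slower (larger) of the two minimum-intervals."""
    return max(local_min_ms, remote_min_ms)

def detect_time_ms(local_min_ms: int, remote_min_ms: int, remote_multiplier: int) -> int:
    """Time a router waits with no BFD packets before declaring the session down.
    Note it uses the remote side's advertised multiplier, not its own."""
    return remote_multiplier * negotiated_interval(local_min_ms, remote_min_ms)

# The example from the text: both sides at 300ms.
# Router A advertises multiplier 3, Router B advertises multiplier 4.
print(detect_time_ms(300, 300, remote_multiplier=3))  # Router B waits 900ms
print(detect_time_ms(300, 300, remote_multiplier=4))  # Router A waits 1200ms
```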
In other words, by configuring a multiplier on Router A, what we’re actually doing is getting Router A to indicate to Router B how powerful Router A is. Router A is powerful enough to be able to confidently send BFD regularly, and it lets Router B know this via its multiplier setting. By configuring the multiplier on a box, you’re giving that box the ability to tell its neighbours “this is the multiplier I’d like you to use, because it’s what I’m able to support.”
You could think of it as if Router A is saying something like this: “I am Powerful and Strong and Cool enough to be able to reliably send BFD packets. If you miss three packets from me, you know something has gone wrong. Therefore, Router B, please set your multiplier to 3.”
By contrast, think of Router B as saying to Router A “I am also cool, but I’m not as strong and powerful. There’s a chance that my routing engine might become overloaded for a brief time. So do me a favour, Router A, and set your multiplier to 4. This gives me a bit of freedom to not send a few BFD packets if I ever become overwhelmed for a short while.”
WHAT DOES BFD LOOK LIKE IN PRACTICE?
Let’s say we added these two lines of configuration to two boxes, Router A and Router B. These two boxes were already running IS-IS, and the adjacency was already up. I then added this on Router A, to turn BFD on for a particular interface:
set protocols isis interface ge-0/0/0.0 family inet bfd-liveness-detection minimum-interval 400
set protocols isis interface ge-0/0/0.0 family inet bfd-liveness-detection multiplier 3
(NOTE: On some older Juniper boxes you won’t see the “family inet” piece in this configuration.)
At this point, Router B is not talking BFD. However, even though Router A is not receiving any BFD from Router B, the IS-IS adjacency stays up!
Why does this happen? The reason is that BFD is smart: Router A waits until it receives a reply before it decides that the BFD session is actually up. And a BFD session isn’t considered “down” until it has first come up.
Once the configuration is committed on both boxes, we see this:
root@Router_A> show bfd session
                                                Detect   Transmit
  Address                 State     Interface   Time     Interval  Multiplier
  10.1.2.1                Up        ge-0/0/0.0  1.200     0.400        3

1 sessions, 1 clients
Cumulative transmit rate 2.5 pps, cumulative receive rate 2.5 pps
Notice that you can see the interval, the multiplier, and the total time the session will wait without hearing from the neighbour before declaring it down (3 x 400ms = 1.2 seconds).
You can also see “1 clients”. If we were to add BFD to another protocol (for example OSPF) on ge-0/0/0.0, which talks to the router at 10.1.2.1, we’d see this change to “2 clients”. Try it in a lab of your own, and see what happens!
WHAT DO BFD PACKETS LOOK LIKE?
There’s a lot of nonsense in the packet, but here’s the bits to look out for:
- This is a regular UDP packet, on port 3784.
- The source and destination IPs are clear to see in the IP header: they’re the IPs of the physical interfaces, not loopback IPs or anything like that.
- In the actual payload (labelled in this picture as the “BFD Control message”) you can see that the session state is “Up”, meaning that this router can see BFD packets coming in from the other router too.
- We see a multiplier of 3, and both TX and RX (transmit and receive) timers of 400ms. It’s possible to set the transmit and receive timers to be different, but you generally would want to keep them the same. The “minimum-interval” command above sets both receive and transmit at the same time.
It’s fairly simple when you only look at the bits that matter!
CHANGING BFD TIMERS DOES NOT TEAR ANY SESSIONS DOWN
Updating BFD timers on either of these two routers will have no impact on the service.
In fact, even if the change is done on one box at a time, with a result that each box has different timers for a while during the process of the change, then BFD will still carry on with no problems!
To understand why this is the case, imagine that those two boxes from earlier, Router A and Router B, are still talking BFD to each other, with 400ms timers and a multiplier of 3. Then imagine that we reconfigure Router A to have a 700ms timer, and a multiplier of 4. In other words, Router A and Router B now have different timers and multipliers.
We’ll come back to the multiplier in a moment. For now, let’s focus on what happens to the timers. Here’s the step-by-step process of the negotiation:
- Router A starts sending new BFD messages to Router B, which include the new updated timer and multiplier. However, for the time being, Router A keeps sending at the old, faster speed.
- Router B sees that the timer from Router A (700ms) is larger than the one in Router B’s configuration (400ms). In other words, Router A is asking for slower times.
- Router B replies with the same BFD timers that it was sending before, because nothing has changed in Router B’s config. However, Router B also sets a specific bit in its header to show that it’s received the requested change.
- At this point, Router A can now safely start sending at the slower speed, safe in the knowledge that Router B won’t take these slower messages as an indication of a problem.
- All the time, Router B still keeps on telling Router A that it’s willing to do 400ms. In other words, both ends tell the other what they’re capable of, and they then negotiate to the slowest speed.
This all happens gracefully, and no adjacencies are torn down. Aww, that’s cute! They’re working together like best friends!
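The handshake above can be sketched as a toy sequence. This is purely illustrative – in real BFD the announcement and confirmation are carried by the Poll (P) and Final (F) flag bits in the control packet header, which is an assumption worth verifying against the RFC for your platform:

```python
# Toy model of a graceful timer change between two routers.
a_tx, b_tx = 400, 400          # current transmit intervals (ms)
a_new = 700                    # Router A is reconfigured to 700ms

# 1. A advertises the new interval (Poll bit set), but keeps
#    transmitting at the old, faster rate for now.
advertised = a_new

# 2-3. B sees A is asking for slower timers and replies with a
#      confirmation (Final bit). B's own config is unchanged.
b_confirmed = True

# 4. Only once A sees the confirmation does it slow down. Both ends
#    settle on the larger value, so nothing times out mid-change.
if b_confirmed:
    a_tx = max(a_new, 400)     # 700
    b_tx = max(a_new, 400)     # 700 - B also sends at the negotiated rate
print(a_tx, b_tx)              # 700 700
```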
Router B also sees that the multiplier it’s receiving from Router A has changed. Remember, when we configure it on Router A, this actually changes the multiplier that Router B uses. So, Router B changes the multiplier it uses – but once again, it makes the change gracefully, and nothing goes down.
WHAT DOES IT LOOK LIKE WHEN BFD SETTINGS ARE DIFFERENT ON EACH BOX?
Let’s actually now configure Router A to have a minimum-interval of 700, and a multiplier of 4.
root@Router_A> show configuration protocols isis interface ge-0/0/0.0 | display set
set protocols isis interface ge-0/0/0.0 family inet bfd-liveness-detection minimum-interval 700
set protocols isis interface ge-0/0/0.0 family inet bfd-liveness-detection multiplier 4
Then let’s keep Router B with its existing config: a minimum-interval of 400, and a multiplier of 3.
root@Router_B> show configuration protocols isis interface ge-0/0/0.0 | display set
set protocols isis interface ge-0/0/0.0 family inet bfd-liveness-detection minimum-interval 400
set protocols isis interface ge-0/0/0.0 family inet bfd-liveness-detection multiplier 3
What would we expect the result of this to be?
On Router A below we see that the detect time is 2100ms – in other words, 700ms x 3! That’s because it’s Router B that actually wants Router A to use a multiplier of 3. However, notice that Router A lists a multiplier of 4 here. This is where it really gets confusing: Router A isn’t showing us the multiplier it’s using – it’s showing us the multiplier it’s sending.
root@Router_A> show bfd session
                                                Detect   Transmit
  Address                 State     Interface   Time     Interval  Multiplier
  10.1.2.2                Up        ge-0/0/0.0  2.100     0.700        4

1 sessions, 1 clients
Cumulative transmit rate 1.4 pps, cumulative receive rate 1.4 pps
I think a very valid question to ask the boffins that design vendor CLIs is “Why couldn’t you have called that the Sending Multiplier, instead of just the Multiplier?” Oh well, too late now!
Similarly, on Router B we see that it has also negotiated to use the 700ms time, even though it’s configured for 400ms. As for its detect time, it’s 2800ms – in other words, it takes the negotiated timer and combines it with the multiplier that Router A advertised, which is 4. Below you can see that Router B actually shows a multiplier of 3 – which again, is what Router B is sending to Router A. Good grief!
root@Router_B> show bfd session
                                                Detect   Transmit
  Address                 State     Interface   Time     Interval  Multiplier
  10.1.2.1                Up        ge-0/0/0.0  2.800     0.700        3

1 sessions, 1 clients
Cumulative transmit rate 1.4 pps, cumulative receive rate 1.4 pps
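You can sanity-check those two outputs with a little arithmetic (an illustrative check, not vendor code): both sides negotiate to the slower 700ms interval, and each side's detect time multiplies that interval by the *other* side's advertised multiplier.

```python
a_interval, a_mult = 700, 4    # Router A's configuration
b_interval, b_mult = 400, 3    # Router B's configuration

interval = max(a_interval, b_interval)   # both transmit every 700ms
a_detect = b_mult * interval             # A waits 3 x 700 = 2100ms
b_detect = a_mult * interval             # B waits 4 x 700 = 2800ms
print(interval, a_detect, b_detect)      # 700 2100 2800
```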
It gets a little confusing, so don’t worry if it takes you a while to get your head around it. The main thing to understand is that you can change BFD timers and multipliers to make them bigger, without it impacting service: all changes happen gracefully, and no neighborships or adjacencies or LSPs are torn down.
SHOULD I USE BFD IN MY NETWORK?
Now that’s the million-dollar question. The short answer is “yes”. The long answer is “yes, but there are some caveats”.
There are two key questions you have to ask yourself:
- First, is your hardware powerful (but probably expensive), or not-so-powerful (but probably affordable)?
- Second, how many interfaces will you be running it on?
To give examples of Juniper hardware: their MX series of routers is a truly mighty beast, and can comfortably handle BFD with pretty aggressive timers, on lots of interfaces. By contrast, their ACX series is designed specifically to be run fairly light, and in certain circumstances it can struggle if the BFD timers are anything less than 500ms.
If you’ve got 40 interfaces that are all running OSPF, putting BFD on all of them might be okay, or it might be disastrous. It all depends on the hardware you’re using.
A problem with copper Ethernet interfaces is that they can sometimes stay “up”, even when they’re really down. However, some fibre presentations do actually go down the very split-second they see that the light source has disappeared. With that in mind, you might choose to not run BFD on those interfaces, because they’ll go down immediately. This is a good strategy if you’re scared about running BFD on too many interfaces. Be aware though that this only protects you from physical fibre problems. If the routing engine of the box at the other end has crashed, you might not notice it if you’re not running BFD.
BONUS BFD FACT
If you ever delete BFD config from one end, but keep the BFD config on the other end, what will happen to the adjacency? You might instinctively think that it will go down. After all, if one end suddenly stops sending BFD messages to the other end… well, that’s bad, right?
But in fact, the adjacency will still stay up! The reason is that the router without BFD sends a flood of around a dozen “BFD admin-down” messages to the other box, to say that BFD has been deleted by the user. This lets the other router know that BFD hasn’t disappeared because of a fault, and so the IS-IS adjacency carries on as normal. Because it’s all UDP, this message can’t be sent reliably – hence the tidal wave of admin-down messages. If not even one of them makes it to the other end, then frankly you’ve got bigger problems on your hands!
Did you enjoy this journey down BFD lane? If so then I’d love you to share this post on your favourite social media of choice. The more readers I get, the more I’m inspired to keep on writing even more cool new posts for you.
If you want to find out when I write new posts, follow me on Twitter, where I will also treat you to a healthy heap of news and opinions. And if you fancy some more learning, take a look through my other posts. I’ve got plenty of cool new networking knowledge for you on this website, especially covering Juniper tech and service provider goodness.
It’s all free for you, although I’ll never say no to a donation. This website is 100% a non-profit endeavour, in fact it costs me money to run. I don’t mind that one bit, but it would be cool if I could break even on the web hosting, and the licenses I buy to bring you this sweet sweet content.
Thank you for reading! | <urn:uuid:325d22cf-78c9-437e-9d6e-5ac12c6a48a2> | CC-MAIN-2022-40 | https://www.networkfuntimes.com/bfd-on-junos-bidirectional-forwarding-detection-juniper-config-multi-vendor-explanation/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00103.warc.gz | en | 0.923877 | 4,129 | 2.875 | 3 |
Cultures are Built on Fiction
“Cultures are built on fiction.” A statement made by Ashton Kutcher at one of the largest cybersecurity events in the world. More than likely, your initial thought is that you disagree. I did. But let’s break down his statement to help us understand how this belief can actually help create an Internet Culture for minimizing cybercrime.
Cultures are formed based on legal documents like the Declaration of Independence and the United States Constitution, or religious doctrines like the Bible or Quran. These narratives, built by men, form the conscious society in which we live. What makes these social norms a necessity? A historical metaphor I like to use is the difference between the United States and the Wild West in the late 1800s. The western territories were given the nickname “Wild” because they did not adhere to the rest of the U.S. culture. People in the Wild West did not have social norms defined for them. They had a degree of understanding, but it was really up to lawmakers and the community to accept and define judgments placed on actions within the territory. Without legal doctrines that define the culture, people in the Wild West were more likely to break laws and get away with it. It wasn’t until these territories became states that the Constitution defined what is acceptable and unacceptable.
What does the Wild West have to do with Cyber Security?
The Internet is a new territory, under which the entire world lives. Cybercriminals are the gunslingers of this new territory. They are highly effective criminal organizations in a territory without defined social norms. Like gunslingers, they are able to take advantage of a lawless culture that has yet to create a world-wide doctrine to live by. When we don’t fully understand our new territory (technology), we lack the ability to properly form judgments against activities that are created from it, and people will take advantage. It isn’t until a doctrine is formed that we can enforce the judgments of our society.
Why hasn’t a doctrine been created already?
We currently don’t know what the rules are for the Internet. We don’t have a truly defined law for governing and developing the Internet culture for the entire world. We haven’t created one because we don’t know all the ways in which cybercriminals will take advantage of the new technology frontier. However, more recent patterns like ransomware have given us a more promising understanding for finally defining “criminal” behavior. We do, of course, carry over a degree of understanding from our existing cultural norms. With so many cultural differences, societal norms from across the world will have to be reconciled for the Internet, and it’s up to the citizens of the world to make it happen. Once accepted, no single cultural creation will have ever affected as many people.
Who should create the doctrine?
Uber and Lyft utilized technology to redefine a norm within transportation. The new cultural shift in transportation brought us into an undefined landscape. Unfortunately, the government is taking the role in defining this new territory by limiting its ability. We can’t innovate successfully when the government is pinning us down. Government officials aren’t the experts in the space, and therefore, should not be creating the doctrine by which we develop our social norms for technology. They should be there to enforce and bring judgment against those that break the laws set forth by a group of experts that understand the new frontier.
Where do we go from here?
It is up to us to come together and take responsibility for developing and accepting the Internet culture before the government does it for us. I’m hopeful that our technology thought leaders throughout the world will step up and create this doctrine that can then be presented to and accepted by government entities around the world. We must create a conscious society around the web before we can limit cybercrime.
Written by: Eric Edmonds, Netrix LLC | <urn:uuid:e8589aa4-b6bf-4b23-961c-8891563aa732> | CC-MAIN-2022-40 | https://netrixglobal.com/blog/fighting-cybercrime-with-internet-culture/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00103.warc.gz | en | 0.955492 | 818 | 2.65625 | 3 |
I'm sure we're all familiar with stories of popular websites being hacked, or of IT systems in airlines and banks coming under attack. We're perhaps less familiar with the mobile apps that power our smartphones being compromised, yet it's an increasingly popular target for attackers.
An accomplished hacker can compromise an app in a matter of minutes, thus gaining access to your database, your ERP, your intellectual property, or even your customers. It’s crucial, therefore, that app developers do all they can to ensure their software is safe and secure.
In an ideal world, apps would undergo an independent security audit before they’re launched onto the App Store or Google Play. This is especially important for apps that are dealing with extremely sensitive information, such as banking or government apps. It’s a process that should ideally start as early in the software development process as possible to ensure that security is considered from the start.
Developers should also endeavor to follow application security guidelines that are already well established, such as the Mobile Security Testing Guide developed by the Open Web Application Security Project (OWASP). The guide outlines a number of possible avenues of attack, and urges developers to ensure that their particular app isn’t susceptible along any of them.
Another key area of vulnerability is the very source code that powers the apps themselves.
Typically, when apps are shipped, the source code is released as plain text, which makes it easy for everyone to view, whether friend or foe. It’s a sufficient threat that it earned a mention in the ISO 27001 information security standard, with the standard highlighting that the source code needs to be adequately protected otherwise attackers have a strong means of compromising systems, often without detection.
Source code vulnerabilities bring a number of risks, not least of which is the ability for attackers to directly modify the code, change the system API, modify the contents of memory or manipulate the data and resources of the application. This would allow the hacker to change the intended use of the app.
Perhaps even more dangerously, access to the source code makes it much easier for hackers to create an army of copycats in the hope that they fool users into installing it for phishing purposes.
Keeping code safe
To battle against this, it’s important that developers implement robust source code protection methods that obfuscate the source code to make cloning and reverse-engineering apps that much harder. These methods should also enable runtime defenses that thwart any copycats and lock any potential attackers out.
The following are a number of the most common methods used to keep source code safe from attacks:
- Encryption - For apps, the source code is often the most valuable thing, especially on the programming side of things. As such, it is sensible to explore options to encrypt the key bits of data when they’re both in transit and also at rest. This will play a major role in keeping your code secure.
- Monitoring - Developers should also strive to keep a constant watch over their data, with alerts setup to notify them of any suspicious activity. As with so many problems in life, early detection allows for easier and more effective remedial actions, while also providing insights to bolster defences in future.
- Access restriction - Restricting access to the source code is obviously one of the more straightforward means of defence. While this might not be possible once the app is published, within your organization, access should be limited purely to those members with hands-on roles. Even among these employees, two-factor authentication should be deployed to ensure only the right people have access to your code.
- Copyright - Copyright law is one of the better ways of protecting your source code, and it’s sensible to treat your code the way you would with any other part of your intellectual property. This might even include issuing patents to ensure you’re fully protected by the weight of the law.
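As a small, hedged illustration of the monitoring idea above – a sketch, not a complete defence – here is a stdlib-only example that fingerprints a source blob with an HMAC so that later tampering can be detected. The key handling and file contents are hypothetical; real deployments need proper key management:

```python
import hmac
import hashlib

def fingerprint(source: bytes, key: bytes) -> str:
    """Keyed digest of a source blob; store it separately from the code."""
    return hmac.new(key, source, hashlib.sha256).hexdigest()

def is_tampered(source: bytes, key: bytes, stored_digest: str) -> bool:
    # compare_digest avoids leaking information through timing differences
    return not hmac.compare_digest(fingerprint(source, key), stored_digest)

key = b"example-key-kept-out-of-the-app"        # hypothetical key
original = b"function main() { /* app code */ }"  # hypothetical source
digest = fingerprint(original, key)

print(is_tampered(original, key, digest))                    # False
print(is_tampered(original + b" // injected", key, digest))  # True
```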
While each of these approaches may be valuable in isolation, it’s often best to utilize as many of them as you can to ensure that you have all of your bases covered. After all, as far as your source code is concerned, it’s rare that you can ever have too much protection.
Mobile apps are an increasingly important part of our lives, and provide an intersection between the public and vital services. Valuable as these applications are to users, so too are they a tantalizing target for attackers keen to get their hands on such a treasure trove of data. As such, it’s vital that developers do all they can to ensure that their source code is as safe as possible. | <urn:uuid:f55bce38-00bd-4337-9621-1ea9e1f6299f> | CC-MAIN-2022-40 | https://cybernews.com/editorial/using-source-code-protection-to-prevent-hacks-to-your-mobile-apps/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00103.warc.gz | en | 0.950294 | 942 | 3 | 3 |
- SD-WAN
- MPLS: MPLS uses labels for forwarding instead of routing lookups. In this category, we can discuss MPLS, VPNs, labels and more.
- IPv6: IPv6 offers plenty of addresses for the future. In this topic, we can discuss the differences with IPv4 and more.
- OSPF: OSPF is a link-state routing protocol that uses LSAs (Link State Advertisements) and the LSDB (Link State Database) to exchange routing information and to build a topology. Any OSPF related questions can be discussed here.
- EIGRP: EIGRP is Cisco’s distance vector routing protocol on steroids. We can discuss it here.
- Spanning Tree: Spanning-Tree helps us to create a redundant, loop-free topology. Any STP related questions can be discussed here.
- IP Routing: Any general routing questions that are not about a specific routing protocol can be discussed here.
- Switching: Any general switching topics/protocols can be discussed here.
- Network Services: Network services are those protocols/topics that don’t belong somewhere else… think about DHCP, NAT, VRRP/HSRP/GLBP and more.
- Frame Relay: Frame-relay is an old WAN technology that uses PVCs and DLCIs. Any frame-relay related discussions can be done in this category.
- Multicast: Sending traffic from one source to a group of receivers is what multicast is all about. Any questions? We can discuss them here.
- RIP: RIP (Routing Information Protocol) is an old distance vector routing protocol. It’s a great protocol though to become familiar with routing protocols. Any questions? You can discuss them here.
- Security: Any general security questions that are not about the ASA can be discussed here.
- Quality of Service: Quality of Service (QoS) is about picking the winners (or losers). It allows us to give certain preference to traffic, rate-limit it and more. Any QoS related questions can be discussed here.
- BGP: BGP (Border Gateway Protocol) is the routing protocol that we use on the Internet. We also use MP-BGP in MPLS VPN. Any BGP related questions can be discussed here.
- Network Management: Got questions about network management protocols like SNMP, SSH or telnet? We can discuss them here.
The International Organization for Standardization is an internationally known and respected agency that manages and structures standards for multiple areas, including cybersecurity. ISO 27001 is ‘a systematic approach to managing sensitive company information so that it remains secure. It includes people, processes and IT systems by applying a risk management process.’ The standard does not advertise any specific products or tools, it offers some sort of comprehensive and detailed compliance checklist.
Why would companies be willing to go through the ISO 27001 certification process? First and foremost, to make sure that their cybersecurity program is secure enough. Million-dollar lawsuits, a tarnished reputation, and inner turmoil can all be caused by one weak, hidden link in the chain. The certification process seeks out these weaknesses and adjusts cybersecurity to work for the company, not against it. Second of all, compliance with ISO 27001 fosters the two most important things for any business – its customers’ and employees’ trust. Who would choose to buy your service or work for your company if you cannot guarantee the security of their private data? Lastly, ISO 27001 certification is a great tool to optimize internal workflows, cut off obsolete processes and bring your company to continual improvement.
All in all, ISO 27001 provides 14 controls, five of which relate to privileged access management. On the winding road to compliance, Indeed PAM acts as a centralized platform to control, protect, and audit privileged accounts. It becomes a real saviour that helps to easily cover these 5 mandatory requirements. Let’s investigate them closer.
Section A.6 Organization of Information Security. It requires a company to provide a transparent and detailed management framework that regulates and exercises cybersecurity programs. The company should be fully aware of what roles, responsibilities, and tasks the employees are allowed to perform and actually perform.
How can Indeed PAM help? Through the use of access policies and permissions, the software regulates and manages users and their rights and responsibilities. Indeed PAM restricts the ability to perform any prohibited actions.
Section A.9 Access Controls. The company should regulate and if needed restrict the employees’ access to different types of resources and information.
How can Indeed PAM help? Indeed PAM can control what resources, what period of time, and what users the access should be given to. It helps to granularly distribute the access rights as required by the company’s needs and cybersecurity program.
Section A.12 Operations Security. It regulates the processes connected to the information flow and storage.
How can Indeed PAM help? The solution is capable of watching over any users’ activities, such as attempts to relocate and change the company’s data. It can also log all the events which contributes to a faster incidents response. All in all, these capabilities provide another layer of audibility and transparency of the data flows.
Section A.15 Supplier Relationships. It describes the secure interaction process between the company and third parties (vendor’s technical support, contractors, remote workers outside of the network).
How can Indeed PAM help? To secure the company’s sensitive data from outsiders and prevent unauthorized access, the software can set the list of policies that firmly defines the third parties’ rights within the company’s information systems. Indeed PAM can also track the users’ activities.
Section A.16 Information Security Incident Management. It controls and checks how the company can act in alert security events and if the response workflows are configured in an effective way.
How can Indeed PAM help? Using the out-of-box mechanisms of event logging and video and text recordings of the sessions, Indeed PAM provides a quick way to understand the reason for the incident. If reacted immediately, the company can overcome the incident consequences with less damage.
Indeed Privileged Access Manager can simplify the ISO 27001 certification process because it is a ready-to-use instrument capable of mitigating threats associated with the misuse of privileged access and adjusting the inner cybersecurity plan per requirements. The software UI and architecture make the user experience smooth and easy.
If you want Indeed PAM to back you up during the certification, let us know by the email: email@example.com | <urn:uuid:904d95af-bde2-485f-a49d-90df54307408> | CC-MAIN-2022-40 | https://indeed-id.com/blog/iso-27001-compliance/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00103.warc.gz | en | 0.928189 | 861 | 2.53125 | 3 |
There are many ways you can empower your team. Managers who coach and nurture their team and provide different levels of autonomy are successful in achieving this.
All teams require a leader to manage the team’s direction based on established goals. The team requires coaching along the way to develop the strategy to reach and surpass these goals.
This week we are going to look at three ways:
Setting clear direction is part of the ever-challenging element of human interaction called “communication”. Most of the time, there seems to be a misunderstanding about the direction or what a person was supposed to do or not supposed to do.
Take the time as a leader to provide the necessary details outlined in simple terms to clearly provide direction to all members of your team. Ask for feedback to confirm your team fully understands the direction.
Stating the goals of the company and the team helps to empower your team. The goals must also be clear and well defined. They should be measurable, challenging and yet attainable. Provide everyone with the necessary metrics so your team knows where they are along the way.
Leaders develop a plan to focus on these goals. Leaders provide the direction and support as required. They help all members to successfully attain their goals.
Accountability for each team member is essential. Without accountability to the team leader, most teams accomplish very little. Defining the focus of what everyone is responsible for and providing the guidelines on what this responsibility includes will empower your team.
Implementing these three ways will result in your team being empowered to make decisions with confidence.
How do you empower your Team? | <urn:uuid:43ab30b6-e137-4d3f-97c3-8e17813cfd27> | CC-MAIN-2022-40 | https://www.alphakor.com/blogs/leadership/three-ways-to-empower-your-team/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00103.warc.gz | en | 0.969929 | 340 | 2.625 | 3 |
IoT Communication States
The concept of IoT has two distinct states and many forms. The forms of IoT devices include wearable devices, stayable devices and portable devices. Each of these devices has a state related to the information it has, acquires or produces. Most IoT devices exist in one of two states: active or passive. Active IoT information flows from the device as a timed, threshold-triggered or constant stream.
A passive device is one that you connect to and then receive information from. In the world of the internet this is often called push and pull data. Some IoT devices can also have a passive mode with an active trigger, where meeting a specific threshold pushes the device from passive to active broadcast mode.
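The push/pull distinction and the threshold trigger described above can be sketched as a toy model. All names and thresholds here are illustrative, not any particular product’s API:

```python
class IoTDevice:
    """Toy model of the two IoT communication states: passive (pull)
    and active (push), with a threshold that flips passive to active."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.mode = "passive"      # start in pull (passive) mode
        self.reading = 0.0

    def sense(self, value):
        """Record a new sensor reading; crossing the threshold
        pushes the device into active (broadcast) mode."""
        self.reading = value
        if value >= self.threshold:
            self.mode = "active"

    def poll(self):
        """Passive (pull): a client connects and asks for the data."""
        return self.reading

    def broadcast(self):
        """Active (push): the device streams data on its own,
        here only once the trigger has fired."""
        if self.mode == "active":
            return f"ALERT: reading={self.reading}"
        return None

device = IoTDevice(threshold=75.0)
device.sense(40.0)
print(device.poll())        # pull works in either state
print(device.broadcast())   # None: still passive
device.sense(80.0)          # threshold crossed: passive -> active
print(device.broadcast())   # ALERT: reading=80.0
```

The same shape covers all three device forms (wearable, stayable, portable); only the sensor behind `sense()` changes.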
As we go forward into the age of IoT, there are other things that will modify the state of an IoT device that are ultimately very interesting. Today there are laws in many US states that prohibit the use of hand-held cellular devices while driving. More and more cars sold today come equipped with an integrated car phone or speaker system that allows you to operate your cellular phone hands free. One of the big things these initial car phones lack, however, is a clean handoff. The handoff can be done, but it is a manual process: on your phone you have to deselect the car phone and select either the speaker or the standard inputs/outputs of your cellular device. Some cars do support cutting the Bluetooth connection, but they do not allow the transfer to a home or office audio system. It isn’t just the handoff; there just isn’t a clean way to say, “there is a better audio system, pass my call to that.”
Secondly, we need the concept of picking the best available device. Imagine pulling into your driveway and switching your conversation, music or audio book to the nearest available in-home resource, be that a Wi-Fi enabled wall speaker or some other form of in-home audio device. You never have to stop talking; your device just pops up a dialog saying you are near a speaker you’ve used before, asks whether you want to switch to this new audio, and then shuts off the old audio (the car).
Beyond that simple handoff there are a number of other things coming that are game changing. Today you can buy a Wi-Fi hard drive that connects to specific software on your device. As we move further into the independent-device age of IoT, tomorrow we will have the ability to connect to our homes, and not just for a phone conversation; we will be able to do everything. What if, when you walked into your house, your car, once in the garage, started backing up all its information?
From telemetry (already available using various add-on products) to settings and the music you listened to while driving, a listing of all the calls you had in the car and the corresponding numbers is created. Your cellular device, wearable and other portable IoT devices would also start backing up their data from that day. I call this concept continuation: keep my conversation going on the best available device. You could even have console lighting that changed color, notifying you at a glance of the status of all the backups: all LEDs red means data is still being pulled; all LEDs green means you are good to go. Backup as an IoT service for all devices. These represent active connections that can be added to both your devices and your home: the active device connecting to your car, and the active device backing up that data.
Active and passive IoT devices are here to stay. From less than 2 billion total devices in 2013 to a possible 50 billion devices by 2020, the market continues to expand. Active and passive connections will abound. From weather stations to systems that monitor the air quality in your home, IoT devices will seek connection. With personal presence devices and the concept of continuation, you will be able to seamlessly transition from driving to being home without changing anything for your conference call. With a self-driving car you will be able to have a video conference that transitions as you step out of the car and into your home. Now can someone help me find my car keys?
By Scott Andersen
Scott Andersen is the managing partner and Chief Technology Officer of Creative Technology & Innovation. During his 25+ years in the technology industry Scott has followed many technology trends down the rabbit hole. From early adopter to last person selecting a technology Scott has been on all sides. Today he loves spending time on his boat, with his family and backing many Kickstarter and Indiegogo projects. | <urn:uuid:2409399b-8fff-4c50-a09e-fb32ce36040d> | CC-MAIN-2022-40 | https://cloudtweaks.com/2015/08/passive-vs-active-iot-communication-states/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00304.warc.gz | en | 0.961614 | 953 | 3.0625 | 3 |
Technology that aids sustainability needs immediate green intervention. Here’s how you can help.
Businesses are constantly innovating towards a better tomorrow—to compete, win customers, and expand into new territories. However, to exist and grow in tomorrow’s world, organizations must adopt sustainability. Technology has played a pivotal role to ease the adoption of sustainability through the digitalization of value chains. However, the proliferation of technology latently contributes to the carbon footprint. That carbon contribution is the unnoticed problem we need to fix.
Zoho Corp CEO Sridhar Vembu, in an interview on CXO Talk, breaks sustainability into two dimensions. The first one is the green dimension, where he cites an example of Zoho Corporation’s offices predominantly being powered by renewable sources to promote sustainability. The second dimension is to move corporate facilities to smaller towns where the office commute and the personal consumption of resources decreases.
Technology is omnipresent today. It is evident in our mobile devices; in the apps we use to book services, purchase goods, and gain knowledge; and for various purposes in the organizations we work for. When documents are digitized, we might feel an ounce of eco-friendliness resulted from the process. However, the IT industry might account for up to 3.9% of the Earth’s carbon emissions. The High-Performance Computing (HPC) sector, which includes artificial intelligence (AI), machine learning (ML), and blockchain technologies, was valued at USD 37.8 billion in 2020, which is growing at a phenomenal compound annual growth rate (CAGR) of 5.5% and is predicted to become a USD 49.4 billion industry by 2025. A University of Massachusetts study found that training AI models for natural language processing (NLP) can generate carbon emissions equivalent to 300 round-trip flights between New York and San Francisco. These compute-intensive technologies may have hazardous environmental repercussions.
While some of the tech organizations have begun to opt for green energy sources to combat carbon emissions, the industry is still primarily fossil-fuel driven. For every action on the internet, there are data centers and a web of optic cables working in tandem, including new-age warehouses and data centers that constitute almost 33% of the information and communication technology (ICT) industry’s carbon emissions.
There’s a lot of discussion around sustainability in industries that produce tangible goods and services, such as food, beverages, and other fast-moving consumer goods; fashion; hospitality; and manufacturing industries. But what about propagating eco-friendliness in the largely intangible IT industry?
There has been a massive adoption of automation across various tech fields. Fostered by digitalization, some of these actions are attributed to sustainability. Digitalization is often associated with eco-friendliness because of the ability of an advanced digital device to perform innumerable chores, superseding a multitude of tedious manual processes.
To put things into perspective, the fashion industry has been able to achieve profits through AI-based tools that facilitate virtual fittings for customers. As a result, the carbon footprint can be curbed by 40% through the use of AI. While AI reduces carbon emissions through increased efficiency in inventory management, AI itself has an environmental impact. As the fashion industry aims to adopt a circular economy, the ignorance of AI’s contribution to carbon emissions hinders the goals of sustainability the industry wants to achieve, since the energy consumed by the AI-based system might be obtained from a non-renewable source.
Achieving sustainability is like a game of whack-a-mole. Although they aid sustainability, AI-based systems process large amounts of data, which requires immense amounts of energy to power data centers. As we previously saw, the comparison of emissions generated by the training of AI models to the magnitude of the aviation industry’s greenhouse gas emissions compels us to take action towards IT sustainability. Google has emerged as a leader in this aspect: the company was the first to go carbon neutral, in 2007, committing to utilize renewable energy sources.
We have seen how far IT has come along when HPC aids sustainability, while we have also uncovered the silent problem of how the IT industry contributes to the carbon footprint. However, the good news is that this can be rectified if organizations collectively start implementing changes to adopt a sustainable approach.
Sustainability can be achieved within an organization with the help of the following steps:
Internal audit – With the help of surveys, evaluate the current processes followed across the value chain to check for any unsustainable practices that may contribute to the carbon footprint. For example, assess the source of power, turn off unused appliances, and optimize data centers.
Green upgrades – Track the consumption of resources across the organization. This can also include upgrades that range from the utilization of energy-efficient equipment to implementing eco-friendly software within the organization.
Green power sources – Look for renewable sources of energy that can power the company. A few companies are strategically located close to power plants that supply renewable energy, and some can even install solar power units within their premises. Location is key since renewable energy sources are not available in abundance yet.
Operational efficiency – Data centers must be managed efficiently to consume less storage and power. Operational efficiency of data centers can be achieved through the identification of suitable data center management software that helps with the timely analysis of bottlenecks, performance, and organization of data to ensure optimum usage.
Data center temperature monitoring – Maintain a cool ambiance in your data centers to offset the heat the data centers generate. Maintaining a cool environment also involves electricity consumption. Smart temperature control devices must be placed to monitor the temperature. Using temperature sensors, the cooling devices can be turned on or off depending on the desired temperature level to be maintained. For example, the data center might be situated in a place with suitable weather conditions and where the more common challenge of maintaining a cool environment is typically not a concern. These data centers will require temperature control systems only during hotter seasons or when the ambient temperature increases; this can be ascertained through sensors.
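The sensor-driven on/off control described in the temperature-monitoring step can be sketched as a small control loop. The setpoint, hysteresis band, and readings below are illustrative values, not data-center guidance:

```python
def cooling_action(temp_c, cooling_on, setpoint=24.0, hysteresis=2.0):
    """Decide whether the cooling unit should run, using a hysteresis
    band so the unit is not toggled on every small fluctuation."""
    if temp_c >= setpoint + hysteresis:
        return True          # too warm: turn (or keep) cooling on
    if temp_c <= setpoint - hysteresis:
        return False         # cool enough: turn cooling off
    return cooling_on        # inside the band: keep the current state

# Simulated sensor readings arriving over time
state = False
for reading in [23.0, 26.5, 25.0, 21.5]:
    state = cooling_action(reading, state)
    print(reading, "->", "cooling ON" if state else "cooling OFF")
```

The hysteresis band is the part that saves energy: without it, readings hovering around the setpoint would switch the cooling unit on and off continuously.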
The responsibility does not stop here. Consider the example of ordering food through a delivery app that has glitches. It is going to consume more battery, and might even cause the phone to crash or hang. If the app has fewer glitches, the energy consumption could be far less. It’s evident that tech companies should roll out sustainable software upgrades to their users to guarantee the performance of the device and help curb device obsolescence or slowdowns. A recent study predicted that if the unsustainable trends in the IT industry continue at the current rate, the industry will constitute 14% of the global carbon emissions by 2040.
To extend sustainability to users, the following can be performed:
Green coding – This refers to programming codes written to produce algorithms that reduce energy consumption during the use of the software. Green coding should result in the simplification of processes to increase efficiency at the user’s end. This can reduce glitches, processing time, and energy consumption through superior performance.
Reduction of device obsolescence – This is deploying software upgrades that do not make devices obsolete due to slowdown and increased consumption of memory.
Customer reviews analysis – Customer reviews can be analyzed to find any hidden sustainability problems. For example, a customer review might talk about increased battery consumption due to a software upgrade in their device. This can mean that the software upgrade is not eco-friendly.
Customer education – How many times have we heard people say they never shut down their laptops? Customers often feel that they aren’t directly responsible for maintaining sustainability until they are made aware of the alarming impact of individual actions. Make customers environmentally conscious through all communication touchpoints to help them on their journey to eco-friendliness.
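One micro-illustration of the green-coding idea above is avoiding repeated expensive work. Caching is a single, narrow example of that principle (the `slow_lookup` delay here is a stand-in for any costly call, not a real API):

```python
import time
from functools import lru_cache

def slow_lookup(n):
    time.sleep(0.01)   # stand-in for an expensive operation (I/O, crypto, ...)
    return n * n

@lru_cache(maxsize=None)
def cached_lookup(n):
    # Each distinct input is computed once; repeats are served from memory.
    return slow_lookup(n)

start = time.perf_counter()
results = [cached_lookup(n % 5) for n in range(100)]  # only 5 distinct inputs
elapsed = time.perf_counter() - start
print(f"100 calls in {elapsed:.2f}s")  # roughly the cost of 5 calls, not 100
```

Less CPU time and fewer device wakeups per user action is exactly the kind of efficiency green coding aims for, multiplied across millions of installed devices.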
On the path to a net-zero environment, software giants like Microsoft, Google, and Infosys now account for every action they perform towards achieving a sustainable future. While Zoho Corporation powers its Indian offices with a 5MV solar power plant, its IT management division, ManageEngine has set up “Zoho Farms” near Austin, Texas. Through this innovative effort to curb carbon consumption, employees can work from the Wi-Fi enabled farmhouse as well as step outside to harvest crops for themselves and their families. Excess produce from the farm is contributed to a local food bank. Infosys claims that it is already 30 years ahead of the climate change target set by the Paris Agreement, and have been able to exceed the targets by optimizing energy sources and using renewable energy solar panels set up in its offices. It is important to keep in mind that in this continuous cycle of consuming and releasing energy, we need to immediately make green interventions for a cleaner tomorrow. | <urn:uuid:ef35914b-6681-4adc-b79c-089ee79051bc> | CC-MAIN-2022-40 | https://insights.manageengine.com/digital-transformation/the-unnoticed-sustainability-problem-in-tech/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00304.warc.gz | en | 0.941585 | 1,806 | 2.796875 | 3 |
As technology has improved in function and convenience, we seem to demand more and more of it at an increasing pace. Take mobile data as an example: 4G was introduced in 2009, and less than a decade later, there is high demand for 5G. Unfortunately, while 5G has been worked on for some time already, it isn’t likely that it will be commonly available anytime soon.
The technology being touted as the driving force behind 5G has quite a few practical issues, many of which may prove to be too much for the anticipated improvements to offset. Many of these issues are rooted in the proposed use of enhanced mobile broadband (eMBB) via millimeter wave (mmWave) and the inherent issues with this plan.
A big problem comes from the range of mmWave. Currently, 4G signals can reach anywhere between three and thirty miles, while mmWave can only reach about a third of a mile under ideal circumstances, one ninth of even the lower end of 4G's range. In order for 5G through mmWave to be successful, there would need to be some major infrastructure updates.
This has been addressed in the planning processes, as it is likely that the cell towers we are accustomed to today would instead be replaced by shorter-range femtocells. These femtocells would be approximately the size of a microwave oven, and could be added to existing pieces of infrastructure, like light poles, traffic signs, and even public transportation vehicles like buses. However, these open up the idea of implementing 5G to more complications.
For example, mmWave signals are incredibly easy to block, which is why there would need to be so many femtocells added to the existing infrastructure. When something as simple as an unfortunately positioned traffic sign can block a signal, signals need to be coming from more than one direction.
There is also the matter of bandwidth that needs to be addressed. Consider how much usage each femtocell would see – they just wouldn’t be able to network as efficiently as necessary for proper use. This would mean that the entire network of femtocells would also need to be connected via costly high-speed fiber cabling, which would be an expensive and time-consuming endeavor.
With cloud computing having become such a widely utilized tool, it only makes sense that the femtocell network would be managed via the cloud. By creating a virtual network in the cloud, software-defined networks (SDNs) and network function virtualization (NFV) could be leveraged to manage the 5G network. Trouble is, there are various kinds of SDNs and NFV, with no one standard. The Linux Foundation is working to change this, but this still isn’t an issue that will likely be resolved in the near future.
Regardless, 5G is going to happen – femtocells are inexpensive and, for all their faults, a potentially beneficial way to make it possible. Furthermore, people want better mobile bandwidth. The technology is just going to take some time to develop.
However, if you want to improve your business’ connectivity now, we can help. Give ExcalTech a call at (877) 638-5464. | <urn:uuid:c9c4c13e-7528-4bfa-b73d-b59311ba45e3> | CC-MAIN-2022-40 | https://www.excaltech.com/5g-is-still-going-to-take-a-while/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00304.warc.gz | en | 0.971647 | 657 | 3.21875 | 3 |
Should Scotland vote to become independent, a Rural Connectivity Commission will be set up with a view to considering how to improve issues such as mobile and broadband coverage in rural communities. The Scottish Government has announced the plans as part of a constitutional paper – Connecting Rural Scotland – and it is understood that, should the commission be formed, it will be ‘an expert body which will consider how to deliver a better deal for our rural communities and businesses while also ensuring clarity for industry and stability for investors’. With the paper setting out five areas where independence would improve rural connectivity, in the communications sectors it notes that in an independent Scotland the country would have the power to issue future spectrum licences and could include coverage obligations that ensure the ‘maximum availability of mobile telecoms throughout Scotland as a whole’. Further, the publication also suggests that ‘more flexible approaches for broadband that could extend digital services’ could be considered.
Deputy First Minister Nicola Sturgeon said of the plans: ‘Our rural communities make a very valuable contribution to Scotland’s economy and have huge potential to develop even further … With independence, we will have the powers to regulate these crucial services and to remove barriers which are holding back rural areas from achieving their full potential.’ | <urn:uuid:00a0b620-ffab-495d-8e60-6b0b42bc0883> | CC-MAIN-2022-40 | https://www.commsupdate.com/articles/2014/07/15/scotland-to-set-up-rural-connectivity-commission-if-it-declares-independence/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00304.warc.gz | en | 0.960842 | 255 | 2.609375 | 3 |
What is Data De-Identification?
Data de-identification is the process of removing Personally Identifiable Information (PII) from any documents or other media that also contain that person's Protected Health Information (PHI). De-identification is the fastest and simplest way to ensure compliance and identification security on methods of communication that could be accessed by the public or outsiders.
The biggest benefit is that the de-identified data (PHI with PII removed) can be stored in any system; the rules and code linked to this data don't need to adhere to HIPAA.
And in an age where everything is stored online, data de-identification is becoming increasingly important.
But what do you need to know about de-identifying data? How do you actually de-identify data?
Here Are 5 Important Facts to Know About Data De-Identification:
#1 - It’s Required Under HIPAA's De-Identification Standards for Health Information
The Health Insurance Portability and Accountability Act of 1996 (HIPAA) Privacy Rule requires organizations to de-identify their data to remain in accordance with the law.
That’s why so many healthcare organizations invest a considerable amount of time and money into data de-identification.
However, as you’ll learn below, this capability is useful for more than just healthcare information.
#2 - Data De-Identification Is Useful for More Than Just Personal Data
When you think of data de-identification, you probably think of personal records: someone’s data is separated from their other data, usually protected health information. This prevents them from being personally identified.
That’s the case with a lot of data de-identification processes. However, it’s not the case with all of them.
The truth is, data de-identification can be useful for more than just personal data, including some of the following circumstances:
- Businesses involved in statistical surveys (like industry surveys) may wish to have their data de-identified
- Mining companies may wish to de-identify the spatial location of mineral deposits
- Environmental protection agencies may want to de-identify data linked to endangered species
There are countless other examples where de-identified data can be useful.
In short, it’s a valuable way to protect more than just individual healthcare patients – it can be used across industries for all different types of benefits.
#3 - How To De-Identify Data from a Data Set
You know why data de-identification is important. But how do you actually de-identify data?
- Typically, it involves using intelligent software to remove identifiers (names, addresses, gender, date of birth, and other identifying information) from datasets.
- Sometimes that data is removed entirely; in other cases it is coded or encrypted. Some de-identification services also change data values or aggregate data to remove the personal connection.

But what if you want to reuse that data at a future point, like for inclusion in a future study? That's where de-identifying data can get tricky. In this situation, researchers need to walk through a minefield of legislation, policies, and ethical guidelines to ensure they're doing everything the right way.
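A minimal sketch of field-based removal plus free-text scrubbing is below. This handles only a few direct identifiers; a real de-identification pipeline must cover the full Safe Harbor list and far more robust text scanning. The field names and the sample record are hypothetical:

```python
import re

# Only a few direct identifiers; HIPAA Safe Harbor lists 18 categories.
DIRECT_IDENTIFIERS = {"name", "address", "phone", "email", "dob"}
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def deidentify(record):
    """Drop known identifier fields and scrub SSN-like strings
    from the remaining free-text values."""
    clean = {}
    for key, value in record.items():
        if key.lower() in DIRECT_IDENTIFIERS:
            continue                      # remove the field entirely
        if isinstance(value, str):
            value = SSN_RE.sub("[REDACTED]", value)
        clean[key] = value
    return clean

patient = {
    "name": "Jane Doe",
    "dob": "1984-02-14",
    "state": "OK",                        # state-level geography may be kept
    "notes": "SSN 123-45-6789 on file; flu symptoms reported",
}
print(deidentify(patient))
# {'state': 'OK', 'notes': 'SSN [REDACTED] on file; flu symptoms reported'}
```

Note how the free-text `notes` field is the hard part: identifiers hiding in prose are exactly why purpose-built platforms exist for this job.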
#4 - HIPAA’s Expert Determination Versus Safe Harbor Method: Which Should a Covered Entity Use?
Up above, we mentioned that HIPAA allows a covered entity (health care provider, health care clearinghouse, or health plan) to de-identify data in one of two ways: Expert Determination and Safe Harbor.
Expert Determination involves applying statistical or scientific principles to the data. This ultimately leads to a very small risk that the anticipated recipient could identify the individual.
Safe Harbor, on the other hand, requires the removal of 18 types of identifiers (like all geographic subdivisions below the state level). Under Safe Harbor, the covered entity must also have no actual knowledge that the residual information could be used to identify the individual.
#5 - A Better Solution: How to De-Identify Your Data and Eliminate Identification Risks
BIS offers a solution called Grooper that can help companies like yours with de-identifying information and data.
The technology is particularly helpful for health information companies (HIPAA compliance), colleges and universities (FERPA compliance), financial institutions (PCI standards), and government/public records (SSNs).
Grooper is an information processing platform that offers de-identification benefits for your business.
Grooper has the same functionality offered by legacy document capture platforms – but with new features that make it even more useful.
Worried about a data breach? Eliminate identification risks - it's easy with Grooper.
Learn how we empowered fast and easy data de-identification at a large nation wide credit union. Download our case study: | <urn:uuid:c742cf0b-827b-46cc-924a-0a424d0a1bf2> | CC-MAIN-2022-40 | https://blog.bisok.com/general-technology/5-things-you-need-to-know-about-data-de-identification-and-why-its-so-important | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00304.warc.gz | en | 0.894812 | 1,023 | 3.0625 | 3 |
High availability (HA) refers to a system or component that is operational without interruption for long periods of time. High availability (HA) is measured as a percentage, with a 100% percent system indicating a service that experiences zero downtime.
High Availability (HA) Overview
When setting up two Palo Alto firewalls as an HA pair, it is essential that both HA peers run the same PAN-OS version. High availability (HA) minimizes downtime and ensures that a secondary firewall is available in the event that the active firewall fails. Dedicated HA ports on the firewalls are used to synchronize data, object and policy configurations and to maintain state information with the passive firewall. Some firewall-specific configuration is not synchronized between peers, such as the management interface IP address, administrator profiles, log data and the Application Command Center (ACC).
High Availability Modes:
There are two modes of firewall deployment in HA pair.
Active/Passive: In this mode, one firewall actively manages traffic while the other is synchronized and ready to transition to the active state if a failure occurs. Both firewalls in the HA pair share the same configuration settings, and only one actively manages traffic at a time. When the active firewall fails, the passive firewall transitions to the active state and takes over as the active node. Active/passive HA is supported in virtual wire, Layer 2 and Layer 3 deployments.
Active/Active: In this mode, both firewalls process traffic and work synchronously to coordinate session setup and session ownership. Each firewall maintains its own routing table, and the two synchronize with each other. Active/active HA is supported in virtual wire and Layer 3 deployments.
When a failure occurs in the network, where one firewall goes down and the other peer takes over its role, the event is called a failover. A failover is triggered when hello messages or heartbeats go unanswered, a monitored physical link goes down, or an ICMP ping to a monitored destination fails. Below is an explanation of each parameter:
- Heartbeat Polling and Hello Messages: Hello messages and heartbeat polling are used to verify that the peer firewall is alive and operational. Hello messages are sent from one peer to the other at the configured interval.
- Link Monitoring: Physical interfaces to be monitored are grouped into a channel group and their state (link up or link down) is monitored.
- Path Monitoring: Path monitoring uses ICMP to verify reachability of the IP address. The default interval for ping is 200ms.
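The three failover triggers above can be sketched as a toy decision function. This is an illustration of the logic only, not PAN-OS code, and the intervals are made-up values:

```python
class HAMonitor:
    """Toy failover monitor combining heartbeat age, monitored
    link-group state, and path (ICMP) reachability."""

    def __init__(self, hello_interval_ms=8000):
        self.hello_interval_ms = hello_interval_ms
        self.last_hello_ms = 0          # timestamp of the last hello from the peer
        self.links_up = True            # state of the monitored link group
        self.path_reachable = True      # result of the last ICMP path check

    def should_failover(self, now_ms):
        # Declare the peer dead if three hello intervals pass with no message.
        heartbeat_lost = (now_ms - self.last_hello_ms) > 3 * self.hello_interval_ms
        return heartbeat_lost or not self.links_up or not self.path_reachable

mon = HAMonitor()
print(mon.should_failover(now_ms=10_000))   # hellos recent, all checks pass: False
mon.links_up = False                        # monitored link group goes down
print(mon.should_failover(now_ms=10_000))   # True: trigger failover
```

Any one failing check is enough to trigger a failover, which is why each monitor is configured independently on the firewall.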
Device Priority and Preemption
Firewalls in a high availability (HA) pair can be configured with a device priority value to indicate a preference for which firewall should take the active role. Enable preemptive behavior on both firewalls and configure a device priority value on each. The firewall with the lower numerical value, and therefore higher priority, is designated as active; the other acts as the passive firewall.
Floating IP Address and Virtual MAC Address
In an active/active HA deployment, a floating IP address moves from one HA firewall to the other if a link or firewall goes down. The firewall responds to ARP requests for the floating IP with a virtual MAC address. Floating IP addresses are recommended when Layer 3 redundancy functionality such as that provided by the Virtual Router Redundancy Protocol (VRRP) is required. They can also be used to implement VPNs and source NAT.
In an active/active HA deployment, ARP load-sharing allows the firewalls to share an IP address and provide gateway services. Use ARP load-sharing when there is no Layer 3 device between the firewall and end hosts.
In an active/active HA deployment, firewalls use dynamic routing protocols to determine the best path. In such a scenario, no floating IP address is necessary. If a link failure or any topology change occurs, the routing protocol (RIP, OSPF, or BGP) handles the rerouting of traffic.
In high availability (HA), two firewalls are combined in a group and their configuration is synchronized to prevent a single point of failure in the network. A heartbeat connection between the firewall peers sends keepalive signals to ensure failover in the event that a peer goes down. Deploying two firewalls in an HA pair provides redundancy and allows you to ensure business continuity with 99.99% uptime.
In the new report from CA Technologies Internet Security team, researchers identify more than 400 new families of threats--led by rogue security software, downloaders and backdoors. Trojans were found to be the most prevalent category of new threats, accounting for 73 percent of total threat infections reported around the world. Importantly, 96 percent of Trojans found were components of an emerging underground trend towards organized cybercrime, or "Crimeware-as-a-Service."
"Crimeware isn't new, but the extent to which a services model has now been adopted is amazing," said Don DeBolt, director of threat research, Internet Security, CA Technologies. "This new method of malware distribution makes it more challenging to identify and remediate. Fortunately, security professionals and developers are diligent about staying one step ahead of these cyber criminals."
The most notable threats and trends of 2010 to-date include:
-- Rogue or Fake Security Software: Also known as "scareware" or Fake AV, this category of malware continued its dominance in the first half of 2010. Google became the preferred target for distribution of rogue security software through Blackhat SEO, which manipulates search results to favor links to infected website domains. Rogue security software displays bogus alerts following installation and coerces users into paying for the fake product/service. An interesting trend observed recently is the prevalence of rogue security software cloning, whereby the software employs a template that constructs its product name based on the infected system's Windows operating system version, further enhancing its perceived legitimacy.
-- Crimeware: 96 percent of Trojans detected in H1 2010 function as components of a larger underground market-based mechanism that CA Technologies Internet Security has termed "Crimeware-as-a-Service." Crimeware essentially automates cybercrime, collecting and harvesting valuable information through large-scale malware infections that generate multiple revenue streams for the criminals. It is an on-demand, Internet-enabled service that highlights cloud computing as a new delivery model. This crimeware is primarily designed for data and identity theft, in order to access users' online banking services, shopping transactions, and other Internet services.
-- Cloud-Based Delivery: Research revealed cybercriminals' growing reliance on cloud-based web services and applications to distribute their software. Specifically, cybercriminals are using web and Internet applications (e.g. Google Apps), social media platforms (e.g. Facebook, YouTube, Flickr, and Wordpress), online productivity suites (Apple iWork, Google Docs, and Microsoft Office Live), and real-time mobile web services (e.g. Twitter, Google Maps, and RSS readers). For example, recent malicious spam campaigns pose as email notifications targeting Twitter and YouTube users, luring targets to click on malicious links or visit compromised websites. The Facebook ecosystem has been an attractive platform for abusive activity including cyberbullying, stalking, identity theft, phishing, scams, hoaxes and annoying marketing scams.
-- Social Media as the Latest Crimeware Market: CA Technologies recently observed viral activities and abusive applications in popular social media services such as Twitter and Facebook - the result of a strong marketing campaign in the underground market. CA Technologies Internet Security has observed a black market evolving to develop and sell tools such as social networking bots. Underground marketers promote new social networking applications and services that include account checkers, wall posters, wall likers, wall commenters, fan inviters, and friend adders. These new crimeware-as-a-service capabilities became evident in the latest Facebook viral attacks and abusive applications that are now being widely reported.
-- Spamming Through Instant Messaging (SPIM): One new vector used to target Internet users is SPIM, a form of spam that arrives through instant messaging. CA Technologies Internet Security observed an active proliferation of unsolicited chat messages on Skype.
-- Email Spam Trends: When examining email spam trends, the Internet Security team tracked the usage of unique IP addresses in an effort to determine the source of the most prevalent spam bot regions. Based upon its observation, the EU regions ranked as the number one source of spam at 31 percent, followed by 28 percent in Asia Pacific and Japan (APJ), 21 percent in India (IN), and 18 percent in the United States (US).
-- Mac OS X Threats: Attackers' interest in the Mac continued during the first half of 2010; the ISBU witnessed Mac-related security threats including traffic redirection, the Mac OS X ransomware 'blocker', and the notable spyware 'OpinionSpy'.
CA researchers continue to urge all users to be security-aware when accessing information via the Internet and have provided the following security tips to help ensure safe computing, including:
1. Do NOT open email from people you don't know. Think twice and verify before clicking a URL or opening an attachment.
2. Implement a strong password that you can remember.
3. When conducting online banking or financial transactions, make sure your browser connection is secure.
4. Encrypt online communication and confidential data.
5. Back up your important data. Keep a copy of all your files and store them separately.
6. Be cautious about instant messaging. Avoid chatting with people you don't know.
7. Protect your identity while enjoying online social networking activities. Be wary of clicking links or suspicious profiles. Be aware when installing extras such as third-party applications; they may lead to malware infection, or attackers could use them to steal your identity.
8. If you are using Adobe PDF Reader, prevent your default browser from automatically opening PDF documents.
9. Check for and install security updates regularly.
The CA Technologies 2010 State of Internet Security report is intended to inform consumers and businesses of the newest and most dangerous online threats, forecast trends and provide practical advice for protection. The analysis provided is based on incident information from the CA Technologies Global Security Advisor team, submitted by the company's Internet security customers and consumers, as well as publicly available information. For access to the full report and additional tips, please visit: http://www.ca.com/files/SecurityAdvisorNews/h12010threatreport_244199.pdf
The CA Technologies Global Security Advisor team has delivered around-the-clock, dependable security expertise and trusted security advice to the world for more than 16 years. Providing a complete threat management resource, the team is staffed by industry-leading researchers and skilled support professionals. It offers free security alerts, RSS feeds, PC scans and a blog regularly updated by the worldwide team of researchers. In March 2008, CA Technologies and HCL America announced a partnership agreement. As part of this agreement, HCL provides research, support and product development for CA Technologies' entire portfolio of threat-related products for home, small and medium businesses, and enterprises.
About CA Technologies
CA Technologies (Nasdaq: CA) is an IT management software and solutions company with expertise across all IT environments, from mainframe and distributed to virtual and cloud. CA Technologies manages and secures IT environments and enables customers to deliver more flexible IT services. CA Technologies' innovative products and services provide the insight and control essential for IT organizations to power business agility. The majority of the Global Fortune
Using some DIY attitude and machine learning tools, BLS figured out how to automate 85% of its survey workload.
Implementing machine learning and artificial intelligence has radically transformed the Bureau of Labor Statistics’ productivity, freed up its workforce to perform less menial tasks and has resulted in more accurate survey analysis.
The fact-finding agency processes hundreds of thousands of surveys each year to provide the government and public with essential statistical data about society and the economy. In the past, converting the text-heavy records into different codes that make sense of the data required BLS workers to engage in tedious manual labor that didn’t always result in the most accurate outcomes. But automating the once manual processes has had lasting impacts across BLS.
“This all actually worked out much better than we expected,” Alex Measure told Nextgov. Measure was originally hired as an economist at BLS, but over the last eight years he’s led some of the agency’s efforts around integrating machine learning to complete tasks previously done by hand.
For example, each year the agency conducts the Survey of Occupational Injuries and Illnesses, which collects hundreds of thousands of written descriptions regarding work-related afflictions. In the past, Measure said humans would spend countless hours converting the key pieces of text from each survey into codes so that BLS could discern the data. The results would provide insights like how many U.S. janitors were injured on the job annually, or what the most common injuries might be.
“As you can imagine, when you are collecting about 300,000 of these each year and having people read through them by hand and code them by hand, it really adds up to being a lot of work,” he said.
The bureau began exploring different ways to use computers to automate the work.
“The idea in supervised machine learning is you take a bunch of examples of these narratives that have been coded, and then you try to get the computer to learn from these previously coded examples. Ideally, it will learn how to perform this task on its own if you give it enough examples and the right algorithm to learn,” Measure said.
His team realized early on that they could use open-source software to automate their processes and they immediately observed positive results. Not only were they able to train the original systems in just a few weeks, but the original computer-coded results turned out to be more accurate than the results from trained humans. Earlier this year, BLS started switching to “deep neural network models,” which allow for more layers of machine learning and have made “significantly fewer errors” than humans and the earlier systems.
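The supervised-coding idea Measure describes — learn from previously coded narratives, then assign codes to new ones — can be sketched in a few dozen lines. The example below is a minimal pure-Python Naive Bayes classifier, not BLS's actual pipeline (the agency now uses deep neural networks), and the narratives and codes are made up for illustration:

```python
import math
from collections import Counter, defaultdict

class NarrativeCoder:
    """Tiny multinomial Naive Bayes for assigning codes to injury narratives."""

    def __init__(self):
        self.class_counts = Counter()               # how many examples per code
        self.word_counts = defaultdict(Counter)     # word frequencies per code
        self.vocab = set()

    def train(self, examples):
        """Learn from (narrative, code) pairs that were coded by hand."""
        for text, code in examples:
            words = text.lower().split()
            self.class_counts[code] += 1
            self.word_counts[code].update(words)
            self.vocab.update(words)

    def predict(self, text):
        """Assign the most probable code to a new narrative."""
        words = text.lower().split()
        total = sum(self.class_counts.values())
        best, best_lp = None, float("-inf")
        for code, count in self.class_counts.items():
            lp = math.log(count / total)            # class prior
            denom = sum(self.word_counts[code].values()) + len(self.vocab)
            for w in words:                          # Laplace-smoothed likelihoods
                lp += math.log((self.word_counts[code][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = code, lp
        return best

coder = NarrativeCoder()
coder.train([
    ("slipped on wet floor and fell", "FALL"),
    ("fell from ladder onto floor", "FALL"),
    ("cut hand on box cutter", "CUT"),
    ("laceration from knife while cutting", "CUT"),
])
```

With even this toy model, `coder.predict("worker fell on wet floor")` returns `"FALL"`; given hundreds of thousands of coded examples, the same learn-then-assign pattern is what lets an algorithm take over the bulk of routine coding.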
“So the impact is that we’ve automated a very large portion of this sort of routine coding work,” Measure said. “And we are now automatically assigning roughly 85% of our primary codes using these algorithms, so that’s freed up our staff to spend more time on other important things.”
Measure said it’s also enabled BLS to enhance its results because now workers have more time to reach out to companies they need responses from and review the coded outcomes to improve the data they produce.
“So far, I think that people have seen this as a tool that helps them do their job better and I think it’s resulted in big quality improvements that have come from that,” he said.
The bureau is now applying AI to many more projects and Measure said he’s excited to watch the tech continue to evolve and for some of the latest projects in the works. He also believes BLS’ learnings are applicable for other agencies.
Because "some of the best tools" his team used were open source, released for free by Google and Facebook, other agencies that process big data don't need to spend tens of thousands of dollars on proprietary software.
He also noted that websites like Coursera and edX, which he used to learn the tech, offer free or cheap training tools that anyone can adopt to master machine learning.
"I think the most interesting aspects of this to me are actually how accessible the technology is," Measure said. "Obviously economists are not people who are trained in AI and so it was really interesting to me that I could learn these skills relatively easy and I think a lot of these skills are closely related to skills that people already have in other disciplines."