What do Bill O’Reilly, Harvey Weinstein, Roy Moore, Louis C.K., Michael Oreskes, Kevin Spacey and many others have in common (other than all being male)? Certainly not political beliefs or professional expertise. Whether left or right leaning in the political arena, or focused on entertainment, journalism or government service, all have been accused of, and in some cases admitted to, sexual harassment or sexual assault.
Sexual misconduct has been a cancer on society for as long as history has been recorded. Have we reached a tipping point? Have the “#MeToo” movement and the press coverage led to a fundamental shift in our thinking, and will they permanently affect behavior? These are great questions that I am not qualified to answer. Only time will tell.
What I am prepared to say is that our approaches to sexual harassment awareness, and in particular the training programs focused on increasing awareness of sexual harassment and reducing its incidence, are nearly all sub-optimal.
Computer-Based Sexual Harassment Awareness Training is Sub-Optimal
Whether developed for government or corporate entities large and small, nearly all sexual harassment awareness training programs are classroom or computer-based. They involve having individuals read text, or watch slideshows and videos that define sexual harassment and the behaviors that are appropriate or inappropriate. They describe power differentials that often exist in government or the corporate world and how that impacts the appropriateness of interpersonal interactions. They might even include video interactions so that individuals can “see” sexual harassment in action from a third-person perspective.
In all of these cases, the nature of the training content and the training procedures is such that they recruit the cognitive skills learning system in the brain. This system learns through observation, mimicry and mental repetition. It is an excellent brain system for learning hard skills such as (a) learning new software, (b) becoming proficient with rules and regulations, or (c) learning a new programming language, but it is less effective for learning soft skills such as appropriate interpersonal interactions and real-time communication, or for training true empathy for another’s situation.
Appropriate interpersonal interactions and real-time communication skills are best learned by the behavioral skills learning system in the brain that learns by doing and receiving immediate corrective feedback. Physical repetitions, not mental repetitions, are key. Genuine empathy for another’s situation is best trained through a first-person experience in which you “are” that other person.
The Promise of VR for Sexual Harassment Awareness Training
VR offerings currently come in two general types. One takes a first-person perspective and allows you to literally “walk a mile” in someone else’s shoes. This approach involves passive, observational learning, much like computer-based training, but the feeling of immersion, and more importantly the feeling that you are “someone else,” is powerful. I believe that this offers one of the most effective tools for enhancing emotional intelligence and helping learners understand at a visceral level what it is like to be in a position of weakness and to be the direct target of sexual harassment. There is no better way for a middle-aged Caucasian man to “feel” the prejudice or sexual harassment that a young African-American woman might experience, or to “feel” the discrimination that many members of the LGBT community feel, than to put that man in a first-person VR environment where he is that other individual. Of course, the training content and the training scenarios must be realistic to be effective, but experts in this sector know how to create high-quality content. In my view, first-person VR experiences offer a great first step toward reducing the incidence of sexual harassment by increasing genuine empathy and understanding.
Although these passive, observational VR experiences offer a great tool for enhancing sexual harassment awareness, they are not focused specifically on behavior. The second type of VR offering, interactive VR, addresses this problem directly. Interactive VR platforms incorporate realistic interpersonal interaction and real-time communication into the mix. The learner can be placed in situations involving sexual harassment in which virtual agents react to the learner’s behavior in real-time. In other words, learners learn by doing and by receiving immediate feedback regarding the correctness of their behavior. This approach optimally recruits the behavioral skills learning system in the brain, which is the ideal system for reducing the incidence of inappropriate behaviors. Without taking a deep dive into brain neurochemistry, suffice it to say that behavioral skills learning is best when the brain circuits that initiated the behavior are still active when feedback is received. If the action is appropriate, then that behavior will be strengthened, and if the action is inappropriate, then that behavior will be weakened. Although there are clearly ethical limits to the intensity of the VR environments that one can be compelled to experience, interactive VR experiences with even mild levels of harassment will be effective in changing behavior.
Interactive VR approaches may also be useful in extreme cases as a rehabilitation procedure. Individuals already identified as sexual harassers by previous actions or complaints may benefit significantly from this type of rehabilitative behavioral therapy. In these situations, it may be ethically appropriate to increase the intensity of the interactive VR environments so that real changes in behavior will occur.
Sexual harassment is a serious problem in our society. In many cases, the individual is fully aware of their behavior and simply does not care. In such cases, no training, whether computer-based or VR, will likely have any effect. These are situations involving a conscious bias, and behavioral change may be difficult. It is in the cases of unconscious bias, where the individual is less aware of the impact of their behavior, that there is hope. The point of this article is not to claim that all sexual harassment can be eradicated. That is unrealistic, wishful thinking. That said, I believe that we can reduce the incidence of sexual harassment through effective training. I believe that the science of learning suggests that VR may provide a better tool for achieving this goal than computer-based training.
I am not an expert on sexual harassment, but I do understand the psychology of behavior and behavior change. Although traditional computer-based approaches do their best to define, describe and demonstrate sexual harassment behavior, they target the cognitive skills learning system in the brain. This system is ideal for hard skill training, but not for soft skill training, such as the training needed to reduce the incidence of sexual harassment. I believe that VR holds significant promise as a training tool for reducing the incidence of sexual harassment. By combining passive, observational first-person VR experiences that allow one to see the world through someone else’s eyes and experience sexual harassment first hand, with interactive VR experiences that allow one to engage in interpersonal interaction and real-time communication focused on rewarding appropriate behaviors and punishing inappropriate behaviors, we might be able to shrink this cancer on our society. The science is clear, and it suggests that this VR approach has merit.
“Fascists of the future will call themselves anti-fascists.” — attributed to Winston Churchill
In his 1995 essay “Ur-Fascism,” Umberto Eco compiled a list of 14 signs of fascism. Draw your own conclusions.
1. Cult of tradition
Praising the wisdom and traditions of the ancestors to the level of a folk cult consolidates traditional views, orders and foundations as undeniably righteous. Accordingly, any development of knowledge and beliefs, and any evolution of the mentality and value system, are a priori considered erroneous and harmful.
2. Denial of modernity
Traditionalists, as a rule, perceive new technologies and trends with hostility, seeing them as a challenge to traditional and spiritual values. And although both the Italian fascists and the German Nazis were proud of their industrial achievements, their entire ideology was based on the denial of the modern world as the product of Western capitalist plutocracy and moral decay.
3. Action for the sake of action
The fascists of the 1930s treated intellectuals with contempt, because intellectual thinking questions the “why” and “what for” of any action. Fascists stubbornly refuse to justify their actions, because they see beauty in the action itself, even if there is no rational explanation for it.
4. Disagreement = betrayal
Fascism does not allow pluralism of opinions. Since the truth is one for traditionalists, everyone who tries to question it is enemies and traitors.
5. Xenophobia (“ours” and “not ours”)
The division into “us” and “them,” and hostility to everything alien, foreign, incomprehensible, unusual or abnormal, is fertile ground for the emergence of fascism, as are all manifestations of intolerance: racism, anti-Semitism, antigypsyism, homophobia, contempt for people with mental disabilities, and hostility to foreign influence.
6. Irritability of the masses
It is no coincidence that fascist movements have always gained particular popularity when wide sections of society experience difficult times: cataclysms, economic stagnation and national humiliation. Resentment and anger among the masses make them susceptible to aggressive appeals.
7. Nationalism and conspiracy ideas
The idea of patriotism works effectively only in the presence of external enemies, without which patriotism loses all meaning. Therefore, at the heart of fascism lies the obsession with the idea of a conspiracy. People must feel that they are in an enemy ring.
8. Contradictory image of the enemy
The enemy must look strong and weak at the same time. He can be richer, more developed, well-armed, but at the same time stupid and cowardly.
9. The cult of war
Pacifism is tantamount to fraternization with the enemy. Militarism was observed in all spheres of life under the fascist regimes: festive military parades were held, monuments to heroic soldiers were built, and the military industry flourished.
10. The cult of strength and power
The idea of popular elitism – belonging to the greatest people in the world – itself implies the superiority of some over others, the best over the worst, the strong over the weak.
11. Cult of heroism and death
In a fascist society, heroism is the norm. Each person must be a hero, perform feats and, if necessary, give his life for his homeland.
12. Cult of masculinity (machismo)
The images used by the Nazis are dominated by stereotypically masculine features: strength, muscles, weapons, phallic forms and lexical constructions.
13. Selective populism
A person must believe that the will of the people always stands behind the actions of the fascist government, and anyone who doubts this is made to feel he is the only one who does. Taking on the role of the voice of the people, the Nazis try to discredit any opponents as traitors and anti-people mercenaries of external enemies.
14. Newspeak and substitution of concepts
For example, one can say that a threat to state stability has been neutralized, or one can say that the leaders of a civil protest were shot. One can say a warning strike, or one can say a military invasion. In this language, the parliamentary opposition becomes the fifth column, and the more often people hear these expressions, the easier it is for them to believe in fascist myths about traitors and enemies, about an external threat and its priority over everyday problems.
How can the world make better use of water and energy using IIoT?
Right now, more than 500 million people globally are facing acute water shortages driven by drought.
- Cape Town, South Africa, working around the clock to complete massive desalination plants, faces the imminent threat of its four million taps running dry. Nearby towns have water only on two “wash days” a week.
- Sixty percent of India’s aquifers will be in critical condition within 20 years because farmers are drilling wells and using water faster than it’s replenished.
- The Guarani Aquifer beneath Uruguay, Paraguay and Brazil – the second largest body of underground fresh water on Earth – is under threat of collapse from excessive drilling.
- August 1, 2018 was the earliest Earth Overshoot Day since scientists started keeping track in 1968 – the point in the year when we consume more natural resources, water first among them, than the Earth can replenish in a year. Our current consumption rate is 1.7 planets per year.
- Within the next 30 years, there will be 1.2 billion more people on the planet. Eighty percent of that swollen population will live in cities. The world’s food system will require 50 percent more water; communities, cities and industry will need 60 percent more; and energy production will use 85 percent more water, according to the World Business Council for Sustainable Development.
It’s no surprise the World Economic Forum says water crises are the biggest threat facing humanity over the next decade.
We understand the primary causes – climate change, poverty and inequality, and lack of rational water-use policies. What we need are solutions to provide clean water for our growing global population. Allow me to suggest a four-fold approach that’s as simple as collecting rooftop rainwater, and as technically challenging as using the mighty Hoover Dam of the American West as the world’s largest energy storage battery. Enabling most of these ideas is the growing power and capability of the globally connected, digital Industrial Internet of Things (IIoT).
Do the obvious stuff we’re not doing now
Rooftop water collection + digital monitoring:
Water is heavy. Moving it somewhere to clean it, then using energy to send it back, is wasteful. If energy generation and distribution are moving to rooftops, why not water too? Every time it rains, we’re wasting clean water unless we collect and use it.
In Silicon Valley, where I live, many roofs are covered with solar panels, but I almost never see rain-capture systems. By my calculations, the San Francisco Bay Area – a famously drought-and-wildfire-prone region – could be 100 percent self-sufficient simply by capturing rain that falls on roofs during the winter rainy season. Nationally, I calculate the US could generate half the water we need through simple rain capture.
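The arithmetic behind estimates like these follows the standard rain-harvesting formula: captured volume equals roof area times rainfall times a runoff coefficient. The sketch below applies that formula with illustrative numbers of my own choosing; the roof size, rainfall total and loss factor are assumptions, not figures from this article.

```python
# Back-of-envelope rooftop rain-capture estimate. All inputs are
# illustrative assumptions, not measurements from this article.
# Harvesting formula: volume = roof area x rainfall x runoff coefficient.

roof_area_m2 = 150.0        # assumed single-family roof footprint
annual_rain_mm = 600.0      # assumed Bay Area rainy-season total
runoff_coefficient = 0.85   # fraction actually captured after losses

# 1 m2 x 1 mm of rain = 1 litre, so the units multiply out directly.
litres_per_year = roof_area_m2 * annual_rain_mm * runoff_coefficient
print(f"Potential capture: {litres_per_year:,.0f} litres/year")  # ~76,500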
Amazingly, in many American states, it’s illegal to use captured rainwater for one’s home, for reasons ranging from sanitation worries to complex ancient water rights.
Digital technology could speed wise use of rooftop rainwater collection and use. A proper filtration system guarantees healthy drinking water from rooftop collection. As a homeowner, I’d prefer to have an expert monitor my filtration system remotely, and dispatch repair people before they’re needed, not after the system breaks down.
Fix the leaks:
Water brought to many cities at great cost is wasted by leaky pipes: 20 percent in the average city, 60 percent in Istanbul. In Vietnam, 30 percent of Ho Chi Minh City’s freshwater supply has historically been lost to leaks and other infrastructure problems, but a current IIoT project involving my company will reduce non-revenue water to 10 percent by 2020 by digitally monitoring the water network and instituting repairs in near-real time.
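To make the monitoring idea concrete: a common screening technique compares each metered district’s minimum night flow, measured when legitimate use is at its lowest, against an expected baseline, since a persistent excess is a classic leak signature. The toy sketch below uses invented readings and an invented threshold, not data from the Ho Chi Minh City project.

```python
# Toy leak screen for one district metered area (DMA). Readings and
# threshold are invented examples.

hourly_flow_m3 = [52, 48, 47, 46, 45, 50, 70, 95,       # 00:00-07:00
                  110, 120, 118, 115, 112, 110, 108, 105,
                  104, 108, 115, 112, 95, 80, 65, 58]    # ... to 23:00

minimum_night_flow = min(hourly_flow_m3[2:5])  # 02:00-04:00 window
EXPECTED_NIGHT_FLOW_M3 = 20                    # baseline for a tight DMA

if minimum_night_flow > 1.5 * EXPECTED_NIGHT_FLOW_M3:
    print(f"Possible leak: minimum night flow is {minimum_night_flow} m3/h")
```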
Make smarter use of wastewater:
In many places, 80 percent of municipal wastewater is discharged untreated. Singapore and Israel, to cite two examples, have learned to reclaim wastewater for drinking water and farming irrigation respectively. In many countries, though, even cleaned wastewater is prohibited for religious or cultural reasons. These restrictions can be honored, and yet worked around, by building separate distribution networks that carry cleaned wastewater for irrigation and landscaping.
Rewind the clock. Start by reforesting the Sahara
There are radical “solutions” underway to address the world’s water shortage. I worry they’ll cause as many problems as they address. China, for example, is building tens of thousands of chemical rain-makers – explosive furnaces shooting silver iodide crystals into clouds above Himalayan mountains higher than 5,000 meters in an attempt to create 10 billion tons of rainfall on the Tibetan Plateau. Is it necessary – or safe – to transform the Earth at the risk of a potentially disastrous outcome?
A wiser and safer approach, I’d argue, is taking place in Africa, where projects are underway to fight desertification by putting back the trees and triggering a virtuous cycle of regeneration. The Green Belt Movement, founded in 1977 by the late Professor Wangari Maathai, has planted more than 51 million trees in Kenya.
A larger project, the Great Green Wall Initiative, triggered in part by Europe’s migrant crisis and funded by $4 billion pledged by nations and non-governmental organizations at the 2015 Paris climate accord, stretches across 12 African countries and 7,100 km from Djibouti to Dakar. Together with measures to harvest rainwater, it is intended to allow farmers to grow crops all year round and create a new green lung of biodiversity.
“The Great Green Wall is not just a tree planting initiative,” says Jean-Marc Sinnassamy of the Global Environmental Facility. “We are regenerating the whole landscape, tackling poverty as well as environmental degradation. Better ecosystems mean better vegetation cover (including more trees), better soils, better surface and underground water management, better productivity of lands for better livelihoods and income of rural communities.”
These projects reintroduce the concept of a circular natural economy that nature provided long before humans appeared and disrupted natural cycles. Imagine a replanted forest as a natural conveyor belt – water evaporates as sunlight hits the trees and returns as increased rainfall that irrigates the new forests and the farmland next to them. With viable ways to make decent livings, people would be less likely to migrate to distant European cities or be attracted in desperation to extremist organizations.
Digital technology plays an important role here as well. It’s important that local communities have clearly defined ownership stakes in new forests and farmland that will emerge. Satellites and GPS can accomplish that with precision. And the blockchain could create a foolproof chain of custody for the products of these new forests and farms, so local communities would get their fair share of the returns and not be tempted to waste the trees. I envision a new market of blockchain-enabled fair trade forests and farms that would guarantee the traceability of, say, sustainable wood. So, a furniture manufacturer could market its products as coming only from sustainable forests and be able to prove it.
Make water out of thin air
- Stimulating the virtuous local rainwater cycle is the goal of Waterboxx, by a Dutch company called Groasis that combats desertification, sinking water tables, erosion, hunger and poverty with a doughnut-shaped “plant cocoon” that grows trees in the desert using 90 percent less water than drip irrigation, until now the state of the art in water-sparing agriculture. Waterboxx collects condensation from morning mist, then seeps it to the tree planted in its center. Once multiple trees begin to grow at scale, they create more condensation and more moisture and begin the circular water economy. Groasis has reforestation, food production and ecosystem restoration projects underway on five continents. Its Waterboxx units can be digitally connected, monitored and coordinated remotely.
- WaterSeer, a device from a US company, looks like the world’s smallest wind turbine and can pull 42 liters of water a day from the air with no need for an external energy source. It was recently nominated for the Katerva Award, which has been called the Nobel Prize of sustainability. A WaterSeer collection basin is buried 2.5 meters under the ground, with a pipe running from the basin to a small wind turbine that sticks out of the ground. The turbine draws air into the buried basin, where the moisture in the air condenses into water because the surrounding earth at that depth is cooler than the surface. Networked, monitored, and coordinated, multiple WaterSeers can be linked into modular, self-contained water grids. One such water grid, in Saudi Arabia, harnesses climate change – a very hot country is becoming hotter and more humid as climate change takes hold – to produce 290 liters of water per day for irrigation.
- An engineer named Sonam Wangchuk, who grew up in an Indian stretch of the Himalayas bordering Pakistan and China, has invented artificial glaciers that provide more than 9.84 million liters of freshwater runoff to alpine deserts that average just 10 centimeters of rainfall a year. Called stupas, after the local mound-like Tibetan structures, each artificial glacier is made of a 27.5-meter frame of wire and tree branches onto which water from glacial streams is pumped, freezing in the air surrounding the frames. The water runs off when the stupas melt in the spring. In addition to the stupas in the Indian Himalayas, others are being built in a Swiss skiing village to offset the lost runoff from a melting glacier. Wangchuk’s stupas – “Ice Towers in the Desert,” as the Swiss jury called them – won the 2016 Rolex Award for Enterprise.
Use the water-energy nexus to help us, not hurt us
Our dire water situation can be worsened or relieved by how we understand and use the water-energy nexus. Simply put, we are using too much water to make energy, and too much energy to deliver water. Threatened Cape Town relies primarily on energy from coal, which requires 87,000 liters of water a month to create power for a single home. In California, where I live, 20 percent of the state’s massive energy production is used to move water from the Sacramento Delta in the north to the desert megalopolis of Los Angeles and Southern California. China moves water 1,500 km, the distance from Orlando to New York City. Even desalination – a promising remedy for water shortages – is energy-hungry. Half the cost of desalination, typically, is the energy required to run desalination plants.
What if we turned the water-energy nexus around, and used excess energy to deliver more water, and excess water to make more energy? What if we used the water-energy nexus as a form of battery-powered time machine?
I just mentioned that half the cost of desalination – widely considered a costly water-shortage solution – is the energy needed to run desalination plants. And with 98 percent of the planet’s water stored in the salty seas, we are surely going to need desalination as the global population increases. What if we could cut the cost of those plants by almost 50 percent?
Because renewable energy sources like wind and solar often produce far more energy than their grids can use, it is becoming common for the cost of energy for consumers to fall to zero, or even below zero. Since the beginning of 2018, the cost of energy has been zero or below 194 times in Germany, 76 times in California and 104 times in Australia, according to Bloomberg. Zero cost days have also occurred in Denmark, France, Switzerland, Texas and New England.
What can be done with excess energy? How can we store it for use at times when the sun isn’t shining or the wind not blowing? Some companies, such as Tesla, are making utility-scale storage batteries, but their cost, according to a Lazard study, is 26 cents per kilowatt-hour, compared with 12.5 cents per kilowatt-hour that households typically pay for power. Not an attractive value.
But…what if a desalination plant could monitor energy prices and start up every time the price drops to zero? (The plant’s energy cost wouldn’t literally be zero due to transmission costs, but it would be drastically lower than retail prices.) And unlike ordinary people or businesses that can’t store zero-cost energy without an expensive battery, desalination plants can store cheap energy by filling up their storage reservoirs when desalination is cheapest.
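As a sketch of what this could look like in software, the controller below runs the plant whenever the spot price falls to a threshold and storage still has room. The function name, threshold and prices are hypothetical; a real plant would integrate with market data feeds and its own control systems.

```python
# Sketch of a price-triggered desalination controller (all values and
# names are hypothetical assumptions, not from an actual plant).

PRICE_THRESHOLD = 0.0   # $/kWh: run only when grid energy is free or cheaper

def control_step(spot_price_per_kwh: float, reservoir_fill: float) -> str:
    """Pick the plant's mode from the current spot price and storage level."""
    if spot_price_per_kwh <= PRICE_THRESHOLD and reservoir_fill < 1.0:
        return "RUN"   # desalinate while energy is cheapest, fill storage
    return "IDLE"      # otherwise serve demand from the reservoir

# Example: a midday solar surplus pushes prices to zero or below
for price, fill in [(0.12, 0.40), (-0.01, 0.40), (0.00, 0.95), (0.00, 1.00)]:
    print(f"price={price:+.2f} $/kWh, fill={fill:.0%} -> {control_step(price, fill)}")
```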
In that way, desalination becomes a battery or time machine in the water-energy nexus. There are others:
- Norway’s beautiful fjords are steadily becoming the “green battery of Europe” as neighboring European countries send their excess solar and wind power to Norway to pump water uphill to elevated reservoirs. When electricity demand increases and there’s insufficient solar or wind power available, the fjord water is released to flow downhill and create hydroelectric power for European partners transmitted via low-loss high-voltage DC lines (HVDC, I’m proud to say, was an innovation of my company).
- Hoover Dam, the massive engineering marvel on the Colorado River – tall as a 72-story skyscraper – that in large measure enabled the development of the modern American West, is currently being eyed as the world’s largest battery. Los Angeles Mayor Eric Garcetti refers to the potential as a “once-in-a-century moment.” The idea is to build a pump station 30 km downstream from the dam, which would use excess solar and wind energy from California (that zero-cost energy again) to force water back upstream into Lake Mead behind the dam, where it would once again be released to create hydroelectric power when needed.
- Even the rotation of the earth – which accounts for time zones – can act as a battery. A large utility in North Carolina is currently building HVDC transmission lines to Texas, because when it’s dark in North Carolina the sun’s still shining in Texas and Texas can send North Carolina low-cost solar energy, not energy generated by burning fossil fuels. Similarly, China is building HVDC transmission lines from its north, where wind and hydropower are plentiful, to the cities of its south.
All of this digitally-enabled time-shifting and terra-battery innovation is the beginning of what I see as a Global Energy Internet, which will traverse nations and continents to carry clean, renewable power from where it’s generated at minimal cost (in money and climate destruction) to where it’s needed most to help create, among other things, low-cost desalinated fresh water.
Will any one of these ideas, in isolation, solve the world’s water problems and put an end to drought-driven migration, poverty-caused extremism or fights for precious resources? Of course not. But taken together – along with countless innovations not mentioned here or soon to come – they will put billions on the road to better lives.
For almost two centuries we have run our world with an Industrial Operating System that is clearly now killing its host. That is, OS 1 is killing us. It’s our responsibility to harness the power of digital technologies to develop a new Industrial OS that meets the needs of the world’s billions in water, energy, transportation and food. While all four issues are intertwined, water is a good place to start. As the marine biologist Sylvia Earle says, “No water, no life.”
Originally this article was published here and was written by Guido Jouret, Chief Digital Officer for ABB. Read the latest edition of “In Control” for more articles on sustainability, water & wastewater, smart cities and power generation in a more and more digitalized world.
Improving the quality of movement of athletes and patients is important for reducing injury risk and enhancing quality of life. Currently, assessment of movement quality is performed by eye using the experience of a physiotherapist. However, it is difficult to quantify objectively and to record improvement of movement quality over time. In a market feasibility study that we performed with physiotherapists and coaches, we found that there is a significant market for a tool to quantify and record movement. It is important that this tool is accurate and easy to use. We have been working to build a mobile app using deep learning/computer vision technologies so that physios can quantify and track human motion. These technologies are still very new and we believe it is important that physiotherapists have an understanding of how the technology works. This means that they can choose the right trade-offs with respect to privacy and ownership. Below, we will discuss the way that most tech startups or companies would build this product today. We highlight issues with this and finally propose our solution.
How does deep learning/computer vision work?
Like all deep learning algorithms, the aim is to learn a mapping from an input (an image or video in this case) to a target label of interest (the skeletal joint positions of an athlete or patient). To train the algorithm, we need to collect a dataset of input-target pairs. We pass the inputs to the algorithm, e.g. one by one, and tune the many weights of the algorithm until the outputs generally match the target labels for each training sample. If we have done this well, hopefully the algorithm will generalise to unseen samples.
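As a toy illustration of that loop, the sketch below fits a linear model mapping a flattened “image” vector to joint coordinates by gradient descent. It is deliberately minimal: real pose estimators are deep convolutional networks, and every size and value here is an arbitrary assumption.

```python
import numpy as np

# Toy version of the input -> target training loop described above:
# learn a mapping from a flattened "image" to skeletal joint positions.
# A linear model stands in for the deep network.

rng = np.random.default_rng(0)
n_samples, n_pixels, n_joints = 256, 64, 17          # 17 joints, (x, y) each
X = rng.normal(size=(n_samples, n_pixels))           # inputs (images)
true_W = rng.normal(size=(n_pixels, 2 * n_joints))
Y = X @ true_W                                       # target joint positions

W = np.zeros((n_pixels, 2 * n_joints))               # the weights we tune
learning_rate = 0.01
for epoch in range(200):
    predictions = X @ W                              # forward pass
    gradient = X.T @ (predictions - Y) / n_samples   # mean-squared-error gradient
    W -= learning_rate * gradient                    # nudge weights toward targets

print("final MSE:", float(np.mean((X @ W - Y) ** 2)))
```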
What type of data do we train the algorithm with?
To achieve the most accurate human skeleton estimates, we often use expensive motion capture (MoCap) systems in a laboratory environment. We record videos with standard cameras (the input) and the skeleton joint positions with the MoCap system (the labels). This is one type of dataset. The issue with using this approach alone is that the videos from the standard cameras are not diverse. They have a very specific type of lighting (e.g. indoors). Algorithms trained on these images will only perform well on similar images, and will not work on different types of images (e.g. outdoors). To get around this, we supplement the dataset with diverse images of humans taken from the internet. The approximate skeleton joint positions of the humans in these images are labelled by human annotators. We find that the MoCap data provides accuracy of joint estimates, while the internet images provide image diversity. Other labels, such as an assessment of movement quality, are also of interest.
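One simple way to combine the two sources during training is to draw every batch from both pools, so the model always sees accurate MoCap labels alongside diverse internet imagery. The sketch below assumes a 50/50 mix and placeholder samples; both are illustrative choices, not a recipe from our pipeline.

```python
import numpy as np

# Sketch of drawing each training batch from both labelled sources.
# The 50/50 mix and the placeholder samples are assumptions.

rng = np.random.default_rng(1)

def mixed_batch(mocap, internet, batch_size=32, mocap_fraction=0.5):
    n_mocap = int(batch_size * mocap_fraction)
    lab = [mocap[i] for i in rng.choice(len(mocap), size=n_mocap)]
    web = [internet[i] for i in rng.choice(len(internet), size=batch_size - n_mocap)]
    return lab + web                       # accurate labels + diverse imagery

mocap = [(f"lab_frame_{i}", "precise_joints") for i in range(1_000)]
internet = [(f"web_image_{i}", "approx_joints") for i in range(5_000)]
print(mixed_batch(mocap, internet)[:4])
```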
How would most startups/companies create this technology today?
The main task of a new startup or a new project in a company is to acquire datasets. For our use case, videos of human movements are acquired by setting up cameras in the target environment (e.g. a gym or hospital). Separately, motion capture may also be performed in a lab. The subjects captured by the videos and MoCap system give their permission and sign consent forms for the data to be used towards a particular use case. The videos are labelled by human annotators. This might happen in-house under the supervision of the company, or be outsourced to another company or a platform like Amazon Mechanical Turk. The company pays the annotators around $1 per image and in return the annotator gives away the rights to the data. The company may also pay physiotherapists (domain experts) to add movement quality labels to the videos. Labels provided by medical professionals can cost up to $100 per image/video. The company then pays data scientists to train algorithms on the data. The data scientists also sign contracts that give all rights to their ideas and intellectual property (IP) to the company. The datasets and algorithms are stored on centralised servers controlled by the company.
What are the issues with this approach and how can new technology help?
Value Flow (Who benefits?)
The annotators and domain experts receive a fixed, limited reward. In contrast, the rewards for the tech company are unlimited (although there is also a risk that the technology will not create any value). The domain experts may also benefit from a useful tool that makes their life easier, but this tool will also consistently extract their data for improving the algorithms. This is similar to the Facebook model, where a tool is provided “for free” in exchange for the users data. There is also an additional risk for domain experts in that the data that they label is being used to automate tasks within their area of expertise. Motion analysis is just one of many tasks that physiotherapists perform and the new technology may only be useful for simpler diagnoses initially. However, it is not inconceivable that continued automation over the span of years or decades could result in a reduced amount of work required from domain experts. Ironically, the data that they labelled may be used to train an AI model that eventually puts them out of a job.
Wouldn’t it be better if we set up a system such that those whose jobs were directly affected by automation were those to which the value of the automation flowed? With self-driving cars on the horizon, what if we had started an initiative where cameras were mounted to the cars of professional drivers, such as truck and taxi drivers, rather than the cars of automotive manufacturers and tech companies? If these professional drivers were put out of work, at least the value produced by the automation technology would flow to them rather than a centralised tech entity. For physiotherapists, can we set the system up so that the value of a future physioAI flows to this community, and not Facebook Health? This idea actually maps very well to the field of deep learning, since we directly need the domain experts’ input in the form of labelled data to train such automation algorithms. One of the steps towards this is to encourage data ownership. Data unions (like a trade union, but for data) can be formed to aggregate individual data into a valuable dataset.
Privacy
Using the traditional method, the dataset and labels are stored on a centralised server controlled by the company. Any new data captured with an app using this technology is also transferred to the centralised server. This provides a single point of failure for hacks, which are happening all the time. While subjects in the dataset may have the right to request that their data be deleted, many are not aware of this right and it rarely happens in practice.
Privacy concerns also prevent useful existing datasets from being used optimally. Take the analysis of human movement, for example. MoCap systems have been around since the 1980s. There is a mountain of data out there in universities, hospitals and sports clubs. However, the data is considered sensitive and cannot be widely shared. Many of the best-performing deep learning algorithms we have are benchmarked on small public MoCap datasets (the most popular one has less than 10 subjects). In contrast, a single MoCap dataset in a university can have 100s of subjects. It is well known that more (high-quality) data leads to improved performance of deep learning algorithms. In fact, it’s more important than the algorithm itself. While training today’s algorithms on MoCap datasets, what if we could increase the number of subjects from less than 10 to 100s? What if we could connect a network of these datasets that contains 1,000s, 10,000s or even 100,000s of subjects, while maintaining safety and privacy? Would this not likely result in an order of magnitude increase in performance?
Luckily, private AI technologies are reaching maturity. Compute-to-data and federated learning allow deep learning algorithms to be trained on datasets or collections of datasets without the data being transferred to a centralised location controlled by a tech company creating the algorithm. This means that the data can stay on the users’ mobile device or on-site of a trusted third party (such as a sports club, university or hospital), while still being used to advance our knowledge and understanding. This approach is compliant with GDPR, and in fact improves on the privacy and protection provided by GDPR. Even better, much of this technology has been open sourced, provided by groups like Ocean Protocol and Openmined, such that new startups can quickly build on top.
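To illustrate the shape of federated learning, the sketch below averages locally trained weights from several sites, so raw data never leaves each site’s own server and only weight updates are shared. It is a bare-bones illustration on invented data; production systems (such as those built on OpenMined or Ocean Protocol tooling) add secure aggregation, differential privacy and auditing on top.

```python
import numpy as np

# Bare-bones federated averaging: each site (hospital, university,
# sports club) trains on data that never leaves its server; only the
# weight updates are shared and averaged. Data here is invented.

rng = np.random.default_rng(2)

def local_update(weights, X, Y, lr=0.05, steps=10):
    w = weights.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - Y) / len(X)   # MSE gradient on local data
    return w

# Three sites with private datasets
sites = [(rng.normal(size=(40, 8)), rng.normal(size=40)) for _ in range(3)]

global_w = np.zeros(8)
for _ in range(20):                            # federated rounds
    local_ws = [local_update(global_w, X, Y) for X, Y in sites]
    global_w = np.mean(local_ws, axis=0)       # aggregate updates, not data
print("global weights:", np.round(global_w, 3))
```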
Inefficiency
Every company that wants to enter the space needs to collect a dataset. This dataset is considered a barrier to entry and is almost never shared. Often, datasets that already exist are re-collected due to inaccessibility. The data acquisition process can take years for a company. This is a huge inefficiency for technological progress.
Individuals are less likely to have competitive considerations and thus increased data ownership may help to reduce this problem. Aside from this, improved incentives for collaboration can help all ships to rise. With the Ocean data marketplace, companies can open a new income stream by monetising their datasets while maintaining full control of the data. If a competitor trains an algorithm on the data, the company could receive royalties every time that algorithm is used. They may also have their own algorithm. Is this not a more desirable competitive environment that encourages innovation rather than barriers to entry?
What is a better way to create this technology?
We now suggest a new approach that differs from the typical approach of today. A new data science group (like VisioTherapy or Algovera) has an idea for an algorithm that provides business value and checks various data marketplaces to see if the data that they require exists. With the ability to bring in new revenue streams while maintaining privacy, many universities, hospitals and sports club make their data available for research and commercial purposes. If the data exists, the group purchases access to the required datasets and trains their algorithms using private AI infrastructure. The data providers are rewarded with fees while retaining full control over the data (it never leaves their servers). If the data doesn’t exist, the group can make use of new apps (like our VisioTherapy app or the DataUnion app) for crowdsourcing labels from annotators and domain experts. Unlike other crowdsourcing platforms, the app rewards the contributors with ownership shares of the dataset. The contributors can exchange these shares for cash or hold on to the shares with the expectation of royalties. The new algorithm is a success and the group makes it available on an algorithm marketplace and within a user app. Whenever the algorithm or app are bought, the value flows back to the data science group and the data contributors. A community of domain experts and other individuals receive value rather than a single centralised tech entity.
How have VisioTherapy been working towards this?
With the VisioTherapy project (funded by OceanDAO), we have developed an app (in collaboration with DataUnion) that can be used to crowdsource videos of human movements. We are collecting a dataset, providing ownership of that dataset to the contributors and making the dataset available on the Ocean marketplace. Within the app, users can upload and annotate videos, and also manage their ownership shares of the dataset. We have also begun exploring the roadblocks to making MoCap datasets owned by universities, hospitals and sports clubs available (privately and safely) on the marketplace. The next steps of the project are to continue to acquire and curate more data and to incentivise communities of data scientists to create algorithms — and maybe even apps – on top (e.g. using the Algovera community). This is a Proof of Concept for decentralised AI applied to the physio space: One that is created and controlled by the community rather than a tech company.
Richard Blythman. Richard holds a PhD from Trinity College Dublin and is an expert in AI and machine learning.
Ransomware is a type of cyber threat that infects the system and then encrypts the data or disables access for the user. To get control back, the organization or individual is forced to pay the ransom.
Ransomware finds its way into the systems in different ways. Often, it’s the human factor. Users visit unsafe sites or the resources that were previously breached by cybercriminals, click on malicious advertisements, or open attachments and links in fraudulent messages. Also, exploit kits automatically scan systems and software for vulnerabilities.
After the ransomware infiltrates the system, access to data and applications is blocked. The user sees the message demanding the ransom which can reach hundreds of thousands of dollars. The malware can spread to the whole network within the organization and paralyze operations, as well as put sensitive data at great risk.
There are several tactics that can help minimize the chance of infecting your systems with ransomware:
- Keep operating systems and applications patched and up to date
- Filter email and block suspicious attachments, links and advertisements
- Train users to recognize phishing messages and unsafe websites
- Apply least-privilege access controls and segment the network
- Maintain regular, tested backups kept offline or otherwise out of attackers’ reach
Unfortunately, after the ransomware attack takes place, the organization might face additional threats that can’t be mitigated even by paying the ransom. The double extortion tactic means that besides blocking access to critical systems, cybercriminals also steal the data. Apart from the risk of paying a double ransom, a data leak can lead to reputation damage and lawsuits. Therefore, after suffering an attack, it is important to consult with cyber security professionals and lawyers and take the following steps:
- Isolate infected systems to stop the malware from spreading
- Preserve logs and other evidence for forensic investigation
- Notify law enforcement and any regulators whose requirements apply
- Determine whether data was exfiltrated and what disclosure obligations follow
- Restore from clean backups only after the original entry point has been closed
Where does DMARC help?
Organizations and their clients are harmed by malicious emails sent on their behalf; DMARC can block these attacks. With DMARC, an organization can gain insight into its email channel. Based on that insight, the organization can work on deploying and enforcing a DMARC policy.
When the DMARC policy is enforced to p=reject, organizations are protected against:
- Phishing and spoofing emails sent from the organization’s exact domain
- Abuse of the organization’s name in fraudulent mail targeting customers, partners and employees
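For illustration, the snippet below prints the DNS TXT records for a typical staged rollout, moving from monitoring to full enforcement. The domain and report address are placeholders, not values from this article.

```python
# Illustrative DMARC DNS TXT records for a staged rollout, ending at
# full enforcement. Domain and report address are placeholders.

ROLLOUT = ["none", "quarantine", "reject"]   # monitor -> quarantine -> enforce

for policy in ROLLOUT:
    record = f"v=DMARC1; p={policy}; rua=mailto:dmarc-reports@example.com"
    print(f'_dmarc.example.com. IN TXT "{record}"')
```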
How does endpoint security work?
Organizations can install an endpoint protection platform – EPP – on devices to prevent malicious actors from using malware or other tools to infiltrate their systems. An EPP can be used in conjunction with other detection and monitoring tools to flag suspicious behavior and prevent breaches before they take place.
Endpoint protection offers a centralized management console to which organizations can connect their network. The console allows administrators to monitor, investigate and respond to potential cyber threats. This can either be achieved through an on-location, cloud, or hybrid approach.
Can firewalls mitigate ransomware attacks?
A properly configured and placed next-generation firewall can detect ransomware and prevent it from entering your network, or stop your data from leaving it. Only a next-generation firewall will help, as it inspects your traffic in real time and identifies threats, breaches, and unnatural activity.
How does a SOC as service help protect against ransomware?
Because hackers and other bad actors are continuously improving their skills and learning new methods of attack with their ransomware, your organization needs to stay up to date on what hackers are doing and the new technology that can thwart their attacks. Internal teams cannot do that, but the experts that run SOC as a Service can. They will be able to catch the ransomware before it enters your network, or quickly upon its infiltration, reducing your organization’s risk in the process. Additionally, a SOC will log all information coming and going from your network, so it will notice anomalies quickly. SOC as a Service will also use file integrity monitoring (FIM) to identify changes in files, which can alert the experts to potential threats or thefts. When ransomware infiltrates a network, it often works by copying itself and traversing the network under different names. So, a team could find and remove the initial ransomware file, but its copies can pop up later. SOC as a Service can help identify the hidden malware, preventing reinfection.
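As a minimal illustration of how file integrity monitoring works, the sketch below hashes a directory tree and diffs it against a stored baseline. The watched path is an example; real SOC tooling adds real-time filesystem hooks, alert routing and tamper-resistant baseline storage.

```python
import hashlib
import json
import pathlib

# Minimal file-integrity-monitoring (FIM) sketch. The watched path is
# an example; production FIM is event-driven and tamper-resistant.

def snapshot(root: str) -> dict:
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in pathlib.Path(root).rglob("*") if p.is_file()
    }

def diff(baseline: dict, current: dict) -> dict:
    return {
        "added": [p for p in current if p not in baseline],
        "removed": [p for p in baseline if p not in current],
        "modified": [p for p in current
                     if p in baseline and current[p] != baseline[p]],
    }

baseline = snapshot("./watched")               # store somewhere immutable
# ... later, on a schedule or on filesystem events ...
changes = diff(baseline, snapshot("./watched"))
print(json.dumps(changes, indent=2))           # anomalies feed SOC alerting
```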
How to protect the AD against Ransomware?
AD provides the foundation for all your accounts and internal domain assets. This makes it a prime target for ransomware attacks and why it is so imperative to create a strategic security plan to protect your AD infrastructure.
The best way to interrupt a threat actor’s attempts to hold the environment for ransom is to make it harder for them. Places to start:
- Minimize the number of privileged accounts and review group memberships regularly
- Require strong authentication, such as MFA, for all administrative access
- Keep domain controllers patched and limit the software that runs on them
- Monitor and audit AD changes such as new accounts and privilege escalations
- Keep tested, offline backups of AD so recovery never depends on the attacker
With ransomware on the rise, understanding how a ransomware attack operates is key to preparing your organization’s defenses. Putting together and testing an incident response plan is essential to limiting any potential damage.
Join us for a four-part ransomware series to learn how to prepare your organization’s defenses and build and test an incident response plan.
Feb. 10 — China has topped the international TOP500 list of the world’s fastest supercomputers since June 2013. They have maintained this status with their newest supercomputer, Sunway TaihuLight, constructed entirely from Chinese processors.
While China’s hardware has “come into its own,” as Foreign Affairs wrote in August, no one can say objectively at present how fast this hardware can solve scientific problems compared to other leading systems around the world. This is because the computer is new—having made its debut in June 2016.
Researchers were able to use seed funding provided through the Global Initiative to Enhance @scale and Distributed Computing and Analysis Technologies (GECAT) project, administered by the National Center for Supercomputing Applications’ (NCSA) Blue Waters Project, to port and run codes on leading computers around the world. GECAT is funded by the National Science Foundation’s Science Across Virtual Institutes (SAVI) program, which focuses on fostering and strengthening interaction among scientists, engineers and educators around the globe. Shanghai Jiao Tong University and its NVIDIA Center of Excellence matched the NSF support for this seed project, and helped enable the collaboration to have unprecedented full access to Sunway TaihuLight and its system experts.
It takes time to transfer, or “port,” scientific codes built to run on other supercomputer architectures, but an international, collaborative project has already started porting one major code used in plasma particle-in-cell simulations, GTC-P. The accomplishments made and the road towards completion were laid out in a recent paper that won “best application paper” from the HPC China 2016 Conference in October.
“While LINPACK is a well-established measure of supercomputing performance based on a linear algebra calculation, real world scientific application problems are really the only way to show how well a computer produces scientific discoveries,” said Bill Tang, lead co-author of the study and head of the Intel Parallel Computing Center at Princeton University. “Real @scale scientific applications are much more difficult to deploy than LINPACK for the purpose of comparing how different supercomputers perform, but it’s worth the effort.”
The GTC-P code chosen for porting to TaihuLight is a well-traveled code in supercomputing, in that it has already been ported to seven leading systems around the world—a process that ran from 2011 to 2014 when Tang served as the U.S. principal investigator for the G8 Research Council’s “Exascale Computing for Global Scale Issues” Project in Fusion Energy, or “NuFuSE.” It was an international high-performance computing collaboration between the US, UK, France, Germany, Japan and Russia.
A major challenge that the Shanghai Jiao Tong and Princeton Universities collaborative team have already overcome is adapting the modern language (OpenACC-2) in which GTC-P was written, making it compatible with TaihuLight’s “homegrown” compiler, SWACC. An early result from the adaptation is that the new TaihuLight processors were found to be about three times faster than a standard CPU processor. Tang said the next step is to make the code work with a larger group of processors.
“If GTC-P can build on this promising start to engage a large fraction of the huge number of TaihuLight processors, we’ll be able to move forward to show objectively how this impressive, new, number-one-ranking supercomputer stacks up to the rest of the supercomputing world,” Tang said, adding that metrics like time to solution and associated energy to solution are key to the comparison.
“These are important metrics for policy makers engaged in deciding which kinds of architectures and associated hardware best merit significant investments,” Tang added.
The top seven supercomputers worldwide on which GTC-P can run well all have diverse hardware investments. For example, NCSA’s Blue Waters has more memory bandwidth than other U.S. systems, while TaihuLight has clearly invested most heavily in powerful new processors.
As Tang said recently in a technical program presentation at the SC16 conference in Salt Lake City, improvements in the GTC-P code have for the first time enabled delivery of new scientific insights. These insights show complex electron dynamics at the scale of the upcoming ITER device, the largest fusion energy facility ever constructed.
“In the process of producing these new findings, we focused on realistic cross-machine comparison metrics, time and energy to solution,” Tang said. “Moving into the future, it would be most interesting to be able to include TaihuLight in such studies.”
About the National Center for Supercomputing Applications (NCSA)
The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign provides supercomputing and advanced digital resources for the nation’s science enterprise. At NCSA, University of Illinois faculty, staff, students, and collaborators from around the globe use advanced digital resources to address research grand challenges for the benefit of science and society. NCSA has been advancing one third of the Fortune 50 for more than 30 years by bringing industry, researchers, and students together to solve grand challenges at rapid speed and scale.
What is the HITECH Act?
The HITECH Act, or the Health Information Technology for Economic and Clinical Health Act, is part of the American Recovery and Reinvestment Act of 2009 (ARRA), an economic stimulus package introduced by the Obama administration. The legislation works to create incentives for the adoption and meaningful use of healthcare information technology and electronic health records (EHR) among providers. After HITECH’s creation in 2009, the adoption of EHR systems among healthcare providers proliferated.
The HITECH Act expands the scope of privacy and security protections available under the Health Insurance Portability and Accountability Act (HIPAA). Specifically, the HITECH Act introduces increased legal liability for non-compliance and added enforcement actions. Additionally, HITECH establishes a precedent for breach notifications among healthcare providers, ensures patients have access to their protected health information (PHI), and defines compliance requirements for business associates.
HITECH Act Summary
The act contains four subtitles:
- Subtitle A: Promotion of Health Information Technology
  - Part 1: Improving Healthcare Quality, Safety, and Efficiency
  - Part 2: Application and Use of Adopted Health Information Technology Standards; Reports
- Subtitle B: Testing of Health Information Technology
- Subtitle C: Grants and Loans Funding
- Subtitle D: Privacy
  - Part 1: Improved Privacy Provisions and Security Provisions
  - Part 2: Relationship to Other Laws; Regulatory References; Effective Date; Reports
HITECH Act and Meaningful Use
The HITECH Act proposed the meaningful use of interoperable electronic health records (EHR) throughout the United States healthcare system as a critical national goal. “Meaningful use” can be defined according to the five pillars of health outcomes policy priorities:
- Improve quality, safety, and efficiency, and reduce health disparities
- Engage patients and families in their health
- Improve care coordination
- Improve population and public health
- Ensure adequate privacy and security protection for personal health information (PHI)
Breach Notification Rule
HIPAA’s Breach Notification Rule requires covered entities to notify patients when their unsecured PHI is used or disclosed without permission in a way that compromises its privacy and security. Once a covered entity knows that a breach of PHI has occurred, it has an obligation to notify the relevant parties (individuals, HHS, the media, etc.) within 60 calendar days of the date of discovery, whether or not the entity knows the PHI was actually compromised. If a breach impacts 500 people or more, then HHS and, under certain conditions, local media must be notified. All affected individuals will need to receive a first-class mailing that addresses personally what happened and what steps are being taken to resolve the breach.
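For illustration, the sketch below encodes only the thresholds described above as a simple decision rule. It is not legal guidance; real incidents need counsel, and HHS reporting timelines differ for breaches affecting fewer than 500 people.

```python
from datetime import date, timedelta

# Sketch of the notification thresholds described above. Illustrative
# only -- not legal advice, and small-breach HHS timelines differ.

def notification_plan(discovery_date: date, individuals_affected: int) -> dict:
    return {
        # individual notice must go out within 60 calendar days
        "individual_notice_deadline": discovery_date + timedelta(days=60),
        # breaches of 500+ also trigger HHS and local media notification
        "notify_hhs_and_media": individuals_affected >= 500,
        "first_class_mailing_required": True,
    }

print(notification_plan(date(2023, 6, 1), individuals_affected=750))
```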
A physician must take an active role in breach notification in order to determine the severity of an improper use or disclosure of PHI. To do this, they use a four-factor test:
- The nature and extent of the PHI involved, including identifiers and the likelihood of re-identification
- To whom the PHI was impermissibly disclosed
- Whether the PHI was actually viewed
- What mitigation processes have occurred to rectify the breach of the PHI
Electronic Health Record Access
In the case that an entity has implemented an EHR system, the HITECH Act stipulates that individuals, or designated third parties, have a right to obtain their PHI in an electronic format (ePHI). Only a fee to compensate for the labor can be charged for an electronic request.
Enforcement
While HIPAA has not been effectively enforced in the past, government enforcement entities will now be performing audits on entities that are reported to have breached PHI data. The HITECH Act requires mandatory penalties for “willful neglect,” which is determined on a case-by-case basis but aims to penalize providers who have an insufficient compliance strategy.
Penalties for willful neglect have increased under the HITECH Act. Penalties for violations of HIPAA and HITECH can extend up to $250,000, and up to $1.5 million for repeated offenses. HIPAA’s civil and criminal penalties now extend to business associates. HITECH does not allow an individual to bring a cause of action against a provider; instead, a state attorney general is required to bring an action on behalf of their residents. For consistent regulation and enforcement, HHS is now required to conduct periodic audits of covered entities and business associates.
Best Practices for HITECH Act Compliance
In order to ensure that PHI data is kept private and safe, entities must implement an effective information security program, including solutions that protect data and monitor access. Forcepoint’s DLP is designed to ensure and simplify regulatory compliance and includes out-of-the-box policies for regulations involving PII and PHI data. It also provides protection for DICOM files, DNA profiles, ICD codes, HICNs, SPSS files, and medical forms.
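To give a sense of the kind of content inspection such a DLP policy performs, the toy sketch below flags two of the data types listed above (ICD codes and HICNs) with regular expressions. These patterns are rough approximations of our own for illustration only; they are not Forcepoint's detectors, and real products use validated dictionaries and far more robust matching.

```python
import re

# Rough, illustrative-only patterns; real DLP engines ship validated detectors.
PHI_PATTERNS = {
    # ICD-10 diagnosis codes look like "A12", "E11.9", "S72.001A"
    "icd10_code": re.compile(r"\b[A-TV-Z][0-9][0-9A-Z](?:\.[0-9A-Z]{1,4})?\b"),
    # Legacy Medicare HICN: an SSN-style number plus a letter suffix
    "hicn": re.compile(r"\b\d{3}-?\d{2}-?\d{4}[A-Z]\b"),
}

def scan_for_phi(text: str) -> dict:
    """Return every suspected PHI token found in the text, keyed by pattern."""
    return {name: pat.findall(text) for name, pat in PHI_PATTERNS.items()}

print(scan_for_phi("Dx E11.9 recorded for beneficiary 123-45-6789A."))
# {'icd10_code': ['E11.9'], 'hicn': ['123-45-6789A']}
```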
A growing natural gas shortage in China is now disrupting the country’s chemical output.
One of China’s major chemical companies, Yunnan Yuntianhua Co., said it has been forced to suspend production of synthetic ammonia and urea because supplies of natural gas are being diverted to heat homes in other parts of the country.
According to ICIS News, an increasing number of companies have shifted away from coal in the last year, which has increased demand for natural gas. Poor energy infrastructure has also contributed to the current shortage.
The National Energy Bureau estimates that the gas deficit will be 10-20 percent of the total needed. As the crisis deepens, residential customers are being given preferential treatment over manufacturing or commercial sectors.
Yunnan reported this week that it has halted production at a 500,000-ton-per-year ammonia plant and an 800,000-ton-per-year urea line, both of which use natural gas as a feedstock. The company expects to lose about $3.78 million due to the production disruption.
“Gas producers have suspended gas supplies to major industrial consumers in southwestern regions. Our Shuifu plant will temporarily halt production of two chemicals as a result,” the company stated.
5G cellular evolution and smart cities
The power of connectivity is transforming all industries and forward-thinking cities.
This digital transformation is driving the next generation of wireless access—5G—targeting commercial availability around 2020. Just as 4G/LTE technology enabled the explosion of smartphones, mobile applications, and mobile commerce, the evolution to 5G will enable rapidly growing, diverse services for both human and machine communications.
Since 5G builds on licensed 4G/LTE, benefits for low-power IoT devices will begin as early as 2017.
According to Ericsson, over the next five years traffic volumes on cellular networks will grow 1,000-fold, and 100 times more devices will require connectivity.
The evolution to 5G will spur innovation, making cities more livable, secure, efficient, and responsive to citizens’ needs.
To support the diversity needed, 5G will include network slicing, which will enable connectivity services that are highly scalable and programmable in terms of characteristics such as speed.
Traditional cellular networks and their one-size-fits-all approach will adapt with new 5G frequencies to support thousands of scenarios, many different device types, and varying application requirements.
Characteristics of 5G technology that deliver city benefits include:
- Broadband everywhere – 5G offers better coverage and performance outdoors and in buildings (e.g., crowded urban areas, stadiums, convention centers, public transportation, and subways).
- Reliable speed – In seconds, consumers can download a full-length high-definition movie, police can upload or download high-definition video, and TV reporters can stream remote, real-time broadcasts. These speeds will rival fixed fiber, allowing wireless to reliably reach places too cost-prohibitive to deploy fiber today.
- Adaptive – Future communication networks will be programmable to best support applications’ needs, whether they’re highly encrypted financial transactions or a low-priority signal from a connected trash bin when it is full.
- Energy efficient – Battery life for low-power IoT devices will reach up to 10 years, reducing the maintenance and battery replacement costs.
- Responsive or real time – By reducing the response time or latency in the network to 1 millisecond, equipment like cranes and excavators can be remotely operated and roadways may achieve three times the current capacity through platooning. Combining real-time communications with the speed and capacity for video, remote healthcare services for unserved or underserved citizens can become a reality.
- Combining wireless networks – Short-range, unlicensed wireless networks (e.g., Wi-Fi, RF-Mesh, ZigBee, and Z-wave) often create application silos. Through integration gateways, licensed and unlicensed wireless networks can be managed as a single network with a common set of rules or policies. Handoff between these networks will be seamless, whether it is a smartphone or a city bus pulling into the terminal.
- Quality of experience – 5G culminates in greater reliability, improving the overall experience for the person or the machine.
These 5G characteristics will spur growth in uses for wearables, augmented or virtual reality, remote collaboration, artificial intelligence, and much more. While commercial availability for evolved 5G is expected in 2020, test beds are starting in 2016.
AT&T and Ericsson are working together to help set the stage for widespread commercial and mobile adoption of 5G, and smart cities solutions will be a key beneficiary of this next-generation technology.
The past few years have witnessed a massive change in the world of information technology, driven by network virtualization and cloud computing. Virtualization improves the efficiency of physical resource and energy use, but it also challenges network administrators to support large-scale, dynamic, multitenant workloads. To address the challenges of network scale and workload mobility, VXLAN was released as one of several network overlay technologies. This article examines VXLAN technology and VXLAN switches.
Figure 1: VXLAN for cloud data centers
Understanding of VXLAN
VXLAN, also called Virtual Extensible Local Area Network, is a network virtualization scheme that enables users to create a logical network for virtual machines (VMs) across different networks. It is designed to provide a Layer 2 overlay network on top of a Layer 3 network by using MAC-in-UDP (MAC address in User Datagram Protocol) encapsulation. Because the VXLAN segment ID is 24 bits long, roughly 16 million logical networks can be created, versus 4,096 with VLANs. In brief, VXLAN provides the same service as VLAN does, with greater extensibility and flexibility.
How Does VXLAN Work?
VXLAN adopts Layer 3 multicast to support the transmission of multicast and broadcast traffic in the virtual network. In this environment, a VXLAN gateway device can be used to terminate the VXLAN tunnel and forward traffic to and from a physical network. The following picture, and the code sketch after the definitions below, help illustrate how VXLAN works.
Figure 2: How VXLAN works
VXLAN gateway: A VXLAN gateway bridges VXLAN and non-VXLAN environments by serving as a virtual network endpoint; for instance, it links a traditional VLAN to a VXLAN network.
VXLAN segment: A VXLAN segment is a Layer 2 overlay network over which VMs can communicate. One thing to be aware of is that only VMs within the same VXLAN segment can communicate with each other.
VNI: Virtual Network Identifier (VNI) is also called VXLAN segment ID. The system uses the VNI along with the VLAN ID to identify the appropriate tunnel.
VTEP: The VXLAN Tunnel Endpoint (VTEP) terminates a VXLAN tunnel. And the same local IP address can be used for multiple tunnels.
VXLAN header: VXLAN header carries a 24-bit VNI to uniquely identify Layer 2 segments within the overlay.
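To make these terms concrete, the sketch below builds a VXLAN-encapsulated frame with the Scapy packet library (our choice of tool; the article itself names none). The MAC addresses, IP addresses, and VNI are illustrative, while UDP port 4789 is the IANA-assigned VXLAN port.

```python
from scapy.all import Ether, IP, UDP, VXLAN  # pip install scapy

# The inner frame: an ordinary L2 Ethernet frame from one VM to another.
inner = Ether(src="00:11:22:33:44:55", dst="66:77:88:99:aa:bb") / \
        IP(src="10.0.0.1", dst="10.0.0.2")

# MAC-in-UDP encapsulation: the VTEPs wrap the inner frame in an outer
# IP/UDP header plus a VXLAN header carrying the 24-bit segment ID (VNI).
outer = (
    Ether()
    / IP(src="192.168.1.10", dst="192.168.1.20")  # VTEP-to-VTEP underlay
    / UDP(dport=4789)                             # IANA-assigned VXLAN port
    / VXLAN(vni=5001)                             # 24-bit VNI: ~16M segments
    / inner                                       # the original L2 frame
)

outer.show()  # inspect the full encapsulation stack layer by layer
```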
Overview of VXLAN Switch
In data centers, VXLAN is widely used to create overlay networks that sit on top of the physical network, enabling a virtual network of switches, routers, firewalls, and so forth. VXLAN switches are typically built for scalability and agility, and they can serve private, public, and hybrid cloud networks. Take the FS S5850-48S2Q4C 10GbE switch as an example: it comes with 48 10G SFP+ ports and 6 hybrid 40G/100G uplink ports, and it supports advanced features including MLAG, VXLAN, IPv4/IPv6, sFlow, and SNMP, making it an ideal choice for traditional or fully virtualized data centers.
Figure 3: FS S5850-48S2Q4C VXLAN switch
With the rapid development of VLAN technology and Layer 2/3 networks, more advanced network management technologies will certainly arise. VXLAN is one of them, solving today’s scalability limits and bringing more convenience to current and future networks. VXLAN-capable Gigabit Ethernet switches have been accepted as a better solution, with sufficient links and capacity to handle massive traffic in cloud environments. If you are looking for professional and cost-efficient network switch solutions or VXLAN switches for networks and data centers, FS is a good choice.
I often struggle to explain the value of ontologies to my clients. The word ‘ontology’ itself sounds complicated, academic, pompous, bombastic, and irrelevant. For that matter, the word bombastic always struck me as, well,
. . . bombastic.
Nevertheless, when you try to look up ‘ontology’ in everyone’s favorite dictionary, err, google, you get an even less helpful definition:
NOUN
- 1. the branch of metaphysics dealing with the nature of being.
- 2. a set of concepts and categories in a subject area or domain that shows their properties and the relations between them.
Even if someone understood approximately what an ontology was after reading these definitions, they would probably still scratch their heads wondering why any of this really matters. So, you see, I need a way to convey a more visceral feeling for the concept.
Let’s start with my own intuitive definition: An ontology is essentially a frame of reference that supports reasoning. It is the set of rules and facts that describes the way a system works and, in turn, supports our ability to predict how the system (and all its individual components) would likely behave in response to various internal/external stimuli.
For illustration, if we’re talking about a game of chess, then the ontological model of the game expresses its rules and facts. In chess a pawn moves one square forward under most circumstances, except on its first move, where it has the freedom to move two squares forward, and except when it has the opportunity to capture (“kill”) an opposing piece diagonal to it. If we observed enough chess games, without knowing anything about the rules of chess, we could readily infer the rules of how pawns move and play.
Likewise, any one of the other pieces on the board could be readily understood if we watched a sufficient number of games. Knowing how the pieces play is a very basic requirement before we can start to do things like formulate strategies, or predict the outcomes of chess games. While we can certainly do both without knowing any of the rules, it is almost certain that if we knew the rules, we would do a better job of determining the most probable outcome of a given move, or even an entire game. So, an ontology is a way to explicitly capture and use domain knowledge.
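In fact, the pawn fragment of that chess ontology is small enough to write down and execute. The sketch below (in Python, with coordinates and names of our own choosing, and deliberately ignoring en passant and promotion) encodes the rules just described; from those rules alone we can predict every move available to a white pawn:

```python
# Facts: board positions as (file, rank) tuples, rank increasing toward Black.
# Rules: the legal white-pawn moves described in the paragraphs above.
def pawn_moves(pos, opponent_pieces, all_pieces):
    file, rank = pos
    moves = set()
    if (file, rank + 1) not in all_pieces:            # one square forward
        moves.add((file, rank + 1))
        if rank == 2 and (file, rank + 2) not in all_pieces:
            moves.add((file, rank + 2))               # two from the start rank
    for df in (-1, 1):                                # diagonal captures only
        if (file + df, rank + 1) in opponent_pieces:
            moves.add((file + df, rank + 1))
    return moves

opponent = {(5, 3)}                                   # a black piece on e3
print(pawn_moves((4, 2), opponent, opponent))         # white pawn on d2
# {(4, 3), (4, 4), (5, 3)} -> one forward, two forward, capture on e3
```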
At an even more fundamental level we can say that an ontology helps us to nail down the definition (or nature) of each entity in a given system. In the chess example above the nature of each piece is largely determined by its initial position on the board and/or its degrees of freedom of movement on the board (forwards, backwards, sideways, diagonal, etc.). Although we use shapes to differentiate between pieces, the shape is really just for human convenience and mostly redundant. There are situations where, under certain contexts a thing will behave one way, and in other contexts it will behave in a completely different way. This confusion of circumstances gives rise to different interpretations of the nature of the thing itself. For instance, when two people look at the same data and they come to completely opposite conclusions, it is usually because they have a different ontological model of what they are looking at. In such situations the name of the game is to try and figure out which ontological model more accurately reflects the “true rules of the game.”
Recently, I happened to revisit the interrogation scene [Spoiler Alert!] of “Lord Of War” and it struck me that it serves as a perfect illustration of how/why ontologies matter. Have a look at this short clip:
In this (5 min.) scene Yuri Orlov, an international arms dealer, is being interrogated by Jack Valentine, supposedly an Interpol agent who has been unsuccessfully trying to nail Yuri for years. Jack has Yuri dead-to-rights on multiple counts of various international arms embargo violations. Jack’s world-view and his expectations of what is about to happen to Yuri are relatable, sequential, logical, and linear; it’s all very believable. It reflects the simple world-view that most of the audience probably shared when they began the movie. So, they [the audience] *get* Jack. That is, until Yuri drops a truth bomb that re-interprets the situation in a completely different way to arrive at the exact opposite conclusion regarding the expected outcome.
So, what happened? Why didn’t Jack come to the same conclusion as Yuri, before Yuri explained himself? Both Jack and Yuri are viewing the situation with the same basic evidence, data, etc. However, Yuri has a deeper, perhaps clearer, insight into the workings of the world (its basic facts + rules), as reflected by his nuanced understanding of how geopolitics actually works. Jack lacks this subtlety in his worldview. In Jack’s [worldview] model, causes and effects have a simple direct relationship and things are straightforward: You break a law, you go to prison. In Yuri’s model, things are less straightforward when you factor in short- vs. long-term objectives, interconnected and complex relationships between counterparties, and the various incentives of each of the players involved in “the game.” For instance, Yuri implies that “the enemy of my enemy . . .” [is my friend]. He also implies that he [an arms dealer] serves a very critical function [that no publicly accountable gov’t wants to be seen doing]; i.e., however depraved Yuri may seem as a person, in the great geopolitical game of chess between nation-states, Yuri is, unfortunately, “a necessary evil.” And, as it turns out, Yuri is spot on.
The mind-blowing beauty of this scene is when Yuri begins his counterargument by stating explicitly [for the audience, using Jack as a proxy] that “I will be released for the same reasons you think I will be convicted . . .”
This sort of plot twist is extremely satisfying to watch and is a tool frequently used in film to underscore an unforgettable punchline to the story.
Within moments of Yuri’s explanation, the life in Jack’s face drains away as he comes to terms with his new reality. Jack knows in his gut that Yuri is right. In reality Jack and Yuri are very likely two opponents of equal intelligence (/capability), yet only one of them read the situation correctly, because only one of them understood the underlying (hidden) rules of the game.
In a world of predictive models run amok, exposed to the same basic data and sources, the supporting ontology is often the only thing that really matters in creating or sustaining a competitive edge. Because without the correct ontology, you will get the wrong answer even if your data is complete and/or robust.
We work with enterprises every day to help them discover and apply ontologies to their data, so they can make better decisions and understand the world as it is, rather than as they hope it to be.
Despite being aware of security threats, the risky online behavior of young adults can negatively affect their future career prospects and financial standings, while leaving them vulnerable to identity theft and fraud, according to a new survey by RSA.
“The irony of these findings is that the generation that has grown up with the greatest percentage of its life knowing technology and the Internet and that claims to know about the risks of technology is the one that is ignoring the good advice,” said Sam Curry, Chief Technologist at RSA.
Research firm TRU polled more than 1,000 young adults between the ages of 18 and 24 regarding their online behavior and security precautions, and found that more than seven out of ten admit that they are not always as careful as they should be when posting and accessing information online. The research also reveals that young adults regularly make risky choices when engaging in activities such as file sharing and social networking, choices that can lead to long-lasting negative consequences and damage an individual’s reputation both online and off.
Choosing convenience over online safety
The research revealed that while young adults understand the mounting risks associated with unsafe online habits, they are not taking the appropriate actions to change those behaviors, leaving themselves vulnerable to identity theft and fraud.
While 73% of survey respondents acknowledge concern about being a victim of online fraud or identity theft, 71% also admit that despite good intentions, they are not always as careful as they should be when it comes to their personal online safety. More than 50% of all respondents admitted to both using the same password for all of their online accounts and staying logged in to their personal sites to avoid the time and hassle of logging-in every time. Additionally, more than 75% of those surveyed said most people their age are willing to accept more risk when purchasing items online in return for lower prices.
The survey also found that risky online behavior does increase exposure to threats that potentially can have long-lasting negative effects on financial history, credit scores and housing opportunities. However, 55% of those surveyed indicate they never check their credit report, and 35% do not always check bank records after making online purchases. Moreover, 31% of those surveyed admit they do not always take steps to verify a website is legitimate before submitting credit card information.
Sixty-four percent of all respondents also claim to have experienced at least one of the following:
- Been a victim of identity theft
- Lost or stolen cell phone, laptop, flash drive, credit card or mail
- A compromised hard drive, email, social network, online payment (i.e. PayPal) or other online financial account
- Photos or other personal information ended up online without their knowledge.
The survey also polled young adults regarding their online behavior and how it may affect job searches, finding that while 76% indicate they are currently searching or soon plan to begin searching for a job, 67% have posted inappropriate content, photos, and/or videos involving cigarettes, drugs, alcohol, or sex online, which could potentially limit their employment opportunities.
The complete survey is available here.
All organizations contain people, data, and means for people to use the data. A fundamental aspect of operations security is ensuring that controls are in place to inhibit people either inadvertently or intentionally compromising the confidentiality, integrity, or availability of data or the systems and media holding that data. Administrative Security provides the means to control people’s operational access to data.
ADMINISTRATIVE PERSONNEL CONTROLS
Administrative Personnel Controls represent important operations security concepts that should be mastered by cybersecurity personnel. These are fundamental concepts within information security that permeate through multiple domains.
Least Privilege or Minimum Necessary Access
One of the most important concepts in all of information security is that of the principle of least privilege. The principle of least privilege dictates that persons have no more than the access that is strictly required for the performance of their duties. The principle of least privilege may also be referred to as the principle of minimum necessary access. Regardless of name, adherence to this principle is a fundamental tenet of security, and should serve as a starting point for administrative security controls.
Although the principle of least privilege is applicable to organizations leveraging Mandatory Access Control (MAC), the principle’s application is most obvious in Discretionary Access Control (DAC) environments. With DAC, the principle of least privilege suggests that a user will be given access to data if, and only if, a data owner determines that a business need exists for the user to have the access. With MAC, we have a further concept that helps to inform the principle of least privilege: need to know.
Need to Know
In organizations with extremely sensitive information that leverage Mandatory Access Control (MAC), basic determination of access is enforced by the system. The access determination is based upon clearance levels of subjects and classification levels of objects. Though the vetting process for someone accessing highly sensitive information is stringent, clearance level alone is insufficient when dealing with the most sensitive of information. An extension to the principle of least privilege in MAC environments is the concept of compartmentalization.
Compartmentalization, a method for enforcing need to know, goes beyond mere reliance upon clearance level and requires that a person actually need access to the information. Compartmentalization is best understood by considering a highly sensitive military operation: while a large number of individuals (some of high rank) may be involved, only a subset “need to know” specific information. The others have no “need to know,” and therefore no access.
Separation of Duties
While the principle of least privilege is necessary for sound operational security, in many cases it alone is not a sufficient administrative control. As an example, imagine that an employee has been away from the office for training, and has submitted an expense report indicating $1,000,000 was needed for reimbursement. This individual happens to be a person who, as part of her daily duties, had access to print reimbursement checks, and would therefore meet the principle of least privilege for printing her own reimbursement check. Should she be able to print herself a nice big $1,000,000 reimbursement check? While this access may be necessary for her job function, and thus meet the requirements for the principle of least privilege, additional controls are required.
The example above serves to illustrate the next administrative security control, separation of duties. Separation of duties prescribes that multiple people are required to complete critical or sensitive transactions. The goal of separation of duties is to ensure that in order for someone to be able to abuse their access to sensitive data or transactions, they must convince another party to act in concert. Collusion is the term used for the two parties conspiring to undermine the security of the transaction. The classic action movie example of separation of duties involves two keys, a nuclear sub, and a rogue captain.
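The reimbursement example above translates almost directly into a control you could enforce in software. The sketch below is a minimal illustration (the class and method names are invented for this example): the submitter of a sensitive transaction may neither approve it nor print the resulting check, so abuse requires collusion.

```python
class SeparationOfDutiesError(Exception):
    """Raised when one identity tries to fill two conflicting roles."""

class Reimbursement:
    def __init__(self, submitter, amount):
        self.submitter = submitter
        self.amount = amount
        self.approved_by = None

    def approve(self, approver):
        # Critical transactions require a second, distinct party.
        if approver == self.submitter:
            raise SeparationOfDutiesError(
                f"{approver} cannot approve their own reimbursement")
        self.approved_by = approver

    def print_check(self, operator):
        if self.approved_by is None:
            raise SeparationOfDutiesError("approval is required first")
        if operator == self.submitter:
            raise SeparationOfDutiesError(
                f"{operator} cannot print a check payable to themselves")
        print(f"Check for ${self.amount:,.2f} issued to {self.submitter}")

claim = Reimbursement("alice", 1_000_000.00)
claim.approve("bob")        # a second party must act in concert
claim.print_check("carol")  # and a third party cuts the check
```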
LEARN BY EXAMPLE
Separation of Duties
Separation of duties is a hard lesson to learn for many organizations, but many only needed to learn this lesson once. One such organization had a relatively small and fledgling security department that was created as a result of regulatory compliance mandates. Most of the other departments were fairly antagonistic toward this new department because it simply cobbled together various perceived security functions and was not mindfully built. The original intent was for the department to serve primarily in an advisory capacity regarding all things in security, and for the department not to have operational responsibilities regarding changes. The result meant that security ran a lot of vulnerability scans, and took these to operations for resolution. Often operations staff members were busy with more pressing matters than patch installations, the absence of which posed little perceived threat.
Ultimately, because of their incessant nagging, the security department was given the (thankless if ever there was one) task of enterprise patch management for all but the most critical systems. Though this worked fine for a while, eventually one of the security department staff realized that his performance review depended upon his timely remediation of missing patches, and, in addition to being the person that installed the patches, he was also the person that reported whether patches were missing. Further scrutiny was applied when management thought it odd that he reported significantly fewer missing patches than all of his security department colleagues. Upon review, it was determined that though the employee had indeed acted unethically, the incident was beneficial in bringing the need for separation of duties to light. Though many departments have not had such an egregious breach of conduct, it is important to be mindful of those with audit capabilities also being operationally responsible for what they are auditing. The moral of the story: Quis custodiet ipsos custodes? Who watches the watchers?
Rotation of Duties/Job Rotation
Rotation of Duties, also known as job rotation or rotation of responsibilities, provides an organization with a means to help mitigate the risk associated with any one individual having too many privileges. Rotation of duties simply requires that one person does not perform critical functions or responsibilities without interruption. There are multiple issues that rotation of duties can help begin to address. One issue addressed by job rotation is the “hit by a bus” scenario: imagine, morbid as it is, that one individual in the organization is hit by a bus on their way to work. If the operational impact of the loss of an individual would be too great, then perhaps one way to assuage this impact would be to ensure that there is additional depth of coverage for this individual’s responsibilities.
Rotation of duties can also mitigate fraud. Over time some employees can develop a sense of ownership and entitlement to the systems and applications they work on. Unfortunately, this sense of ownership can lead to the employee’s finding and exploiting a means of defrauding the company with little to no chance of arousing suspicion. One of the best ways to detect this fraudulent behavior is to require that responsibilities that could lead to fraud be frequently rotated amongst multiple people. In addition to the increased detection capabilities, the fact that responsibilities are routinely rotated deters fraud.
Mandatory Leave/Forced Vacation
An additional operational control that is closely related to rotation of duties is that of mandatory leave, also known as forced vacation. Though there are various justifications for requiring employees to be away from work, the primary security considerations are similar to that addressed by rotation of duties; reducing or detecting personnel single points of failure, and detection and deterrence of fraud. Discovering a lack of depth in personnel with critical skills can help organizations understand risks associated with employees unavailable for work due to unforeseen circumstances. Forcing all employees to take leave can identify areas where depth of coverage is lacking. Further, requiring employees to be away from work while it is still operating can also help discover fraudulent or suspicious behavior. As stated before, the sheer knowledge that mandatory leave is a possibility might deter some individuals from engaging in the fraudulent behavior in the first place, because of the increased likelihood of getting caught.
Non-Disclosure Agreement (NDA)
A non-disclosure agreement (NDA) is a work-related contractual agreement that ensures that, prior to being given access to sensitive information or data, an individual or organization appreciates their legal responsibility to maintain the confidentiality of that sensitive information. Job candidates, consultants or contractors often sign non-disclosure agreements before they are hired. Non-disclosure agreements are largely a directive control.
Background Checks
Background checks (also known as background investigations or pre-employment screening) are an additional administrative control commonly employed by many organizations. The majority of background investigations are performed as part of a pre-employment screening process. Some organizations perform cursory background investigations that include a criminal record check. Others perform more in-depth checks, such as verifying employment history, obtaining credit reports, and in some cases requiring the submission of a drug screening.
The sensitivity of the position being filled or data to which the individual will have access strongly determines the degree to which this information is scrutinized and the depth to which the investigation will report. The overt purpose of these pre-employment background investigations is to ensure that persons who will be employed have not exhibited behaviors that might suggest they cannot be trusted with the responsibilities of the position. Ongoing, or postemployment, investigations seek to determine whether the individual continues to be worthy of the trust required of their position. Background checks performed in advance of employment serve as a preventive control while ongoing repeat background checks constitute a detective control and possibly a deterrent.
The business needs of organizations require that some individuals have privileged access to critical systems, or systems that contain sensitive data. These individuals’ heightened privileges require both greater scrutiny and more thoughtful controls in order to ensure that confidentiality, integrity, and availability remain intact. Some of the job functions that warrant greater scrutiny include: account creation/modification/deletion, system reboots, data backup, data restoration, source code access, audit log access, security configuration capabilities, etc.
Megalithic Travel Suggestions
A paper in PNAS, the Proceedings of the National Academy of Science, in April 2019 described how studies in archaeology and genetics show that Neolithic culture spread from the Fertile Crescent through Anatolia and the Aegean into Europe around 9000 BCE. It reached northwestern Europe including Britain and Scandinavia around 4000 BCE.
A pattern of megalithic monuments arose, especially for funerary purposes, around 4500 BCE in France, 3700 BCE in Britain, and 3600 BCE in Scandinavia.
The genetics and physical characteristics of individuals buried in megalithic tombs in Europe suggest that the societies constructing these tombs were socially stratified Neolithic farming cultures.
The 2019 paper presents evidence of a genetic connection across these megalithic peoples, indicating that the tradition had a single origin. The farming megalithic builders had moved into the territory of hunter-gatherer societies.
Physical Access Control
Physical access control protects both tangible and intangible assets by limiting access to only authorized individuals. Contemporary systems also provide a history of who gained access and when access was granted.
In reality, access control is a constant in people’s everyday lives. A simple key and lock on a home or office door is an access control system. More modern systems have relied on cards for 25 years or so, in both low- and high-security technologies (e.g., 125 kHz proximity cards vs. encrypted smartcards), with the advantage that cards are much easier to manage than keys and provide a record of who is accessing the facility. But like a simple lock and key, cards are vulnerable to loss or theft, and in some cases to an insider lending out the card. Furthermore, like PINs and passwords for logical access control systems, cards can be costly to administer and support over their lifecycle due to being lost, damaged or stolen.
The highest-security facilities are now migrating to biometrics-based authentication for access control, in both single-factor and multi-factor approaches. More than fingerprint solutions, iris recognition lends itself to single-factor physical access control due to its inherently very high resistance to false matches (sometimes called false accepts), in which the system would allow an imposter to access the facility. Two-factor solutions typically require swiping an ID card and providing a biometric to gain access, but they slow down entry to the facility and typically require more administrative support for lost cards.
Single factor iris solutions, in other words, can be both more secure and less expensive than card only or dual factor implementations.
Many enterprises and governmental institutions are applying iris recognition to their most secure facilities, such as data centers, high-value depositories, laboratories, and IT system control rooms, among others.
An approach to attract and capture elusive immune disease cells in vivo
T cells, a subtype of white blood cells, play key roles in cell-mediated immunity, be it to fight infections and cancer or, when corrupted, to react against the body’s own cells in more than 80 autoimmune diseases, including type I diabetes, multiple sclerosis, rheumatoid arthritis and others. However, isolating disease-related T cells from the body to better study or eliminate them poses a formidable challenge to researchers and clinicians.
Wyss Institute researchers are generating implantable and injectable biomaterials to concentrate and trap disease-related T cells.
To this end, and for a limited period of time, they place porous scaffolds with biological components under the skin.
These spiked scaffolds can attract and ‘trap’ circulating T cells, which can be easily isolated once the biomaterial is retrieved.
Using this approach, researchers can get access to disease-specific autoreactive T cells to study and better understand their detrimental functions.
They can even deploy their T cell traps to potentially reduce the number of tissue-damaging T cells therapeutically, or ask how drugs affect disease-specific T cell populations in patients.
The Wyss team provided proof-of-concept for the method by applying it to animal models of human type I diabetes and isolating diabetes-promoting T cell populations.
Wyss researchers are also expanding the use of T cell traps to capture T cells that play pivotal roles in diseases other than autoimmune ones, which will enable them to study these cells’ functions or exploit them therapeutically.
The Zero Trust security model is a concept that has been around for several decades but was popularized by John Kindervag’s seminal paper Build Security Into Your Network’s DNA: The Zero Trust Network Architecture published by Forrester in 2010.
Essentially it defines an environment where there are literally no trusted devices, networks, or users. Previous concepts defined a perimeter where devices such as firewalls, IPSs, and the like would protect an enterprise (where everything is trusted) from the Internet or business partners (where no trust could exist).
So, what is it?
The Zero Trust security model merges networking and security for a holistic approach that assumes assets, users and resources need protection from each other – not just from the outside. It is a set of cybersecurity paradigms that move defenses from static, network-based perimeters to focus on protecting data.
A Zero Trust Architecture (ZTA) uses these principles to plan industrial and enterprise infrastructure and workflows. It assumes there is no implicit trust granted to assets or user accounts based solely on their physical or network location (i.e., local area networks versus the internet) or based on asset ownership (enterprise or personally owned). Authentication and authorization (both user and device) are discrete functions performed before a session to an enterprise resource is established.
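A rough way to see the shift is to sketch the per-session decision point in code. The example below is our own illustration, not anything prescribed by NIST SP 800-207; the essential point is that network location appears in the request but never in the grant decision.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool   # user identity verified for this session
    device_attested: bool      # device identity and posture verified
    user_entitled: bool        # authorization for this specific resource
    network: str               # e.g. "corporate-lan" or "internet"

def grant_session(req: AccessRequest) -> bool:
    # Note what is absent: no `if req.network == "corporate-lan": allow`.
    # Location confers no implicit trust in a Zero Trust Architecture.
    return req.user_authenticated and req.device_attested and req.user_entitled

print(grant_session(AccessRequest(True, True, True, "internet")))        # True
print(grant_session(AccessRequest(True, False, True, "corporate-lan")))  # False
```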
Zero Trust is a response to enterprise network trends that include remote users, bring your own device (BYOD), and cloud-based assets that are not located within an enterprise-owned network boundary. Zero Trust focuses on protecting resources (assets, services, workflows, network accounts, etc.), not network segments, as the network location is no longer seen as the prime component to the security posture of the resource.
It’s been around forever, so why now and what has changed?
There has been a dramatic expansion of the Internet of Things (IoT) and Operational Technology (OT) usage of IP protocols. These environments can consist of thousands to millions of devices that supply mission-specific real-time data, and most do not support users. These types of systems are often used in physical security (cameras, keypads, etc.) or industrial controls (PLCs, SCADA systems, etc.).
Protecting these systems has become an area of national concern. The U.S. Departments of Homeland Security and Defense (DHS and DoD) have significant concerns about protecting critical infrastructure. As a result, several Executive Orders have been issued concerning requirements for strengthening security. The US National Institute of Standards and Technology (NIST) developed the Framework for Cyber-Physical Systems (NIST Special Publication 1500-201) in response to these issues.
Starting in 2018, NIST began work in earnest to develop a formal architectural standard on Zero Trust to support these initiatives. This was published in August 2020 as the Zero Trust Architecture (NIST SP 800-207) and formally establishes requirements for products and services in both security and networking. This provides enterprises with a standard with which to compare vendor offerings and a set of design paradigms that can be used to protect their environments.
What do ZTA Systems provide?
Forrester has developed a series of papers reviewing the emerging offerings and explicitly argues that enterprises need to merge their networking and security work or sunset their businesses altogether. They describe the previous model as moats and castles – where security teams provided kit to protect enterprise castles – a largely perimeter-based view.
Forrester’s view is that ZTA defines a Zero Trust Edge (ZTE). A Zero Trust Edge solution securely connects and transports traffic, using Zero Trust Architectural principles, in and out of remote sites leveraging mostly cloud-based security and networking services.
Potential Value to a Business
All enterprises are subject to regulatory and privacy oversight. However, specific industries such as utilities are subject to U.S. executive orders regarding critical infrastructure. One only needs to look at the recent headlines regarding Colonial Pipeline to see the impact of inadequate security and the difference a ZTA approach would have made.
As a result, systems developed to support the federal guidelines in the Zero Trust Architecture and the Framework for Cyber-Physical Systems will both provide the protection needed and establish an enterprise as following best practices for its industry.
In addition, integrated ZTA/ZTE systems are typically software-defined and can replace significant amounts of existing networking and security hardware and software. So, an integrated approach can also simplify management and save money over the long term.
Call to Action
Executives should be examining their IT, cybersecurity, physical security, compliance, and risk teams. The work should include examining workflow and budgets. This will involve looking for synergies, examining current incentives, and restructuring the existing silos to create a more holistic approach. Zero Trust principles should be built into the modified organization, and methods and procedures should be developed to establish a “Whole of Enterprise” approach to protecting critical data.
NetCraftsmen consultants have a long history of working with clients in regulated industries such as utilities, healthcare, and financial services. We can work with your teams to identify and mitigate the risks your firm faces in a cost-effective and comprehensive manner.
An intrusion detection system, or an IDS, is any kind of device or application that monitors your network for malicious activity that could be the sign of a cyberattack, or for policy violations which could point towards a data or compliance breach.
If you’re new to the idea of intrusion detection systems, let this guide be your… guide! We’ll cover how an intrusion detection system can help your business, what different options you have for this technology, and how you can use the data your IDS gathers to boost your network security overall. Hold on tight!
Understanding how an intrusion detection system supports your business
There are two main types of detection methods that an intrusion detection system will use. These are either a signature-based approach, or an anomaly-based approach. These are common categories that you might have seen with other antivirus or cybersecurity efforts.
With a signature-based approach, your IDS will look for known malicious patterns in the way that traffic behaves, often called signatures. The problem with this approach is that when new attack varieties arise, it is difficult for intrusion detection systems to spot them, as they are not “known” intrusions. It can also take some time for a database to catch up with known attack patterns, which can leave you with a dangerous gap.
In contrast, a newer focus is to use an anomaly-based approach, with or without machine learning and statistics to build a model of usual traffic and communications. This kind of software or device will look (and alert) for anything unusual that doesn’t fit the pattern of regular traffic. This means you need a baseline for what constitutes “regular behavior” and then you’ll be alerted to anything outside of this pattern.
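The contrast between the two approaches is easy to sketch in a few lines of Python. The signature and thresholds below are toy examples of our own, far simpler than what real IDS engines use, but they show the trade-off: the signature only fires on patterns it already knows, while the statistical check flags anything far from the learned baseline.

```python
import re
from statistics import mean, stdev

# Toy signature database: one known-malicious pattern (a crude SQLi probe).
SIGNATURES = {"sql_injection": re.compile(r"('|%27)\s*or\s*1=1", re.IGNORECASE)}

def signature_alerts(payload):
    """Signature-based: match known patterns; novel attacks slip through."""
    return [name for name, pat in SIGNATURES.items() if pat.search(payload)]

def anomaly_alert(baseline_pkts_per_min, observed, threshold=3.0):
    """Anomaly-based: alert when traffic strays too far from 'regular'."""
    mu, sigma = mean(baseline_pkts_per_min), stdev(baseline_pkts_per_min)
    return abs(observed - mu) > threshold * sigma

baseline = [980, 1010, 995, 1005, 1002, 990, 1008]   # learned normal traffic
print(signature_alerts("id=1' OR 1=1 --"))  # ['sql_injection']
print(anomaly_alert(baseline, 4500))        # True: possible flood or scan
print(anomaly_alert(baseline, 1003))        # False: within the usual pattern
```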
You can learn more about the techniques of the latest intrusion detection systems, including how statistics and modelling are used, here.
Intrusion detection vs firewalls
One common question we hear a lot is: are IDS the same as firewalls? The answer is a firm no! While a firewall will block access to stop malicious attacks from making it into your environment (or, in the case of next-gen firewalls, to stop an attack making it from one part of your data center to another), an IDS will provide information about a suspected intrusion that has already occurred. In this way, a firewall, which turns away malicious traffic, is more of an intrusion prevention system than an intrusion detection system.
The different kinds of intrusion detection systems
There are a few main kinds of intrusion detection systems, and you’ll need to think hard about which ones will work best for your business requirements, or for your own clients. Let’s run through some examples:
NIDS: This stands for Network Intrusion Detection System, and you can set this up in a specific place within the network. It will observe all the traffic that travels on the subnet, and scan for abnormal behavior using the techniques described above. You could place the NIDS where the firewalls are, and see if there is a brute force attack occurring.
HIDS: In contrast, a Host Intrusion Detection System will run on an independent host or on a device, monitoring what comes in and out of the device itself. This is usually reserved for machinery or assets which perform a specific task or type of communication, and where you want administrators to get an alert if something out of the norm occurs, or the communication changes.
PIDS: This acronym stands for Protocol-based Intrusion Detection System, and it will monitor the HTTPS protocol between the server and the users or devices that are communicating with it. The web server is secured by this monitoring and validation of the protocol on an ongoing basis.
VMIDS: A Virtual Machine-based Intrusion Detection System is deployed not on-premises but remotely, via a virtual machine. This is a newer kind of intrusion detection system, and it's great for MSPs, as they don't have to physically go to client offices in order to implement the IDS. Of course, if your internet connection fails, this could cause a problem.
Using the data from intrusion detection systems to improve network security
When using the data that you get from IDS to improve your network security, it’s important to consider a few shortcomings. For example, this technology has been shown to have a high false alarm rate, so you want to ensure you have a process in place to limit alert fatigue – for example a traffic light system for administrators or security teams. If you use a signature-based detection strategy, remember that new threats might not be caught, and databases could be out of date.
It’s also essential to recognize that IDS is a detection system, not an intrusion prevention system. An IPS will offer controls to keep malicious attackers out and away from your crown jewel applications and assets, while an IDS will usually only alert you to a problem that is taking place already.
It’s therefore very important to use an IDS as part of a multi-layered security strategy, not as your first and last point of defense. For example, make sure your network protocols are strong, and that you have tight identity and authorization management policies in place, and a killer antivirus suite. You’ll also need another solution for encrypted packets, which IDS usually can’t process. Finally – you’ll want a strong security team or IT stakeholder who can look at the results of an intrusion detection system, and make quick and smart decisions on next steps.
Looking for a full suite of security solutions that can protect your clients from malware, malicious intent, and the risk of non-compliance? Check out the awesome integrations Atera partners with as standard.
Once described as “the right to be let alone,” privacy is now best described as the ability to control data we cannot stop generating, giving rise to inferences we can’t predict.
With an estimated 2.5 quintillion bytes of data generated each day, the ongoing challenge is how to control the data we can’t stop generating and protect it from ever-increasing malicious threats.
In the wake of increasing privacy concerns and the arrival of new regulations, protecting privacy has never been more critical to ensuring business survival. While losing customer data can damage a brand’s reputation, trust, and revenue, preventing valuable corporate information from leaving the confines of the business and falling into the wrong hands is critical.
How is Privacy Compromised?
Company data is being stolen – often unknowingly. Every day, the devices your organization uses run tens of thousands of transactions as employees browse the internet or use applications. A high proportion of device transactions take place in the background, without the user’s knowledge – often resulting in sensitive company data unknowingly being sent to unidentified servers in regions where high levels of cyber-attacks originate.
Organizations don’t know what they can’t see, so most are unaware that unauthorized data is leaving their environment and that their privacy is being compromised.
The highly infectious coronavirus is a public health emergency on a level not seen since the Spanish flu over a century ago, killing over a hundred thousand people across the world. By applying tools like artificial intelligence and web intelligence to social media, authorities can prepare for clusters of the disease and rapidly deploy health officials to prevent further spread of the virus.
Along with the Centers for Disease Control and Prevention (CDC) and the World Health Organization (WHO), regional authorities have increasingly focused on how to curb the spread of the deadly virus in their communities, and they have adopted analytics, machine learning, and AI capabilities to automatically comb through social media and extract critical data.
Unlike the Spanish flu and other deadly virus outbreaks like SARS in 2003, authorities also have social media to monitor and reduce the spread of the pandemic and manage social distancing.
While researchers have been using mapping tools to track the spread of diseases for several years, social media can serve as an early indicator that something is going on in a community, giving medical professionals time to prepare hospitals for future outbreaks.
Cobwebs’ AI-driven search engines, capable of automatically sifting through vast amounts of critical data across all layers of the internet, including open sources and the dark web, optimize investigations and provide authorities with precise intelligence much faster than ever before.
Social media posts that can be analyzed include those from people who say they have just returned from abroad, individuals who post that they are experiencing symptoms similar to those of the virus, people who are concerned that they have been exposed, and more.
Natural language processing can also be used to deconstruct a post on social media to allow authorities to distinguish between someone discussing news of the virus and someone posting about how they feel.
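As a purely illustrative stand-in for that kind of analysis (this is simple keyword matching of our own devising, not the vendor's NLP), the sketch below separates first-person symptom reports from posts that merely discuss news of the virus:

```python
FIRST_PERSON = {"i", "i'm", "my", "me", "we"}
SYMPTOMS = {"fever", "cough", "tired", "breath", "taste", "smell"}
NEWS_CUES = {"cases", "reported", "headline", "government", "article"}

def classify_post(text):
    """Toy classifier: first-person wording plus a symptom term suggests a
    self-report; reporting vocabulary suggests discussion of the news."""
    words = set(text.lower().replace(",", " ").replace(".", " ").split())
    if words & FIRST_PERSON and words & SYMPTOMS:
        return "possible symptom report"
    if words & NEWS_CUES:
        return "news discussion"
    return "other"

print(classify_post("I'm running a fever and my cough won't stop"))
# possible symptom report
print(classify_post("Government reported 300 new cases today"))
# news discussion
```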
Smart analysis tools then integrate all the data mined from various sources and conduct an automated and predictive analysis on the individual, connecting all the dots to recognize patterns and revealing hidden links to generate even further insights.
Even with minimal leads and minimal resources, authorities using the advanced engines can uncover an individual’s past locations and visits, social connections, and more. An analysis of social media posts, combined with the other data collected, can predict who is most likely to contract the virus.
Machine learning tools, which can detect images and attributes in images across the social media sites, can identify those who may be at risk of contracting the virus and provide authorities with an interactive connections graph.
Authorities will also be able to receive real-time alerts according to keywords or topics, providing them the opportunity to notify the individual and place them in quarantine to prevent the spread of the virus to others.
With Cobwebs’ automation, artificial intelligence, and machine learning technologies, regional health care professionals can gain complete situational awareness and prevent the further spread of a deadly virus.
The old paradigm for digital products could be characterized thusly: build IT infrastructure to support a service and “bring” your customers to the application. As many developers have learned, changing expectations on the part of both human and machine users have made high performance an essential part of application value.
The problem is that the client-server architecture that has long characterized application development used to assume that users were near the application. The growing use of mobile devices such as smartphones connecting to applications over the public internet is just one of the trends that highlighted a key issue with this approach: latency.
Latency: In networking, latency is the time taken by a unit of data (typically a frame or packet) to travel from its originating device to its intended destination.
Note that in a client-server architecture, data sometimes needs to go from client to server and back to complete a function or request. This latency is referred to as round trip time (RTT).
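A quick way to get a feel for these numbers is to time a TCP handshake yourself. The sketch below is a rough probe, not a proper measurement tool (real tools average many samples and separate out connection-setup overhead), and the hostname is just a placeholder:

```python
import socket
import time

def tcp_rtt_ms(host, port=443, timeout=3):
    """Time a TCP connect: roughly one network round trip, plus overhead."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # the handshake completing is the round trip we are timing
    return (time.perf_counter() - start) * 1000

print(f"RTT to example.com: {tcp_rtt_ms('example.com'):.1f} ms")
```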
Distance is one of the biggest obstacles to overcome when trying to reduce latency. Latency between San Francisco, CA and New York City, NY is roughly 70 to 80ms round trip. Considering that, in the case of web pages, over 50% of users abandon a site if it takes longer than 3 seconds to load, it doesn’t take too many missed or delayed packets for latency to add up and kill the user experience.
The logical conclusion is that moving application logic closer to the end-user is key to performance. This is true most of the time, and location informs much of what is being called edge computing. We’ll explore other factors that influence latency in later articles.
One source of definitions about where edge computing is located is the Open Glossary of Edge Computing, an open-source project under the stewardship of The Linux Foundation.
Defining a view of what edge computing is, the report states:
Edge Computing: The delivery of computing capabilities to the logical extremes of a network in order to improve the performance, operating cost and reliability of applications and services.
This definition allows for edge services to exist at different layers that extend from the ‘core’ or ‘central’ cloud. A few key terms from the study (see diagram 1):
• The device edge refers to edge computing resources on the downstream or device side of the last mile network. These include laptops, tablets, and smartphones as well as connected automobiles, environmental sensors and traffic lights.
• The access edge is the part of the infrastructure edge closest to the end-user and their devices. Edge data centers will be placed at regular intervals in urban and suburban environments such as the cable or telco headend or the base of a cell tower. In terms of latency, these resources are expected to be located within 5ms to 10ms of the device edge.
• The aggregation edge refers to a portion of the edge infrastructure which functions as a point of aggregation (located in regional datacenters, in diagram 1) for multiple edge data centers deployed at the access edge sublayer. For example, a CDN can act as an aggregation layer by caching content and performing functions on requests before delivering them to core cloud (or dedicated ‘origin’) infrastructure. These resources are generally expected to be located within 50ms of the device edge.
• The infrastructure edge refers to IT resources that are positioned on the network operator or service provider side of the last mile network. The infrastructure edge is a broader term that includes both aggregation and access layers. All outer layers of the model may still in many cases communicate with an application or data residing in a cloud data center (a simple placement sketch follows this list).
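Putting the quoted latency bounds together, a rough placement helper might look like the following (the thresholds are the article's figures, not a standard):

```python
# Toy placement helper based on the latency bounds quoted above.
def pick_edge_layer(required_latency_ms: float) -> str:
    if required_latency_ms <= 10:      # access edge: ~5-10 ms of devices
        return "access edge"
    if required_latency_ms <= 50:      # aggregation edge: ~50 ms of devices
        return "aggregation edge"
    return "core/central cloud"

for requirement in (5, 30, 200):
    print(f"{requirement:>4} ms budget -> {pick_edge_layer(requirement)}")
```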
In summary, there are several different edge locations where technology vendors are aiming to supply products and services. The location of the application (or even just component parts) and the source of data will all play a role in which layer the developer chooses to run their application. In subsequent articles, we'll examine how developers might use these edges for different consumer and enterprise services.
Source: Edge Industry Review – EdgeIR.com
Bucking a central tenet of biology, researchers at the University of California San Diego and their colleagues have discovered evidence for a new path of evolution, and with it a deeper understanding of how quickly organisms such as viruses can adapt to their environment.
Describing their findings in the March 30 issue of the journal Science, UC San Diego biologists conducted a series of experiments with a bacterial virus and found that it could infect “normal” hosts, as expected, but also — through a process previously unseen in evolution — acquired an ability to infect new host targets.
The researchers say their findings, which address longstanding mysteries of how genes acquire new functions and how mutations arise to ease transmission from one host to another, could be applied to investigations of viral diseases such as Zika, Ebola and bird flu.
“This research shows us that viruses are much more adaptable than previously anticipated,” said Justin Meyer, a UC San Diego Biological Sciences assistant professor and the paper’s senior author.
“By learning how viruses achieve evolutionary flexibility, we have new insight into how to set up road blocks to stop the emergence of new diseases.”
Viruses infect by attaching themselves to molecular receptors on the surface of cells.
These receptors are the “locks” that viruses must open to enter cells.
The “keys” to the locks are viral proteins called host-recognition proteins.
Researchers working in this area have focused on how mutations alter these protein keys — and what changes allow them to access new locks.
Scientists have known for years that viruses can gain new keys with relatively few mutations but they have not solved the mysteries of how these mutations first appear.
This question led to a collaborative effort with researchers from UC San Diego, the Earth-Life Science Institute in Tokyo and Yale University.
Katherine Petrie in Meyer’s laboratory led the project’s experiments on lambda, a virus that infects bacteria but not humans and allows broad flexibility in lab testing.
The researchers found that lambda overcomes the challenge of using a new receptor by violating a well-accepted rule of molecular biology through which genetic information is translated into a protein — the molecule that makes up living cells and viruses.
Petrie and colleagues found that a single gene sometimes yields multiple different proteins. The lambda virus evolved a protein sequence prone to structural instability, resulting in the creation of at least two different host-recognition proteins.
Fortunately for the virus — but not its host — these different types of proteins can exploit different locks.
“We were able to capture this evolutionary process in action,” said Petrie, the lead author of the study.
“We found that the protein’s ‘mistakes’ allowed the virus to infect its normal host, as well as different host cells.
This nongenetic variation in the protein is a way to access more functions from a single DNA gene sequence. It’s like a buy-one-get-one-free special for the protein.”
The researchers are now looking for further examples of their newly discovered evolutionary phenomenon and seeking evidence for how common it is.
They are also moving down in scale to probe the details of the new pathway to focus on the processes of individual molecules.
“This is a very atypical adaptation in that it’s an evolutionary innovation,” said Meyer.
In addition to Petrie and Meyer, the study’s coauthors include Nathan Palmer, Daniel Johnson, Sarah Medina, Stephanie Yan and Victor Li of UC San Diego and Alita Burmeister of Yale University. Funding for the research was provided by the Earth-Life Science Institute Origins Network (funded by the John Templeton Foundation) and the National Science Foundation.
- Katherine L. Petrie, Nathan D. Palmer, Daniel T. Johnson, Sarah J. Medina, Stephanie J. Yan, Victor Li, Alita R. Burmeister, Justin R. Meyer. Destabilizing mutations encode nongenetic variation that drives evolutionary innovation. Science, 2018 DOI: 10.1126/science.aar1954 | <urn:uuid:adbd3b98-71c8-4b4d-9c15-c48fd8a89dc3> | CC-MAIN-2022-40 | https://debuglies.com/2018/03/30/virus-found-to-adapt-through-newly-discovered-path-of-evolution/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00767.warc.gz | en | 0.939417 | 860 | 3.515625 | 4 |
Researchers have found that even with an advanced encryption scheme in place, more than 100 million Internet-of-Things (IoT) devices from thousands of vendors are vulnerable to a downgrade attack that could allow attackers to gain unauthorized access to the devices.
The issue resides in the implementation of Z-Wave protocol—a wireless, radio frequency (RF) based communications technology that is primarily being used by home automation devices to communicate with each other.
Dubbed Z-Shave by the researchers, the downgrade attack makes it easier for an attacker in range during the pairing process to intercept the key exchange, and obtain the network key to command the device remotely.
Researchers found the vulnerability while comparing the process of key exchange using S0 and S2, wherein they noticed that the node info command, which contains the security class, is transferred entirely unencrypted and unauthenticated, allowing attackers to intercept it or broadcast a spoofed node info command without setting the security class.
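A toy simulation of why this matters (the message shape below is invented for illustration and is not the real Z-Wave frame layout): because the advertisement is neither encrypted nor authenticated, nothing stops a nearby attacker from rewriting the advertised security classes before the controller sees them.

```python
# Toy model: an unauthenticated capability advertisement enables a downgrade.
def attacker_in_range(node_info: dict) -> dict:
    tampered = dict(node_info)
    tampered["security_classes"] = ["S0"]  # strip the S2 capability
    return tampered

def controller_pick_scheme(node_info: dict) -> str:
    # The controller trusts the advertisement and pairs with the strongest
    # scheme the device *claims* to support.
    return "S2" if "S2" in node_info["security_classes"] else "S0"

advertisement = {"device": "door lock", "security_classes": ["S2", "S0"]}
print(controller_pick_scheme(advertisement))                     # S2
print(controller_pick_scheme(attacker_in_range(advertisement)))  # S0
```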
Read more: The Hacker News | <urn:uuid:cc684106-961a-4f76-b7ba-5c5f34790dd6> | CC-MAIN-2022-40 | https://www.globaldots.com/resources/blog/z-wave-downgrade-attack-left-over-100-million-iot-devices-open-to-hackers/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00767.warc.gz | en | 0.940425 | 199 | 2.6875 | 3 |
Behavioral biometrics is the field of study related to the collection and simultaneous analysis of unconscious behavior to accurately identify individuals. This includes traits such as how people walk, talk, hold mobile phones, look at things, gesture, and stand.
Most of the time, this data reflects human behavior, but some scientists analyze animal behavior, such as the wingbeats of a bird. They use the data to analyze a specific species and even individual creatures.
The development of behavioral biometrics can be traced back to military defense programs like DARPA (Defense Advanced Research Projects Agency). However, most behavioral data comes from private security companies that are working with banking or financial institutions. These companies track this biometric data through sensors, cameras, and technological devices based within the institution.
Not surprisingly, there are not many public databases for behavioral biometrics, due to the obvious security issues that would arise. Those that are available to the public tend to provide data on animals or insects.
As noted above, behavioral biometrics can measure how a person types, holds a phone, walks, speaks, and stands. It may even track their eye movement or the gestures they use.
However, the most important attribute of this data is the immediate, real-time collection and assessment of data. Naturally, to protect a client’s bank account, the biometrics security system must evaluate their typing and mobile phone holding behavior within the few minutes it may take for the intruder to log in to a client’s account and make a transaction.
Institutions of every kind, whether commercial, public, or private, can make use of behavioral biometrics to keep their data and assets secure. Other uses, such as insect species identification or animal behavioral study, remain secondary.
In addition to complete security of the data itself, a quality data set has a fair amount of accurate historical data on each individual it measures. Since some individuals infrequently visit the website or physical locations of whatever institution collects this data, there may not be enough information on them to ensure accuracy. Therefore, a good dataset should collect a representative sample of behavior on each client.
TypingDNA: U.K. DVSA brings their driving test into the future with Capgemini and TypingDNA
BioCatch: Top LATAM Bank Reduces Social Engineering Fraud Targeting Mobile Users by 67% With Behavioral Biometrics
Israeli-founded cybersecurity firm BioCatch announced on Wednesday that four major global banks – Barclays, Citi, HSBC and National Australia Bank (NAB) – are investing $20 million in the company, extending its recent Series C fundraising round to $168 million. Investors Industry Ventures and existing shareholders American Express Ventures, CreditEase, Maverick Ventures and OurCrowd participated in that round.
BioCatch was founded in 2011 by entrepreneurs Avi Turgeman and the late Benny Rosenbaum. With over 50 patents, the company delivers behavioral biometrics analyzing human-device interactions to protect users and data. Financial institutions and enterprise companies use BioCatch’s tech to reduce online fraud and protect against a variety of cyber threats.
KHIPU Networks Cyber Security consists of twenty-five security products maintained by experts in technology all over the world. These services can be divided into two main areas: Next-generation networking and advanced cyber security.
B2BSignals Cybersecurity Review is designed to help users to conduct research and comparison among cybersecurity solutions.
Authentication & Verification can help to authenticate nearly every U.S. consumer by tapping into the unique cross-industry data in the ID Network.
As summer comes to an end and we start to prepare our children for the beginning of the new school year, we should reassess the ways we keep our children safe online. Keeping your children safe online begins with educating yourself on the dangers and teaching your children how critical it is to stay safe. Remember, most children share characteristics that can pose challenges when trying to keep them safe: innocence, a desire for independence, curiosity, and fear of punishment. Here are a few things you can do to help your child stay safe on his or her devices.
A sure way to maintain a watchful eye on your child is to keep their computer in a public area when they are using it, like the kitchen. Although it may not be wise to hover, keeping the computer and other internet-connected devices they use in plain view can help protect your child from accessing a site they probably shouldn't.
Next, set rules and boundaries on your devices that have internet access. The most effective way to ensure safe online behavior is to teach your children about the dangers of certain activities and the reasoning behind the rules you've set. Children don't want to be told not to do something, but they still need to understand why certain things would be a poor decision. A few topics of limitation might include: disallowing online chatrooms because they are filled with people who may pose as someone else or try to lure them in, limiting the purchases you will make for them online, specifying what programs your child can use on the computer, and setting downloading privileges.
Some of the boundaries that must be established should be coupled with setting parental controls on their devices. We all know that when you tell a child that they can’t do something, they will go ahead and do it to learn for themselves. Sometimes it can be too risky to trust that they will not experiment with the boundaries. We recommend setting stronger parental controls for younger children and as they get older, earn trust, and learn about the dangers online, parental controls can be lessened.
Last, we all make mistakes… Make sure your child can come to you if they feel like they did something wrong. Children fear punishment, so it’s essential for you to let them know that they did the right thing by coming to you and that we all make mistakes. If you keep communication with your child open like this, problems with devices and internet access can be caught early.
The bottom line is there’s really no better time than the present to become a LibertyID member for identity theft restoration protection. LibertyID provides expert, full service, fully managed identity theft restoration to individuals, couples, extended families* and businesses. LibertyID has a 100% success rate in resolving all forms of identity fraud on behalf of our subscribers.
*Extended families – primary individual, their spouse/partner, both sets of parents (including those that have been deceased for up to a year), and all children under the age of 25 | <urn:uuid:345f5ba9-73f3-49e8-813b-1b881dedfc0a> | CC-MAIN-2022-40 | https://www.libertyid.com/blog/how-to-keep-your-children-safe-online/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00767.warc.gz | en | 0.963153 | 621 | 3.296875 | 3 |
What is Embedded Systems Software?
Embedded systems software can be defined as specialized programming tools in embedded devices that facilitate the functioning of the machines. The software manages various hardware devices and systems. The basic idea behind embedded systems software is to control the functioning of a set of hardware devices without compromising on the purpose or the efficiency.
Embedded systems software can be compared to the operating systems in computers. Much like how operating systems control the software applications in computers, embedded systems software controls various devices and ensures their smooth functioning. Ideally, this software doesn't require user input and can function independently on preset parameters.
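As a flavor of that, here is a MicroPython-style control loop that runs forever on preset parameters with no user input (the pins, threshold, and wiring are hypothetical and board-specific):

```python
# MicroPython-flavored sketch of an embedded control loop: no user input,
# just preset parameters. Pin numbers and wiring are hypothetical.
import time
from machine import ADC, Pin

TEMP_SENSOR = ADC(Pin(26))      # analog temperature input
HEATER = Pin(15, Pin.OUT)       # digital output driving a relay
SETPOINT = 30_000               # preset threshold in raw ADC units

while True:
    reading = TEMP_SENSOR.read_u16()
    HEATER.value(1 if reading < SETPOINT else 0)  # simple bang-bang control
    time.sleep_ms(250)
```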
Devices ranging from something as simple as a microwave to something as complex as a detonator can all be controlled by embedded systems software. The software can be adjusted and calibrated as needed, and the device can be connected to remotely or networked with other devices. It is for this reason that embedded systems hacking is a risk.
The complexity of embedded systems software varies according to the devices being controlled, as well as the usage and end goal. Compared to firmware, which acts as a liaison with operating systems, embedded software is more self-reliant and directly coded.
HCLTech is constantly evolving and expanding its offerings in the embedded systems domain and takes pride in partnering with organizations across verticals. | <urn:uuid:2f6817cb-ca8f-4058-851d-f8e718f2ebd0> | CC-MAIN-2022-40 | https://www.hcltech.com/technology-qa/what-is-embedded-systems-software | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00767.warc.gz | en | 0.933992 | 277 | 3.34375 | 3 |
Data is becoming increasingly unstructured. As data is produced across laptops, desktops, smartphones, and other devices, consolidating and defining it has become challenging. However, such data should be merged, defined, and made available to the people who need the information for decision making. So far, data consolidation either never happened or happened very infrequently across all verticals. This has forced decision makers to guess at trends and reach conclusions based on whatever data was available. However, today things are different, and data consolidation and definition can be handled properly.
Incorporating cloud computing can change the face of computing around your company. There are also practical things you can do with your data banks, such as centralizing them and linking your mobile devices to them for downloading or uploading information on the go. You can easily define policies for your enterprise's mobile employees and automate the process of backing up data from a plethora of online systems. This makes it easy to collect, categorize, index, and present data shortly after it is generated.
Loss of mobile devices should not be disastrous. There are ways you can prevent data breaches. For instance, you can run all your mobile device data on a very "thin" client and store all your information in an Internet-based repository. As a result, the loss of your mobile device will have no real impact: it will just be the loss of a piece of hardware instead of the loss of hardware and important information. Encrypting all of the data on the device adds further security; crooks or unauthorized persons who get hold of your mobile device will not be able to access the data on the device, or in your Internet-based storage vaults, without the necessary authentication.
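A minimal sketch of device-side encryption before upload, using Python's widely used cryptography package (key handling is deliberately simplified here; in practice the key would live in a secure enclave or key-management service, never beside the data):

```python
# Encrypt data on the device before it ever reaches the online vault.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # keep this key OFF the storage server
cipher = Fernet(key)

record = b"customer ledger, Q3 figures"
blob = cipher.encrypt(record)    # this ciphertext is safe to sync upstream

# Without the key, a lost phone or breached vault yields only ciphertext.
print(cipher.decrypt(blob) == record)  # True, but only for the key holder
```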
Mobile access and computing have brought a paradigm shift in the manner in which business is carried out. Thus far, some people still question whether mobile computing is really "ready" to anchor mission-critical data. No doubt, plenty of uncharted and unexploited territory remains. Before deciding to take the leap, you will need to consider and evaluate the possible risks to your enterprise. In addition, mobile computing architectures need to reach maturity, and you should scrutinize the applications before concluding that it is risk-free to extend your application platform and migrate to mobile computing. Application developers and system architects can face glitches and confidentiality expectations when it comes to compliance with statutory privacy requirements.
Hopefully, the above discussion has made you aware of the facts when it comes to anchoring data generated on mobile devices.
A group of security researchers have discovered a series of vulnerabilities in Electron, the software underlying popular apps like Discord, Microsoft Teams, and many others, used by tens of millions of people all over the world.
It is not uncommon for developers to use other projects, frameworks and libraries as building blocks for their projects. Building on proven code makes sense: It saves time, it is easier for others to get involved, and everyone benefits from all the layers of solved problems in the existing codebase.
The problem with building software on existing foundations, provided by others, is that its developer may not fully understand the security implications of certain decisions or configurations. And they need to rebuild their own application whenever a security vulnerability is fixed in the software they're building on top of, and then distribute that update to their users.
Probably the most famous example of such a building block vulnerability is Log4Shell. Log4Shell is a vulnerability that was found in Log4j, an open source logging library written in Java that was developed by the Apache Software Foundation. Millions of applications use it, and some of them are enormously popular—such as iCloud, Steam, and Minecraft—so the impact of the vulnerability was enormous.
The chances of applications harboring out-of-date software underpinnings are high. And the reservoir of known bugs that are fixed in, say, Chrome, but not yet fixed in Electron, or fixed in Electron but not yet fixed in an application built on top of Electron, is something that criminals and researchers can exploit.
A group of researchers who did exactly that recently presented their findings on Electron vulnerabilities at the Black Hat security conference. For a peek into what they did, and a look at how complicated modern bug hunting is, read researcher s1r1us's explanation of how they went about finding a remote code execution (RCE) vulnerability in Discord by chaining a new cross-site scripting vulnerability, a CSP bypass in Discord's out-of-date Chrome version, and an exploit for an existing V8 vulnerability.
In the case of s1r1us's Discord bug, what the researchers found could be exploited with nothing more than a malicious link to a video. With Microsoft Teams, the bug they found could be exploited by inviting a victim to a meeting. In both cases, if the targets clicked on these links, an attacker would have been able to take control of their computers.
The most general and best advice in many cases is to avoid clicking on links that come in unexpected or in unusual ways. In an ideal world you would distrust them with the same vigor as the links in your mailbox and on social media. However, this can be very difficult in practice because many of these applications require you to click on links to join meetings, accept invitations and so on.
A more workable solution, suggested by the researcher, is to use apps like Discord or Spotify inside your browser, because then you have the protection afforded by Chrome, which is much broader than that provided by Electron, and you have control over whether it's up to date or not.
Most of us though, will simply stick to downloading our security updates, and hoping the people who make the software are too. | <urn:uuid:2518f6ff-4634-49b4-946b-9ad0a6c50f89> | CC-MAIN-2022-40 | https://www.malwarebytes.com/blog/news/2022/08/researchers-found-one-click-exploits-in-discord-and-teams | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00767.warc.gz | en | 0.956609 | 761 | 2.96875 | 3 |
Cloud and Software as a Service are terms that you will see often. You'll often see descriptions such as cloud-based, hosted, SaaS, IaaS, or PaaS when comparing different technologies to run your business.
These terms are important because they affect the way you interact with technology every day.
What do these terms really mean?
What is Cloud Computing?
Although cloud computing is sometimes referred to simply as "the cloud" or "cloud-based," SaaS (Software as a Service) and cloud computing are very similar terms. We will explain this in greater detail later; first, we'll define each.
Cloud-based software has become mainstream technology, and people use it every day, whether they know it or not.
Cloud-based business software ranges from organizational software such as Trello or Slack to ERPs like NetSuite and email providers such as Mailchimp. For familiar software names, see the Top 100 Cloud List.
Gartner predicted that worldwide end-user spending on public cloud services would grow 18.4% to $304.9 billion in 2021, up from $257.5 billion in 2020.
What is the cloud? You can think of the cloud as the internet.
Cloud-based software allows you to access it from anywhere, anytime, via the internet (the "cloud"). All you need is an internet connection and the ability to log into the system using a web browser. You can use a phone, a laptop, or a desktop computer, and you can be anywhere you want, including at home, at the office, or at the airport, as long as there is WiFi.
Cloud computing, in technical terms, refers to the instant availability of computer resources such as storage, servers, networks, and databases. Cloud computing makes data centers accessible to all users via the internet.
Cloud Computing vs Traditional IT
Before cloud computing, companies had to own their own servers or hardware to run software applications. To use the software, you would need to physically insert a CD into your computer.
Maintaining your own hardware can prove costly and time-consuming. It can also limit the number of people who use the software, since it must be installed locally.
Cloud computing is a game-changer for IT teams. They no longer need to own and manage their own software and hardware assets, and they don't require deep technical expertise to set up, manage, and secure their resources.
Instead, they partner with a cloud service provider, a third party that hosts your software on remote servers and stores and processes your data. These servers are located in data centers around the globe. Cloud computing is a cost-saving option for IT services because it allows multiple companies to share computing resources. Teams can also access their cloud services from anywhere over the internet.
Gartner said that the shift of IT spending to the cloud will accelerate following the COVID-19 crisis. Cloud is expected to account for 14.2% of global enterprise IT spending in 2024, up from 9.1% in 2020.
Cloud computing is transforming every industry. It reduces IT costs and supports remote workers. It also allows for mobility and collaboration between team members.
Cloud Software Benefits and Examples
Cloud computing has many advantages. You don't need to manage, maintain, or update your software applications, or worry about the security of their data. Your data is available in real time, whenever you need it. You only pay for space on the hosted servers, and you can scale your resources as needed.
Cloud computing means that you are responsible for maintaining any applications that you run on third-party servers (e.g., Amazon AWS servers). The third party manages the physical servers as well as the operating system.
Microsoft offers a detailed explanation of cloud computing.
So where does SaaS fit in with cloud computing?
What is Software as a Service (SaaS), and How Can It Help You?
Software as a service is a delivery model that licenses a cloud-based software program to a user. Access to the application is via the internet. This means that the user does not need to install or maintain the software locally.
The application runs on the SaaS provider's servers. The provider is responsible for its security, performance, and maintenance.
SaaS applications are typically licensed on a monthly basis. A monthly fee is charged based on the level of service you receive and how many users are required. As a service, SaaS providers deliver and maintain their applications for you via the internet.
SaaS Benefits and Examples
SaaS software offers many benefits. Like cloud computing generally, it offers instant access to software at a fraction of the cost. Multiple users can share the same software, and as a user you don't have to worry about server maintenance.
There are also terms like IaaS or PaaS that can be used to refer to Infrastructure as a service and Platform as a service, respectively. Both terms refer to cloud computing but have different capabilities as services.
SaaS can be thought of as a part or branch of cloud computing. It is a license that allows you to access a particular software application via the internet.
Cloud vs SaaS
It is clear that SaaS and cloud computing are both closely related but have different terms.
Cloud computing allows users to modify and manage any software applications on servers that are hosted remotely by third-party companies like AWS. These servers are accessible via the internet and you have access to all your data.
SaaS allows you to pay a monthly subscription to use a cloud-based, already-developed software application over the internet. Maintaining the software is not your responsibility. SaaS software has one drawback: you may lose some control over how the software is managed and customized.
nChannel is a great example of both a SaaS and cloud computing application. nChannel is a cloud-based integration tool that connects retail systems such as ERP, eCommerce and POS systems. It allows merchants to share data, including orders, inventory and tracking/shipping information.
SaaS is how we deliver cloud applications to our customers.
We developed and own the nChannel app and provide customers with access via the internet. nChannel manages, secures, and processes customer data that is kept on remote servers in the "cloud." We do not own those servers; we only maintain the application that runs on them.
Our customers pay a monthly fee to access cloud-based software. It can be used by multiple users and accessed via the internet.
Cloud computing and SaaS can be combined to provide easy-to-access, affordable software solutions to all users. | <urn:uuid:2c2a6205-c42c-498a-8726-2abb1335a3b0> | CC-MAIN-2022-40 | https://www.5gworldpro.com/blog/2022/08/15/cloud-vs-saas/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00167.warc.gz | en | 0.951262 | 1,421 | 2.9375 | 3 |
How the Healthcare Sector is Leveraging IoT
The internet and information technology have disrupted all industries. Healthcare is no different. In many ways, the healthcare industry is now at the forefront of technology adoption. Among the technology movements taking this sector by storm is the Internet of Things (IoT). Improving efficiencies, enhancing care quality, improving patient outcomes and satisfaction, and reducing the cost of healthcare are just some of the areas where IoT is already at work.
Research shows that the global IoT in Healthcare Market is expected to grow at a rate of 19.8% from USD 60.83 Billion in 2019 to USD 260.75 Billion in 2027. Fueling the growth of IoT is the rising focus on patient-centric care delivery and active patient engagement. With IoT applications, doctors can make patient treatment more accurate and proactive. Hospitals can reduce patient readmission rates, improve diagnostics, enable proactive and preventive care, and also improve communication and workflows in the hospital environment.
IoT applications are all set to transform the healthcare industry by creating avenues to seamlessly connect devices and people to streamline the healthcare delivery process. Some of the areas where IoT is at work in the healthcare segment are:
Improving patient care with remote monitoring
IoT-enabled devices can transform patient care with remote monitoring. They gather real-time patient data for constant monitoring, especially for chronic disease management. These devices can be of great use to deliver quality post-surgery care once the patient has been discharged.
Data generated from these devices can be employed to enable proactive patient care and provide timely intervention in case of anomalies. Any anomaly triggers an alarm alerting the doctors or the primary caregivers and provides real-time information regarding the patients’ health.
With this information doctors and caregivers can improve the quality of care by making it more real-time and personalized. The data generated by these devices can be used to design care plans that can immensely improve patient care and outcomes.
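A toy version of that threshold-based alerting logic might look like the following (the limits are illustrative, not clinical guidance):

```python
# Toy threshold-based alerting for a remote-monitoring feed.
LIMITS = {"heart_rate": (50, 110), "spo2": (92, 100)}

def check_vitals(sample: dict) -> list[str]:
    alerts = []
    for vital, (low, high) in LIMITS.items():
        value = sample.get(vital)
        if value is not None and not (low <= value <= high):
            alerts.append(f"ALERT: {vital}={value} outside [{low}, {high}]")
    return alerts

for reading in ({"heart_rate": 72, "spo2": 97},
                {"heart_rate": 128, "spo2": 89}):
    print(check_vitals(reading) or "ok")
```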
Improving administration, operations, and inventory management
The hospital is a complex environment. IoT applications can be of great help in making this space more connected, secure, and efficient to bring in more transparency and better management. RFID tags and barcodes, for example, are now being placed extensively for medical inventory management. These tags are placed on medical equipment and supplies and help hospital staff efficiently and effortlessly track their location, status, and usage.
IoT smart devices are also being used to make the hospital environment more efficient. These sensor devices in refrigerators, freezers, and laboratories ensure that blood samples, vaccines, and important medicines are stored at the right temperature and dispensed correctly to save time and improve patient care. RTLS (Real-Time Location Systems) used in conjunction with IoT can mark inventory and manage medical equipment employing location sensors. These systems can send out alerts about incorrect device use, contaminated medicines, or stock expiry dates.
The smart sensor network can also make hospital admissions and operations more efficient and streamlined. IoT applications can be employed for contactless registration of patients, tracking patient information, and guiding patients across the hospital environment.
Enhance safety and security of patients
The healthcare industry is using IoT applications to improve the safety and security of patients as well. IoT applications such as heart and respiratory rate sensors are used on hospital beds to monitor vital signs and send out alerts to the nurses in case of a change or behavioral anomaly.
Smart trackers like wristbands are being used to monitor patient movement and have become especially useful to ensure that patients do not access unauthorized areas or leave the premises unattended. These devices can also be used to track infants and ensure that babies can be tracked and located at any time.
These devices can also send out real-time alerts to nurses about patient conditions to drive proactive patient care. It also improves staff efficiencies as they no longer need to spend time manually collecting and collating data.
Remote assistance and monitoring
IoT applications are unleashing a wave of efficiencies in improving patient care and outcomes by enabling remote assistance and monitoring of patients. An IoT app can connect a patient to a doctor miles away and can provide the patient with vital information on how to manage the health condition until help arrives. IoT enables real-time monitoring that helps medics check-in, identify, and evaluate ailments on the go.
This remote medical assistance can be a great boon to the underserved sections and ensure that quality healthcare access and important healthcare information are delivered to all.
IoT helps in making the entire healthcare ecosystem seamlessly connected. However, an IoT solution is not just about using a device or a sensor. It is about using the right combination of technologies to drive the device network. It also involves developing a robust IoT platform that seamlessly connects the device to the healthcare ecosystem. It demands creating robust processes and workflows to enable data capture and analysis to drive predictive capabilities and improved outcomes. A lot depends on the software components that power the devices, platforms, analytics engines, and reporting tools.
IoT has tremendous potential in the healthcare segment to improve patient journeys amidst tightening budgets. Whether it is improving operational efficiencies through facilities management or predictive maintenance, improving clinical efficiencies by enabling better data-driven decisions, fueling drug discovery, or improving patient experiences with personalized treatment plans and remote monitoring, clearly IoT has cemented itself as a viable and essential technology in the healthcare space.
The blog was originally posted on GS Lab’s Website.
Mandar Gadre, Director of Engineering – Healthcare & Manufacturing at GS Lab | <urn:uuid:1e42d712-6967-45f7-b156-4745bb85e243> | CC-MAIN-2022-40 | https://www.dailyhostnews.com/how-the-healthcare-sector-is-leveraging-iot | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00167.warc.gz | en | 0.921397 | 1,133 | 2.59375 | 3 |
Whenever there is a data breach, the key question is whether the stolen information was encrypted. However, the best encryption in the world can’t do much if the attacker has administrator-level access on the server or if the log messages contain sensitive information from database queries. If the key is stored on the server, the attacker with access to the server can decrypt the information.
MongoDB’s engineers spent the last two years figuring out how to move encryption and decryption operations to the client application to protect sensitive data. All encrypted fields are rendered as ciphertext on the server, which means the only way to see the data is to go through the client application and have the right keys. Anyone who uses administrator credentials to access the data directly from the server just see encrypted blobs. Messages written to the logs display the sensitive information as ciphertext and not in plaintext.
Customers hesitant to move their workloads to cloud providers and database platforms were saying, "It's not that we don't want to trust you. We can't trust you. We have to manage it ourselves," said Kenn White, MongoDB's product security lead. "We don't want you to have to trust us."
Field-level encryption on MongoDB 4.2 is intended for organizations that have regulatory requirements—such as healthcare and finance—to protect certain types of customer records. It can be useful for organizations concerned about user-level privacy, especially in light of the European Union’s General Data Protection Regulation, said White. Organizations can comply with requests to remove data since once the key associated with a customer is destroyed, the data is useless.
Server-side protections on the database puts the data at risk to external attackers with stolen credentials and insiders who have more access than they need. Field-level encryption reduces this particular risk because administrator-level access doesn’t expose sensitive fields. If a user’s credentials are stolen, the attacker will be able to see everything the user could (through the application), but curious server and database administrators (or attackers with stolen credentials) poking around won’t be able to see any sensitive customer data. MongoDB and cloud providers can’t see the information, either. In fact, the only way to access the information is to have a system login and the correct keys—which aren’t stored on the server.
“When it’s easy to do and built-in, it changes the discussion from ‘Should I move this workload to the cloud?’ to ‘How, or when, do I move?’” said Davi Ottenheimer, MongoDB's vice president of trust and digital ethics.
Inspecting the Scheme
MongoDB’s field-level encryption relies on the database’s client library to act as the driver and perform the operations separately from the database layer. When the application sends a database query, the driver determines if there are any encrypted fields, and if there are, obtains the fields’ encryption keys from an external key manager, such as Amazon’s key management service. Once the driver has the correct keys, the driver encrypts the sensitive fields and submits the query to the MongoDB server. As far as the server is concerned, the encrypted data from the application is just another type of data to store. The database returns the encrypted results to the driver, where they are decrypted and sent to the client.
The concept is similar to end-to-end messaging, in that only the intended recipient sees the contents.
Generally, client-side encryption is painful because developers have to make a lot of changes to the application calls and modify the queries, White said. Making field-level encryption transparent to the application, so that developers do not have to modify the application code, was necessary, or developers wouldn't bother using the new feature. Developers use the new "encrypt" JSON attribute to enable the encryption options and just need to update the driver.
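For flavor, here is a hedged sketch of the setup with the Python driver. The namespace, demo-only local KMS key, and omitted data-key creation are placeholders, exact options vary by driver version, and auto encryption also requires the mongocryptd or crypt_shared component:

```python
# Sketch of client-side field-level encryption with PyMongo.
import os
from pymongo import MongoClient
from pymongo.encryption_options import AutoEncryptionOpts

kms_providers = {"local": {"key": os.urandom(96)}}  # demo-only local key
schema_map = {
    "hr.employees": {
        "bsonType": "object",
        "properties": {
            "ssn": {  # encrypted by the driver, never seen by the server
                "encrypt": {
                    "bsonType": "string",
                    "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic",
                    # a keyId created via ClientEncryption.create_data_key()
                    # is omitted here for brevity
                }
            }
        },
    }
}

opts = AutoEncryptionOpts(kms_providers, "encryption.__keyVault",
                          schema_map=schema_map)
client = MongoClient("mongodb://localhost:27017", auto_encryption_opts=opts)
# client.hr.employees.insert_one({"name": "A. Doe", "ssn": "123-45-6789"})
# The server, its logs, and its admins see only ciphertext for 'ssn'.
```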
“You are still using the database like a database,” said Ottenheimer.
It’s not a perfect mechanism, as there is a bit of a performance hit as it is harder for the database to perform certain types of sorts and search queries because the encrypted fields can’t be read. Organizations can opt to use the same key for all fields, or have a different key for each field, but regardless of the method, they are responsible for tracking and managing the keys. MongoDB isn’t going to do that.
Implementing the Scheme
Thousands of publicly exposed MongoDB and other noSQL databases have led to data breaches because the servers weren't configured correctly. Criminals wiped, or otherwise locked up, MongoDB databases and demanded ransom. While field-level encryption doesn't explicitly address these issues, there is a benefit: attackers may gain access to the database but not be able to do anything with the blobs themselves.
“Administrators need to still protect the servers,” and follow the best practices on what needs to be done, Ottenheimer said. “Field-level encryption is a form of ‘defense-in-depth’ and ensures no one is putting all their eggs in the same basket.”
Instead of creating their own cryptography scheme, MongoDB’s engineering team decided to build field-level encryption with well-tested, public encryption standards available through the core libraries for major operating systems.
“Even if you are a seasoned engineer, you can still make mistakes,” White said. “If you trust Microsoft to do encryption, than we’ll do the same.”
MongoDB solicited the help of people who deal with “database encryption for a living” to vet the implementation, which is open source, White said. Audits have already begun, and include experts such as Brown University cryptographer Seny Kamara.
MongoDB is the first noSQL database provider to offer field-level encryption, but Ottenheimer said he hoped other providers will take this step and incorporate field-level encryption as well. MongoDB plans to keep working with cryptography experts to keep refining the implementation.
“We are taking things up a notch,” said Ottenheimer.
Photo credit: Mattt Antonioli from Unsplash.com | <urn:uuid:f4c2803a-df31-4772-8d9f-0cbd6d8a51f7> | CC-MAIN-2022-40 | https://duo.com/decipher/mongodb-moves-encryption-out-of-the-server | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00167.warc.gz | en | 0.922425 | 1,333 | 2.578125 | 3 |
The U.S. Department of Energy (DOE) is investing $30 million in 13 projects that aim to conserve and sustain the supply of materials needed for clean energy systems.
Universities and national laboratories lead these projects to develop technologies that diversify, recycle and provide alternatives for rare-earth elements (REEs) and platinum group elements (PGEs), DOE said Thursday.
REEs and PGEs generally support a variety of clean energy applications, such as rechargeable batteries and emission regulation systems.
These elements include cobalt, neodymium and platinum, which are critical for electric vehicle batteries, windmills and fuel production systems, respectively.
“Expanding electric vehicle infrastructure, hardening our nation’s electrical grid and powering our economy with millions of clean energy jobs all rely on securing supply chains of critical materials like cobalt and platinum,” said Jennifer Granholm, secretary of energy. | <urn:uuid:05fbd727-22c8-4055-bbe8-b0d08f61814b> | CC-MAIN-2022-40 | https://executivegov.com/2021/09/doe-funds-efforts-to-sustain-materials-for-clean-energy-tech/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00167.warc.gz | en | 0.893941 | 189 | 3.203125 | 3 |
More than a year after it first struck, WannaCry is still one of the most damaging cyberattacks to date. It cost the global economy billions of dollars, although the impact goes far beyond the money.
Although companies incurred substantial monetary damages, WannaCry is the clearest example of the physical impact a malware attack can have on critical infrastructure, such as rail systems and hospitals. This can be the case even when the attack does not target or operate on industrial control systems, medical devices, or Internet of Things devices. WannaCry was "standard" malware aimed at Windows machines. And yet, it affected day-to-day life by preventing employees from getting to work and patients from receiving uninterrupted medical care.
It's important to understand the longer-term effects of WannaCry on the cyber ecosystem, and what security professionals should be aware of, because we'll likely see "WannaCry 2.0" at some point.
As things stand now, we're currently in the phase of "WannaCry 1.5," which is not causing the same level of damage but is still cause for concern. Every day, mutations (some minimal, others significant) of WannaCry appear and are used by ransom-hungry hacking groups. However, as malware becomes more sophisticated, there is an increased chance that a WannaCry 2.0 will become real. The underlying factors that enabled WannaCry to become so successful to its creators are still relevant:
- Patching: Organizations are not implementing patching cycles in a timely manner. For example, a patch for EternalBlue was available in March 2017, but WannaCry was still able to infiltrate systems two months later, in May 2017, because of the delayed patching by organizations.
- Hacker persistence: Zero-day and one-day vulnerabilities are still appearing and being used in the wild. Hackers, including independent and nation-state groups, are looking for the right opportunity to spread a ransomware strain that could have the same (or better) lateral movement capabilities as WannaCry.
This type of looming cyber threat is the "new normal" in today's world, but it's important to understand how we got here, where we are now, and what we can do to better protect against such threats in the future.
Industry and Public Pressure on Government Agencies
The long-term effects of WannaCry are still being felt by many organizations, and it has been a cause for debate both at the enterprise and government level. Industry and public pressure is being put on government agencies, and for good reason. Government agencies have been, for several years now, part of the cyber ecosystem. They no longer enjoy the luxury of public and economic indifference to their cyber-related research and operations, as was the case in the late 1990s and early 2000s. They need to opt for responsible disclosures of vulnerabilities in a way that balances national security interests on the one hand and keeping cyberspace as safe as possible for individuals and corporations on the other. If exploits and vulnerabilities are not in use, or are not needed, they should be disclosed before being discovered or leaked.
Government agencies that discover vulnerabilities must prevent them from leaking and keep them in the hands of the good guys. Secondly, agencies must be timelier in their disclosures. If a vulnerability or an exploit cannot (or can no longer) be leveraged to provide a tangible contribution to national security interests, it should be disclosed. The case should be the same with vulnerabilities that are extremely severe and easily exploitable. If those are leaked or discovered by hackers, the effect could be catastrophic. When surveying the NSA/CIA leaks in the past year or so, it is obvious that some vulnerabilities discovered were held for a long time, and were most likely not used.
To change this current culture, government agencies must adopt clear policies. Of course, they do not have to disclose everything for the sake of national security, but they must own their faults in order to fix the problem.
Unfortunately, code and capabilities leaked from government agencies are continuously trickling down to everyday malware attacks (WannaCry and EternalBlue, for example). We are seeing malware strains built from leaked code appear more frequently and at an expedited pace. Leaked exploits are always a hit in Dark Web hacking forums and find their way even into crypto-miners that mine currencies such as Monero. Attacks will become more sophisticated over time, which puts added pressure on enterprises to implement a strong cyber defense plan.
Implications for the Enterprise
Vulnerabilities are being disclosed on a daily basis, and many enterprises are overwhelmed and cannot patch at the fast pace that's required. This issue keeps many IT professionals and C-level executives up at night as hacker groups look to execute exploits at a mass scale to target employees, customers, and stakeholders.
To help mitigate some of this risk, security professionals within the enterprise must keep the following in mind:
- Understand vulnerability databases: IT and security professionals need to take the time to understand vulnerabilities and assess how they will affect the company. Conducting a thorough risk factor assessment to verify how fast and serious the threat is will help inform and decide what the next action should be and the appropriate timeline for execution.
- Out-of-the-ordinary workflow: Timely patching can be a huge burden on an organization, so think of new ways to streamline patching and update systems accordingly, whether that means dedicating a small team to focus solely on patching or using solutions powered by artificial intelligence to help detect vulnerabilities. This will leave executives more time to dissect, patch, and properly respond to the threat (a simple prioritization sketch follows this list).
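As an illustration of that kind of streamlined, risk-based triage (the scores, SLAs, and findings below are invented policy examples, not a standard):

```python
# Toy triage: order findings by known-exploit status and severity, then
# assign a patch SLA. The CVE entries here are just examples.
findings = [
    {"cve": "CVE-2017-0144", "cvss": 8.1, "exploited_in_wild": True},
    {"cve": "CVE-2023-0001", "cvss": 5.3, "exploited_in_wild": False},
    {"cve": "CVE-2023-0002", "cvss": 9.8, "exploited_in_wild": False},
]

def patch_sla_days(f):
    if f["exploited_in_wild"] or f["cvss"] >= 9.0:
        return 2          # emergency, out-of-band patch
    if f["cvss"] >= 7.0:
        return 14
    return 30             # next regular patch cycle

ordered = sorted(findings,
                 key=lambda f: (f["exploited_in_wild"], f["cvss"]),
                 reverse=True)
for f in ordered:
    print(f["cve"], "-> patch within", patch_sla_days(f), "days")
```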
It's just a matter of time until WannaCry 2.0 is here, so understanding the cause of such an attack and having the right processes in place will be crucial for businesses to protect their assets.
- 9 Ways to Protect Your Cloud Environment from Ransomware
- FBI Slaps New Charges Against Researcher Who Stopped WannaCry
- WannaCry? You're Not Alone: The 5 Stages of Security Grief
- 10 Scariest Ransomware Attacks of 2017
Most government agencies, corporations, and private firms have now embraced websites as a means of connecting to their constituencies, disseminating public information, and making their services accessible. Due to the always-on nature of these public-facing sites, they are more exposed to cyber attack.
Most websites are hacked because the operating system, database, plugins, CMS, or related applications are not updated, leaving them exposed to known vulnerabilities.
Some of the common channels used by hackers to take control of your system:
- Password compromised,
- PC or server infected with malware to capture credentials,
- Exploiting known or unknown vulnerabilities of unpatched systems,
- Exploiting another system hosted in same server
To diminish the risk of your site being hacked:
- Always ensure that the operating system and all applications are up to date.
- Minimal installation of OS with only required applications installed reduces the attack surface area.
- Change the default login page and all default credentials; create complex passwords and secure them.
- Enforce a password change at least once every 6 months.
- Follow secure coding practices (guard against broken authentication and session management, insecure direct object references, security misconfiguration, SQL injection, XSS, CSRF, etc.).
- Provide only minimal information in error messages; too much information makes the attacker's work easier.
- Minimise the use of dynamic SQL code and, where unavoidable, make use of prepared statements, parameterized queries, or stored procedures (see the sketch after this list).
- Incorporate both client-side and server-side validation; restrict input by length, format, and type. For instance, for a date allow only numbers, and for a name allow only letters.
- Implement proper file permissions for uploaded files.
- If your site has forms, it's recommended to have an SSL certificate.
- Run all software as a non-privileged user, without administrative privileges, to diminish the effects of a successful attack.
- Make sure that the local machine used to access the admin panel, as well as the server hosting the site, is not affected by malware.
- Always keep a backup, so that you have a way out if you are hacked.
- Always maintain access and error logs.
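A minimal sketch of the parameterized-query and validation advice above, using Python's standard-library sqlite3 as a stand-in for any database:

```python
# Parameterized query plus server-side input validation.
import re
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, dob TEXT)")

def add_user(name: str, dob: str) -> None:
    # Server-side validation: letters only for name, digits/dashes for date.
    if not re.fullmatch(r"[A-Za-z ]{1,50}", name):
        raise ValueError("invalid name")
    if not re.fullmatch(r"\d{4}-\d{2}-\d{2}", dob):
        raise ValueError("invalid date")
    # Placeholders keep user input as data, never as SQL text.
    conn.execute("INSERT INTO users (name, dob) VALUES (?, ?)", (name, dob))

add_user("Tashi Dorji", "1990-04-12")              # ok
try:
    add_user("x'; DROP TABLE users; --", "1990-04-12")
except ValueError as err:
    print("rejected:", err)                        # injection attempt blocked
```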
The site is hacked, what next:
- first things first: take your website offline and display a maintenance note.
- clear unwanted content from affected pages.
- scan all the files and folders in your web application with an antivirus. You should also clean and re-host your site on a clean server, because there is a chance that your server/web app is infected with rootkits, trojans, backdoors, or other malware.
- Change your system passwords (CMS login, hosting login, database, local machine, FTP/sFTP access credentials, SSH, etc.)
- If your server also hosts other websites, scan them for malicious content.
- Review your access logs and error logs to check for successful remote login from suspicious ip addresses and learn how did it actually happen.
A comprehensive guide on securing websites is available at https://csrc.nist.gov/publications/nistpubs/800-44-ver2/SP800-44v2.pdf .
“Private” is the middle name of virtual private networks (VPNs) – but how private are they really? Unfortunately, when you understand their design, it becomes clear pretty quickly that with VPN technology, secure data transmissions can’t be counted on. While VPNs have traditionally been considered a reliable way to ensure that data transmissions occur with the needed level of privacy and security, the facts don’t validate that assumption anymore. The truth is that many of today’s go-to data security options may open the floodgates to added security risk rather than eliminating it.
What’s the deal with VPNs and why are they less secure in today’s world than in the past? The fact is that the legacy security/connectivity approach of VPNs was conceived not for the current workplace environment of hybrid/multi-cloud and mobile configurations, but for on-premises settings. Not only are VPNs hampered in cloud settings, but their many drawbacks may now outnumber their benefits. These disadvantages include, but aren’t limited to: complex management; unreliable, sloth-like connections; limited scalability; data and network vulnerabilities; and high, continuously escalating costs.
Flagged for concern
This isn’t just conjecture. Earlier this year, two U.S. Senators labeled VPNs a “national security risk” and alerted the Department of Homeland Security about it. You can read the letter written by Sens. Ron Wyden and Marco Rubio to Christopher C. Krebs, director of the branch of the Department of Homeland Security concerned with cybersecurity.
The senators’ concerns were well founded and pointed to a weak link in VPN architecture, which is particularly problematic in relation to downloading mobile apps. A key issue that Wyden and Rubio raised was the disturbing fact that “VPN providers route all user traffic through their own servers.” After outlining their argument against VPNs because of the technology’s inability to protect national security, the senators implored Krebs to “conduct a threat assessment on the national security risks associated with the continued use by U.S. government employees of VPNs, mobile data proxies, and other similar apps that are vulnerable to foreign government surveillance.”
Enterprises not immune
National security may be one of the most wide-ranging privacy problems caused by VPNs, but it's just the tip of the iceberg. VPNs have been equally problematic in other distributed settings that require reliable security and compliance adherence, particularly when it comes to enterprise security. The issue centers on the fact that data can't be routed securely at the application level with a VPN. So if your organization is still relying on VPNs to transmit sensitive data over the VPN provider's servers, then you're taking a big risk of exposing that data to people who shouldn't see it and may compromise it.
As the letter from Wyden and Rubio highlighted, VPNs complicate private data transmissions when they use third-party servers to transport data. A study by researchers from UC Berkeley and the University of New South Wales revealed that the vast majority (more than 80 percent) of VPN apps on Android devices requested access to personal user data.
The research also verified that:
Nearly 40% of the VPN apps injected malware to try to access user data.
84% leaked user traffic.
Around 20% failed to encrypt traffic.
Notably, though, fewer than 1% of VPN app users expressed any particular worries about privacy in relation to apps with the data security problems noted above. A key piece of this puzzle is that regulators in many industries require companies to be more careful about these decisions, since organizations are being held responsible for the practices of third parties that process their customers' personal data and private information. So it may be time to get real about the fact that VPNs simply can't protect privacy to the degree required in today's enterprises.
In the data-distributed enterprise, a more secure solution is needed, and it already exists in the form of Software Defined Perimeter (SDP) approaches. SDPs are quickly gaining traction and visibility in the market because unlike VPNs, SDPs have been expressly engineered for a world of distributed data and sharing across multiple clouds and hybrid settings, from mobile to Internet of Things. By avoiding VPN shortfalls on both the operational and architectural front, SDPs actually do what many people believe VPNs do but don’t – SDPs bolster security and privacy for data transmission instead of undermining them.
The micro-tunnel design of an SDP is fully cloaked for optimal security, which is amped up further by proper encryption and authentication functionality. This means even if sensitive data does become compromised, the perpetrators won’t be able to decipher the data.
Here are some of the key ways that SDPs correct for the flaws inherent in VPN solutions to provide added transmission security:
Third parties have no access to user data with SDPs, since the data in question avoids third-party servers.
With no possible third-party intervention, the types of concerns common with VPNs – such as requests for data and tracking systems for compliance – become obsolete.
Application-level data delivery occurs directly from the source to target systems.
SDP’s compartmentalized micro-tunnels block outside access to users’ networks.
Once an SDP solution successfully connects the applications and servers, their adjoining ports are no longer open for detection – unlike with open VPN ports that can be easily spotted by hackers.
Micro-tunnels thus become virtually “invisible,” making data transfer truly private and helping companies achieve regulatory compliance.
What is it that makes these tunnels so secure? Their data transmission takes place via the User Datagram Protocol (UDP) rather than the Transmission Control Protocol (TCP), the latter of which is much more detectable. Random port generation occurs only when a connection is requested, which prevents cyber thieves from zeroing in on the usual suspects like SQL Server and other standard application ports.
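As a rough illustration of that last point (a sketch of the concept, not any vendor's actual implementation), the snippet below opens a UDP socket on a randomly assigned ephemeral port that exists only for the lifetime of one exchange:

```python
import socket

def open_ephemeral_udp_endpoint():
    """Bind a UDP socket to a random, OS-assigned ephemeral port.

    Binding to port 0 asks the OS to pick an unused high port, so the
    endpoint never sits on a well-known, easily scanned port and only
    exists while the connection is in use.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 0))  # 0 = let the OS choose the port
    return sock

sock = open_ephemeral_udp_endpoint()
print("temporary endpoint on port", sock.getsockname()[1])
sock.close()  # once closed, there is nothing left for a scanner to find
```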
While inside data leaks commonly occur with VPNs, the SDP format prevents even the detection of remote data transmissions. With these fortifications and safeguards in place, the types of concerns about potential data compromise that Senators Wyden and Rubio expressed evaporate into non-issues.
VPNs may still be popular, but they’re no longer private. Their very architecture has become almost archaic in today’s cloud-based workplace, making them too risky when it comes to third-party data transmission. SDPs, on the other hand, are designed to be a model of security best practices when it comes to data transfer, offering enterprises a protective alternative to boost data privacy rather than compromising it. | <urn:uuid:664cad9d-c17f-40be-b264-eb73057628f8> | CC-MAIN-2022-40 | https://www.cpomagazine.com/cyber-security/why-virtual-private-networks-arent-very-private/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00167.warc.gz | en | 0.945243 | 1,411 | 2.609375 | 3 |
Data breaches and cyber security should be big concerns for business owners. A hacked system is more than just a hassle. With the potential to destroy your company's network, halt business processes and put your customer transactions in jeopardy, a data breach can be fatal for any business.
The recent Marriott breach, and the similar Quora breach a week later, reminded the world how crucial it is to protect your data, and demonstrated that even large businesses can fall victim to the increasingly sophisticated techniques hackers use.
During the attacks, over 600 million details were stolen, including personal information such as addresses, names, phone numbers, credit card numbers, travel locations, passport numbers, and so much more. Once such details are compromised, they are at risk of being sold to the highest bidder on the Dark Web.
What is the Dark Web?
The Dark Web is a part of the World Wide Web that is accessible only by using special software, allowing website operators and users to remain untraceable and anonymous. Because of its existence in the shadows, the Dark Web hosts many marketplaces for unlawful activities and criminal organisations worldwide.
As a result, the Dark Web poses formidable challenges for security agencies globally. On the Dark Web it is possible to buy and sell data anonymously, and hackers take advantage of this to sell stolen data to criminals. Although this sounds like a murky underworld – or something that will only affect huge corporations targeted for their large volumes of data – cyber attacks against SMEs are much more common than you think. Ransomware is expected to attack a business every 14 seconds by the end of 2019 and, according to Forrester research, two-thirds of organisations had an average of five or more breaches in the past two years.
The risks to your business
Hackers target a business's private data, including pricing strategies, client lists, and trade secrets. The moment they have this data, they can destroy a company's competitive advantage by disclosing it to the public or passing it to industry rivals.
Loss of reputation
A business that has worked hard to build and maintain its integrity values its reputation, and a data breach or cyber attack can tarnish the best of reputations. In fact, 46% of British companies say their reputation has suffered as a result of a cyber attack.
Loss of client trust
Most customers share their personal information with businesses believing that these companies have strong security measures in place to guard their data. When a breach occurs, these clients will question the reliability of those companies and, understandably, whether they want to continue to engage their services.
Loss of revenue
Along with the financial impact of losing existing and potential customers, a data breach can lead to revenue losses due to downtime. The normal course of action when a data breach occurs is to halt operations until the breach is fixed, and while a business isn't operating, it isn't earning.
How we can protect you
Because of great strides in technology, your business can take quick steps to protect your data, your employees, and your customers.
CMI's Managed Dark Web Monitoring combines intelligence with search capabilities to identify, analyse and proactively monitor for your organisation’s compromised or stolen employee and customer data on the Dark Web.
The fully managed service utilises Dark Web ID by ID Agent – the industry’s first commercial solution to detect your compromised credentials in real-time on the Dark Web. With CMI’s fully managed service, you can minimise future risks. We will help proactively protect your business by:
> Delivering high-level credential monitoring abilities
> Connecting to various Dark Web services like Freenet, I2P, and Tor to look for compromised credentials, without asking you to connect your systems to these high-risk services.
> Providing information about compromised credentials before data breaches or cyber attacks occur.
As a trusted ID Agent partner, we will manage the service and advise you on the right security to protect your business.
We take care of your IT, so you can focus on running your business. Whether you are looking for a comprehensive outsourced IT support service or something more flexible, CMI can help. As industry-leading specialists in network security, business continuity, hardware and software provision, cloud computing and Internet services, CMI has been helping businesses gain a competitive edge through technology for more than 20 years. Call today on 020 8875 7676 to learn more and sign up for a free consultation. | <urn:uuid:f31a8191-47cc-4f43-9006-6678497e2848> | CC-MAIN-2022-40 | https://www.newcmi.com/blog/protect-your-business-from-data-breaches | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00167.warc.gz | en | 0.942601 | 925 | 2.671875 | 3 |
We’ve seen a lot of fear, uncertainty and doubt around the DNSChanger botnet / malware recently, which has caused a lot of speculation about the security of DNS. But what is it, and why – if at all – should we worry?
Essentially, people find it easier to remember words than numbers, which is why we have domain names in the style that we do today, i.e. www.bbc.co.uk. But machines work with numbers, not words, so networks automatically convert these domain names into the IP addresses that we're all familiar with. More specifically, devices such as PCs transmit web page requests to their ISP, and somewhere along the line, the ISP finds a Domain Name Server. The Domain Name Server translates the domain name (for example, www.yubnub.org) into an IPv4 or IPv6 address (for example, 18.104.22.168 or FE80:0000:0000:0000:0202:B3FF:FE1E:8329), which is simply a human-readable rendering of the binary IP address that machines actually use.
Domain Name Servers don’t store an infinite cache of these translations, so they’ll frequently bounce requests further up the chain to other servers until the IP address is found.
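To see this resolution step in action, here is a minimal Python sketch that asks the system's configured resolver (typically your ISP's Domain Name Server) to translate a name into addresses:

```python
import socket

def resolve(hostname):
    """Return the IPv4/IPv6 addresses the configured DNS resolver gives us."""
    results = socket.getaddrinfo(hostname, None)
    # Each entry is (family, type, proto, canonname, sockaddr); the
    # address itself is the first element of sockaddr.
    return sorted({entry[4][0] for entry in results})

print(resolve("www.bbc.co.uk"))
```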
The DNSChanger botnet infected user PCs and redirected DNS requests to rogue DNS servers, which misdirected traffic to pages with fake advertising on them, compromising 4m PCs and apparently generating $14m in revenue for the hackers. This is reasonably simple to do by editing the ipconfig settings on a machine, and it is probably how the DNSChanger malware worked. People have also long used the HOSTS file on a PC to block undesirable websites by changing how the computer processes domain name requests – it doesn't always have to be done at the server level.
There are further possible misapplications of DNS hacking, and the FUD has been extensive. However, for a long time, we have been talking about the possibilities of DNSSec, digitally signing DNS transactions using PKI, making sure that servers are valid and that data is not changed in transit.
DNSSec doesn’t encrypt data or provide confidentiality, but it does make sure that data has come from – and is going to – the right place. Whilst this will generate more demands on processing for web servers, they can look into DNS offload, putting the DNS processing onto different servers in much the same way as SSL offload is already done by many servers. This chain of trust would have prevented the DNSChanger from operating, and would also stop ‘cache poisoning’.
DNS can seem like a reasonably harmless thing to corrupt, falling more into ‘mischievous’ than ‘malicious’ hacking, but DNSChanger malware – as evidenced by the four million compromised PCs and $14m of revenue – has proved otherwise.
Whilst we should always be careful to avoid jumping at every ‘movie plot threat’ as Bruce Schneier says, DNSSec would certainly solve a multitude of problems reasonably easily. And for this reason, it should be worth a look. | <urn:uuid:00142ea0-7d60-423a-a94c-6107993b4d58> | CC-MAIN-2022-40 | https://community.f5.com/t5/technical-articles/a-dns-primer/ta-p/274928 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00367.warc.gz | en | 0.943153 | 652 | 2.578125 | 3 |
There are many, many ways that malicious attackers trick normal, hard-working people into handing over prized data, whether they are looking for financials, personal information or sensitive documents.
Just last week, I received an email purporting to come from a business partner, requesting that I pay our company’s contractor a specific amount of money IMMEDIATELY. Bank account details were provided.
My initial thought was that it was a legit email sent to me in error, but when I looked more closely, I could see that the sender address was forged. I happen to personally know all the contractors I work with, and I also happen to work in cybersecurity, so I am perhaps not the ideal victim for this type of attack.
This type of phishing scam is known as Business Email Compromise (BEC), and it was unsuccessful primarily because it was an opportunistic attack disguised as a targeted attack. The attackers had done poor research, making it much easier for me, the potential victim, to suss out the scam.
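One cheap technical check along these lines is to compare the visible From: address against the Return-Path domain, which opportunistic attackers often forget to align. Here is a minimal sketch of the heuristic; treat it as a first signal rather than proof of forgery, since mailing lists and forwarding services legitimately differ:

```python
from email import message_from_string
from email.utils import parseaddr

def sender_domains_mismatch(raw_email):
    """Flag messages whose From: domain differs from the Return-Path domain."""
    msg = message_from_string(raw_email)
    from_domain = parseaddr(msg.get("From", ""))[1].rpartition("@")[2].lower()
    path_domain = parseaddr(msg.get("Return-Path", ""))[1].rpartition("@")[2].lower()
    # Only report a mismatch when both headers are actually present.
    return bool(from_domain and path_domain) and from_domain != path_domain
```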
Whatever phishing attack type is used – spear phishing, whaling, vishing, Business Email Compromise (BEC) or clone phishing – you can bet they will try to make use of social engineering tactics to dupe the victim.
Types of phishing attacks
Phishing: mass-mailed or non-targeted communication, sent in the hope that a small percentage of recipients fall for the ruse.
Spear phishing: A targeted phishing attack with specific potential victims in mind.
Clone phishing: Email is made to look virtually identical to a legitimate communication, to trick the recipient that it is real.
Vishing: Also known as voice phishing, vishing is where the caller might pretend to be a senior figure demanding urgent information over the phone (such as login credentials).
Whaling: A targeted phishing attack that goes after a prime target, such as a CEO or top executive.
Phishing attack commonalities
Once potential targets are identified, successful attackers go into research mode to identify the path of least resistance for the greatest return. In other words, they work out which choice victims make the shortlist. It is a bit like a robber casing a number of homes, deciding which one to burgle. They might be attracted to a particular home, but still be deterred by obvious security features (floodlights, a burglar alarm, bolted doors and windows, etc.). In this case, our burglar is likely to move on to another house.
In phishing, you are looking for target victims that you can connect with and dupe into unwittingly parting with some sensitive information, be they sensitive files, money or login credentials.
Here is how this might work if you were identified as a potential target by a phishing group:
- Learn as much as they can about you. They dig through all kinds of sites to gather key information about your friends, family, hobbies, habits, and job.
- Get into your inner circles. They may try to trick their way into your online circles, for instance by posing as an old colleague or friend.
- Hack your accounts or those of trusted contacts. Accounts without multi-factor authentication, or without strong and unique login credentials, are particularly vulnerable.
Information gathered during this research phase helps attackers hone the scam strategy into something that feels authentic, urgent, and important.
The attackers then must choose their psychological tactics to trick the target. These might leverage fear (e.g., accusing the target of misconduct and threatening penalties), authority (e.g., where the sender pretends to have authority or seniority over the target) and/or shame (e.g., threatening to expose the target over purportedly embarrassing or sensitive activities).
Communication vectors include messaging apps, social networks, and email. They can also communicate in person or via phone, as in vishing. And, of course, the strategy could include a multi-vector approach to increase the credibility of the attacker’s story.
How to protect your users from phishing attacks
There are two key ways to protect your users from unwittingly letting an intruder inside your organization’s secret sanctum.
One – educate them. Teach them what to look out for, how to protect their accounts, what types of communications are suspicious, as well as how to report anything suspicious.
Two – build defenses. Have a solid security strategy in place to deter a potential attacker from even fingering you as a possible victim. This includes powerful business anti-spam, award-winning antivirus protection, and an enforceable email content policy.
More information available at https://www.gfi.com/products-and-solutions/email-and-messaging-solutions/gfi-mailessentials
Intel processors use letters at the end of their names, like “T” and “K”, to tell you what type of use they are intended for.
Intel CPU’s with No Letters At The End
Most Intel CPU chips have no letters, which indicates they are standard processors intended for normal desktop use.
What Does The “F” Stand For In an Intel Processor Name?
“F” chips have no integrated graphics, so the computer will require a separate video card. Intel “F” processors are most common among the highest-performing CPUs, because Intel knows gamers and high-end workstation users will need better graphics capability than Intel can integrate into the CPU itself.
What Does The “G” Stand For In an Intel Processor Name?
When you see an Intel CPU with a “G”, it means the chip has a more powerful built-in graphics processor. Nearly all consumer-oriented Intel CPUs have basic built-in graphics processors, so you don’t need to buy a separate video card, but the “G” designation means the chip should be capable of playing some 3D games. The G levels range from G1 (lowest graphics performance) to G7 (highest), but we don’t think Intel is using this designation any more, because its Iris and Iris Xe graphics branding has taken over.
What Does The “K” Stand For In an Intel Processor Name?
Intel chips with a “K” are “unlocked”, allowing gamers to “overclock” them and run them at higher speeds than they are officially spec’d for. Compared to Intel CPUs without a “K”, they are the fastest in that line of chips. For example, an i7-12700K is faster than an i7-12700, which in turn is faster than the lower-power T and U variants of the same family.
What Does The “H” Stand For In an Intel Processor Name?
“H” processors are designed to be the highest-performing chips for mobile systems like laptops and tablets. Intel also had an “HQ” designation, meaning the chip had four (quad) cores, but it no longer uses this designation.
What Does The “M” Stand For In an Intel Processor Name?
“M” processors were designed for mobile use. Intel no longer uses this designation.
What Does The “S” Stand For In an Intel Processor Name?
Intel processors with an “S” are special-edition chips that consume less power than their standard CPU equivalents with only a tiny drop in performance. This is good for the battery life of mobile devices. There are not many chips with the S designation.
What Does The “T” Stand For In an Intel Processor Name?
The “T” means the chip is designed to use less power, while also having less performance than the standard desktop-focused chips without any letters.
What Does The “U” Stand For In an Intel Processor Name?
Intel “U” chips are “ultra low power”, so they are most commonly used in mobile devices like tablets and laptops where heat, size and power consumption are concerns. “U” chips are more expensive than other chips but still provide good performance.
What Does The “X” Stand For In an Intel Processor Name?
Intel “X” chips are the highest-performing chips. They are “extreme” chips, which are unlocked like “K” chips. They are often the most expensive chips and are usually used by gamers and in high-end workstation computers.
What Does The “Y” Stand For In an Intel Processor Name?
“Y” chips consume the least power of all Intel processors and are most commonly used in executive laptops where battery life is most important. | <urn:uuid:45714733-cfbc-42fd-81de-07f5c29c6ee0> | CC-MAIN-2022-40 | https://www.urtech.ca/2022/09/solved-what-do-the-letters-f-g-h-k-t-u-or-x-mean-at-the-end-of-an-intel-cpu-name/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00367.warc.gz | en | 0.940057 | 819 | 2.90625 | 3 |
As the world digitally transforms, there are more ethical questions than ever before. These questions range in nature, but often involve altered content, information access, sources for transactions, analytics, AI, and data privacy. These are difficult questions, as they generally do not have a clear ‘right’ or ‘wrong’ answer.
This calls for a digital ethicist, a person or team of people within your organization responsible for understanding the implications of technology-enabled decisions and helping individuals weigh the ethical and moral impacts of those decisions. Digital ethicists today come from diverse backgrounds including law, philosophy, communication, and technology.
Digital ethicists are important because executives generally do not have the training necessary for ethical decision-making. Digital ethicists are trained specifically for it, and they dedicate their time and focus to the discipline of digital ethics, making them much better equipped for the job.
Researchers from the RIKEN Center for Advanced Intelligence Project (AIP) in Japan have shown that a deep-learning algorithm can be used to extract interpretable features from annotation-free histopathology images from prostate cancer patients.
Copyright by physicsworld.com
Their framework outperformed conventional, Gleason score-based methods at predicting biochemical recurrence.
Prostate cancer is the second most common cancer affecting men worldwide, with an incidence rate of 13.5%, according to the World Health Organization. Expert pathologists diagnose this type of cancer through a transrectal biopsy, following the results of a prostate specific antigen (PSA) test. The extracted samples of tissue are examined under a microscope and, if cancerous cells are found, divided into risk groups assigned through the Gleason Score. This grading system is considered the gold standard in cancer medicine, as it determines the aggressiveness of prostate cancer and helps doctors establish the right course of treatment.
This type of diagnostic pathology, however, requires expert knowledge, is time consuming and can suffer from inter-observer variability. Even though automated machine learning tools capable of accurately classifying histopathology images exist, these methods have not yet gained clinical approval – mainly because deep-learning algorithms suffer from a lack of interpretability, making their decisions hard to visualize or even explain.
AI-generated features deliver high accuracy
Paving the way towards interpretable clinical analyses, the group developed an artificial intelligence (AI) framework capable of acquiring interpretable features from annotation-free histopathological images. To achieve this, the researchers used whole-mount pathology images acquired at three different centres. This included images from 842 patients at Nippon Medical School Hospital (NMSH), plus 95 patients from St. Marianna University hospital (SMH) and Aichi Medical University Hospital (AMH).
The team used images from 100 patients at NMSH to extract the features, while the rest of the NMSH dataset was used to validate their method. Lead researcher Yoichiro Yamamoto and his team trained two unsupervised deep neural networks, known as deep autoencoders, to reduce the 10-billion-scale pixel data into 100 features. They achieved this by using both low-resolution and high-resolution histopathology image patches, inspired by the diagnostic process of pathologists. The computation was performed on AIP’s RAIDEN supercomputer.
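For readers unfamiliar with the technique, the sketch below shows the general shape of an autoencoder that compresses flattened image patches down to a small feature vector. It is a generic PyTorch illustration with made-up dimensions, not the authors' actual architecture:

```python
import torch
from torch import nn

class AutoEncoder(nn.Module):
    """Compress flattened image patches to 100 features and reconstruct them."""
    def __init__(self, n_pixels=64 * 64, n_features=100):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_pixels, 512), nn.ReLU(),
            nn.Linear(512, n_features),        # the learned features
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_features, 512), nn.ReLU(),
            nn.Linear(512, n_pixels),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Training minimises reconstruction error, so no labels (annotations)
# are needed -- which is what makes the approach unsupervised.
model = AutoEncoder()
patches = torch.rand(8, 64 * 64)               # a dummy batch of patches
loss = nn.functional.mse_loss(model(patches), patches)
```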
To validate their work, the researchers used the 100 generated features to predict cancer recurrence in the remaining NMSH dataset, and made the same predictions using the human-established criterion, the Gleason score. Their results showed that predictions of biochemical recurrence were more accurate with the AI-generated features than with the conventional method.
Moreover, when combining the Gleason score with the machine-generated features, the accuracy further increased. Furthermore, this framework delivered similar accuracies when used with the SMH and AMH datasets, revealing the potential for broad use.
Finally, the researchers evaluated the AI-generated features. They retrieved the most representative images for each of the features and asked an expert pathologist to examine them. “In summary”, they say, “the pathologist found that the deep neural networks appeared to have mastered the basic concept of the Gleason score fully automatically, generating explainable key features that could be understood by pathologists.”
The researchers point out that the deep neural networks identified features of stroma in the non-cancerous area as prognostic factors, and that such features typically have not been evaluated in prostate histopathological images. This is a key result as it raises the possibility of a tool capable of discovering new and uncharted disease characteristics. As future work, the team plan to further validate the framework by conducting clinical trials and by applying it to other diseases including rare cancers. | <urn:uuid:2236acbb-af9d-4eb9-b070-72db2e2441a3> | CC-MAIN-2022-40 | https://swisscognitive.ch/2020/01/31/opening-the-ai-box-can-deep-learning-predict-cancer/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00367.warc.gz | en | 0.931506 | 793 | 2.625 | 3 |
PKI helps us create secure networks. It uses asymmetric encryption to secure data in transit. A PKI also issues certificates, which help in verifying the identity of computers, routers, IoT devices, and other devices on the network. This decreases the chance of man-in-the-middle (MITM) attacks and other spoofing attacks. It can also be used to create digital certificates, which can further strengthen someone's identity and establish trust.
If PKI were not used, it would be difficult for one computer to trust another, and the possibility of MITM attacks would arise. Today's internet has tons of devices, including mobile phones, smartwatches, and IoT devices, where the privacy and security of data in transit are a concern. Payment systems also need a seamless encrypted network with both endpoints trusted, which is created with ease with the help of a PKI.
PKI can be used in:
- Establishing Secure Networks and encrypted connections
- Code Signing
- Online shopping and the Payment Industry | <urn:uuid:9d2fbb09-dd8e-4c9a-bd28-1948feda14a9> | CC-MAIN-2022-40 | https://www.encryptionconsulting.com/education-center/where-is-pki-used/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00367.warc.gz | en | 0.939847 | 222 | 3.578125 | 4 |
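As a small illustration of PKI at work in an everyday encrypted connection, the sketch below opens a TLS connection and lets the certificate authorities the operating system already trusts validate the server's certificate:

```python
import socket
import ssl

def inspect_certificate(host, port=443):
    """Connect over TLS and return the validated server certificate."""
    context = ssl.create_default_context()  # loads the trusted root CAs
    with socket.create_connection((host, port)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            # wrap_socket() raises an error if the certificate chain does
            # not validate or the hostname does not match -- PKI in action.
            return tls_sock.getpeercert()

cert = inspect_certificate("www.example.com")
print(cert["subject"], "valid until", cert["notAfter"])
```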
Using Data To Lower Indiana’s Infant Mortality Rate
According to the Centers for Disease Control and Prevention, Indiana has the seventh-highest infant mortality rate in the country and the highest in the Midwest. In April 2019, Governor Holcomb signed a bill to directly address the alarmingly high infant mortality rate, and with that, the OB Navigator program was born. The OB Navigator program will connect women with home-visitation services and require healthcare providers to verbally screen expectant mothers for drug and alcohol use, in order to decrease infant mortality rates in Indiana.
The Indiana chapter of the Healthcare Information and Management Systems Society hosted a Healthy Mom + Baby Datapalooza to leverage technology and data to decrease Indiana’s high infant mortality rate. The competition asked for contestants to create data analyses to help steer policies and initiatives with Governor Holcomb’s OB Navigator program.
DATA DICTATES DISCOVERY
Our data team spent a substantial amount of time in the data preprocessing phase of this project. The data sources provided by the challenge came from different locations (ISDH, Regenstrief Institute, Indiana Census, etc.) in multiple formats. To account for as much data as possible despite the differing formats, our final analysis plan involved three parts:
- Multiple linear regression
- Logistic regression
- A series of chi-squared statistical tests
Multiple linear regression and logistic regression models were built to better understand the predictors of infant death and their significance at the county level. Afterwards, our team of consultants ran a series of chi-squared tests to assess the effect of key individual birth data, such as the mother's marital status and type of living region (urban, rural, etc.), on infant survival rates in the first year.
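As an illustration of the chi-squared step (with entirely fabricated counts, not the actual Indiana data), a test of independence between maternal marital status and first-year infant survival might look like this:

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table:
# rows = married / single, columns = infant survived / infant died.
observed = [[4980, 20],
            [2940, 60]]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2={chi2:.1f}, p={p_value:.4f}, dof={dof}")
# A small p-value suggests survival is not independent of marital status.
```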
CSpring’s analysis provided insight into the strong effects socioeconomic factors have on OB-GYN care and maternal support. The top three predictors of infant death were:
- Median household income
- Access to healthcare
- Wellbeing of infants
CSpring’s chi-squared analysis also discovered that maternal marital status and breastfeeding habits have a significant effect on infants’ survival. Epidemiologists and clinicians at the event agreed, acknowledging that single mothers often lack the support system to navigate their pregnancy and early post-partum stages of childcare. It’s also possible that these mothers lack proper education on maternal care, the ability to breastfeed, and the time to prepare to care for a newborn.
We provided key stakeholders of the OB Navigator program with information regarding Indiana’s socioeconomic status and social factors and their effects on the infant mortality rate. Understanding the health of infants is not just a matter of understanding health claims – it begins with understanding the socioeconomic status and support system of the mother throughout her pregnancy and in the early post-partum stages of infant care.
See our other success stories here. | <urn:uuid:9400eb5f-8fcb-4e5f-ad77-4b860ef088d7> | CC-MAIN-2022-40 | https://cspring.com/indiana-infant-mortality-rate-data-visualization/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00567.warc.gz | en | 0.908733 | 613 | 2.578125 | 3 |
What Is SIP?
So what is SIP? SIP stands for Session Initiation Protocol. It is a type of IP-based communication protocol. Press play on the video below from sipsense.com to learn more about what SIP is and what it does. This video covers the basics of how SIP voice communications work.
Video transcript: SIP, the Session Initiation Protocol, a protocol to initiate a session. But what is a session and why do we need a protocol to get one started? Well, we’re talking about a communication session. In fact an IP communication session using IP devices and an IP network. At its heart, SIP is a signaling protocol to set up an IP communication session.
Now when we think about it, virtually all communications begin with signaling, even a face-to-face conversation. Dave here wants to talk to Nath. So he signals his intent by calling over. Hey Nath! A sound wave carried through the air. Recognizing the invitation to talk, Nath signals back to accept. What’s up? With Dave’s invitation signal answered with Nathan’s acceptance signal, both parties are ready to converse. They begin a dialogue, an exchange of media. In this case, more sound waves traveling through the air. Conversation complete, Dave issues another signal, this time to end the session and just to be polite, Nathan signals back, confirming he knows the session is ending.
Signaling in an IP world follows the same principles. Instead of signaling with an audio wave through the air, IP devices signal with IP packets over the IP network. Now there’s lots of different types of IP packet, each with their own function and content. The important thing is that both sender and receiver understand the packet construct and what to do with this data.
In IP communications, there are two types of IP packets. There are signaling packets and media packets. Signaling packets to establish or set up the session and media packets to then convey the audio, the video, the white boarding data and so on. With a voice session, the analog wave on one side is encoded into ones and zeros that are put into media packets and then sent out over the IP network.
On the other side, the ones and zeros are unpacked and decoded, the wave reconstructed and played back to the user. Encoding and decoding also applies to video. Images from the camera are encoded. The binary digits packaged in media packets and sent out over the IP network. Now the other side, again the data is decoded, the image reconstructed and then displayed to the user.
Now every second of conversation has many, many media packets, audio and perhaps video packets that convey the real time conversation. Simple enough. But for this media exchange to take place, we’re assuming two important things have already been sorted, that both parties know each other’s location. They know where to send their media packets and that both parties are using the same codecs for encoding and decoding the media.
So here’s the big question. How do we locate or find the other user’s IP address and how do we decide which codecs to use? We need a process, some descriptions and rules that help us locate each other and agree on codecs. What we need is a protocol and that’s where SIP comes in, the Session Initiation Protocol.
It’s essentially a rule book that describes how to locate the other party and which codec to use for encoding and decoding media. It also defines how to construct and send IP signaling packets to set up the call and what to do to then manage the call.
So now, if Dave wants to use his SIP phone to call Nath on his SIP phone, Dave’s phone follows all the steps defined by SIP to construct a special IP signaling packet, a SIP packet, populated with all the data needed to set up the call and sends it out over the IP network. On the other side, Nath’s SIP phone, understanding the rules of SIP, recognizes the packet is an invitation to talk and knows to alert Nath by playing the phone’s ringing sound.
More signaling packets are exchanged. We will talk about those later. Then with signaling complete and codecs determined, the phones get busy exchanging media packets that digitally convey the conversation. Then sometime later, one of the parties signals the end of the call and both stop sending media. The call is terminated. Simple.
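To give a flavour of what this signaling looks like on the wire, here is a simplified SIP INVITE. The names come from the transcript's example and the domains are made up; a real message carries more headers plus an SDP body listing the offered codecs:

```
INVITE sip:nath@example.com SIP/2.0
Via: SIP/2.0/UDP dave-pc.example.com;branch=z9hG4bK776asdhds
From: Dave <sip:dave@example.com>;tag=1928301774
To: Nath <sip:nath@example.com>
Call-ID: a84b4c76e66710@dave-pc.example.com
CSeq: 314159 INVITE
Contact: <sip:dave@dave-pc.example.com>
Content-Type: application/sdp
Content-Length: (size of the SDP body)

(SDP body: the IP address and codecs Dave offers for the media exchange)
```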
Now understanding the essence of SIP is really quite easy. It’s just signaling to set up and manage IP communication sessions. Understanding its full potential, however, and knowing enough to fully implement real-life SIP solutions and to support real IP communications networks, is something else.
There’s much to learn, many complexities to unravel and understand. But worry not. That’s why we’re here. That’s the purpose of this course. By the time we’re done, you will be a SIP expert. It will all make good SIP sense. If you want to utilize an internet-based phone system, you might start by buying a 3CX phone system software license.
Worms are the most common form of malware. They are viruses that infect computers while replicating and spreading to others in the network.
Worms exploit vulnerabilities in devices’ operating systems, replicating and spreading until they burden the network by overloading servers with requests and monopolizing available bandwidth. Worms replicate without interaction with users, and they do not need to be attached to software programs in order to reproduce. A worm might be deployed on the Internet, scan for machines running an older Windows version that lacks a security patch, and infect each vulnerable computer it finds.
Some worms can carry payloads that damage host computers but often the burdens caused by self-replication are damaging and disruptive enough. Examples of well-known worms are the 1988 Morris worm, said to be the first known worm, as well as ILOVEYOU, Michelangelo, and MSBlast.
“Worms are the most common kind of malware. Since these self-replicating viruses attack OS vulnerabilities, a way to mitigate them is to update your OS to the most recent version; the security patches are a way of keeping a step ahead of these viruses.”
Before we get into how artificial intelligence (AI) and machine learning (ML) are used by both the good guys and the bad guys in cybersecurity, let's define the terms and discuss their capabilities. Defining the terms is important because most folks use them interchangeably or incorrectly.
On a fundamental level, artificial intelligence (AI) security solutions are programmed to identify safe versus malicious behaviors by cross-comparing the behaviors and system traffic of users across an organization or environment with those in similar environments. Typically, in cybersecurity, AI works most effectively when a baseline is created for user behavior, and any deviation from that baseline alerts IT security to the change. This process of monitoring changes in activity is often referred to as "unsupervised learning", where the system creates patterns without human supervision.
The value of AI is that sophisticated AI cybersecurity tools have the capability to analyze enormous data sets allowing them to develop activity patterns that can quickly flag or alert IT security to potential malicious behavior. As we all know, prior to AI security tools, this was a brutally manual process handled by SOC analysts who often couldn’t identify anomalous behavior until after the breach was successful.
Thus, AI emulates the threat-detection aptitude of its human counterparts. In cybersecurity, AI is also used for automation, triaging, aggregating alerts, sorting through alerts, automating responses, and more. Simply put, AI is often used to augment the first level of a SOC analyst's responsibilities: investigating alerts and events.
Similarly, machine learning (ML) detects threats by constantly monitoring the behavior of the network for anomalies. Machine learning engines process massive amounts of data in near real time to discover critical incidents. These techniques allow for the detection of insider threats, unknown malware, and policy violations.
Machine learning can predict malicious websites online to help prevent people from connecting to them. Machine learning analyzes Internet activity to automatically identify attack infrastructures staged for current and emergent threats.
Algorithms can detect new malicious files and malware trying to run on systems throughout the environment. They identify new malicious files and activity based on the attributes and behaviors of malware that has never been seen before and doesn't have an existing signature.
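As a toy illustration of this kind of pattern learning (with a fabricated training set, nowhere near a production detector), a classifier can learn to score URLs from simple character-level features:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny fabricated training set: 1 = malicious, 0 = benign.
urls = ["login-secure-update.example.ru/verify",
        "docs.python.org/3/library",
        "free-gift-card.example.tk/claim",
        "github.com/openssl/openssl"]
labels = [1, 0, 1, 0]

# Character n-grams pick up tell-tale substrings ("login-", "free-", ".tk").
model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(3, 5)),
    LogisticRegression(),
)
model.fit(urls, labels)
print(model.predict_proba(["secure-login-update.example.tk/claim"])[0][1])
```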
As we suggested earlier, Artificial intelligence (AI) and machine learning (ML) are often used interchangeably, but machine learning is a subset of the broader category of artificial intelligence.
Thus, put in context, AI refers to the general ability of computers to emulate human thought and perform human tasks in real-world environments, while ML refers to the technologies and algorithms that enable systems to identify patterns, make decisions, and improve themselves through experience and data. I’m going to put that in bold below because it is important, and as we said prior, most people do not know the difference between the terms.
AI refers to the general ability of computers to emulate human thought and perform human tasks in real-world environments, while ML refers to the technologies and algorithms that enable systems to identify patterns, make decisions, and improve themselves through experience and data
Software developers and computer programmers enable systems to analyze data and create an accurate picture of user and system behavior — basically, they create artificial intelligence systems — by applying tools such as the machine learning and deep learning techniques described below.
Artificial intelligence and Machine Learning in Cybersecurity
Artificial intelligence (AI) has grown increasingly powerful and easy to use. More cybersecurity technologies have embedded AI and machine learning (ML) into their products. As we said, many organizations today do not have enough SOC analysts to capture, analyze, triage, and execute a cyber-attack response fast enough to prevent future breaches.
Organizations are now using AI and ML tools in IT security to empower their defenses, while hackers use the same tools to break down those security protection layers. AI-powered ransomware and phishing attacks are becoming common.
As we face unprecedented, sophisticated attacks, AI-based tools for cybersecurity have emerged with great success. These tools can help us reduce our vulnerability to breaches and improve our overall cybersecurity stance.
For example, ML can protect productivity by analyzing suspicious cloud app login activity, detecting location-based anomalies, and conducting IP reputation analysis to identify threats and risks in cloud apps and platforms. In addition, ML can detect malware in encrypted traffic by analyzing encrypted traffic data elements in common network telemetry. Rather than decrypting, machine learning algorithms pinpoint malicious patterns to find threats hidden with encryption.
AI cybersecurity solutions are able to identify, predict, respond to, and learn about potential threats without depending on human input, and to do so at a speed and scale no human team can match.
Let's finish this section with one more attempt at clearly defining artificial intelligence and machine learning. We'll even add one more term, deep learning, which you'll hear consistently used in conjunction with the other two.
Artificial Intelligence is the field of developing computers and robots that are capable of behaving in ways that both mimic and go beyond human capabilities. AI-enabled programs can analyze and contextualize data to provide information or automatically trigger actions without human interference.
Machine learning is a pathway to artificial intelligence. This subcategory of AI uses algorithms to automatically learn insights and recognize patterns from data, applying that learning to make increasingly better decisions.
By studying and experimenting with machine learning, programmers test the limits of how much they can improve the perception, cognition, and action of a computer system.
Deep learning, an advanced method of machine learning, goes a step further. Deep learning models use large neural networks — networks that function like a human brain to logically analyze data — to learn complex patterns and make predictions independent of human input.
As we suggested in our Digital Transformation, 4-part blog series, digital transformation is happening so quickly that IT security is often an afterthought. Digital transformation strategies have created new attack surfaces requiring SecOps and IR teams to adopt new adaptive control capabilities. In addition, business model transformation is changing everything from how we manage our daily lives to how we run companies.
Digital transformation is the process by which companies embed technologies across their businesses to drive fundamental change
Consumers now expect what they want almost instantaneously because service providers have the technology to provide it. Despite the increase in malware and other attacks, organizations are forced to press forward with their transformation strategies or be left behind by the competition.
“Supporting digital transformation initiatives and a remote work model has led to a dramatic increase in the exposed edges of the network,” says Bob Turner, field CISO of higher education at Fortinet. “At the same time, malware, ransomware and other threats continue to challenge organizations by exploiting inconsistently protected endpoint devices.”
Artificial intelligence and machine learning need massive amounts of data to be useful in cybersecurity. Processing huge volumes of data and feeding that content into machine learning classifiers is one of the prerequisites for AI to become a critical element for organizations.
Without big data, AI and ML have little value in cybersecurity. Sophisticated, advanced attacks generate large amounts of data, and if an AI and ML engine is not configured optimally, the system will produce more false positives for SecOps.
Knowing the goal and purpose of AI and ML for security operations is critical for organizations. Without precise alignment, an organization cannot best leverage AI's abilities, which leads to an undeveloped and underutilized investment. Organizations choosing to invest in AI and ML should know that the initial cost, along with ongoing investment in technology and human expertise, is critical to getting the most out of the platform.
Organizations making a critical investment in AI and ML should be fully aware of the risk they are accepting. AI-based security tools show great promise for processing data and improving rudimentary processes, but AI also comes with the inherent risk of process manipulation. Artificial intelligence systems are susceptible to attacks like any other system.
Adversarial attacks leverage the power of AI for persistent threats against corporate networks. Much as companies invest in turning data flows into usage datasets for machine learning classifiers that help predict the next cyber-attack, hackers also use AI and ML to help determine the most effective way to attack their targets.
Machine learning systems are different from traditional computer programs because they don't need to be updated after installation. Therefore, they're vulnerable to attack even if they aren't connected to the Internet. And unlike traditional security flaws, which usually need physical access to the device, machine learning weaknesses can exist without any connection to the outside world.
As a real-world example, ML algorithms can be implemented within network traffic analysis to detect network-based attacks such as DDoS attacks. A trained algorithm can detect the abnormally large volume of traffic a server receives during a DDoS attack. In addition, the algorithms can discover the attack vector or attack type, like a TCP flood, which enables SOC teams to take precautions against similar cyber threats in the future.
One more real-world example: ML-based solutions can be trained to detect anomalies in HTTP requests and raise alarms in case of an attack. They can also be trained to classify the type of attack (SQL injection, XSS) and detect attack vectors, as sketched below.
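Here is a minimal sketch of the anomaly detection idea, using scikit-learn's IsolationForest on two toy traffic features; real systems train on far richer telemetry:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy per-client features: [requests per minute, mean request size in bytes].
rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=[60, 800], scale=[10, 150], size=(500, 2))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)          # learn what "normal" looks like

# A flood-like burst: thousands of tiny requests per minute.
suspect = np.array([[4000, 40]])
print(detector.predict(suspect))      # -1 = anomaly, 1 = normal
```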
AI and ML are becoming increasingly critical for cybersecurity because these technologies learn throughout their lifetimes to recognize new kinds of threats. They draw upon histories of user behavior to create profiles of individuals, organizations, and systems, which allow them to spot deviations from established norms.
Conventional security tools use signatures or indicators of compromise (IOCs) to identify threats. This technique can easily identify previously discovered threats, but signature-based tools cannot detect threats that have not been discovered yet. In fact, they can identify only about 90 percent of threats.
Artificial intelligence can increase the detection rate of traditional techniques up to 95 percent. The problem is that you can get multiple false positives. The best option is a combination of AI and traditional methods; this merger of the conventional and the innovative can push detection rates toward 100 percent while minimizing false positives.
In addition, as more organizations have discovered through their incident response activities, including the lessons-learned phase, the early deviations that precede a breach often appear six months to a year before the attack. These "breadcrumbs" show up in log files or in live SNMP alerts fed into the SIEM. A deviation found by the CASB DLP, together with an additional crumb found in the endpoint security solution, can now be correlated by extended detection and response (XDR) systems. By capturing telemetry from several hosts and adaptive controls, XDR, through the power of ML, can auto-correlate these events into a kill-chain report or align them with the MITRE ATT&CK framework for additional threat hunting and incident response capabilities.
With XDR capabilities, organizations can leverage the power of AI and ML. However, what if the breadcrumbs were a ruse? What if the breadcrumbs set by hackers were a false flag attack, only attempting to manipulate the ML classifier and automation response?
This reality is essential for organizations to grasp when considering the value of AI and ML. Data manipulation by hackers could happen months or even a year before the attack. By sending in dry-run breadcrumbs, a hacker can measure the method and response of ML and Security Orchestration, Automation, and Response (SOAR) capabilities.
How long did it take for the organization to recognize, identify, eradicate, and restore a system after the initial cyber-attack? Hackers probing and testing a target's ML capabilities will continue this reconnaissance in the month before the attack is launched.
Over the past few years, artificial intelligence (AI) has emerged as an essential tool for augmenting the efforts made by human information security professionals. Because humans cannot scale up to adequately secure the ever-growing attack surface of the modern organization, AI offers much-needed analysis and detection capabilities that cybersecurity professionals can use to reduce risk and improve security posture.
Human error in managing AI systems does exist, and potential threats arise across the applications of machine learning and AI tools. Accurate data sets are critical for organizations to fully realize the value of AI and ML for cybersecurity and business operations.
AI and ML can offer enhanced protection without increasing staff or putting a major dent in most organizations' budgets. Because machine learning is advanced technology, it is not inexpensive. However, one dollar spent on an organization's preventive response capabilities is going to equal five or six dollars spent dealing with a breach. It's definitely more expensive to have to deal with a fire than to buy a fire extinguisher – I heard this at a cyber conference recently and thought it was a great analogy.
To be successful in nearly any industry, organizations must be able to transform their data into actionable insights. Artificial intelligence and machine learning provide organizations the advantage of automating a variety of manual processes involving data and decision-making.
By incorporating artificial intelligence and machine learning into their systems and strategic plans, leaders can understand and act on data-driven insights with far greater speed and efficiency. | <urn:uuid:bac159ab-d5ef-48c5-8650-5283ff89f616> | CC-MAIN-2022-40 | https://hitachi-systems-security.com/www-hiartificial-intelligence-and-machine-learning/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00567.warc.gz | en | 0.938557 | 2,660 | 3.203125 | 3 |
Government Data Protection Law in the State of Virginia
Virginia’s Government Data Collection and Dissemination Practices Act is a data privacy law that was enacted in 2015. As the name of the law suggests, Virginia’s Government Data Collection and Dissemination Practices Act was enacted for the purpose of safeguarding the various forms of personally identifiable information that residents of the state submit to governmental agencies that operate within Virginia. With this being said, the law outlines the responsibilities that government agencies have within Virginia as it concerns ensuring that the personal information of the state’s various residents is collected, maintained, and disseminated in a manner that protects the confidentiality and integrity of said information.
How are government agencies defined under the law?
Under Virginia’s Government Data Collection and Dissemination Practices Act, a government agency is defined as “any agency, authority, board, department, division, commission, institution, bureau, or like a governmental entity of the Commonwealth or of any unit of local government including counties, cities, towns, regional governments, and the departments thereof, and includes constitutional officers except as otherwise expressly provided by law.” Alternatively, the law defines a data subject as “means an individual about whom personal information is indexed or may be located under his name, personal number, or other identifiable particulars, in an information system.”
What are the duties of government agencies under the law?
The data and privacy protection duties that government agencies have under Virginia’s Government Data Collection and Dissemination Practices Act include the following:
- Government agencies are prohibited from creating or maintaining information systems that are hidden from the general public.
- Government agencies may only collect information pertaining to Virginia residents in accordance with a specific need or purpose that has been expressed to them in advance.
- Information that a government agency collects must be relevant and appropriate with respect to the purpose for which it was collected.
- Government agencies are prohibited from collecting personal information via fraudulent or deceptive methods.
- Government agencies are only permitted to use personal information that is accurate, reliable, and up to date.
- Government agencies are responsible for ensuring that the personal information in their possession is protected from misuse.
- Government agencies are responsible for developing procedures that will allow individuals to correct, amend, or erase personal information pertaining to them.
- Government agencies must ensure that all personal information they collect is done so in accordance with applicable legislation.
What data elements are protected under the law?
The data elements concerning citizens that are legally protected from unauthorized access or disclosure under Virginia’s Government Data Collection and Dissemination Practices Act include but are not limited to:
- Social security numbers.
- Driver's license numbers.
- State identification card numbers.
- Student identification numbers.
- Vehicle license plate numbers.
- Education information.
- Property holdings that have been derived from tax returns.
- Financial transactions.
- Medical history.
- Employment records.
- Voice prints.
- Ancestry information.
- Political information.
- Religious information.
How can government agencies comply with the law?
As government agencies inherently collect large amounts of personal information from their respective citizens, effectively protecting this information can prove challenging. To this end, one way in which government agencies can comply with legislation such as Virginia’s Government Data Collection and Dissemination Practices Act is through the use of automatic redaction software. To illustrate this point further, the provisions of the law protect license plate numbers from unauthorized dissemination. Using an automatic redaction software program, government agencies can automatically redact thousands of license plates in videos or images within minutes, ensuring that this information does not become compromised.
The provisions of Virginia’s Government Data Collection and Dissemination Practices Act serve to hold government agencies within the state accountable with regard to the forms of personal data they collect, manage, and disclose in relation to the state’s residents. As this information underpins the trust that the American populace places in their local institutions and agencies, it is imperative that said information is protected at all times. Through such legislation, citizens of the state of Virginia have the means to hold their local elected officials accountable.
Since 2017, federal agencies have been mandated to follow the National Institute of Standards and Technology’s Cybersecurity Framework to manage cybersecurity risk. However, for years before that, feds needed to follow another NIST publication to do similar activities: the Risk Management Framework for Information Systems and Organizations.
The guide, NIST Special Publication 800-37, has been around since 2007 and was updated in December 2018. During the Obama administration, the Office of Management and Budget Circular A-130 noted that the Risk Management Framework “requires agencies to categorize each information system and the information processed, stored, and transmitted by each system based on a mission or business impact analysis.”
What Is the NIST Risk Management Framework?
What does that mean in plain English? Ron Ross, a fellow at NIST and one of the agency’s cybersecurity experts, says the RMF is intended to help agencies “select and deploy the appropriate safeguards to protect their information and their information systems.” The RMF was originally designed to help agencies comply with the Federal Information Security Modernization Act.
Over the past decade, Ross says, the RMF has evolved to include cybersecurity, privacy and supply chain risk management. Now, its main purpose is to give “discipline and structure to how organizations go about selecting the appropriate safeguards and countermeasures. The framework is the process of managing risk, and its security controls are the specific things we do to protect systems.”
The Risk Management Framework is composed of six basic steps for agencies to follow as they try to manage cybersecurity risk, according to Ross.
What Are NIST’s Risk Management Framework Steps?
- Categorize. This is the first step in the NIST Risk Management Framework, and it forces agencies to follow the “triage concept,” Ross says, categorizing their IT and data based on how it might impact their mission, ranging from low impact to high. A low-impact system would be something that, if it were lost or compromised, would have a limited adverse impact. A moderate-impact system’s loss would be serious but not catastrophic, according to Ross. And a high-impact system’s compromise would result in severe or catastrophic effects. Agency IT leaders are required to “take an honest look” at all of their data and systems and place them into those three buckets. From there, agencies apply different security controls to their data. (A minimal sketch of this triage rule appears after this list.)
- Select. This is the next step, in which agencies select “an initial set of baseline security controls for the system based on the security categorization; tailoring and supplementing the security control baseline as needed based on organization assessment of risk and local conditions,” as a NIST webpage notes. Those controls are then put into agency security and privacy plans.
- Implement. The third step is to implement the security controls and document how they are being deployed throughout the agency. Many of the controls come from commercial cybersecurity solutions, Ross notes, and “a lot of that technology is built into the products.”
- Assess. The fourth step is to assess the security controls “using appropriate procedures to determine the extent to which the controls are implemented correctly, operating as intended, and producing the desired outcome with respect to meeting the security requirements for the system,” according to NIST.
- Authorize. After that, the fifth step is to authorize system operations based on the risk level to the agency’s operations and assets, individuals, other organizations and the nation, and the determination that the risk level is acceptable.
- Monitor. After all of that is done, agencies must monitor and assess their security controls continuously to determine how effective they are, and must document “changes to the system or environment of operation, conducting security impact analyses of the associated changes, and reporting the security state of the system to appropriate organizational officials,” according to NIST.
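To make the Categorize step concrete, here is a minimal Python sketch of the triage rule, assuming the FIPS 199 "high-water mark" convention in which a system's overall category is the highest of its confidentiality, integrity, and availability impact levels (the function and its names are illustrative, not from NIST's text):

LEVELS = {"low": 1, "moderate": 2, "high": 3}

def categorize(confidentiality: str, integrity: str, availability: str) -> str:
    # The system inherits the highest impact level among the three security objectives.
    return max((confidentiality, integrity, availability), key=LEVELS.get)

print(categorize("low", "moderate", "low"))  # -> "moderate": apply the moderate baseline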
NIST Risk Management Framework vs. NIST Cybersecurity Framework
The NIST Cybersecurity Framework was born out of an executive order that former President Barack Obama issued in February 2013, which directed NIST to “lead the development of a framework to reduce cyber risks to critical infrastructure” in an open, transparent and collaborative manner.
That first version was issued a year later, after being reviewed by agencies, industry, state and local governments, foreign governments and companies, and academics. NIST revised the framework and issued Cybersecurity Framework Version 1.1 in April 2018. Since President Donald Trump’s May 2017 cybersecurity executive order, the framework has been mandated as the document that agency heads should use to manage cybersecurity risk.
Originally, the RMF was designed for federal agencies to follow to implement FISMA, and the Cybersecurity Framework was designed for the private sector, Ross says. Now, agencies have two mandatory frameworks to use, but NIST does not want agencies to be doing double work as they enhance cybersecurity controls, Ross says.
When NIST revised the RMF in December 2018, the agency put in place indicators of where users could turn to see where an action in the RMF corresponds to a commensurate action in the Cybersecurity Framework, Ross says. The goal is to give agencies choices on how to select controls. The RMF pushes agencies to select baseline cybersecurity controls, and the Cybersecurity Framework can be used to drive control selection as agencies tailor them for their mission environment and operations.
What Is the Purpose of NIST 800-53?
A different publication, NIST 800-53, catalogues the security and privacy controls that agencies can use.
There are 115 low-impact controls, ranging from security awareness training to time stamps, security assessments, continuous monitoring, information systems backup, risk assessments and vulnerability scanning.
There are 159 moderate-impact controls, including least privilege, remote access, contingency planning and device identification and authentication.
And there are 170 high-impact controls, including concurrent session controls, supply chain protection, denial of service protection, malicious code protection and memory protection.
Ross says he views 800-53 as a “parts bin” for agencies to protect their systems and data once they have an idea of what they want to build.
“I look at the RMF as the framework; it’s the car,” Ross says. “800-53 is the gas that goes in the car.”
What Are NIST Security Controls?
NIST tries to give agencies guidance for when and how to use low-, moderate- and high-impact controls, Ross says.
Every agency is different, he notes, ranging from banking institutions to those developing weapons systems and protecting critical infrastructure. Some have missions that are critical to national security and public safety. That can influence the technologies they are using.
“It’s a tremendous scope of diversity across there,” Ross says. “You have to have frameworks that are agile and have the breadth and depth that can stand up, at the high end, to nation-state adversaries.”
Some agencies will want to put more emphasis on firmware controls and integrity, since laptops’ basic input-output systems have been used as the basis for cyberattacks, Ross says.
“You need to know the threats out there and the vulnerabilities you have, and, if a threat should exploit that, what you should do,” Ross says. | <urn:uuid:845f3468-344e-4234-8994-5bb0ecb6b65b> | CC-MAIN-2022-40 | https://fedtechmagazine.com/article/2019/09/nist-risk-management-framework-how-it-can-help-feds-boost-cybersecurity-perfcon | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00767.warc.gz | en | 0.953497 | 1,528 | 2.703125 | 3 |
Anyone who’s bought a computer lately knows that the price of storage has been falling rapidly. And as disk drives get cheaper, databases are getting bigger, which is enabling companies to do things they couldn’t do a few years ago, according to Richard Winter, president of Waltham, Mass.-based Winter Corp., a consulting shop that specializes in very large database installations.
Winter, who periodically surveys his customers about the size of their databases, estimates that the largest commercial databases in use today are in the range of 50 terabytes.
Most databases in the commercial arena are in the 100 gigabyte to one terabyte range. Winter estimates that there are currently a couple hundred commercial databases over 10 terabytes, and only a handful in the 50 terabyte range.
Based on what his clients are telling him, by next year that figure could climb to around 75 terabytes.
To put that amount of data in perspective, consider that storing one single terabyte of data on paper would require some 150 miles of bookshelves, according to Winter.
Faster Than Moore’s Law
The rapid growth of data is partly a result of the steady decline in the cost of disk drives, he says. “The price of storage capacity has been dropping by half roughly every nine months. That’s twice as fast as Moore’s law, which says that the number of units on a chip doubles every 18 months.”
That’s a key enabling factor, according to Winter, because large databases require even larger storage facilities.
“With commercial databases the ratio of total storage to actual data is about five to one, on average,” he says.
That means that typically only one fifth of the disk space is being used for the actual database. The rest goes to indexing, mirroring, or free space for growth of the database.
The exact ratio of data to storage depends both on the individual application, and on the database technology being used. Major players in the very large database space include Oracle, IBM, Sybase, and NCR’s Teradata division.
Larger databases will inevitably mean larger storage area networks or other storage structures, according to Winter. “If we’re seeing commercial databases of 75, or maybe even 100, terabytes next year, then the total storage associated with those is probably going to be in the range of 500 terabytes, or half a petabyte.”
Oceans of New Data
And databases are growing rapidly.
“We’ve seen over the last few years that video cameras connected to computers have gotten incredibly cheap,” says Winter. “And not just video — all the devices that gather data are getting smaller, faster, and cheaper all the time. So the technology of capturing data is improving rapidly, and the number of devices that capture data is growing, and that results in rapidly growing oceans of data that are available for scientific and commercial analysis.”
These vast quantities of data are opening up new avenues of research for some companies.
“The advances of the last several years are enabling new applications,” says Winter. “In many industries, it has not been economically practical until now to retain full transaction detail for long term analysis. Large retailers are now at the point where they can store as much as seven years of full transaction detail, for example, which is allowing them to really look at the details of customer purchase patterns.”
This feature originally appeared on the CIO Information Network.
Back to Enterprise Storage Forum | <urn:uuid:c922bb0c-ad13-4503-b9ad-fa0a2451b9a8> | CC-MAIN-2022-40 | https://www.datamation.com/storage/rapidly-falling-storage-costs-mean-bigger-databases-new-applications/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00767.warc.gz | en | 0.949216 | 734 | 2.796875 | 3 |
Dry eye syndrome is a common problem among contact lens wearers and non-wearers alike. People who wear contact lenses may be familiar with the irritation that comes from dried-out eyes. It also damages ocular tissue.
Scientists at the Technical University of Munich (TUM) found a new type of lubricant, based on molecules found in pig stomachs, that keeps would-be dry eyes safe and sound. Normally, the surface of the human eye is kept well-lubricated by a molecule called mucin MUC5AC. The molecule is found in tears as well as the stomach and intestines. It keeps the eye surface nice and moist due to its ability to bind together lots of water.
A lack of MUC5AC can be problematic for those of us who wear contact lenses: without a protective lubricant film between the eye and the contact lens, the tissue of the cornea can be injured.
In experiments, the researchers needed large quantities of the molecule, which eliminated human tears as a possible source. Therefore, the team optimized a method for isolating the necessary mucin MUC5AC from the stomachs of pigs. The chemical structure of this pig mucin is similar to that of the human molecule. The purification procedure must be conducted carefully to ensure that the purified molecule retains its characteristic property as a lubricant and does not suffer chemical changes during the purification process.
"Most of the commercially available mucins, which are already used for the treatment of oral dryness, have lost exactly this ability. We were able to demonstrate this in a series of experiments," says TUM's Oliver Lieleg, who led the work. These commercial mucins are therefore not suitable for treating dry eyes.
The team carried out tests on pig eyes, using pig-derived mucins to lubricate contact lenses, and monitored their performance. These lenses caused no tissue damage, and soaking the contact lens in the mucin solution overnight should be enough to avoid problems associated with dry eyes.
“We showed that the mucin passively adsorbs to the contact lens material and forms a lubricating layer between the contact lens and the cornea,” explains Benjamin Winkeljann, first author of the study.
The researchers say the main benefits of their porcine-inspired approach stem from how closely the purified molecule resembles the natural mucin found in tear fluid.
More information: [Advanced Materials Interfaces] | <urn:uuid:25b42baf-8871-4bc8-976f-2ca4fc5e2097> | CC-MAIN-2022-40 | https://areflect.com/2017/08/02/molecule-in-the-pig-stomach-might-contain-the-solution-to-dry-eyes/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00767.warc.gz | en | 0.939107 | 493 | 3.0625 | 3 |
ODBC (Open Database Connectivity) is a standard set of calls for connecting to and interacting with a database; it defines a standardized way of interacting with any kind of database.
There are various kinds of databases available from various vendors, which normally ship a native (proprietary) driver for their database. ODBC is a standard that was created to make applications independent of the database, making those applications portable to other types of databases. If an application is developed using only ODBC calls to interact with a database, the application is independent of the kind of database being used. Database vendors supply an ODBC driver along with the database, so an application that uses only ODBC calls can interact with any database for which an ODBC driver is available, without recompilation.
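As a rough illustration, here is how a Python application might use ODBC through the pyodbc module. The driver name, server, and credentials below are placeholders; the point is that switching databases only requires changing the connection string, not the application code:

import pyodbc

# The connection string selects the ODBC driver; the code below is driver-agnostic.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=db.example.com;DATABASE=sales;UID=app;PWD=secret"
)
cursor = conn.cursor()
cursor.execute("SELECT id, name FROM customers WHERE region = ?", "EMEA")  # '?' is the ODBC parameter marker
for row in cursor.fetchall():
    print(row.id, row.name)
conn.close()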
A business email compromise attack (or BEC attacks for short) is a form of cybercrime that utilises email fraud tactics. Victims of BEC attacks are varied and include government departments, charities and enterprises in both the industrial and commercial sectors.
While the specific aim of a BEC attack may vary, they are always conducted to have a negative impact on their chosen victim, with the aim of achieving a specific result that adversely affects the target.
How BEC attacks work
Business email compromise attacks will often start with a threat actor cunningly spoofing company email accounts, enabling them to impersonate a firm’s members of upper management, and in some cases even the CEO. The email address used appears legitimate. As a result, it can bypass email security filters that normally block malicious messages from reaching personnel.
After getting past these security measures, the spoofed email ends up in the intended recipient's inbox. The message looks entirely legitimate, as do the requests it includes. The operator behind the BEC attack will commonly ask for a financial sum to be paid out to a nominated account.
As the message looks authentic and appears to originate from management, BEC victims feel comfortable complying with any request made. The BEC operator may ask the target to transfer funds directly or request that a cheque is deposited. The payment method asked for is informed by details gleaned on how the firm typically makes a financial transaction, and the option insisted upon in BEC emails will match the company’s standard operating procedure. This tactic ensures that the recipient never becomes suspicious and raises an alarm.
In recent years, threat operators have moved beyond using BEC attacks for financial gain alone. Now, this cybercriminal activity is a common option for threat actors seeking to obtain personally identifiable information (PII) and employee credentials. The information obtained through BEC attacks is then used to penetrate an enterprise's network even more deeply. As such, BEC attacks can be a prelude to a more serious threat such as a ransomware assault.
Can you avoid BEC attacks?
Businesses keen to protect their staff from BEC attacks can consider specific steps. For example, they can set up a company domain and use it to establish dedicated email accounts for employees instead of making use of free options which are far easier to spoof.
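A company domain also lets you publish email authentication records that make spoofing harder. As an illustration only (example.com is a stand-in), the DNS TXT records might look like the following, where the SPF record lists the servers allowed to send mail for the domain and the DMARC record tells receivers to reject messages that fail the checks and where to send reports:

example.com.         TXT  "v=spf1 include:_spf.example.com -all"
_dmarc.example.com.  TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"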
All email accounts should be protected with multifactor authentication. This advanced measure for email authentication will ask for further information to access email accounts, and may include PINs, passcodes, and even biometric data like fingerprint scans and facial recognition.
Staff must also be instructed on how to respond to emails that they receive from suspicious senders. These messages should not be opened, and any attachments or links that are included must never be interacted with.
At Galaxkey, we can offer a wide range of email security tools for enterprises including powerful encryption that can keep all data stored in accounts, or sent out to recipients free from prying eyes. Contact our team today for more information and a free trial. | <urn:uuid:dd092182-5ba0-4f92-a25e-984024d4fb94> | CC-MAIN-2022-40 | https://www.galaxkey.com/blog/what-is-business-email-compromise/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00767.warc.gz | en | 0.938573 | 615 | 2.578125 | 3 |
What is MPLS?
MPLS stands for Multi-Protocol Label Switching. MPLS is a packet forwarding technology capable of carrying any Layer 3 protocol, hence the word "multi-protocol." MPLS is capable of tunneling L3 packets inside the MPLS network using MPLS labels. The MPLS label is pushed into the packet between the Layer 2 header and the Layer 3 header at the ingress router and is used to switch the packet across the network to its destination.
What is the MPLS Label and How is it used?
The MPLS label is a fixed 4-byte identifier added to the packet by the ingress router between the data-link layer (Layer 2) and the network layer (Layer 3), and it is used by all middle routers to switch the packet to its destination without the need for any routing table (Layer 3) look-ups. MPLS is considered a layer 2.5 technology, and the MPLS header is called the shim header.
The label structure is illustrated below. One or more labels are pushed onto the packet at the ingress router, forming a label stack. The first label is called the top label or transport label; other labels are used by different MPLS applications if needed.
- Label: label value, 20 bits.
- EXP: Experimental bits (now renamed Traffic Class), 3 bits.
- S: bottom of stack, 1 bit.
- TTL: Time to live, 8 bits.
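To see how these four fields share the 4 bytes, here is a small Python sketch that unpacks a shim header (the sample bytes are made up for illustration):

import struct

def parse_mpls_label(raw: bytes) -> dict:
    # The shim header is a single 32-bit big-endian word.
    (word,) = struct.unpack("!I", raw)
    return {
        "label": (word >> 12) & 0xFFFFF,  # top 20 bits: label value
        "tc": (word >> 9) & 0x7,          # 3 bits: traffic class (EXP)
        "s": (word >> 8) & 0x1,           # 1 bit: bottom of stack
        "ttl": word & 0xFF,               # 8 bits: time to live
    }

print(parse_mpls_label(bytes([0x00, 0x01, 0x41, 0x3F])))
# {'label': 20, 'tc': 0, 's': 1, 'ttl': 63}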
A couple of definitions are important before moving to MPLS operation:
- Downstream router: This is the router which advertises the prefix. In other words the router that is the next hop to a specific prefix is the downstream.
- Upstream router: This router receives the routing information from its downstream router.
- Label Edge Router (LER): Operates at the edge of the MPLS network (ingress/egress) and make forwarding decisions based on the IP header information of the packet.
- Label Switch router (LSR): the routers in the middle of the MPLS network which forwards MPLS packets based on label information.
Routing information flows from downstream routers to upstream routers, while data flows from upstream routers to downstream routers.
How MPLS works?
MPLS uses the concept of Forwarding Equivalence Class (FEC). The FEC is of a set of packets forwarded in the same manner by the label switching routers (LSR). Each router assigns a label to a FEC and distributes this label to other routers using label distribution protocols forming label switched paths or LSPs.
When a packet is received by the ingress router, it determines the next hop and inserts one or more labels into the packet. Then the labeled packet is passed to the next-hop (downstream) router. When the packet reaches the downstream router, the topmost label is examined and used as a unique identifier to look into the label forwarding table to determine the next hop and the label operation to be performed on each MPLS packet.
Finally the packet reaches the egress router, the label is removed and the packet is forwarded using an IP lookup or another label based on the MPLS application used.
As you can see the provider routers do not need to examine layer 3 information of the traversed packets, allowing for protocol independent packet forwarding.
- R1 advertises prefix 10.10.10.0/24 to the network using any IGP.
- Routing information about the subnet flows away from R1.
- An IP packet enters R4 (LER) with a destination of 10.10.10.0/24.
- R4 looks in its label forwarding information base (LFIB), determines the next hop (R3), and pushes the label assigned by R3 (L3) for this FEC.
- R3 receives the labeled packet from R4 with label L3. R3 examines its LFIB and swaps label L3 for L2.
- R2 receives the MPLS packet, looks up the LFIB and pops the label (penultimate hop popping) before sending the packet to R1 as an IP packet.
- R1 forward the packet to its destination based on IP header information.
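To make the label operations in this walkthrough concrete, here is a toy Python model of the LFIB lookups performed by R3 and R2. The label numbers and table contents are illustrative only:

lfib_r3 = {3: ("swap", 2)}    # in-label L3 -> swap to L2, forward toward R2
lfib_r2 = {2: ("pop", None)}  # in-label L2 -> pop (penultimate hop popping), forward to R1

def process(lfib: dict, in_label: int):
    # Each LSR keys its forwarding decision on the incoming top label alone.
    return lfib[in_label]

action, label = process(lfib_r3, 3)       # ('swap', 2)
action, label = process(lfib_r2, label)   # ('pop', None): the packet continues as plain IP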
Please refer to MPLS label operations post for a description of different label operations.
MPLS network requirements
The following elements must exist in the network to be able to run MPLS
- A layer 3 routing protocol (IS-IS, OSPF, EIGRP or RIP); preferably IS-IS or OSPF for Traffic engineering.
- Label distribution protocol (RSVP, LDP or BGP).
- Network capable of handling MPLS traffic.
Beyond these requirements, MPLS also brings benefits such as:
- A BGP-free core in the service provider.
- MPLS Applications like MPLS VPN and Traffic Engineering.
- A unified network in the service provider, as you can provide IP, L3 VPN, or L2 VPN services over the same network.
I hope I have been able to answer this question clearly and simply; please do not hesitate to post any comments, corrections, or requests.
Waste heat has become a glaring problem for data centers, prompting engineers to seek opportunities to transform it from a liability to an asset. The severity of the waste heat situation has implications stretching far beyond data centers themselves, but also for the environment and life as we know it.
Each data center runs thousands of servers, featuring increasingly powerful processors. These processors run the calculations to power social media, news sites, and the entirety of the World Wide Web. As essential as they are to keeping us connected, they also produce waste heat – or the heat released from operating hardware.
The processors are only designed to withstand certain temperatures. When data centers fail to dissipate the excess heat, the servers will fail. Air cooling systems are a valiant attempt, but they struggle to remove enough heat to be considered eco-friendly. Traditional air cooling systems mitigate the server damage by redirecting the heat into the atmosphere, which contributes to global warming.
This lose-lose situation poses a challenge for the industry. Innovative data centers are asking themselves how they can actively recycle waste heat. Liquid cooling is a promising technological solution that may just save our digital lifestyles and the planet we depend on.
Data Centers Produce Waste Heat: The Science of the Problem
Inside a data center, electricity feeds the servers and other IT equipment. The processors convert that electricity into computations for internet applications. As a byproduct, the processors also emit some waste heat.
The waste heat can have a serious negative effect on processor performance, slowing down the calculations and even damaging or destroying parts. Temperature is one of the main limiting factors of processor performance.
The waste heat also impacts the environment, adding to the climate crisis. On one hand, burning fossil fuels to produce electricity heats the planet as a byproduct of combustion. On the other hand, waste heat seeping out of data centers adds fuel to the fire, making the earth even hotter.
Data Centers CAN Benefit from Recycling Waste Heat with Liquid Cooling: The Technology of the Solution
In traditional air-cooled data centers, waste heat is irrevocably lost to the environment. But liquid cooling technology allows data facility operators to efficiently capture and reuse that waste heat. This answers the problem of waste heat by turning it from an ecological nightmare to a valuable resource.
Believe it or not, this concept is already being put to work. For example, the EcoDataCenter in Sweden reuses waste heat to make wood pellets. You can recycle heat for any process that requires it as an input. This offers a win-win-win for data centers. Liquid cooling systems prevent heat-damaged servers, the businesses profit from recycling the heat, all while saving the environment!
With liquid immersion cooling, like GRC’s ICEraQ, you transfer the heat from the servers through the rack’s liquid into the data center’s coolant. Then you can use the heat for whatever you like. The heat given off by the servers remains in liquid form the whole time, making it feasible to repurpose that heat with minimal loss – unlike air that’s difficult to control.
Liquid immersion cooling recycles virtually all of the server heat, versus air cooling’s 30%. Moreover, liquid recycling works at 45 degrees Celsius, nearly double the temperature of air cooling.
New computer processors have comparable power use density to a nuclear generator. Experts see the reuse of waste heat in urban data centers as a growing trend in response to this challenge. For example, you can reuse the “waste heat” to warm the water for tens of thousands of buildings. Again, redesigning waste to become an invaluable service to us in real, everyday life.
Recycling Waste Heat Contributes to Sustainability
As with other forms of recycling, the recycling of waste heat improves an organization's sustainability. Many large European cities now even require server heat reuse before granting a permit. In Sweden, data center operators can resell waste heat for hundreds of thousands of dollars to heat water for the capital city. This sustainability initiative can supply a tenth of the city's heating needs from data centers!
There are ample creative and environmentally friendly applications of recycled heat. Think of all the processes that rely on heat – nearly everything! Plants and animals need heat for their metabolism; factories, warehouses, offices, and houses need heat to sustain their processes. Heat is a basic resource essential to society.
Recycling waste heat through liquid immersion cooling essentially prevents uncontrolled resource loss, by mobilizing the heat into a sustainable tool for civilization.
Liquid cooling also offers numerous other advantages over air cooling. It has a lower total cost of ownership, can handle far denser server capacity, protects hardware against debris and corrosion, and operates more quietly. Furthermore, liquid immersion is a more reliable system that you can install anywhere, from hyperscale data centers to shipping containers.
Recycle Data Center Heat With GRC
While data centers inevitably produce heat, that doesn’t have to be waste heat. Instead of seeing it as a problem, innovative organizations are approaching heat as a resource we didn’t even realize we had. To take advantage of this resource, they’re upgrading from air cooling to liquid immersion cooling.
Liquid immersion cooling dissipates the heat from the servers more effectively than air cooling systems. This saves the servers from damage while transmitting the heat into a reusable liquid – rather than irresponsibly polluting the atmosphere. There are countless creative uses of recycled heat, from cultivating fish to heating the water and air for human use. Our future is as bright as you can imagine it.
As with all worthwhile things, the best time to start was yesterday. The next best time to deploy GRC’s liquid immersion cooling solution is now! Achieve the best of both worlds; recycle data center heat with our forward-thinking technology today. Do us all a favor by contributing to the green data revolution. Ready to take your data center to the next level? Get in touch with us! | <urn:uuid:792a2c35-a436-479e-992d-2396c6200b2e> | CC-MAIN-2022-40 | https://www.grcooling.com/blog/can-data-centers-benefit-from-recycling-waste-heat/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00767.warc.gz | en | 0.916634 | 1,238 | 3.546875 | 4 |
Wireshark Display IP Subnet Filter
When asked for advice on how to be a proficient protocol analyst, I give two pieces of advice:
Practice looking for patterns. In most cases, you are looking for patterns, or a break in the pattern. Don't worry about memorizing the RFCs or learning about every protocol. It is easier to focus on whatever protocol you are working on at the time.
Learn your display filters in whatever your protocol analyzer you use. The correct display filter will make the patterns jump out at you.
I caution analysts about going capture filter crazy. Unless you know exactly what you are capturing, I typically try to leave the capture filter as ‘open’ as possible. My concern when troubleshooting is that due to the very nature of the unknowns when troubleshooting, you may inadvertently filter out valuable packets.
A great example: you may decide to use a capture filter for a web server IP address when capturing from the client. In this scenario, you would miss any packets from the router or other devices along the way if they send the client an ICMP error packet, or any packets the client exchanges with other servers.
In this example, I show you that the ip.addr display filter can be used for a subnet. You are probably familiar with this filter when filtering on a single device. What do you do if you need to filter on more than one host? The typical approach is to combine the ip.addr filter with an or. For example, ip.addr == 192.168.1.1 or ip.addr == 192.168.1.2 is one way to display the traffic of two hosts.
Now let’s take this a step further. What would you do if you wanted to capture from all addresses on a server farm or client subnet? I’ll make this a touch more realistic and add that you don’t know the all the IP addresses on the other subnet. This is where the subnet/mask option comes in.
You can simply use that format with the ip.addr == or ip.addr eq display filter. If I wanted to display the IP addresses from 192.168.1.1 to 192.168.1.254, my filter would be ip.addr == 192.168.1.0/24 or ip.addr eq 192.168.1.0/24. The mask does not need to match your local subnet mask, since it is used to define the range. If you wanted to display all the packets from 192.168.1.1 to 192.168.1.14, the display filter would be ip.addr == 192.168.1.0/28.
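For reference, here are a few subnet filter variations that should work in recent Wireshark versions (the addresses are just examples):

ip.addr == 192.168.1.0/24 displays packets to or from the subnet.
ip.src == 192.168.1.0/24 displays only packets sourced from the subnet.
!(ip.addr == 192.168.1.0/24) excludes the subnet entirely.
ip.addr == 10.0.0.0/8 or ip.addr == 192.168.1.0/28 displays two ranges at once.

Note that the exclusion uses !( ) rather than ip.addr !=. Every packet has two IP addresses, and ip.addr != matches whenever either one of them differs, which is almost always.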
The nano command in Linux is not just a command; it is a text editor. The nano text editor is used to create and edit files and is included in most Linux distributions.
It has a very simple interface, which makes it a great choice for Linux beginners. If you are not a Linux pro, this tutorial will be very useful.
I will cover the nano text editor and the nano command in Linux with appropriate images.
Today, I am using CentOS 8, so I will show demonstration images from it.
Problem: nano command not found
Suppose you are new to Linux and have read somewhere about the nano command. But when you try to run the nano command, you get the error "-bash: nano: command not found".
You get a headache and start pulling your hair. But calm down, here is the solution.
If you are running any Linux version or derivative such as Fedora, RHEL, Ubuntu, Arch, etc. and you get the error "nano command not found", it means the nano text editor is not installed on your Linux machine. So the first thing you must do is install the nano text editor on Linux.
How to Install Nano Text Editor on Linux
As I have told you already, nano is included by default in most Linux distributions. However, if nano didn't come pre-installed on your system, you must know the installation process.
It is very easy and can be completed in two simple steps.
Step 1: Update repository:
Open the terminal and, on Debian-based distributions, update the apt repositories with the command:
sudo apt update
Step 2: Install Nano Text Editor
Then, install nano by running the command for your distribution. On Debian/Ubuntu:

sudo apt install nano

On CentOS/RHEL and other yum-based systems:

yum install nano
With this, you have successfully installed the text editor.
I am using CentOS, which has a pre-installed nano text editor, so I can run nano commands without any extra installation steps.
- bash: nano command not found
- Create a New file by using nano command in Linux
- Use nano command to open an existing file
- Edit files in a nano text editor
- Keyboard shortcuts of Nano command in Linux
Create a New file by using the nano command in Linux
You can read another article on how to create a file in Linux. I have described 5 ways to create a new file including cat command, touch command, etc.
There are several ways to open the nano text editor.
As nano is a command-line editor, your first step is to open the terminal; the easiest way to access it is the Ctrl+Alt+T keyboard shortcut.
You can run the nano command without any argument, and a blank editor will open:

nano
Later on, you can decide to save or discard the file when you exit (Ctrl+X).
When you press Ctrl+X to exit the file, you will see three options at the bottom of the screen.
- Y for Yes
- N for No
- Ctrl+C for cancel
If you press Y to save the file, you will have to give the file a name. Type the name and press Enter.
In this example, the provided name is file.php.
As you hit Enter, the file will be saved in the current working directory. If you want to save it in another location, you will have to specify the path.
Use nano command to open an existing file
You can open an existing file with the nano command followed by the file name. It is pretty simple.
For example, if the file in your current location is named file.php, the command will be as follows:

nano file.php
If you want to open a file in another directory, you must include the path in the command, where the file is located.
I am going to open the file.php file, which exists at the location /home/vijay/Documents/file.php. So the command will be:

nano /home/vijay/Documents/file.php
It is also possible to open a file and directly go to a specific line or column.
nano +line,column file.php
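For instance, the following command (using our example file) opens file.php with the cursor placed on line 10, column 5:

nano +10,5 file.php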
Edit files in a nano text editor
The nano text editor has a simple visual interface that makes it more approachable. I agree this isn't a full graphical interface, but you can interact directly with what you are writing inside the file, and you can see what is already written inside the file.
And you can do everything with the help of the keyboard inside the nano editor, for example: saving the file, searching the content, replacing the content, and many more.
These are keyboard shortcuts.
Keyboard shortcuts of Nano command in Linux
When you open the nano text editor, you will see multiple shortcuts written at the bottom of the screen. You can see them in the image.
These are keyboard shortcuts. You control the keyboard shortcuts with combinations of the Ctrl key, which is represented by a caret (^) followed by a symbol.
For example, press Ctrl+X to exit the nano text editor; it is displayed as ^X at the bottom of the file.
In addition, there are combinations that require the Meta key (usually the Alt button). They are represented by the letter M followed by a symbol.
For example, the shortcut to undo an action in the text is Alt+U (displayed as M-U).
The two bottom lines in the text editor will display some of the most commonly used shortcuts, as seen in the image above.
If you want to see all valid shortcuts for a nano text editor, then press Ctrl+G (displayed as ^G) or F1. This will open Nano’s help text and list all possible keyboard shortcuts.
Now you know how to create a file in Linux using the nano text editor. You have learned the basic text commands and the commands used for creating, editing, and saving files.
You can always refer to the Help text with Ctrl+G (^G) for additional commands. | <urn:uuid:7b2d9d0c-0143-4a1f-a667-4704381abb93> | CC-MAIN-2022-40 | https://www.cyberpratibha.com/nano-text-editor-or-nano-command-in-linux/?noamp=mobile | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00767.warc.gz | en | 0.883573 | 1,318 | 3.234375 | 3 |
Everyone’s upgrading to cloud computing. Cloud resources, of course, are still physical devices located somewhere on planet Earth. So even though your resources are stored in the cloud, you still have to choose where they will live. Selecting your region and availability zone is about the first choice you make when setting up any cloud resource.
What are availability regions and zones?
Availability regions are the geographic locations of the cloud data centers. Different regions offer different service qualities in terms of latency, solutions portfolios, and costs. For the large providers, their availability regions exist across the globe.
The availability zone refers to an isolated data center within a single region. Each availability zone includes multiple data centers, and no single data center is shared between multiple availability zones.
Of the three major providers, Amazon’s regions and zones are the most developed. Microsoft is comparable. Google is the newest to the scene, but it really isn’t that far behind (and certainly isn’t far behind in any way that makes it less of a viable choice). Under most circumstances, each provider covers the same major areas:
- North America
- Southeast Asia
- East Asia
There are only a few exceptions where Azure and AWS cover different locations, such as South Africa, that Google has yet to reach.
Amazon Web Services
AWS operates regions around the globe; see AWS's documentation for the current list.
Azure regions and zones
Azure likewise covers regions across the globe; Microsoft publishes the current list.
Google Cloud Platform
GCP offers 24 regions with 73 total zones. There are three zones per region, with the US-Central1 region having 4 zones.
Comparing availability zones
Each region has availability zones, and each zone has its own data center. Each data center has its own hardware. Regions can be known for being good at some things and bad at others. For example, AWS East is known for having more downtime than AWS West. The reason for this comes down to the hardware at and usage of an availability zone. The data centers in each zone can consist of different hardware.
There are two key features among zones:
- The number of people using it compared to how many people the zone supports
- The available hardware in the zone
Based on these two things, the user can gauge their experience on:
- Zone downtime
- Zone latency
- Resource availability (i.e.; a task needs to run on high-memory SSDs)
As an example, the resources available differ from one zone to another on GCP: compare what zone southamerica-east1-b offers with zone europe-west4-c.
Global vs regional vs zonal resources
The length of the list says nothing, but the resources themselves say a lot more. Some items are standard among all zones, some resources are old and get phased out, some are new, advanced features only available in some zones.
When designing your cloud infrastructure, you will want to define what tasks need to be performed where. By keeping your computation local and performing as few cross-regional operations as possible, you isolate your system from hardware and infrastructure failures.
According to GCP: “Resources that live in a zone, such as virtual machine instances or zonal persistent disks, are referred to as zonal resources. Other resources, like static external IP addresses, are regional. Regional resources can be used by any resources in that region, regardless of zone, while zonal resources can only be used by other resources in the same zone.
“For example, to attach a zonal persistent disk to an instance, both resources must be in the same zone. Similarly, if you want to assign a static IP address to an instance, the instance must be in the same region as the static IP address.”
You can see the different equipment available at each Availability Zone/ Region.
These subsections outline which resources apply at which level:
Global resources:
- Addresses: IP address list
- Images: Container images
- Snapshots: Persistent Disk Snapshots
- Instance Templates: Templates to create a VM
- Cloud Interconnects: On-Premise Network to Cloud Network connection
- Cloud Interconnect Locations: Physical connection point to Cloud Network
- VPC Network: A VPC Network
- Firewalls: Firewalls are on a single VPC and packets reach them from other networks
- Routes: Network Routes
- Global Operations: Operations can be of any type
Regional resources:
- Cloud Interconnect Attachments
- Regional Managed Instance Groups
- Regional Persistent Disks
- Regional Operations
Zonal resources:
- Persistent Disks
- Machine Types
- Zonal Managed Instance Groups
- Per-zone Operations
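If you want to check for yourself what a region or zone offers, the provider command-line tools can list it. For example, with GCP's gcloud CLI (assuming it is installed and authenticated; the region and zone names are examples):

gcloud compute regions list
gcloud compute zones list --filter="region:us-central1"
gcloud compute machine-types list --zones=us-central1-a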
Choosing an availability zone
Before choosing a provider, you will need to look at where your company does business and how it does business. A few questions to consider when selecting a cloud provider are:
- Where does your company do business?
- Can data be stored in a single place for remote offices or must data be shared among offices across regions?
- Will data need to get passed from one zone to another?
- Are data retrieval or computation times important?
Finally, when choosing which zone will be best for you, here are four criteria to consider:
1. Latency and proximity
As a general rule of thumb, opt for the closest zone for lower latency. Check Stack Overflow and other forums to see what others might be saying about any zone's latency being better than another's.
2. Cost
This is a task for the accountants. The differences between providers are in the tenths of a penny, and the resources vary from provider to provider. Meaning, you're not always comparing apples to apples.
For instance, you can compare Google's published prices against AWS's for CPU and storage.
3. Regulatory compliance and security
Each region exists in a different country or jurisdiction, and each has different laws regarding data safety and protection. Some regions may prohibit the transfer of data between regions, which could affect how you design your infrastructure. There can be significant penalties for breaking these data compliance laws.
4. Service level agreements
You’ll want the right parameters for better service. Check the service level agreement (SLA) for each provider.
For example, AWS’ General Service Commitment for Compute on EC2 instances: “AWS will use commercially reasonable efforts to make the Included Services each available for each AWS region with a Monthly Uptime Percentage of at least 99.99%, in each case during any monthly billing cycle (the “Service Commitment”). In the event any of the Included Services do not meet the Service Commitment, you will be eligible to receive a Service Credit as described below.”
The other providers publish their service level agreements on their respective websites.
For more on navigating cloud complexity, browse the BMC Multi-Cloud Blog.
Looks like Watson has a new, younger, but somewhat brawnier, family member.
On June 8 the U.S. Department of Energy’s Oak Ridge National Laboratory unveiled IBM’s Summit supercomputer and immediately billed it as the “world’s most powerful and smartest scientific supercomputer.”
As impressive as it certainly is, IBM and the DoE might get a legitimate argument about that lofty claim from other computers in the Top 500 around the world—namely in Guangzhou, China; Cern, Switzerland; and Japan. The Titan supercomputer in Oak Ridge, Tenn. was listed as No. 4 in the last ranking. We’ll have to see where Summit eventually ranks on the list, and it may well become No. 1.
Secretary of Energy Rick Perry attended the debut in Oak Ridge June 8 to meet with the ORNL team and see firsthand this monumental supercomputer that has 4,600 individual nodes—large enough to fill two tennis courts with racks of servers.
200 Quadrillion Calculations Per Second
With a peak performance of 200,000 trillion calculations per second—that’s an astounding 200 quadrillion calculations, or 200 petaflops—Summit will be eight times more powerful than America’s current top-ranked system, Titan, also housed at ORNL.
For specific scientific applications, Summit will be capable of more than 3 billion-billion mixed-precision calculations per second. Summit, IBM said, will provide unprecedented computing power for research in energy, advanced materials, and artificial intelligence (AI), among other domains. Summit’s power is expected to enable scientific discoveries that were previously impractical or impossible.
“Summit is also optimized for AI in a data-intense world,” IBM Senior Vice President of Portfolio and Research Dr. John F. Kelly III said in a media advisory. “We designed a whole new heterogeneous architecture that integrates the robust data analysis of powerful IBM Power CPUs with the deep-learning capabilities of GPUs. The result is unparalleled performance on critical new applications.”
“This project has always been about pushing the boundaries of innovation and technology to solve what was previously unsolvable. For instance, with this system we can make connections and predictions that will help us advance cancer research, understand genetic factors that contribute to opioid addiction, simulate atomic interactions to develop stronger, more energy efficient materials, and better understand supernovas to explore the origins of the universe.”
Can Analyze 30 Years’ Worth of Data in an Hour
Summit’s computing capacity is so powerful that it has the ability to compute 30 years’ worth of data saved on a desktop computer in just one hour, Kelly said. ORNL researchers have also figured out how to harness the power and intelligence of Summit’s state-of-the-art architecture to successfully run the world’s first exascale scientific calculation, or exaops, as DOE’s fleet of proposed exascale computing systems comes online in the next five years.
From its start 75 years ago, ORNL has a history and culture of solving large and difficult problems with national scope and impact, ORNL Director Thomas Zacharia said.
“ORNL scientists were among the scientific teams that achieved the first gigaflops calculations in 1988, the first teraflops calculations in 1998, the first petaflops calculations in 2008, and now the first exaops calculations in 2018,” Zacharia said. “The pioneering research of ORNL scientists and engineers has played a pivotal role in our nation’s history and continues to shape our future.”
“For the first time, we (IBM) are making the same architecture that powers Summit available in commercial form,” IBM’s Kelly said. “Clients are already using the same hybrid architecture in our business product line with the IBM Power Systems AC922 system, and the family of new IBM POWER9-based servers. The result: business computing that can help every industry advance their products and services, from banking, to healthcare, to retail, to transportation.”
Summit will be open to select projects this year while ORNL and IBM work through the acceptance process for the machine. In 2019, the bulk of access to the IBM system will go to research teams selected through DOE’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. | <urn:uuid:80734779-2711-4644-9740-57ad8e86e242> | CC-MAIN-2022-40 | https://www.eweek.com/pc-hardware/ibm-watson-gets-brawny-younger-brother-in-summit-supercomputer/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00167.warc.gz | en | 0.934033 | 927 | 2.59375 | 3 |
As this blog article explains, latency is the delay for a packet to travel on a network from one point to another. Different factors, like processing, serialization and queuing, drive this latency. With new hardware and software capabilities, you can potentially reduce the impact these elements have on latency. But there is one thing you will never improve: the speed of light!
As Einstein outlined in his theory of special relativity, the speed of light is the maximum speed at which all energy, matter, and information can travel. With modern optical fiber, you can reach around 200,000,000 meters per second, the theoretical maximum (the speed of light in a vacuum) being 299,792,458 meters per second. Not too bad!
Considering a communication between New York and Sydney, the one-way latency is about 80ms. This value assumes a direct link between both cities, which will of course usually not be the case. Packets will traverse multiple hops, each one introducing additional routing, processing, queuing and transmission delays. You'll probably end up with a latency between 100 and 150ms. Still pretty fast, right?
Well, latency stays the performance bottleneck for most websites! Let’s see why.
The TCP/IP protocol stack
As of today, the TCP/IP protocol stack dominates the Internet. IP (Internet Protocol) is what provides the node-to-node routing and addressing, while TCP (Transmission Control Protocol) is what provides the abstraction of a reliable network running over an unreliable channel.
Even if new UDP-based protocols are emerging, like HTTP/3, discussed in one of our future articles, TCP is still in use today for most popular applications: the World Wide Web, email, file transfers, and many others.
One could argue TCP cannot cope with the performance requirements of today's systems. Let's explain why.
The three-way handshake
As stated before, TCP provides an effective abstraction of a reliable network running over an unreliable channel. The basic idea behind this is that TCP guarantees packet delivery. So it cares about retransmission of lost data, in-order delivery, congestion control and avoidance, data integrity, and more.
In order for all of this to work, TCP gives each packet a sequence number. For security reasons, the first packet does not correspond to the sequence number of 1. Each side of a TCP-based conversation (a TCP session) sends a randomly generated ISN (Initial Sequence Number) to the other side, providing the first packet number.
This information exchange occurs in what is called the TCP "three-way handshake":
- Step 1 (SYN): The client wants to establish a connection with the server, so it sends a packet (called a segment at TCP layer) with SYN (Synchronize Sequence Number) signal bit set, which informs the server that it intends to start communicating. This first segment includes the ISN (Initial Sequence Number).
- Step 2 (SYN/ACK): The server responds to the client’s request with the SYN and ACK signal bits set. It provides the client with its own ISN and acknowledges receipt of the client’s first segment (ACK).
- Step 3 (ACK): The client finally acknowledges receipt of the server’s SYN/ACK segment.
At this stage, the TCP session is established.
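You can observe this setup cost directly from the client side. In the minimal sketch below (host and port are illustrative), connect() returns only once the SYN and SYN/ACK have been exchanged, so the elapsed time is roughly one full round trip, before a single byte of application data has been sent:

```python
import socket
import time

host, port = "example.com", 80  # illustrative target

start = time.perf_counter()
sock = socket.create_connection((host, port), timeout=10)
handshake_ms = (time.perf_counter() - start) * 1000
print(f"TCP handshake to {host}: {handshake_ms:.1f} ms")  # no data sent yet!
sock.close()
```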
The impact of TCP on total latency
Establishing a TCP session costs 1.5 round trips before any application data flows. So, taking the example of a communication between New York and Sydney with a one-way latency of 100 to 150ms, the handshake alone introduces a setup delay of roughly 300 to 450ms!
This is without taking secured communications (HTTPS through TLS) into consideration, which introduces additional round trips to negotiate security parameters. This part will be covered in a future article.
How to reduce the impact of latency on performance?
So how can you reduce the impact of latency on performance if you cannot improve the transmission speed?
In fact, you can leverage two factors:
- The distance between the client and the server
- The number of packets to transmit through the network
There are different ways to reduce the distance between the client and the server. First, you can make use of Content Delivery Network (CDN) services to deliver resources closer to the users. Second, caching resources makes data available directly from the user’s device, in which case no data needs to cross the network at all.
In addition to reducing the distance between the client and the server, you can also reduce the number of packets to transmit on a network. One of the best examples is the use of compression techniques.
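A quick illustration of how much compression can shrink a payload, and therefore the number of packets to send (a minimal sketch; ratios on real content will vary):

```python
import gzip

payload = b"<div class='item'>hello world</div>" * 1000  # repetitive HTML-like data
compressed = gzip.compress(payload)
print(len(payload), "->", len(compressed), "bytes")  # dramatic reduction on repetitive content
```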
Nevertheless, the optimization you can achieve has limits, because of how transmission protocols work… The TCP handshake process does require 1.5 round trips. The only solution to avoid this would be to replace TCP with another protocol, which is the trend we’ll certainly see in the future. | <urn:uuid:f6861cb9-e58f-4a9d-b6bc-4c287c1cd179> | CC-MAIN-2022-40 | https://kadiska.com/why-network-latency-drives-digital-performance/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00167.warc.gz | en | 0.911375 | 1,009 | 3.5 | 4 |
Who's Hijacking Internet Routes? Attacks Increase, But There's No Easy Fix in Sight
Information security experts warn that Internet routes are being hijacked to serve malware and spam, and there's little you can do about it, simply because many aspects of the Internet were never designed to be secure.
The Internet hijacking problem relates to Border Gateway Protocol, which is responsible for routing all Internet traffic. In the words of Dan Hubbard, CTO of OpenDNS Security Labs: "BGP distributes routing information and makes sure all routers on the Internet know how to get to a certain IP address."
BGP provides critical Internet infrastructure functionality, because the Internet isn't a single network, but rather a collection of many different networks. Accordingly, BGP routing tables give the different networks a way to hand off data and route it to its intended destination.
That assumes, of course, that no one tampers with BGP routing, in which case they could reroute traffic or disguise malicious activity. "The trouble is it ... all relies on trust between networks, so if someone hijacks an ISP router, you wouldn't know," Alan Woodward, a visiting professor at the department of computing at England's University of Surrey, and cybersecurity adviser to Europol, tells Information Security Media Group. "It's just another example of how people are forgetting that the Internet was never built to be a secure infrastructure, and we need to be mindful of that when relying upon it."
Spam, Malware, Bitcoins
Hijacking router tables could allow an attacker to spoof IP addresses and potentially intercept data being sent to a targeted IP address. Thankfully, Woodward says, that is "not a trivial task," and Internet service providers have some related defenses in place.
But some attacks get through. One four-month campaign, spotted by Dell Secureworks in 2014, involved redirecting traffic from major Internet service providers to fool bitcoin-mining pools into sharing their processing power - which is used to generate bitcoins - with the attacker. Dell estimates that the attacker netted about $84,000 in bitcoins, although it's not clear that such attacks are widespread.
What has been on the increase, however, are incidents in which malware and spam purveyors hijack an organization's autonomous system numbers, or ASNs, which indicate how traffic should move within and between multiple networks, says Doug Madory, director of Internet analysis at Dyn Research, which was formed after Dyn last year acquired global Internet monitoring firm Renesys.
In a blog post, Madory describes six recent examples of bogus routing announcement campaigns, some of which remain under way, and all of which have been launched from Europe or Russia. By using bogus routing, attackers with IP addresses that have been labeled as malicious - for example by the Zeus abuse tracker, which catalogs botnet command-and-control servers - can hijack legitimate IP address space and trick targeted autonomous systems on the Internet into thinking the attack traffic is legitimate.
"These are not isolated incidents," Madory says of the recent attacks that he has documented. "First, these bogus routes are being circulated at a near-constant rate, and many separate entities are engaged in this practice, although with subtle differences in approach. Second, these techniques aren't solely for the relatively benign purpose of sending spam. Some of this host address space is known to circulate malware."
One takeaway, Madory says, is that any information security analysts who review alert logs should know that the IP addresses attached to alerts may have often been spoofed via BGP hijacking. "For example, an attack that appeared to come from a Comcast IP located in New Jersey may have really been from a hijacker located in Eastern Europe, briefly commandeering Comcast IP space," he says.
The security flaws associated with BGP that allow such attacks to occur haven't gone unnoticed. In January, the EU cybersecurity agency ENISA urged all Internet infrastructure providers to configure Border Gateway Protocol to ensure that only legitimate traffic flows over their own networks.
But ENISA's advice glosses over the fact that while BGP can be fixed, it can't be done quickly. "There are efforts to cryptographically sign IP address announcements," Madory says. "However, these techniques aren't foolproof and until they achieve a critical mass of adoption, they won't make much difference."
No Quick Fix
"Why Is It Taking So Long to Secure Internet Routing?" is the title of a recent research paper from Boston University computing science professor Sharon Goldberg, who notes that any fix will require not just a critical mass, but coordinating thousands of different groups. "BGP is a global protocol, running across organizational and national borders," the paper notes. "As such, it lacks a single centralized authority that can mandate the deployment of a security solution; instead, every organization can autonomously decide which routing security solutions it will deploy in its own network." That's one reason why BGP hasn't gotten a security makeover, despite weaknesses in the protocol having been well-known by network-savvy engineers for the past two decades.
Lately, however, BGP abuse has been rising. "It appears to be more systematized now," Dyn's Madory warns. Pending a full fix, he says that service providers might combat these attacks by banding together and temporarily blocking Internet traffic from organizations that repeatedly fail to secure their infrastructure and thus allow BGP attackers to subvert it.
In the meantime, keep an eye on security logs for signs of related attacks. "There's no easy defense, but it is kind of possible [to spot attacks] by monitoring and watching for unexpected changes in routing," Woodward says. | <urn:uuid:b28d54f0-a33f-4c1f-a78b-923d40bdc6dd> | CC-MAIN-2022-40 | https://www.databreachtoday.com/whos-hijacking-internet-routes-a-7874 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00167.warc.gz | en | 0.959233 | 1,153 | 2.65625 | 3 |
Starting from scratch is liberating. It is also imperative for true innovation.
Indeed, instead of incrementally improving on what’s been done before, new entrants in many industries are taking a fresh look at the constraints that hold their customers back – and coming up with innovative ways to eliminate those constraints.
Taking Control of the Space Race
One of the most high profile examples of the start-from-scratch approach is Elon Musk’s SpaceX. In 2012, SpaceX became the first privately funded company contracted by NASA to deliver supplies to astronauts at the International Space Station. NASA has confirmed that SpaceX will take astronauts themselves there by 2017. SpaceX also launches satellites into space for companies the likes of ORBCOMM, Iridium, and Thales Alenia Space.
How did Musk's company, which was founded in 2002, disrupt the space industry, whose players are mostly 100-year-old companies and government institutions? By being efficient and keeping prices low. By independent estimates, SpaceX charges half as much as its main competitor.
| <urn:uuid:35fd032d-2214-45f9-875a-7a0b19c5991b> | CC-MAIN-2022-40 | https://datacenterpost.com/reducing-data-center-cost-time-and-physical-constraints/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00167.warc.gz | en | 0.961546 | 223 | 2.546875 | 3 |
St. Louis, Missouri. Orleans, France. The Hot Gates, Greece.
Throughout history, certain places have been anointed with the term “gateway to the _____.” These locations are famous for being strategically important to either the defense of a kingdom – France, Sparta – or as the beginning of an expansive effort to settle new lands – St. Louis. Gateways are places that lead to somewhere else, usually someplace important. If you control it, you control who or what comes in (or goes out). They provide control over access as well as affording a more efficient point of protection against invasion or attack. In the modern world we see these gateways at airports around the globe when we’re required to traverse “customs” in an international airport or have to pass through the dreaded “security” to enter in the first place.
It is no surprise, then, that a group of devices has arisen in the course of technology’s history that are considered strategically important and are known collectively as “gateways.”
In the case of applications, these devices, which occupy a strategic point in the architecture, provide security, scale, and often interoperability for emerging technologies.
Loosely, we could deem the network firewall “the first” instance of an app-centric gateway in the network. Back in the early days, they were the gatekeepers, after all. Apps could only be accessed from outside the corporate network if the network firewall allowed that access. Today’s gateways, however, are far more application-savvy and address not only the need for security, but scale, access, and interoperability as well.
All are key concerns as we ramp up into the era of IoT. A recent HCL survey conducted by Vanson Bourne found that 93% of respondents were concerned about security, 86% about scale, and 83% about interoperability. None of this should be surprising. The sheer volume of devices needing access to apps to collect data, exchange commands, and monitor things is sure to have an overwhelming impact on any network. The speed with which firms are pressured to get the next “thing” to market has a deleterious impact on security. And interoperability is always challenging in a new market that is reliant as much on communication with other things and apps as it is performing its intended use. Nascent markets tend to diverge quickly and pockets coalesce around a variety of standards until one day, we settle on one or two. But early on there is often a complex and confusing set of choices. Innovators are unwilling to wait for the dust to settle. Innovators win, after all, by getting to market.
The issue then becomes how to address security, scale, and interoperability while the emerging technology is maturing and standards are being hashed out. The answer to that is, generally, gateways.
In the past, gateways have arisen to deal with the same challenges around other technologies and protocols such as XML and SOAP. I’m not the only one who will remember (and perhaps cringe) at the fisticuffs between RPC/ENC and DOC/LIT as the “standard” for web services. And I’m not the only one to recall the more recent battle between JSON and XML. Today, those tug-of-wars are occurring in the IoT arena, where MQTT, CoAP, and AMQP are vying for ascendancy.
Innovation can’t wait, however, and gateways are emerging left and right to deal with challenges arising from new technologies and protocols.
HTTP2 gateways primarily address the challenge of interoperability between HTTP/1.x and HTTP/2. These devices terminate HTTP/2 on the “outside” in order to support mobile and IoT devices that perform with greater alacrity when using the most recent HTTP standard. They enable innovators to provide support for HTTP/2 to consumers and things without the massive disruption required to upgrade app and network infrastructure to support the new standard. Certainly it is hoped (expected) that one day everyone will be running HTTP/2, but in the meantime, HTTP2 gateways provide the interoperability necessary for innovators to move full steam ahead.
The emergence of API gateways is akin to that of a butterfly emerging from a cocoon. It was a caterpillar (SOA gateways) but now it’s a butterfly (an API gateway) instead. Now, that’s not to say that there aren’t brand new entrants to this category – there are – but they do share a great deal of similarities primarily because the foundations of both reside in HTTP. Where SOA gateways were primarily concerned with XML and SOAP, API gateways are focused on JSON and RESTful APIs implemented using HTTP endpoints.
These devices provide security and scale, and in some cases interoperability services. They’re fluent in the language of apps, speaking JSON and HTTP with equal ease, and provide a strong foundation for supporting emerging app models like microservices and serverless that can distribute API calls not just across applications but environments, as well. API gateways also serve to protect scalability by enforcing quotas (rate limiting of calls) as well as controlling access via API key management.
These are not just “load balancers”, though scale through load balancing is certainly a key characteristic of API gateways. These bad boys must be able to go beyond plain old load balancing to enable a consistent consumer-facing experience while simultaneously enabling developers and businesses to take advantage of cloud and serverless. API gateways need to be “smart” if they’re going to secure and scale APIs, which means being able to inspect requests and responses as well as integrate with external authentication mechanisms like OAuth2 and JWT.
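To make the quota-enforcement idea concrete, here is a minimal sketch of per-key rate limiting using a token bucket (the quota numbers and key handling are illustrative assumptions, not any particular product’s behavior):

```python
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: int):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to the time elapsed since the last check.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # the gateway would answer 429 Too Many Requests

buckets = {}  # API key -> TokenBucket

def gateway_allows(api_key: str) -> bool:
    # One bucket per API key: 5 requests/second steady state, bursts up to 20.
    bucket = buckets.setdefault(api_key, TokenBucket(rate=5, capacity=20))
    return bucket.allow()
```

The token bucket is popular for this job because it permits short bursts while capping the sustained rate, which matches how API consumers actually behave.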
IoT gateways are the most nascent of the gateways today, but they are out there and they’re vitally important to the success of IoT initiatives, perhaps more so than that of other gateways to their respective markets. This has to do with the protocols, which are not at all web-friendly protocols. While many consumer gadgets do speak “web”, it’s more and more the case that thing-makers are relying on IoT specific protocols like MQTT and CoAP because they are more efficient and consume less compute on the device. But the apps that receive that data don’t necessarily speak MQTT and even if they do, they can’t scale on their own to meet (hopefully outstanding) demand.
So arises the IoT gateway, which is fluent in the languages of IoT and the web, and can scale and secure access at the same time. These gateways, like API gateways, need to be “smart” enough to translate and route requests and responses as well as detect anomalies or bad behavior to prevent exploitation.
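As a sketch of the translation role such a gateway plays, the snippet below forwards MQTT messages to an HTTP backend using the paho-mqtt client library. The broker address, topic, and backend URL are hypothetical, and a production gateway would add queuing, retries, authentication, and anomaly detection:

```python
import requests
import paho.mqtt.client as mqtt

BACKEND_URL = "https://api.example.com/telemetry"  # hypothetical HTTP endpoint

def on_message(client, userdata, msg):
    # Translate the MQTT message into an HTTP POST the backend understands.
    body = {"topic": msg.topic, "data": msg.payload.decode("utf-8", "replace")}
    try:
        requests.post(BACKEND_URL, json=body, timeout=5)
    except requests.RequestException as exc:
        print(f"forwarding failed: {exc}")  # a real gateway would queue and retry

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.com", 1883)  # hypothetical MQTT broker
client.subscribe("sensors/#")               # bridge everything under sensors/
client.loop_forever()
```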
Architecturally, gateways provide access to networks and applications. Requests are funneled through them, making them a strategic point of control at which access can be controlled, translations provided, and security enforced. They are a key enabler of new technologies as they afford organizations the ability to innovate during the natural transition period that occurs when any new technology or protocol emerges with less disruption and risk to existing business and apps.
While gateways tend to be viewed as architectural constructs they are just as frequently today key enablers of innovation that enable business to harness the power of emerging technologies. | <urn:uuid:4ad56dfe-1e65-4503-9757-a87c0f746e39> | CC-MAIN-2022-40 | https://www.f5.com/ja_jp/company/blog/the-gateways-to-innovation-are-in-the-network | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00167.warc.gz | en | 0.957695 | 1,558 | 2.515625 | 3 |
Agrology, a provider of a predictive agriculture platform, announced a partnership with Google Cloud to assist farmers.
Using Agrology’s Predictive Agriculture Platform, growers are leveraging Google Cloud and TensorFlow to monitor crops and receive predictions on irrigation, extreme weather, soil carbon respiration and sequestration, pest and disease outbreaks, and more.
The two companies kicked off their partnership in June at the 2022 Google Cloud Sustainability Summit. Both companies are now working to develop new technologies to help farmers face a new era of climate threats and embrace sustainability opportunities.
Agrology uses TensorFlow to forecast microclimate conditions on farms and Google Cloud to process the data Agrology gathers in customers’ fields. They use machine learning (ML) and artificial intelligence (AI) models to process data. Agrology uses Google Earth Engine to analyze terrain and determine geographical impacts such as the accumulation of smoke, fog, or carbon dioxide in specific areas. | <urn:uuid:e48caa48-03a1-407a-bf38-ba0dd564238d> | CC-MAIN-2022-40 | https://infotechlead.com/cloud/google-cloud-to-power-agrology-in-monitoring-crops-74379 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00368.warc.gz | en | 0.90231 | 189 | 2.53125 | 3 |
Elliptic Curve Cryptography (ECC) has become the de facto standard for protecting modern communications. ECC is widely used to perform asymmetric cryptography operations, such as to establish shared secrets or for digital signatures. However, insufficient validation of public keys and parameters is still a frequent cause of confusion, leading to serious vulnerabilities, such as leakage of secret keys, signature malleability or interoperability issues.
The purpose of this blog post is to provide an illustrated description of the typical failures related to elliptic curve validation and how to avoid them in a clear and accessible way. Even though a number of standards1,2 mandate these checks, implementations frequently fail to perform them.
While this blog post describes some of the necessary concepts behind elliptic curve arithmetic and cryptographic protocols, it does not cover elliptic curve cryptography in detail, which has already been done extensively. The following blog posts are good resources on the topic: A (Relatively Easy To Understand) Primer on Elliptic Curve Cryptography by Nick Sullivan and Elliptic Curve Cryptography: a gentle introduction by Andrea Corbellini.
In elliptic curve cryptography, public keys are frequently sent between parties, for example to establish shared secrets using Elliptic curve Diffie–Hellman (ECDH). The goal of public key validation is to ensure that keys are legitimate (by providing assurance that there is an existing, associated private key) and to circumvent attacks leading to the leakage of some information about the private key of a legitimate user.
Issues related to public key validation seem to routinely occur in two general areas. First, transmitting public keys over digital communication channels requires converting them to bytes. However, converting these bytes back to an elliptic curve point is a common source of issues, notably due to canonicalization. Second, once public keys have been decoded, some mathematical subtleties of the elliptic curve operations may also lead to different types of attacks. We will discuss these subtleties in the remainder of this blog post.
In general3, vulnerabilities may arise when applications fail to check that:
- The point coordinates are lower than the field modulus.
- The coordinates correspond to a valid curve point.
- The point is not the point at infinity.
- The point is in the correct subgroup.
An Illustrated Guide to Validating ECC Curve Points
Elliptic curves are curves given by an equation of the form y^2 = x^3 + ax + b (called the short Weierstrass form). Elliptic curve cryptography deals with the group of points on such an elliptic curve, namely, the set of pairs (x, y) satisfying the curve equation.
These values, called coordinates (more specifically, affine coordinates), are defined over a field. For use in cryptography, we work with coordinates defined over a finite field. For the purpose of this blog post, we will concentrate our efforts on the field of integers modulo p, with p a prime number (and p > 3), which we call the field modulus. Elements of this field can take any value between 0 and p - 1. In the following figure, the white squares depict valid field elements while grey squares represent the elements that are larger than the field modulus.
Mathematically, a value larger than the field modulus is equivalent to its reduced form (that is, brought into the 0 to p - 1 range, see congruence classes), but in practice these ambiguities may lead to complex issues4.
In ECC, a public key is simply a point on the curve. Since curve points are generally first encoded to byte arrays before being transmitted, the first step when receiving an encoded curve point is to decode it. This is what we identified earlier as the first area of confusion and potential source of vulnerabilities. Specifically, what happens when the integer representations of the coordinates we decoded are larger than the field modulus?
This is the first common source of issues and the reason for our first validation rules:
Check that the point coordinates are lower than the field modulus.
What can go wrong? If the recipient does not enforce that coordinates are lower than the field modulus, some elliptic curve point operations may be computed incorrectly. Additionally, different implementations may have diverging interpretations of the validity of a point, possibly leading to interoperability issues, which can be critical in consensus-driven deployments.
In the figure below, this means that the point coordinates should be rejected if they are not in the white area in the bi-dimensional plane.
Since both coordinates are elements of this finite field, it might seem that an elliptic curve point could theoretically take any value in the white area above. However, not all pairs of values in this plane are valid curve points; remember that they need to satisfy the curve equation in order to be on the curve. We represent the valid curve points in blue in the figure below5. The number of points on the curve is referred to as the curve order.
This is another common source of issues and where our second validation rule arises:
Check that the point coordinates correspond to a valid curve point (i.e. that the coordinates satisfy the curve equation).
What can go wrong? If the recipient of a public key fails to verify that the point is on the curve, an attacker may be able to perform a so-called invalid curve attack6. Some point operations are independent of the value of b in the elliptic curve equation, so a malicious peer may carefully select a different curve (by varying the value of b) on which security is reduced (namely, on which the discrete logarithm problem is easier than on the original curve). By then sending a point on that new curve (and provided the legitimate peer fails to verify that the point coordinates satisfy the curve equation), the attacker may eventually recover the legitimate peer's secret.
In practice, curve points are rarely sent as pairs of coordinates, which we call uncompressed. Indeed, the y-coordinate can be recovered by solving the curve equation for a given x, which is why point compression was developed. Point compression reduces the amount of data to be transmitted by (almost) half, at the cost of a few more operations needed to solve the curve equation. However, solving the equation for a given x may have different outcomes. It can either result in:
- no solutions (in case x^3 + ax + b does not have a square root in the field, i.e. is not a quadratic residue), in which case the point should be rejected; or
- two solutions, y and -y, due to the fact that y^2 = (-y)^2. However, all coordinates must lie in the 0 to p - 1 range, which contains only positive numbers. Since we're working in the field of integers modulo p, that negative y-coordinate is actually equivalent to the field element p - y, which lies in the correct range.
Hence, when compressing a point, an additional byte of data is used to distinguish the correct y-coordinate. Specifically, point encoding (following Section 2.3.3 of SEC 1) works by prepending a byte to the coordinate(s) specifying which encoding rule is used, as follows:
- Compressed point: 0x02 || x if y is even and 0x03 || x if y is odd;
- Uncompressed point: 0x04 || x || y7.
Any other value for the first byte should result in the curve point being rejected. Point compression has a significant benefit in that it ensures that the point is on the curve: in case the equation has no solution, implementations should reject the point.
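A minimal sketch of decoding both forms, assuming a curve over GF(p) with p ≡ 3 (mod 4) (true for, e.g., P-256 and secp256k1), so that a modular square root can be computed as pow(v, (p + 1) // 4, p):

```python
def decode_point(data: bytes, p: int, a: int, b: int):
    if data[0] == 0x04:                        # uncompressed: 0x04 || x || y
        size = (len(data) - 1) // 2
        x = int.from_bytes(data[1:1 + size], "big")
        y = int.from_bytes(data[1 + size:], "big")
    elif data[0] in (0x02, 0x03):              # compressed: prefix gives y's parity
        x = int.from_bytes(data[1:], "big")
        rhs = (x * x * x + a * x + b) % p
        y = pow(rhs, (p + 1) // 4, p)          # candidate square root (p % 4 == 3)
        if (y * y) % p != rhs:
            raise ValueError("x is not on the curve (no square root exists)")
        if y % 2 != data[0] & 1:               # pick the root with the right parity
            y = p - y
    else:
        raise ValueError("unsupported encoding prefix")
    if not (x < p and y < p) or (y * y - (x**3 + a * x + b)) % p != 0:
        raise ValueError("invalid point")      # rules 1 and 2 from above
    return (x, y)
```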
Now, the careful reader may have realized that the set of points in the figure above is incomplete. In order for this set to form a group (in the mathematical sense), and be useful in cryptography, it needs to be supplemented with an additional element, the point at infinity. This point, also called the neutral element or additive identity and often written O, is the element such that for any point P on our elliptic curve, P + O = P. The figure below shows the previous set of points on our arbitrary curve with the addition of the point at infinity, which we (artificially) positioned slightly outside our plane, in the bottom left corner.
Since the point at infinity is not on the curve, it does not have well-defined x and y coordinates like other curve points. As such, its representation had to be constructed artificially8. Standards (such as SEC 1: Elliptic Curve Cryptography) define the encoding of the point at infinity to be a single octet of value zero. Confusingly, implementations sometimes also use other encodings for the point at infinity, such as a number of zero bytes equal to the size of the coordinates.
Implementations sometimes fail to properly distinguish the point at infinity, and this is where our third validation rule comes from:
Check that the point is not the point at infinity.
What can go wrong? Since multiplying the point at infinity by any scalar results in the point at infinity, an adversary may force the result of a key agreement to be zero if the legitimate recipient fails to check that the point received is not the point at infinity. This goes against the principle of contributory behavior, where some protocols require that both parties contribute to the outcome of an operation such as a key exchange. Failure to enforce this check may have additional negative consequences in other protocols.
Recall that the curve order, say n, corresponds to the number of points on the curve. To make matters more complicated, the group of points on an elliptic curve may be further divided into multiple subgroups. Lagrange's theorem tells us that any subgroup of the group of points on the elliptic curve has an order dividing the order of the original group. Namely, the size (i.e. the number of points) of every subgroup divides the total number of points on the curve.
In cryptography, to ensure that the discrete logarithm problem is hard, curves are selected in such a way that they contain one subgroup with a large, prime order, say q, in which all computations are performed. Some curves (such as the NIST curves9, or the curve secp256k1 used in bitcoin10) were carefully designed such that n = q, namely the prime-order group in which we perform operations is the full group of points on the elliptic curve. In contrast, the popular Curve25519 has curve order n = 8q, which means that points on this curve can belong to the large prime-order subgroup of size q, or to a subgroup with a much smaller order, of size 2, 4 or 8, for example11. The value h such that n = h * q is called the cofactor; it can be thought of as the ratio between the total number of points on the curve and the size of the prime-order subgroup in which cryptographic operations are performed.
To illustrate this notion, consider the figure below in which we have further subdivided our fictitious set of elliptic curve points into two groups. When performing operations on elliptic curve points, we want to stick with operations on points in the larger, prime-order subgroup, identified by the blue points below.
And this is where our last validation rule comes from:
Check that the point is in the correct subgroup.
This can be achieved by checking that q * P = O, where P is the received point, q the prime subgroup order and O the point at infinity. Indeed, a consequence of Lagrange's theorem is that any group element multiplied by the order of that group is equal to the neutral element. If P were in the small subgroup, multiplying it by q would not equal O. This highlights another possible method for ruling out the small subgroup: one could also check that h * P (with h the cofactor) is not the point at infinity. Contrary to the previous validation rules, this check is considerably more expensive since it requires a point multiplication, and as such is sometimes (detrimentally) skipped for efficiency purposes.
What can go wrong? A malicious party sending a point in the orange subgroup, for example as part of an ECDH key agreement protocol, would result in the honest party performing operations limited to that small subgroup. Thus, if the recipient of a public key failed to check that the point was in the correct subgroup, the attacker could perform a so-called small subgroup attack (also known as subgroup confinement attacks) and learn information about the legitimate party’s private key12.
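Putting the four rules together, here is a minimal, illustrative Python sketch of public key validation for a short Weierstrass curve (textbook affine arithmetic, not constant-time, not production code; p, a, b are the curve parameters and q the prime subgroup order):

```python
def on_curve(x, y, p, a, b):
    return (y * y - (x * x * x + a * x + b)) % p == 0

def point_add(P, Q, p, a):
    # Textbook affine addition; None encodes the point at infinity.
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                      # P + (-P) = point at infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow((x2 - x1) % p, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mult(k, P, p, a):
    # Double-and-add; fine for illustration, but not constant-time.
    R = None
    while k:
        if k & 1:
            R = point_add(R, P, p, a)
        P = point_add(P, P, p, a)
        k >>= 1
    return R

def validate_public_key(P, p, a, b, q):
    if P is None:
        return False  # rule 3: reject the point at infinity
    x, y = P
    if not (0 <= x < p and 0 <= y < p):
        return False  # rule 1: coordinates must be below the field modulus
    if not on_curve(x, y, p, a, b):
        return False  # rule 2: coordinates must satisfy the curve equation
    if scalar_mult(q, P, p, a) is not None:
        return False  # rule 4: q*P must equal the point at infinity
    return True
```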
Does that apply to all curves?
While the presentation above is fairly generic and applies in a general sense to all curves, some curves and associated constructions were created to prevent some of these issues by design.
NIST curves (e.g. P-256) and the Bitcoin curve (secp256k1)
These curves have a cofactor value of 1 (namely, h = 1). As such, there is only one large subgroup of prime order and all curve points belong to that group. Hence, once the first three steps in our validation procedure have been performed, the last step is superfluous.
Curve25519, proposed by Daniel J. Bernstein and specified in RFC 7748, is a popular curve which is notably used in TLS 1.3 for key agreement.
Although Curve25519 has a cofactor of 8, some functions using this curve were designed to prevent cofactor-related issues. For example, the X25519 function used to perform key agreement over Curve25519 mandates specific checks and performs key agreement using only x-coordinates, such that invalid curve attacks are avoided. Additionally, the governing RFC states in Section 5 that
Implementations MUST accept non-canonical values and process them as if they had been reduced modulo the field prime. The non-canonical values are 2^255 – 19 through 2^255 – 1 for X25519.
This seems to address most issues discussed in this post. However, there has been some debate13 over the claimed optional nature of these checks.
With the popularity of Curve25519 and the desire for cryptographers to design more exotic protocols with it, the cofactor value of 8 resurfaced as a potential source of problems. Ristretto was designed as a solution to the cofactor pitfalls. Ristretto is an abstraction layer, on top of Curve25519, which essentially restricts curve points to a prime-order subgroup.
Double-Odd Elliptic Curves
Finally, a strong contender in the secure-by-design curve category is the Double-Odd family of elliptic curves, recently proposed by Thomas Pornin. These curves specify a strict and economical encoding, preventing issues with canonicalization and, even though their cofactor is not trivial, a prime order group is defined on them, similar in spirit to Ristretto’s approach, preventing subgroup confinement attacks.
With the ubiquitous use of elliptic curve cryptography, failure to validate elliptic curve points can be a critical issue which is sadly still commonly uncovered during cryptography reviews. While standards and academic publications provide ample directions on how to correctly validate curve points, implementations still frequently fail to follow these steps. For example, a vulnerability nicknamed Curveball was reported in January 2020, which allowed attackers to perform spoofing attacks in Microsoft Windows by crafting malicious public key parameters. Recently, we also uncovered a critical vulnerability in a number of open-source ECDSA libraries, in which the verification function failed to check that the signature was non-zero, allowing attackers to forge signatures on arbitrary messages; see the technical advisory Arbitrary Signature Forgery in Stark Bank ECDSA Libraries.
This illustrated guide will hopefully serve as an accessible reference on why and how point validation should be performed.
The author would like to thank Eric Schorn and Giacomo Pope for their detailed review and helpful feedback.
- Standards for efficient cryptography, SEC 1: Elliptic Curve Cryptography, Section 3.2.2.1 Elliptic Curve Public Key Validation Primitive.
- NIST Special Publication 800-56A, Section 5.6.2.3.2 FFC Partial Public-Key Validation Routine and Section 5.6.2.3.3 ECC Full Public-Key Validation Routine.
- That is, unless using an elliptic curve that was designed specifically to address these potential issues, we will come back to that at the end of this blog post.
- Specifically, implementations may handle values that are larger than the field modulus in different ways. They may reject non-reduced values (i.e., non-canonical encodings), accept non-reduced values and reduce them modulo the prime order, or accept non-reduced values and discard the most significant bit(s). An interesting example happened with the cryptocurrency Zcash, where different implementations had distinct interpretations regarding the validity of curve points. Some details can be found in a blog post by Henry de Valence, as well as in a public report following a cryptography review performed by NCC Group.
- Note that this figure does not represent an actual elliptic curve; it is just an arbitrary diagram designed for illustrative purposes.
- Ingrid Biehl, Bernd Meyer and Volker Müller. “Differential Fault Attacks on Elliptic Curve Cryptosystems”. In: Advances in Cryptology – CRYPTO 2000, 20th Annual International Cryptology Conference, Santa Barbara, California, USA, August 20-24, 2000, Proceedings. 2000, pp. 131–146.
- Note that there is a hybrid form starting in 0x06, defined in ANSI X9.62, but this format is very rarely used in practice.
- Note that some curves and alternate point representations (for instance, when working in projective coordinates) may allow the point at infinity to have a well-defined representation.
- Standardized in FIPS PUB 186-4: Digital Signature Standard (DSS).
- Standardized in SEC 2: Recommended Elliptic Curve Domain Parameters.
- The paper Taming the many EdDSAs provides some very interesting discussions around ambiguities in the Ed25519 signature verification equations (which is based on Curve25519). These ambiguities led to different interpretations of the validity of signatures, which resulted in implementations returning different validity results for some signatures, which could be critical in consensus-driven applications.
- Chae Hoon Lim and Pil Joong Lee. “A Key Recovery Attack on Discrete Log-based Schemes Using a Prime Order Subgroup”. In: Advances in Cryptology – CRYPTO ’97, 17th Annual International Cryptology Conference, Santa Barbara, California, USA, August 17-21, 1997, Proceedings. 1997, pp. 249–263.
- See https://moderncrypto.org/mail-archive/curves/2017/000896.html. | <urn:uuid:348e1bb4-c6f8-41ab-9a93-78c762b07771> | CC-MAIN-2022-40 | https://research.nccgroup.com/2021/11/18/an-illustrated-guide-to-elliptic-curve-cryptography-validation/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00368.warc.gz | en | 0.928791 | 3,839 | 2.9375 | 3 |
By Erik Fossum Færevaag
For over a decade we have witnessed a proliferation of internet-connected devices. Nowadays, the number of internet-connected devices is estimated to be around 25 billion. Most of these are mainstream consumer devices that enable human communications and facilitate human-machine interactions. There is also a growing number of devices that are deployed in industrial environments to enable the collection of digital data about operational performance such as efficiency and quality.
By analyzing this data, industrial enterprises derive insights on how to automate and optimize their business processes. In this direction, devices, and smart objects with actuation capabilities such as robots, drones, smart sensors, and automated guided vehicles can be used to reduce human errors, increase automation, and improve the quality and reduce the cost of industrial operations. Currently, industrial enterprises exploit only a small fraction of these data. However, this is gradually changing as the internet of things (IoT) is combined with other cutting-edge digital technologies such as cloud computing, machine learning, and artificial intelligence (AI). Additionally, a prerequisite for IoT at scale is simplicity and the low cost of the technologies used.
Rise of the Industrial Internet of Things
The IoT computing paradigm is sector agnostic. As such IoT is already deployed in many different sectors of the economy such as supply chain management, transport, healthcare, and industry. According to recent market research, the lion’s share of IoT’s business potential lies in industrial applications, in sectors like manufacturing, energy, oil and gas, and smart buildings. This has given rise to the term Industrial Internet of Things (IIoT), which is the main technology behind the fourth industrial revolution (Industry 4.0).
IoT’s disruptive potential will be primarily realized in business areas that can directly benefit from remote observations and related data analytics. The latter enables enterprises to achieve unprecedented levels of automation, along with ambitious business improvement targets. Large industries that integrate IIoT in their digital transformation agendas will benefit the most from IoT based on projects with a considerable return on investment. On the other hand, Small Medium Enterprises (SMEs) are provided with IoT-based innovation opportunities based on novel use cases such as the use of their assets as a service. In the medium term, IoT’s benefits will be diffused to the consumer space as well, as part of applications like smart homes and smart living. In the scope of these applications, consumers will be provided with instant information about leakages, defects in fridges and white appliances, goods’ delivery status, as well as the condition of their car, boat, and cottage.
IoT in Smart Cities and Facilities Management
Prominent IoT applications examples can be found in the areas of smart cities and smart buildings. Specifically:
In smart cities IoT enables automated and remote management of infrastructures and processes: For instance, it provides insight into the management of critical infrastructures like smart grid and smart water networks. As another example, it delivers efficient and scalable urban security services based on information from video camera surveillance systems. Moreover, it facilitates the delivery of intelligent and sustainable transport services based on the management of information from bike rental stations, taxis, public transport, and intelligent transportation systems. In the scope of the above-listed use cases, data monitoring is performed remotely and on a 24×7 basis. Likewise, based on the use of machine learning and AI techniques, smart city services can predict and anticipate potential incidents (e.g., service disruptions, transport congestion) towards confronting them proactively. In these ways, IoT empowers sustainable cities, which can withstand the pressures of rapid urbanization and changing demographics.
Smart buildings and facilities management will also benefit significantly from IoT: The deployment of sensors and IoT devices enables the collection of data about the status of the assets, the spaces, and the physical conditions of a building. Leveraging these data, IoT applications extract predictive insights on how to optimize space allocation, asset maintenance, as well as the operation of systems like HVAC (Heating, Ventilation, and Air Conditioning). These optimizations lead to significant cost savings, increased comfort for the tenants, as well as considerable sustainability benefits. Following the COVID19 pandemic outbreak, facility managers are increasingly deploying IoT applications to optimize space management and other resources in the light of the dynamically changing occupancy patterns imposed due to COVID19 restrictions.
In all the above applications, IoT is a catalyst for improving environmental performance and helping cities and communities to achieve ambitious green targets.
Security and Privacy Concerns
Security, privacy, and data protection concerns are among the main setbacks to the accelerated deployment and wider use of the IoT paradigm. In this direction, proper regulation such as the General Data Protection Regulation (GDPR) in Europe can provide a foundation for protecting the interests of data owners and end-users. Nevertheless, there is always a need for balancing regulation with innovation.
There are many use cases where IoT enables the collection of sensitive data at scale without notice, which creates privacy and data protection risks. Hence, privacy needs to be considered seriously by all businesses to avoid failures and regulatory penalties. Along with privacy concerns, enterprises must also deal with the ever-important security issues. Risks associated with IoT devices must be considered as part of a holistic approach that addresses both cyber and physical security risks.
IoT adoption is accelerating at a quick rate. The proliferation of IoT devices, the rapid evolution of related technologies, and the validation of successful business models foster an increased number of IoT deployments. The latter boosts business competitiveness and sustainability while improving the citizens’ quality of life. Overall, IoT is here to stay and make our world a better place.
About the author
Erik Fossum Færevaag is the Founder and President of Disruptive Technologies. Erik has a strong background in the semiconductor industry, architecting the world’s lowest power microcontroller at Energy Micro (now Silicon Labs) and the world’s fastest-growing industrial, scientific and medical (ISM) band radio integrated circuits (ICs). In 2013, he founded Disruptive Technologies and started the journey to recruit the best people in the industry. | <urn:uuid:1f9e9d0c-6e7e-46ad-acb7-ecf7e927a06e> | CC-MAIN-2022-40 | https://bdtechtalks.com/2021/03/08/internet-of-things-outlook/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00368.warc.gz | en | 0.928978 | 1,265 | 2.625 | 3 |
Canadian researchers developing new fingerprint scanning process
A research team at the University of Windsor in Ontario, Canada is currently developing a new technology it says could potentially enhance the process of fingerprint scanning for police, military and border services, according to a report by CBC.ca.
The researchers have developed a new biometric system at the Institute for Diagnostic Imaging Research in Windsor that can scan a finger into a 3-D image 2 mm from the skin’s surface.
“What we do here is unlike current, existing fingerprint devices in the market, which are optical,” said Aryaz Baradarani, a member of the research team. “We are talking about something so advanced… it’s reconstructing fingerprint patterns from the surface of the skin. We are going to do this process from the internal layers.”
The biometric system can still read those fingerprint patterns that have been damaged through work, an accident or on purpose, said Baradarani. “For example, criminals are simply able to manipulate the surface of the skin; for that reason we are actually planning to offer the fingerprint pattern not from the surface of the skin, but from internal layers,” said Baradarani, pointing out that this is a function that existing optical fingerprint machines are incapable of performing.
“The technology has great potential for use when it comes to security personnel at a nuclear power plant, the borders or military,” added Baradarani.
According to another research team member, Fedar Seviaryn, the group first began developing ultrasonic images of the skin for medical purposes before soon realizing the technology’s potential in fingerprinting application.
“The main difference from existing systems is that it can take an image of some structures inside the skin, and that makes it a much more… solid, much more secure device,” said Seviaryn. “I believe that this technology will work very well in high-security areas. Of course it’s a little more expensive than the existing one, but it provides a much higher level of security.”
The team has already built a second prototype of the machine that can already be used for basic purposes, but it will continue to develop a final version of the system, said Seviaryn.
The researchers have already demonstrated the biometric system for the FBI at a defence biometric conference in Washington, D.C., and will soon present it to the Canadian government.
If the federal government approves of the project, the technology could be implemented at borders or airports to randomly check travelers, said Baradarani. | <urn:uuid:c5929d39-6255-4e2e-8fe7-0bfb40f366ef> | CC-MAIN-2022-40 | https://www.biometricupdate.com/201409/canadian-researchers-developing-new-fingerprint-scanning-process?replytocom=97217 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00368.warc.gz | en | 0.941923 | 534 | 2.625 | 3 |
A new study led by researchers from University College London (UCL) has found that certain people who test negative for COVID-19 despite being exposed to the SARS-CoV-2 coronavirus might actually have an "immune memory" due to pre-existing polymerase-specific T-cells.
These individuals can clear the virus rapidly due to a strong immune response from existing T-cells, hence remaining seronegative on COVID-19 tests.
There have been cases of individuals who, despite their entire household catching COVID-19, have never tested positive for the disease.
The study team from UCL has now found an explanation, showing that a proportion of individuals experience "abortive infection", in which the virus enters the body but is cleared by the immune system's T-cells at the earliest stage, meaning that PCR and antibody tests record a negative result.
Hence, these individuals with potential exposure to SARS-CoV-2 do not necessarily develop PCR or antibody positivity, suggesting some may clear sub-clinical infection before seroconversion.
Interestingly, about 15% of healthcare workers who were tracked during the first wave of the pandemic in London, England, appeared to fit this scenario.
According to the study team, “T-cells can contribute to the rapid clearance of SARS-CoV-2 and other coronavirus infections.”
The study team hypothesized that pre-existing memory T-cell responses, with cross-protective potential against SARS-CoV-2 [4–11], would expand in vivo to support rapid viral control, aborting infection.
The team measured SARS-CoV-2-reactive T-cells, including those against the early-transcribed replication-transcription complex (RTC), in intensively monitored healthcare workers (HCW) who remained repeatedly negative by PCR, antibody binding, and neutralization (seronegative HCW, SN-HCW).
Interestingly, the seronegative healthcare workers (SN-HCW) had stronger, more multispecific memory T-cells than an unexposed pre-pandemic cohort, and their responses were more frequently directed against the RTC than the structural-protein-dominated responses seen after detectable infection (matched concurrent cohort).
Also, the SN-HCW with the strongest RTC-specific T-cells had an increase in IFI27, a robust early innate signature of SARS-CoV-2 [14], suggesting abortive infection.
RNA-polymerase within RTC was the largest region of high sequence conservation across human seasonal coronaviruses (HCoV) and SARS-CoV-2 clades. RNA-polymerase was preferentially targeted (amongst regions tested) by T-cells from pre-pandemic cohorts and SN-HCW. RTC epitope-specific T-cells cross-recognizing HCoV variants were identified in SN-HCW.
Enriched pre-existing RNA-polymerase-specific T-cells expanded in vivo to preferentially accumulate in the memory response after putative abortive compared to overt SARS-CoV-2 infection.
The study findings highlight RTC-specific T-cells as targets for vaccines against endemic and emerging Coronaviridae.
The study findings were published in the peer-reviewed journal Nature.
We provide T-cell and innate transcript evidence for abortive, seronegative SARS-CoV-2 infection. Longitudinal samples from SN-HCW and an additional cohort showed RTC (particularly polymerase)-specific T-cells were enriched before exposure, expanded in vivo, and preferentially accumulated in those in whom SARS-CoV-2 failed to establish infection, compared to those with overt infection.
The differential biasing of T-cells towards early expressed non-structural SARS-CoV-2 proteins in HCW not seroconverting may reflect repetitive occupational exposure to very low viral inocula, reported to drive the induction of non-structural T-cells in HIV, SIV and HBV [26,37,38]. Such repetitive exposure would be congruent with the observed protracted induction of the innate signal IFI27 and the development of de novo T-cells in some SN-HCW.
However, we also documented expansion of pre-existing T-cells, with responses capable of cross-recognising epitope variants between seasonal HCoV and SARS-CoV-2. Cross-reactive SARS-CoV-2-specific CD8+ T-cells directed against epitopes highly conserved among HCoV are now well-described, with pre-existing T-cells frequently targeting essential viral proteins with low scope for tolerating mutational variation, such as those in ORF1ab [6,18,32].
The abundant SARS-CoV-2-specific CD4+ T-cells may also contribute to protection in SN-HCW by antibody-independent mechanisms, such as the production of antiviral cytokines and chemokines. HCW have higher frequencies of HCoV-reactive T-cells than the general public19, and recent HCoV infection is associated with reduced risk of severe COVID-19 infection39, likely partly attributable to cross-reactive neutralising antibodies [40,41]; however, pre-existing T-cells have also been implicated15,42. The early induction of T-cells, before detectable antibodies in mild infection30 and concurrent with mRNA vaccination efficacy, supports a role for pre-existing cross-reactive memory T-cells2,31.
Pre-existing RTC-specific T-cells, at higher frequency than naïve T-cells and poised for immediate re-activation on antigen cross-recognition, would be expected to favour early control, explaining their enrichment after abortive compared to classical infection.
However, the relative contribution of viral inoculum and cross-reactive T-cells needs to be further dissected in human challenge experiments and animal models. A caveat of this work is that we only analysed peripheral immunity; it is plausible that mucosal-sequestered antibodies43 played a role in our seronegative cohort. It also remains possible that innate immunity mediates control in abortive infections, with RTC-biased T-cell responses being generated as a biomarker of low-grade infection. Interferon-independent induction of RIG-I has been proposed to abort SARS-CoV-2 infection by restraining the viral lifecycle prior to sgRNA production13; this would favour the presentation of epitopes from ORF1ab, released into the cytoplasm in the first stage of the viral life cycle12, whilst blocking production of structural proteins from pgRNA. This raises the possibility that some SARS-CoV-2 infected cells could be recognised and removed by ORF1ab-reactive T-cells without widespread production of structural proteins and mature virion formation.
We have described induction of innate and cellular immunity without seroconversion, highlighting a subset of individuals where risk of SARS-CoV-2 reinfection and immunogenicity of vaccines should be specifically assessed. The HCW we studied were exposed to Wuhan Hu-1 and had partial protection from PPE; it remains to be seen whether abortive infections can occur upon exposure to more infectious variants of concern, or in the presence of vaccine-induced immunity. However, clearance without seroconversion points to T-cells which may be particularly effective vaccine targets.
Cross-protection between coronaviruses is proportional to their sequence homology in mice44, making the highly conserved NSP12 region studied here, as well as less studied NSP3/14/16, top candidates for heterologous immunity. Our data highlight the presence of pre-existing T-cells in a proportion of donors that are able to expand in vivo and target a highly conserved region of SARS-CoV-2 and other Coronaviridae. Boosting of such T-cells may offer durable pan-Coronaviridae reactivity against endemic and emerging viruses, arguing for their inclusion and assessment, as an adjunct to spike-specific antibodies, in next-generation vaccines. | <urn:uuid:239a3f9b-d74c-4e2a-998a-a55e5a83b9c3> | CC-MAIN-2022-40 | https://debuglies.com/2021/11/11/certain-people-who-test-negative-for-covid-19-despite-being-exposed-to-the-sars-cov-2-coronavirus-might-be-having-an-immune-memory-due-to-preexisting-polymerase-specific-t-cells/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00568.warc.gz | en | 0.939878 | 1,744 | 2.515625 | 3 |
The National Oceanic and Atmospheric Administration anticipates obtaining data from various government satellites and commercial sensors as it plans for its future satellite observing architecture, SpaceNews reported Monday. Karen St. Germain, director of NOAA’s office of systems architecture and advanced planning for satellite and information service, said the Joint Polar Satellite System in low-Earth orbit could feed data to the agency in the mid-2030s.
St. Germain noted that NOAA can look for approaches to leverage new acquisition measures and commercial technology platforms to build up data derived from JPSS, and that the agency’s near-term focus is to have imagers in geostationary orbit by 2030. She said the agency started collecting data from “a half dozen” satellites operated by foreign partners in the last year, including India’s Scatterometer Satellite and Japan’s Himawari 8 weather satellite.
NOAA’s Satellite Observing System Architecture study underscored the value of imagery collection in Tundra orbits to enhance high-latitude regional observations.
“Particularly in the high latitudes, we believe we’re going to be seeing more drilling, more fishing, more tourism, more shipping,” St. Germain said. “That’s going to mean we need more situational awareness when it comes to the risks associated with weather and environmental phenomenon. That’s a capability we’re looking at in the future architecture.” | <urn:uuid:0af3794e-b468-4365-8d2d-9a84072d693f> | CC-MAIN-2022-40 | https://executivegov.com/2019/05/noaa-plans-for-future-satellite-observing-architecture/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00568.warc.gz | en | 0.917728 | 292 | 2.578125 | 3 |
Cyber criminals are increasingly turning to automation to scale up their malicious operations. According to a recent study, malicious automated bots are costing US businesses an estimated 3.6% of their annual revenue. For a quarter of the businesses surveyed, that equates to about $250 million per year.
You need to constantly be on your toes to keep up with the rapidly changing landscape of malicious actors who are using automation in their attack chains to reach a greater number of potential victims. It is becoming ever more apparent in this digital age that businesses need to look at efficient and effective tools to protect their networks and data.
Automation benefits the cybersecurity industry as much as it does cyber criminals, when implemented correctly. It can be a proactive and powerful barrier against ever-increasing, sophisticated cyber threats targeting valuable data.
What is automation?
Automation refers to software and systems created to replace repetitive processes and simple tasks and reduce manual intervention. The goal is to minimize human input and streamline activities and functions.
Within cybersecurity, automation is a tool that can be used to accurately predict behaviors and execute actions to protect against malicious threats. If implemented and used correctly, automation can help prevent cyberattacks from breaching networks and stealing sensitive information.
How are malicious actors utilizing automation?
Malicious actors are automating cyberattack processes for the same reason the cybersecurity industry does: it’s quicker and more efficient. Automation deploys tools to the target and gets data and sensitive information back automatically.
Active cyberattack campaigns now generally use some level of automation. It is an incredibly effective tool for conducting malicious activity, and as a result, operations have scaled up.
Tools that malicious actors use automation for
Malicious automation comes in many different forms. Cyber criminals can easily build tools or online bots that learn the flow of an application or browser, in much the same way the cybersecurity industry builds tools to monitor those same applications for suspicious patterns or behaviors.
Data breaches are one of the most common results of malicious automation. Rather than selling the entire contents of a database, actors use automation tools to pick out the most valuable information – like email addresses and passwords – to sell or ransom.
Fighting fire with fire
To successfully defend your business’s networks against automated malicious cyberattacks, you should look at incorporating automation into your cybersecurity.
Potential cybersecurity threats can be identified with automation, which significantly frees up time. Automated data collection and processing is rapidly gaining momentum in the cybersecurity industry, thanks to its use in protecting against data breaches and cyberattacks.
Constant monitoring and ongoing maintenance of networks will level the playing field by reducing threats and enabling faster protection. It also helps security teams respond to threat alerts faster, further negating the damage cyberattackers can cause.
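As a concrete illustration of the kind of repetitive monitoring task automation takes over, here is a minimal, hypothetical Python sketch that counts failed SSH logins per source address and raises an alert. The log path, message format and threshold are assumptions you would adapt to your own environment.

```python
import re
from collections import Counter

# Assumed log location and OpenSSH-style message format; adjust as needed.
FAILED_LOGIN = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")
ALERT_THRESHOLD = 5  # illustrative value, not a recommendation

def scan_auth_log(path="/var/log/auth.log"):
    """Count failed logins per source IP and flag likely brute-force sources."""
    attempts = Counter()
    with open(path) as log:
        for line in log:
            match = FAILED_LOGIN.search(line)
            if match:
                attempts[match.group(1)] += 1
    return {ip: n for ip, n in attempts.items() if n >= ALERT_THRESHOLD}

if __name__ == "__main__":
    for ip, count in scan_auth_log().items():
        print(f"ALERT: {count} failed logins from {ip}")
```

A script like this, run on a schedule, replaces hours of manual log review and surfaces only the events that need a human decision.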
When used by cybersecurity experts – like managed service providers or security vendors – automation also provides real-time 24/7 monitoring and analysis. This constant vigilance is a strong tool for the prevention against cyberattacks.
IT cybersecurity experts generally utilize automation tools within IT infrastructure and networks for tasks such as round-the-clock monitoring, log analysis and incident response.
Automation is a powerful tool in the war against cyberattackers when implemented and utilized by security experts. If your organization is struggling to keep its network endpoints secure or dangerous emails out of your system, talk to the cybersecurity specialists at Merit Technologies about how their advanced security monitoring and incident response can help you prevent a data breach.
For the octopus and cuttlefish, instantaneously changing their skin color and pattern to disappear into the environment is just part of their camouflage prowess.
These animals can also swiftly and reversibly morph their skin into a textured, 3D surface, giving the animal a ragged outline that mimics seaweed, coral, or other objects it detects and uses for camouflage.
This week, engineers at Cornell University report on their invention of stretchable surfaces with programmable 3D texture morphing, a synthetic “camouflaging skin” inspired by studying and modeling the real thing in octopus and cuttlefish.
The engineers, along with collaborator and cephalopod biologist Roger Hanlon of the Marine Biological Laboratory (MBL), Woods Hole, report on their controllable soft actuator in the October 13 issue of Science.
Led by James Pikul and Robert Shepherd, the team’s pneumatically-activated material takes a cue from the 3D bumps, or papillae, that cephalopods can express in one-fifth of a second for dynamic camouflage, and then retract to swim away without the papillae imposing hydrodynamic drag.
“Lots of animals have papillae, but they can’t extend and retract them instantaneously as octopus and cuttlefish do,” says Hanlon, who is the leading expert on cephalopod dynamic camouflage.
“These are soft-bodied molluscs without a shell; their primary defense is their morphing skin.”
Papillae are examples of a muscular hydrostat, biological structures that consist of muscle with no skeletal support (such as the human tongue).
Hanlon and members of his laboratory, including Justine Allen, now at Brown University, were the first to describe the structure, function, and biomechanics of these morphing 3D papillae in detail.
“The degrees of freedom in the papillae system are really beautiful,” Hanlon says.
“In the European cuttlefish, there are at least nine sets of papillae that are independently controlled by the brain.
And each papilla goes from a flat, 2D surface through a continuum of shapes until it reaches its final shape, which can be conical or like trilobes or one of a dozen possible shapes.
It depends on how the muscles in the hydrostat are arranged.”
The engineers’ breakthrough was to develop synthetic tissue groupings that allow programmable, 2D stretchable materials to both extend and retract a range of target 3D shapes.
“Engineers have developed a lot of sophisticated ways to control the shape of soft, stretchable materials, but we wanted to do it in a simple way that was fast, strong, and easy to control,” says lead author James Pikul, currently an assistant professor in the Department of Mechanical Engineering and Applied Mechanics at the University of Pennsylvania.
“We were drawn by how successful cephalopods are at changing their skin texture, so we studied and drew inspiration from the muscles that allow cephalopods to control their texture, and implemented these ideas into a method for controlling the shape of soft, stretchable materials.”
“This is a classic example of bio-inspired engineering” with a range of potential applications, Hanlon says.
For example, the material could be controllably morphed to reflect light in its 2D spaces and absorb light in its 3D shapes.
“That would have applications in any situation where you want to manipulate the temperature of a material,” he says.
Octopus and cuttlefish only express papillae for camouflage purposes, Hanlon says, and not for locomotion, sexual signaling, or aggression.
“For fast swimming, the animal would benefit from smooth skin.
For sexual signaling, it wouldn’t want to look like a big old wart; it wants to look attractive, like a cool-looking mate.
Or if it wanted to conduct a fight, the papillae would not be a good visual to put into the fight.
Signaling, by definition, has to be highly conspicuous, unambiguous signals. The papillae would only make it the opposite!”
- J. H. Pikul, S. Li, H. Bai, R. T. Hanlon, I. Cohen, R. F. Shepherd. Stretchable surfaces with programmable 3D texture morphing for synthetic camouflaging skins. Science, 2017; 358 (6360): 210. DOI: 10.1126/science.aan5627
Brazil may be about to join the rush to standardise smartphone chargers. The National Telecommunications Agency (Anatel), the Brazilian telecom regulator, has proposed making USB-C chargers mandatory for all smartphones sold in the country.
Anatel has released an open public consultation on assessing the compliance of the wired charging interface in mobile phones with the USB Type-C standard.
This isn’t a new idea, of course. The European Parliament has advanced proposals for harmonising the charging interface and there have been movements in a similar direction in the US.
There are obvious reasons for this, apart from end users spending less time sorting through multiple chargers.
Firstly, USB Type-C is widely used by most global manufacturers and has internationally recognised standards.
Also, at a time when waste and expenditure are major concerns, addressing unnecessary consumer costs, mitigating e-waste and making it easier to decide on electronic devices are likely to be popular with the general public.
According to the IANS news service, in 2019, humans generated 53.6 million metric tons of e-waste. Only 17 per cent of this waste was recycled.
However, while the European Commission’s announcement of the adoption of a USB-C port as a single charger by 2024 applied to a variety of devices, including smartphones, tablets, portable speakers, and e-readers, the Brazilian proposal seems, for the moment, only to apply to smartphones.
California, much like the U.S. Federal Government, is looking to streamline its data center operations and cut energy use. How? One way the state is accomplishing it, well the latter at least, is by installing RFID-equipped temperature control systems.
According to this report in RFID Journal, California’s Department of General Services is installing Federspiel Controls‘ DASH system, which uses RFID tags and sensors from Dust Networks, in 12 of the state’s data centers. The move comes after the Franchise Tax Board’s implementation helped the agency slash energy consumption in its 10,000 square foot data center from 59 kilowatts to a mere 15 kilowatts and save it an estimated $42,700 per year. The system allowed operators to fine-tune cooling by analyzing data generated by temperature sensors installed in server racks and attached to Dust Networks’ RFID tags, which, in turn, transmit data to DASH. Instead of blasting the entire facility with super-cold air, the DASH system can take measures like automatically lowering fan speeds to reduce cooling output. Conversely, if things get toasty, the system will automatically increase cooling and alert staff to potential problems.
If government agencies, typically the most bureaucratic and slowest to adopt game-changing tech, can get behind sensor-based temperature control platforms, there’s little excuse for corporations to keep their data centers meat locker cold.
The FBI is worried by the rapid diffusion of Internet of Things devices; according to law enforcement, smart objects could represent a serious threat to cyber security and, more generally, to society.
Security experts are aware that the Internet of Things could be abused by cyber crooks for criminal activities, and they urge principal vendors to adopt security by design in order to produce smart objects resilient to cyber attacks.
The FBI’s public service announcement, published on September 10, highlights that the Internet of Things poses opportunities for cyber crime, as explained in the following statement from the Bureau.
“As more businesses and homeowners use web-connected devices to enhance company efficiency or lifestyle conveniences, their connection to the Internet also increases the target space for malicious cyber actors. Similar to other computing devices, like computers or Smartphones, IoT devices also pose security risks to consumers. The FBI is warning companies and the general public to be aware of IoT vulnerabilities cybercriminals could exploit, and offers some tips on mitigating those cyber threats.” states the announcement.
The announcement has raised a heated discussion on the responsibility for the exploitation of such devices; it seems that the FBI places the responsibility for the security of these devices on the consumer.
“Consumers should be aware of the capabilities of the devices and appliances installed in their homes and businesses. If a device comes with a default password or an open Wi-Fi connection, consumers should change the password and only allow it operate on a home network with a secured Wi-Fi router” states the announcement.
Recently we have discussed several flaws that could be exploited by attackers to conduct illegal activities. Crooks are able to exploit home routers, fridges and baby monitors to carry out illegal activities, crimes that could impact millions of users worldwide. I wrote an interesting analysis, titled “How Hackers Violate Privacy and Security of the Smart Home,” to explain how the Internet of Things could be exploited to hack the modern smart home.
Smart objects could be hacked in different ways and for different reasons. They could be affected by vulnerabilities such as the recently discovered “UPnP vulnerabilities,” or they could simply be poorly configured (e.g. adoption of unchanged default passwords).
Every Internet of Things device is insecure if it is not properly deployed and configured; for this reason, the FBI invites end customers to keep smart objects away from the Internet.
“Isolate IoT devices on their own protected networks” states the announcement in the section dedicated to the Consumer Protection and Defense Recommendations.
I personally consider the advisory issued by the FBI very important, but it is probably not so clear when dealing with attribution of responsibility. It is clear that we cannot expect every final customer to become tech savvy, so we must improve security by design, making Internet of Things devices more reliable and resilient to cyber attacks.
Let me close by recalling the Consumer Protection and Defense Recommendations included in the announcement, such as isolating IoT devices on their own protected networks and changing default passwords.
There is no time to waste, Internet of Things devices are already surrounding us, we must improve their security before hackers exploit them.
(Security Affairs – Internet of Things, cybercrime)
Hardware Failover allows a second unit to function in an idle role and act as a backup device. The idle device will assume the active role in the event of failure or loss of connectivity on the active unit.
The two units are connected by a cable for heartbeat communication and should be given the same physical network connectivity so failover is automatic. To achieve this, a hub or switch should be placed on each segment. The diagram below shows different network configurations – one with a single firewall behind an Ecessa pair and the other with redundant firewalls behind an Ecessa pair – and illustrates the need for hubs/switches. Please note that these diagrams are for demonstration purposes only; they reflect the most common configurations but other solutions are possible as long as both units have equal access to the WANs and LANs.
Configuration changes may be replicated from the active to the idle device to ensure both units share the most recent configuration. The idle device monitors the active device’s status, including network connectivity. If the idle device detects it has better access to network resources or if it fails to communicate with the active device entirely it will force hardware failover. During the failover, the idle device will load the configuration, assume the active role, and place the previously active device into idle mode.
Both the active and secondary units will test the heartbeat (connection between the devices) as well as any configured testing IP addresses. If the active device does not respond to a keep-alive query after a specified number of timeouts and the active IP addresses do not appear to be in service, the idle device will trigger the failover and become active.
For versions prior to release 8.4.x, if either unit determines a LAN or Gateway test IP address is dead (reached the number of failed responses) then that unit’s total count of accessible test IP addresses will decrease. If it is determined between the devices that the idle unit has more accessible test IP addresses than the active unit, a failover will occur.
The values for the Detection Interval and Failover After X Timeouts settings are multiplied together to determine failover latency after a failure occurs. If these are configured with values too low it may cause false failovers to occur and setting them too high may result in unnecessary delay.
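As a rough sanity check on those two settings, the arithmetic can be sketched in a few lines of Python; the interval and timeout values below are illustrative only, not recommended defaults.

```python
def failover_latency(detection_interval_s: float, timeouts: int) -> float:
    """Approximate delay before the idle unit takes over after a failure."""
    return detection_interval_s * timeouts

# Probing every 2 seconds and failing over after 5 missed responses gives
# roughly 10 seconds of latency. Smaller values react faster but risk a
# false failover when the heartbeat link is briefly congested.
print(failover_latency(2.0, 5))  # -> 10.0
```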
If the Gateway and LAN testing are not enabled, or the units are using 8.4.x or later firmware, failover will only be triggered in the event of a hardware failure, which is detected by the heartbeat.
Definition of terms
The Primary and Secondary labels are assigned to the Ecessa devices in a hardware failover pair. These labels do not change and are used only to distinguish between the devices. They are not related to the current state of the device or the ability of the device to handle traffic as both are equally capable.
The Active and Idle roles are dynamic and define the current state of the given device. The Active role is used by the device that is currently handling network traffic (regardless if it is the Primary or Secondary). The Idle role refers to the device operating in a hot standby state. The idle device monitors the status of both the active and idle units and will assume the active role if it is determined that it can provide better performance than the currently active device.
In the diagram, it is assumed the “Primary” device is currently in the “Active” state and the “Secondary” device is currently in the “Idle” state.
Two Ecessa devices are connected over a failover link (aka the Keep-Alive Port or “heartbeat”) which allows the pair to communicate device status information as well as replication session statistics. Typically the Keep-Alive port will be the highest numbered port but can be any available port.
The following screenshot shows the Hardware Failover page from a PowerLink running on version 8.4, which does not include the LAN or Gateway testing options.
Select the “Enable Hardware Failover” check box to enable the feature. Each pair will have a Primary and a Secondary unit, and this designation can be changed with the drop-down located at the top-right corner.
The section beneath these settings reflects the current state and status of each device. When Hardware Failover is enabled and the pair successfully communicate over the failover link, this is reflected in the Hardware Failover status.
The next section defines the testing parameters between the units. While it is typically not necessary to alter these settings, testing sensitivity may cause issues such as failovers triggered too quickly or failures not detected soon enough – both situations causing downtime – so it is important to keep these settings within acceptable thresholds.
By default only the active device is accessible for remote management, however Idle LAN or Idle WAN IP addresses can be entered to assign the idle device its own IP address. Additionally, LAN Testing can be enabled to trigger a failover if the idle unit can successfully ping the LAN Test IP address while the active unit cannot.
The access ports and policies are the same between the devices after a successful replication; however, the user account information (username/password) is not replicated between units so each device will need to be configured individually with the desired login credentials.
Finally, the Keep-Alive Port Settings on the Secondary device will mirror the settings on the Primary device.
In this example, the Primary is using Ethernet port 4 with VLAN 3999 enabled. The IP address assigned to the Primary for keep-alive communication is 192.168.254.1, and it expects its peer to use the address 192.168.254.2.
The Secondary unit will need to use Ethernet port 4 with VLAN 3999 enabled. The Local Address on the Secondary unit will have to be 192.168.254.2, with a Remote Address of 192.168.254.1.
Please note: The Keep Alive addresses must be in a subnet that is not already in use for the LAN or WAN.
Failover can be tested manually using the “Force Failover” button on the Hardware Failover page in the web interface or through the text user interface. Failover will succeed only if the heartbeat connection between the units shows an “UP” status for both the active and idle units.
Failover can also be tested by removing power to the active unit to simulate an outage which should also trigger a failover.
What behavior does my bridge have on the idle device?
All bridges are disabled on the idle device. This is to avoid a Layer 2 loop when idle or on failover.
How does Fail-To-Wire (FTW) affect Hardware Failover?
It can also cause a Layer 2 loop - FTW should be disabled when using Hardware Failover.
How is Ethernet Bonding affected by Hardware Failover?
Ethernet Bonding is still enabled on the idle device. This should not cause any problems, since logically a bond is like any other physical port.
Cultural resource management (CRM) is the profession and practice of managing cultural assets, including historic structures and works of art. It encompasses Cultural Heritage Management, which is concerned with the preservation of traditional and historic cultures. It also digs into archaeology’s material culture. Cultural resource management embraces contemporary culture, including progressive and inventive forms of culture, such as urban culture, rather than focusing exclusively on the preservation and presentation of traditional forms of culture.
- Cultural Resource Management (CRM) is a method through which people manage and make fair decisions regarding finite cultural resources.
- CRM (also known as Heritage Management) encompasses a variety of resources, including cultural landscapes, archaeological sites, historical documents, and spiritual locations.
- The procedure must strike a balance between a range of competing interests: safety, environmental preservation, and the transportation and building demands of a developing society, as well as the respect for the protection of the past.
- State authorities, lawmakers, construction engineers, indigenous and local community members, oral historians, archaeologists, city officials, and other interested parties make such judgments.
Cultural resource management (CRM), also known as cultural heritage management or salvage archaeology, is the process of surveying and documenting archaeological sites, prompted by the necessity to study sites prior to their destruction by development or natural catastrophes. Although Section 106 of the 1966 National Historic Preservation Act established the first legal requirements for archaeological investigation and mitigation on federally funded projects, state, tribal, and municipal governments frequently enact legislation requiring developers to survey, record, and possibly excavate or avoid archaeological remains, depending on their significance. CRM projects include surveys along public utility easements and burial relocation. CRM archaeologists sometimes operate under tight timelines, putting them under pressure to forgo the more systematic approach of academic archaeologists.
I think we can all agree when I say that power is one of the most important, if not the most important aspects of a data center when discussing colocation and hyperscale data centers. Today more than ever, topics such as power density, power usage effectiveness (PUE), and renewable energy dominate industry panels and discussions.
In this article, we will take a look at how Alternating Current (AC) and Direct Current (DC) power is used in the modern data center. We will also look at future power trends including the shift to making data centers more efficient using DC power. If that sounds interesting, you should definitely continue reading this article.
What Is the Difference Between AC and DC Power?
Before we get started, what is the difference between AC and DC power? There are many differences but to keep things simple, direct current is linear and alternating current alternates. Sounds straightforward enough. This means that with DC power, the current always flows in the same direction and does not oscillate between positive and negative terminals.
With AC, the current alternates at 60 cycles per second (U.S.) or 50 cycles per second (Europe), and the voltage is easily transformed. This makes AC power easier to transport over long distances. With DC power, transport over long distances is not as easy.
How Do Data Centers Use AC and DC Power?
Do most data centers use AC or DC power, or both? To answer this question, let's first look at how power is used in the data center.
Data centers receive AC power from a utility provider or municipality's electrical grid. Utility power is fed to the Automatic Transfer Switch (ATS) and into the switchgear. The switchgear is configured for critical and non-critical supplies and is also used for switching on and powering transformers. The transformers ensure that the AC power from the grid is the right voltage and current type. This is referred to as stepping the power up or down.
Power from the transformer is transferred to the Main Distribution Board (MDB). MDBs are enclosures that house fuses, circuit breakers, and ground leakage protection units. The purpose of the MDB is to transfer low-voltage electricity and distribute it to various endpoints within a data center. This includes the Uninterruptible Power Supply (UPS) system.
UPS systems have several purposes within the data center. First, they distribute clean electricity by conditioning the AC power to ensure that electrical issues like power surges do not impact IT equipment. Clean power from the UPS is distributed to a number of circuit breakers. Individual circuit breakers are tied to individual power circuits that are delivered to specific colocation racks. Servers, storage devices, network hardware, and other IT equipment plug into rack-mounted power strips that connect to the power circuit.
In addition to power conditioning, UPS systems are primarily used for storing electricity in batteries. These backup batteries closely resemble car batteries. There can be tens or even hundreds of backup batteries within a single UPS depending on the size of the system. The electricity stored in the batteries is DC power. Rectifiers are used to convert incoming AC power into storable DC power.
When a power outage occurs, the UPS system uses the power inverter to convert stored DC power from the batteries to AC power so the data center can remain operational. This includes powering servers and related IT equipment in colocation racks as well as mission-critical systems such as chillers, air conditioning units, fire suppression, lighting, and other systems. The amount of power to systems can be limited or turned off completely to save power.
How long can a UPS system power a data center? The answer is long enough to start up the backup diesel generators and initiate a power transfer from the grid to the generators. We're talking somewhere between 10-20 minutes in most cases. At that point, the ATS is responsible for sensing the utility power failure and transferring the load from the UPS to the backup diesel generators.
Diesel generators supply the data center with its power, delivering energy as alternating current (AC), just like the main power grid. Backup diesel generators serve a vital purpose: keeping the data center operational for a longer period of time. The length of time depends on the amount of onsite diesel fuel storage and on fuel delivery arrangements, referred to as fuel contracts. This can be hours, days or weeks.
Can DC Power Make Data Centers More Efficient?
Much of the industrialized world is based on AC power. However, there is a growing interest in using DC power sources in a variety of commercial applications. DC power is the product of sustainable energy sources such as photovoltaic solar panels, wind energy, fuel cells, and microgrids. That power is converted from DC to AC for use in our homes, commercial buildings and data centers.
Telecom carriers like AT&T, Verizon, CenturyLink, and others have long used DC power in their central offices. In fact, nearly every telco central office houses a 48V DC plant to provide power directly to its telecommunications equipment and UPS systems.
That’s not all. In 2016, Google announced the development and use of a 48V rack solution. Google also announced that it was working with Facebook and others to further the development of a DC power within the Open Compute Project. Could this be a sign of what’s to come? Can data centers increase their power efficiencies by using DC power?
Here's where it gets interesting. With AC power, we know that it is possible to transmit power great distances with very little loss. With DC, it is not as easy, and there can be major losses during transport. However, advances in compact integrated circuits and power electronics have made DC power easy to regulate efficiently and accurately. This is especially true at low voltage (<50 V) and industrial voltages up to 1,000 V.
In addition, energy transport in a data center is confined to the facility itself, which makes transport losses less critical. There is a lot of attention and discussion surrounding low and medium voltage power systems as well as ultra-low-voltage networks. The logic is that if the semiconductor losses in converters are reduced, total system losses decrease when DC is used, and that DC power leads to better utilization of high and medium voltage transformers. This allows for an increase in demand without changing the transformer and can remove sequential AC-to-DC and DC-to-AC power conversions.
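To see why removing sequential conversions matters, here is a small sketch comparing the end-to-end efficiency of a conventional double-conversion AC chain with a simplified DC distribution chain. The per-stage efficiencies are illustrative assumptions only; real figures vary with load and equipment.

```python
from functools import reduce

def chain_efficiency(stages):
    """Multiply per-stage efficiencies to get the end-to-end figure."""
    return reduce(lambda total, eta: total * eta, stages, 1.0)

# Hypothetical per-stage efficiencies for illustration only.
ac_chain = [0.96, 0.95, 0.95]  # AC->DC rectifier, DC->AC inverter, server PSU
dc_chain = [0.96, 0.98]        # single AC->DC rectification, DC->DC regulation

print(f"Double-conversion AC chain: {chain_efficiency(ac_chain):.1%}")  # ~86.6%
print(f"DC distribution chain:      {chain_efficiency(dc_chain):.1%}")  # ~94.1%
```

Even with generous assumptions, every conversion stage removed keeps a few percent of the power out of the waste-heat column, which compounds quickly at data center scale.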
Conclusion: Advancing Data Center Power Efficiencies
Colocation providers, cloud providers, and data center owners and operators will continuously explore different paths to make their facilities more efficient from a power perspective. It is in their best interest to do so as power is one of the largest costs for data centers.
Advancements in energy sources such as renewables will also have a profound impact on the industry as the demand for data centers surges. Could DC power play a greater role in powering the data center? I’ll leave that question up to you to answer.
What happens if someone shuts down the Internet? Is it possible?
Our society heavily depends on technology, and the Internet is the privileged vector of information today. Blocking the Internet could paralyze countless services in almost any industry, from finance to transportation.
In early September, the popular cyber security expert Bruce Schneier published an interesting post titled “Someone Is Learning How to Take Down the Internet,” which reveals an escalation of cyber attacks against service providers and companies responsible for the basic infrastructure of the Internet.
We are referring to coordinated attacks that experts consider a sort of test to evaluate the resilience of the most critical nodes of the global Internet. The attacks experienced by the companies require significant effort and huge resources, a circumstance that suggests the involvement of a persistent attacker like a government, and China is the first suspect.
“Recently, some of the major companies that provide the basic infrastructure that makes the Internet work have seen an increase in DDoS attacks against them. Moreover, they have seen a certain profile of attacks. These attacks are significantly larger than the ones they’re used to seeing. They last longer. They’re more sophisticated. And they look like probing.” wrote Schneier.
“I am unable to give details, because these companies spoke with me under a condition of anonymity. But this all is consistent with what Verisign is reporting. Verisign is the registrar for many popular top-level Internet domains, like .com and .net. If it goes down, there’s a global blackout of all websites and e-mail addresses in the most common top-level domains. Every quarter, Verisign publishes a DDoS trends report. While its publication doesn’t have the level of detail I heard from the companies I spoke with, the trends are the same: “in Q2 2016, attacks continued to become more frequent, persistent, and complex.”
It is clear that attackers aim to cause a global blackout of the most common top-level domains, paralyzing a large portion of the Internet.
Schneier, who has spoken with companies that faced the attacks, pointed out powerful DDoS attacks that stand out from the ordinary for their methodically escalating nature.
The attacks start at a certain power that increases as time goes by, forcing the victims to deploy all their countermeasures to mitigate the threat.
The report mentioned by Schneier, titled “VERISIGN-OBSERVED DDoS ATTACK TRENDS: Q2 2016,” confirms that companies are experiencing a wave of ever more sophisticated DDoS attacks.
“DDoS attacks are a reality for today’s web-reliant organizations. In Q2 2016, DDoS attacks continued to become more frequent, persistent and complex,” states the report, under the heading “DDoS Attacks Become More Sophisticated and Persistent.”
Schneier also reported other types of attacks against the Internet infrastructure, such as numerous attempts to tamper with Internet addresses and routing.
“One company told me about a variety of probing attacks in addition to the DDoS attacks: testing the ability to manipulate Internet addresses and routes, seeing how long it takes the defenders to respond, and so on. Someone is extensively testing the core defensive capabilities of the companies that provide critical Internet services.” continues Schneier.
Who is behind the attacks?
Schneier believes that the attacks are launched by someone with the cyber capabilities of a government, and he seems to exclude the efforts of hacktivists or cyber criminals; I agree.
“It doesn’t seem like something an activist, criminal, or researcher would do. Profiling core infrastructure is common practice in espionage and intelligence gathering. It’s not normal for companies to do that. Furthermore, the size and scale of these probes — and especially their persistence — points to state actors.” explains Schneier.
The attribution of the attacks is very difficult, but data suggests that China is behind them. Let me add that Russia also has similar cyber capabilities and is able to hide its operations online. Both countries are investing heavily in building infrastructures that would be resilient to such mass attacks.
“We don’t know where the attacks come from. The data I see suggests China, an assessment shared by the people I spoke with. On the other hand, it’s possible to disguise the country of origin for these sorts of attacks.”
(Security Affairs – Internet, Hacking)
Provide the best learning experience for your students by unleashing the power of Apple technology in your classroom.
Transforming Your Classroom with Apple
Apple empowers educators and students by design. Whether using Macs, iPads, or Apple TV, Apple devices encourage creativity and can simplify teaching with apps to make the classroom more flexible, collaborative and personalized for each student. To unleash the full potential of the technology and create the best learning environment, you need to understand the tools and resources available, and develop an education-focused, comprehensive plan, from equipment purchase to deployment, management and use in the classroom and beyond.
In our webinar, Transforming Your Classroom with Apple, we’ll explain how to make the best use of Apple devices in your classroom, and the tools and resources you need for success.
- Apple technologies for the classroom, from devices to setup and workflows
- The importance of helping students learn 21st-century skills
- Tips, tricks and cost-saving measures to transform your learning environment
- How to facilitate student-led education
It’s difficult to imagine a modern office without computers of some kind — desktop PCs, notebooks, tablets and smartphones litter workplaces across the country and the world. But in the early 1970s, computers in offices were rare.
In the age before the microprocessor and ubiquitous personal computers, Wang Laboratories, which started off as a maker of electronic calculators, developed and introduced what some would hail as one of the first desktop computers — the Wang 2200. In the 1970s and into the 1980s, the company became a dominant player in the office computer market.
As Bloomberg reported in 2004: “An Wang invented the computer memory core, founded Wang Laboratories, and became known as one of Boston’s greatest philanthropists. In the early 1980s, more than 80% of the 2,000 largest U.S. companies used Wang office equipment, and in 1984 Wang Laboratories’ profits reached $210 million on sales of $2.2 billion.”
WHAT Is the Wang 2200 Computer?
Wang’s first general-purpose computer was the Wang 2200, although, as John A. N. Lee writes in the International Biographical Dictionary of Computer Pioneers, the 2200 was “actually called a ‘computing calculator’ to keep from frightening customers” who might have been turned off by the connotations of the word “computer.” Up until this time, computers were mostly large machines that were complex to operate.
“What made it an interesting machine was that it was clearly what one would now call a personal computer,” according to Jim Battle, writing at Wang2200.org. “Up until that time, programmers dealt with the large and impersonal mainframe computers, or perhaps programmed on terminals connected to mini-computers via serial lines.”
Programmable calculators were difficult to program, but the 2200 ran on the BASIC programming language and “had a capable BASIC interpreter in ROM, meaning it could be turned on and used within seconds,” Battle notes. “It was dedicated to the needs of a single person at any one time. The 64x16 [cathode ray tube] display made editing and running programs interactive and immediate, vs. the then-standard method of studying printouts on greenbar paper.”
As a Computerworld article from November 1972 notes, the 2200 had a hardwired compiler, four kilobytes of initial RAM, a cathode ray tube display unit and a keyboard. According to Computerworld, the internal memory could be expanded in increments of four kilobytes up to 32 kilobytes. Users also had the option of a cassette tape unit to retrieve programs or data.
When users keyed in “end program,” they could see how much memory was left. Additionally, the publication notes that by hardwiring the BASIC compiler into the 2200 using microcoding, users could have all of the memory as work area. The machine’s keyboard also had trigonometric, exponential and mathematical function keys.
And, importantly, it allowed for correction and editing capabilities such as backspace line corrections, program or segment renumbering by block insertion, and single line deletion and/or replacement. Users could take advantage of the 2200’s support for alphanumeric processing of data files as well.
WHEN Was the Wang 2200 Introduced?
Although the Wang 2200 was previewed in 1972, it did not start shipping until the spring of 1973 and cost $6,700. It was heavily advertised. By August 1974, Computerworld reported that the 2200 line “contributed in large part to record earnings and revenues” at Wang Laboratories. By then, the company said it had sold 2,300 units of the 2200 line since June 1973. By March 1977, Wang was trumpeting that 7,000 units had been sold.
The Wang 2200 was widely used in hospitals and laboratories. As Battle notes, the 2200 was expandable and that “eventually nearly 100 different peripherals were developed for the system.”
Wang advertised it as “a calculator with all the facilities of a computer,” noting that users could add storage and peripherals as needed. Wang boasted in one ad that the 2200 could “accept raw data, digital or analogue, from almost any analytical system, and process, interpret and present it in any form you want.” The system could be used to keep patient records and to feed information into a large mainframe computer. And yet, the ad noted, the 2200 was “exceptionally easy to operate” and “no esoteric skills are required.”
WHAT Happened to Wang Computers?
The 2200 was eclipsed by newer models, including Wang’s own popular VS system, which it introduced in 1977. However, the 2200, and Wang Laboratories more broadly, were undone by the turn toward personal computers in the 1980s.
As Bloomberg reported: “Toward the end of [the 1970s], however, Wang made two decisions that would later prove to be the company's undoing: He decided to concentrate on hardware, not software. And the pieces of hardware he chose to concentrate on were word processors and minicomputers (these not-so-aptly named machines were designed to link computer networks), not personal computers.”
Standalone word processors like the Wang 2200 fell out of favor in the face of competition from PCs that combined word-processing software with spreadsheets and other applications.
An Wang’s son, Fred, became the company president in 1986 after the elder Wang stepped down from that post. By then the company’s fortunes were already sinking, and Wang filed for bankruptcy in 1992. However, the 2200 stands out as one of the earliest examples of popular minicomputers and word processors, a machine that led the market in its time, but, like so many others, was eventually supplanted.
"This Old Tech" is an ongoing series about technologies of the past that had an impact. Have an idea for a technology we should feature? Please let us know in the comments! | <urn:uuid:f165538f-e3d1-4f5a-a846-a33213c5ac92> | CC-MAIN-2022-40 | https://biztechmagazine.com/article/2017/04/advent-office-pcs-wang-2200-reigned-computing-dynamo | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00168.warc.gz | en | 0.970185 | 1,313 | 3.25 | 3 |
As a result of my PowerShell series, where I used the handling of certificates as an example (mainly because I wanted an easier way to keep track of which certificates were being added by malware), I have received some questions about how security certificates work and how they stopped our software from working.
First, it helps to take a look at your own certificates. Go ahead and open the Microsoft Certificates Management Console. You can do this by typing certmgr.msc in the search field of your start button. You will have to do this as an administrator of the system.
You should see an overview of your certificates divided up into categories. The most used and usually the most important categories are Trusted Root Certification Authorities and Untrusted Certificates.
What are these certificates?
Root certificates are a method to prove that a communication you are receiving (from a website, by mail, or otherwise) comes from the source it claims to come from. This is done with public key encryption to establish trust between the holders of the public and private keys. But since it would be impossible to store certificates for every site we've ever visited or wish to visit, the system of certificate authorities (CAs) was set up. To establish trust that a certificate is genuine, it is digitally signed by a root certificate belonging to a trusted certificate authority. Operating systems and browsers maintain lists of trusted CA root certificates so they can easily verify that certificates have been issued and signed by them.
You may have seen prompts warning you about a website's security certificate, for example about a mismatch between the certificate and the name of the site.
Such a warning reflects the checks that are made before allowing a free exchange of information:
- Can we trust the source of the certificate?
- Is the certificate still valid? They all have a starting and an expiration date.
- Is the name valid, and does the name on the certificate match the name on the site’s certificate?
- Is the signature strong enough?
Untrusted certificates
As we have seen in the past, certain types of malware place certificates in the Untrusted category, which basically prevents users from downloading and using security software to remove the malware. For example, the Wdfload malware placed the Malwarebytes certificate in the Untrusted category.
That certificate, however, has nothing to do with our website. Instead, it's associated with our software. With the certificate in the Untrusted category, our software is blocked from running.
Even though the CA (DigiCert) did not revoke our certificate, which can still be found under our Trusted Root Certification Authorities, the Malwarebytes certificate was listed as revoked by the malware. We have to remove that certificate from the Untrusted category before we can use the software again.
So there you have it: a brief explanation of how security certificates work and how malware can abuse the certificates system to block you from downloading and/or running your favorite software.
As a security professional, you are tasked with plugging holes in your network every day. You must keep your firewalls patched and your overall system updated, just to keep the bad guys from getting in and data from going out. But one sneaky avenue may be sitting right under your nose – the DNS or Domain Name System.
Using DNS as a transport method to access blocked content has been around for decades. The exploit started back in the days when there was no such thing as free Wi-Fi but there were plenty of smart tech folks who didn’t want to cough up cash when faced with a captive portal. These “ambitious” folks noticed that in many situations they could send an outgoing ping to a server and get a response even without an internet connection. The individuals realized that DNS was still working in the background and that requests to a DNS nameserver and responses from it passed right through the captive portal. If the user had an “authoritative” server for a namespace that they controlled, these individuals mused, then DNS traffic would go there and come back, independent of the captive portal. The next step was to create software that could chop up an outgoing message and embed the payload into the DNS requests. The same software could be used to “decode” the received traffic, and to “encode” the nameserver’s response. This process becomes the basis for the DNS tunnel. The function is similar to what a VPN tunnel does, except that the entire process was controlled by the “ambitious” individual. And, because the process of chopping up the message and limits on the size of a DNS transmission had to be considered, the process was very slow. But…it worked!
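To make those mechanics concrete, here is a deliberately simplified Python sketch of the "chopping" step: the payload is encoded into DNS-safe characters and split into labels under a domain the tunnel operator controls. Real tunneling toolkits add sequencing, sessions, and a decoding nameserver on the other end; the domain below is hypothetical.

```python
import base64

MAX_LABEL = 63  # DNS limits each label to 63 characters

def to_dns_queries(payload: bytes, domain: str):
    """Chop a payload into hostnames that ride out as ordinary DNS lookups."""
    encoded = base64.b32encode(payload).decode().rstrip("=").lower()
    chunks = [encoded[i:i + MAX_LABEL] for i in range(0, len(encoded), MAX_LABEL)]
    return [f"{chunk}.{domain}" for chunk in chunks]

for name in to_dns_queries(b"message for the nameserver", "t.example.com"):
    print(name)  # each lookup carries one chunk to the controlled nameserver
```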
Now that free Wi-Fi is pretty much ubiquitous, you might think that this exploit would have gone the way of the VCR or the dot-matrix printer. Wrong. DNS tunneling has been adopted by a whole new class of attacker – those who now leverage it for infiltration/exfiltration purposes: evading the network firewall to get information into or out of the network.
In the past, when the concept was first conceived, a DNS tunnel wasn’t the easiest thing to setup, and took a fairly savvy individual to get one working. Today, toolkits and instructional videos on how to leverage a DNS tunnel are openly available online. This has made it easier for someone to access unwanted content, steal data or documents, or even plant malware into your organization.
Although DNS tunnels are now easy to setup, they are unfortunately, hard to detect. As stated in the MITRE ATT&CK framework, “The DNS protocol serves an administrative function in computer networking and thus may be very common in environments. DNS traffic may also be allowed even before network authentication is completed. DNS packets contain many fields and headers in which data can be concealed. Often known as DNS tunneling, adversaries may abuse DNS to communicate with systems under their control within a victim network while also mimicking normal, expected traffic.” While DNS is not the only protocol that can be used to form a tunnel, its prevalence in normal network communications may make it easier to lose in the background noise. Adding to that is the fact that DNS queries and responses aren’t completely uniform, so it may not always be easy to spot the ones that don’t conform to “standards.” Nevertheless, the sheer potential for damage makes it worthwhile to take every precaution.
One of the elements that attackers rely upon when utilizing DNS tunneling is that no one will suspect it as an avenue for attack or exfiltration.
So, what can you do to detect potential DNS tunneling on your network? You can start by examining the DNS queries themselves. A good threat feed can help you block DNS queries going to any sites that are known to be bad, malicious, or that are known DNS tunnel endpoints. If you have your own recursive service, you may be able to block outgoing requests at that point. Additionally, you can also examine the overall network traffic, looking for uncommon data flows or new DNS queries from processes or clients that don’t usually communicate on the network. Finally, and this goes without saying, follow up on network-wide malware infections, particularly if a strain is known to construct DNS tunnels. Don’t assume that only one device was affected. By proactively remediating clients that might have been exposed, you can save yourself a world of hurt later.
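One simple, hypothetical starting point for examining queries is a heuristic that flags unusually long or high-entropy names for review; the thresholds below are placeholders you would tune against your own baseline traffic, since plenty of legitimate services (CDNs, DNSBLs) also generate odd-looking names.

```python
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    """Bits of entropy per character; encoded payloads tend to score high."""
    if not label:
        return 0.0
    counts = Counter(label)
    return -sum((n / len(label)) * math.log2(n / len(label))
                for n in counts.values())

def looks_like_tunnel(qname: str, max_len: int = 52,
                      max_entropy: float = 3.8) -> bool:
    """Flag suspiciously long or random-looking query names for review."""
    first_label = qname.split(".", 1)[0]
    return len(qname) > max_len or shannon_entropy(first_label) > max_entropy

print(looks_like_tunnel("www.example.com"))                        # False
print(looks_like_tunnel("mzxw6ytboi5dgnbvgy3tqojq.t.example.com"))  # True
```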
Overview: What is a Digital Certificate?
When users come to your website, they have a way of telling whether your site is safe to connect with or not. It comes in the form of something called a digital certificate. Today, we'll help you understand what a digital certificate is, its key components, the role it plays in Web security, and other concepts associated with it.
What a digital certificate is in a nutshell
A digital certificate primarily acts like an identification card; something like a driver's license, a passport, a company ID, or a school ID. It basically tells other people who you are. So that, for example, when a user arrives at your site looking for yourdomain.com, your site's digital certificate (a.k.a. cert) will help that user confirm whether he actually landed at yourdomain.com.
In addition, a cert also holds a copy of your site's public key, which is used in encrypting data transmitted between your site and the user's web client (in most cases, a web browser).
Not all websites offer digital certificates. In the past, the use of digital certificates were mostly limited to sites with whom users had to engage in secure transactions or share sensitive information. For instance, you normally encountered certs on online banking websites, secure file transfer servers, major e-commerce sites, or EDI servers. But because users are now becoming more conscious about web security, more and more sites are employing digital certificates to gain users' trust.
You won't actually see the entire digital certificate as you connect to a site. However, you'll easily know it's there. Websites protected by certs usually display a lock icon followed by "https" on the leftmost part of that site's URL when viewed on your browser's URL bar. To view the contents of the cert, just click on the lock icon.
Most digital certificates in use today follow what is known as the X.509 standard. X.509 is used in SSL (Secure Sockets Layer) and TLS (Transport Layer Security), so yes, it's what's being used in HTTPS, FTPS, WebDAVS and other secure data transfer protocols. Let's now take a look at the kind of information you'll find in this kind of certificate.
Contents of a X.509 certificate
The contents of a digital certificate typically include the following:
- Information about the subject a.k.a. Subject Name - "subject" refers to the site represented by the cert.
- Information about the certificate issuer/certificate authority (CA) - The CA is the body that issued and signed the certificate. More about this shortly.
- Serial number - this is the serial number assigned by the issuer to this certificate. Each issuer must make sure each certificate it issues has a unique serial number.
- Version - the X.509 version used by a given certificate. These days, you'll usually find version 3.
- Validity period - certs aren't meant to last forever. The validity period defines the period over which the cert can still be deemed trustworthy.
- Signature - This is the digital signature of the entire digital certificate, generated using the certificate issuer's private key.
- Signature algorithm - The cryptographic signature algorithm used to generate the digital signature (e.g., SHA-1 with RSA Encryption).
- Public key information - Information about the subject's public key. This includes:
- the algorithm (e.g. Elliptic Curve Public Key),
- the key size (e.g. 256 bits),
- the key usage (e.g. can encrypt, verify, derive), and
- the public key itself
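If you'd like to inspect these fields yourself without clicking through browser dialogs, here's a minimal sketch using Python's third-party cryptography package. It assumes you've already saved a certificate locally as cert.pem:

```python
from cryptography import x509

# Load a PEM-encoded certificate saved locally (e.g., exported from your browser)
with open("cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print("Subject:        ", cert.subject.rfc4514_string())
print("Issuer (CA):    ", cert.issuer.rfc4514_string())
print("Serial number:  ", cert.serial_number)
print("Version:        ", cert.version)
print("Valid from:     ", cert.not_valid_before)
print("Valid until:    ", cert.not_valid_after)
print("Signature hash: ", cert.signature_hash_algorithm.name)
print("Public key bits:", cert.public_key().key_size)
```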
While most of the contents of a digital certificate are there for providing information regarding the subject, the issuer, or the certificate itself, the certificate key or public key has a special purpose. It's a vital component in the encryption of data exchanged between the server and the client. If you're not familiar with public keys and their role in encryption, I suggest you read about symmetric and asymmetric encryption.
Another element of a digital certificate that does more than provide information is the certificate's digital signature. As mentioned earlier, the certificate's digital signature is generated using the certificate issuer's private key. If you've read the article on digital signatures, you know that a cert's digital signature can be used in authentication. But in order for a web client to verify/authenticate a digital signature, it will need a copy of the issuer's public key.
If the issuer happens to be a widely recognized certificate authority (CA), that won't be a problem. A copy of that CA's public key will likely be pre-installed in the user's web browser. Popular web browsers like Chrome, Firefox, Safari, and Internet Explorer all come with the certificates of recognized CAs. That means they already contain copies of those certificate authorities' public keys and can therefore be used for verifying certificates issued/signed by them.
Certificates signed by widely recognized CAs are called signed certificates. There are also certificates that are simply signed by issuers who aren't widely recognized certificate authorities. For example, when you create your own digital certificate using JSCAPE MFT Server but don't bother processing a Certificate Signing Request (CSR), you will end up with what is known as a self-signed certificate.
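If you want to experiment outside of JSCAPE, one common way to generate a self-signed certificate (plus its private key) is a single OpenSSL command; the domain name below is a placeholder:

```
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
  -days 365 -nodes -subj "/CN=yourdomain.com"
```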
If you want to see how a digital certificate is created, read the article How To Set Up A HTTPS File Transfer, especially the section entitled Preparing Server Keys.
Signed vs Self-signed certificates
In theory, certificate authorities are supposed to exercise due diligence before signing digital certificates submitted to them through CSRs. They first need to verify whether the information placed on the digital certificate is in fact true. This is important because their attestation later serves as the basis on which websites presenting certs signed by them can be trusted.
So, assuming due diligence is really exercised, it would be safe to assume that signed certificates are more reliable and trustworthy than self-signed certificates. In fact, when a user attempts to connect to your site and your site only has a self-signed certificate, the user's browser will display a prominent warning that the certificate can't be verified.
Self-signed certificates are relatively safe to use internally, i.e., within your organization, where you have more control over the servers that operate in the network. So, for instance, you can use it to add security to a web file transfer that takes place behind your corporate firewall.
Let's end this for now.
We'll continue our discussion on digital certificates on our next post, where we'll talk about the process involved when a web client connects with a web server via HTTPS.
JSCAPE MFT Server is a managed file transfer server that allows you to create digital certificates and set up web-based file transfers. Download the free, fully-functional evaluation edition now.
If you like to read more posts like this, subscribe to this blog or connect with us.
The two primary characteristics of a solenoid are the amount of voltage applied to the coil and the amount of current allowed to pass through the coil. Solenoid voltage characteristics include pick-up voltage, seal-in voltage, and drop-out voltage. Solenoid current characteristics include coil inrush current and sealed current.
Magnetic coils are normally constructed of many turns of insulated copper wire wound on a spool. The mechanical life of most coils is extended by encapsulating the coil in an epoxy resin or glass-reinforced alkyd material. See Figure 1. In addition to increasing mechanical strength, these materials greatly increase the moisture resistance of the magnetic coil.
Because magnetic coils are encapsulated and cannot be repaired, they must be replaced when they fail.
Figure 1. The mechanical life of most coils is extended by encapsulating the coil in an epoxy resin or glass-reinforced alkyd material.
Coil Inrush and Sealed Currents
Solenoid coils draw more current when first energized than is required to keep them energized afterward.
In a solenoid coil, the inrush current is approximately 6 to 10 times the sealed current. See Figure 2. After the solenoid has been energized for some time, the coil becomes hot, causing the coil current to fall and stabilize at approximately 80% of its value when cold. The reason for such a high inrush current is that the basic opposition to current flow when a solenoid is energized is only the resistance of the copper coil. Upon energizing, however, the armature begins to move iron into the core of the coil. The large amount of iron in the magnetic circuit increases the magnetic opposition of the coil and decreases the current through the coil. This magnetic opposition is referred to as inductive reactance or total impedance. The heat produced by the coil further reduces current flow because the resistance of copper wire increases when hot, which limits some current flow.
Figure 2. Solenoid inrush current is approximately 6 to 10 times the sealed current.
Coil Inrush and Sealed Current Ratings
Magnetic coil data is normally given in volt amperes (VA). For example, a solenoid with a 120 V coil rated at 600 VA inrush and 60 VA sealed has an inrush current of 5 A (600/120= 5 A) and a sealed current of 0.5 A (60/120 = 0.5 A). The same solenoid with a 480 V coil draws only 1.25 A (600/480= 1.25 A) inrush current and 0.125 A (60/480= 0.125 A) sealed current. The VA rating helps determine the starting and energized current load drawn from the supply line.
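That arithmetic is easy to sanity-check. Here is a quick illustrative Python helper (not from the original text) that converts VA ratings to current draw:

```python
def coil_currents(inrush_va: float, sealed_va: float, volts: float) -> tuple:
    """Return (inrush, sealed) current in amps for a coil's VA ratings."""
    return inrush_va / volts, sealed_va / volts

print(coil_currents(600, 60, 120))  # (5.0, 0.5)    -> 5 A inrush, 0.5 A sealed
print(coil_currents(600, 60, 480))  # (1.25, 0.125) -> 1.25 A inrush, 0.125 A sealed
```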
Solenoids are rated for intermittent or continuous duty. An intermittent-duty solenoid is designed to produce a strong force in a small package but will overheat if current is continuously applied to the coil. A continuous-duty solenoid is designed to handle a continuous current but is larger to help dissipate the heat produced.
Coil Voltage Characteristics
All solenoids develop a magnetic field in their coil when voltage is applied. This magnetic field produces a force on the armature and tries to move it. The applied voltage determines the amount of force produced on the armature.
The voltage applied to a solenoid should be ±10% of the rated solenoid value. A solenoid overheats when the voltage is excessive. The heat destroys the insulation on the coil wire and burns out the solenoid. The solenoid armature may have difficulty moving the load connected to it when the voltage is too low.
Pick-up voltage is the minimum voltage that causes the armature to start to move.
Seal-in voltage is the minimum control voltage required to cause the armature to seal against the pole faces of the magnet.
Drop-out voltage is the voltage that exists when voltage is reduced sufficiently to allow the solenoid to open.
Seal-in voltage can be higher than pick-up voltage because a higher force may be required to seal in the armature than to just move the armature.
Drop-out voltage is lower than pick-up voltage or seal-in voltage because it takes less force to hold the armature in place than to pull it in.
For most solenoids, the minimum pick-up voltage is about 80% to 85% of the solenoid rated voltage.
The seal-in voltage is somewhat higher than the pick-up voltage and should be no less than 90% of the solenoid rated voltage.
Drop-out voltage can be as low as 70% of the solenoid rated voltage.
The exact pick-up, seal- in, and drop-out voltages depend on the load connected to the solenoid armature and the mounting position of the solenoid. The greater the applied armature load, the higher the required voltage values.
Voltage Variation Effects
Voltage variations are one of the most common causes of solenoid failure. Precautions must be taken to select the proper coil for a solenoid. Excessive or low voltage must not be applied to a solenoid coil.
A coil draws more than its rated current if the voltage applied to the coil is too high. Excessive heat is produced, which causes early failure of the coil insulation. The magnetic pull is also too high and causes the armature to slam in with excessive force. This causes the magnetic faces to wear rapidly, reducing the expected life of the solenoid.
Low voltage on the coil produces low coil current and reduced magnetic pull. The solenoid may pick up but does not seal in when the applied voltage is greater than the pick-up voltage but less than the seal-in voltage.
The greater pick-up current (6 to 10 times sealed current) quickly heats up and burns out the coil because it is not designed to carry a high continuous current. The armature also chatters, which creates noise and increases the wear of the magnetic faces.
Solenoid Selection Methods
Solenoids are selected based on the outcome required. It is important to select the correct solenoid to achieve the desired outcome. Solenoid selection methods include push or pull, length of stroke, required force, duty cycle, mounting, and voltage rating.
Push or Pull
A solenoid may push or pull, depending on the application. In the case of a door latch, the unit must pull. In a clamping jig, the unit must push.
Length of Stroke
The length of the stroke is calculated after determining whether the solenoid must push or pull. For example, a door latch typically requires a maximum stroke length of only a fraction of an inch.
Required Force

Manufacturer specification sheets are used to determine the correct solenoid based on the required force. A solenoid is selected from the manufacturer specification sheets based on required solenoid function.
Duty Cycle

Solenoid characteristic tables are also used to check the duty cycle requirements of the application against the duty cycle information given for the solenoid. For example, an A 101 solenoid is required for an application requiring 190 operations per minute.
Mounting

Manufacturers provide letter or number codes to indicate the solenoid mount. See Figure 3. For example, an A solenoid is selected for a door latch application because the door latch application requires an end-mounting solenoid.
Figure 3. Manufacturers provide letter or number codes to indicate the solenoid mount.
Voltage Rating

Manufacturers provide letter or number codes to indicate the voltages that are available for a given solenoid. See Figure 4. For example, a 2 A solenoid may be used for an application that requires a 115 V coil.
Figure 4. Manufacturers provide letter or number codes to indicate the voltages that are available for a given solenoid.
Footing.

As used in a building or other structure construction: a structural unit used to distribute loads to the bearing materials.
Free Electrons / Valence Electrons.
The electrons located in the outer orbit of an atom. Normally associated with metals or other conductive material, free electrons are loosely held within the outer valences (orbits) of the atomic structure of the individual atoms, and consequently, tend to move at random among the atoms of the material.
As used with an electric motor-circuit disconnecting means, the motor-circuit disconnect cannot be located behind locked doors or inside locked cabinets or other enclosures that are sealed closed by special screws, bolts, or other fasteners.
Full-Load Current (FLC) / Full-Load Amps (FLA).
Listed on the nameplate of a given electric motor (NP FLC), the full-load current or full-load amp rating is the current the motor will draw when operated at its rated voltage, rated frequency (AC motors), and rated torque (horsepower rating).
The Use of AI in Cybersecurity
Written by Ronda Payne
The cybersecurity battle is a constant one. As technology continues to advance, the abilities of cyber criminals grow with it. Artificial intelligence (AI) has become more commonplace in this area, and while it is being used in this growing wave of cyber attacks, those in cybersecurity are also making use of it to combat those attackers.
AI is able to learn, somewhat like a human does, through experience. Plus, the relative ease with which AI tools can be created, trained, modified and let loose makes them particularly dangerous. Combine that with the fact that they are constantly working and never take a day off, and there is a much higher likelihood that criminals will be successful in their nefarious aims.
Ways Cyber Criminals are Using AI
There are numerous ways attackers are incorporating AI into their tools. One place is in scanning for vulnerabilities and weaknesses. This could be websites, networks, devices or other electronic assets. Because AI easily spots patterns of behaviour, it will find entry into personal, business, government and other sources of information.
Another way AI is being used is through manipulation. Organizations that use AI trained on captured data can end up having that same AI tool hacked and tricked with small changes that seem insignificant but lead to incorrect operation and vulnerabilities in the future. With this kind of "control" over an organization's AI, hackers can then turn both the tool and the data used to train it toward their own criminal goals.
Additionally, all kinds of communication from social media to emails, robocalls to fake websites can be created with AI. Consider some of the spam emails you’ve received lately. With the help of AI, you may be seeing higher quality phishing emails with very little that gives them away as fraudulent. Plus, AI can be used to publish erroneous content through online channels causing an eruption of misinformation.
AI makes the job of the hacker easier and definitely more dangerous.
Using AI for Good
Fortunately, BOTH sides of the battle can use AI. Individuals who have taken cybersecurity courses like CySA+ certification will learn a few ways to stop, block and prevent hacker access. Those with CompTIA Network+ certification will be pleased to have the help in keeping networks and other organizational assets safe. Here’s how AI is making a difference on the positive side of the fight:
- Those who have earned their Security+ certification, or their Arcitura Certified AI Specialist, will be aware of the benefits of incorporating AI into security systems. Software tools are able to analyze threats and determine solutions. Threat hunting grows better as data is continually supplied to AI tools making for a robust independent solution.
- AI can keep team members up to speed on the latest threats because it is able to comb through vast amounts of information and collate it into insightful current documentation.
- As devices are monitored and managed, those with CompTIA A+ certification will appreciate the assistance AI can provide by checking all of an organization’s systems without human input.
Helping Fill the Gap
Perhaps one of the biggest ways AI is making a difference in cybersecurity is in supporting the industry as it deals with a lack of skilled labour. While people are always needed to operate systems and check the work done by AI and other tools, the various ways that AI supports them help take some of the pressure off the many to-dos on the IT front. With speed and single-tasking, AI can do things that humans can't, while the reverse is also true. But together they accomplish more than either could do on their own.
AI isn’t just harmful online bots and malicious systems created by hackers. There are a lot of benefits to having it on the good side of the cybersecurity world.
The information contained in this post is considered true and accurate as of the publication date. However, the accuracy of this information may be impacted by changes in circumstances that occur after the time of publication. TechnoEdge Learning assumes no liability for any error or omissions in the information contained in this post or any other post in our blog.
Several useful services are now offered through the technologies that make up the Worldwide Web (WWW). The great majority of these Web sites provide access to said services exclusively through HTML interfaces designed to be used by humans through an Internet browser.
NSEQL (Navigation SEQuence Language) is a language designed for programming action sequences on the Internet browser interface: the current version supports Microsoft Internet Explorer (MSIE), and Denodo Browser. NSEQL can be used in a computer program to reproduce any operation sequence a human user may have conducted through the browser.
It is important to highlight that it is not normally necessary to create NSEQL programs manually. The ITPilot graphical generation environment (ITPilot Generation Environment Guide) allows NSEQL programs to be created graphically by simply providing an example of the required sequence through an Internet browser. This manual provides an exhaustive description of the language for advanced users that want to manually create or edit NSEQL programs.
Dr. Frederick Gilbert, President of Lakehead University in Thunder Bay, Ontario, Canada, says he first made the decision to avoid using wireless technology on his campus seven years ago. He was and remains concerned about possible health effects from exposure to even the low levels of RF radiation emitted by Wi-Fi equipment. Data security was a much lesser concern.
The ongoing research on which Gilbert based his decision claims to show health effects from exposure to RF radiation (RFR) ranging from sleep disruption to genetic damage – though effects from Wi-Fi system emissions are probably at the relatively benign end of the range. None of this research, it’s worth noting, is going on at Lakehead.
The president’s decision – and it appears to be his personal decision – came to light recently when the school’s administration issued a bulletin in response to student inquiries about why Lakehead wasn’t implementing a campus-wide Wi-Fi access network like other North American universities. Media in Canada and the U.S. picked up on it, and the radio waves, as it were, hit the fan.
Gilbert does not appear to be a crank. A biologist by training, and President of this small northern Ontario university since 1998, he sounded eminently sensible when we talked on the phone. He was slightly shell-shocked by the negative media attention, though. “We’ve been taking a little static in the media,” is how he put it. “It’s interesting that we have been portrayed as Luddites, yet this campus is one of the most progressive in terms of technology use.”
Lakehead, Gilbert points out, has an extensive fiber network that provides high-speed Internet access almost everywhere. It supplements Ethernet connections with cyber cafes where students can use computers connected to the network. The only thing they can’t do is fire up their laptops at a cafeteria table or outside on the lawn.
It’s not even that Lakehead has an outright ban on wireless. In places where the fiber network doesn’t extend – such as a couple of research facilities on the edge of campus – the school has in fact deployed Wi-Fi nets. And while dorm rooms all have high-speed wired connections, there is nothing stopping students setting up their own Wi-Fi nodes. “What students do within the dorms is up to them,” Gilbert says.
So if he isn’t a Luddite or a crank, why has Gilbert made this seemingly contrarian decision?
According to him, there is a mounting body of scientific evidence to suggest – but not conclusive proof, he is the first to admit – that there are “bioeffects” from even low-level RF radiation. “If you look at the literature that has been published,” he says, “there are demonstrable effects of exposure. Once we get to the point where we can definitively say that there are or are not harmful effects, that’s when we make a decision to deploy, I think.”
The current state of understanding about the health effects of low-level RF radiation (RFR) may be analogous to the understanding of the effects of asbestos exposure or cigarette smoking 25 or 40 years ago, he suggests. So in the meantime, he’d rather play it safe. “The issue I have is that we’re looking here at a technology of convenience [i.e. Wi-Fi] on a campus that is already very technologically advanced,” Gilbert says. “Under the circumstances, I don’t see any reason to take anything other than a precautionary position.”
Gilbert’s interest in the effects of radiation goes back to his undergraduate days when he studied ionizing radiation. RFR is not ionizing radiation, he is quick to point out, but his interest continued. “When I got into the literature on electromagnetic radiation [EMF, of which RFR is one type], there were indications to a biologist that there could be something here, at least to look at as a potential.”
The effects of highly concentrated EMF radiation from long-term, heavy use of cell phones have of course been debated in the scientific community for several years. There is a growing concern, especially in the European community, that heavy users of mobile phones are, indeed, at increased risk of brain cancer – among other health problems.
But these effects are supposedly the result of the thermal energy generated by RFR, part of a continuum of known effects that includes birds sitting on very high-power antennas being fried instantly when transmission begins. Ambient RF radiation – the kind that is in the air all around us, emitted by wireless communications systems, including Wi-Fi – is at much lower levels, generating insignificant amounts of thermal energy.
The research on the effects of ambient RFR is at a much earlier stage. Current U.S. and Canadian health standards allow RFR exposure in the thousands of microwatts, notes environmental consultant Cindy Sage, a principal in Sage EMF Design of Santa Barbara, California. But research in the past five years has begun to show effects from emissions measured in the nanowatts, Sage says. (A microwatt is 10⁻⁶ watt; a nanowatt is 10⁻⁹ watt.)
“Once you get into the nanowatts range, you’re getting into Wi-Fi territory,” she says. “And at least sleep disruption can be an effect of exposure and maybe a constellation of other health issues.”
Gilbert refers to Sage as a key source of information on the subject, although he has not actually used her as a consultant. Sage has consulted with other colleges, universities and school districts on exactly these issues, she says, but is not at liberty to reveal their deliberations or decisions. She implies that other schools have made or are in the process of making similar decisions to Gilbert’s for similar reasons.
Sage describes herself as a synthesizer and interpreter of the scientific evidence. Her firm’s Web site and some of its publications include continually updated bibliographies of scientific studies on the effects of ambient RFR. She was also a respondent to the City of San Francisco’s request for comments on its proposed citywide Wi-Fi network. Her firm’s response was in opposition to the deployment.
Its argument boils down to this. There is some evidence, albeit inconclusive and puzzling to scientists, of bioeffects from low-intensity RFR. We need more research. In the meantime, the correct approach is to use the “precautionary principle” – i.e. avoid an action if the consequences are unknown but judged to have some potential for major or irreversible negative consequences. Exactly the position Gilbert is taking, in other words.
Some of the reasons for not deploying Wi-Fi and WiMax are purely economic and practical, she suggests. If it turns out these technologies are a health hazard, companies and institutions would presumably have to rip out their wireless networks and replace them at considerable expense with something else. There is also the prospect of victims suing network operators. Sage says children are probably most vulnerable.
The list of observed health effects in the research Sage has studied – which we have no way of being able to evaluate, of course – includes memory loss, sleep disorders and insomnia, slowed motor skills and reaction time in schoolchildren, immune system changes, spatial disorientation and dizziness, headaches, loss of concentration and “fuzzy thinking,” lower sperm count, increased blood pressure, DNA damage and more. A scary litany.
What should we think about the position Gilbert and Sage have taken? If it was widely adopted, the Wi-Fi industry would be badly hurt, which can’t be a good thing. But consider history. As Gilbert notes, 40 years ago, almost nobody believed cigarette smoking caused long-term health problems – although scientists were already sounding the alarm.
This article was first published on WiFiPlanet.com.
Recursive routines provide options for business processes that need to repeat but don't have a set number of times to iterate. They can be used for approvals, returns of unknown numbers of records, or many undefined processes.
The goals for the article are:
- Know how to create a recursive routine
- Understand some of the Use Cases
- See how Runs function
Before working with recursive routines, always have a tab available with the option to stop the engine, just in case.
The simplest recursive routine is one that calls itself until a specified condition is met (a value is reached, or no more values are found). Inputs are configured the same way, and you use the same return node.
Here's an example.
This routine takes an input value (1), adds one to it, and if the value is less than 4 it calls itself, adds 1, checks for less than 4, etc.
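Expressed as plain code (a Python sketch for illustration only; the actual routine is built graphically), the logic is:

```python
def routine(value: int) -> int:
    """Add 1 to the input; if the result is still less than 4, call ourselves again."""
    value += 1
    if value < 4:
        return routine(value)  # the recursive call back into the same routine
    return value               # the return node: hands the final value back

print(routine(1))  # -> 4
```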
While the above is an additive example, it's the most basic one. A more real world example is an API call that returns a set number of records, and if there are more records it returns a page token to get the next set.
In the case below, the routine fetches records until there are no more left to return.
The Get Child Records node has a limit of 250 records. The connector leading to Get Next Records checks to see if any records were returned. If there were records (meaning it hasn't gotten to the end yet), it recursively calls itself and keeps processing.
The Add Relationship Records and Summarize nodes add the new records to the existing records and pass them along to the next recursive call, or out to the original call.
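The same pattern in code form is the classic recursive paginated fetch. This Python sketch assumes a hypothetical fetch_page(token) helper that returns up to 250 records along with a token for the next page (or None when the end is reached):

```python
def get_all_records(fetch_page, token=None, collected=None):
    """Recursively fetch pages until the API stops returning a next-page token."""
    collected = collected if collected is not None else []
    records, next_token = fetch_page(token)   # hypothetical API call, max 250 records
    collected.extend(records)                 # the "add to existing records" step
    if next_token:                            # more records remain: call ourselves
        return get_all_records(fetch_page, next_token, collected)
    return collected                          # pass everything back to the original caller
```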
This is a very simple example that combines both the additive and Process examples.
The Create Calendar Entry Data node creates records in a form as each record is processed.
You can also see that the routine has a number of records to create, and each time the routine processes it adds one for a comparison check (on the connector from the Start node).
One interesting way that Request uses Recursive Routines is in our Never Drop Never Fail process. See the following article for an overview of the philosophy - Never Drop Never Fail
The routine is included in your Kinops instance - Handler Failure Error Process
As long as the result of the task is not "Stop processing", the system will attempt to continue the tree. If it fails again, another task is created. This recursive situation depends on human intervention to stop processing.
In the next article, we'll look at Basic Ruby Syntax.
Create a recursive routine that operates a certain number of times before stopping. It should perform a simple action, like displaying a value or looking up a value.
A Rootkit is usually a set of software tools that exploits a device to gain root level permissions, which is the highest level permission in a given computer system.
The term rootkit joins root and kit. It is a working toolbox of malicious software designed to attain illegitimate root permissions on a target's machine or network.
Rootkits vary in their type and severity. User Mode Rootkits are superficial in relation to their location in the core operating system, targeting only software applications. Kernel Mode Rootkits are dangerous and run deeper, attacking the core of the host machine's OS. Bootloader Rootkits affect the Master Boot Record (MBR) and/or the Volume Boot Record (VBR) of the system, although these are becoming rarer as Windows 8 and 10 machines offer a Secure Boot option.
Rootkits are designed to conceal themselves to avoid detection. They can give attackers full control of a compromised computer, and are notable for carrying on undiscovered until they deliver remote access to, and control of, the target device or system.
"Through a rare technical partnership with the OEM, our MDM gives us tremendous visibility into the devices' health, including if it's been compromised by malware or a rootkit. Once we see a device has been rooted, we disconnect it from the network and tell have the employee bring it into our INFRA team." | <urn:uuid:1a850d0e-f812-4c34-b27c-6f300fac0ad0> | CC-MAIN-2022-40 | https://www.hypr.com/security-encyclopedia/rootkit | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00368.warc.gz | en | 0.931166 | 285 | 3.015625 | 3 |
In an article posted today by Mohit Kumar of the Hacker News, a proof of concept was reported that proves the technology exists for malware to actually communicate between non-connected systems via inaudible signals sent from one system and received by the other.
The PoC, written by German Researchers Michael Hanspach and Michael Goetz, was originally published in the Journal of Communications last month.
This concept has most recently been brought into the light due to the controversy behind "BadBIOS", a highly advanced and dangerous malware, currently only encountered by one highly regarded researcher.
In the paper, the researchers describe using a system's built-in sound card and microphone to transmit information from one end (the client or application installed on the target system) to the other (the server), if both systems are infected and are within 60 ft. of each other.
The purpose of this type of research was to determine if systems that are completely severed from the internet or any other public facing systems (i.e. airgapped) are susceptible to attacks.
Researchers measured the transmission speed to be roughly 20 bits per second (bps), which averages out to two keyed-in characters transmitted every second. The scenario postulated by the researchers involved a keylogger sending data back to a remote attacker; at 20 bps, the entirety of this blog post would take 20 minutes to send.
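As a rough back-of-the-envelope check (assuming about 10 bits per transmitted character, which matches the 20 bps and 2 characters-per-second figures), the math looks like this in Python:

```python
def transfer_minutes(num_chars: int, bits_per_char: int = 10, bps: float = 20.0) -> float:
    """Minutes needed to push a payload through the acoustic covert channel."""
    return (num_chars * bits_per_char) / bps / 60

# A ~2,400-character blog post at 20 bps:
print(transfer_minutes(2400))  # -> 20.0 minutes
```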
However, that doesn't take into consideration data reliability. With all types of potential interference factors, like cell phones, televisions and other electronic emissions, I think that expecting a clear and reliable feed of data outside of a lab is unlikely.
My theory is that this technology could be used to provide targeted malware a means of external communication for contact with a command and control server. The infected system would receive commands from the server and assuming that the initial infection on the covert system was via USB drive, perhaps the malware could store stolen data on the USB.
That data would be sent out later, once the USB is plugged into an outward-facing system. This is similar to how Flame worked when extracting sensitive data from closed-off networks.
This scenario is entirely based on having previous intelligence gathered, therefore it would be highly targeted and every command sent from the control server would need to follow a previously created plan. In other words, it's unlikely that the average user will ever need to worry about this type of threat.
None the less, it's pretty interesting.
Thanks for reading and safe surfing! DFTBA!
Follow Adam Kujawa and all of his zany opinions on Twitter @Kujman5000
The Food and Drug Administration (FDA) has announced the first portion of its "Transparency Initiative," an effort to make clear issues such as basic operations, the decision-making process and the process by which drugs are judged and approved.
The FDAâs effort coincides with a larger goal of overall transparency for federal government operations set by the Obama administration.
"This initiative will make information about the FDA more user-friendly and accessible to the public," said FDA Commissioner Dr. Margaret Hamburg. "It fosters a better understanding about what we do."
As part of the first phase of the initiative, the FDA has organized a brief curriculum called "FDA Basics." Included in this section is a question and answer segment concerning the FDA and the products it regulates, short videos about the activities of the Administration and conversations with agency personnel about their role in day-to-day operations.
Future curriculums featuring senior FDA officials are planned.
As one of her first acts as commissioner, Hamburg organized a task force to address the issue of transparency. As part of the second phase of the initiative, the task force will recommend ways to make FDA information more accessible and useful to the public, while protecting confidential information. The final phase will address transparency to regulated industries.
What is Workflow Automation and Why You Should Use It
Section 3.2 of CompTIA's Cloud+ certification exam covers workflow automation. You may be asking yourself what this is and why you need it. After all, can't you just manually spin up resources and be fine? Why do you need complicated processes to spin up these resources? In this post, we'll cover the topics that can answer these questions.
What is Workflow Automation?
Workflow automation is a process by which blueprints are defined for a certain outcome — and steps are taken to achieve that end result. These are commonly referred to as runbooks. Think of this as a recipe for making pizza. Except in the workflow automation world, we use terms like runbooks instead of recipes. This will list the ingredients or requirements such as cheese, sauce, toppings, dough. Your recipe may have you make the dough from scratch or you may opt to get pre-made dough pizza crusts.
This is a great example of the flexibility of workflow automation. It certainly can describe building something from scratch but it can also build upon other pre-made building blocks. For example, if you are spinning up a new Ubuntu virtual machine, you likely won't be building that from scratch. You would typically reference an Ubuntu image made available by your cloud provider.
There are quite a few reasons to use workflow automation. We'll highlight the top three below and expand upon them a bit. There are certainly many other reasons, but these are the major drivers.
1. Consistency

When using automation, one of the key benefits is consistency. When manually setting up environments, it is easy to skip steps or forget them. These mistakes can cause long-term issues that aren't easy to track down. When using workflow automation, it creates a reliably duplicable environment and consistency with each instantiation of that environment. When making changes to an environment, these automations can help ensure those changes are consistent across the environments.
Going back to our pizza recipe example. If you don't have a recipe, your pizza may not turn out consistent. You may make them slightly different or use different ingredients. You may not want to go back because sometimes it tastes great and other times it is less desirable. In infrastructure though, consistency is key and provides stability of the environment.
For example, AWS CloudFormation uses declarative configuration management in which you define the end state — and it figures out how to get there — no matter what state you started with. This provides very consistent results whether you are standing up new infrastructure or bringing existing infrastructure into specification.
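As a minimal, hypothetical illustration of that declarative style, a CloudFormation template describes only the desired end state (here, a single EC2 instance) and the service works out the steps. The AMI ID is a placeholder:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Declarative example - we state the end result, not the steps
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0123456789abcdef0   # placeholder AMI ID
      InstanceType: t3.micro
      Tags:
        - Key: Name
          Value: demo-web-server
```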
2. Transparency

Workflow automation and runbooks can usually be stored in a source repository or system where multiple people can review it. This review process is important as everyone can be on the same page in regards to what the deployment should look like. Our pizza reference breaks down a little on this one, but parts of it still hold true.
If you are gluten- or lactose-intolerant, you likely want some transparency into the ingredients so that you know if you can eat the pizza. Also, if you are looking to restrict your diet, but still want to indulge, you may want to know the estimated calories of the meal. In the infrastructure world, transparency goes a long way to help ensure the consistency.
Many automation tools use either JSON or YAML formats which are very easy to read. This allows for greater transparency. Typically, you would even store this in a version control system so that everyone can review at any given time. JSON is fairly human readable but typically requires a viewer to format it while YAML is much easier to read in its native format.
3. Ease of Duplication
If you have an environment or set of changes you want to reproduce easily, workflow automation makes that easier. Spinning up the environment one time may be easy, but what if you have to spin up five instances of it and remember all of the details? Or what if you need to apply a specific patch to multiple environments? With workflow automation, all the hard work happens in the planning phase. Once that is done, you have an easily reproducible set of steps.
Back to the pizza analogy, this is very similar to a recipe. Imagine a restaurant that makes pizzas from scratch without a recipe. Some staff may not know how and may have to ask other employees how to do it. It would be very time consuming to hunt down instructions for making pizza each time.
Using a cloud automation tool, the ease of duplication is unparalleled. In Microsoft Azure, you can use their Blueprints stack and simply apply to each Resource group as needed making the duplication as easy as a few clicks of the mouse or a few strokes of the keyboard.
What Processes Can be Automated?
Nearly every step of the deployment, creation, and management process can be automated. For example, when spinning up a new resource, storage, network, and memory requirements are allocated. An operating system image is cloned or instantiated against those resources. If there are any network ACLs that need to restrict traffic, those can be applied as well. Even after you have a running resource, the automation does not have to stop there. Workflow automation can cover first-time boot configuration settings.
For example, you may have a Debian Linux instance that needs to have Apache and PHP installed. It may also need SSH enabled and your public keys copied over so that you can access it using key based authentication. Once all that is in place, your application like WordPress may need to be copied over and extracted. This is a simple example of what could be achieved via automation.
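One common way to express that kind of first-boot setup is a cloud-init user-data file. This is a hedged sketch: the package list, WordPress steps, and SSH key are illustrative placeholders, not a tested deployment:

```yaml
#cloud-config
packages:            # install Apache and PHP on first boot
  - apache2
  - php
  - libapache2-mod-php
ssh_authorized_keys: # enable key-based SSH access
  - ssh-ed25519 AAAA...placeholder-public-key... admin@workstation
runcmd:              # fetch and unpack the application
  - curl -fsSL https://wordpress.org/latest.tar.gz -o /tmp/wp.tar.gz
  - tar -xzf /tmp/wp.tar.gz -C /var/www/html --strip-components=1
  - systemctl enable --now apache2
```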
It does not even end there though. Perhaps you have some malicious traffic trying to connect to your network and you want to block certain IP addresses. If your security groups have been provisioned via automation they very likely can be updated easily as well.
Code and application deployment is a huge one as well. Automations could be written such that a snapshot or backup is taken first, then any dependencies such as new OS libraries are installed before deploying the new code.
Cloud Automation vs Cloud Orchestration: What's the Difference?
It is easy to get confused by what each of these does or think they are the same thing. Cloud automation tends to be involved with spinning up various individual resources. For example, you may have cloud automation for AWS EC2 instances and a separate cloud automation to pull source from Github — and build it and then deploy over to the EC2 instance. These EC2 instances may go into their own VPC which needs to be created first.
Where cloud orchestration comes in is that it hovers above all these various automations as the umbrella that organizes and orchestrates it. When the dependencies are properly defined, many of these tools can work out all of the order of operations to ensure resources are available for other resources to depend on in the provisioning process. In the above example, it is important to ensure the VPC exists first so that the EC2 instance has a network container to be instantiated into. The EC2 instance has to exist before any code or applications can be deployed to it.
What Cloud Automation Tools Exist?
Cloud providers like AWS and Azure have their own native tools because they want to make it as easy as possible for you to spin up your resources as necessary. One of these tools, for example, is AWS CloudFormation. It is AWS-specific but very flexible.
But what if you’re working with cloud-agnostic or multi-cloud environments? You're not left to use each individual platform's tools. There are great 3rd-party tools like Terraform that work with all of the major cloud providers.
We've talked mostly about provisioning, but there are some great maintenance automation tools as well, such as Red Hat's Ansible and SaltStack. Putting resources in the cloud is not just about spinning up servers and letting them run themselves. They still need to be maintained and updated.
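As a small, hedged example of that maintenance side, an Ansible playbook for routine patching of Debian/Ubuntu hosts might look like this (the host group name is an assumption):

```yaml
# patch-webservers.yml - apply safe package upgrades across a fleet
- name: Apply package updates to all web servers
  hosts: webservers        # assumed inventory group
  become: true
  tasks:
    - name: Update the apt cache and apply safe upgrades
      ansible.builtin.apt:
        update_cache: true
        upgrade: safe
```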
Workflow automation can seem overwhelming. There are so many pieces to it and seemingly just as many tools. Always start with your business requirements and find the right tools for those requirements. For example, are you standardizing on AWS? If so you only need to focus on tools that work with AWS. Do you need automations to provision and maintain an environment? If so, you need to look at tools that can do both or two separate tools that can work together on that.
There are typically no wrong answers when it comes to these decisions. Business requirements and cost are usually the main factors and help narrow the field down. If the big picture gets overwhelming, break it down into smaller pieces and then take a step back to make sure they all fit together.
A lot has happened in the past year, including a worldwide pandemic that completely disrupted the traditional workforce. Technology has played a crucial role in helping the world to adapt to the new reality of remote work. However, these changes have also brought about unique cybersecurity concerns, which we will detail in this post.
Top Cybersecurity Threats of 2021
2021 has been a unique year as far as cyber security is concerned. The global COVID-19 pandemic dramatically changed how we work, shop, and learn. The internet has been a lifeline in a world looking for a sense of normalcy in the face of restricted travel and outright lockdowns.
Cybercriminals have also been working overtime, leading to an upsurge in cyber threats in 2021. Consequently, 2021 has been a record year for cyber-attacks, already surpassing 2020’s total even before the year’s end.
Moreover, according to the Identity Theft Resource Center (ITRC), 2021 is set to record the highest number of publicly reported data breaches for a single year.
Given this grim introduction, here are the most prevalent cybersecurity threats of 2021:
Phishing attacks are an all-time favorite for cybercriminals. Here, hackers trick users into downloading attachments or clicking links. Then, attackers can gain access to private databases, personal financial information, credit card credentials, and other types of sensitive information.
Phishing attacks top the list as the most common cyberattack in 2021. According to Google Safe Browsing, there were at least 2,145,013 registered phishing sites as of Jan 17, 2021. This figure is up 27% from Jan 19, 2020, when Google registered 1,690,000 phishing sites.
Phishing scams have also become more sophisticated and aren’t limited to email. For example, in smishing, the attack is delivered via SMS. Similarly, vishing attacks are delivered via voice calls.
Additional phishing statistics for 2021 include:
- The most targeted industries for phishing attacks in 2021 are financial institutions, social media, SaaS, payment, and E-commerce, respectively. (Source)
- 85% of phishing attempts in 2021 targeted user credentials. (Source)
- Phishing attacks account for 36% of all data breaches in 2021. (Source)
- 96% of phishing attacks originate from email. (Source)
- Companies implementing zero-trust security policies save $1.76 million in mitigated phishing attacks. (Source)
Ransomware attacks are malware that encrypts victims’ files, rendering them unusable. Then, attackers demand a ransom in exchange for restoring the files to their original state.
According to Coveware, a cybersecurity firm, the most common ransomware attacks in 2021 include Sodinokibi, Conti V2, and Lockbit. These malware variations accounted for 14.2%, 10.2%, and 7.5% of 2021 ransomware attacks, respectively.
Although ransomware incidents dropped in 2020 and even further in 2021, there is still cause for concern. Larger organizations are at higher risk this year, accounting for 42% of attacks. This figure compares to 33% of smaller companies reporting ransomware attacks. (Source)
Additional ransomware statistics for 2021 include:
- Ransomware has already cost businesses $20 billion in 2021 in ransom payments. This figure is up from $11.5 billion by the end of 2019. (Source)
- The average recovery cost from a ransomware attack in 2021 is $1.85 million. This cost includes recovery expenses, lost opportunities, ransomware removal, and lost opportunities. (Source)
- The average hacker ransom demand in 2021 is $220,298. This figure is up 43% compared to $178,000 at the tail-end of 2020. (Source)
- However, the average ransom payment for small businesses is $5,900. (Source)
- Less than a third of ransomware victims pay hackers to decrypt their data. (Source)
IoT Device Attacks
Internet of Things (IoT) refers to physical devices connecting to the internet and sharing data. Generally, these are nonstandard devices that can grant users remote access and interact with other devices over the internet. Common IoT devices include fitness trackers, medical sensors, intelligent security systems, and smart refrigerators.
Notable IoT threats statistics for 2021 include:
- 48% of businesses report that they cannot detect IoT security breaches on their network. (Gemalto).
- 75% of cyberattacks target network routers. (Symantec)
- Most IoT attacks happen within the first five minutes of connecting to the internet. (NETSCOUT)
- Cyberattacks against IoT devices have grown more than 100% in 2021 (Source)
- There have been more than 1.5 billion IoT device attacks in the first half of 2021 alone. These attacks mostly attempt to build botnets, mine cryptocurrency, and steal data. (Source)
Cryptojacking refers to when hackers hijack a victim’s personal or work computer to mine cryptocurrency. Cryptocurrency mining requires immense computer resources, so hackers may target unwitting internet users and hijack their processing power.
Unfortunately, cryptojacking cases are on the rise in 2021. In this case, hackers send malicious links that unknowing users click on, giving the hackers remote control of the victim’s computer. Then, the cybercriminals inject malware into the victim’s computer to secretly mine cryptocurrencies in the background. The process can use up as much as 70-80% of a computer’s processing power.
Cryptojacking malware, known as crypto miners, accounted for 41% of all detected malware in 2020. This trend is said to continue well into 2021. Kaspersky reported at least 432,171 cases of cryptojacking in the first quarter of 2021. (Source)
This spike in cryptojacking has been largely attributed to the increased value of cryptocurrency across the board. Historically, crypto miners increase during such booms, and cryptojacking incidents go down when crypto value plummets.
Notable Examples of 2021’s Notable Cyber Breaches
2021 has been a busy year for cybercriminals. Some of the year’s noteworthy cyberattacks and breaches include:
Example 1: Twitch
5 Billion Records Leaked
Twitch, the Amazon-owned streaming service, took to Twitter on October 6th, 2021, to announce a significant data breach. According to the BBC, the breach resulted in more than 100GB of leaked data.
The stolen data was later posted publicly on 4chan. Some of the stolen information included internal company documents, user data, security tools, and the service’s proprietary source code. Famously, payment information for Twitch elite gamers was also posted publicly.
The hack and subsequently leaked information are attributed to a “hacktivist” who was unhappy about the platform’s allegedly toxic community. There’ve been multiple incidents of malicious Twitch users developing bots to flood chatrooms with hateful messages. These attacks mostly targeted elite streamers on the platform.
Consequently, search engine queries for deleting Twitch rose 733% soon after the breach announcement.
Example 2: The Pandora Papers
11.9 Million Documents Leaked
The Pandora Papers have been dubbed the most significant offshore data leak in history. The hack targeted the rich and powerful to expose their financial secrets. The leak included files from financial companies used by the ultra-wealthy to create trusts and offshore structures in tax havens.
The leak included the financial secrets of more than 35 world leaders and 300 public officials in more than 90 countries. Leaked documents included share certificates, incorporation records, memos, emails, and compliance reports.
The Pandora Papers were leaked to the International Consortium of Investigative Journalists (ICIJ), revealing tax avoidance, hidden wealth, and even money laundering. It is worth noting that running secret companies or using offshore services is not illegal, but these services have been used to launder money, avoid taxes, and divert money.
Example 3: Astoria Company Database Breach
30 Million Individuals Impacted
Astoria Company, a lead generation company, was the victim of a data breach on January 26, 2021. The breach affected more than 30 million users of the company’s database. Astoria Company runs a network of websites that collect information on persons looking for payday loans, discounted car loans, and medical insurance.
Users volunteer personal information to Astoria’s network of websites. This information is later sent to partner sites such as loan agencies. Finally, Astoria makes its money through pay-per-lead referrals.
A threat intelligence team became aware of Astoria’s data breach. The stolen information was later listed for sale on the Dark Web. This information included people’s:
- Date of Birth
- Email address
- Physical Address
- IP Address
- Mobile Phone
Other leaked data included people's complete bank account information, social security numbers, and medical records. The breach resulted from malicious scripts on the company's website, allowing anyone access to the database from a public URL. More than 30 million Americans were affected by the breach.
2021 Cybersecurity State of Preparedness Statistics
The cybersecurity statistics for 2021 are grim so far. The obvious question is how well organizations are prepared to defend themselves against cyberattacks. According to Axio’s 2021 State of Ransomware Preparedness report, the outlook isn’t too bright either.
The report identifies several key areas where organizations are failing to implement and sustain basic cybersecurity practices, including:
Privileged Access Management – It appears that many organizations lack fundamental oversight and control over privileged credentials and access. Poor credential management is the most significant risk factor for cyberattacks. Unfortunately, many organizations do not take enough precautions to protect privileged credentials.
Exposure to Third-Party Risk – Most organizations rely on external partners to provide critical services. Many times, these third parties require network access for convenience and efficiency. Unfortunately, this setup means that the organization has less direct control over its network security.
Worse still, 29% of the survey's respondents admitted that they do not vet the external party's cybersecurity posture before allowing access to their network. In short, the inherited risk from external partners is a severe cybersecurity concern in 2021.
Basic Cyber Hygiene – Cyber hygiene means implementing basic practices and controls to secure a company’s data, network, and assets. Most of these practices are a low investment with potentially high rewards.
However, poor cyber hygiene is a worrying trend in 2021. For example, 69% of organizations report unlimited internet access for Windows domain controller hosts. This trend is worrying since domain controllers are a favorite avenue for hackers to spread ransomware or other attacks throughout the organization.
Network Monitoring – Network monitoring is indispensable for identifying and defusing cyberattacks before they occur. However, a worrying number of organizations do not monitor their networks for anomalies that may indicate an imminent attack. Even more worrisome, a significant number of organizations haven’t invested in basic network controls.
For example, 64% of companies do not monitor their networks for suspicious data transfers or network processes that consume disproportionate resources.
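To make that monitoring gap concrete, here is a deliberately naive sketch of the kind of baseline check the report says is missing: flag a host whose outbound transfer volume jumps far above its historical average. The host names, sample data, and threshold are assumptions for illustration only:

```python
import statistics

# Hypothetical history: bytes sent per host per hour, collected by a flow monitor.
history = {
    "db-server-01": [120e6, 130e6, 110e6, 125e6, 118e6],
    "web-server-02": [40e6, 45e6, 38e6, 42e6, 41e6],
}

def is_suspicious(host: str, bytes_sent: float, z_threshold: float = 3.0) -> bool:
    """Flag a transfer if it is more than z_threshold standard deviations
    above the host's historical mean (a crude anomaly test)."""
    samples = history.get(host)
    if not samples or len(samples) < 2:
        return False  # not enough baseline data to judge
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    if stdev == 0:
        return bytes_sent > mean
    return (bytes_sent - mean) / stdev > z_threshold

# A sudden 2 GB upload from the database server stands out immediately.
print(is_suspicious("db-server-01", 2e9))  # True
```

Real products use far richer models, but even a baseline this simple would catch the disproportionate transfers the report describes.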
Other noteworthy statistics from the 2021 State of Ransomware Preparedness Report include:
- Nearly 80% of organizations have either not implemented or have only partially implemented a privileged access management solution.
- Only 50% of companies conduct annual user awareness training for employees.
- Only 42% of organizations log actions executed using privileged credentials.
- Only 39% of companies monitor for irregular use of privileged credentials.
- Only 26% of companies restrict command-line scripting tools by default.
- 60% of organizations do not maintain updated records of external parties with access to their network.
Cybersecurity Trends for 2021
There is much to be done to prepare for cyberattacks adequately. So, what are organizations doing in 2021 to fend off cyber breaches? We can look at some of 2021’s cybersecurity trends to get answers to this question.
Transition to Zero-Trust Platforms
For one, we are likely to see a transition to Zero-Trust Network Access (ZTNA). These platforms are likely to replace Virtual Private Networks (VPNs) for two main reasons. First, VPNs are proving inadequate for stopping phishing attacks, which are on the rise.
Secondly, the COVID-19 pandemic has necessitated working away from a centralized office, and home offices tend to have less secure routers, firewalls, and access management than corporate ones. This is hardly surprising in the absence of a dedicated IT security team.
Therefore, network security is a concern more than ever before. The Zero-Trust policy is founded on the premise that there is no such thing as a trusted source. Therefore, organizations are keen to create a framework of enforced cyber hygiene.
According to the National Cyber Security Alliance, 60% of businesses will transition from VPNs in favor of ZTNA by 2023.
More Widespread Use of Multi-Factor Authentication
Strong passwords have long been a standard cybersecurity best practice. But more organizations are waking up to the reality that strong passwords aren’t enough to ward off cyberattacks. Many businesses are turning to Multi-Factor Authentication (MFA) to bolster cybersecurity.
MFA requires users to provide two or more verification factors. For example, a user may be required to enter their password and enter a unique code sent to their mobile phone.
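One common way to implement that unique code is a time-based one-time password (TOTP, RFC 6238), where the server and the user’s device share a secret and derive matching codes from the clock. Below is a minimal standard-library sketch for illustration; a production system should use a vetted library and constant-time comparisons:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    msg = struct.pack(">Q", counter)                # counter as big-endian uint64
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The server and the authenticator app share this secret once, at enrollment.
print(totp("JBSWY3DPEHPK3PXP"))  # e.g. "492039"; changes every 30 seconds
```

Because the code is derived from the current time window, a stolen password alone is useless without the enrolled device.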
According to Google, 2FA alone can stop three-quarters of targeted attacks, 96% of bulk phishing attacks, and 100% of all automated attacks.
Rising Cloud Security Threats
Today’s remote workforce is heavily reliant on cloud services. These services come with several benefits, including cost savings, efficiency, and scalability. According to a Cybersecurity Insiders report, 76% of companies use two or more cloud providers. Furthermore, 92% of organizations host at least part of their IT environment in the cloud.
However, cloud services also come with unique security threats and vulnerabilities. For example, poorly configured cloud settings may lead to security vulnerabilities such as unauthorized access, data breaches, and account hijacking.
Cloud services adoption also raises additional security issues, including misuse of personal devices, weak passwords, multiple potential entry points for hackers, and regulatory compliance issues.
The concern about cloud security threats is far from mere fearmongering. A whopping 79% of companies have experienced a cloud data breach in the past 18 months, according to a 2021 IDC survey.
Increased Use of Artificial Intelligence
Increased use of cloud services, remote working, and widespread adoption of IoT devices all contribute to the rise in cyberattacks. Security experts are hard-pressed to develop practical solutions to increasingly sophisticated cyber threats.
We will likely see more widespread use of AI and machine learning to bolster security infrastructure. Artificial intelligence brings a promising capability to the fight against cybercrime. For example, AI can analyze large volumes of at-risk data quickly. The technology has also proved its worth in natural language processing, automatic threat detection, and the automation of security systems.
Sadly, the opposite is also true. Hackers are already automating their attacks. Unfortunately, there is every reason to believe that this trend will continue to grow. These automated attacks will also get more targeted, effective, and complicated. | <urn:uuid:2729f62e-a090-45f2-baa7-8e0b41d47de9> | CC-MAIN-2022-40 | https://nira.com/global-cybersecurity-statistics/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00368.warc.gz | en | 0.936837 | 3,195 | 2.78125 | 3 |
Cybercrime has become a part of everyday life, and hackers are using any opportunity to take advantage of an unknowing victim to gain access to personal information for financial gain. As gatekeepers to the data of today’s small and medium businesses (SMBs), managed service providers (MSPs) are also becoming increasingly targeted by these attackers.
Some social engineering attacks are more obviously a scam than others. Education and cyber security training can mean the difference between compromised credentials and a failed attempt by a hacker.
One commonly used cyberattack is phishing. Phishing is an umbrella term for attacks that are typically delivered in the form of an email, chat, web ad, or website designed to impersonate a real person, system, or organisation. Phishing messages are crafted to create a sense of urgency or fear, with the end goal of capturing an end user’s sensitive data; successful attacks can result in wire transfer fraud, stolen credentials, malware attachments, and URLs leading to malware-spreading websites.
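Many anti-phishing tools operationalize these red flags (urgency language, suspicious links, unfamiliar senders) as simple heuristics. The toy sketch below illustrates the idea; the keywords, weights, and the example.com domain are invented for illustration, not taken from any real filter:

```python
import re

URGENT_WORDS = {"urgent", "immediately", "suspended", "verify", "act now"}

def phishing_score(sender: str, subject: str, body: str) -> int:
    """Crude heuristic score: higher means more phishing indicators."""
    score = 0
    text = f"{subject} {body}".lower()
    score += sum(1 for word in URGENT_WORDS if word in text)    # urgency/fear cues
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):      # raw-IP links
        score += 2
    if not sender.lower().endswith("@example.com"):             # outside the org
        score += 1
    return score

print(phishing_score("support@paypa1-security.net",
                     "Urgent: verify your account",
                     "Act now or your account will be suspended: http://192.0.2.7/login"))
```

A high score would route the message to quarantine or flag it for the user; real filters combine hundreds of such signals.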
The Different Types of Phishing Attacks:
Email Phishing
The most common phishing tactic, these emails are designed to look like they are from a trusted source. They ask the victim to reply to the email or fill out a web form, giving out personal details.
Spear phishing is an attempt to gain access to credentials or financial information from a targeted individual. Attackers pass themselves off as someone the target knows well or an organisation they’re familiar with to gain access to compromising information and exploit the victim. These attacks are purposefully crafted to target a specific user or small group of users. They are typically crafted after research of the target has occurred, resulting in a more personally relevant phishing attack.
Whaling is a form of spear phishing focused on a high-value target: the fraudulent communication appears to come from a senior employee within an organisation, to boost credibility. This approach also targets other high-level employees as potential victims and includes an attempt to gain access to company platforms or financial information. These attacks employ the same methods as spear-phishing attacks.
Mass phishing campaigns cast a wider net than the targeted techniques of spear phishing and whaling. True to their name, they are sent to the masses in the hope that a subset of recipients will fall victim. Typically, these are sent via email from a knock-off corporate entity insisting a password needs to be updated or credit card information is outdated. The damage caused by falling victim to a mass campaign may not be as immediately evident as with more targeted attacks, since there is a lag time between the successful attack and the sale of the data obtained in it.
Ambulance Chasing Phishing
This form of phishing is commonly a mass campaign, but can also be spear phishing. With ambulance-chasing phishing, attackers will play off of current crises to drive urgency for victims to take action that will lead to compromising data or information. For example, targets of this form of phishing may receive a fraudulent email encouraging them to donate to relief funds for recent natural disasters or the COVID-19 global pandemic.
Pretexting is a highly effective method of phishing as it reduces human defenses by creating the expectation that something is legitimate and safe to interact with. Pretexting involves an attacker doing something via a non-email channel to set an expectation that they’ll be sending something seemingly legitimate shortly. For example, attackers may call and leave a voicemail acting as a vendor saying that their contract will be sent shortly via email. Then, an email about the voicemail will be sent containing malicious links.
Account Expired/Change Password
Like other phishing scams, these look like they come from a trusted source, and inform the victim that their password for an account has expired. This is done to encourage them to enter other credentials to reset their password.
The scammer can then use these credentials to access the victim’s account.
SMS Phishing
Also called smishing, these work similarly to phishing emails, but are sent as SMS, social media messages, or other messages compatible with phones.
Clicking through links in these messages can give hackers access to your data, or allow them to install malicious software on your device.
Email Interception (Man-in-the-Middle)
This type of attack is more sophisticated, as it involves intercepting emails between two people. The attacker can then send emails to these two people, who believe the messages are coming from each other, when they are actually from the attacker.
They can ask for private information or request certain actions, and the person may easily fall victim, as they think the email is from a trusted source.
Evil Twin Wi-Fi
In this method, hackers create a Wi-Fi network that copies the name of a legitimate one. Anyone who connects to this spoofed network is exposed to the hackers, allowing them to capture passwords and other information.
This is usually done in public spaces such as coffee shops, malls, and airports.
How to get protected from Phishing:
These are just a few of the ways malicious actors will try to exploit businesses and their unknowing employees to gain access to credentials and financial information. To stay ahead of the curve, it’s crucial to educate every member of your organisation on the risks they face as the cybersecurity landscape continues to evolve and hackers become more sophisticated.
There are a few key ways to protect an organisation from phishing and increase your cyber resiliency.
- Regular training of staff and customers (Register here for Government Fully Funded Cyber Security Awareness Training)
- Learn the psychological triggers
- Build a positive security culture
- Implement technical measures, e.g. email security or anti-phishing solutions (see the DNS check sketch after this list)
- Test the effectiveness of the training
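One low-effort technical measure is confirming that your domain publishes SPF and DMARC records, which make your addresses harder to spoof. The sketch below assumes the third-party dnspython package and simplifies record handling:

```python
import dns.resolver  # third-party: pip install dnspython

def has_record(name: str, prefix: str) -> bool:
    """Return True if a TXT record starting with `prefix` exists at `name`."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return False
    return any(txt.to_text().strip('"').startswith(prefix) for txt in answers)

domain = "example.com"  # replace with your own domain
print("SPF published: ", has_record(domain, "v=spf1"))
print("DMARC published:", has_record("_dmarc." + domain, "v=DMARC1"))
```

Missing records mean anyone can send mail that claims to be from your domain, which is exactly what many of the attacks above rely on.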
To learn more about these protection methods, read our blog on “How to Spot and Protect Against Phishing Email Attacks”.
IT Connexion is a proud partner of Datto Inc. | <urn:uuid:0881eb2c-918a-4c9a-85ff-cdeb1b119c86> | CC-MAIN-2022-40 | https://www.itconnexion.com/common-types-of-phishing-attacks/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00368.warc.gz | en | 0.94641 | 1,261 | 3.34375 | 3 |
If the Department of Homeland Security has its way, cellphones will soon do more than transmit calls, GPS information and a host of data from the Web. They’ll also monitor the air for toxic substances that could be part of a chemical warfare attack.
Just as antivirus software springs to life when it spies suspicious activity, so Cell-All, from the DHS Science and Technology Directorate (S&T), regularly sniffs the surrounding air for certain volatile chemical compounds.
When a threat is sensed, a warning is delivered in one of two ways. For personal safety issues such as a chlorine gas leak, a warning is sounded; users can choose a vibration, noise, text message or phone call for their alerts. For larger-scale catastrophes, such as a sarin gas attack, details including the time, location and compound are sent to an emergency operations center.
Qualcomm, LG, Apple and Samsung
Detection, identification and notification all take place in less than a minute, DHS says.
“Our goal is to create a lightweight, cost-effective, power-efficient solution,” said Stephen Dennis, Cell-All’s program manager.
Toward that end, S&T is pursuing cooperative research and development agreements with Qualcomm, LG, Apple and Samsung. The hope is to have 40 prototypes in about a year, focusing first on detecting carbon monoxide and fire.
Technologists from Rhevision, meanwhile, have developed an “artificial nose” that’s essentially a piece of porous silicon that changes color in the presence of certain molecules and can be read spectrographically.
Cell-All will operate only on an opt-in basis and will transmit data anonymously, DHS stressed.
There’s already at least one effort to recruit citizens in monitoring air quality: the French Montre Verte, which uses a wrist device specially equipped with not just a timepiece but also a GPS chip, a Bluetooth chip, and ozone and noise sensors, as Springwise reports.
There have also been separate efforts to use cellphones to monitor localized rain patterns and to keep tabs on traffic, Johannes B. Ullrich, chief technology officer with the SANS Internet Storm Center, told TechNewsWorld.
Yet while such crowdsourcing efforts may “sound cool,” he noted, the trick will be making the technology work as intended.
‘The Question of Who’s Paying’
“The problem I see is how people will feel about having sensors in their phones,” Ullrich said. “They must have additional circuitry there, and that costs money, so there’s the question of who’s paying.”
Often, another problem with such sensor technology is a high incidence of false positives, such as might happen if a person were to hold the phone too close to the exhaust from a car, Ullrich added.
Of course, the crowdsourcing aspect could reduce that problem, he pointed out, allowing officials to hold off on sending a hazmat team, for instance, until enough phones in the same area had sent in alerts.
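That aggregation idea can be sketched as a simple threshold rule: escalate only when several distinct phones in the same grid cell report the same compound within a short window. All names and thresholds here are invented for illustration:

```python
from collections import defaultdict

ESCALATE_AFTER = 5      # distinct phones required before alerting officials
WINDOW_SECONDS = 120    # reports must fall within this time window

# (grid_cell, compound) -> [(device_id, unix_time), ...]
recent = defaultdict(list)

def ingest(device_id: str, grid_cell: str, compound: str, now: float) -> bool:
    """Record a sensor report; return True once the alert threshold is met."""
    key = (grid_cell, compound)
    # Drop reports that have aged out of the window.
    recent[key] = [(d, t) for d, t in recent[key] if now - t <= WINDOW_SECONDS]
    recent[key].append((device_id, now))
    distinct_devices = {d for d, _ in recent[key]}
    return len(distinct_devices) >= ESCALATE_AFTER
```

A single phone held near a tailpipe never trips the rule; five independent phones in the same block within two minutes would.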
‘It Will Be a Niche’
Indeed, “while the idea sounds like a good one, I think for it ever to fly, Homeland Security would need to convince citizens that this would be in their best interest,” Allen Nogee, a principal analyst with In-Stat, told TechNewsWorld. “That might be a hard sell, and even if it only costs a dollar, with over 100 million new handsets shipped in the U.S. alone, that’s $100,000,000 for just sensors.”
Even if customers have to pay just a penny more for the technology, “I can’t imagine anyone will use it,” telecom and wireless analyst Jeff Kagan agreed.
“If we have an attack, all of a sudden everyone will be interested,” Kagan predicted. “Until then, it will be a niche.”
A Radioactive Approach
Although science is “incredible,” there are so many types of potential risks out there — from chemical to biological to radioactive — “it’s impossible to even scratch the top here,” Nogee pointed out.
Sensors using radioactivity might be a better approach, he added.
“Such sensors are cheap, and the risk of a false alarm is pretty low,” Nogee said. “And in this case, knowing the location and readings from a wide range of users would be very helpful.
“So forget the chemical and other types of sensors that never would be feasible,” he concluded. “A radiation monitor would be the best to explore, but even the feasibility of that is even in question.” | <urn:uuid:8ef3cd98-6b8c-40e5-8b32-b574e6448197> | CC-MAIN-2022-40 | https://www.linuxinsider.com/story/air-sniffing-cellphones-could-aid-chemical-warfare-defense-69760.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00368.warc.gz | en | 0.94767 | 1,016 | 2.53125 | 3 |
The Network Computing, Communications and Storage research group at Aarhus University has developed a completely new way to compress data. The new technique provides possibility to analyze data directly on compressed files, and it may have a major impact on the so-called “data tsunami” from massive amounts of IoT devices.
The method will now be further developed, and it will form the framework for an end-to-end solution to help scale-down the exponentially increasing volumes of data from IoT devices.
“Today, if you need just 1 Byte of data from a 100 MB compressed file, you usually have to decompress a significant part of the whole file to access the data. Our technology enables random access to the compressed data. It means that you can access 1 Byte of data at the cost of decompressing less than 100 Bytes, which is several orders of magnitude lower than state-of-the-art technologies. This could have a huge impact on data accessibility, data processing speed and the cloud storage infrastructure,” says Associate Professor Qi Zhang from Aarhus University.
Compressed IoT data
The compression technique makes it feasible to compress IoT data (typically data in time series) in real time before the data is sent to the cloud. After this, the typical data analytics could be carried out directly on the compressed data. There is no need to decompress all the data or large amounts of it in order to carry out an analysis.
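The article does not disclose how the Aarhus technique works internally, but the general principle of random access into compressed data can be illustrated with ordinary block-wise compression: compress fixed-size chunks independently so that reading one byte costs only one chunk’s decompression. A minimal sketch:

```python
import zlib

CHUNK = 4096  # compress independently in 4 KB blocks

def compress_chunked(data: bytes) -> list[bytes]:
    """Chunk i holds bytes [i*CHUNK, (i+1)*CHUNK) of the original stream."""
    return [zlib.compress(data[i:i + CHUNK]) for i in range(0, len(data), CHUNK)]

def read_byte(chunks: list[bytes], offset: int) -> int:
    """Random access: decompress only the one chunk containing `offset`."""
    block = zlib.decompress(chunks[offset // CHUNK])
    return block[offset % CHUNK]

data = bytes(range(256)) * 1000            # 256,000 bytes of sample data
chunks = compress_chunked(data)
assert read_byte(chunks, 123_456) == data[123_456]
```

The trade-off is that smaller independent blocks compress less well; the research described here aims for far better ratios than this naive layout while keeping that random-access property.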
This could potentially alleviate the ever-increasing pressure on the communication and data storage infrastructure. The research group believes that the project’s results will serve as a foundation for the development of sustainable IoT solutions, and that it could have a profound impact on digitalization:
“Today, IoT data is constantly being streamed to the cloud, and as a consequence of the massive number of IoT devices deployed globally, exponential data growth is expected. Conventionally, to allow fast, frequent data retrieval and analysis, it is preferable to store the data in an uncompressed form.
“The drawback here is the use of more storage space. If you keep the data in compressed form; however, it takes time to decompress the data first before you can access and analyze it. Our project outcome has the potential not only to reduce data storage space but also to accelerate data analysis,” says Qi Zhang. | <urn:uuid:6ac3da62-fae1-4d9c-9664-9b54adc2f8bd> | CC-MAIN-2022-40 | https://www.helpnetsecurity.com/2020/08/31/compressed-iot-data/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00368.warc.gz | en | 0.92736 | 478 | 3.140625 | 3 |
Chip maker Nvidia has come a long way in transforming its graphics processors so they can not only be used to drive screens, but also to work as efficient and powerful floating point engines to accelerate modeling and simulation workloads.
At the GPU Technology Conference hosted in its hometown of San Jose, co-founder and CEO Jen-Hsun Huang declared the adjacent market of deep learning – a more neutral term than machine learning – as the next big wave of GPU computing. And, as Huang tells it, deep learning could be more important in the grand scheme of things than the advent of the Internet as intelligent automation makes its way into our professional and personal lives, doing tasks better than we can.
All makers of compute engines, including Nvidia, as well as storage devices and networking gear very much want for these new classes of deep learning workloads to take off because that will keep companies consuming more IT gear. All of the big hyperscale players have deep learning facilities, doing everything from speech, text, image, and video recognition to operating machinery, like our cars, autonomously on our behalf. Deep learning got its start in academia, with the creation of neural networks that emulate some of the processes in the human brain, decades ago, and in recent years it has flowered as hyperscale companies have to process unprecedented volumes of rich media data and are using more sophisticated and accurate neural network software, painfully developed over decades, to do this. It is beginning to look like a “big bang” in deep learning is about to happen, as Huang put it, thanks to the confluence of better approaches, such as convolutional neural networks, and cheap floating point performance.
As you might expect, while talking up the potential future market for its GPU engines, the company bragged a bit about the transformation of the HPC market by Tesla GPU coprocessors as adjuncts for CPUs in X86, ARM, and Power systems and clusters. GPU coprocessors give such systems substantially more floating point performance at a lower cost, lower electricity consumption, and lower heat dissipation than sticking with CPU-only machinery. During his keynote, Huang revealed some figures to show just how pervasive Tesla coprocessors have become in HPC:
While the CUDA programming environment for GPU offload was invented in 2006 for the GeForce 8800 graphics cards, the concept of a separate Tesla GPU coprocessor aimed at supercomputing clusters did not happen until two years later with the launch of the Tesla C870 and D870 devices, which offered single precision floating point processing. From there, Nvidia has moved through four generations of GPUs – code-named Tesla, Fermi, Kepler, and Maxwell, with Pascal and Volta still on the horizon – has increased the performance by roughly an order of magnitude and, interestingly enough, the Tesla computing installed base has grown by about an order of magnitude, too. (It is really about a factor of 8X, but as Huang himself admitted, he does “CEO Math,” where things are expressed in generalities, not with precision. Ironic, isn’t it?)
The growth in the Tesla business has been phenomenal, and as The Next Platform has recently reported, through Nvidia’s fiscal 2015 year (which ended in January), the Tesla line is now at an annual revenue run rate of “several hundred million dollars” and grew at around 50 percent in the last fiscal year for Nvidia, driven by strong adoption by both HPC and hyperscale organizations. Nvidia has talked about the number of CUDA downloads, the number of CUDA-enabled applications, and academic papers relating to GPU coprocessing in the past. New at the GTC 2015 event was Huang divulging that Nvidia has shipped a cumulative 450,000 Tesla coprocessors, an increase by a factor of roughly 7.5X since 2008. (Cumulative CUDA downloads have increased by a factor of 20X, and that is because CUDA works on all Nvidia GPUs, not just Tesla coprocessors.) The CUDA app count is up by a factor of 12X, to 319, as the fiscal year ended.
“This is amazing progress in a few short years,” Huang said, and it probably will not be long before sales of Tesla products are material enough that they will be broken out separately from other GPU sales for client devices. These days, all kinds of applications are being accelerated by GPUs, including databases and data stores, various kinds of streaming applications, as well as the traditional simulation and modeling workloads that are commonly called HPC.
Like in the past, software developers who have been looking at how to deploy cheap flops to run applications to get better bang for the buck have been driving Nvidia into the deep learning market. And Nvidia is reacting to the needs of this nascent but fast-growing market by doing what all compute engine makers do: Offering chips that are more precisely tuned to goose the performance of workloads. To that end, Huang flashed up a roadmap of future GPUs:
The new data on that roadmap is that the future “Pascal” processors will not only support 3D memory, which has high bandwidth as well as high capacity, but also will offer mixed precision floating point calculation, specifically 16-bit FP16 instructions. Jonah Alben, senior vice president of GPU engineering at Nvidia, tells The Next Platform that this 16-bit floating point math would not only be useful for deep learning but also various kinds of imaging applications that can do fine with 16-bits rather than the 32-bits of single precision or the 64-bits of double precision.
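The appeal of FP16 for these workloads is easy to demonstrate: each value takes half the memory of FP32 (and native FP16 hardware can roughly double arithmetic throughput), at the cost of precision. A quick illustration with NumPy, which is not tied to any particular GPU:

```python
import numpy as np

weights32 = np.random.rand(1_000_000).astype(np.float32)
weights16 = weights32.astype(np.float16)

print(weights32.nbytes, "bytes in FP32")   # 4,000,000
print(weights16.nbytes, "bytes in FP16")   # 2,000,000 -- half the footprint

# The cost: FP16 keeps only about 3 decimal digits of precision.
x = np.float16(1.0) + np.float16(0.0004)
print(x)  # still 1.0 -- the increment is below FP16 resolution near 1.0
```

Neural network training and inference tolerate this coarseness well, which is why deep learning is the workload pushing half precision into GPU silicon.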
Alben also confirmed that as has been the case with past generations of GPUs, some features that are available in high-end, server coprocessor variants of the Nvidia GPUs are not cascaded down to workstation and client parts. The dynamic parallelism and Hyper-Q features of the “Kepler” family of GPUs, for instance, were only available in the Tesla coprocessors, not in the rest of the Kepler family. Having said that, given the preference for inexpensive compute at single precision among the hyperscalers that are driving machine learning these days, it seems likely that the GeForce line will support FP16 processing as an inexpensive option. Not all GPUs will support the NVLink point-to-point interconnect in the Pascal family, either. It would not be surprising to see Pascal cards with NVLink for dual-GPU workstations and with four or eight connections between GPUs for compute farm nodes.
With the exception of the FP16 detail, the roadmap above is essentially the same one that Nvidia showed a year ago, which not only shows when Nvidia has each GPU architecture ready, but also the relative performance per watt of the devices as gauged by the SGEMM single-precision matrix math test. Nvidia can dial up and down the clock speeds, CUDA core count, and thermals to hit the performance per watt targets shown in the roadmap.
Huang did a little rough math to explain how the future Pascal GPUs would excel compared to the Maxwell GPUs when running deep learning applications, saying once again to expect around an order of magnitude more performance using his “CEO Math” rough approximations.
The comparison above is interesting because it shows the effects of the FP16 instructions, the 3D stacked memory, called High Bandwidth Memory (HBM) developed by AMD and Hynix, and NVLink all added up and relating to the deep learning applications that Nvidia is counting on to push its Tesla business to the next level. At last year’s GTC event, when Nvidia rejiggered its GPU roadmap, the company pushed out the hardware-assisted unified memory that was expected with the Maxwell GPUs to the Pascal GPUs. At the time, Huang also said to expect that by 2016, when Pascal is expected to ship, Nvidia could deliver 1 TB/sec of memory bandwidth on the GPU, roughly 4X that of the Maxwell GPUs.
In his keynote address this year, Huang said that the Pascal GPU would have twice the performance per watt as the Maxwell GPU, and added that the Pascal card would have 32 GB of this HBM memory on the card (it is not 3D, strictly speaking, but what is referred to as 2.5D, with memory chips jammed side-by-side very tightly). That is a factor of 2.7 increase in memory, and he added that the FP16 would double the instructions and that the memory bandwidth would increase by a factor of 3X, so the resulting instruction bandwidth into and out of memory would increase by a factor of 6X. Average it all out over the various stages of deep learning, when neural nets are learning and refining their models, customers should expect a 5X improvement in performance on deep learning applications. That performance improvement assumes that the final weighting stage of the machine learning application can make use of NVLink interconnect on a machine with four GPUs sharing their HBM memory. Huang hinted that the Pascal-based machine would deliver a 10X improvement when it shifted to eight GPUs in a single NVLink cluster.
Not everyone will necessarily go the Tesla route with deep learning applications. The GeForce Titan X card, based on a Maxwell chip with 7.1 teraflops of single precision math on its 3,072 CUDA cores, costs a mere $999. A follow-on Titan X card based on Pascal, which will no doubt come to market, will deliver twice the performance per watt, generally speaking, and so will have even better bang for the buck. If FP16 instructions are supported on Pascal variants of the Titan X cards and NVLink does not help so much on certain portions of the deep learning algorithms, it is possible that companies will even go so far as to build hybrid systems using a mix of different styles of Pascal GPUs within their systems.
The one thing that Nvidia did not launch today was a Maxwell-based Tesla coprocessor card, and Sumit Gupta, general manager of the Tesla GPU Accelerated Computing division at Nvidia, did not comment on when or if the company would get such a device into the field. We are beginning to think, based on an OpenPower roadmap that IBM was showing off a week ago, that there never will be a Maxwell Tesla part:
As you can see, the roadmap shows the Kepler GK210 variant of the Tesla card as being part of the OpenPower platform in 2014, with the Pascal GP100 part coming in 2016 and the Volta GV100 coming in 2017. Just because the OpenPower partners are not showing a Maxwell variant of the Tesla does not mean it does not exist. In fact, we have heard of some Maxwell-based Tesla test units that were making the rounds with hardware partners last fall. The message thus far conveyed indirectly by Nvidia is that Pascal is what is coming next for the Tesla line. | <urn:uuid:2f78e7d2-2568-4f37-994c-c105452482cc> | CC-MAIN-2022-40 | https://www.nextplatform.com/2015/03/18/nvidia-tweaks-pascal-gpus-for-deep-learning-push/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00368.warc.gz | en | 0.958732 | 2,225 | 2.65625 | 3 |
Traditionally, storage and security have been separate disciplines within IT. While the two groups had some overlapping concerns and worked together on some projects, they were largely distinct.
These days, that model is changing. The constant news about security breaches at well-known companies like Sears and Delta Air, Panera Bread, Saks Fifth Avenue and Lord & Taylor, MyFitnessPal, Orbitz, FedEx and the city of Atlanta have enterprise IT leaders very concerned about their own risk.
Many are adopting DevSecOps, an approach that makes everyone in the organization responsible for security. For storage professionals, that means paying greater attention to data storage security.
What Is Data Storage Security?
Data storage security is a subset of the larger IT security field, and it is specifically focused on securing storage devices and systems.
The Storage Networking Industry Association (SNIA) Dictionary offers the following, more technical definition of data storage security:
Storage Security: Application of physical, technical and administrative controls to protect storage systems and infrastructure as well as the data stored within them. Storage security is focused on protecting data (and its storage infrastructure) against unauthorized disclosure, modification or destruction while assuring its availability to authorized users. These controls may be preventive, detective, corrective, deterrent, recovery or compensatory in nature.
The SNIA also notes that secure storage “may also be the last line of defense against an adversary, but only if storage managers and administrators invest the time and effort to implement and activate the available storage security controls.”
For storage administrators and managers, ensuring proper data storage security is a careful balancing act. They must weigh three primary concerns covered by the acronym CIA: confidentiality, integrity and availability. They must keep sensitive data out of the hands of unauthorized users and they must assure that the data in their systems is reliable, while also making sure that data is available to everyone in the organization who needs to access it.
At the same time, they need to be very cognizant of costs and the value of their data. No one wants to end up with data storage security systems that are more expensive than the value of the data they are protecting. Yet organizations also need to have strong enough security systems that breaching them would require potential attackers to expend more time and resources than the data would ultimately be worth.
Data Security vs Data Protection
Storage security and data security are closely related to data protection. Data security primarily involves keeping private information out of the hands of anyone not authorized to see it. It also includes protecting data from other types of attacks, such as ransomware that prevents access to information or attacks that alter data, making it unreliable.
Data protection is more about making sure data remains available after less nefarious incidents, like system or component failures or even natural disasters.
But the two overlap in their shared need to ensure the reliability and availability of information, as well as in the need to recover from any incidents that might threaten an organization’s data. Storage professionals often find themselves dealing with data security and data protection issues at the same time, and some of the same best practices can help address both concerns.
Data security and data protection are clearly overlapping concerns. Image Source: SNIA
Key Drivers for Data Storage Security
Several recent trends are increasing enterprise interest in data security. They include the following:
- Data growth — According to IDC, the amount of data stored in the world’s computer systems is roughly doubling every two years. For enterprises, that means constantly needing to add new storage in order to keep up with business needs. And as storage volumes grow, they become more valuable as targets and more difficult to protect.
- Cyberattack growth — The Verizon 2018 Data Breach Investigations Report uncovered 53,000 security incidents last year, including 2,216 data breach incidents, and that’s only a fraction of the actual events experienced by organizations. A recent report from a UK government agency found that 2017 had more cyberattacks than any other year on record. New attacks seem to be in the news nearly every day, and that has businesses worried about their security posture.
- Cost of data breaches — Recovering from a data breach is incredibly expensive. The Ponemon Institute 2017 Cost of a Data Breach Study found that companies experiencing breaches spent an average of $3.62 million, or about $141 per record lost, to recover from incidents in 2017. Those expenses can be a powerful encouragement to improve data security.
- Increasing data value — Thanks to the rise of big data analytics, organizations are more aware than ever of the value of their data. According to Gartner, the big data analytics market grew by as much as 63.6 percent in recent years, and by 2020 enterprises will likely spend $22.8 billion on tools to help them uncover valuable insights in their data. But for analytics to prove useful, enterprises need to be able to ensure the veracity of their data, and that means investing in security.
- Edgeless networks — Thanks to trends like cloud computing and the Internet of Things (IoT), enterprises now have data spread out in more places than ever before. Corporate networks no longer have a hard edge that organizations can define and protect with firewalls. Instead, they must rely more strongly on defense in depth, including storage security, to protect their information.
- Regulation — Governments are taking an increasing interest in data security and passing stronger laws as a result. The EU’s General Data Protection Regulation (GDPR), which goes into effect May 25, 2018, is forcing companies around the world to take stronger measures to protect customer privacy, and that will impact storage security as well.
- Need for business continuity — 2017 was a record year for natural disasters in the US, highlighting the need for business continuity and disaster recovery capabilities. This is driving demand for secure backup and other storage security technologies.
- DevSecOps approaches — According to Forrester, 63 percent of organizations have already implemented DevOps, and another 27 percent are planning to do so. As DevOps grows, more companies are becoming interested in DevSecOps, which integrates security into the approach and spreads responsibility for security throughout the organization — including the data storage team.
Another huge driver of interest in data storage security is the vulnerabilities inherent in storage systems. They include the following:
- Lack of encryption — While some high-end NAS and SAN devices include automatic encryption, plenty of products on the market do not include these capabilities. That means organizations need to install separate software or an encryption appliance in order to make sure that their data is encrypted.
- Cloud storage — A growing number of enterprises are choosing to store some or all of their data in the cloud. Although some argue that cloud storage is more secure than on-premises storage, the cloud adds complexity to storage environments and often requires storage personnel to learn new tools and implement new procedures in order to ensure that data is adequately secured.
- Incomplete data destruction — When data is deleted from a hard drive or other storage media, it may leave behind traces that could allow unauthorized individuals to recover that information. It’s up to storage administrators and managers to ensure that any data erased from storage is overwritten so that it cannot be recovered (a minimal sketch follows this list).
- Lack of physical security — Some organizations don’t pay enough attention to the physical security of their storage devices. In some cases they fail to consider that an insider, like an employee or a member of a cleaning crew, might be able to access physical storage devices and extract data, bypassing all the carefully planned network-based security measures.
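To make the overwriting point concrete, here is a best-effort sketch. Note the caveat in the comments: on SSDs and copy-on-write or journaling filesystems, a file-level overwrite does not guarantee the old blocks are destroyed, so full-disk encryption or a device-level secure erase is the safer route:

```python
import os

def shred(path: str, passes: int = 3) -> None:
    """Best-effort overwrite-then-delete. Not reliable on SSDs or
    copy-on-write filesystems; shown only to illustrate the concept."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))   # overwrite with random bytes
            f.flush()
            os.fsync(f.fileno())        # push the write to the device
    os.remove(path)
```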
Data Security Best Practices
In order to respond to these technology trends and deal with the inherent security vulnerabilities in their storage systems, experts recommend that organizations implement the following data security best practices:
- Data storage security policies — Enterprises should have written policies specifying the appropriate levels of security for the different types of data that it has. Obviously, public data needs far less security than restricted or confidential data, and the organization needs to have security models, procedures and tools in place to apply appropriate protections. The policies should also include details on the security measures that should be deployed on the storage devices used by the organization.
- Access control — Role-based access control is a must-have for a secure data storage system, and in some cases multi-factor authentication may be appropriate. Administrators should also be sure to change any default passwords on their storage devices and to enforce the use of strong passwords by users (a minimal RBAC sketch follows this list).
- Encryption — Data should be encrypted both in transit and at rest in the storage systems. Storage administrators also need a secure key management system for tracking their encryption keys (an encryption sketch follows this list).
- Data loss prevention — Many experts say that encryption alone is not enough to provide full data security. They recommend that organizations also deploy data loss prevention (DLP) solutions that can help find and stop any attacks in progress.
- Strong network security — Storage systems don’t exist in a vacuum; they should be surrounded by strong network security systems, such as firewalls, anti-malware protection, security gateways, intrusion detection systems and possibly advanced analytics and machine learning based security solutions. These measures should prevent most cyberattackers from ever gaining access to the storage devices.
- Strong endpoint security — Similarly, organizations also need to make sure that they have appropriate security measures in place on the PCs, smartphones and other devices that will be accessing the stored data. These endpoints, particularly mobile devices, can otherwise be a weak point in an organization’s cyberdefenses.
- Redundancy — Redundant storage, including RAID technology, not only helps to improve availability and performance, in some cases, it can also help organizations mitigate security incidents.
- Backup and recovery — Some successful malware or ransomware attacks compromise corporate networks so completely that the only way to recover is to restore from backups. Storage managers need to make sure that their backup systems and processes are adequate for these type of events, as well as for disaster recovery purposes. In addition, they need to make sure that backup systems have the same level of data security in place as primary systems.
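Two of the practices above lend themselves to short sketches. First, role-based access control at its core is just a mapping from roles to permissions, checked on every request; the roles and permissions below are invented for the example:

```python
ROLE_PERMISSIONS = {
    "storage_admin": {"read", "write", "configure", "delete"},
    "backup_operator": {"read", "snapshot"},
    "auditor": {"read"},
}

def authorize(role: str, action: str) -> bool:
    """Deny by default: unknown roles and unlisted actions are refused."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert authorize("auditor", "read")
assert not authorize("auditor", "delete")
```

Second, encryption at rest can be as simple as wrapping writes with an authenticated cipher. This sketch uses the third-party cryptography package’s Fernet recipe; a real deployment also needs the key management the article mentions (rotation, escrow, access logging):

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()   # store in a key management system, not next to the data
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"customer record 42")
assert cipher.decrypt(ciphertext) == b"customer record 42"
```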
Learn More About Data Storage Security
8 Data Security Best Practices
1. Write and enforce data security policies that include data security models.
2. Implement role-based access control and use multi-factor authentication where appropriate.
3. Encrypt data in transit and at rest.
4. Deploy a data loss prevention solution.
5. Surround your storage devices with strong network security measures.
6. Protect user devices with appropriate endpoint security.
7. Provide storage redundancy with RAID and other technologies.
8. Use a secure backup and recovery solution.
| <urn:uuid:85cae39f-c4b6-43dd-809b-d963d5d6de50> | CC-MAIN-2022-40 | https://www.enterprisestorageforum.com/management/data-storage-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00568.warc.gz | en | 0.947365 | 2,205 | 2.640625 | 3 |
“Without decisive action, we are gambling away our last chance to – literally – turn the tide”, UN secretary-general António Guterres has said ahead of the meeting. But why is he proclaiming this as our last chance?
At past COPs, various extensions to the UNFCCC treaty have been negotiated to establish legally binding limits on greenhouse gas emissions for individual countries, and to define an enforcement mechanism. These include the Kyoto Protocol in 1997, which defined emission limits for developed nations to be achieved by 2012; and the Paris Agreement, adopted in 2015, in which all countries of the world agreed to step up efforts to try and limit global warming to 1.5°C above pre-industrial temperatures, and boost climate action financing.
So, here’s where COP26 gets interesting: during the conference, among other issues, delegates will be aiming to finalise the ‘Paris Rulebook’, or the rules needed to implement the Agreement. This time they will need to agree on common timeframes for the frequency of revision and monitoring of their climate commitments. Basically, Paris set the destination, limiting warming to well below two degrees (ideally 1.5), but Glasgow is the last chance to make it a reality.
Which all brings us back to the initial question: why is it the last chance? It is the simple fact that climate change has gone from being an uncomfortable low-level issue, to a life-threatening global emergency, in the past three decades. Although there have been new and updated commitments made by countries ahead of COP26, the world remains on track for a dangerous global temperature rise of at least 2.7°C this century even if Paris goals are met.
The science is clear: a rise of temperatures of that magnitude by the end of the century could mean, among other things, a 62 per cent increase in areas scorched by wildfires in the Northern Hemisphere during summer, the loss of habitat of a third of the mammals in the world, and more frequent four to ten month-long droughts.
Guterres bluntly calls it “climate catastrophe”, one that is already being felt to a deadly degree in the most vulnerable parts of the world like sub-Saharan Africa and Small Island States, lashed by rising sea levels. Millions of people are already being displaced and killed by disasters exacerbated by climate change.
For Guterres, and the hundreds of scientists on the Intergovernmental Panel on Climate Change (IPCC), a scenario of 1.5°C warming is the “only liveable future for humanity”. The clock is ticking, and to have a chance of limiting the rise, the world needs to halve greenhouse gas emissions in the next eight years.
This is a gigantic task that we only will be able to do if leaders attending COP26 come up with bold, time-bound, and front-loaded plans to phase out coal and transform their economies to reach so-called net-zero emissions. | <urn:uuid:f0c36124-b6a7-4f39-a5a6-09eee9f5a8c7> | CC-MAIN-2022-40 | https://digitalinfranetwork.com/news/time-is-running-out/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00568.warc.gz | en | 0.934402 | 611 | 2.921875 | 3 |
It’s a double blind – a way of ensuring that even if someone steals your password, they still won’t be able to break into the safe without you there.
It’s what’s called two-factor authentication (or 2FA), and in this modern age of increased cybercrime, you need to make sure your IT security company knows how to help you use it properly.
2FA provides clear identification of users through the combination of two different components. These components can range from an iris scan to a bank card, a USB security token, a username, or a PIN.
Mobile phone two-factor authentication is the most commonly used form for businesses today. After inputting a password online to log in to a service or access data, the user will receive an automated text or phone call on their mobile phone asking for confirmation that the user is the one trying to log on. It’s a fast and simple way to ensure that, even if your password is stolen, your data will still be locked down to your eyes only.
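On the server side, this SMS flow usually amounts to issuing a short random code with an expiry and allowing a single use. A minimal sketch; the in-memory dictionary and time-to-live are stand-ins for a real database or cache, and the SMS gateway itself is stubbed out:

```python
import secrets
import time

CODE_TTL = 300  # seconds a code stays valid
pending = {}    # user_id -> (code, issued_at); a real system would use a DB/cache

def issue_code(user_id: str) -> str:
    code = f"{secrets.randbelow(1_000_000):06d}"  # 6 digits, cryptographically random
    pending[user_id] = (code, time.time())
    return code  # hand off to the SMS gateway here

def verify_code(user_id: str, attempt: str) -> bool:
    entry = pending.pop(user_id, None)            # single use: remove on any attempt
    if entry is None:
        return False
    code, issued_at = entry
    return attempt == code and time.time() - issued_at <= CODE_TTL
```

The expiry and single-use rules matter as much as the randomness: they keep an intercepted or guessed code from being replayed later.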
Cybercrime is on the rise. Even smaller businesses are becoming tempting targets for hackers looking to steal personal data to use for identity theft. It’s crucial that your IT company is one step ahead of the hackers, and that they can bring out tools like 2FA when needed to protect your interests.
Even more importantly, you need to work with an IT company that stays on top of the changing market for 2FA and IT security. Authentication technology changes every year, and adaptability is key to true security, because the threats are always evolving too.
With the ever-changing advancements of identification and 2FA, keep your company in the lead with the information to stay ahead – put your trust in Fuelled Networks for all your Ottawa IT security needs. Contact us at (613) 828-1280 or firstname.lastname@example.org to learn how two-factor authentication can help ensure your data stays for your eyes only.
Published On: 16th March 2015 by Ernie Sherman. | <urn:uuid:13845b73-c4b6-4f56-ab89-a221413c1279> | CC-MAIN-2022-40 | https://www.fuellednetworks.com/does-your-it-company-talk-about-two-factor-authentication/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00568.warc.gz | en | 0.93303 | 456 | 2.5625 | 3 |