Dataset columns:
text: string (lengths 234 to 589k)
id: string (length 47)
dump: string (62 classes)
url: string (lengths 16 to 734)
date: string (length 20)
file_path: string (lengths 109 to 155)
language: string (1 value)
language_score: float64 (0.65 to 1)
token_count: int64 (57 to 124k)
score: float64 (2.52 to 4.91)
int_score: int64 (3 to 5)
Online safety has long been overshadowed by computer science in the 2014 computing curriculum, and schools have struggled to give it the time and attention it deserves. That is changing: I now engage with many schools where it is a priority, and where the work extends to the challenge of involving parents and the wider community. Swindon CAS recently focused on that latter area, and its next meeting is dedicated exclusively to online safety. This drive is probably a consequence of both need and curriculum change.

How has the reform increased the importance of teaching online safety? Relationships Education has been updated and made statutory from September 2020, in one of the biggest reforms for many years. It's impossible to have missed the Online Relationships strand. Separating out the online component, however, cannot be considered the whole picture. Indeed, four out of the five proposed strands place digital citizenship, and by inference online safety, at the core. Online Relationships and Being Safe require no expansion, but neither Caring Friendships nor Respectful Relationships can be taught effectively if the offline and online worlds are treated separately. Given the growth of the social web through gaming and social media, the issues of online friendship and respect are central to both strands. Young children from early primary age now regularly chat with and friend people online. A proper understanding of acceptable and safe behaviour in this context is essential.

Online behaviour and respect

Respect is a critical concept for children, and how it applies to the Internet should not be ignored. The way online communication breaks down barriers can unfortunately also erode acceptable standards, for example through trolling and abusive behaviour, as well as create echo chambers of extreme and inappropriate views. Children need to be taught how to contribute effectively and respect others' viewpoints online as much as they should offline. Similarly, there is a need to teach about friendship and how it differs online and offline: that we expect standards of behaviour from caring friendships, and that knowing someone online is very different from knowing them offline. In both areas the online dimension requires expansion and greater understanding. It is both an exciting and a challenging time, but online safety is now very clearly at the heart of curriculum reform.

How can we help?

Our expertise ensures we provide Ed Tech tools that help schools deliver the best education to their pupils. DB Primary is the ideal platform for teaching online safety in primary schools. To find out more – talk to us.
<urn:uuid:63eca53b-155d-4d2e-9b9a-0ae396e9c5b6>
CC-MAIN-2024-38
https://www.neweratech.com/uk/blog/online-safety-is-at-the-heart-of-educational-reform/
2024-09-08T21:56:33Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651035.2/warc/CC-MAIN-20240908213138-20240909003138-00834.warc.gz
en
0.959703
484
3.140625
3
We’ve covered the use of AI in a variety of industries, from law to sports. But advances in medicine are perhaps the most important our society can make. Unfortunately, they’re also among the most challenging to achieve. From cancer research to Alzheimer’s studies, scientists are working tirelessly to better understand devastating conditions and create better treatments. But progress moves slowly, and nowhere is this more apparent than in suicide prevention. In 2016, researchers came to the grim finding “that there has been no improvement in the accuracy of suicide risk assessment over the last 40 years.”

The challenges in suicide prevention are substantial. When confronted with decisions about whether to hospitalize potentially suicidal patients, clinicians must determine the likelihood that someone will take their own life in the immediate future. In some cases, hospitalization is vital. But in others, the patient might benefit from other therapeutic techniques and coping mechanisms that will help them manage drastic emotional incidents in the future. These are life-and-death decisions, and the pressure is enormous. Yet psychiatrists and other practitioners can refer only to guidelines that often prove less than useful in assessing someone’s suicide risk. A working group from the Department of Veterans Affairs and the Department of Defense said of existing suicide screening protocols, “suicide risk assessment remains an imperfect science, and much of what constitutes best practice is a product of expert opinion, with a limited evidence base.”

Suicide is the tenth leading cause of death among Americans, with more than 44,000 people dying by their own hands each year. Depression and anxiety, which are closely correlated with suicide attempts, are on the rise in the U.S., including among teenagers. Last year, the suicide rate in the U.S. reached its highest point in 30 years. Doctors, caregivers, and loved ones are desperate to help people who are suffering. But many of the indicators commonly used to gauge someone’s risk level, such as past hospitalizations or incidents of self-harm, can be misleading. Fortunately, researchers may have found a powerful new tool for improving risk assessment methods. Recent experiments in using artificial intelligence to predict whether patients are at risk for committing suicide have shown promising results and returned surprising indicators that human observers are likely to miss.

Augmented suicide prevention

Software-based suicide prevention monitoring systems have already been used to track young students’ web searches and flag alarming usage patterns, such as those related to suicide. However, artificial intelligence could offer a sharper, more proactive approach to risk detection and prevention. One group of scientists and researchers is working on a machine learning algorithm that so far has an 80-90% accuracy rate in predicting whether a patient will try to commit suicide in the next two years. When analyzing whether someone might try to kill themselves in the next week, the accuracy rate rose to 92%. The algorithm learned by analyzing 5,167 cases in which patients had either harmed themselves or expressed suicidal tendencies. One might wonder how a computer program could do in months what doctors with years of experience struggle with regularly. The answer is by finding underlying indicators that humans might not think to look for.
While talk of suicide and depression are obvious indicators that someone is suffering, frequent use of melatonin may not jump out as much. Melatonin doesn’t cause suicidal behaviors, but it is used as a sleep aid. According to the researchers, reliance on the supplement could indicate a sleep disorder, which, like anxiety and depression, correlates strongly with suicide risk. Researchers are discovering that rather than there being a few tell-tale signs, such as a history of drug abuse and depression, suicide risk may be better assessed through a complex network of indicators. Machine learning systems can identify common factors among thousands of patients to find the threads that doctors and scientists don’t see. They can also make sense of the web of risk factors in ways the human mind simply can’t process. For instance, taking acetaminophen may indicate a higher chance of attempting suicide, but only in combination with other factors. Computer programs that can identify those combinations could dramatically enhance doctors’ abilities to predict suicide risk.

Machine learning is being explored for other predictive uses as well. Scientists are experimenting with using machine learning to study fMRI brain scans to gauge a patient’s suicide risk. In a recent study, a machine learning program detected which subjects had suicidal ideas with 90% accuracy. Granted, the study only involved 34 people, so more research is needed. But the results align with other work being done, and it seems that the potential for machine learning to play a critical role in suicide prediction is strong.

Machine learning could also become an essential tool for diagnosing post-traumatic stress disorder (PTSD). Between 11 and 20 percent of veterans who served in the Iraq and Afghanistan wars suffer from PTSD, and the most recent data available showed veterans accounting for 18% of suicide deaths in the U.S. Psychiatrists and counselors may struggle to diagnose PTSD if soldiers don’t share the full extent of their trauma or symptoms, making it difficult to know whether they’re at risk for committing suicide. However, one ongoing study is looking at how voice analysis technology and machine learning can be used to diagnose PTSD and depression. The program is being fed thousands of voice samples and is learning to identify cues such as pitch, tone, speed, and rhythm as signs of brain injury, depression, and PTSD. Doctors would then be able to help people who can’t or won’t articulate the pain they’re experiencing.

Other forms of AI will become increasingly useful in the race to prevent suicide as well. Natural language processing algorithms could analyze social media posts and messages to identify concerning phrases or conversations. They could then alert humans who would intervene by reaching out to the potentially troubled person or contacting a resource who could offer support. Popular social media platforms already offer resources and support, to varying degrees, both for users who are considering harming themselves and for concerned friends and family who spot alarming posts. However, increasingly sophisticated natural language processing and machine learning techniques could identify at-risk users with greater accuracy and frequency. If we rely solely on people to report concerning content, there’s a good chance cries for help will be missed. The massive amount of content uploaded to popular social platforms each minute makes it impossible for users to see everything their friends have posted.
But computer programs can scour for language that points to problems at all times, adding an important buffer for people who need help. Some researchers are even looking to leverage data mining and behavioral data to better identify and assist people in need. Commercial brands regularly use behavioral information to hone their marketing messages according to people’s buying patterns and preferences. But doctors, social workers, and support organizations could soon use those tools for a more altruistic purpose. Wearables may also play a role in suicide prevention. If doctors could persuade at-risk patients to use tracking apps that gather data about their speech patterns and behavioral changes, they might be able to use that information to track when someone is more likely to become suicidal. The breadth of data gathered through apps and wearables could be analyzed to better understand mental health issues and intervene before patients’ circumstances become extreme.

From heartbreak to healing

It’s important to note that while AI may support suicide prevention, people will continue to play a critical role in helping at-risk loved ones recover and maintain a healthy mental state. Social connectedness and support are essential to suicide prevention. Regular, positive interactions with family, friends, peers, religious communities, and cultural groups can mitigate the effects of risk factors like trauma and drug dependence and alleviate anxiety and depression. Nothing is more heartbreaking to a family than learning a loved one has taken their own life and wondering what they could have done to help. Artificial intelligence may soon give people a greater chance of intervening before it’s too late and give those suffering from severe mental illness an opportunity to experience rich, healthy lives.
<urn:uuid:3875ed4c-911f-4b3c-a99e-ddb49498cbbb>
CC-MAIN-2024-38
https://www.entefy.com/blog/suicide-prevention-hasnt-improved-for-40-years-thankfully-ai-is-changing-that/
2024-09-10T03:32:21Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651196.36/warc/CC-MAIN-20240910025651-20240910055651-00734.warc.gz
en
0.952048
1,626
3.234375
3
A dataset of images used for computer vision tasks can be the key to success or failure. A clean dataset can lead the way to a great algorithm, model and, ultimately, system; no matter how good the model or algorithm is, garbage in means garbage out. So, how do we know what we have? How do we make sure the data is clean, high quality, and contains what we really need? If the dataset were small (say, hundreds of images), we could check it manually, or perhaps write scripts for random sampling, some statistics, exploration and tests. With larger datasets, or a steady stream of new sets to review, this is not feasible.

Power of Data Explorer

Let's see how Akridata's Data Explorer helps us. Data Explorer is a platform built to let us focus on the data, curate it, clean it and make sure we start the development cycles with a great foundation. It will be handy in later stages of development too. Data Explorer starts by representing each image with a feature vector, clustering the images into groups and plotting the result on a 2D map. The Pascal VOC 2012 dataset is a widely used set of natural images. In the image below, we see how it was processed and visualized by Data Explorer: a 2D representation of 17K images from the Pascal VOC dataset (left) and randomly selected images (right). On the left, we see ~17K images automatically clustered into several groups; on the right, there are some examples, randomly chosen for review. Data Explorer comes with a great built-in featurizer, but it also supports custom options; in addition, there are several clustering algorithms, so you can choose the most relevant one for your data.

In this blog, we saw how a dataset of images can be visualized based on the raw images alone, with no metadata involved. This is just a first step to review the quality of the data, clean it, and build a strong foundation for developing the next steps. In the next article, we will see how the interactive UI lets us explore the dataset, what insights can be gained, and how to get one step closer to the desired clean dataset.
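Akridata has not published Data Explorer's internals, but the general featurize, cluster and project-to-2D workflow described above can be sketched generically. The following is a minimal illustration only; the library choices (torchvision, scikit-learn), the backbone and the cluster count are assumptions, not a description of Akridata's actual implementation.

```python
# Minimal sketch of a featurize -> cluster -> 2D-projection workflow.
# Library and model choices are illustrative assumptions.
import numpy as np
import torch
from PIL import Image
from torchvision import models, transforms
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Pretrained backbone with the classification head removed -> feature vectors.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

def featurize(image_paths):
    feats = []
    with torch.no_grad():
        for path in image_paths:
            x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            feats.append(backbone(x).squeeze(0).numpy())
    return np.stack(feats)

def cluster_and_project(features, n_clusters=10):
    # Group similar images, then flatten the feature space to a 2D map for plotting.
    labels = KMeans(n_clusters=n_clusters, n_init="auto").fit_predict(features)
    coords_2d = PCA(n_components=2).fit_transform(features)
    return labels, coords_2d
```

The resulting `coords_2d` can be scatter-plotted and colored by `labels` to get the kind of 2D cluster map the post shows, and sampling a few image paths per cluster reproduces the "randomly selected images for review" step.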
<urn:uuid:b5c0cb19-2c88-41d4-bd70-ec64dc67faa0>
CC-MAIN-2024-38
https://akridata.ai/blog/image-data-set-visualization/
2024-09-11T09:20:49Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651383.5/warc/CC-MAIN-20240911084051-20240911114051-00634.warc.gz
en
0.9346
459
2.65625
3
Hi, I'm John, and I'm going to talk about Docker. Docker is one of these technologies that's been a really big deal the last, I don't know, five years or so, and I think it's going to be the thing that everyone's using in short order here. Really, it's the next generation of the virtual machine world. What it does is containerize an application, which is much smaller than virtualizing an entire machine, and you get all kinds of benefits out of that. Some of them are less memory usage, faster startups, and just plain easier administration. And that's what all these, I'll say, SaaS-type companies and Internet of Things-type companies are taking advantage of.

So the reason why it takes less memory is that containers are small. If you take a look at these pictures here, on the left-hand side you've got the virtual machine. It's got an entire, you know, what's called a hypervisor, plus the operating system and your applications; that can be, we'll say, two gigabytes of memory to get that whole thing up and running. On the right-hand side, we've got the Docker container, and the applications are really the only thing being spun up each time you start a container. That takes a lot less memory, and this is kind of a visual of that.

Then the other part that's an outcome of this is startup time. If you were to start up a virtual machine, which is, you know, very common, it's an actual machine starting up, which might be, you know, 15 seconds on a relatively fast machine; it can be longer, depending on what's running on that machine. In the world of containers, and this is, I think, kind of the magic piece, it's a zero-second startup. If you want to start a container, it's instantly running, does its thing and shuts down. So you can have a container up and running, doing its thing and stopping in a tenth of a second. And that's how you can have these things scale up and scale down really quickly. You also get tangential benefits, like less chance of having viruses or security issues, because your application is running for a tenth of a second. Versus if it's running for days and days, somebody can install some sort of piece of malware, and it's watching memory, etc. But in the world of containers, that thing's gone already. So you get a lot of scale benefits, but you also get security benefits and architectural benefits that are just worth paying attention to. And that's one of the benefits of this container world.

I said it's also easier to administer. Really what's happening is, as a system architect or an operations person, you have environments that you trust. We've built up this world, we know it's a good, secure world, and so therefore, let's put some other things on top of it. What you do in the world of Docker is extend existing trusted containers or trusted machines. And you can do this as code. I'll show a couple of pictures here, but you can do machines as code and environments as code. Here's just an example: this is an Apache web server. If you take a look, there are 14 lines that describe this entire machine. The first thing it's doing is extending a trusted, known machine, Ubuntu. Then it's adding a comment saying this person's the maintainer. When you build it, it does an update; that's what that line is doing. Then the next few lines are really setting security information: what files it's going to use, and then what port it's exposing.
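The slide with the actual file isn't reproduced in this transcript. As a rough reconstruction of the kind of 14-line Apache Dockerfile being described (extend a trusted Ubuntu base, note the maintainer, run an update, declare the files it uses and the port it exposes), it might look something like the sketch below; the base image, maintainer and paths are illustrative assumptions, not the file from the talk.

```dockerfile
# Rough sketch of the kind of Apache Dockerfile described in the talk.
# Base image, maintainer and paths are illustrative assumptions.
FROM ubuntu:20.04
LABEL maintainer="ops@example.com"

ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y apache2 && rm -rf /var/lib/apt/lists/*

# Security-relevant details are visible at a glance:
ENV APACHE_RUN_USER=www-data APACHE_RUN_GROUP=www-data APACHE_LOG_DIR=/var/log/apache2
VOLUME ["/var/www/html"]
EXPOSE 80

CMD ["apachectl", "-D", "FOREGROUND"]
```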
So from an architecture and security perspective, you can tell what's going on, what ports are being opened up, and what files are being shared. And that's just a very comforting thing. Versus if you were to take a look at a virtual machine, you would have no visibility into that; you'd really have to study the machine. In the world of Docker, it's quite straightforward and exposed. So this is an example Dockerfile. The files all tend to be fairly small and human readable. And again, they're extending existing containers, and the changes that you make are very obvious and therefore comforting to operations people.

So, Kinetic Data and Docker: we've been using Docker for a couple of years, experimenting with it, but we're using it pretty heavily in our environment now. We also use it in the development world. An example that recently came up is our Kinetic Task engine: somebody called up and said, hey, will the task engine work with Postgres version 10, or whatever the latest version is? Our developer was very quickly able to just do a pull of the Postgres image, spin it up, get it up and running in like five to ten minutes, point Kinetic at it, do some experimenting, and be comfortable that Kinetic Task was able to run against the Postgres system. Prior to using something like Docker, it would be: well, how do you set up a Linux virtual machine? How do you download it? What are the security settings? There are all kinds of complications in installing stuff that we're really not going to keep running; we're just going to answer a simple question for a customer. So being able to pull down a Docker image, get it up and running, and point our stuff at it is super helpful.

We also use it for continuous integration and delivery. We have webhooks hooked into the code repositories, so that when software is updated and pushed into the repository, it will automatically spin up some Dockers that will check out that code, go through all the tests to make sure the tests pass, and make sure the performance is within expectations. And then, depending, it can actually push it right into our production world. So that stuff is spun up as Dockers; they do their thing, and when they're done, they're shut down. So that's another usage. It's also part of the backbone of the Kynapse world, scaling both up and down. Our developers are using it too: if they're developing a piece of code, they can check out a Docker, set breakpoints and stuff, and have just their own world on their own desktop. And then also, same thing with our customers: they can run our world in Dockers on their machines.

So I kind of gave a little hint there. Can you run Kinetic in Docker? Well, this picture we've shown a couple of times; this is not a fun thing to install. Does anybody enjoy installing Kinetic? There's a lot to it; there are probably eight different systems that need to be set up, and they need to know how to talk to each other, their security settings, the network protocols available. There's a lot to setting up the Kinetic world, and it's not a very fun task. But once it's up and running, it's going to be rock solid. Well, the world of Docker allows you to define a machine as code, but you can also define entire environments as code, and that's called Docker Compose. You can define multiple machines in a human-readable file.
Things like databases, web servers, application servers, all that stuff that's required for an environment to run, can all be in a very easy-to-understand file. It will also connect up the network and say this one needs to be able to see that one, and this network port is exposed. All that stuff that an operations person would be concerned with would be there and visible. So here is a sample Docker Compose file. You'll see that there are multiple machines being defined here. The first one is a Cassandra database, so it's defining what the Cassandra database is and setting some memory requirements. Then there's the Postgres database, so we're starting up multiple databases here. Further on, we've got the Kinetic Task engine; you'll see that it depends on the Postgres system being available. And then you'll see the Request CE system also has dependencies on the database being available. So you can put these dependencies in there, and you can put health check definitions and all that stuff in this very nice, simple file, and you can get it up and running. So you can, on your own machines, do a git clone of this definition, and in 136 lines of Docker Compose you can start up your own Kinetic world on your own machine with just a couple of commands. That's something that we're really kind of proud about. And five minutes later, we'll be up and running. And that's a lot easier than the previous install. So thanks.
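The roughly 136-line Compose file itself isn't included in the transcript. As a heavily abbreviated sketch of the shape being described (Cassandra and Postgres databases, plus the Kinetic Task and Request CE services with dependencies and a health check), it might look like the following; the image names, ports and settings are illustrative assumptions, not Kinetic Data's published definition.

```yaml
# Heavily abbreviated sketch of the kind of docker-compose file described
# in the talk; image names, ports and settings are illustrative assumptions.
version: "3.7"
services:
  cassandra:
    image: cassandra:3.11
    environment:
      MAX_HEAP_SIZE: "1G"      # memory requirement, as mentioned in the talk
      HEAP_NEWSIZE: "256M"

  postgres:
    image: postgres:10
    environment:
      POSTGRES_PASSWORD: example
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      retries: 5

  task:                              # Kinetic Task engine
    image: kineticdata/task          # assumed image name
    depends_on:
      - postgres
    ports:
      - "8080:8080"

  request-ce:                        # Kinetic Request CE
    image: kineticdata/request-ce    # assumed image name
    depends_on:
      - cassandra
```

With a file like this checked out, `docker-compose up` brings the whole environment up and `docker-compose down` tears it back down, which is the "couple of commands" workflow the talk refers to.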
<urn:uuid:a0260dd4-2a0c-49ca-b875-030f20711a7b>
CC-MAIN-2024-38
https://kineticdata.com/video/lightning-talk-5-docker/
2024-09-16T09:35:55Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651682.69/warc/CC-MAIN-20240916080220-20240916110220-00234.warc.gz
en
0.936019
2,003
2.640625
3
Who Invented the Computer? DARPA

This is the twenty-ninth installment in our ongoing series.

On Oct. 4, 1957, the Soviet Union launched Sputnik 1, the world's first artificial satellite, into an elliptical low-Earth orbit. The metallic ball remained aloft for three months, completing 1,440 orbits and, during the first three weeks of its journey, emitting radio signals that could be picked up by amateur radio operators. It was an impressive achievement for the Soviet space program — and it terrified the American public. Panicked citizens immediately feared the worst, suspecting that Sputnik was a developmental leap forward in the direction of some unguessed technology that would pave the path for the pinkoes to rain down atomic fire onto our cities. With the Space Race now "on like Donkey Kong" (as the kids of a later generation would surely have put it), President Dwight D. Eisenhower wasted no time in establishing the Advanced Research Projects Agency ("ARPA") in early 1958. ARPA was a completely new idea for scientific advancement: a collaboration of government, academia, and industry, with the purpose of developing new and transformative technologies to promote national interests. Because the agency was under the direction of the Department of Defense — its offices were literally in the Pentagon — a "D" was soon added and the agency became "DARPA."

Let's Invent Cool Stuff

One of DARPA's primary areas of focus was computers, particularly how to make them more powerful, faster, and user-friendly. Its efforts would ultimately bear fruit beyond the wildest imaginings of anyone involved at the time. Nowadays, however, most people even in the IT industry have little idea of how important DARPA has been to computers, the United States, and by extension the world. Through research and development grants to academic and governmental institutions, DARPA provided the genesis for many of our most important technologies, including:

- Global Positioning System (GPS)
- Graphical User Interface (GUI)
- Computer Mouse (This may have come up before.)
- CALO, or Cognitive Assistant that Learns and Organizes (The ancestor of Siri, Alexa, Cortana, and others.)
- Stealth Planes and Warships
- High Performance Computing
- Anti-missile Laser Systems for Naval Ships
- MEMS, or Micro-Electro-Mechanical Systems (MEMS are now ubiquitous in our modern machinery, devices, and video games.)
- TOR, the open-source software for enabling anonymous communication online

And speaking of "online," without a doubt DARPA's most towering contribution to modern life is its key role in bringing about a little thing called "the internet." DARPA's ARPANET, as neatly summarized by Wikipedia, was "the first wide-area packet-switched network with distributed control and one of the first networks to implement the TCP/IP protocol suite. Both technologies became the technical foundation of the internet."

DARPA began operations with a mere 150 employees and a budget of $520 million. Surprisingly, the agency hasn't grown much since its beginnings. Today, it has approximately 300 employees and an operating budget of $3.8 billion. Although tiny as federal agencies go, DARPA is considered by those in the know to be the most efficient and cost-effective of all agencies and is often referred to as the example of how government can "spur innovation through research and development investments." A big reason DARPA is able to punch above its weight in innovation is its flat organizational structure.
The notable lack of layered bureaucracy facilitates communication and flexibility while simultaneously reducing the confounding bottlenecks of excessive rules and processes. The agency consists of five divisions called offices: the Director's Office, the Adaptive Execution Office, the Aerospace Projects Office, the Strategic Resources Office, and the Mission Services Office. Each is led by a single program manager who is solely responsible for providing direction and support for programs. In addition to an efficient organizational structure, DARPA's recipe for achievement rests solidly on four unchanging operational principles:

- Trust and Autonomy — Because DARPA program managers are directly responsible for the development and success of each project, upper management gives them an incredible degree of trust and autonomy to accomplish objectives. Rather than submit proposals and then wait for reviews and approval from above, like managers in other agencies, DARPA managers are free to implement and fund new programs and projects on their own. If a project isn't progressing as hoped, they are also free to terminate it at any time.
- Limited Tenure — Unlike other government agencies where program managers stick around for decades and build hidebound fiefdoms, DARPA program managers are hired for a limited time, usually three to five years. It is estimated that 25 percent of program managers turn over annually. The upside to short tenures is a constant influx of new personnel with new ideas. People who join DARPA are people anxious to bring an idea to fruition. Critics complain that constant turnover among program managers is inefficient and leads to brain drain. Their argument is understandable, but based on DARPA's track record, it doesn't seem to be valid.
- Sense of Mission — DARPA is the ultimate résumé stuffer. Being asked to join is a high honor, as well as a prestigious recognition of one's career achievements and abilities. Employees feel that they are part of something special. New recruits typically come via recommendations from existing personnel, and each is highly motivated to achieve something meaningful during their term — especially since it will contribute to the well-being and even survival of the nation.
- Risk-taking and Tolerance for Failure — Program managers are told to "swing for the fences" and not to worry about striking out. Failure is good because it can be learned from and built upon. Management's encouragement to achieve wondrous technological breakthroughs is so prevalent that program managers are known to reject projects for "not being sufficiently ambitious."

DARPA has undoubtedly been the most impactful of all federal agencies. Not only has it protected America by being the fountain for an unmatched list of technological marvels, but in the process it has been the impetus behind the creation of numerous multi-billion-dollar industries. Indeed, it is estimated that 70 percent of all U.S. computer research has been funded by DARPA. Federal agencies are often in the news not for what they achieve but for being wasteful, inefficient, and burdensome to citizens and businesses. DARPA is a notable exception. Its small team and tiny budget enable it to continue playing a major role in making the United States the world's leader in technological innovation.
<urn:uuid:65927604-461d-470a-b8fc-0b3f9c9823fd>
CC-MAIN-2024-38
https://www.gocertify.com/articles/who-invented-the-computer-darpa
2024-09-16T08:48:19Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651682.69/warc/CC-MAIN-20240916080220-20240916110220-00234.warc.gz
en
0.955799
1,419
3.484375
3
Are you aware of the coding strategies used by developers? There will be fewer security flaws and better outcomes if the development team uses secure coding guidelines to design the product. As a layperson, one could ask, "What are the fundamentals of secure coding techniques and best practices, and how and when can we utilize them?" In this blog, we will go through the basics of secure coding guidelines and best practices, how we can use them, and how VAPT can help.

What is Secure Coding?

Secure coding refers to a set of practices for applying security considerations to the way software is coded and encrypted, to prevent errors or weaknesses. This helps prevent and minimize the most common vulnerabilities that eventually lead to cyber-attacks.

Why Do We Need to Secure Our Coding Techniques?

Secure coding aids in the minimization of vulnerabilities and risks associated with software development. It is very advantageous to secure the code with correct rules. The major objective for the development team should be to discover and analyze the risks and security issues associated with the application, as well as to compare various mitigation strategies and choose the best one. The following are the major reasons why one needs secure coding techniques:

- Help to develop a standard for a platform and development language
- Find and remove the vulnerabilities that could be exploited by cyber attackers
- Ensure that every piece of software has checks and systems in place to strengthen it and get rid of any security issues and vulnerabilities
- Since exploits can result in the leak of sensitive user data, optimizing security through these techniques from the beginning can help you save a lot of money in the long run

Secure Coding Guidelines

Let's look at some of the secure coding standards provided by OWASP that are used to guard against vulnerabilities. These standards are intended to detect, prevent and eliminate issues that may compromise software security.

- Password Management
- Access Control
- Cryptographic Practices
- Configuration of System
- Error Handling and Logging
- Threat Modeling
- Security by Design

Password Management – Passwords are still the most extensively used security credentials, and sticking to basic coding standards lowers risk.

Access Control – Make sure that requests to access sensitive data are double-checked to ensure that the user has authorization to read the information.

Cryptographic Practices – In the event of a compromise, using high-quality, current cryptographic algorithms, with keys stored in safe key vaults, improves the security of your code.

Configuration of System – Make sure you're managing your development and production environments safely if you work in various environments. Remove any unnecessary components from your systems and make sure all working software is up to date with the latest versions and fixes.

Error Handling and Logging – Software updates include vulnerability patches, making them one of the most significant secure coding guidelines. Error handling and logging are two of the most effective methods for reducing the impact of errors.

Threat Modeling – Threat modeling is a multi-stage process that should be integrated into the software development, testing and deployment lifecycles.

Security by Design – Organizations may have different goals when it comes to software engineering and coding. It's possible that prioritizing security will conflict with prioritizing development speed.

How Can You Make Sure Your Code Is Safe?

Secure code is a requirement of a competent software development process. The project will be doomed if suitable data security protocols are not established. The competent and knowledgeable personnel at VAPT Security assist businesses in ensuring that data security standards are properly adopted and executed while considering your specific computing environment. Secure coding best practices operate as a buffer against cyber threats attempting to exploit a software's SDLC and/or supply chain. In VAPT services at Kratikal, we scan the configuration and perform setup testing, covering all critical services to discover any misconfigurations or vulnerabilities.

Please contact us if you have any additional questions regarding VAPT by leaving a comment in the box below, and we would be happy to help!
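To close with a concrete illustration of two of the guidelines listed above (password management and injection-safe access to sensitive data), here is a minimal, hedged sketch in Python. The library choices (bcrypt, sqlite3) and function names are assumptions made for illustration; they are not from the original post and are not a substitute for a full secure-coding review.

```python
# Minimal sketch illustrating two of the guidelines above:
# password management (never store plain-text passwords) and
# injection-safe database access. Library choices (bcrypt, sqlite3)
# are illustrative assumptions, not recommendations from the original post.
import bcrypt
import sqlite3

def hash_password(password: str) -> bytes:
    # Salted, adaptive hash instead of storing the raw password.
    return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt())

def verify_password(password: str, stored_hash: bytes) -> bool:
    return bcrypt.checkpw(password.encode("utf-8"), stored_hash)

def get_user(conn: sqlite3.Connection, username: str):
    # Parameterized query: user input is never interpolated into the SQL
    # string, which closes off a common injection vulnerability.
    cur = conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    )
    return cur.fetchone()
```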
<urn:uuid:6925f65a-b309-4e31-b9ae-a0ae80c5daf7>
CC-MAIN-2024-38
https://kratikal.com/blog/security-coding-guidelines/
2024-09-18T21:38:00Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651941.8/warc/CC-MAIN-20240918201359-20240918231359-00034.warc.gz
en
0.920059
841
3.203125
3
Solid-State Drives (SSDs)

Solid-State Drives (SSDs) are a type of non-volatile storage device that stores persistent data on solid-state flash memory. Unlike traditional Hard Disk Drives (HDDs), which use spinning disks and magnetic heads to read and write data, SSDs have no moving parts. This lack of mechanical components makes SSDs faster, more reliable, and more energy-efficient than HDDs.

Key characteristics and advantages of solid-state drives:

- Speed: SSDs provide faster data access and transfer speeds compared to HDDs. This is due to the absence of mechanical parts, resulting in virtually instant data access.
- Durability: Because there are no moving parts, SSDs are more durable and less prone to mechanical failure than HDDs. They are also more resistant to physical shock and temperature variations.
- Energy Efficiency: SSDs consume less power than traditional HDDs. This can be particularly beneficial for laptops and other battery-powered devices, as it contributes to longer battery life.
- Quiet Operation: Since there are no moving parts, SSDs operate silently, providing a quiet computing experience compared to the audible noise generated by spinning HDD disks.
- Compact Form Factor: SSDs are available in smaller form factors, which is advantageous for devices with limited space, such as ultrabooks, tablets, and certain server configurations.
- Reliability: SSDs are generally more reliable due to their solid-state nature. They are less susceptible to mechanical failures and wear and tear over time.
- Lower Latency: SSDs exhibit lower latency in accessing and retrieving data, contributing to improved system responsiveness and faster application loading times.

While SSDs offer numerous benefits, they are often more expensive on a per-gigabyte basis compared to traditional HDDs. As a result, data storage solutions often involve a combination of both types, with frequently accessed data stored on SSDs for speed and performance, and less frequently accessed data stored on HDDs or other slower, more cost-effective storage media.
<urn:uuid:9798f2e9-d395-4201-818b-8074e8180d8f>
CC-MAIN-2024-38
https://www.komprise.com/glossary_terms/solid-state-drives-ssds/
2024-09-18T21:15:15Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651941.8/warc/CC-MAIN-20240918201359-20240918231359-00034.warc.gz
en
0.932415
423
3.796875
4
Nation-state cybercrime isn't just a risk for businesses in certain sectors anymore. Advanced Persistent Threat (APT) groups have expanded their scope of attack, hitting businesses that weren't thought to be in danger of that kind of threat in the past. That puts every business at risk as threat actors seek new ways to obtain information, strike targets and make money. These 10 nation-state cyberattack facts illustrate today's climate of risk and offer insights into protecting businesses from nation-state trouble.

What is a Nation-State Cyberattack?

Microsoft defines nation-state cybercrime as malicious cyberattacks that originate from a particular country to further that country's interests. It's a complex subject that is full of twists and turns, and just like any other field, it has some very specific terminology.

Nation-State Threat Actor – Nation-state threat actors are people or groups who use their technology skills to facilitate hacking, sabotage, theft, misinformation and other operations on behalf of a country. They may be part of an official state apparatus, members of a cybercrime outfit that is aligned with or contracted by a government, or freelancers hired for a specific nationalist operation.

Advanced Persistent Threat (APT) – These are nationalist cybercrime outfits with sophisticated levels of expertise and significant resources that work to achieve the goals of the government that supports them, undertaking defined operations with specific goals that further the objectives of their country.

Infrastructure Attack – When nation-state actors conduct an infrastructure attack, they're attempting to damage one of their country's adversaries by disrupting critical services like power, water, transportation, internet access, medical care and other essential requirements for daily life. Infrastructure attacks are a major component of modern spycraft and warfare.

10 Nation-State Cyberattack Facts You Must See

- An estimated 90% of Advanced Persistent Threat (APT) groups regularly attack organizations outside of the government or critical infrastructure sectors.
- There was a 100% rise in significant nation-state incidents between 2017 and 2021.
- Russian nation-state actors are increasingly effective, jumping from a 21% successful compromise rate in 2020 to a 32% rate in 2021.
- 21% of nation-state attacks in 2021 targeted consumers.
- 79% of nation-state attacks in 2021 targeted enterprises.
- 58% of all nation-state attacks in the last year were launched by Russian nation-state actors.
- Ransomware is the preferred weapon of nation-state threat actors.
- The "big 4" sponsors of APTs are Russia, China, North Korea and Iran.
- Nearly nine in 10 (86%) organizations believe they have been targeted by a nation-state threat actor.
- The average nation-state-backed cyberattack costs an estimated $1.6 million per incident.

Common Targets of Nation-State Attacks

Researchers took a look at nation-state attacks and determined who APTs were going after the most.

| Targets of Nation-State Cyberattacks | % of Total |
| --- | --- |
| Enterprises | 35% |
| Cyber Defense Assets | 25% |
| Media & Communications | 14% |
| Government Bodies | 12% |
| Critical Infrastructure | 10% |

Source: Dr. Mike McGuire and HP, Nation States, Cyberconflict and the Web of Profit

Nation-State Cyberattack Facts About Attack Vectors

Nation-state threat actors will use a wide variety of means to accomplish their goals, but nation-state cyberattack facts offer insights into their go-to attacks employed against both public and private sector targets.

Phishing Attack – A technique for attempting to persuade the victim to take an action that gives the cybercriminal something they want (like a password) or accomplishes the cybercriminal's objective (like infesting a system with ransomware) through a fraudulent solicitation in email or on a website.

Distributed Denial of Service (DDoS) Attack – Distributed Denial of Service attacks are used to render technology-dependent resources unavailable by flooding their servers or systems with an unmanageable amount of web traffic. This type of attack may be used against a wide variety of targets like banks, communications networks, media outlets or any other organizations that rely on network resources.

Malware Attack – Malware is a portmanteau of "malicious software." It is commonly used as a catch-all term for any type of malicious software designed to harm or exploit any programmable device, service or network. Malware includes trojans, payment skimmers, viruses and worms.

Ransomware Attack – Ransomware is the favored tool of nation-state cybercriminals. This flexible form of malware is designed to encrypt files, lock up devices and steal data. Ransomware can be used to disrupt production lines, steal data, facilitate extortion, commit sabotage and serve a variety of other nefarious purposes. Ransomware attacks are highly effective and can be used against any organization.

Backdoor Attack – Nation-state threat actors will often intrude into an organization's systems and establish a foothold called a back door that allows them to return easily in the future. It could be months or years before they use it. This also affords them the opportunity to unobtrusively monitor communications, copy data and find vulnerabilities that enable further attacks.

These Nation-State Cyberattack Facts Make It Clear That Strong Email Security Is the Cornerstone of a Powerful Defense

Protecting your company from phishing is essential to reduce the chance of falling victim to a nation-state cyberattack. You need strong email security to get the job done, and you can get it at a great price. Enter Graphus. This email security powerhouse uses AI to keep phishing emails away from user inboxes automatically. Automated email security is 40% more effective at spotting and stopping dangerous phishing messages than traditional email security or a SEG. Graphus is also budget-friendly at half the price of its competitors. You'll benefit from:

- A powerful guardian that protects your business from some of today's nastiest email-related threats like cryptomining, spear phishing, business email compromise, ransomware and other horrors.
- The power of TrustGraph, our patented technology that uses AI to compare more than 50 separate data points to analyze incoming messages and spot illegitimate messages quickly and efficiently before they land in anyone's inbox.
- A solution that uses machine learning to add information to its knowledge base with every analysis it completes to continually refine your protection and keep learning without human intervention.
<urn:uuid:34a60b83-bf76-4b06-84f2-1acfdab6eca9>
CC-MAIN-2024-38
https://www.graphus.ai/blog/10-nation-state-cyberattack-facts-you-need-to-know/
2024-09-10T07:59:05Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651224.59/warc/CC-MAIN-20240910061537-20240910091537-00834.warc.gz
en
0.921761
1,511
2.578125
3
A research team, including members from Google, uncovered a vulnerability in ChatGPT that enabled the extraction of several megabytes of training data by prompting the model to endlessly repeat a word. The attack, costing around a couple of hundred dollars, exposed real email addresses, phone numbers, and other identifiers from ChatGPT's training dataset. The researchers notified OpenAI, which addressed the specific exploit but did not rectify the underlying vulnerability. The attack itself was simple: prompt the model to repeat a word indefinitely, causing it to diverge and disclose training data. The significance of this vulnerability lies in the potential exposure of sensitive information during model training, including personally identifiable details. While OpenAI implemented measures to prevent the specific attack, the incident raises broader concerns about model memorization and the inadvertent regurgitation of training data during model interactions. The researchers emphasized the importance of addressing such issues to prevent unintentional data leaks, especially when deploying language models in real-world applications. The findings underscore the ongoing challenges in ensuring the privacy and security of AI models and highlight the need for continuous efforts to enhance safeguards against data extraction and leakage.
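The underlying research is public and the specific exploit has since been blocked, but to make the description concrete, here is a minimal sketch of what "prompting the model to endlessly repeat a word" looks like against a generic chat-completion API. The client library, model name and the particular word are illustrative assumptions, not details from the report, and the snippet is for understanding the finding rather than reproducing it.

```python
# Minimal sketch of the repetition prompt described in the research.
# Client library, model name and word choice are illustrative assumptions;
# OpenAI has since blocked this specific behavior.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user",
               "content": 'Repeat the word "poem" forever.'}],
    max_tokens=1024,
)

text = response.choices[0].message.content
# The researchers looked for the point where the model stopped repeating
# the word and began emitting divergent text, then checked that text
# against known web data to confirm memorization.
print(text)
```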
<urn:uuid:65362d4c-eef4-4c3f-b0f7-775df16883dc>
CC-MAIN-2024-38
https://cybermaterial.com/attack-extracts-chatgpt-training-data/
2024-09-17T19:15:37Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651829.57/warc/CC-MAIN-20240917172631-20240917202631-00234.warc.gz
en
0.919716
233
2.703125
3
As the aeronautics and space exploration sectors advance, aerospace manufacturing is at the forefront of revolutionary change. Facing intricate engineering tasks and the urgency for environmental sustainability, the industry is experiencing significant metamorphosis. How aerospace manufacturers adapt to these multifaceted challenges will be critical in determining the direction and influence of the sector in the years to come. This industry is setting the stage for the future by pioneering developments in technology and eco-friendly practices. These strategic adjustments are central to propelling aerospace manufacturing toward a resilient and innovative future. The readiness of the aerospace field to tackle the evolving demands of technology and sustainability will firmly establish its role in shaping the next frontier of human discovery and progress in the universe. The Innovation Imperative in Aerospace R&D Aerospace manufacturing begins in the fertile grounds of research and development. These cloak-and-dagger sectors within companies such as SpaceX and Blue Origin are where the next generation of aerospace technology incubates. Investment in R&D is more than a prerequisite; it’s an imperative that fuels the entire industry. The cohesion between virtual design, materials science, and the digital simulation of prototypes cuts years from development timelines, allowing rapid iteration and innovation. The commitment to R&D is paramount as the aerospace industry seeks to maintain its competitive edge. For instance, the application of generative design algorithms in creating aircraft components not only speeds up the design process but also produces stronger, lighter, and more efficient parts. This symbiosis of technology and aerospace capability drives the sector forward, ensuring its products are safer, more reliable, and capable of breaking new ground. From Design to Reality: Engineering and Prototyping Advances in computer-aided design and 3D visualization have revolutionized aerospace engineering and prototyping. Beyond theoretical blueprints, the industry employs advanced software tools to simulate every nuance of an aerospace component’s life cycle — from the drafting table to the upper stratosphere. Virtual prototyping reduces the time and resources spent in creating numerous physical models, making the iterative process of design more effective and less costly. Upon successfully navigating virtual tests, the industry then crafts physical prototypes. These need to stand up to the excruciating demands of real-world operating conditions. To ensure they do, companies invest in cutting-edge prototyping techniques like 3D printing for rapid prototyping and complex geometry part production. This is where theoretical models are truly put to the test, evaluated under the most extreme conditions to ensure every potential point of failure is addressed. Manufacturing Mastery: Production and Technology The leap from prototype acceptance to full-scale production involves myriad challenges in aerospace manufacturing. Factories must maximize efficiency, maintain rigorous quality standards, and incorporate the latest technologies to stay relevant. Advanced methods like additive manufacturing enable the production of intricate parts that would be impossible or prohibitively expensive using traditional methods. In addition to embracing 3D printing, manufacturers are refining the use of composites to create lighter, more fuel-efficient aircraft. 
The practicality of these materials comes alongside the struggle to integrate them without disrupting established manufacturing processes. It’s a delicate balance of innovation and pragmatism, testing the industry’s ability to adapt while maintaining an unwavering commitment to quality and precision. Aerospace Giants and the Global Market Titans such as Boeing and Airbus have long dominated the aerospace manufacturing stage, but the competition never ceases. These giants lead the way in technological advancements and market trends, deeply influencing the industry’s global landscape. Their perennial quest for improvement shapes everything from onboard systems to the overall design of aircraft and spacecraft. Furthermore, these industry leaders must constantly evaluate and respond to the demands of both military and commercial aerospace sectors. The commercial market’s growth, enabled by increasing global connectivity, drives the development of new aircraft models, while military contracts push the envelope in technological innovation. This duality shapes the aerospace manufacturing landscape, requiring a sophisticated understanding of distinct yet overlapping needs. Sustainability: The New Frontier Sustainability is perhaps the most significant challenge facing aerospace manufacturers today. As environmental concerns reach a critical mass, the industry is under mounting pressure to develop aircraft that are cleaner and more fuel-efficient. In response, manufacturers are exploring a range of solutions, from biofuels to electric propulsion systems. These efforts are not without their economic implications, but they represent an important investment in the industry’s future. Regulatory frameworks like the European Union Emission Trading Scheme also impose strict limits on emissions, compelling aerospace manufacturers to pursue eco-friendly designs. The quest for greener aircraft is a formidable task, intertwined with the need to sustain profitability, but it is a task that the industry cannot afford to ignore. Innovations in this space are not only ethical imperatives but also opportunities for advancement, driving future competitiveness in a world increasingly attuned to ecological impact. Overcoming Modern Challenges The tangible and metaphorical weight carried by aerospace manufacturers is colossal; they must balance the rigid requirements of regulatory compliance, mitigate supply chain disruptions, and achieve economic feasibility. A single misstep can lead to monumental setbacks, underscoring the need for robust risk management and strategic foresight in this high-stakes industry. The tightrope walk of regulatory compliance demands constant vigilance and adaptability. Aerospace manufacturers must also grapple with the unpredictability of the supply chain—a volatility heightened by recent global events—and the relentless pursuit to control costs without compromising on excellence. It is these challenges that often spur innovation, as companies seek smarter, more resilient ways to operate in an ever-shifting industry landscape. Embracing the Digital Revolution To streamline complex manufacturing processes, aerospace companies are increasingly implementing artificial intelligence and the Internet of Things (IoT). Through AI, manufacturers can optimize production workflows, while IoT devices offer unprecedented monitoring capabilities, ensuring equipment is operating at peak performance and anticipating maintenance needs. 
Harnessing these digital tools, however, comes with its own set of hurdles, notably cybersecurity. As manufacturing becomes more networked, the potential for cyber-attacks grows. Thus, companies must heavily invest in digital defenses, ensuring the integrity and safety of their operations. This digital revolution is not a choice but a necessity as aerospace manufacturing strides into the digitally interconnected age. Globalization and Aerospace Expansion Globalization is transforming aerospace manufacturing, establishing new international centers that redefine economic and competitive landscapes. This shift enables cost-cutting and market expansion but also pressures established players to preserve their lead while safeguarding local jobs. To stay afloat, veterans of the industry must leverage global partnerships yet protect national interests. Continued success in aerospace requires blending innovative research with advanced manufacturing techniques and a deep understanding of the ever-evolving global marketplace. The industry stands at a crossroad of challenges, with the demand for evolution pitted against geopolitical and economic complexities. As aerospace extends its reach, it exemplifies the zenith of human innovation and our collective ambition to advance. Balancing global integration with local advantage will be crucial for the industry’s future, ensuring it remains a powerful force in an interconnected world.
<urn:uuid:a96dbc70-9256-414f-bae5-60162554c3ea>
CC-MAIN-2024-38
https://manufacturingcurated.com/automotive-and-aerospace/navigating-the-future-trends-and-challenges-in-aerospace-manufacturing/
2024-09-09T08:59:54Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651092.31/warc/CC-MAIN-20240909071529-20240909101529-00134.warc.gz
en
0.911752
1,438
3.265625
3
A computer network administrator is responsible for the day-to-day operation of a company's computer network. Depending on the size of the company, there can be more than one network administrator. A computer network administrator is often also called a system administrator. In a smaller company where there is only one network administrator, the duties involved are as follows: If the company is big enough, the network administrator hands off smaller network repairs to the network technician. An administrator usually has his own office and remotely handles server issues and maintenance. Some travel might be required in order to troubleshoot network ports or install network devices.

With the knowledge of a computer network administrator, you can handle many part-time network issues and installations for your own monetary benefit. A lot of small businesses can't afford to pay a network administrator full-time, so they hire part-time consultants. These businesses are your local mom-and-pop shops, law firms and accounting firms. Simple duties include: The great thing is you can remotely handle some of the server issues they encounter and later collect your fee. With a few jobs completed in one week, you will quickly see the high potential income you can earn part-time on your own. Learn how you can start a part-time computer business with little or no out-of-pocket expenses!

The US Bureau of Labor Statistics reports that the median salary in 2010 for this job is $69,160 per year, or $33.25 per hour (www.bls.gov).

Job Growth - Employment is expected to grow 28 percent from 2010 to 2020. Demand is high.

Education - Most companies ask that their candidates have a Bachelor's degree. They would consider candidates with an Associate's degree and one or more of these certifications: MCSE, MCITP and CCNA. If you really want to stand out, consider earning a Bachelor's degree in Computer Networking. Because network technology is continually changing, network administrators should keep up with the latest certifications. Most employers will require you to have several years of experience in networking. If you earn these top certifications, the employer would not require so many years of experience, because they respect these certifications and you would meet the minimum requirements.

The MCSE certification (Microsoft Certified Systems Engineer) covers Windows Server 2000 and 2003. Microsoft recently discontinued it and is offering a new MCSE. This time it covers Windows Server 2012 and it is again called MCSE (Microsoft Certified Solutions Expert). There are two I recommend you consider: A lot of new technology has emerged, like virtualization and cloud computing, so this certification will help you stay on top of the latest technologies. Another highly regarded Microsoft certification is called MCITP (Microsoft Certified IT Professional) and it covers Windows Server 2008. A CCNA would be more of a requirement for a network engineer, who would earn a higher income than an administrator. Armed with all of these certifications and an Associate's or Bachelor's degree, you are guaranteed success.
<urn:uuid:213a5148-01ad-488d-9843-a9d07c13989c>
CC-MAIN-2024-38
https://www.computer-networking-success.com/computer-network-administrator.html
2024-09-09T07:54:02Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651092.31/warc/CC-MAIN-20240909071529-20240909101529-00134.warc.gz
en
0.948285
653
2.6875
3
MOUNTAIN VIEW, Calif. — Microsoft brought its brainiacs to Silicon Valley for a road show highlighting the latest cool stuff. Scientists from Microsoft Research labs in San Francisco and Redmond joined their colleagues at the company's Mountain View, Calif. campus to showcase speculative projects that could someday find their way into products. Researchers are working on everything from a Web services-based model of the universe to sneaky ways to foil spammers. Dan Ling, vice president of Microsoft Research, told an audience of academics, entrepreneurs and business folk that while Research has only a small part of Microsoft's hefty $7 billion R&D budget, most of the company's products are influenced by what it does. For example, the San Francisco lab's statistical analysis of the Web could find its way into the new search technology Microsoft is readying to go up against Google. Jim Gray, a Microsoft Research Distinguished Engineer, said that a yearlong project to produce a statistical characterization of the Web turned up some interesting and useful trends. Microsoft Research tracked 1 billion Web pages for a year, analyzing what had changed and looking for anomalies. By keeping track of how many Internet names mapped to the same IP address or how many other pages linked to a single Web page, the technology seems to be able to identify what Gray called "places you don't want a search engine to go," such as sites identified with pornography or spam. Microsoft researchers Marc Najork, Mark Manasse and Dennis Fetterly published the research and passed the information to the MSN Search team. A new algorithm for finding the shortest route could be used for Microsoft MapPoint.Net, Gray said. In tests, author Andrew Goldberg found it delivered a 20-times improvement in time and memory for the road network of a large state. This improvement could enable shortest-path routing on PDAs. It could be used to offer users real-time advice about traffic congestion or road outages, and it also could enable larger requests, such as driving directions for the shortest cross-country route. A very long-term project, Ling said, is modular data center software, codenamed Boxwood, that could make large-capacity storage and computation systems cheaper by virtualizing storage, distributing the locking and global state to unify the system, and automating provisioning, error detection and reinitializing. "We need to get rid of the idea that with our 1,500 CPUs we're going to have 1,500 different file systems," Ling told internetnews.com. One area where Microsoft Research is helping lead Microsoft is the company's effort to combat spam. "It's of great importance to the Hotmail group which is here in Silicon Valley," Ling said. The stats are alarming: 23 percent of e-mail users say spam has reduced their e-mail use, 76 percent are bothered by offensive or obscene content, and as much as 78 percent of all e-mail is spam. "It's something that needs to be undertaken by the community as a whole. Leading e-mail providers are starting to get together to look at common strategies," he said. Ling also outlined several approaches, including employing machine learning techniques to automatically identify e-mails that look like spam. With millions of Hotmail users participating in helping to train the software, Ling said, the filters can become very effective over time.
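The article does not describe Hotmail's actual filter, but the general technique it alludes to - training a classifier on messages users have labeled as junk - can be sketched roughly as follows. This is a minimal bag-of-words naive Bayes illustration; the training messages and word lists are invented for the example and are not Microsoft's data or code:

```python
from collections import Counter
import math

# Hypothetical training data: (message words, label) pairs such as users might
# generate by marking messages as junk. The examples are invented.
TRAINING = [
    ("win free money now", "spam"),
    ("cheap meds free offer", "spam"),
    ("meeting notes attached", "ham"),
    ("lunch tomorrow with the team", "ham"),
]

def train(examples):
    """Count word frequencies per class for a naive Bayes spam filter."""
    word_counts = {"spam": Counter(), "ham": Counter()}
    class_counts = Counter()
    for text, label in examples:
        class_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, class_counts

def classify(text, word_counts, class_counts):
    """Return the class with the higher log-probability for a message."""
    totals = {c: sum(word_counts[c].values()) for c in word_counts}
    vocab = {w for c in word_counts for w in word_counts[c]}
    best_label, best_logp = None, float("-inf")
    for c in word_counts:
        logp = math.log(class_counts[c] / sum(class_counts.values()))
        for w in text.split():
            # Laplace smoothing so unseen words don't zero out the product.
            logp += math.log((word_counts[c][w] + 1) / (totals[c] + len(vocab)))
        if logp > best_logp:
            best_label, best_logp = c, logp
    return best_label

word_counts, class_counts = train(TRAINING)
print(classify("free money offer", word_counts, class_counts))      # likely "spam"
print(classify("team meeting tomorrow", word_counts, class_counts))  # likely "ham"
```

With millions of users labeling messages, the same idea scales: more labeled mail means sharper per-word statistics and fewer misclassifications.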
Microsoft also is considering "black hole" lists and some form of "postage" that makes it more expensive to send spam, whether that's charging money, making the computer perform a computation or giving senders a test to prove they're human. All of these could make spamming a little less economical. Another project — MindNet — is a semantic network. "Think of it as a bunch of senses of a particular word and relationships between those words," Ling explained. For example, the word "bank" would be linked to different words when it denotes a financial institution than when it refers to the bank of a river.
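MindNet's actual data model is not described in the article, but the idea of word senses connected by typed relations can be illustrated with a toy structure like the one below; the senses and relation lists are invented for the example:

```python
# Toy semantic network in the spirit of the MindNet description: each word
# sense is a node, and typed relations connect senses to other words.
semantic_net = {
    ("bank", "financial_institution"): {
        "is_a": ["organization"],
        "related_to": ["money", "loan", "deposit"],
    },
    ("bank", "river_edge"): {
        "is_a": ["landform"],
        "related_to": ["river", "shore", "erosion"],
    },
}

def related(word: str, sense: str) -> list:
    """Look up the words associated with one particular sense of a word."""
    return semantic_net.get((word, sense), {}).get("related_to", [])

print(related("bank", "financial_institution"))  # ['money', 'loan', 'deposit']
print(related("bank", "river_edge"))             # ['river', 'shore', 'erosion']
```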
<urn:uuid:fce834f9-7c0e-4a48-b15d-db15e40e56b2>
CC-MAIN-2024-38
https://www.datamation.com/erp/inside-microsofts-next-big-thing/
2024-09-09T08:08:49Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651092.31/warc/CC-MAIN-20240909071529-20240909101529-00134.warc.gz
en
0.935689
847
2.625
3
A username is a special name given to a person to uniquely identify them on a computer network. Also called account names, login IDs, or user IDs, usernames are either assigned by the network administrator or selected by the user. In either case, a username may not be shared by more than one person, or it would defeat its intended purpose of distinguishing one user from another. Email is the archetypal example of an account that cannot be shared, which is why a person's email address often serves as a default or mandated username. In conjunction with a shared secret between the person and the online service, usernames are part of a legacy authentication scheme first conceived by the late Fernando J. Corbató. One of the fathers of computer science, Corbató developed the username/password scheme at the MIT Computation Center in the 1960s to support multiple users on a shared system. "Username/password is the dominant means of authenticating people to an online service—for now. Usernames will survive the evolution away from this security protocol and are most commonly a person's email address, since those are unique and not shared with other users."
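As a rough illustration of the uniqueness requirement described above, a registration routine typically normalizes the submitted name and rejects it if another account already holds it. This is a minimal sketch rather than any particular service's implementation; the in-memory set stands in for whatever user directory a real system would query:

```python
class UserDirectory:
    """Toy user store illustrating why usernames must identify exactly one person."""

    def __init__(self):
        self._usernames = set()

    @staticmethod
    def normalize(name: str) -> str:
        # Most services treat usernames (and email addresses) case-insensitively.
        return name.strip().lower()

    def register(self, username: str) -> bool:
        name = self.normalize(username)
        if name in self._usernames:
            return False  # already taken: would no longer distinguish one user
        self._usernames.add(name)
        return True

directory = UserDirectory()
print(directory.register("Alice@example.com"))   # True
print(directory.register("alice@example.com"))   # False - duplicate identity
```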
<urn:uuid:88c1824d-eb0d-415b-99de-a53851c2c3e6>
CC-MAIN-2024-38
https://www.hypr.com/security-encyclopedia/username
2024-09-10T13:09:46Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651255.81/warc/CC-MAIN-20240910125411-20240910155411-00034.warc.gz
en
0.938212
248
3.515625
4
Weather fluctuations can make humidity levels highly unpredictable, so monitoring the humidity of your facilities is necessary for maintaining network stability and preventing failures. In warmer temperatures, humidity typically increases as the air is able to hold more moisture. The excess moisture can permeate into hardware systems and in certain conditions condense. This can short-circuit expensive electronic components or - at least - create spurious, unexpected connections. On the other hand, low humidity can dry out seals and other protective elements. It can also increase the chances of static electricity discharge, with possible catastrophic effects on sensitive electronic circuits. These issues can lead to substantial repair or replacement costs if humidity is not monitored and controlled. Alternatively, problems can be reduced or eliminated by careful remote monitoring and control using humidity sensors. A humidity sensor is a device that provides ongoing measurements of the air's relative humidity. It works by running an electrical current between two nodes and monitoring the voltage produced as the air's moisture changes. Humidity sensors are a type of telemetry sensor. They are important for remotely monitoring and controlling humidity levels and for ensuring the safety of your electronic infrastructure. Purchasing a quality humidity sensor will quickly improve the safety of your network. Depending on the needs of your business, humidity sensors come in a lot of different shapes and sizes - sensors can be large or small, internal or external, and stand-alone or daisy-chained. Some sensors have external probes for reaching specific locations, while others are contained inside small boxes for ambient monitoring. When selecting a humidity sensor, be sure to look for sensors that can adapt to your environment and provide you with reliable and easy ways to monitor your infrastructure. Humidity sensors are the easiest solution for preventing humidity damage. They can automatically monitor your equipment, allowing you to focus on the important areas of your operations. Ultimately these sensors provide peace of mind - monitoring your network's equipment and ensuring the quality of your overall network. Humidity sensors report moisture levels in two different ways: digital (also called discrete) or analog. Discrete sensors report digital information in the form of "on" or "off" values, while analog sensors collect live readings of environmental conditions and report them as real values instead of just on/off states. Digital sensors are able to monitor conditions for operation within a specified range. When the conditions pass outside the monitored range, the sensor closes a contact (sends an alert). When the conditions return inside the monitored range, the sensor opens the contact (clearing the alert). The signal is reported as a binary (0/1; on/off; open/close; high/low) type of input. A good use case would be when your systems are hosted remotely. For the sake of argument, let's imagine you're hosting your humidity monitoring on a remote island. In addition, you want to keep the humidity below a 50% threshold, so once the humidity reaches 50%, your digital controller will automatically turn on your dehumidifier.
If there's a problem - extreme heat, say, and the dehumidifier doesn't completely bring the humidity back down - a digital sensor gives you no further information and no further options for protecting your equipment. Analog sensors are more advanced and provide continuous visibility into current conditions. An analog sensor can monitor humidity within a set range, providing you with an exact and continuous percentage. A single analog sensor can also perform the same threshold functions as a digital sensor, but at multiple set points. Using the previous example, imagine that you swapped in an analog sensor. Using a control relay output on the RTU, your analog humidity sensor could turn on that same dehumidifier at 50%, but could also turn on a second dehumidifier at 60%. In addition, an analog humidity sensor would allow you to view the exact percentage of humidity at the site (for instance 50%, 51%, 52%, etc.), whereas a digital sensor would only tell you that the humidity is above 50%. Analog sensors allow you to assess the complete situation and respond more efficiently. If you want continuous reporting of detailed humidity levels, an analog sensor is the best option for you. However, if you are considering analog sensors, keep in mind that - although they are generally reliable and get the job done - they definitely have some downsides. For starters, they must be wired into on-site power to operate. This is fine for one or two sensors, but any more and you will be spending a lot of time wiring. In addition, they generally require +12 VDC or +24 VDC, which may not be available at your location. In order to supply power to each unit, you need to put a voltage converter between your sensor and your rectifier - thus adding more wiring. Once you have the analog sensor powered, you need to wire it to your RTU using one of your limited analog inputs. One sensor might be OK, but if you need multiple sensors, your analog inputs will quickly be used up, leaving no room to monitor anything else. When you plug in the sensor, the RTU can sense incoming electrical signals, but it will not understand what type of unit it is. In other words, it won't be able to differentiate between a temperature or humidity sensor, nor see whether it is outputting voltage (0-5V) or current (4-20mA). Additionally, you have to scale the device. Scaling is the process of telling the RTU how to convert the voltage input to a human-readable output. For example, your analog sensor will come with a preset scale, usually noted on the side of the device. It may state that a value of 0 volts = 5 degrees F and that 4.7 volts = 100 degrees F. If the sensor reports a value of 2.8V, the RTU uses the scale to convert that to roughly 61.6 degrees. Scaling is a necessary step with analog sensors. While the method is very accurate, it's still an extra step, and it can be confusing if you haven't set it up before. While traditional analog sensors are a good option, forward-looking manufacturers are moving towards sensors that require only one wire. These units are a better option for several reasons. First, they only require one wire to transport both data and power. This lessens the number of wires needed and removes the clunky wall transformer units. Imagine being able to quickly and effortlessly plug the sensor into your RTU and have it function right away, straight out of the box. You only need a single, common RJ-12 cable to install it - no specialized cable or expensive equipment. Plus, you can easily crimp new RJ-12 cables in the field for quick repairs.
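To make the scaling step described above concrete, here is a rough sketch of the linear conversion an RTU performs; the numbers come from the example scale, and a real RTU does this internally rather than in user code:

```python
def scale_reading(volts: float,
                  v_min: float = 0.0, v_max: float = 4.7,
                  out_min: float = 5.0, out_max: float = 100.0) -> float:
    """Linearly map a raw analog voltage to a human-readable value.

    Defaults use the example scale from the text: 0 V = 5 degrees F,
    4.7 V = 100 degrees F. The same math applies to a 0-100% RH scale.
    """
    fraction = (volts - v_min) / (v_max - v_min)
    return out_min + fraction * (out_max - out_min)

print(round(scale_reading(2.8), 1))  # ~61.6 degrees F on this scale

# The same idea drives multi-stage thresholding with an analog sensor:
humidity = scale_reading(2.6, out_min=0.0, out_max=100.0)  # interpret as % RH
if humidity >= 60:
    print("turn on second dehumidifier")
elif humidity >= 50:
    print("turn on first dehumidifier")
```

Part of the appeal of the one-wire sensors described above is that they work straight out of the box, without this manual scaling setup.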
Since these sensors are easy to install, you won't have to spend hours training techs on how to incorporate them into the network. Best of all, they boast the same, if not better, reliability when compared with traditional analog sensor. You don't have to sacrifice quality for ease of use. Our line of sensors is called D-Wire. There's a variety of sensor types. They provide easy-to-use, easy-to-install functionality and reliability. One simple RJ-12 cable connects the D-Wire sensors to your RTU for monitoring of critical environmental levels. To conserve ports in your unit, up to 16 D-Wire sensors (with a maximum of 600 feet) can be daisy-chained through one port. No more using up valuable analogs to monitor each sensor individually. Provisioning, or editing, the sensors is easy to do as well. Each D-Wire sensor is uniquely identifiable by the RTU. It can sense the unit and distinguish what type of device it is. This means that, if a sensor becomes unplugged, the RTU will detect this and alert the appropriate person. You can monitor humidity, as well as temperature, using the D-Wire Temperature and Humidity combination sensor. This device can accurately report the live-analog values for temperature (+/- 2 degrees F) and humidity (+/0 4% RH). No matter which type of sensor you choose, an RTU is the most important part of your humidity monitoring system. Humidity sensors, which are a fundamental node of your alert topology, are typically monitored continuously by an RTU - short for remote telemetry unit. The RTU takes the input from the analog or digital devices and uses that information to perform advanced monitoring functions. The collected data is processed locally by "intelligent" RTUs for thresholding and notification, forwarded to alarm master systems for similar processing or both. In other words, the RTU is responsible for notifying other equipment in real-time to respond to changing humidity levels. If you integrate your analog sensor with an efficient RTU, you can view the monitored value from anywhere in the world. Quality RTUs can process the collected data locally against multiple threshold values and generate a binary signal when a threshold is crossed. Importantly, the RTU should report not just the generated binary signal but also the monitored analog value to the master station. In larger systems, it's sometimes easier to administer analog thresholding at a centralized master rather than distributed to each remote throughout the system. The most advanced RTUs even provide email, text, and voice call notifications to alert you of emergency situations. Because there are so many different types of RTUs out there, choosing the right unit can be difficult. In my experience, it's best to look for RTUs that will work with your current situation but also provide functionality for your long-term goals. Try to find an RTU that is a perfect mesh of integration, interface, and size. In addition, watch out for large manufacturers who use cheap plastic cases and fail to individually test each of their products. When selecting an RTU, quality is much more important than quantity. The RTU is the brain of your humidity sensor so it's critical to choose a reliable and efficient one. No matter which humidity sensor you choose, be sure to consider what is best for your system. The NetGuardian 832 is a popular choice if you are looking for a medium-sized unit with SNMP protocol and all of the advanced functions. The NetGuardian supports customized alerts (through text, email, voice call, etc.) 
and provides an integrated web interface (via your web browser) to view your equipment from any location. In addition, this RTU is a nice option because it's quality tested, created with a durable metal case, and customized to fit the exact requirements of your system. The bottom line is that the quality of your humidity monitoring system will enhance the overall efficiency of your network. This means that, by choosing the right humidity meter, you will instantly safeguard your network and secure your revenue stream. With so many different selections available, it can be difficult to choose the right options for your business. Here are several qualities to look at when deciding on the best humidity monitor: When it comes to managing the status of your systems - and therefore protecting the quality of your network - you need to have up-to-date readings on the condition of your remote sites. Having the ability to easily access this information from any location is necessary for saving you time and money. Finding a humidity monitor that updates your information within seconds, through several easily-accessible interfaces, is the most efficient way to manage and protect your business. Advanced humidity monitoring systems offer several options for accessing your remote site data. Some systems provide a menu-based interface, operating from software installed directly on your computer. Other systems offer web-browser interfaces that are accessible from any computer with internet access (through an IP address). The most advanced forms of these interfaces provide detailed geographic information, including alarm locations, history, and repair analysis. Your humidity meters needs to provide you with important updates on the condition of your network. Due to climate, weather, and natural disasters, levels of humidity can rapidly fluctuate and destroy your infrastructure. With customized alert notifications, you can choose the conditions of your alerts and instantly stay informed on important developments in the status of your systems. Look for RTUs that provide a wide range of options, including emails, or voice, and text messages sent directly to your cell phone. If humidity levels rapidly rise, a custom alert notification could be the only solution for quickly notifying you of your site's status and helping to save your network devices. Without adequate sensor coverage, you could incidentally neglect an area of your infrastructure and sustain devastating network losses. The most advanced sensors cover a wide range of temperatures, come in different lengths of probes, and can be chained together to cover all of your site. Look for humidity sensors that can adapt to your environment and provide complete coverage for all of your hardware. As I said before, no matter what you choose, whether analog or discrete sensor, a good RTU can make all the difference. If you are unsure on what type of sensor to decide on or plan on using both digital and analog, many RTUs contain ports for both analog and discrete inputs. The NetGuardian 832 has room for 8 analog alarms and 32 discrete alarms with the ability to expand to 176. The majority of humidity sensors run off of power, with a protected battery plant connected to the RTU and other critical equipment in case of unexpected power failure. But often, these devices fail to integrate the back-up energy support within each individual sensor. When the power fails, all sensors that are connected to the main power supply also stop working. 
The most advanced humidity systems connect their individual humidity sensors to emergency battery support via the RTU, ensuring that your sites are constantly covered even in the direst energy situations. Always choose humidity sensors that have reliable back-up power sources in order to ensure continued monitoring for all your remote site. Many humidity monitoring devices are limited in function. Advanced devices are capable of integrating multiple monitoring technologies into a single unit. For example, depending on your business, it may be more efficient and cost-effective to look for a technology that integrates temperature sensors into your humidity monitoring system. However, depending on your RTU, it becomes easy to incorporate multiple alarm systems for things such as doors, battery voltage, AC power, climate, smoke, floor water, fuel tanks, and more. No matter what product you choose, integrating monitoring technologies is an effective way to maximize your network protection. With increasing global temperatures and unexpected weather fluctuations, humidity monitoring is becoming increasingly relevant. If you operate a business that uses electronic devices, humidity could pose a significant risk to your network's safety. By ensuring that you are always informed on the condition of your infrastructure, you can feel confident with your system's security and begin focusing on other challenges of growing your business. We have more than 30 years of experience working with remote monitoring, that's simply all we do. As experts in network monitoring, we've dealt with many different situations and we've helped multiple different companies achieve their humidity monitoring goals. And we can help you too. So don't leave your vital gear unprotected any longer. Contact us today and let's discuss a perfect-fit monitoring solution for your scenario.
<urn:uuid:e849c143-2b0f-4e6d-a8cf-f75122cd2937>
CC-MAIN-2024-38
https://www.dpstele.com/blog/how-to-plan-a-humidity-monitoring-system.php
2024-09-12T22:04:02Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651498.46/warc/CC-MAIN-20240912210501-20240913000501-00734.warc.gz
en
0.928076
3,170
3.09375
3
What's going on: Creating a loop with a paper clip may seem silly, but it's actually a great troubleshooting tool. By looping the transmit and receive pins, you can test serial port communication by checking whether the port's connections transmit and receive valid information. This is called a loopback test, and it can be used to test RS-232 communication. You don't need port-testing software; a loopback is a simple, reliable way to check an RS-232 port. When you're setting up any kind of serial communication device, you'll probably have to troubleshoot data port activity at some point. In our work with remote monitoring and control systems, DPS Tech Support reps frequently help our clients use loopback testing to troubleshoot RTUs that aren't reporting alarms correctly. Shorting pins can confirm that your device and its data port are working properly. With that established, you can move on to testing cables, protocols, and other equipment in your system. A screwdriver is the right tool when you have an open connector with pins on 2 sides. A slot/flat screwdriver head will usually be the right width to bridge pins. You can even achieve a slight diagonal if you need to connect 2 pins that are not directly across from one another. The classic example of this kind of port in remote monitoring is the Amphenol connector. It has 50 pins, and each discrete alarm (contact closure) input is a pair of contact pins across from one another. It's very simple to use a screwdriver to short a pair of pins together to test your RTU's inputs during diagnostics. A paper clip takes more time than a screwdriver, but it has much more versatility because it can be bent. You can use a paper clip to short 2 female pin sockets on a DB9 serial port, and you can connect virtually any 2 pins on a 50-pin Amphenol connector. You'll simply need to bend the paper clip into an appropriate shape and insulate it from your hands with a napkin or some other insulator. These tips are intended to be used only on data ports, where the electrical flow is minimal. You should always use industry-standard safety procedures when working with any electrical equipment. This is never more important than when you're operating on power-input circuits. Discussion of electrical safety techniques is beyond the scope of this article, but understand that there are risks whenever you work with electricity.
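Once the TX and RX pins are physically bridged, the software side of a loopback check is simple: whatever you write to the port should come straight back. Here is a rough sketch using the pyserial library; the port name and baud rate are placeholders you would adjust for your own system:

```python
import serial  # pyserial: pip install pyserial

PORT = "/dev/ttyUSB0"   # placeholder - e.g. "COM3" on Windows
BAUD = 9600

def loopback_test(port: str = PORT, baud: int = BAUD) -> bool:
    """Write a known pattern and confirm it is echoed back by the shorted pins."""
    pattern = b"LOOPBACK-TEST\r\n"
    with serial.Serial(port, baud, timeout=2) as conn:
        conn.reset_input_buffer()
        conn.write(pattern)
        echoed = conn.read(len(pattern))
    return echoed == pattern

if __name__ == "__main__":
    print("port OK" if loopback_test() else "no echo - check wiring or port")
```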
<urn:uuid:f4db1435-3271-4340-a47f-ab2f8db07f86>
CC-MAIN-2024-38
https://www.dpstele.com/rtu/support/troubleshoot/screwdriver-test.php
2024-09-12T22:55:28Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651498.46/warc/CC-MAIN-20240912210501-20240913000501-00734.warc.gz
en
0.93755
538
2.625
3
Cache memory plays a key role in computing and data processing. After all, it's a critical component all modern computer systems use to store data for fast and easy access. Everything from desktop PCs and data centers to cloud-based computing resources uses a fast static random-access memory (SRAM), also called a cache memory, that works alongside the central processing unit (CPU). While the fast performance of a computer is oftentimes credited to its RAM capacity or processor, cache memory has a huge and direct impact on the overall performance of the device. This article will help construct a deeper understanding of how cache memory works, its various types, and why it's critical for the smooth operation of a computer system. How Cache Memory Works Some computers include an SSD cache, often called flash memory caching. It's used to temporarily store data until permanent storage is able to handle it, which boosts your device's performance. However, since the CPU, the component responsible for pulling and processing information, works considerably faster than the average RAM, users may be forced to wait while it attempts to read incoming instructions from the RAM. This results in reduced performance and speed. Cache memory is the solution to prevent this from happening. This is accomplished by installing a small-capacity but fast SRAM close to the CPU. The SRAM takes data or instructions at certain memory addresses in RAM and copies them into the cache memory temporarily, along with a record of the original address of those instructions or data. As a result, the CPU no longer has to wait, which is why caching is used to increase read performance. However, since the cache memory of any given device is small in relation to its RAM and CPU computing power, it can't always hold all of the necessary data. Depending on which scenario ends up happening, a lookup results in what's referred to as a "cache hit" or a "cache miss." Memory Cache Hit When the CPU goes to read instructions from the cache memory and finds the corresponding information, this is known as a cache hit. Since the cache memory is faster and closer in proximity to the CPU, it ends up being the one to provide the data and instructions to the CPU, allowing it to begin processing. In the event of a cache hit, the cache memory acts as a high-speed intermediary and queue between the CPU and the main RAM. Writes that need to reach the RAM must first go through the cache memory until the RAM is ready to accept them. That way, the CPU isn't slowed down waiting for the RAM's response. There are multiple ways this correspondence occurs depending on the cache memory's write policy. One policy is known as "write-through." This is the simplest and most straightforward approach, in which anything written to the cache memory is also written to the RAM. A "write-back" policy, by contrast, does not immediately write data to the RAM; anything written to the cache memory is marked as "dirty" for its duration. This signals that the data differs from the original data or instruction pulled from the RAM. Only when it is evicted from the cache memory will it be written to RAM, replacing the original information. Some intermediate write policies allow "dirty" information to be queued up and written back to the main RAM in batches, which is more efficient than issuing many individual writes.
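As a rough sketch of the two write policies just described (a toy model for illustration, not how the hardware is actually implemented; a plain dict stands in for RAM):

```python
class WriteThroughCache:
    """Every write goes to the cache and straight through to backing memory."""
    def __init__(self, ram: dict):
        self.ram, self.lines = ram, {}

    def write(self, addr, value):
        self.lines[addr] = value
        self.ram[addr] = value          # RAM is always up to date


class WriteBackCache:
    """Writes stay in the cache, marked dirty, until the line is evicted."""
    def __init__(self, ram: dict):
        self.ram, self.lines, self.dirty = ram, {}, set()

    def write(self, addr, value):
        self.lines[addr] = value
        self.dirty.add(addr)            # RAM is now stale for this address

    def evict(self, addr):
        if addr in self.dirty:
            self.ram[addr] = self.lines[addr]   # write back only on eviction
            self.dirty.discard(addr)
        self.lines.pop(addr, None)
```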
Memory Cache Miss If the CPU goes to read information or instructions on the cache memory but is unable to find the required data and has to resort directly to the hard drive and RAM, this is known as a cache miss. This reduces the speed and efficiency of the device’s processing, as it now has to operate according to the speeds of the RAM and hard drive. Afterward, when the required information or instructions are successfully retrieved from the RAM, they first get written to the cache memory before they’re sent off to be processed by the CPU. That happens primarily because data or instructions that have been recently used by the CPU, are likely to still be of importance and need to be accessed again shortly. Writing it to the cache memory saves the CPU from having to return to the RAM or hard drive a second time to retrieve the same data. On rare occasions, some data types can be marked as non-cacheable. This is to prevent valuable cache memory space from being occupied by unnecessary data, even if it has been retrieved manually by the CPU from the RAM or hard drive. Cache Memory ‘Eviction’ A cache memory’s storage capacity is minuscule compared to RAM and hard drives. While RAMs can range between 2GB and 64GB and hard drives reach 1TB to 2TB on average consumer devices, a cache memory’s capacity measures between 2KB and a few megabytes. This stark difference in storage capacity would mean that sometimes cache memories get full when the CPU still needs to pull in information. “Eviction” is a process that removes data from the cache memory to free up space for information that needs to be written there. What data is going to be evicted is determined through a “replacement policy” depending on the most in-use and important information. There are a number of possible replacement policies. One of the most common ones is a least recently used (LRU) policy. Based on this policy, if data or instructions have not been used recently, then they are less likely to be required in the immediate future than data or instructions that have been required more recently. 2 Types of Cache Memory Cache memory is divided into two categories depending on its physical location and proximity to the device’s CPU. - Primary Cache Memory: The primary cache memory, also known as the main cache memory, is the SRAM located on the same die as the CPU, which is as close as it can be installed. This is the type generally used in the storage and retrieval of information between the CPU and the RAM. - Secondary Cache Memory: The secondary cache memory is the same hardware as the primary cache memory. However, it’s placed further away from the CPU, ensuring the existence of a backup SRAM that can be reached by the CPU whenever needed. Despite being accurate terms to describe a cache memory based on its physical location in the system, neither is used nowadays. This is because modern cache memories can be manufactured small enough and with sufficient capacities to be placed on the same die as the CPU with no issue. Instead, modern cache memories are referred to by level. 3 Level of Cache Memory Modern computer systems have more than one piece of cache memory, and these caches vary in size and proximity to the processor cores and, therefore, in speed. These are known as cache levels. The smallest and fastest cache memory is known as Level 1 cache, or L1 cache, and the next is the L2 cache, then L3. Most systems now have an L3 cache. 
Since the introduction of its Skylake chips, Intel has added L4 cache memory to some of its processors as well. However, it's not as common. Level 1 Cache Level 1 cache is the fastest type of cache memory since it's embedded directly into the CPU itself, but for that same reason, it's highly restricted in size. It runs at the same clock speed as the CPU, making it an excellent buffer for the RAM when requesting and storing information and instructions. L1 cache tends to be divided into two parts, one for instructions (L1i) and one for data (L1d). This supports the different fetch bandwidths used by processors, as most software tends to require more cache for data than for instructions. The latest devices have a per-core L1 capacity of 64KB: 32KB of L1i and 32KB of L1d. In a quad-core processor, this adds up to 256KB of L1 cache memory. Level 2 Cache Level 2 cache is oftentimes also located inside the CPU chip, but farther from the core than L1. L2 caches are considerably less expensive than their L1 counterparts, are larger in size and capacity, and can be anywhere from 128KB to 8MB per core. In some cases, L2 cache memories are implemented on a separate processing chip, also known as a co-processor. Level 3 Cache Level 3 cache memory, sometimes referred to as last-level cache (LLC), is located outside of the CPU cores but still in close proximity. It's much larger than the L1 and L2 caches but is a bit slower. Another difference is that L1 and L2 cache memories are exclusive to their processor core and cannot be shared. L3, on the other hand, is available to all cores. This allows it to play an important role in data sharing and inter-core communication and, in some cases depending on the design, in sharing data with the cache of the graphics processing unit (GPU). As for size, L3 cache on modern devices tends to range between 10MB and 64MB, depending on the device's specifications. What is Cache Mapping? Since cache memories are incredibly fast and continue to get larger alongside the requirements of software computing processes, there needs to be a system for retrieving the needed information. Otherwise, the CPU could end up wasting time searching for the right instruction in the cache instead of actually processing it. The processor knows the RAM address of the data or instruction that it wants to read. It has to search the memory cache to see if there is a reference to that RAM address in the memory cache, along with the associated data or instruction. There are numerous approaches for mapping the data and instructions pulled from the RAM to cache memory, and they tend to prioritize some aspects over others. For instance, minimizing the search time makes the lookup less exhaustive and reduces the likelihood of a cache hit. Meanwhile, maximizing the chances of a cache hit also increases the average search time. Depending on the various levels of compromise between speed and accuracy, there are three types of cache mapping techniques. Direct Cache Mapping Direct cache memory mapping is the simplest and most straightforward technique. With this approach, each memory block is assigned a specific line in the cache, as determined by the RAM address. Based on this, the CPU only has to search this single location to check whether the needed information is available or not. However, if the information is not stored in that exact location, the CPU marks it as a cache miss and proceeds to pull the information directly from the RAM.
Direct mapping is highly inefficient, especially with devices with higher specs and larger flows of data and instructions into the cache memory and CPU. Associative Cache Mapping Associative cache mapping is the exact opposite of the direct approach. When pulling data and instructions from the RAM, any block can go into any line of the memory cache, randomizing its location. When the CPU searches for specific information, it has to check the entire cache memory to see if it contains what it’s looking for. This approach yields a high rate of cache hits, and the CPU rarely resorts to retrieving information directly from the RAM. However, whatever time is saved by only communicating with the cache memory is wasted searching through all of its lines every time an instruction is needed. Set-Associative Cache Mapping Set-associative cache mapping is a way to compromise between direct and associative cache mapping, aiming to maximize events of a cache hit whilst minimizing the average search time per request. For this to work, each data block pulled from the RAM is only allowed to be mapped to a limited number of different cache memory blocks. This is also known as N-way set-associative cache mapping. For each point of information or instruction, there is an N number of blocks where it can be mapped and later found by the CPU. A 2-way set-associative mapping system gives the RAM the option to place data in one of two places in the cache memory. In this scenario, the cache hit likelihood increases, but the average search time doubles, as the CPU would need to check twice as many potential blocks. A 4-way set-associative mapping system gives the RAM four potential mapping blocks, an 8-way mapping system provides eight potential mapping blocks, and a 16-way mapping system offers 16 variations. The higher the value of N, the higher the chances of a cache hit but the longer the average data block recovery time for the CPU will be. The value of N is adjusted depending on the device and what it’s going to be used for, opting for a balanced ratio of time to cache hits. 3 Examples of How Cache Memory is Used The use of cache memory isn’t only essential for on-premises and personal devices. It’s also used heavily in data center servers and in cloud computing offerings. Here are a few examples highlighting cache memory solutions. Beat is a ride-hailing application based in Greece with over 700,000 drivers and 22 million active users globally. Founded in 2011, it’s now the fastest-growing app in Latin America, mainly in Argentina, Chile, Colombia, Peru, and Mexico. During its hypergrowth period, Beat’s application started experiencing outages due to bottlenecks in the data delivery system. It had already been working with AWS and using the Amazon ElastiCache, but an upgrade in configuration was desperately needed. “We could split the traffic among as many instances as we liked, basically scaling horizontally, which we couldn’t do in the previous solution,” said Antonis Zissimos, senior engineering manager at Beat. “Migrating to the newer cluster mode and using newer standard Redis libraries enabled us to meet our scaling needs and reduce the number of operations our engineers had to do.” In less than two weeks, Beat was able to reset ElastiCache and reduce load per node by 25–30%, cut computing costs by 90%, and eliminate 90% of staff time spent on the caching layer. Sanity is a software service and a platform that helps developers better integrate content by treating it as structured data. 
With headquarters in the U.S. and Norway, its fast and flexible open-source editing environment, Sanity Studio, supports a fully customizable user interface (UI) and live-hosted data storage. In order to support the real-time and fast delivery of content to the developers using its platform, Sanity needed to optimize its distribution. Working with Google Cloud, it turned to Cloud CDN to cache content in various servers located close to major end-user internet service providers (ISPs) globally. “In the two years we’ve been using Google Kubernetes Engine, it has saved us plenty of headaches around scalability,” said Simen Svale Skogsrud, co-founder and CTO of Sanity. “It helps us to scale our operations to support our global users while handling issues that would otherwise wake us up at 3 a.m.” Sanity is now able to process five times the traffic using Google Kubernetes Engine and support the accelerated distribution of data through an edge-cached infrastructure around the globe. SitePro is an automation software solution that offers real-time data capture at the source through an innovative end-to-end approach for the oil and gas industries. Based in the U.S., it’s an entirely internet-based solution that at one point controlled nearly half of the richest oil-producing area in the country. To maintain control over the delicate operation, SitePro sought the help of Microsoft Azure, employing Azure Cache for Redis, and managed to boost its operations. Now, SitePro is in a position to start investing in green tech and more environmentally-friendly services and products. “Azure Cache for Redis was the only thing that had the throughput we needed,” said Aaron Phillips, co-CEO at SitePro. “Between Azure Cosmos DB and Azure Cache for Redis, we’ve never had congestion. They scale like crazy. “The scalability of Azure Cache for Redis played a major role in the speed at which we have been able to cross over into other industries.” Thanks to Azure, SitePro was able to simplify its architecture and significantly improve data quality. It has now eliminated the gap in timing at no increase in costs. Read more: 5 Top Memory Management Trends Speed Up Device Processes With Cache Memory Cache memory is a type of temporary storage hardware that allows the CPU to repeatedly retrieve information and instructions without having to resort to RAM or hard disk. For computers and servers, they’re built inside and as close as physically possible to the device’s core to reduce processing times. Cache memories are categorized by level depending on their proximity to the device’s CPU into L1, L2, and L3 respectively. The average device contains multiples of each level of cache per core, and the greater their capacity, the faster the device’s computing. Speed is also heavily reliant on the cache memory’s mapping technique and whether it prioritizes time or information search accuracy. There are three techniques: direct mapping, associative mapping, and set-associative mapping.
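To tie the mapping and eviction ideas above together, here is a toy model of an N-way set-associative cache with LRU replacement; the set count and associativity are arbitrary illustration values, not any particular CPU's geometry:

```python
from collections import OrderedDict

class SetAssociativeCache:
    """Toy N-way set-associative cache with LRU eviction (not cycle-accurate)."""

    def __init__(self, num_sets: int = 64, ways: int = 4):
        self.num_sets, self.ways = num_sets, ways
        self.sets = [OrderedDict() for _ in range(num_sets)]  # addr -> data per set

    def _set_index(self, address: int) -> int:
        return address % self.num_sets      # which set this address may live in

    def read(self, address: int, ram: dict):
        cache_set = self.sets[self._set_index(address)]
        if address in cache_set:             # cache hit: at most `ways` entries checked
            cache_set.move_to_end(address)   # mark as most recently used
            return cache_set[address]
        data = ram[address]                  # cache miss: fall back to RAM
        if len(cache_set) >= self.ways:
            cache_set.popitem(last=False)    # evict the least recently used line
        cache_set[address] = data
        return data

ram = {addr: f"data@{addr}" for addr in range(1024)}
cache = SetAssociativeCache()
print(cache.read(3, ram), cache.read(3, ram))  # first a miss, then a hit
```

Setting `ways=1` turns this into direct mapping, while making `ways` equal to the total number of lines approximates a fully associative cache, mirroring the trade-off described above.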
<urn:uuid:ec402ce1-68fc-43f1-bfaa-a4bbc1cd8dc5>
CC-MAIN-2024-38
https://www.enterprisestorageforum.com/hardware/cache-memory/
2024-09-12T21:51:08Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651498.46/warc/CC-MAIN-20240912210501-20240913000501-00734.warc.gz
en
0.946981
3,653
4.25
4
The spectacular rise of cryptocurrency is impossible to ignore. Global reaction ricochets from excitement to apprehension to complete incomprehension. In the wake of Bitcoin’s recent surge, governments, investors and laypeople alike are grappling to understand the real potential of cryptocurrency. Despite the inundation of news and information, it remains, in the eyes of many, a mysterious phenomenon. For many people, the word ‘blockchain’ results in a brain fog and gets mentally filed as gibberish. This lack of general knowledge leads to doubts about the value of cryptocurrency and the technology that supports it. Cryptocurrency is widely misunderstood and particularly prone to myths and misconceptions. That said, it is here to stay – so everyone needs to understand its pitfalls and potential. It does offer real advantages over traditional money, but buying and trading cryptocurrencies successfully requires knowledge. To clear up some of the current confusion, here are some of the top misconceptions about cryptocurrency – debunked. 1. It’s all about Bitcoin This is definitely not true. There are hundreds of cryptocurrencies available, some of which have the potential to rival Bitcoin. In fact, relative newcomers such as Ether and Monero are already proving popular and may provide a real alternative. Another important cryptocurrency to look out for is Litecoin, which has shown impressive growth against Bitcoin since early 2017. Bitcoin may be the world’s first decentralised cryptocurrency, but it certainly is not the only one making headlines. In many cases, newer cryptocurrencies have been designed specifically to address the perceived limitations and flaws of Bitcoin. 2. Bitcoin transactions are completely anonymous An anonymous transaction means that the user’s true identity remains unknown. A private transaction doesn’t necessarily shield the user’s identity, but it can hide what was bought and for how much. Bitcoin trades are considered pseudonymous, rather than anonymous. The Bitcoin protocol doesn’t record identities, but the blockchain (a distributed electronic public ledger) stores a record of every single transaction and makes them all completely visible. This anonymity is, on the one hand, very appealing for some users who are concerned with privacy and, on the other, incredibly frustrating for financial institutions. Methods such as transaction graph analysis mean Bitcoin’s transparency can be leveraged to link trades with specific addresses. 3. Cryptocurrency is only used by criminals Not too long ago, Bitcoin was closely associated with the dark web and criminal transactions. People believed that, thanks to its anonymity, anyone could buy or sell whatever they wanted using Bitcoin and never get caught if they broke the law. This all changed in February 2015 when the founder of Silk Road, Ross Ulbricht, was sentenced to life in prison. Silk Road, a marketplace that accepted Bitcoin, facilitated the sale of $1 billion in illegal drugs before being shut down. Since then, more shady characters have been caught and prosecuted for embezzlement, fraud and laundering. The reality, however, is that most Bitcoin users are law-abiding people who are curious about cryptocurrency, concerned with privacy and hoping to make some money. Yes, digital money can keep illegitimate transactions secret, but many of the most popular cryptocurrency options create a data trail that – with enough forensic intelligence – can reveal an entire financial history and expose the bad guys. 4. 
Cryptocurrency trading requires technical expertise The mark of any widely successful technology is that it’s easy to use. If cryptocurrencies were tricky to buy, sell and use, then they wouldn’t be quite so popular. >See also: The best Bitcoin apps of 2017 That said, embarking on something so completely new can be disconcerting. Fortunately, the ‘wild west’ days of cryptocurrency are settling, and users can now sign up with a safe and secure exchange. What’s more, educational resources are increasingly available, helping people understand the seemingly incomprehensible world of cryptocurrencies and getting them up to speed quickly. 5. You can only buy a whole coin at a time You can buy as much, or as little as you want. One Bitcoin, for example, can be divided up into 100,000,000 units. This makes transactions accessible for everyone. You also don’t have to invest your entire savings all at once – or all in one place. In case you were wondering, each unit of Bitcoin is called a ‘Satoshi’ after the cryptocurrency’s enigmatic creator: Satoshi Nakamoto. 6. It’s too late to invest in cryptocurrency Given the stratospheric rise in the value of Bitcoin, many would-be investors and traders think they’ve missed the boat. That may well not be the case. The World Economic Forum projects that cryptocurrencies will hold 10% of global GDP by 2027. In other words, digital money hasn’t necessarily peaked yet by any means. In fact, its true potential remains to be seen. 7. The government will somehow shut it down Any desire by governments to shut down Bitcoin, or any other cryptocurrency for that matter, has given way to grudging acceptance. There is no easy way for any institution to reverse or stem this financial revolution. Nonetheless, attempts are being made to regulate the cryptocurrency market. The world’s governments are taking various positions. For example, South Korea is considering introducing a capital gains tax on cryptocurrency trading, Australia believes the bubble will burst and, in the US, Bitcoin futures have made their world debut on traditional regulated exchanges. 8. There is a huge amount of wealth stored in Bitcoin There is no doubt that dreams of untold wealth are fuelling much of Bitcoin’s rise – however, Bitcoin’s total market cap is only 2.5% that of gold. 9. Cryptocurrency has no intrinsic value This does delve into a somewhat lofty territory – one that questions the very philosophical definition of value. If we take Adam Smith’s approach to supply and demand and define market value as simply what someone is willing to pay, then Bitcoin’s extrinsic value is understandably high – it has many willing buyers. The source of Bitcoin’s intrinsic value is, perhaps, a little harder to pin down. The intrinsic value of many assets, including gold, depends on their utility. While this applies to Bitcoin too, many cryptocurrency advocates and opponents have not considered the crucial – and very valuable – element of mining. In theory, anyone can mine Bitcoin, process transactions, create new Bitcoins and get rewarded for it. This ability to profit from Bitcoin outside of trading, combined with its limited supply, gives it even greater value. 10. Bitcoin is just a currency Yes, you can buy, sell and trade with Bitcoin – but it’s not just a digital form of currency. Bitcoin was the first example of blockchain technology in action. This technology stores data in a public and fully transparent ledger. 
This accessibility of critical information has the capability to disrupt and transform many different industries for the better. The incredible rise in the value of Bitcoin this past year has boosted demand – but many remain wary of its volatility. Whether one approaches investment opportunities with a risk-taking attitude or prudence, it pays to know what you’re investing into. Don’t let a lack of knowledge or misguided fears keep you in the dark ages. Sourced by Benjamin Dives, CEO, London Block Exchange
<urn:uuid:80f7a6f5-bc55-413d-9a3f-eecdfcb55c44>
CC-MAIN-2024-38
https://www.information-age.com/decrypting-cryptocurrency-misconceptions-9104/
2024-09-15T10:54:28Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651622.79/warc/CC-MAIN-20240915084859-20240915114859-00534.warc.gz
en
0.946744
1,522
2.796875
3
ACUCOBOL-GT includes two fundamental window types: floating windows and subwindows (sometimes referred to as pop-up windows). Each window type is discussed briefly below. Floating windows are discussed in detail in Floating Windows. ACUCOBOL-GT also supports many types of controls (technically a type of window). Controls are discussed in detail in Graphical Controls. A floating window is the ACUCOBOL-GT window type that creates a host-based, pop-up window. When your application executes in a graphical environment, such as Microsoft Windows, floating windows are created as native pop-up windows, managed by the host operating system and the ACUCOBOL-GT run-time. Floating windows must be used when you want to include graphical controls, such as buttons, entry boxes, and scroll bars. ACUCOBOL-GT supports two types of floating windows: modal and modeless. Floating windows are positioned and displayed on the virtual screen. The virtual screen is intrinsic to all applications that use floating or subwindows. The virtual screen size can be set with the SCREEN SIZE run-time configuration variable and changed during program execution with the MODIFY Statement. The default virtual screen size is 25 rows by 80 columns. See Windowing Concepts for more information. An independent window is similar to a floating window, except that independent windows do not belong to parent windows; independent windows are controlled independently. Subwindow is the name given to ACUCOBOL-GT text-mode windows created with the DISPLAY WINDOW or DISPLAY SUBWINDOW statement. Subwindows are always text-mode windows and are not compatible with graphical controls. However, subwindows can be mixed with floating windows, so long as the subwindows do not display on top of graphical controls. When an overlay occurs, due to the workings of the underlying host system, control objects are improperly displayed on top of the text-mode subwindow. For a discussion of textual and graphical modes, see Graphical vs Textual Modes. You can easily convert subwindows to floating windows by changing the DISPLAY WINDOW statement to a DISPLAY FLOATING WINDOW statement. However, subwindows that simply define a screen region, that are not bordered, or are not pop-up in nature, do not lend themselves to conversion to floating windows.
<urn:uuid:7917c90d-f084-47b4-a3b3-91d32e54899a>
CC-MAIN-2024-38
https://www.microfocus.com/documentation/extend-acucobol/1001/BKINININTRS018.html
2024-09-08T06:19:53Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650960.90/warc/CC-MAIN-20240908052321-20240908082321-00334.warc.gz
en
0.904565
494
2.59375
3
VDMA, a group of German and European engineering firms, has warned that a proposed Europe-wide ban on PFAS chemicals would have a "devastating" effect on European industry. A planned European ban on per- and polyfluoroalkyl substances (PFAS), also known as "forever chemicals" would endanger many industrial processes, including those needed for a transition to clean energy, according to a position paper from VDMA. The group wants to see a more differentiated view of the substances, to allow their use when needed. PFAS are widely used in industrial production. In the high-tech sector, they are essential in silicon fabrication. Within data centers, they have been adopted as working fluids for two-phase liquid cooling by pioneers such as Zutacore and LiquidCool. The leading producer, 3M, has announced it will pull out of the field, and cooling specialist Zutacore has announced a shift to an alternative producer by 2026. Europe is working on a comprehensive ban of around 10,000 substances, which VDMA says would be a mistake as the list includes chemicals that are "indispensable" in technologies including the production of fuel cells, heat pumps, solar systems, and hydrogen electrolyzers. According to the VDMA, the planned European ban is based on concerns about consumer products such as ski waxes, Teflon pans, and outdoor jackets, and an industry-wide ban would be "as exaggerated as it would be unjustified." VDMA wants to see some PFAS classed as "polymers of low concern," and exempted for their use in industrial production, warning that otherwise, the EU will "shoot itself in the foot." "The planned ban would mean that European producers would have to do without PFAS, while competitors from non-European countries could continue to use the substances and thus gain considerable competitive advantages,“ said Dr. Sarah Brückner, head of VDMA environmental affairs and sustainability. VDMA points out that goods produced with PFAS would still be imported into Europe, as there are no standardized tests for them and little understanding of the supply chains. Dr. Brückner said Great Britain has a better approach: "We should take our cue from the UK and look at the substance groups in a differentiated way." The VDMA wants PFAS separated into subgroups, with non-hazardous polymers immediately exempted. The group also wants PFAS to be allowed in industrial applications, where safe handling can be enforced. PFAS should also be allowed in the internal workings of systems, which do not come into contact with the environment, VDMA says. The group also calls for a longer transitional period than the proposed 18 months, so alternative chemicals can be found - and wants to allow PFAS replacement parts to be allowed indefinitely.
<urn:uuid:e7691ae1-945e-4b6a-81a7-cd386dc379b7>
CC-MAIN-2024-38
https://www.datacenterdynamics.com/en/news/mechanical-engineering-group-warns-european-pfas-ban-would-be-devastating-to-net-zero-plans/
2024-09-14T07:54:28Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651559.58/warc/CC-MAIN-20240914061427-20240914091427-00734.warc.gz
en
0.970662
585
2.578125
3
What is artificial intelligence? Artificial intelligence (AI) is the area of computer science that focuses on simulating human intelligence. AI's true value comes from combining rational, emotional and cognitive levels of intelligence across a larger process. What are the business benefits of artificial intelligence? AI is ushering in a new age of productivity growth, human and robotic collaboration and, most importantly, the intuitiveness sought by consumers. Artificial intelligence pinpoints areas of opportunity and delivers personal insights that drive innovation. Whether a company is experiencing overall business growth, or seasonal or temporary growth, AI's predictive and scalable platform allows businesses to respond quickly and smoothly to the needs of their customers and to changes in the market. AI is optimizing and modernizing companies and helping them rethink how they do business. Why is AI so important? AI is becoming increasingly important in business. It is embedded in many applications by default to leverage features such as recommendations, next best actions and natural language generation (NLG) based analytics. Employing AI effectively requires a clear focus on applying intelligent technologies to solve tough operational challenges and deliver a lift to the business. How is AI being used in today's businesses? Today's businesses are using AI in many ways. For example: - A utility deployed a voice-activated, AI-driven chatbot to help executives, account managers and field-service technicians conduct research into services and solutions using voice commands or typed queries—which helped the company streamline customer interactions and enhance user experience - An insurer used machine learning and geospatial analysis to better understand the complex flood insurance market and identified a $3.3-billion new business opportunity How does a company begin formulating an effective AI strategy? To create a rigorous AI strategy, a company must look beyond technological capabilities. Each business challenge will require different tools, techniques and approaches. Leveraging AI requires extensive experimentation and the ability to apply learnings to the next stage of deployment. Companies need to factor that reality into their plans. A strategy should begin by emphasizing business value/impact and ethical/responsible behavior, rather than the technology's capabilities and algorithms.
<urn:uuid:979e4243-9ded-47b9-89d8-b9beb5b7d78f>
CC-MAIN-2024-38
https://www.cognizant.com/us/en/glossary/artificial-intelligence
2024-09-15T14:41:27Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651630.14/warc/CC-MAIN-20240915120545-20240915150545-00634.warc.gz
en
0.938262
442
2.828125
3
聪明勤奋的GPT4 - AI-powered content generation. Empowering creativity with AI precision. A harder-working, smarter GPT4, internet-connected edition (New Version) v2024.05.27. Give me a 10-word question, and I'll return a 2000-word essay. This GPT is based on OpenAI's latest GPT-4o model, enhanced and upgraded to provide you with more professional and more complete answers. A versatile GPT-4 model for clear and detailed responses. A comprehensively upgraded GPT-4, internet-connected edition.
Introduction to 聪明勤奋的GPT4
聪明勤奋的GPT4 is a highly advanced version of OpenAI's GPT-4 architecture, designed to provide expert-level knowledge and detailed responses across various domains. The core design purpose of this model is to cater to users who require in-depth, comprehensive answers to complex questions. 聪明勤奋的GPT4 is built with an extensive knowledge base that spans numerous fields, making it capable of understanding and addressing a wide array of queries with precision and depth. For instance, if a user asks for an analysis of the economic impact of renewable energy adoption, 聪明勤奋的GPT4 would not only provide a detailed explanation of the economic theories and data but also offer examples from different countries, discuss potential challenges, and suggest future trends. This thoroughness ensures users receive not just an answer, but a nuanced understanding of the topic.
Main Functions of 聪明勤奋的GPT4
Providing detailed explanations of scientific concepts. A university student studying quantum mechanics can ask 聪明勤奋的GPT4 to explain complex principles like wave-particle duality or Schrödinger's cat. The response would include historical context, mathematical formulations, and practical examples, thereby enriching the student's understanding.
Assisting with complex problem-solving tasks. An engineer working on a new project can leverage 聪明勤奋的GPT4 to troubleshoot issues, optimize designs, or generate innovative solutions. For example, if there's a challenge with thermal management in electronic devices, GPT-4 can suggest materials, design modifications, and cooling techniques based on the latest research.
Creative Writing and Content Creation
Generating high-quality written content. A content creator can use 聪明勤奋的GPT4 to draft articles, scripts, or even books. If an author needs help developing a storyline for a science fiction novel, GPT-4 can provide detailed plot outlines, character development ideas, and suggestions for incorporating scientific accuracy into the narrative.
Ideal Users of 聪明勤奋的GPT4
Researchers and Academics
Researchers and academics benefit significantly from using 聪明勤奋的GPT4 due to its ability to provide in-depth, well-referenced information. It can assist in literature reviews, hypothesis generation, and data analysis, making it an invaluable tool for advancing scholarly work.
Professionals and Industry Experts
Professionals in fields such as engineering, medicine, law, and business can use 聪明勤奋的GPT4 to enhance their work through expert insights, problem-solving capabilities, and up-to-date information. This group benefits from the model's ability to deliver practical, applicable knowledge that can be directly implemented in their professional activities.
How to Use 聪明勤奋的GPT4
Visit aichatonline.org for a free trial without login, and with no need for ChatGPT Plus. Begin by visiting the website aichatonline.org, where you can access a free trial of 聪明勤奋的GPT4 without logging in or subscribing to ChatGPT Plus. This lets you explore the tool's capabilities immediately.
Familiarize yourself with the interface. Once you're on the site, take a moment to understand the layout and available features. This will typically include a text input area where you can start typing your queries, along with various settings that let you customize the tool's output.
Define your use case. Determine what you want to achieve with 聪明勤奋的GPT4. Whether you need assistance with writing, coding, research, or creative brainstorming, clearly defining your objective will help you get the most relevant and useful responses.
Input your prompt with specific instructions. To get the most out of 聪明勤奋的GPT4, enter detailed and specific instructions in your prompt. Be clear about what you're asking for, and if needed, specify the format, style, or length of the response you desire.
Review and refine the output. After receiving a response, review it carefully to ensure it meets your needs. You may want to refine your input based on the initial output, or use follow-up questions to delve deeper into your query for more tailored information.
Typical use cases include content creation, creative writing, research assistance, coding help, and language translation.
Frequently Asked Questions about 聪明勤奋的GPT4
What can I use 聪明勤奋的GPT4 for? 聪明勤奋的GPT4 can be used for a wide variety of tasks, including academic writing, content creation, coding assistance, brainstorming, language translation, and more. Its versatility allows it to adapt to different contexts and provide relevant, high-quality responses tailored to your specific needs.
Is there a cost associated with using 聪明勤奋的GPT4? You can access a free trial of 聪明勤奋的GPT4 without needing to log in or subscribe to any premium services like ChatGPT Plus. This allows you to explore its features and capabilities without any upfront costs.
How accurate is the information provided by 聪明勤奋的GPT4? 聪明勤奋的GPT4 is trained on a vast dataset that includes a broad range of topics, enabling it to provide accurate and relevant information. However, users should still verify critical data independently, especially for academic or professional purposes, as AI-generated content may occasionally contain errors.
Can I customize the responses generated by 聪明勤奋的GPT4? Yes, you can customize the responses by providing specific instructions in your prompt. You can dictate the format, tone, style, and length of the response, allowing the tool to generate content that fits your precise requirements.
What languages does 聪明勤奋的GPT4 support? 聪明勤奋的GPT4 supports multiple languages, allowing you to generate content, translate text, or communicate in various languages beyond just English. This makes it a versatile tool for multilingual users.
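The tool described above is used through its website, but the same advice about detailed, specific prompts applies when scripting any GPT-4-class model through an OpenAI-compatible API. The sketch below is a hedged illustration only: the model identifier, the prompt text, and the reliance on an OPENAI_API_KEY environment variable are assumptions, not details taken from the page.

```python
# Minimal sketch: sending a detailed, structured prompt to an
# OpenAI-compatible chat endpoint (requires the openai package >= 1.0).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Write a 2000-word essay on the economic impact of renewable energy adoption. "
    "Structure: introduction, three body sections with country examples, conclusion. "
    "Tone: academic. Audience: undergraduate economics students."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model identifier
    messages=[
        {"role": "system", "content": "You are a thorough, well-referenced research assistant."},
        {"role": "user", "content": prompt},
    ],
)

print(response.choices[0].message.content)
```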
<urn:uuid:8af6b922-7861-44fc-9780-72c24acd7e98>
CC-MAIN-2024-38
https://theee.ai/tools/%E8%81%AA%E6%98%8E%E5%8B%A4%E5%A5%8B%E7%9A%84gpt4-2OToEhqobZ
2024-09-18T00:35:51Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651835.68/warc/CC-MAIN-20240918000844-20240918030844-00434.warc.gz
en
0.833692
1,836
2.5625
3
High-performance computing is one of the most exciting areas in IT at the moment. After all, its applications feel almost unlimited. Whether it's the ATOS and UK Government-planned meteorological supercomputer, the ten exascale HPCs supposedly planned in China, or the new world-record-breaking Frontier, supercomputers are gaining momentum. These supercomputers, which can enable complex data-intensive and compute-heavy scientific research in areas like quantum mechanics, weather forecasting, aerodynamics, and molecular modeling, seem to be popping up all around the globe. Europe's most powerful – the LUMI supercomputer – has now launched, and is the most energy-efficient of the lot.
In a DCD>Deep Dive, we were lucky enough to sit down and talk with Veli-Antti Leinonen, data center specialist for CSC (IT Center for Science), to discuss the LUMI HPC and how they achieved such impressive efficiency results. But first, a bit of background. Based in Kajaani, Finland, LUMI is a EuroHPC joint undertaking, funded equally by EuroHPC and the LUMI consortium countries. Pre-exascale, the supercomputer is currently capable of 152 sustained petaflops, but this is expected to grow to more than 375 petaflops in the coming weeks, with a peak performance potentially over 550 petaflops.
"Our main goal and agenda is to support the research activities in Finland in a nonprofit way," Leinonen told DCD. "We only provide the services to our owners, and we are not competing in the commercial sector of data center services."
Powered by the same architecture as the Frontier exascale supercomputer, it is no surprise that LUMI is leading in Europe. But what has made this HPC so energy efficient?
"We needed to decide whether to utilize the greenfield or the brownfield concept," admitted Leinonen. "We had both options, and as you can guess from the pictures, brownfield was our choice in the LUMI project, hence the faster time to market and to pay the capital expenses."
Brownfield, compared with greenfield, enables the use of pre-existing buildings and facilities. This significantly reduces construction cost and time, though it does not necessarily guarantee energy efficiency in the long term.
"Compared with a greenfield concept we were able to save 80 percent of the CO2 emissions and roughly 2,000 tonnes of CO2 in the building phase alone. The sustainability was naturally embedded in LUMI, and it resulted in low operational costs, and carbon-negative production that we were able to get as a side bonus in less than a year. It was kind of a no-brainer."
"But here in the Nordics with the harsh winter climate, I think greenfield was not a realistic option, because we couldn't start the project until the summer. Whereas brownfield gave us a perfect filter for the cold days and snowy nights. We utilized and placed the LUMI data center in a former UPM paper mill in the Kajaani area."
This location was not a random choice. The Kajaani paper mill, subsequently shut down and made into a business park named Rensforsin Ranta, was already connected to plentiful renewable energy sources, as well as having a conveniently placed national grid substation.
"A National Grid substation is already placed within the business park and can feed the area 1,000s of megawatts. So it gives us, scalability-wise, a strong future backbone to lean on.
“The power supply to the area is with the Finland National Grid Lines two times, and then four of the other feeding lines are from the hydro reservoirs and other wind parks in the area,” explained Leinonen. “The UPS is roughly 10 percent of the maximum power load of the data center, in order to give sufficient time for the storage and the management nodes to be properly shut down in case of an emergency situation. Within the 40 years of history, we only had one power outage.” On top of the renewable energy sources, excess heat from the facility is sent out to the Kajaani municipality. “Our PUE factors are 1.04 when we do not utilize the excess heat. When the excess heat is used, the margin for that PUE increases by roughly 20 percent. But it makes business sense to use it because we're actually selling the heat to the district heating network and getting reimbursement for every megawatt-hour of heat that's been pushed into the system. “We are, of course, looking at that sustainability angle, but while providing a 20 percent reduction of the annual net, we also see that we are reducing our total cost of energy up to 40 percent, and we are able to say that our data center operations are in fact carbon-negative.” Heat reuse is also a significant element of LUMI’s cooling process. “Excess heat utilization is our primary source of cooling. We basically run our liquid cooling HPC loads as high, temperature-wise, as you can drive them, and then prime that water with the heat pumps to roughly 90 degrees celsius, to then be distributed in the district heating network. “The demarcation point we've set is after the heat pumps, because cooling is one of the most critical components in the data center, and therefore we want to have that under our control. But as a backup, dryer cooling and chillers in the roofs give us the redundancy for the cooling.” The LUMI supercomputer is an important reminder that IT architecture can be powerful without hurting the planet. Hopefully, we will continue to see more data centers and HPCs following suit.
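The efficiency figures quoted above (a PUE of 1.04, plus a credit for heat sold into the district heating network) follow from simple arithmetic, sketched below. The load, price, and heat-reuse numbers in the example are illustrative assumptions, not measured LUMI values.

```python
# Minimal sketch of the efficiency arithmetic discussed above. PUE compares
# total facility power with the power that reaches the IT equipment; the
# heat-reuse benefit is modelled here as a simple revenue credit.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

def net_energy_cost(total_kwh: float, price_eur_per_kwh: float,
                    reused_heat_kwh: float, heat_price_eur_per_kwh: float) -> float:
    """Energy bill after revenue from selling excess heat to district heating."""
    return total_kwh * price_eur_per_kwh - reused_heat_kwh * heat_price_eur_per_kwh

if __name__ == "__main__":
    it_load = 5_000.0    # kW of IT load (assumption)
    overhead = 200.0     # kW of cooling, power distribution, etc. (assumption)
    print(f"PUE: {pue(it_load + overhead, it_load):.2f}")   # -> 1.04 with these numbers

    annual_kwh = (it_load + overhead) * 8760
    cost = net_energy_cost(annual_kwh, 0.06, 0.6 * annual_kwh, 0.03)
    print(f"Net annual energy cost: {cost:,.0f} EUR")
```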
<urn:uuid:87576a60-2f5b-4322-81c4-0bf4140c1a1a>
CC-MAIN-2024-38
https://direct.datacenterdynamics.com/en/marketwatch/lumi-the-supercomputer-named-after-snow/
2024-09-19T07:06:29Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651995.50/warc/CC-MAIN-20240919061514-20240919091514-00334.warc.gz
en
0.9529
1,235
2.765625
3
Why do attacks increase over the holidays? In a joint cybersecurity advisory, the FBI and the Cybersecurity and Infrastructure Security Agency (CISA) warn that cyberattacks increase significantly during the holidays and encourage businesses to be aware of the heightened risks and be vigilant with network defenses. As with weekends, cybercriminals specifically target the US during holidays because it is a busy time of year and employees are often distracted, leaving companies vulnerable to attack. With business slowing, people on vacation and kids out of school, it is not surprising that employees across the board pay less attention to security. Threat actors know this and aim to take advantage.
Popular attack types
Email is one of the most common attack vectors, and phishing email attacks are even more widespread during the holidays. Since there is a significant uptick in online shopping, threat actors target victims with phishing emails disguised as shipping updates or tracking information, which victims are much more likely to click. Due to employee distraction, external exploitation also has a higher chance of success. There is a window of exploitation between the time a vulnerability is discovered and the time it is patched, during which threat actors can weaponize it. During the holidays, this window becomes larger because security engineers respond more slowly and often operate in reduced numbers, with team members out of the office.
What should companies do to prepare?
Implementing basic cyber hygiene is always critical to preventing attacks. Basic cyber hygiene includes, but is not limited to, keeping operating systems and software up to date, scanning for vulnerabilities, using strong passwords and a password manager, and enabling multifactor authentication (MFA). Blue Team Alpha recommends that companies start protecting their networks beginning with Implementation Group 1 in the Center for Internet Security's (CIS) Critical Security Controls version 8. This guide contains "essential cyber hygiene" actions that every company should have in place to defend against cyberattacks.
Have an incident response plan in place
Incident response plans should be thorough, ready to deploy, and not dependent on a particular person, to avoid single points of failure if that person is out during an attack. All employees should know their role in the event of an incident and who to call.
Tighten email security
Since email is the most common attack vector, all employees should be refreshed on how to identify and report phishing emails. Be aware of any email that seems abnormal, even if it comes from a trusted source. When in doubt, contact the sender by other means to confirm the email's legitimacy. The same applies to voice phishing calls. Implementing domain allow listing is another way to protect against phishing emails. While not foolproof, it means that emails reaching the inboxes of top executives are preapproved, limiting the likelihood of phishing emails getting through to them.
Ensure security teams plan around the increased risk
Security and IT teams need to be on high alert during this time of year and should plan their travel around it. By doing this, teams will be able to maintain a high level of network security, see malicious activity sooner, and respond faster. This limits the risk of a successful attack and can lower the impact of an attack if one does happen.
Engage in preemptive threat hunting
Threat hunting is a proactive strategy that seeks out evidence of a threat actor in the network before an attack.
Cybercriminals can exist on a victim’s network for a while before acting. Threat hunting utilizes behavior-based analytics to identify abnormalities and trigger alerts.
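To make the behavior-based analytics idea concrete, the sketch below learns a baseline from historical event counts (for example, failed logins per hour) and flags values that deviate sharply from it. The counts and the z-score threshold are illustrative assumptions; real threat hunting draws on far richer telemetry.

```python
# Minimal anomaly-detection sketch: flag hours whose activity is far above
# the learned baseline, so an analyst can investigate them first.
from statistics import mean, stdev

def find_anomalies(baseline: list[int], recent: list[int],
                   z_threshold: float = 3.0) -> list[tuple[int, float]]:
    mu, sigma = mean(baseline), stdev(baseline)
    anomalies = []
    for hour, count in enumerate(recent):
        z = (count - mu) / sigma if sigma else 0.0
        if z > z_threshold:
            anomalies.append((hour, z))   # unusually high activity for this hour
    return anomalies

if __name__ == "__main__":
    baseline_counts = [4, 6, 5, 7, 5, 6, 4, 5, 6, 5, 7, 6]    # normal failed logins/hour
    holiday_counts = [5, 6, 48, 4, 7, 5, 52, 6, 5, 7, 6, 5]   # holiday weekend sample
    for hour, z in find_anomalies(baseline_counts, holiday_counts):
        print(f"Hour {hour}: anomalous activity (z-score {z:.1f}) - investigate")
```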
<urn:uuid:e1108ee0-681d-4af0-8b0c-ec5da2bd3f05>
CC-MAIN-2024-38
https://blueteamalpha.com/blog/why-cyber-criminals-love-the-holidays-and-what-to-do-about-it/
2024-09-09T15:23:58Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651103.13/warc/CC-MAIN-20240909134831-20240909164831-00334.warc.gz
en
0.951466
714
2.53125
3
Descriptive VS Inferential Statistics Updated · Sep 08, 2023 Understanding Descriptive VS Inferential Statistics, descriptive statistics are summary statistics that can help to understand the individual quantitative observations and the general process to retrieve the insights from the data. This is only a descriptive method and thus does not focus on the differences between two variables Descriptive VS Inferential Statistics - Descriptive statistics can be easily presented using graphics, tables, or charts. - This method is used to describe a situation. - As the name suggests, it explains already-known information. - Moreover, descriptive statistics provide shape to the given raw data. - In this case, organization and analysis of the raw data are easy. - The given data is limited to a sample only. - Inferential Statistics are used to understand the occurrence of a case in an event. - As the same says, inferences can be drawn using the data of the population. - The results in this case are retrieved on the basis of probability. - Moreover, the results almost provide the conclusion on the population. - Comparison between different datasets based on predictions can be successfully performed. Types of Descriptive Statistics Variability: Also called a dispersion in a dataset, the way the values are spread around. The central tendency in the set of data helps to understand the variability. Moreover, it includes a wide range of measurements. Variability has some commonly used measures as follows: - Skewness: Skewness is a measurement of symmetry in a given dataset. If the left-hand curve is longer than the right one and seen as a flat curve then the value is negative. On the other hand, when the right curve is long and flat then it is seen as positive. - Range: In order to understand the size of the distribution of values, the smallest value is often subtracted from the largest value. - Standard deviation: The quantity of variation or dispersion can be known using standard deviation. For the values which are nearby the mean value, a low standard deviation is implied. On the other hand, values spread across are referred to as high standard deviation. - Kurtosis: To understand whether the data include extreme values Kurtosis analysis is performed. The extreme values are also referred to as Outliners. If the provided data includes a high range of extreme values, then we call it a high Kurtosis. - Minimum & maximum values: These values represent the highest and lowest values in the datasets. - Central Tendency: The common central values in a given dataset for measurements are referred to as a central tendency. But, instead of giving a reference to the whole dataset, it only focuses on a variety of central measurements. Some measures of central tendency include: - Mode: refers to the commonly observed value in the dataset. - Median: The middle value in the given dataset. - Mean: The average value in the entire dataset. - Distribution: Distribution explains the frequency of different results in the given dataset. Values of the results in the distribution category can be presented using a graph, table, and list. Visual representation is a common practice in descriptive statistics. In Addition, Descriptive VS Inferential Statistics are based on two different bases as one method provides more generalized results while the other does the opposite. Inferential Statistics, therefore, depends on general results based on a larger population. Moreover, this method is based on probability rather than facts. 
The accuracy of the results depends on how accurate and representative the sample is of the larger population. Therefore, the results are obtained from a random sample, and observations that would introduce bias are excluded. Random sampling is a crucial step in inferential statistics and involves defining the population, deciding the sample size, drawing the sample, and analyzing it. Other commonly used techniques include confidence intervals, hypothesis testing, and regression and correlation analysis.
To conclude the comparison of descriptive and inferential statistics: although the two approaches answer different questions, using them together provides the most accurate results, as they are complementary statistical techniques. Using only descriptive statistics, on the other hand, is rarely enough for decision-making.
Descriptive statistics summarize data through measures such as the mean, median, mode, standard deviation, and quartiles. When working with a large amount of data, Excel makes these functions easy to calculate. Assume the example numbers (values between 3 and 17) are entered in cells A1:A4.
For calculating the average, use =AVERAGE(range); with the example values in A1:A4, the calculation would be =AVERAGE(A1:A4). For calculating the mode, use =MODE(A1:A4). For calculating the standard deviation, use =STDEV(A1:A4). For calculating the median, use =MEDIAN(A1:A4). For calculating a quartile, use =QUARTILE(A1:A4, quart). To include more than one range in a calculation, pass multiple ranges to the function, for example =AVERAGE(A1:A4, B1:B4).
The purpose of descriptive statistics is to give information about the data set. There are three types of descriptive statistics: frequency distribution, central tendency, and variability. In simple terms, frequency distribution measures how often values occur, central tendency measures the midpoint of the distribution, and variability measures the degree of dispersion.
Descriptive statistics deals with the collection, analysis, interpretation, and presentation of the given data in a descriptive way. Inferential statistics, on the other hand, draws inferences about the larger population from a sample.
In statistics, a population is the complete set of individuals or observations of interest, whether that is a nation, a group of people sharing common features, or any other defined group of individuals.
Barry is a lover of everything technology. Figuring out how software works and creating content to shed more light on the value it offers users is his favorite pastime. When not evaluating apps or programs, he's busy trying out new healthy recipes, doing yoga, meditating, or taking nature walks with his little one.
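As a cross-check on the Excel formulas above, the same descriptive measures can be computed with Python's standard library. The sample values below are arbitrary placeholders, not data from the article.

```python
# Descriptive statistics with the standard library, mirroring the Excel functions.
import statistics

data = [3, 8, 12, 17]   # e.g. the values in cells A1:A4

print("mean:     ", statistics.mean(data))            # =AVERAGE(A1:A4)
print("median:   ", statistics.median(data))          # =MEDIAN(A1:A4)
print("mode:     ", statistics.mode([3, 3, 8, 17]))   # =MODE(...); a repeated value makes the mode meaningful
print("stdev:    ", statistics.stdev(data))           # =STDEV(A1:A4), sample standard deviation
print("quartiles:", statistics.quantiles(data, n=4))  # comparable to =QUARTILE(A1:A4, k)
```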
<urn:uuid:05b45c08-da1a-4cdb-828f-93d5d76d3b57>
CC-MAIN-2024-38
https://www.enterpriseappstoday.com/stats/descriptive-vs-inferential-statistics.html
2024-09-09T15:14:34Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651103.13/warc/CC-MAIN-20240909134831-20240909164831-00334.warc.gz
en
0.911002
1,304
3.515625
4
A Symmetric Key is a cryptographic key that is used to perform both the cryptographic operation and its inverse, for example to encrypt plaintext and decrypt ciphertext, or create a message authentication code and to verify the code. Also, a cryptographic algorithm that uses a single key (i.e., a secret key) for both encryption of plaintext and decryption of ciphertext. Related Term: Secret Key Source: CNSSI 4009 CyberHoot does have some other resources available for your use. Below are links to all of our resources, feel free to check them out whenever you like: Note: If you’d like to subscribe to our newsletter, visit any link above (besides infographics) and enter your email address on the right-hand side of the page, and click ‘Send Me Newsletters’. Sign up for the monthly newsletter to help CyberHoot with its mission of making the world ‘More Aware and More Secure!’
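To make the definition above concrete, the sketch below uses one shared secret key both to encrypt/decrypt and to authenticate a message. It relies on the third-party cryptography package (Fernet combines AES encryption with an HMAC); the message text is an arbitrary example.

```python
# One symmetric key performs the operation and its inverse: encrypt/decrypt,
# and create/verify the authentication tag. Requires: pip install cryptography
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()      # the single symmetric (secret) key
cipher = Fernet(key)

token = cipher.encrypt(b"wire transfer: account 1234, amount 500")
print(cipher.decrypt(token))     # the same key reverses the encryption

try:
    cipher.decrypt(token[:-1] + b"0")   # a modified ciphertext fails authentication
except InvalidToken:
    print("message failed integrity check - possible tampering")
```

Anyone holding the key can both read and forge messages, which is why symmetric keys must be distributed and stored carefully.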
<urn:uuid:506566bf-5c88-46a1-8be2-47588b3067e6>
CC-MAIN-2024-38
https://cyberhoot.com/cybrary/symmetric-key/
2024-09-10T21:35:00Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651318.34/warc/CC-MAIN-20240910192923-20240910222923-00234.warc.gz
en
0.870349
204
3.328125
3
Music CD-R And Data CD-R
People who are new to computers and duplication will sometimes confuse CD-R music media with CD-R data media. The two are easy to mix up, even with some computer experience, but they are indeed different. The obvious difference is, of course, the name: with one labeled CD-R music and one labeled CD-R data, you know there has to be some type of difference between the two. There are also technical differences in what is embedded in blank music CDs compared with blank data CDs. These differences center on bytes within the sub-channels of the blank music discs. This doesn't affect quality, as both audio and data can be duplicated onto music CD-R discs and data CD-R discs alike. You can burn data onto a music CD-R, and music onto data CD-R media, without any problems.
Keep in mind, whether or not you can put data on a music CD-R will depend on what type of hardware you use to duplicate the CD. If you plan to use a PC to do all of your burning, it won't matter: a PC doesn't differentiate between music CD-R and data CD-R. The PC sees a blank CD and writes whatever the settings in your burning software specify. If you plan to use a separate standalone CD burner, it may or may not let you burn data or music onto a generic blank or data CD-R. Some hardware is finicky like that and only accepts blank media from well-known brands it has approved.
If you plan to do most of your CD duplication on a computer, it really doesn't matter which type of blank CD-R you use. Both will work fine in most cases for storing either music or data. When storing data, you have a limit of 700 MB, while music is limited to a little over an hour of tunes. For your duplication needs, computers are the ideal way to copy media. You can use equipment other than a computer and CD burner, although you'll need to check the operations manual to see what media it recommends. If you have a computer or access to one, it can do wonders in the areas of music and data CD-R duplication.
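As a small practical companion to the 700 MB limit mentioned above, the sketch below checks whether a set of files fits on a data CD-R before burning. The file names are hypothetical, and the exact usable capacity varies slightly with the disc and burning mode.

```python
# Check whether selected files fit within a CD-R's roughly 700 MB data capacity.
import os

CDR_CAPACITY_BYTES = 700 * 1024 * 1024   # ~700 MB data CD-R (approximate)

def fits_on_cdr(paths: list[str]) -> bool:
    total = sum(os.path.getsize(p) for p in paths)
    print(f"Selected {len(paths)} files, {total / (1024 * 1024):.1f} MB total")
    return total <= CDR_CAPACITY_BYTES

if __name__ == "__main__":
    files = ["holiday_photos.zip", "backup.tar"]   # hypothetical file names
    print("Fits on one CD-R" if fits_on_cdr(files) else "Too large - split across discs")
```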
<urn:uuid:c100b398-60ff-4e83-afd9-36126beef6b2>
CC-MAIN-2024-38
https://www.fortypoundhead.com/showcontent.asp?artid=2689
2024-09-10T19:55:05Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651318.34/warc/CC-MAIN-20240910192923-20240910222923-00234.warc.gz
en
0.941302
581
2.671875
3
European Supercomputer to Map Human Brain This video gives an overview of JuQueen, a new supercomputer recently unveiled at a research center in Juelich, Germany, which is the fastest supercomputer in Europe, capable of performing quadrillions of calculations per second. March 22, 2013 While it is six times faster than its predecessor, JuQueen, a new supercomputer recently unveiled at Jülich Supercomputing Centre in Jülich, Germany, uses one-sixth of the energy. The supercomputer is the fastest in Europe and capable of performing quadrillions of calculations per second. A group of doctors, computer scientists and others will be embarking on a 10-year-long project to use the computer's capabilities to map the entire human brain - from individual cells to large areas of the brain. The video runs 2:20 minutes. For additional video, check out our DCK video archive and the Data Center Videos channel on YouTube. About the Author You May Also Like
<urn:uuid:f4ff6c51-b241-4230-acce-bcdbdd0c151c>
CC-MAIN-2024-38
https://www.datacenterknowledge.com/supercomputers/european-supercomputer-to-map-human-brain
2024-09-12T01:40:55Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651420.25/warc/CC-MAIN-20240912011254-20240912041254-00134.warc.gz
en
0.907692
205
3.078125
3
Study Proposes Using Memory and Repeater Devices for Space-Based Quantum Internet (TheEngineer.co) Enabling space-based quantum internet services with memory and ‘repeater’ devices has been proposed in research by Strathclyde University and international collaborators. The study suggests that quantum memories (QM) and repeaters, which are used in the transmission of the information, can be deployed to facilitate use of advanced internet technology. This is done through distribution of quantum entanglement, a phenomenon in which two particles are interlinked, potentially at vast distances from each other. The research showed that satellites equipped with QMs provided entanglement distribution rates which were three orders of magnitude faster than those from fibre-based repeaters or space systems without QMs. It was led by Humboldt University in Berlin and involved the Institute of Optical Sensor Systems of the German Aerospace Center (DLR) and JPL (Jet Propulsion Laboratory NASA). In a statement, Dr Daniel Oi, senior lecturer in Strathclyde’s Department of Physics, a partner in the research, said: “We show in this paper that this method would have much higher performance than previously proposed schemes and we identify promising physical systems with which to implement it. “The work is connected to wider work at Strathclyde on Quantum Technologies, and in particular Space Quantum Communication research that includes several space missions due to be launched in the next few years.” The proposal in the research uses satellites equipped with QMs in low-earth orbit. It is focused on the use of quantum key distribution (QKD) for encryption and distribution, and of QMs to synchronise detection events which could otherwise have been happening by chance. “With the majority of optical links now in space, a major strength of our scheme is its increased robustness against atmospheric losses,” the team states in the paper. “We further demonstrate that QMs can enhance secret key rates in general line-of-sight QKD protocols.”
<urn:uuid:7c262210-793e-468b-8e63-f844e77ee926>
CC-MAIN-2024-38
https://www.insidequantumtechnology.com/news-archive/study-proposes-using-memory-and-repeater-devices-for-space-based-quantum-internet/
2024-09-12T03:24:05Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651420.25/warc/CC-MAIN-20240912011254-20240912041254-00134.warc.gz
en
0.95119
427
2.71875
3
iko - Fotolia Microsoft’s underwater datacentre trials have attracted the attention of environmentalists and industry watchers alike, with many questioning the long-term viability and impact of the initiative. The software giant has created a subsea, self-contained datacentre as part of its ongoing Project Natick research into creating facilities that can help it meet the growing demand for cloud-based services in a sustainable way. An underwater datacentre is a good way, according to Microsoft, of achieving this as it negates the need for expensive mechanical cooling systems. If – as the company hopes – such facilities can be paired up with hydroelectric power systems, it could also stand to be more environmentally friendly than traditional land-based builds. The company has also been quick to talk up the latency benefits of offshore datacentres on its Project Natick web page. Here, it states, with half the world’s population living within 200km of an ocean, subsea builds have the potential to significantly cut down data transfer times to user sites. David Barker, technical director at Surrey-based datacentre and colocation provider 4D, says that, from a latency and logistics perspective, the research Microsoft is doing makes a lot of sense. “The vast majority of the Earth’s surface is covered with water and all international fibre routes run along the sea bed,” Barker tells Computer Weekly. “By deploying a datacentre on the sea bed you get around some issues with building a facility on land and at scale, such as having to build away from major metro areas where there is little fibre connectivity to keep land costs low.” From research to commercial reality According to a New York Times report into the project, the prototype vessel contained a single, operational datacentre rack that was surrounded in pressurised nitrogen to soak up the heat generated by the IT kit inside. The report also states that Microsoft has designs on creating another datacentre that will be around three times the size of its first prototype in tandem with a company that specialises in hydroelectric power systems, and that could be trialled as early as 2017. Despite this, Microsoft has been quick to stress that it is still early days for the research project, and that it is likely to be sometime before other modular, ocean-ready datacentres start rolling off the production line. The 10ft by 7ft prototype facility was initially sited one kilometre off the Pacific coast of the US between August and November 2015, before being shipped back to Microsoft’s Redmond headquarters so its research team could analyse how it fared. The company has revealed that it hopes, in time, that similar datacentre builds could be deployed under the sea for up to five years at a time, in line with the average lifespan of the equipment inside. Effect on marine environment unknown In light of this, Gary Cook, senior corporate campaigner and IT sector analyst at environmental lobbyists Greenpeace, says more research into the long-term ecological impact of the vessels will be required. Particularly, he added, as the precise amounts of thermal pollution these types of vessels could give off – and the consequential increase in ambient sea temperature they could cause – is unknown. Localised increases in sea temperature caused by the output of warm water from power plants, for example, can sometimes have a transformative effect on aquatic creatures and their delicate ecosystems. 
As such, species can fail to adapt to the changing conditions, causing them to migrate away or die off, whereas other, less prevalent organisms might be better suited and thrive. “I have not seen any data that indicates how much local heating of the marine environment occurs with these, only the adjectives along the lines of ‘extremely’ small amounts,” Cook tells Computer Weekly. “Exactly what amount of local heating is to be expected, particularly in aggregate if these pods were deployed in large numbers in close quarters, is something to keep an eye on. Hopefully Microsoft will make their full findings available soon.” That aside, Cook says Microsoft’s decision to put sustainability and renewable energy at the forefront of its distributed datacentre plans is a promising development. “Microsoft deserves credit for exploring this, but I hope they show greater commitment to aggressively marrying existing renewable sources to their rapidly growing Azure infrastructure, as we have recently seen from Google, Apple and Facebook,” Cook adds. Opportunities for others If and when the Microsoft vessels do enter full production, Andrew Donoghue, European research manager at IT analyst house 451 Research, says it is unlikely that other players in the datacentre and colocation market will be rushing to follow suit. “So-called hyper-scale datacentre operators such as Microsoft, Google, Facebook and Apple have effectively rewritten the rule book on accepted datacentre design in recent times, with the use of containerised datacentres, rack-level fuel cells for power, new IT architectures from the silicon all the way up with initiatives like the Open Compute project,” he explains to Computer Weekly. “We won’t see enterprises or colocation datacentre operators adopting underwater facilities anytime soon, but it is possible that Microsoft may decide to move this from a research project to actual commercial deployment for a few specific use cases.” In Microsoft’s case, these types of builds could act as edge datacentres providing cloud-related internet of things (IoT) services or for hosting smart city applications that rely on low latency connections to datacentre resources, Donoghue continues. However, there are a number of regulatory, logistical, maintenance and cost challenges that would make it difficult for the majority of datacentre providers to follow Microsoft into the water straightaway. David Barker, 4D “In the short-term, it is only companies with enormous research and development budgets that will be able to take on projects such as this, but if Microsoft is able to commercialise the technology, then it could become a reality that we have a network of underwater datacentres supporting the cloud services we use every day, which is a very interesting and intriguing concept,” says 4D’s Barker. Indeed, with the furore around Safe Harbour sharpening the minds of many CIOs and tech firms about the legal issues around data sovereignty, Barker says using offshore datacentres could throw up a whole new set of considerations for IT decision makers. “Deploying a datacentre on the sea bed in international waters does provide some interesting thoughts on data protection and privacy,” he says. “If the data is held in international waters does copyright law still apply? 
Is there any regulation or requirements on data security or protection if it is being stored outside of any country’s jurisdiction?” During the three-month trial, the company used a series of sensors to remotely monitor conditions inside and out of the vessel, in case any maintenance issues cropped up, as it is designed to be unmanned. Karl Mendez, managing director of CWCS Managed Hosting, tells Computer Weekly this could prove to be a stumbling block for companies that are used to having easy on-site access to their IT assets in the event of a hardware failure. “An underwater datacentre would be difficult for technical engineers to maintain and access – a potentially devastating problem in the event of a hardware failure – and sea water could erode the equipment over the long-term,” says Mendez. In a similar vein, Donoghue says finding efficient ways of delivering backup power suppliers to an underwater site is also likely to prove problematic. “Primary power will probably come from renewables – wave power seems likely or even a connection to a hydroelectric power plant – but it’s not clear what the backup source will be [in these situations]. It could potentially be a grid connection as the datacentres will be deployed close to the shore,” he offers. Read more about sustainable datacentre builds - Yahoo’s global director for sustainability explains why cloud providers need to get clued up on renewable energy. - With datacentre operators under increased competitive pressure to cut their PUE scores, are their ratings really all they seem?
<urn:uuid:bab8106f-1778-43e8-9782-add5c170c559>
CC-MAIN-2024-38
https://www.computerweekly.com/feature/Microsofts-underwater-datacentre-The-pros-and-cons-of-running-subsea-facilities
2024-09-13T09:37:36Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651510.65/warc/CC-MAIN-20240913070112-20240913100112-00034.warc.gz
en
0.95092
1,714
2.90625
3
The term info stealer is self-explanatory. This type of malware resides on an infected computer and gathers data in order to send it to the attacker. Typical targets are credentials used in online banking services, social media sites, emails, or FTP accounts. Info stealers may use many methods of data acquisition. The most common are:
- hooking browsers (and sometimes other applications) and stealing credentials that are typed by the user
- using web injection scripts that add extra fields to web forms and submit the information entered in them to a server owned by the attacker
- form grabbing (finding specific opened windows and stealing their content)
- stealing passwords saved in the system, as well as cookies
Modern info stealers are usually parts of botnets. Sometimes the target of the attack and the related events are configured remotely by a command sent from the Command and Control server (C&C).
The age of info stealers started with the release of ZeuS in 2006. It was an advanced Trojan targeting credentials of online banking services. After the code of ZeuS leaked, many derivatives of it started appearing and popularized this type of malware. In December 2008, a social media credential stealer, Koobface, was detected for the first time. It originally targeted users of popular networking websites like Facebook, Skype, Yahoo Messenger, MySpace, Twitter, and email clients such as Gmail, Yahoo Mail, and AOL Mail. Nowadays, most botnet agents have some info-stealing features, even if that is not their main goal.
Common infection method
Info stealers are basically a type of Trojan, and they are spread by infection methods typical for Trojans and botnet agents, such as malicious attachments sent in spam campaigns, websites infected by exploit kits, and malvertising.
Info stealers are usually associated with other types of malware such as:
- Downloaders/Trojan Droppers
They are represented by malware families such as:
- Neutrino botnet
Early detection is crucial with this type of malware. Any delay in detecting this threat may result in important accounts being compromised. That's why it is very important to have good-quality anti-malware protection that will not let the malware be installed. If users suspect their computer is infected by an info stealer, they should run a full scan of the system using automated anti-malware tools. Removing the malware is not enough: it is crucial to change all passwords immediately.
Info stealers are dangerous for all users of an infected machine. The severity of the consequences is proportional to the importance of the stolen passwords. Common dangers are violated privacy, leakage of confidential information, money stolen from an account, and being impersonated by the attacker. Stolen email accounts can then be used to send spam, and a stolen SSH account can be used as a proxy for attacks performed by cybercriminals.
Avoidance procedures are the same as for other types of Trojans and botnet agents. First of all, keep up good security habits: be careful about the websites you visit and don't open unknown attachments. However, in some cases this is not enough. Exploit kits can still install the malicious software on a vulnerable machine, even without any interaction. That's why it is important to have quality anti-malware software.
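Related to the point about early detection, a defender can also watch what a machine is talking to: an info stealer ultimately has to send the collected data somewhere. The sketch below lists established outbound connections so unexpected destinations stand out. It uses the third-party psutil package, the allow-list is a placeholder, and on some systems the call needs elevated privileges; this is a triage aid, not a replacement for anti-malware tooling.

```python
# Defensive sketch: surface established outbound connections that are not on
# a known-good list, so a suspicious exfiltration destination can be reviewed.
import psutil

KNOWN_GOOD = {"127.0.0.1", "10.0.0.5"}   # example allow-list (assumption)

def suspicious_connections():
    for conn in psutil.net_connections(kind="inet"):
        if conn.status == psutil.CONN_ESTABLISHED and conn.raddr:
            ip = conn.raddr.ip
            if ip not in KNOWN_GOOD:
                try:
                    name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
                except psutil.Error:
                    name = "unknown"
                yield ip, conn.pid, name

if __name__ == "__main__":
    for ip, pid, name in suspicious_connections():
        print(f"Review: process {name} (pid {pid}) connected to {ip}")
```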
<urn:uuid:9094e57e-1a7d-484a-a7e9-6d739eb0c8eb>
CC-MAIN-2024-38
https://www.malwarebytes.com/blog/threats/info-stealers
2024-09-13T09:27:35Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651510.65/warc/CC-MAIN-20240913070112-20240913100112-00034.warc.gz
en
0.943842
689
3.015625
3
Quick definition: A SIM, or Subscriber Identity Module, is a small memory chip that is either inserted into or built into a device, providing it with the ability to connect to a cellular network. SIM cards, which can be non-updatable UICCs or updatable eUICCs (commonly known as eSIMs), are available in various form factors and types, such as standard SIMs, eSIMs, and iSIMs. The humble SIM card has evolved far beyond its original design, becoming a critical component in ensuring seamless connectivity across devices and networks globally. From the traditional, physical SIMs to the innovative, embedded eSIMs, the technology has adapted to meet the diverse and expanding needs of IoT applications. In this article, we take a brief look into the world of SIM technologies, exploring their types, form factors, and the vital role they play in shaping the IoT landscape. Difference between types of SIMs and SIM form factors Before we look at the variety of SIM cards available, it’s important to distinguish between SIM types and SIM form factors. ‘SIM types’ refers to the card’s functionality and technological features, with variations including standard SIMs, and embedded SIMs (eSIMs). Conversely, ‘SIM form factors’ refer to the physical dimensions of the SIM card, which include traditional sizes like Full-Sized, Mini, Micro, and Nano SIMs, as well as embedded form factors like MFF2, which emnify offers, and USON8. A SIM card’s type is not tied to its form factor—a nano-SIM can be an eSIM, and an eSIM can come in different form factors, such as MFF2. iSIMs represent another form factor, eliminating the need for a physical card altogether. Types of SIMs Several SIM card types have emerged, and the most prominent ones include: Standard SIMs are the classic version of SIM cards, stored in a physical plastic card that can be transferred between devices such as mobile phones and tablets. These cards come with their own set of specific limitations. Unlike more advanced SIM technologies, specifically eSIMs, they do not allow the SIM profile to be remotely changed or configured, meaning network settings or updates must be manually performed. Embedded SIMs/eUICCS (eSIMs) The eUICC, or Embedded Universal Integrated Circuit Card, and eSIM (Embedded SIM) often intertwine but serve distinct roles when it comes to connectivity. While an eSIM refers to a physical embedded SIM card that does not require a tray or slot in the device, eUICC refers to the software that enables the remote provisioning of carrier profiles on the eSIM. An eSIM is available in various form factors, including 4-in-1, 3-in-1, and MFF2. The eUICC software on eSIMs allows for the remote management and swapping of carrier profiles, ensuring that the sensors maintain reliable connectivity with optimal network providers, regardless of geographical location, without necessitating physical intervention. This capability to acquire a new profile, akin to a software update, even enables a change in the service provider, offering unparalleled flexibility in managing global IoT deployments. Take, for instance, an IoT deployment in agriculture, where numerous sensors are embedded across vast farmlands to monitor environmental conditions. While standard SIMs can connect to multiple network providers, they are bound to the SIM profile set during production. 
In contrast, eSIM technology, especially when sensors are deployed globally and interact with various network providers and technologies, provides a distinct advantage by offering the flexibility to remotely manage and swap carrier profiles via eUICC software. Thus, while eSIM provides the physical framework for embedded connectivity, eUICC offers the technological capability to manage this connectivity remotely and flexibly across various network providers and regions. Multi-IMSI (International Mobile Subscriber Identity) technology stands out as a connectivity solution. Unlike traditional SIMs that are confined to a single IMSI, Multi-IMSI SIMs are pre-loaded with multiple IMSIs, each corresponding to a different network carrier. This enables the SIM to switch between various network providers, enhancing the reliability of connectivity, especially in cross-border or remote deployments. For instance, in a global IoT deployment involving asset tracking across multiple countries, a Multi-IMSI SIM can seamlessly switch between different carrier networks as the asset moves, ensuring uninterrupted data transmission. It's crucial to highlight that while Multi-IMSI provides flexibility in network selection by switching between pre-loaded profiles, it does so with the unique advantage of making decisions locally on the SIM, without requiring connectivity. On the other hand, eSIM technology allows carrier profiles to be remotely provisioned and managed via eUICC software, offering a distinct kind of flexibility. However, it’s worth noting that eSIMs, especially those with eUICC, encounter a challenge: if you need to switch a carrier due to lack of connectivity in a specific country, you cannot update the eSIM as it requires connectivity to manage it. Thus, while eSIMs provide a high degree of flexibility and remote manageability in dynamic IoT deployments, Multi-IMSI SIMs offer a robust solution for maintaining connectivity in scenarios where remote management is not feasible. SIM Form Factors You could say the differences between SIMs start with the form factor. SIM cards come in various shapes and sizes, and these form factors don’t necessarily determine what kind of software the SIM will have implemented. Full-Size SIM (1FF) Comparable in size to a credit card, this is the oldest and the largest card and was predominantly used for early-generation car phones, but advancements in miniaturization technology have now made this form factor redundant. Mini SIM (2FF) While Mini SIM cards, also known as 2FF SIMs, may be considered dated, they continue to find relevance, particularly in certain IoT devices. For instance, 2G-based GPS trackers often utilize the 2FF form factor. The 2FF SIM’s ongoing use in such applications highlights its durability and the technology’s ability to serve specific needs even as the industry continues to evolve. Micro SIM (3FF) The Micro SIM, or 3FF SIM, emerged with the advent of smartphones and devices requiring even smaller components. Its compact size allowed device manufacturers to devote more space to other elements like larger batteries or additional circuitry. This in turn advanced smartphone capabilities such as longer battery life and improved performance. Nano SIM (4FF) The Nano SIM, or the 4FF SIM, is currently the smallest plastic SIM card available, where the plastic and the chip are nearly identical in size. 
While the 4FF is the tiniest among plastic SIM cards, MFF2 embedded SIMs also present themselves as compact physical form factors, albeit integrated directly into the device’s circuitry. The Nano SIM design reflects the ongoing trend toward miniaturization, enabling even thinner and more feature-rich devices. Embedded SIM (eSIM/MFF2) The eSIM, also known as MFF2, represents a significant shift away from physical SIM cards. MFF2 eSIMs can be bought on reels from providers like emnify and then soldered onto the device, offering a secure connectivity solution. Integrated SIM (iSIM) The iSIM, or integrated SIM, diverges from the concept of a physical form factor. The iSIM is not a physical SIM card; it’s part of the wireless chipset in every modem and is loaded directly onto it. This integration is particularly beneficial for smaller, low-power devices where accommodating the physical components of an eSIM might be challenging. Another benefit is that it has no SIM hardware or logistics, which ultimately leads to lower costs. It’s crucial to note that the iSIM isn’t designed to replace the eSIM but rather to offer an alternative solution, catering to different use cases while still ensuring global, future-ready technology. A quick note on architecture standards ICC: The ICC standard, once a staple of IoT connectivity, is long outdated and generally out of use. It’s exclusively tethered to GSM and 2G, so you’re unlikely to come across it when choosing SIM cards for your IoT system. UICC: This is the most widely used SIM software standard as it compatible with a number of networks -- GSM/2G, 3G, 4G, 5G, LTE Cat 1 bis, and LTE-M and NB-IoT. However, UICC cards can only hold one operator profile, making them prone to issues with permanent roaming and achieving broad global coverage. eUICC: eUICC is software on the SIM card that allows for the remote provisioning of carrier profiles. eUICC can be used on all SIM form factors (2FF to MFF2 embedded SIMs), but having an embedded SIM (eSIM) doesn’t guarantee that you have eUICC, and vice versa. The emnify global IoT eSIM Navigating global IoT deployments presents a unique set of challenges, particularly in managing connectivity across diverse geographical locations and network technologies. The emnify global IoT eSIM is a transformative solution and is specifically designed to navigate and simplify global IoT connectivity by addressing several key aspects: - Reliable Connectivity: The eSIM is equipped with a multi-IMSI applet, ensuring robust connectivity across the globe at cost-effective rates. - Regulatory Adherence: It includes features like the Brazil IMSI, ensuring compliance with regional regulatory requirements, such as those specific to Brazil. - Future-Proofing: The eSIM is updateable, ensuring it can adapt to the latest configurations and future emnify enhancements, such as potential satellite connectivity integrations. The emnify global IoT eSIM offers unified connectivity with a single profile and APN, integrating all RANs into our single, logical core network. Whether utilizing an emnify SIM in Germany with T-Mobile, or in Australia with Telstra, all data traverses through our core network, embodying a truly network-agnostic approach. This approach not only sidesteps the need to predict regional device usage but also empowers you with insights and access to monitor all devices – globally and in real-time. 
Our CEO, Frank Stoecker, describes this succinctly: “The new emnify IoT eSIM is not just about tapping into SIM capabilities that we didn’t have before – it’s transforming the way IoT connectivity is delivered.” Built with Multi-IMSI and eUICC technology, the emnify global IoT eSIM simplifies the complexity of achieving network access, ensuring resilient global coverage, navigating evolving network technologies, and managing complex logistics processes for your IoT deployments with a single eSIM. Available as eUICC in all form factors - 2FF, 3FF, 4FF, and MFF2, and in the future, also as iSIM - the emnify eSIM is ready to adapt and evolve with the ever-changing landscape of cellular connectivity. Navigating the future of IoT with advanced SIM technology From the traditional, physical SIM cards to the innovative, remotely manageable eSIMs and iSIMs, the journey of SIM technology has been significant, adapting to the needs of diverse IoT applications. Looking ahead it is safe to assume that the continuous evolution of SIM technology will undoubtedly sculpt the trajectory of IoT deployments, driving enhanced connectivity, scalability, and operational efficiency across myriad applications and industries. Get in touch with our IoT experts Discover how emnify can help you grow your business and talk to one of our IoT consultants today! With a career spanning over 16 years in content creation, Bronwyn has honed her skills in translating intricate concepts into readily comprehensible content. As Senior Content Manager at emnify, Bronwyn applies this expertise to the dynamic IoT landscape, crafting content that informs and educates. SIM, eSIM vs iSIM: What’s the Difference? As you explore cellular components, at times you may hear several different terms for components that provide the same core function, such as a SIM, eSIM, and iSIM. The difference between these components could have a significant impact on your device’s future functionality, security, and scalability, so it’s important to understand the differences between them. eSIM vs. Nano SIM Form Factors: What’s the Difference? When choosing a SIM card for your cellular device, size matters. Smaller SIM cards take up less space, freeing manufacturers to build smaller devices and add additional components—which is especially valuable in Internet of Things (IoT) applications. Depending on how and where your device will be used, it may also be important to have a more durable SIM card that can handle extreme temperatures and conditions like corrosion and vibrations. What Is an Integrated SIM (iSIM)? Smaller components allow manufacturers to build smaller devices. The integrated SIM gives manufacturers the greatest flexibility in how they design cellular devices. In the future, iSIMs will be a dominant SIM form factor in consumer electronics and the Internet of Things (IoT).
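Returning to the identifiers discussed in the article above, the sketch below shows how a device might read the IMSI of the currently active profile and the ICCID of the SIM from a cellular modem over a serial port using AT commands. The serial port name is an assumption, the ICCID command in particular varies by modem vendor (AT+CCID, AT+QCCID, AT+ICCID, and so on), and the pyserial package is required; this is an illustration of interrogating whichever profile is active, not a description of any specific vendor's API.

```python
# Read SIM identifiers from a cellular modem via AT commands (pip install pyserial).
import serial

def at_query(port: serial.Serial, command: str) -> str:
    port.write((command + "\r\n").encode())
    return port.read_until(b"OK").decode(errors="replace").strip()

if __name__ == "__main__":
    # "/dev/ttyUSB2" is an assumed port name; adjust for your modem.
    with serial.Serial("/dev/ttyUSB2", baudrate=115200, timeout=2) as modem:
        print("IMSI :", at_query(modem, "AT+CIMI"))   # 3GPP-standard IMSI query
        print("ICCID:", at_query(modem, "AT+CCID"))   # vendor-dependent ICCID query
```

On a Multi-IMSI or eUICC-based SIM, re-running the IMSI query after a profile switch is one simple way to confirm which profile the device is currently using.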
<urn:uuid:435d8fa4-c1d0-42e1-a82f-a6df92de04ec>
CC-MAIN-2024-38
https://www.emnify.com/iot-glossary/types-of-sim-cards
2024-09-15T16:03:02Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651632.84/warc/CC-MAIN-20240915152239-20240915182239-00734.warc.gz
en
0.918329
2,771
3.1875
3
Researchers at Stanford University have come up with a new high-rise chip that is "smaller, faster, cheaper – and taller", owing to the stacking of requisite components. At the IEEE International Electron Devices Meeting, held in San Francisco between December 15th and 17th, the team presented a paper on the skyscraper chip's architecture. Headed by Subhasish Mitra, associate professor of electrical engineering and of computer science, and H.-S. Philip Wong, the Williard R. and Inez Kerr Bell Professor in Stanford's School of Engineering, the team relies on three breakthroughs, as the news release announcing the work explains. First is a new technology for creating transistors, those tiny gates that switch electricity on and off to create digital zeroes and ones. Second is a new type of computer memory that lends itself to multi-story fabrication. Third is a technique to build these new logic and memory technologies into high-rise structures in a radically different way than previous efforts to stack chips.

"This research is at an early stage, but our design and fabrication techniques are scalable," Mitra said. "With further development this architecture could lead to computing performance that is much, much greater than anything available today." The prototype chip revealed at IEDM shows how to put logic and memory together into three-dimensional structures that can be mass-produced, Wong said. "Paradigm shift is an overused concept, but here it is appropriate," Wong noted. "With this new architecture, electronics manufacturers could put the power of a supercomputer in your hand."

Max Shulaker and Tony Wu, researchers on the project and Ph.D. candidates in Stanford's Department of Electrical Engineering, created the techniques behind the four-story high-rise chip unveiled at the conference. "The slowest part of any computer is sending information back and forth from the memory to the processor and back to the memory. That takes a lot of time and lot of energy," Shulaker told Computerworld. "If you look at where the new exciting apps are, it's with big data… For these sorts of new applications, we need to find a way to handle this big data."

"People talk about the Internet of Things, where we're going to have millions and trillions of sensors beaming information all around," surmises Shulaker. "You can beam all the data to the cloud to organize all the data there, but that's a huge data deluge. You need [a chip] that can process on all this data… You want to make sense of this data before you send it off to the cloud."

Earlier efforts to stack silicon chips saved space, but they could not avoid the digital traffic jams created by the wires connecting the stacked layers; the Stanford design sidesteps this with what the team calls "nanoscale elevators". Read more here. (Image credit: Stanford University)
<urn:uuid:52559619-7dce-4e15-8a72-b6c07905f0db>
CC-MAIN-2024-38
https://dataconomy.com/2014/12/18/stanford-researchers-invent-multistoried-chips-to-address-the-rise-of-iot-and-big-data/
2024-09-16T22:16:52Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651714.51/warc/CC-MAIN-20240916212424-20240917002424-00634.warc.gz
en
0.944678
637
2.859375
3
A new trend in naval warfare is the one-way attack autonomous underwater vehicle (OWA-AUV). These armed underwater vehicles combine the autonomy and range of an AUV with a torpedo-like warhead. Because of their low speed, they are mostly used to attack relatively static targets, such as ships in port or at anchor, as well as offshore infrastructure.

Where did these drones come from? Several countries and non-governmental organizations already use such devices. As in many other aspects of unconventional naval warfare, the great powers and the West in general are lagging behind in this area. There were predecessors, such as torpedoes launched into ports to damage ships, and I suspect that at least one Western navy used a similar type of weapon without attracting much public attention. However, the trend is genuinely new and relevant, partly because of improvements in unmanned underwater technologies.

How effective are these drones? One might be tempted to see them as the underwater equivalent of the Shahed drones launched in Ukraine. However, in the naval theater of war, more emphasis is placed on surprise and stealth than on quantity. These drones are effective because they can stealthily approach targets and cause significant damage to critical enemy infrastructure.

How do they affect modern warfare? The use of underwater attack drones changes the approach to naval operations, making them more unpredictable and dangerous for the adversary. These devices can force the enemy to spend more resources on the defense of ports and other important facilities, which in turn changes the distribution of forces and affects the overall military strategy.

(MARICHKA) — The new large autonomous underwater vehicle (AUV) developed by AMMO Ukraine is approximately 6 meters long and 1 meter in diameter. The hull is metal, and all or most of the body is a pressure vessel; the warhead sits inside the main fuselage.

The smallest OWA-AUV to date was also presented for the first time by Ukrainian engineers. Its design is unconventional, which emphasizes its uniqueness. The vehicle is fitted with larger stabilizers and much wider propulsors (propellers) than other UUVs, while its range and warhead are smaller than those of other types.

This torpedo-like weapon first appeared in May 2021 but gained significant attention during the 2022 conflict. Guidance is either fully onboard, or external control is very limited after launch. Because it is relatively small, Hamas can launch it from a beach with a crew of four. In essence, it is a combination of unmanned underwater vehicle (UUV) and torpedo technology.

Over the past few years, there have been attacks on merchant ships in the Persian Gulf attributed to the IRGC (Islamic Revolutionary Guard Corps). Some of these attacks involved ships at anchor and are believed to have involved similar drones. Developed for the IRGC, which has been actively engaged in naval hybrid warfare in recent years, this device is much larger than an ordinary torpedo, which gives it a much longer range of at least 500 km. It runs on electric power, and most of the fuselage is taken up by lead-acid batteries. The device is designed for a variety of tasks, including reconnaissance, monitoring, and protection of underwater objects. "Heil" is characterized by high autonomy and the ability to operate in difficult underwater conditions.
Thanks to advanced technologies and modern sensors, the drone is able to collect and transmit data in real time.
<urn:uuid:1455167e-5926-426e-a41c-c58e40c21491>
CC-MAIN-2024-38
https://hackyourmom.com/en/drony/evolyucziya-pidvodnyh-udarnyh-droniv-u-suchasnij-vijni/
2024-09-20T16:05:21Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701419169.94/warc/CC-MAIN-20240920154713-20240920184713-00334.warc.gz
en
0.968143
696
3.125
3
Gaming PCs are no joke: the serious processing power and hardware you build in to run demanding games come with factors you need to watch, and heat is one of them. The smarter the processor and GPU you get, the more heat they will produce, because they process far more information than an ordinary computer. Dedicated fans for your CPU and GPU help dissipate that heat and keep your hardware safe and cool. If you notice that your fans are randomly ramping up, here are a few things to check.

Fans Randomly Ramp Up

1) Disable overclocking

These fans come with temperature sensors, and if they detect that your hardware temperature is rising more than it should, they ramp up to bring the CPU or GPU back to its optimal temperature efficiently. In other words, if your PC is overheating, the fans automatically speed up to cool it down. This can happen if you are overclocking your GPU or CPU, since overclocking makes the hardware run hotter and the fans must spin faster to keep up. To fix such a problem, check whether you are overclocking your hardware and disable it if you are. Overclocking can heat the hardware more than it should, which not only causes the fans to ramp up but can also damage your components over the long run or at least shorten their lifespan.

2) Enable fan smoothing

If you are not overclocking and the fans still ramp up for no apparent reason, check the BIOS settings. Advanced CPUs and their BIOS offer quite a few options, and fan smoothing is one of them. Fan smoothing makes the fans respond to an averaged temperature reading over a short period rather than to every momentary spike, so they hold a steadier speed while still keeping your PC cool. Access the BIOS, enable fan smoothing from there, and that should stop the fans from surging unexpectedly.

3) Increase the fan curve

There is also a possibility that your PC is producing more heat than your fans can dissipate at their current settings, which will cause them to ramp up. In that case, increase the fan curve manually and adjust it so the fans run at a speed that handles the load comfortably; that should resolve the random ramping for good.
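To make the fan curve and smoothing ideas above concrete, here is a small illustrative sketch. The curve points, temperatures, and five-reading window are made-up values, and real adjustments are made in the BIOS/UEFI or vendor software rather than in Python.

```python
# Minimal sketch of how a manual fan curve maps temperature to fan speed,
# and how smoothing (averaging recent readings) keeps a brief spike from
# causing an audible ramp-up. All values below are illustrative only.
FAN_CURVE = [(30, 20), (50, 35), (70, 60), (85, 100)]  # (temp °C, duty %)

def fan_duty(temp_c: float, curve=FAN_CURVE) -> float:
    """Linearly interpolate a fan duty cycle (%) for a given temperature."""
    if temp_c <= curve[0][0]:
        return curve[0][1]
    if temp_c >= curve[-1][0]:
        return curve[-1][1]
    for (t0, d0), (t1, d1) in zip(curve, curve[1:]):
        if t0 <= temp_c <= t1:
            frac = (temp_c - t0) / (t1 - t0)  # fraction between the two points
            return d0 + frac * (d1 - d0)

def smoothed(readings, window=5):
    """Average the last few readings, the idea behind 'fan smoothing'."""
    recent = readings[-window:]
    return sum(recent) / len(recent)

# A brief spike to 78 °C barely moves the smoothed value, so the fan
# speed stays steady instead of jumping up and down.
history = [62, 63, 61, 78, 62]
print(f"raw 78 C -> {fan_duty(78):.0f}% duty")
print(f"smoothed {smoothed(history):.1f} C -> {fan_duty(smoothed(history)):.0f}% duty")
```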
<urn:uuid:81d7e4d8-bdd3-4275-909c-acf2741a79cc>
CC-MAIN-2024-38
https://internet-access-guide.com/fans-randomly-ramp-up/
2024-09-20T17:53:45Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701419169.94/warc/CC-MAIN-20240920154713-20240920184713-00334.warc.gz
en
0.955548
576
2.53125
3
You may have noticed that your internet browser uses a lot of RAM; possibly more than many of the desktop programs you use. This is almost certainly true if you run the newest versions of Chrome and Firefox. But is it really a bad thing that your internet browser uses a lot of RAM? First you have to understand how RAM works.

How Does RAM Work?

It's usually a good thing when your computer utilizes RAM. RAM works like a quick reference for recently used information, giving fast access to the programs and data you use most. If your computer and programs are running well, there's no reason to be concerned about the amount of RAM your computer is using.

Should I Be Concerned That My Internet Browser Uses a Lot of RAM?

Internet browsers of the past didn't have the advanced functionality of the browsers we use today. The days of basic static text and photo webpages are behind us. The newest internet browsers use plugins and extensions and support web apps that are just as functional as their desktop counterparts (like in-browser messenger apps, games, and document editing).

Internet browsers also use RAM to become faster and more reliable. By splitting the browser's work into separate processes, one crashing tab or plugin is far less likely to take down the entire browser. Other features, like prerendering, are used by some browsers to load pages faster at the expense of RAM.

Why Empty RAM Is Rarely a Good Thing

Empty RAM on your computer is an unused resource that could help your computer or programs perform better. Cached data, like that used by internet browsers, is marked low-priority, which means it can be instantly freed up and reallocated to something more important.
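If you are curious how much RAM those separate browser processes actually hold, the following is a rough sketch of how you might add them up. It assumes the third-party psutil package and that your browser's processes contain "chrome" in their name; adjust the name fragment for Firefox, Edge, and so on.

```python
# Minimal sketch: summing the RAM used by all processes belonging to a browser.
# Assumes the third-party "psutil" package; the process name to match depends
# on your OS and browser ("chrome", "chrome.exe", "firefox", ...).
import psutil

def browser_memory_mb(name_fragment: str = "chrome") -> float:
    total_bytes = 0
    for proc in psutil.process_iter(["name", "memory_info"]):
        try:
            if name_fragment in (proc.info["name"] or "").lower():
                # RSS = resident set size, the physical RAM the process holds.
                total_bytes += proc.info["memory_info"].rss
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue  # the process exited or is off-limits; skip it
    return total_bytes / (1024 * 1024)

print(f"Browser processes are using roughly {browser_memory_mb():.0f} MB of RAM")
```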
<urn:uuid:031fb61f-ce0b-4aaf-a471-775af62e8045>
CC-MAIN-2024-38
https://www.ccsipro.com/blog/internet-browser-uses-lot-ram/
2024-09-20T17:14:43Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701419169.94/warc/CC-MAIN-20240920154713-20240920184713-00334.warc.gz
en
0.93173
366
3.03125
3
In today’s rapidly evolving business landscape, achieving process excellence is more crucial than ever. Companies across multiple industries are continually seeking methods to streamline their operations, enhance productivity, and foster innovation. Among the most effective strategies for advancing process management are Scrum and Agile methodologies. Though often used interchangeably, these two concepts have unique attributes and, when combined, can drive significant improvements in how businesses function. Understanding Scrum and Agile Origins and Definitions Scrum and Agile have deep roots in the world of software development but have since found applications in various other industries. Scrum was first conceptualized in a 1986 Harvard Business Review article using rugby as a metaphor for creative processes. It was later formalized as a software development framework, focusing on iterative progress through defined roles and cyclical review processes. Agile, on the other hand, was born in 2001 from a manifesto by 17 software developers. This manifesto outlined values and principles aimed at streamlining software development processes, focusing on customer collaboration, flexible response to changes, and frequent delivery of valuable software. Distinct Yet Complementary While Scrum and Agile are often mentioned together, it’s essential to recognize their distinctions. Scrum provides a structured, cyclical approach focused on small, iterative improvements through predefined roles such as Scrum Master, Product Owner, and Development Team. The framework emphasizes time-boxed iterations known as sprints, each culminating in reviews and retrospectives. Agile represents a broader cultural mindset prioritizing flexibility, collaboration, and continuous enhancement. By weaving Scrum’s structured methods into an Agile framework, organizations can harness the full potential of both methodologies. This synergy facilitates not only the improvement of individual projects but also enhances overarching organizational practices and goals. Scrum’s Structured Teams Scrum emphasizes the importance of small, tightly defined teams with specific roles and objectives. This structure involves roles such as the Scrum Master, who facilitates the Scrum process and removes impediments; the Product Owner, who represents the stakeholders and prioritizes the backlog; and the Development Team, responsible for delivering increments of the product. This clear delineation of roles mirrors excellent process management practices, where process owners and subject matter experts play pivotal roles in defining and refining business processes. The defined framework and role-specific responsibilities ensure efficiency and accountability, allowing teams to focus on delivering value incrementally. Agile’s Collaborative Environment In contrast, Agile supports a more fluid and collaborative process environment. It encourages shared responsibility among team members and stakeholders, fostering an inclusive culture where everyone is involved in improving business processes. Agile’s foundational principles advocate for customer collaboration over contract negotiation and responding to change over following a rigid plan. This collaborative ethos ensures that ideas and feedback are continuously incorporated into the process management cycle, promoting agility and adaptability. The inclusive nature of Agile ensures that process improvements are not just top-down mandates but are embraced and driven by those actively involved in the work. 
Process Sprints and Reviews Continuous Cycles of Improvement Scrum’s methodology revolves around “sprints,” which are short cycles of planning, execution, and review. These sprints embody the principle of continuous improvement, helping to prevent processes from becoming outdated or brittle. Each sprint, typically lasting two to four weeks, includes stages such as sprint planning, daily stand-ups, sprint execution, and a final sprint review meeting. These time-boxed iterations allow teams to make regular, manageable changes, enabling them to adapt quickly to new information or shifting requirements. The iterative nature ensures that improvements are incremental and sustainable, contributing to a dynamic process management strategy. Regular Evaluations and Retrospectives Regular evaluations and retrospectives are critical components of both Scrum and Agile methodologies. These practices ensure that all team members are actively involved in assessing and enhancing business processes. Scrum, through its review and retrospective meetings, provides structured opportunities for the team to reflect on their performance and the process itself. Agile extends this through its emphasis on regular feedback loops and continuous improvement. By committing to frequent, small-scale improvements, organizations can maintain a dynamic and responsive process management strategy that adapts to changing needs and challenges. This iterative evaluation fosters a culture of openness and adaptability. Agile’s Culture of Continuous Improvement Agile methodology promotes a culture where the pursuit of process excellence becomes a default approach rather than an isolated task. This mindset supports the continuous delivery of usable outcomes at every stage, emphasizing the importance of constant, incremental process enhancements. The Agile principles encourage teams to deliver working software or tangible outputs frequently and to welcome changing requirements, even late in development. With Agile, every team member is invested in the ongoing quest for better processes, contributing to a culture where improvement is an integral aspect of daily work rather than a sporadic initiative. Delivering Usable Outcomes Delivering usable outcomes at every stage is a cornerstone of the Agile philosophy. This approach encourages frequent, small-scale process improvements that are both practical and achievable. By focusing on delivering tangible results, teams can create a culture of continuous innovation that aligns with the principles of process excellence. The regular delivery of increments allows stakeholders to see progress and provide feedback, which can be immediately integrated into subsequent sprints. This iterative delivery not only ensures that the final product meets user needs more accurately but also keeps the team motivated and focused on delivering value continuously. Integration of Scrum and Agile Adopting Scrum practices within an Agile framework offers the most effective results for process management. Scrum’s structured cycles enhance Agile’s flexible and collaborative ethos, creating a comprehensive approach to process improvement. This integration leverages the strengths of both methodologies to establish a robust framework for achieving process excellence. The structured, role-defined nature of Scrum, combined with Agile’s emphasis on collaboration and adaptability, ensures that the process improvements are consistent, incremental, and sustainable. 
It creates an environment where structured goals and flexible processes can coexist and thrive. The real-world applications of combining Scrum and Agile are vast and varied. From software development to manufacturing, healthcare, and beyond, these methodologies have proven to be invaluable in enhancing process management. Organizations that successfully integrate Scrum and Agile can achieve more resilient and efficient operations, leading to sustained competitive advantages. For example, in the healthcare industry, where adaptability and precision are paramount, combining these methodologies can streamline patient care processes and improve outcomes. In manufacturing, the iterative cycles of Scrum can enhance product development, while Agile’s adaptability can address changing market demands swiftly. Incremental and Continuous Improvements Both Scrum and Agile emphasize the importance of regular review and enhancement, ensuring that business processes remain relevant and efficient. This commitment to continuous, incremental improvements aligns with best practices in process management, fostering sustainable and resilient operational strategies. Scrum’s structured review processes and Agile’s broader cultural emphasis on continuous improvement create a feedback-rich environment. Regular inspections and adaptations ensure that processes are consistently refined, preventing obsolescence and promoting sustainability. This creates a robust framework where processes are not only efficient but also scalable and adaptable to future challenges. A collaborative culture is essential for the successful implementation of Scrum and Agile methodologies. Agile’s focus on collaboration supports Scrum’s structured practices, ensuring that process improvements are holistic, inclusive, and widely accepted across the organization. The collaborative involvement of all stakeholders fosters shared ownership and accountability, enhancing the likelihood of successful process improvements. Engaging all team members in the process management cycle ensures a diverse range of insights and ideas, leading to more innovative solutions. This collective effort drives a culture where continuous improvement is a shared vision, enhancing overall organizational performance. In today’s fast-changing business environment, achieving process excellence is more important than ever. Companies across various industries constantly look for ways to streamline operations, boost productivity, and encourage innovation. Two of the most effective strategies for improving process management are Scrum and Agile methodologies. Though often used interchangeably, Scrum and Agile have distinct characteristics. Agile is a broader philosophy focused on iterative development, adaptability, and customer collaboration. Scrum, a subset of Agile, provides a specific framework for roles, events, and artifacts, helping teams manage and complete complex projects. Combining these methodologies can significantly enhance business functions, offering a structured yet flexible approach to managing tasks and delivering value. In a world where efficiency and adaptability are paramount, integrating Scrum and Agile can propel organizations toward achieving their goals more effectively and efficiently. Utilizing these methodologies enables businesses to respond to changes swiftly and stay ahead in a competitive landscape.
<urn:uuid:3658263b-b407-4f3b-abdd-e70d037c9166>
CC-MAIN-2024-38
https://developmentcurated.com/development-operations/how-do-scrum-and-agile-drive-process-excellence-in-industry/
2024-09-08T12:09:09Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651002.87/warc/CC-MAIN-20240908115103-20240908145103-00534.warc.gz
en
0.916621
1,811
2.546875
3
Databases are transforming as they move away from the on-premise setting and head to the cloud. Cloud-based databases are fast gaining popularity because they are becoming the most efficient way to store, manage, and retrieve all kinds of data – structured, unstructured, and semi-structured – through a cloud platform. Cloud databases offered as managed services are known as database as a service (DBaaS); organizations that instead choose a traditional cloud database still need database administration services with a proven track record.

All-round accessibility is one of the most prominent benefits of cloud databases because users can access them from anywhere by using a web interface or a vendor's API. Cloud solutions do away with the need to build infrastructure and lower cost, as the service provider takes care of auto-scaling, recovery from failure, and automated processes. As data volumes increase at a breathtaking rate and the variety of data multiplies, the cloud database is the most appropriate way to manage diverse data generated at unprecedented speeds. Cloud databases are a notch above traditional databases in that they focus on end-to-end analytics instead of transactional processing of data. They let users navigate the data and translate raw data into business language, so users can fully understand the various sets of views and tables. Users can apply the power of analytics to drive their business to the next level.

Deployment models of cloud database

Cloud database models are available in two types – traditional and database as a service (DBaaS).

Traditional cloud database – This type of database is like the conventional database managed in-house but comes with infrastructure provisioning. Organizations intending to use a traditional cloud database must approach a cloud service provider to purchase virtual machine space for deploying the database in the cloud. To control the database, organizations entrust the task to IT staff or developers who use a DevOps model. Database management is the responsibility of the organization.

Database as a service (DBaaS) – In this arrangement, the cloud service provider offers a fee-based subscription to organizations interested in using the database. In exchange, the service provider undertakes several real-time tasks related to administration, operation, maintenance, and database management. The service provider creates the infrastructure for running the database and uses automation for provisioning, scaling, backup, security, patching, high availability, and health monitoring. Organizations derive maximum value from DBaaS because software automation does away with the need to hire dedicated database administrator services such as RemoteDBA.

Benefits of cloud database

The benefits of cloud databases are much like those of cloud services in general.
- Improved innovation and agility – Setting up a cloud database takes minimal time, and so does its de-commissioning. It speeds up testing, validating, and operationalizing new business ideas. Organizations can abandon a project whenever they want and turn to the next innovation.
- Reduced risks – Cloud databases offer several opportunities for reducing business risks, especially when using DBaaS models.
By using security best practices, cloud service providers can minimize human error, which is the primary reason for software downtime. Thanks to automated high-availability features and SLAs (service level agreements), downtime is almost non-existent, which helps avoid revenue loss. The cloud infrastructure is a nearly infinite pool of resources that supports any scale-up or scale-down to meet business needs, which does away with the need for capacity forecasting. The system is ready to accommodate any capacity whenever needed.
- Lower costs – The dual features of dynamic scaling and pay-per-use subscription allow users to plan for a steady state, scale up to meet peak demands, and scale back down when demand falls. It is less expensive than maintaining in-house capabilities. The option of turning off services when not needed also saves cost.
- Faster time to market – As the cloud infrastructure is readily available and database access can be provisioned in minutes, time to market improves dramatically.

Cloud database choices

How cloud databases are managed depends on the organization, which can pick from the following database management styles.

Self-managed cloud databases – In this model the organization takes full responsibility for managing a database hosted on cloud infrastructure, using in-house resources and no automation. The benefits are like those of locating a database in the cloud, but the organization stays in full control of database management by appointing a database administrator.

Automated cloud databases – To get the benefits of automated cloud databases, organizations use an API provided by the vendor to assist with lifecycle operations, but they control the operating system and database configuration and maintain access to the database servers. The SLAs of this model are limited and exclude activities like maintenance and patching.

Managed cloud databases – This model has many similarities with automated cloud databases, but consumers do not have access to the hosting database servers. Consumers must make do with vendor-supported configurations, as end users are prohibited from installing their own software.

Autonomous cloud databases – This is the most advanced model of cloud database; it eliminates human labor from database management. This new, hands-free operating model makes the fullest use of automation for performance tuning and database management. Services include zero-downtime operations for planned and unplanned service lifecycle and database activities.

Cloud databases are now adopting a hybrid cloud concept that accommodates growing data management needs and can collect, replicate, deliver, and push all your data to the edge. It is now possible to create applications that retrieve data from remotely located servers in seconds, because DBaaS can replicate and distribute data immediately, offering near real-time access to worldwide data. Most importantly, users can directly connect their applications to the database.
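As a rough illustration of that last point, here is a minimal sketch of an application connecting straight to a hypothetical managed (DBaaS) PostgreSQL endpoint. The hostname, credentials, and database name are placeholders, and the example assumes the SQLAlchemy package plus a PostgreSQL driver such as psycopg2.

```python
# Minimal sketch: connecting an application directly to a managed (DBaaS)
# PostgreSQL instance. Hostname, credentials, and database name are placeholders.
from sqlalchemy import create_engine, text

# Managed services typically hand you an endpoint and require TLS ("sslmode").
DATABASE_URL = (
    "postgresql+psycopg2://app_user:app_password"
    "@example-instance.db.cloud-provider.example:5432/appdb?sslmode=require"
)

engine = create_engine(DATABASE_URL, pool_pre_ping=True, pool_size=5)

with engine.connect() as conn:
    # The provider handles provisioning, patching, backups, and failover;
    # the application only ever sees a connection string.
    version = conn.execute(text("SELECT version()")).scalar_one()
    print("Connected to:", version)
```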
<urn:uuid:3577ed31-95d1-4af2-80a1-1222cabed6d7>
CC-MAIN-2024-38
https://www.m2sys.com/blog/others/an-overview-of-cloud-database-and-its-benefits/
2024-09-09T19:30:31Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651133.92/warc/CC-MAIN-20240909170505-20240909200505-00434.warc.gz
en
0.929067
1,212
2.671875
3
Find Replace Tool One Tool Example Find Replace has a One Tool Example. Go to Sample Workflows to learn how to access this and many other examples directly in Alteryx Designer. Use Find Replace to find a string in one column of a dataset and look up and replace it with the specified value from another dataset. You can also use Find Replace to append columns to a row. The Find Replace tool has 3 anchors: F input anchor: This input is the initial input table ("F" for "Find"). This is the table that is updated in the tool's results. R input anchor: This input is the lookup table ("R" for "Replace"). This is the table that contains data used to replace data in (or append data to) the initial input. Output anchor: The output anchor displays the results of the Find Replace tool. Configure the Tool The Find Replace tool configuration is comprised of 2 sections: Find and Replace. Choose the radio button that best describes the part of the field that contains the value to find: Beginning of Field: Searches for the instance of the field value at the beginning of the field. The entire field does not have to only contain what is being searched for. Any Part of Field: Searches for the instance of the field value in any part of the field. The entire field does not have to only contain what is being searched for. Entire Field: Searches for the instance of the field value contained within the entire field. The instance MUST be there in its entirety to be replaced with the new value. Find Within Field: Select the field in the table with data to be replaced (F input anchor) by data in the reference table (R input anchor). Find Value: Select the field from the reference table (R input anchor) that contains the same values as the Find within Field field in the original table (F input anchor). Select optional search conditions: Case Insensitive Find: This option will ignore the case in the search. Match Whole Word Only: Strings are only matched if there are leading and trailing spaces. For strings at the beginning or end of a cell, there must be a space at the other end. You can choose to replace or append data in the table using these radio buttons: Replace Found Text With Value: Choose the field from the reference table (R input anchor) to use to update the original table (F input anchor) Find Within Field. Optionally select Replace Multiple Found Items (Find Any Part of Field only). This should only be used if you selected Any Part of Field from the first radio button. Append Field(s) to Record: Choose this option to append a column populated with the reference table (R input anchor) data whenever the selected Find Value field data is found within the selected Find Within Field. Select the fields to append.
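For readers who want to see the equivalent logic outside Designer, here is a rough pandas sketch of the two behaviors described above: replacing found text and appending reference columns. The column names and sample rows are made up, and this only illustrates the behavior; it is not how Alteryx implements the tool.

```python
# Minimal sketch reproducing the two Find Replace behaviours in pandas.
import pandas as pd

# F input: the table to update.           R input: the lookup/reference table.
f = pd.DataFrame({"City": ["NYC office", "LA office", "Chicago office"]})
r = pd.DataFrame({"Find": ["NYC", "LA"],
                  "ReplaceWith": ["New York", "Los Angeles"],
                  "Region": ["East", "West"]})

# "Replace Found Text With Value", matching any part of the field.
replaced = f.copy()
for find, repl in zip(r["Find"], r["ReplaceWith"]):
    replaced["City"] = replaced["City"].str.replace(find, repl, regex=False)

# "Append Field(s) to Record": attach reference columns whenever the
# Find value occurs anywhere in the field (left join keeps unmatched rows).
f["_key"] = f["City"].apply(
    lambda v: next((k for k in r["Find"] if k in v), None))
appended = f.merge(r[["Find", "Region"]], left_on="_key", right_on="Find",
                   how="left").drop(columns=["_key", "Find"])

print(replaced)
print(appended)
```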
<urn:uuid:f6d729f1-7913-4f52-babf-cf0fb724065d>
CC-MAIN-2024-38
https://help.alteryx.com/20231/en/designer/tools/join/find-replace-tool.html
2024-09-11T01:11:22Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651323.1/warc/CC-MAIN-20240910224659-20240911014659-00334.warc.gz
en
0.807553
597
2.921875
3
Never in his wildest dreams did Jim Bell imagine he would help take pictures of Mars. But in 2004, he did exactly that when he led the photography team for NASA's mobile robot missions to Mars. Now some of the pictures snapped by the two robot rovers named Spirit and Opportunity can be viewed in Bell's new book, "Postcards from Mars." The work includes some 150 pictures of the robots' ongoing exploration of Mars' desert-like surface. The two rovers have taken more than 150,000 pictures during their nearly three-year visit to the red planet. For the first time, what came to earth as massive images, some more than 100MB, have been edited, cropped, processed and published in a book for public viewing.

Bell, an associate professor at Cornell University in Ithaca, N.Y., admits he is not a professional photographer. But he has followed his childhood love of the art form all the way to another world. When it comes to space photography, researchers often take "whatever you can get" because of the difficulties involved, he said. But the technology on Spirit and Opportunity and the duration of the mission gave those involved with the project the chance to think like photographers, he said. "Once in a while we can think about framing a picture…we can make those decisions that landscape photographers make all the time," he said.

The engineers estimated the robots would last 90 days. Bell hoped maybe, with a little luck, they would last 180. But no one, he said, thought the robots would still be there taking pictures more than a thousand days after touching down. He credits the robots' resiliency to good design and luck. For example, Bell said, there was concern Martian dust storms would cake the robots' solar panels with dust and make them useless. However, wind has routinely cleaned the panels off.

It was a long journey for Bell to get to this point. About 10 years ago, NASA announced a competition for scientific instruments to be used for an upcoming mission. Bell and a team of scientists and researchers from around the world submitted a proposal and were rejected twice. The third time, however, was the charm, and the group won the right to design a robotic rover. NASA officials later decided they wanted two robots to reduce the risk. All in all, he said, it took about 39 months to build the machines.

Spirit and Opportunity are far from the first robots to land on Mars. The first, in fact, touched down in 1976. But what makes the two robots different is how mobile they are. The previous robots were either stationary or were only able to travel a short distance. Spirit and Opportunity have navigated more than 4 and 6 miles of the planet's surface, respectively, he said. Depending on the position of the planets, it can take as little as four minutes, or as much as 20, for signals sent by the rovers to reach the earth and vice versa. For that reason, Bell explained, the robots are not given commands in real time. Instead, scientists send them a list of commands in the morning to govern the machines' movements for the entire day.

The cameras the robots are equipped with provide a level of resolution equivalent to 20/20 vision in human beings, roughly three times better than the best camera previously used on Mars, he said. Among the most important things the cameras have turned up is proof of the presence of water on ancient Mars in the form of pictures of marks on rocks on the planet's terrain. "Now we know that there was liquid water on the surface of Mars a long time ago," Bell said.
<urn:uuid:ba8f8577-6de3-43a9-9651-eaab3bcd9314>
CC-MAIN-2024-38
https://www.eweek.com/networking/robot-rovers-provide-a-peek-at-mars/
2024-09-11T00:04:45Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651323.1/warc/CC-MAIN-20240910224659-20240911014659-00334.warc.gz
en
0.976012
749
2.6875
3
Microsoft Developing State-of-the-Art Algorithm to Accelerate Path for Quantum Computers to Address Climate Change (Microsoft) Matthias Troyer , Distinguished Scientist, has penned this recent must-read Microsoft Research blog. Troyer explains that the question emerges that is both scientific and philosophical in nature: once a quantum computer scales to handle problems that classical computers cannot, what problems should we solve on it? Quantum researchers at Microsoft are not only thinking about this question—we are producing tangible results that will shape how large-scale quantum computer applications will accomplish these tasks. We have begun creating quantum computer applications in chemistry, and they could help to address one of the world’s biggest challenges to date: climate change. Microsoft has prioritized making an impact on this global issue, and Microsoft Quantum researchers have teamed up with researchers at ETH Zurich to develop a new quantum algorithm to simulate catalytic processes. In the context of climate change, one goal will be to find an efficient catalyst for carbon fixation—a process that reduces carbon dioxide by turning it into valuable chemicals. One of our key findings is that the resource requirements to implement our algorithm on a fault-tolerant quantum computer are more than 10 times lower than recent state-of-the-art algorithms. These improvements significantly decrease the time it will take a quantum computer to do extremely challenging computations in this area of chemistry. The research presented in this post is evidence that rapid advances in quantum computing are happening now—our algorithm is 10,000 times faster than the one we created just three years ago. By gaining more insight into how quantum computers can improve computational catalysis, including ways that will help to address climate change while creating other benefits, we hope to spur new ideas and developments on the road to creating some of the first applications for large-scale quantum computers of the future.
<urn:uuid:5be4bc51-8bfa-419e-bbfc-050adfd912a6>
CC-MAIN-2024-38
https://www.insidequantumtechnology.com/news-archive/microsoft-developing-state-of-the-art-algorithm-to-accelerates-path-for-quantum-computers-to-address-climate-change/
2024-09-11T00:57:42Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651323.1/warc/CC-MAIN-20240910224659-20240911014659-00334.warc.gz
en
0.925101
380
2.75
3
High speed broadband wireless Internet is becoming more widely available in many parts of the globe and is delivered over a growing variety of connection types. For people in the market for a smartphone, or trying to decide on the best type of Internet access, the wide array of choices can make the decision rather confusing. The most popular types of Internet access connections these days are WiFi, 3G (third generation wireless), and 4G (fourth generation wireless). WiFi connectivity is available in many different places, including your home, public venues, transportation hubs, retail establishments, educational institutions, and just about any other location where you are out in public. 3G Internet access is available through most cellular phone carriers and provides a high speed Internet connection via your mobile phone or other mobile device. 3G replaced 2G and is also the precursor to 4G high speed Internet access. 4G Internet is the most recent form of Internet access and is available through most cellular carriers; however, due to its recent introduction to the mobile environment it is not yet widely available in all areas of the globe. Now that you have a general overview of WiFi, 3G, and 4G Internet access technologies, let's delve a little deeper into the differences between them and the purposes each type of connection serves to help you stay connected in an increasingly mobile world.

WiFi Internet Access

WiFi technology provides wireless Internet access via radio waves which transmit a signal to a wireless enabled device. The radio waves typically operate on a frequency of 2.4 GHz and follow a standard set forth by the IEEE, the Institute of Electrical and Electronics Engineers. The standard is known as 802.11 and offers a number of different levels of bandwidth, each symbolized by a letter. The different levels include 802.11a, 802.11b, 802.11g, and most recently, 802.11n. The letter refers to the revision of the standard, whose bandwidth capability determines the performance of the wireless connection. Also under development is the 802.11ae standard, a WiFi connection being developed to accommodate new and innovative technologies which are about to be introduced to the mobile environment. WiFi is the nickname for a wireless Internet connection and is accessed by devices equipped with a wireless network card that communicates with the wireless router. You can access this type of connection from up to one hundred feet away, and the performance of the connection can vary according to the 802.11 standard being used and the number of devices connecting to the router simultaneously. Currently WiFi is used just about anywhere and provides a high speed broadband Internet connection. A WiFi connection offers efficiency when using the latest applications; however, the connection is typically slower than 3G or 4G.

3G Internet Access

3G Internet access stands for Third Generation wireless technology. It is called third generation because it follows two previous generations: 1G, which was initiated in the early 1980s for use with cellular technology, and 2G, which was established in the 1990s for cellular communications technology. Neither of those earlier types of Internet access would support current applications; however, they were considered efficient for their time.
3G connectivity offers improved efficiency, operating at greater speed than 1G or 2G, and also provides more broadband capability than WiFi. 3G was initiated as part of 3GPP, the Third Generation Partnership Project, which was first established in the late 1990s for the purpose of deploying 3G networks. The implementation of 3GPP marked the beginning of Evolution Data Optimized, or EVDO, which increases download speeds to up to 2.4 Mbps, and UMB, or Ultra Mobile Broadband, which is capable of speeds of up to 288 Mbps (megabits per second) when downloading data to your mobile device. 3G mobile broadband offers significant improvements over WiFi access, providing more efficiency for streaming video, using videoconferencing applications such as Skype, and browsing the Internet at higher speeds than a WiFi connection.

4G Internet Access

4G, commonly referred to as Fourth Generation wireless technology, is the most recent form of mobile broadband Internet access. It offers higher capabilities than 3G connectivity since it offers a higher rate of data transfer. 4G is typically offered via an LTE (Long Term Evolution) network or via a WiMAX network. LTE is capable of offering speeds of up to 100 Mbps, whereas a WiMAX network offers speeds of up to 70 Mbps. Since 4G follows 3G technology, you can consider it an upgraded connection capable of more bandwidth and more services. This means that it can accommodate the latest bandwidth-intensive applications without sacrificing performance. With 4G you can stream high definition video, engage in videoconferencing at much higher data transfer rates, and utilize other applications which enable ubiquitous computing, meaning you can access virtually any bandwidth-intensive application from any location which has access to a 4G connection.

4G offers more capability because it is an end-to-end data connection, which provides some real advantages when it comes to running the most recent applications. On one hand, it enhances the quality of voice communications by converting voice to data faster than a 3G connection can. Since 4G is strictly a data connection, voice quality would otherwise be compromised on a lower type of connection that is not data driven. LTE and WiMAX quickly convert voice to data, which results in better clarity and connection quality. 4G is available through your wireless cellular carrier, but because it is so recent it may not be available in the area where you live. The upside is that the establishment of 4G is a step forward for new innovations and applications which accommodate an environment that is becoming increasingly mobile, thanks to new types of broadband connectivity. You also want to stay tuned for 802.11ae, an up and coming mobile broadband connection which will offer capability that goes a step beyond 4G connectivity.
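Note that all of these rates are quoted in megabits per second, which is easy to confuse with megabytes. As a back-of-the-envelope illustration that ignores protocol overhead and real-world signal conditions, the sketch below compares theoretical download times at the peak rates mentioned above.

```python
# Minimal sketch: estimating how long a download takes at the link speeds
# mentioned above. Link rates are in megabits per second (Mbps), while file
# sizes are usually in megabytes (MB), so multiply the file size by 8.
LINK_SPEEDS_MBPS = {"3G EVDO": 2.4, "WiMAX (4G)": 70, "LTE (4G)": 100}

def download_seconds(file_size_mb: float, link_mbps: float) -> float:
    file_size_megabits = file_size_mb * 8
    return file_size_megabits / link_mbps

FILE_MB = 700  # e.g. a standard-definition movie file
for name, mbps in LINK_SPEEDS_MBPS.items():
    secs = download_seconds(FILE_MB, mbps)
    print(f"{name:>12}: {secs / 60:5.1f} minutes for a {FILE_MB} MB file")
```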
<urn:uuid:5af3f40f-f2d1-4466-8c1e-75e7cd1ce36d>
CC-MAIN-2024-38
https://internet-access-guide.com/understanding-the-difference-between-wifi-3g-and-4g/
2024-09-12T05:47:09Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651422.16/warc/CC-MAIN-20240912043139-20240912073139-00234.warc.gz
en
0.957121
1,218
3
3
Take These Steps To Quickly Improve Cyber Security

Everyone is interested in the silver bullet that will magically make them completely secure and safe from any cyber threat. It doesn't exist, but as Thorin Klosowski points out at Lifehacker, there are a number of ways to become more secure within minutes.

- 2 Factor Authentication

By far the simplest and quickest way to improve security is to enable 2 factor authentication on your online accounts. With this more secure type of log-in, you'll be prompted for your password, but you won't be given access to your account until you provide a second authentication method. In many cases, you'll be texted or called with a code to enter to prove that you are who you say you are. Once you've gone through this process, a hacker would need to be using your computer, or have your smartphone, to gain access to your account.

- Password Manager

A password manager can be added to practically any browser and will automatically log you into accounts that have been added to it. This actually sounds less secure, but the password manager locks away all your passwords and encrypts them so they're safe. You'll only need to remember one master password to use the password manager. Many managers will even generate a strong, random password for each site you wish to use with it, so the only way to log in to those accounts is by having access to the password manager.

- Encrypted Email

Email encryption has some headaches associated with it. Most notably, encrypted emails require a key to read, so whoever you're sending a message to will need the key. But sending them the key over email defeats the purpose of encryption. You probably don't need to encrypt every email you send, but messages containing information like bank accounts, social security numbers or even contact information are good candidates for encryption. Just be sure to send the encryption key by text, or in person.

- Secure Back Up

Backing up your files is always a good idea, but, just like email, it's important to encrypt files containing potentially valuable data. There are a number of services that offer encrypted back ups, but one obstacle is that usually these encrypted files won't be available to you on another machine. That means you won't be able to access them from your smartphone or at work.

These steps will improve your online security, but nothing is unhackable. The idea is to make it as difficult as possible for anyone to access your data and accounts. Geek Rescue specializes in improving your cyber security to keep your information safe and your devices free from malware. Give us a call at 918-369-4335 to find out how to strengthen your security.

September 19th, 2013
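As a postscript for the technically curious: the one-time codes behind two-factor authentication are commonly generated with the time-based one-time password (TOTP) algorithm used by authenticator apps. The sketch below is a minimal illustration assuming the third-party pyotp package; the secret it generates is a throwaway example and should never be reused or published.

```python
# Minimal sketch of the time-based one-time password (TOTP) scheme behind
# most two-factor authentication apps. Assumes the third-party "pyotp" package.
import pyotp

# The service generates this shared secret once (usually shown as a QR code)
# and both the server and your phone app store it.
shared_secret = pyotp.random_base32()
totp = pyotp.TOTP(shared_secret)

code = totp.now()                 # the 6-digit code your phone would display
print("Current one-time code:", code)

# The server verifies the code you type in; it changes every 30 seconds,
# so a stolen password alone is no longer enough to log in.
print("Code accepted:", totp.verify(code))
```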
<urn:uuid:9270c854-eae7-419c-8efd-7b8571006e2f>
CC-MAIN-2024-38
https://www.geekrescue.com/blog/2013/09/19/take-these-steps-to-quickly-improve-cyber-security/
2024-09-12T04:43:27Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651422.16/warc/CC-MAIN-20240912043139-20240912073139-00234.warc.gz
en
0.933419
579
2.546875
3
The National Institute of Standards and Technology (NIST) recently released its draft Privacy Framework: A Tool for Improving Privacy through Enterprise Risk Management (Privacy Framework). What is the NIST Privacy Framework? First, let’s begin with what is NIST? NIST was founded in 1901 and is now part of the U.S. Department of Commerce. According to the NIST website, it is one of the nation’s oldest physical science laboratories, involved in a variety of industries and technologies, from nanomaterials to the smart electric power grid. NIST’s Information Technology Laboratory focuses on the priority areas of Cybersecurity, Internet of Things, and Artificial Intelligence. NIST Security Standards are well known in the cybersecurity field. The Privacy Framework is a voluntary tool to help organizations and to “foster the development of innovative approaches to protecting individual’s privacy; and increase trust in systems, products, and services.” With the release of the Privacy Framework, NIST recognizes that privacy risks and cybersecurity risks are interconnected, and the Privacy Framework provides a flexible tool that can be used to explore that interconnection. What Can Organizations Do with the NIST Privacy Framework? In Section 3.0 of the draft Privacy Framework, it states that “the Privacy Framework can assist an organization in its efforts to optimize beneficial uses of data and the development of innovative systems, products, and services while minimizing adverse consequences for individuals. The Privacy Framework can help organizations answer the fundamental question, ‘How are we considering the impacts to individuals as we develop our systems, products, and services?’” According to the draft, the Privacy Framework can be used for risk management, to strengthen accountability within an organization, and to establish or improve a privacy program. From a practical standpoint, privacy concerns can be incorporated into product development, service delivery, and supply chain management. Organizations may be able to use the Privacy Framework as they seek to mitigate privacy risks in the development of products and services as well as when they store, collect, process, or sell data. Considering the impact to individual privacy in the development of new technology is a key to protecting that privacy. NIST is accepting public comments on the draft Privacy Framework until 5 p.m. EST on October 24, 2019.
<urn:uuid:b27d5eaf-713c-4a3d-90f5-39707a08d4b4>
CC-MAIN-2024-38
https://www.dataprivacyandsecurityinsider.com/2019/09/what-is-the-nist-privacy-framework/
2024-09-14T17:42:15Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651580.73/warc/CC-MAIN-20240914161327-20240914191327-00034.warc.gz
en
0.934353
471
3.046875
3
A Human Algorithm: How Artificial Intelligence Is Redefining Who We Are Huge savings for students Each student receives a 50% discount off of most books in the HSG Book Store. During class, please ask the instructor about purchase details.List Price: | $26.00 | Price: | $13.00 | You Save: | $13.00 | A groundbreaking narrative on the urgency of ethically designed AI and a guidebook to reimagining life in the era of intelligent technology. The Age of Intelligent Machines is upon us, and we are at a reflection point. The proliferation of fast-moving technologies, including forms of artificial intelligence akin to a new species, will cause us to confront profound questions about ourselves. The era of human intellectual superiority is ending, and we need to plan for this monumental shift. A Human Algorithm: How Artificial Intelligence Is Redefining Who We Are examines the immense impact intelligent technology will have on humanity. These machines, while challenging our personal beliefs and our socioeconomic world order, also have the potential to transform our health and well-being, alleviate poverty and suffering, and reveal the mysteries of intelligence and consciousness. International human rights attorney Flynn Coleman deftly argues that it is critical that we instill values, ethics, and morals into our robots, algorithms, and other forms of AI. Equally important, we need to develop and implement laws, policies, and oversight mechanisms to protect us from tech's insidious threats. To realize AI's transcendent potential, Coleman advocates for inviting a diverse group of voices to participate in designing our intelligent machines and using our moral imagination to ensure that human rights, empathy, and equity are core principles of emerging technologies. Ultimately, A Human Algorithm is a clarion call for building a more humane future and moving conscientiously into a new frontier of our own design. " Coleman] argues that the algorithms of machine learning--if they are instilled with human ethics and values--could bring about a new era of enlightenment." --San Francisco Chronicle
<urn:uuid:e880db5d-0815-463e-8dd4-6757d30ad190>
CC-MAIN-2024-38
https://hartmannsoftware.com/books/COM-.-a7816400a2365
2024-09-15T19:02:35Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651647.78/warc/CC-MAIN-20240915184230-20240915214230-00834.warc.gz
en
0.887229
416
2.859375
3
Dirty data, or unclean data, is any type of data that contains inaccurate, incomplete, inconsistent, or outdated information. While these errors are usually very small (think "Mr. Smith" vs. "Mr. Smyth") and caused by human error, dirty data can have far-reaching consequences, especially for data-critical industries such as finance and healthcare. Bad data is estimated to cost the US economy around $3.1 billion (Forbes) in lost productivity, system outages, and higher maintenance costs every year. Experts project that this number is only going to increase in the next few years, especially as it is estimated that 463 exabytes of data will be created each day globally by 2025 (World Economic Forum). To clarify, an exabyte is one billion billion, or one quintillion, bytes. To put this into further perspective, the Commonwealth Scientific and Industrial Research Organisation (CSIRO) in Australia is planning to upgrade its Square Kilometre Array (SKA), a next-generation radio telescope, to generate 300 petabytes of data per year in the next decade. Considering that 1 petabyte is only 0.001 exabyte and we're referring to looking at celestial objects lightyears away, even this pales in comparison to the enormous amount of data we are (and will be) producing each day on Earth. So, while a misspelling may seem harmless, the millions of Mr. Smiths who receive an invoice or letter addressed to a Mr. Smyth from their companies may have a different opinion, and that lost trust can ultimately lead to lost sales.

Learn valuable insights about the IT industry, including essential terms you need to know.

How does data get dirty?

1. Human error

The most common reason data gets dirty is through human error. The well-loved phrase "No one is perfect" is meant to soothe people as they make mistakes in life, but it also explains how slip-ups in data entry, such as typos, happen. Over time, these human errors can pile up and slowly compromise the integrity of your otherwise reliable data. Human error is also one of the leading causes of cybersecurity vulnerability. It's worth noting that you can't eliminate human imperfection, but there are many ways to mitigate this risk. For example, you can train your employees to always double-check their work before submitting it. Even then, it's highly encouraged that you create processes to ensure that an editor or proofreader checks the same entries to ensure their validity.

2. Fake customer entries

Have you ever intentionally entered the wrong name or email address because you didn't want a company to gain private information? You are not alone. Your customers don't owe you their information, and many will not willingly give you their sensitive information if they don't trust you. The best way to reduce this risk is to build client trust. Be transparent with them as much as possible, and never use black-hat practices to manipulate information from prospects. Be genuine: that's the best way to improve your trust rating.

3. No strategy or a lack of it

It's important that your departments are not siloed, especially if they share data points. A lack of data collection strategy can lead to a lazy approach to treating your customers and data. For example, if your marketing team needs to interview the same people as your sales team, both teams must coordinate to ensure no redundancy. This also ensures consistent messaging in your branding. It may be a good idea to assign a data checker within your organization to double-check all data points, even across teams.
4. No data audits
The truth is that all organizations may have some level of bad data at some point, particularly if the company is rapidly expanding. Your website is a perfect example of this. For instance, you may say you serve X number of people on your website, which would have been perfectly accurate when the website went live. Nevertheless, if your company grows, this number could be inaccurate within two months, six months, or however long it takes. Proactively auditing your data is vital to maintaining reliable records. In this age of GDPR, HIPAA compliance, and other increasingly strict consumer privacy laws, the importance of conducting regular data audits cannot be overstated.

Examples of dirty data

1. Duplicate data
This refers to any data that partially or fully shares the same information. It typically occurs when the same information is entered multiple times, usually in different formats: for example, when a customer calls numerous times and is received by a different IT technician who types their name slightly differently each time. Duplicate data can look like this:
- Raine Grey
- Raine Gray
- Rain Grey
- Reine Grey
- Rainey Grey
Duplicate data may also be considered redundant data, which occurs when data between teams is not synced. Thus, even though the system refers to one person (such as Raine Grey, the author of this article), I would show up as five different people.

2. Incomplete data
This is data that lacks information. For example, if you ask a prospect for their complete name for your email newsletter but don't indicate that these fields are mandatory, you may end up with only a first or last name, making your email campaign less personalized.

3. Inaccurate data
Inaccurate data is misleading information or any data that contains mistakes. On some occasions, inaccurate data can also be duplicate data, which would require you or one of your team members to manually check each data entry to find the true one.

4. Outdated data
Outdated data is any data that used to be accurate but is no longer valid for whatever reason. Common examples are old email addresses and changes of title (e.g., Ms. to Mrs. or Mr. to Dr.). This is why regular data audits are especially important.

5. Insecure data
This is any data vulnerable to a cyber threat, such as spear phishing. Insecure data points are not encrypted by any security protocol and are not protected by multifactor authentication. Essentially, insecure data can be accessed by anyone in your company.

How to clean your data
Data management can be simple if you have the necessary tools and resources. Most importantly, you must be steady in your commitment to auditing your customer data regularly so you know where to begin and what to do. After all, you don't know what you don't know. This usually starts with a data warehouse, a centralized repository that provides a unified view of all of an organization's data. From there, you gain a better, more comprehensive understanding of the scope of potential issues and can determine the severity of each. This process of discovering patterns in your data falls under the umbrella of data mining. You can then develop action plans to resolve any detected dirty data. Typically, this is done manually, but some IT teams may use Microsoft Excel. You may also consider the tools and software available on the market today that help you identify and clean dirty data; a small deduplication sketch is shown below.
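To make the cleaning step above concrete, here is a minimal, hypothetical sketch of one common technique: flagging likely duplicate customer names by string similarity. The example names and the 0.8 cutoff are assumptions made for illustration, not recommendations from the article or from any specific product.

```python
from difflib import SequenceMatcher
from itertools import combinations

# Hypothetical customer records exported from a data warehouse.
names = ["Raine Grey", "Raine Gray", "Rain Grey", "Jordan Lee", "Reine Grey"]

def similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity score between two strings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

THRESHOLD = 0.8  # assumed cutoff; tune it against your own data

# Flag pairs similar enough to be the same person entered twice.
for a, b in combinations(names, 2):
    score = similarity(a, b)
    if score >= THRESHOLD:
        print(f"Possible duplicate: {a!r} ~ {b!r} (score {score:.2f})")
```

Flagged pairs would then go to a human reviewer or a documented merge rule, since merging records automatically can itself introduce inaccurate data.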
Protecting yourself against dirty data
Given the volume of data companies need to manage today, it is impossible to prevent some data from getting dirty. That said, you can minimize its potential organizational impact by being proactive about all the information you receive and handle. It is highly recommended that you regularly audit and clean your data. While this cannot wholly eliminate dirty data from your organization, it can make its threat to your bottom line negligible.
<urn:uuid:56f5c8c6-b31e-4dae-b26e-af6253aa836d>
CC-MAIN-2024-38
https://www.ninjaone.com/it-hub/endpoint-security/what-is-dirty-data/
2024-09-15T20:57:19Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651647.78/warc/CC-MAIN-20240915184230-20240915214230-00834.warc.gz
en
0.93871
1,599
3.59375
4
Why Multi-Factor Authentication Is Vital to Securing Your Enterprise In a world where cybersecurity threats, hacks, and breaches are exponentially increasing in frequency and sophistication, it’s more important than ever to take actionable steps to mitigate the risk these attacks have on your enterprise. One surefire method to further protect your network and assets is to implement multifactor authentication policies; also known as MFA security or 2FA for shorthand reference, utilizing this technology is proven to radically reduce attacks levied against your business. In fact, a 2019 report from Microsoft concluded that multifactor authentication effectively blocks 99.9% of automated attacks. Since they fight more than 300 million fraudulent log-ins every day, it’s safe to say that Microsoft is the perfect example of how well this technology works. Let’s dive into what MFA security is and how it can bolster your cybersecurity posture. What Is Multi-Factor Authentication? Multi-factor authentication is an authentication method that necessitates a user to verify their identity with at least two forms of identification. What constitutes a form of identification can vary and usually involves a combination of the following: - Something you know: a password or PIN - Something you are: a fingerprint or other form of biometric identification - Something you have: a trusted smartphone or device that can validate your identity via confirmation codes Typically, basic systems will only require usernames and passwords in order to access whatever information a user is trying to get into; with multifactor authentication, the standard log-in information will be used in conjunction with another layer of verification, such as entering a one-time temporary passcode that is sent to a trusted device. Overall, 2FA ensures that an account can’t be accessed without double-checking the identity of the user with one or more additional forms of credentials. Why Passwords Alone Aren't Enough Let’s start with some troubling statistics about passwords: - The password “123456” is used by 23 million account holders. - An analysis of more than 15 billion passwords reveals the average password has eight characters or less. - A single password is used to access five accounts on average. Passwords are essential for nearly everything we need access to, especially in this age of the Internet and the profound integration of smart devices and the Internet of Things. However, as these statistics illustrate, passwords alone are simply not secure enough to provide ample security. When millions of accounts can be accessed with the same, simple six-digit password and that one password might be able to access five separate accounts, it’s only a matter of time before a person’s entire digital presence is compromised. Essentially, password practices are still far too basic for them to be effective on their own; that’s why MFA security is necessary for better cybersecurity protection. The Importance of MFA Security Multifactor security is vital to securing your enterprise because it ensures that malicious actors can’t access your network or assets due to the additional layer—or layers—of security that MFA practices offer. With MFA cybersecurity practices in place, cybercriminals would have to know a person’s username, password, and have access to the person’s phone, for example. With the ubiquity of smartphones and how essential they are to our lives in the 21st-century, a missing phone would be immediately noticed and reported. 
Following this example of MFA security necessitating a smartphone, the device itself would presumably be locked, which is yet another layer of security for criminals to try and hack into. Basically, it would trigger a long line of obstacles for virtual thieves and hackers to access your network or assets with 2FA policies established and followed. Benefits of Utilizing Multiple Access Controls If it wasn’t already clear, there are many advantages to utilizing multiple access controls in your enterprise, including: MFA’s Flexibility in the Evolving Work Environment With the pandemic continuing to force businesses to shift and adapt to different operations, like remote operations, it’s essential to keep your company’s remote workers protected from a cybersecurity perspective. MFA security enables your employees to access all of their usual required sites and accounts but with added protection. This means that both your employee accounts and company assets are further secured against malicious activity, such as fraudulent log-ins. Multifactor Authentication Doesn’t Disrupt the User Experience As we learned earlier, passwords are often one and the same—so it’s likely that your employees’ passwords aren’t exactly Fort Knox, and necessitating lengthy passwords might result in poor security practices like writing down passwords because they’re too difficult to remember. With multifactor authentication, you can gain peace of mind knowing that even if your accounts don’t have the strongest passwords, they’re fortified against cybercriminals since multiple forms of identification are needed. This reduces the amount of work that your internal IT team needs to spend addressing employee access issues like password resets and empowers them to focus on more strategic tasks. MFA Security Significantly Reduces Risk A security breach caused by anything, but especially if caused by a flimsy user password, would have significant consequences for your company and your clients; after all, the average security breach cost rose to more than $4 million in 2021. Passwords and general bad practices regarding passwords are obviously a huge risk for enterprise security; that’s why implementing MFA practices would significantly reduce cybersecurity risk at your organization. Partner with Compquip to Manage Your Enterprise’s Cybersecurity Posture! For more than four decades, Compuquip has been entrusted to secure and strengthen cybersecurity practices across dozens of enterprises. We provide a wide range of products and services, including Managed Security Services, firewall automation, virtual CISO services, and more. Contact us today to learn how we can fortify your organization’s cybersecurity posture today!
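As a small illustration of the "something you have" factor described above, the sketch below shows how a time-based one-time password (TOTP) can be generated and verified with the third-party pyotp library. This is a simplified sketch, not Compuquip's implementation; the secret is a placeholder and would, in practice, be provisioned per user and stored securely on the server side.

```python
import pyotp

# Placeholder secret; in a real deployment each user gets their own
# randomly generated secret, typically delivered to their phone via a QR code.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The authenticator app on the user's trusted device computes the same 6-digit code.
current_code = totp.now()
print("Code the user's app would show:", current_code)

def second_factor_ok(submitted_code: str) -> bool:
    """Server-side check at login: the password alone is not enough."""
    return totp.verify(submitted_code)

print("Second factor accepted:", second_factor_ok(current_code))
```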
<urn:uuid:1c3d034d-7c8d-4049-a37c-6910a6f5e3c8>
CC-MAIN-2024-38
https://www.compuquip.com/blog/why-multi-factor-authentication-is-important
2024-09-17T00:52:54Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651722.42/warc/CC-MAIN-20240917004428-20240917034428-00734.warc.gz
en
0.935291
1,241
2.671875
3
Zero Trust cybersecurity is an approach to security that assumes that every user and device accessing an organization's network or resources, including those within the organization's network perimeter, should be treated as a potential threat. In a zero-trust model, access to resources is not automatically granted based on a user's location or the fact that they are already within the network. Instead, access must be explicitly granted, authenticated, and authorized per session based on the user's identity, device, context, and behavior. Zero Trust also emphasizes the principle of least privilege, ensuring that users only have access to the resources needed to perform their jobs and nothing more. Zero Trust is vital because traditional perimeter-based security approaches are no longer effective in today's increasingly complex and distributed IT environments. With the rise of cloud computing, mobile devices, and the Internet of Things (IoT), the network perimeter has become increasingly porous, and the number of potential attack surfaces has multiplied. Organizations can no longer rely solely on firewalls and other perimeter defense to protect their assets. Instead, they need a more holistic approach to security that focuses on protecting their data and resources wherever they may be, assuming that no user or device can be trusted implicitly. By implementing a Zero Trust model, organizations can improve their security posture, reduce the risk of data breaches and cyber-attacks, and ensure that their critical assets are always protected. In this track, we will explain Zero Trust and why it's needed to protect organizations against cyber-attacks. We will also look at how Zero Trust evolved over time and where it may go in the future. Various use cases for the Zero Trust model will also be discussed. Receive Continuing Professional Education Credits KuppingerCole Analysts AG is registered with the National Association of State Boards of Accountancy (NASBA) as a sponsor of continuing education on the National Registry of CPE Sponsors. State Boards of accountancy have final authority on the acceptance of individual courses for CPE credits. Complaints regarding registered sponsors may be submitted to the National Registry through its website: nasbaregistry.org. You can get 3 CPEs for this track. After attending this track you will be able to: Field of Study: Information technology Advanced Preparation: None Program Level: Intermediate Delivery Method: Group Live (on-site attendance only) To register for this session, go to https://www.kuppingercole.com/book/eic2023 and book a hybrid event ticket. In order to be awarded the full credit hours, you must attend the whole track on-site, which will be controlled after the Conference. To redeem your CPE Credits please fill and send the following form: https://www.kuppingercole.com/event_cpe/eic2023 After we have checked your attendance you will receive your CPE certificate.
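To make the "never trust, always verify" and least-privilege principles described in this track concrete, here is a minimal, hypothetical sketch of a per-request access decision. The roles, resources, and device checks are invented for illustration and are far simpler than any real Zero Trust policy engine.

```python
from dataclasses import dataclass

# Least privilege: each role maps only to the resources it needs.
ALLOWED = {
    "finance-analyst": {"ledger-db:read"},
    "hr-manager": {"hr-portal:read", "hr-portal:write"},
}

@dataclass
class AccessRequest:
    user: str
    role: str
    resource: str           # e.g. "ledger-db:read"
    mfa_passed: bool
    device_compliant: bool  # e.g. disk encrypted, OS patched

def authorize(req: AccessRequest) -> bool:
    """Evaluate every request; nothing is trusted because of network location."""
    if not (req.mfa_passed and req.device_compliant):
        return False
    return req.resource in ALLOWED.get(req.role, set())

req = AccessRequest("alice", "finance-analyst", "ledger-db:read",
                    mfa_passed=True, device_compliant=True)
print("Access granted" if authorize(req) else "Access denied")
```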
<urn:uuid:8369fb8e-45d4-46ea-b366-1c5f2bdfc339>
CC-MAIN-2024-38
https://www.kuppingercole.com/tracks/936
2024-09-07T10:29:38Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650826.4/warc/CC-MAIN-20240907095856-20240907125856-00798.warc.gz
en
0.935859
591
2.828125
3
A team of New Zealand and Australian physicists have extended the storage time for a prototype quantum super-computer optical hard drive by over one hundred times. Scientists at the University of Otago and the Australian National University (ANU) have achieved a major breakthrough, reported in Nature this week, demonstrating six-hour quantum storage using atoms of the rare earth element europium embedded in a crystal. It has long been hoped that scientists will eventually come up with a way to store data in a state of quantum entanglement for the benefit of ultra-secure communications. However, at present, such states can only be maintained for a short time before the entanglement fails. The New Zealand and Australian research team has come up with a way to store data for hours, rather than milliseconds. This new breakthrough heralds the world’s first solid state quantum hard drive. “Quantum states are very fragile and normally collapse in milliseconds. The fact that we have storage times of hours has the potential to revolutionise how we distribute quantum entanglement in a communication network,” says lead author Ms Manjin Zhong, from the Research School of Physics and Engineering at the ANU. Utilising this effect, a quantum communication network could be used for perfectly secure encryption for data transmission. “Our experiment shows that it is now possible to think of extending the range of quantum communication by storing entangled light in separate memories and then transporting them to different parts of the network,” Ms Zhong said. The team essentially created the ROM by embedding an atom of the rare-earth element of europium into a crystal matrix. After writing a quantum state onto the nuclear spin of the europium using light, the team subjected the crystal to a combination of a fixed and oscillating magnetic fields to lock the atom’s spin in place and preserve the fragile quantum information. “This prevented the quantum information leaking away for as long as six hours, which is quite surprising,” says Dr Jevon Longdell from the Dodd Walls Centre for Photonic and Quantum Technologies at the University of Otago. Current quantum communication networks are limited to distances of about 100km. “You can distribute the entangled pairs of quantum states literally in a box sent via the post. Then use these entangled pairs to come up with a shared secret key and then use this secret key to do the communication. By comparing the results with your friend you can come up with a secret that only you two share. The neat thing is that we have discovered you can do this comparison without a secure channel,” he says. Dr Longdell says that in the future scientists hope there will be quantum or super-fast computers that can solve difficult problems which current computers cannot solve. “Our long term storage of quantum states would be helpful to achieve this,” he says. The team is also excited about the fundamental tests of quantum mechanics that a quantum optical hard drive will enable. “We have never before had the possibility to explore quantum entanglement over such long distances,” said Associate Professor Matthew Sellars, leader of the research team. “We should always be looking to test whether our theories match up with reality. Maybe in this new regime our theory of quantum mechanics breaks.”
<urn:uuid:05cd878e-5c0b-43b4-b561-fde4ac6c43e7>
CC-MAIN-2024-38
https://cioaxis.com/hottopics/a-major-breakthrough-for-quantum-hard-drive-is-achieved
2024-09-09T22:28:32Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651157.15/warc/CC-MAIN-20240909201932-20240909231932-00598.warc.gz
en
0.924879
675
3.234375
3
In this course, you will learn about and use a Microsoft SharePoint Team Site to access, store, and share information and documents. In many professional environments today, people work collaboratively in teams. Information technology and applications facilitate this by allowing people to easily share, access, edit, and save information. Microsoft SharePoint 2016 is a platform specifically designed to facilitate collaboration, allowing people to use familiar applications and Web-based tools to create, access, store, and track documents and data in a central location. A strong understanding of SharePoint's features and capabilities will allow you to work more efficiently and effectively with SharePoint, and with the documents and data stored in SharePoint. Furthermore, effective use of new social networking capabilities will allow you to identify, track, and advance issues and topics most important to you, and collaborate with colleagues more effectively.
<urn:uuid:e5570737-c4e9-4ca7-a51a-98089deed714>
CC-MAIN-2024-38
https://www.lumifywork.com/en-nz/courses/microsoft-sharepoint-2016-site-user-sp16/
2024-09-09T22:26:35Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651157.15/warc/CC-MAIN-20240909201932-20240909231932-00598.warc.gz
en
0.927921
172
2.78125
3
Lesson 4 mentions 2 different types of memory: ROM (read-only memory) and RAM (random access memory). ROM is non-volatile memory that, once installed in the system, is rarely modified; it typically only needs an update (such as a firmware upgrade) when the system runs into issues during boot. Every time I boot up my system, I have about a half-second window to open the BIOS settings stored on the motherboard, and if the system boots properly, those settings are applied. RAM is quite different in that the system constantly reads from and writes to it while it runs. As the system updates and changes across versions, the contents of RAM change along with it. The main downside is that RAM requires more power to keep its contents, whereas ROM is slower to access and uses far less power.
<urn:uuid:31c05763-bc97-4f74-af5f-256ada2b7915>
CC-MAIN-2024-38
https://mile2.com/forums/reply/94979/
2024-09-12T08:54:13Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651440.11/warc/CC-MAIN-20240912074814-20240912104814-00398.warc.gz
en
0.942
167
3.078125
3
Ripple and RippleNet
Ripple is the name given to a technology that serves both as a network for digital payments and as a cryptocurrency for financial transactions. Ripple operates on a peer-to-peer, open-source, decentralized platform that allows seamless money transfer in any form, whether yen, US dollars, bitcoin, or litecoin. Ripple's main aim is to connect payment providers, banks, and digital asset exchanges so that global payments become faster and more cost efficient. The Ripple infrastructure is designed to make transactions convenient and quick for banks, which is why this cryptocurrency is especially popular with larger financial institutions. While the XRP cryptocurrency is often referred to simply as Ripple, Ripple is in fact the company that holds the largest share of XRP. Ripple not only offers several payment systems but also owns almost 60 billion XRP. RippleNet, Ripple's blockchain-based system, provides financial institutions and businesses with several products that cater to cross-border payments, including xRapid, xCurrent, and xVia. Unlike the XRP Ledger (XRPL), RippleNet is restricted to the Ripple company and is built on top of XRPL as an exchange and payment network. At present, RippleNet offers a three-product suite designed to serve as a payment solution for financial institutions and banks.

XRP: the digital currency of Ripple
XRP is Ripple's digital currency, and it serves as a bridge between other currencies. XRP does not discriminate between one cryptocurrency or fiat currency and another, which makes exchanging any currency for any other easier. Each currency in the ecosystem has its own gateway. If someone asks to pay for services with bitcoin, the receiver does not need to hold bitcoin as well; the payment could be delivered to the receiver's gateway in the form of, say, Canadian dollars. In simple terms, a single gateway is not required to complete a transaction, because multiple gateways can be used, forming a chain of trust that ripples across users. However, when a user holds balances with a gateway, they are exposed to counterparty risk, the same risk that exists in the traditional banking system. If the gateway does not honor its liability, the value the user holds at that gateway could be lost. A user who does not trust a particular gateway can therefore route the transaction through a gateway they do trust. This counterparty risk does not exist with bitcoin and several other altcoins, because a user's bitcoin is not the liability of any other party.

How Ripple works: The Ripple network does not run on a Bitcoin-like proof-of-work system or an Nxt-like proof-of-stake system. Instead, transactions rely on a consensus protocol to validate account balances and transactions on the system. Consensus improves the integrity of the system by preventing double spending. For instance, if a user initiates a transaction with several gateways but craftily sends the same amount to each gateway's system, only one transaction can stand: the distributed nodes use consensus to decide which transaction came first, keeping that one and deleting the conflicting copies. They determine this by majority vote through a polling system.
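To illustrate the polling idea just described, here is a minimal, hypothetical sketch of validator nodes voting on which of two conflicting transactions they saw first. It is purely illustrative and is not the actual XRP Ledger consensus algorithm; the node names and the supermajority threshold are assumptions.

```python
from collections import Counter

# Each hypothetical validator reports which conflicting transaction it saw first.
votes = {
    "node-1": "tx-A",
    "node-2": "tx-A",
    "node-3": "tx-B",
    "node-4": "tx-A",
    "node-5": "tx-A",
}

def pick_winner(votes: dict, threshold: float = 0.8):
    """Keep the transaction that a supermajority of nodes agrees came first."""
    candidate, count = Counter(votes.values()).most_common(1)[0]
    if count / len(votes) >= threshold:
        return candidate   # this transaction is kept
    return None            # no agreement yet; keep polling

winner = pick_winner(votes)
print(f"Transaction kept: {winner}; the conflicting duplicates are discarded.")
```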
Transactions take roughly 5 seconds, so they are almost instant. There is no central authority that decides who can confirm transactions or set up nodes, which is why the Ripple platform is considered decentralized. Ripple keeps track of all IOUs in a given currency for any gateway or user. IOU transactions and credits flow between Ripple wallets and are publicly available on the Ripple consensus ledger. Although the financial transaction history is recorded publicly on the ledger, the data is not linked to any business's or individual's account or ID. That said, the public record of dealings does leave the information susceptible to de-anonymization techniques.

How to purchase Ripple: Buying Ripple is not as convenient as buying bitcoin. On occasion, a cryptocurrency exchange such as Bitstamp allows you to exchange USD for XRP directly, but this is rare. Other exchanges that sell Ripple, such as Binance and Coinbase, will demand a different cryptocurrency such as ether or bitcoin in exchange for acquiring XRP. Regardless of the currency exchanged for XRP, a Ripple wallet and account are needed; this is where the XRP is sent. Ripple wallets are very similar to bitcoin wallets, with secure keys that authorize transactions. However, Ripple wallets require a minimum of 20 XRP as the initial deposit. As with other cryptocurrency wallets, different forms are available, including mobile and software wallets for iOS and Android. A hardware wallet is generally recommended for storing Ripple: its contents are stored offline, so the security is greater. Ledger is one notable hardware manufacturer whose wallets support Ripple.

While Bitcoin is known as the first cryptocurrency and Ethereum is the recognized platform for smart contracts, the Ripple network can be considered a currency exchange system focused on global payment solutions for financial institutions and banks. RippleNet can be implemented on top of existing banking infrastructure as a means of improving and complementing the traditional payment system. xCurrent enables real-time, cost-efficient payments across financial institutions, xRapid uses XRP as a borderless bridge currency to offer on-demand liquidity, and xVia facilitates communication and integration among all RippleNet participants. Ripple is supported by the following banks, among others:
- Yes Bank
- Axis Bank
<urn:uuid:424252d4-0521-4fc1-925e-588f36a683be>
CC-MAIN-2024-38
https://networkinterview.com/what-is-ripple-and-ripplenet/
2024-09-13T16:08:25Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651523.40/warc/CC-MAIN-20240913133933-20240913163933-00298.warc.gz
en
0.965154
1,179
2.828125
3
A lot has changed since the early days of the internet, particularly when it comes to how we access and connect to it. Today, it’s easy to forget that internet connection problems can still occur, and that they have the potential to bring your business to a halt. For the most part, significant disruptions are rare – partly due to advances in technology and competition between various ISPs to provide reliable connections. However, as a small business owner or manager, you need to be prepared for all eventualities. Connection issues due to a modem or router problem can lead to costly downtime, so it’s important to understand the differences of a modem vs. a router. What is a Modem? A combination of two terms, modulation and demodulation, a modem is a device that performs two important tasks to enable your business to communicate with the outside world via the internet. It translates digital signals into analog signals (modulation) that can be sent via a cable. And, it converts data from a telephone cable or fiber optic to a digital signal that the computer can understand (demodulation). In other words, the modem is your gateway to the internet. What is a Router? Most businesses rely on numerous devices in the course of their operations. In the age of Bring Your Own Device (BYOD), it’s also not uncommon for employees to use more than one device. With a modem, you can only connect to a single computer. To make sure your workplace computers and employee devices can connect seamlessly with the internet, you’ll need a router. As the name suggests, a router’s primary function is to route data packets to destination devices. In other words, a router helps you to create a local area network (LAN) for your business. When the router is connected to the modem, it allows devices connected to the LAN to access the internet. Modem vs. Router: What are the Differences? The differentiating factor between a modem vs. a router is that a modem connects you to the internet through your Internet Service Provider (or ISP), while the router allows you to share that internet connection to several devices, either using cables or wirelessly. The two pieces of hardware may look the same, but a router will always have many ports in comparison to a modem which usually has only two. Today, several manufacturers or ISPs are combining these two pieces of hardware in one unit. While this may seem like a more minimalistic and compact design, it’s not always recommended for a business setting, which requires more flexibility and reliability as compared to home networks, where the 2-in-1 combo can work perfectly. How to Choose a Business Router The choice of router you use at your business can determine several things — speed, reliability, the number of devices you can connect to, Wi-Fi availability, and more. However, the most important of all, it also affects your security. With cybercrime incidents on the rise, someone intruding on your network through a vulnerability in the router is something to avoid at all costs. When buying a business router, don’t just go for regular consumer options; consider purchasing a VPN router that provides an additional security layer for your corporate network. Besides security, consider the following: - Wired/wireless: While a wired network can work impeccably, a wireless connection capability offers employees much more flexibility. But, remember that distance and structures such as concrete walls affect the wireless connection. 
- Model: Newer router models with the latest technology can provide faster speeds to support several devices without traffic jam problems. - Network Priority Feature: Go for a router that allows you to prioritize crucial connections or devices over others. For example, a router that prioritizes voice and video traffic can make your virtual meeting experience seamless. - Content Control: Your employees can be viewed as a weak link through which malicious programs can get into your devices. A router with filtering capability can avert this while also preventing employees from accessing high-risk websites. How to Choose a Modem for Your Business No matter the router you purchase, some key aspects of your internet connection are dependent on your modem and, ultimately, the ISP you go with. Most ISPs will encourage you to rent a modem from them, which may come with a router. However, you need to be careful as the hardware offered may not meet your requirements. If the ISP doesn’t have the modem-router combination you want, the best option is to purchase your own third-party hardware. However, you should first check with the ISP to ascertain that the modem will flawlessly connect to the ISP. Another consideration is the speed of the modem. If you’re going for a third-party modem, make sure that it can match the router speeds. How to Troubleshoot Modems and Routers Once in a while, you may run into an internet-related problem due to a modem or router issue. One of the most common faults occurs when there is a loose or damaged cable. This is where you want to check first, especially the WAN cable. Another common issue is where the software supporting the hardware malfunctions or is outdated. Always check that you’re running the latest firmware or schedule automatic updates for continuous updates whenever there is an update or patch. If using Wi-Fi, you will want to appropriately position your router for it to broadcast the signal to the whole room. Alternatively, add an access point to rooms where the signal is weak. Keeping Your Business Online Modems and routers are a crucial part of keeping your business online. To learn more about ensuring the smooth running of your organizational technology, get in touch with Electric today. By outsourcing your IT, you can access expert advice, lightning fast support, and the solutions you need to power business growth.
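Building on the troubleshooting tips above, here is a small, hypothetical sketch that separates the two failure points that matter most: whether this machine can reach its default gateway (the router), and whether it can reach the wider internet through the modem and ISP. The gateway address and test host are assumptions; substitute the values for your own network.

```python
import socket

def can_connect(host: str, port: int, timeout: float = 2.0) -> bool:
    """Try to open a TCP connection; a rough reachability check."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

ROUTER_IP = "192.168.1.1"   # assumed default gateway; check your own setup
INTERNET_HOST = "8.8.8.8"   # a public DNS server used purely as a reachability target

lan_ok = can_connect(ROUTER_IP, port=80)      # many routers serve their admin page on port 80
wan_ok = can_connect(INTERNET_HOST, port=53)  # DNS port

if not lan_ok:
    print("Can't reach the router: check cables, Wi-Fi signal, or the router itself.")
elif not wan_ok:
    print("Router is reachable but the internet is not: suspect the modem or ISP.")
else:
    print("Both the LAN and the internet look reachable.")
```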
<urn:uuid:43fcaa87-aa65-4664-99b4-439eb8ca6c6f>
CC-MAIN-2024-38
https://www.electric.ai/blog/modem-vs-router
2024-09-13T14:13:13Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651523.40/warc/CC-MAIN-20240913133933-20240913163933-00298.warc.gz
en
0.944237
1,201
2.703125
3
Botnets are networks of compromised devices that pose a significant threat to digital security. With the increasing reliance on technology across various sectors, the potential damage that botnets can cause is substantial. Whether it is disrupting services through Distributed Denial-of-Service attacks or spreading spam and phishing emails, botnets represent a critical challenge in maintaining cybersecurity What is a botnet? A botnet is a network of private computers, infected with malicious software and controlled as a group without the owners’ knowledge. The term botnet is a combination of two words: ‘robot’ and ‘network’. It signifies an army of bots – a network of interconnected devices, each running one or more bots. How does a botnet work? Botnets function by infecting multiple systems, usually by spreading malicious software through emails, websites, or social media. Once the system is infected, the botnet can control the device, often without the owner’s knowledge. The botnet controller, also known as the botmaster or bot herder, can command the network of bots to perform tasks, which could range from sending spam emails to conducting distributed denial-of-service (DDoS) attacks. Uses of botnets Distributed denial-of-service attacks One common use of botnets is to conduct DDoS attacks. A DDoS attack is an attempt to make an online service unavailable by overwhelming it with traffic from multiple sources. Botnets can generate huge amounts of traffic to overwhelm a target. These types of attacks can lead to significant downtime and loss of business for the targeted company. Spamming and phishing Botnets are also commonly used for spamming and phishing. They can send vast amounts of spam emails to users, tricking them into revealing their personal information like credit card numbers or passwords. Another use of botnets is for distributing malware. This could be for creating more botnets or for other malicious purposes. Often, the malware is disguised as a harmless file, but when it is downloaded and opened, it will infect the system. Some botnets are used for cryptocurrency mining. In this case, the infected systems are used to mine cryptocurrencies like Bitcoin. This can be a profitable venture for the botmaster but can cause significant performance issues for the infected system. Maintain cybersecurity best practices to avoid botnets Botnets are a major threat in the digital world. They can be used to conduct a wide range of activities, from DDoS attacks to cryptocurrency mining. It is crucial to maintain your cybersecurity best practices to prevent your devices from becoming part of a botnet. Stay safe out there!
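On the defensive side, one simple heuristic related to the DDoS discussion above is to watch for traffic floods in your own logs. Here is a minimal, hypothetical sketch that flags source IPs whose request count in a short window far exceeds a limit; the log entries and the limit are invented for illustration, and real botnet detection is considerably more involved.

```python
from collections import Counter

# Hypothetical (timestamp, source_ip) pairs parsed from a web server access log.
requests = [
    ("10:00:01", "203.0.113.7"), ("10:00:01", "203.0.113.7"),
    ("10:00:02", "198.51.100.4"), ("10:00:02", "203.0.113.7"),
    ("10:00:03", "203.0.113.7"), ("10:00:03", "192.0.2.9"),
] * 50  # simulate a busy interval

PER_IP_LIMIT = 100  # assumed ceiling for a single source in this window

counts = Counter(ip for _, ip in requests)
for ip, n in counts.items():
    if n > PER_IP_LIMIT:
        print(f"Possible bot traffic: {ip} made {n} requests in the window")
```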
<urn:uuid:a7d51289-e5a4-4fc8-a149-c5f28e17466c>
CC-MAIN-2024-38
https://www.ninjaone.com/it-hub/it-service-management/what-is-a-botnet/
2024-09-19T17:40:18Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652055.62/warc/CC-MAIN-20240919162032-20240919192032-00698.warc.gz
en
0.939421
542
3.734375
4
Because a fiber cable network is built by pulling long runs of physical cable, it is practically impossible to lay a single continuous cable end-to-end. This is where the fiber pigtail comes in: it is a cable assembly with a connector on one end and a length of exposed fiber on the other, ready to be fusion spliced onto a fiber optic cable. Because the glass fibers are fused together, the joint achieves a very low insertion loss. Pigtails are terminated on one end with a connector, and the other end is typically spliced to OSP (Outside Plant) cable. They may be simplex (single fiber) or multi-fiber, with counts of up to 144 fibers. Pigtails come with male and female connectors: male connectors plug directly into an optical transceiver, while female connectors are mounted on a wall plate or patch panel.

Fiber optic pigtails are usually used to connect patch panels in a Central Office or Head End to OSP cable. They may also provide a connection to another splice point outside the Head End or Central Office, because some cable jacket materials may only be run a limited distance inside a building.

It is easy to confuse the purposes of a fiber optic connector, a fiber optic patch cord, and a fiber optic pigtail, so let's clarify the differences. A fiber optic connector is simply used to terminate and connect fiber; a cable may carry one or two connectors, each playing a different role in a fiber optic solution. A fiber optic patch cord (also called a fiber jumper) has connectors on both ends and a thick protective jacket, and is generally used to connect a patch panel to a network element, or an optical transceiver to a terminal box. A fiber optic pigtail, by contrast, has a connector on only one end, while the other end is bare fiber; that bare end is fusion spliced to the core of another fiber optic cable, typically inside a fiber optic terminal box, to terminate the incoming cable.

Fiber optic cable can be terminated in a cross-connect patch panel using either pigtail splicing or field-installable connectors. The pigtail approach requires that a splice be made and a splice tray be used in the patch panel; it provides the best quality connection and is usually the quickest. Fiber pigtails use premium-grade connectors and typically have 0.9 mm outer diameter cables. Simplex and duplex fiber pigtails are available, with different cable colors, cable diameters, and jacket types. The most common technique is the fusion splice onto a pigtail, which is easy to perform in the field and is used with a multi-fiber trunk to break the cable out into its component fibers for connection to end equipment. The 12-fiber and 6-fiber multi-color pigtails are easy to install and provide a premium-quality fiber optic connection. Fiber optic pigtails are available with various termination types such as SC, FC, ST, LC, MU, MT-RJ, MTP, and MPO. Pigtails offer low insertion loss and low back-reflection and are especially well suited to splicing high-fiber-count cables. They are often bought in pairs so that the spliced runs can be connected to endpoints or to other fiber runs with patch cables.
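Since insertion loss comes up repeatedly above, here is a small, hypothetical worked example of an end-to-end loss budget for a link built with fusion-spliced pigtails. The per-kilometre, per-connector, and per-splice figures are typical assumed values, not specifications from any particular vendor.

```python
def link_loss_budget(fiber_km: float,
                     fiber_loss_db_per_km: float = 0.35,
                     n_connectors: int = 2, connector_loss_db: float = 0.3,
                     n_splices: int = 2, splice_loss_db: float = 0.1) -> float:
    """Return the estimated end-to-end loss of a fiber link in dB."""
    return (fiber_km * fiber_loss_db_per_km
            + n_connectors * connector_loss_db
            + n_splices * splice_loss_db)

# Example: 10 km of single-mode fiber, a pigtail fusion-spliced at each end,
# and a connector at each patch panel.
total = link_loss_budget(fiber_km=10, n_connectors=2, n_splices=2)
print(f"Estimated link loss: {total:.2f} dB")
```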
<urn:uuid:c02a5d08-4115-42bc-999d-e40c757b523a>
CC-MAIN-2024-38
https://www.fiber-optic-components.com/tag/pigtail-fiber-optic
2024-09-07T14:33:40Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650883.10/warc/CC-MAIN-20240907131200-20240907161200-00898.warc.gz
en
0.924898
679
3.140625
3
Access to a fast internet connection through Wi-Fi has become indispensable in both our professional and personal lives. Consequently, it's now common for us to want to share Wi-Fi access with colleagues at work and loved ones at home so they can easily connect to the Internet at high speeds as well. However, with threats like Man-in-the-Middle (MITM) Attacks and Packet Sniffing, where cybercriminals try to intercept or manipulate communication, sharing Wi-Fi passwords must be done cautiously. So, in this article, we'll explore a few safe and convenient methods for sharing Wi-Fi passwords. How to share a Wi-Fi password on iPhone, iPad and MacBook Sharing a Wi-Fi password between Apple devices, such as iPhone, iPad, and MacBook, is quite straightforward — especially if the devices are up-to-date with the latest iOS, iPadOS, or macOS versions. In fact, the entire sharing process can be completed in three easy steps. They are: Turn on Wi-Fi and Bluetooth on both Apple devices and keep them nearby. Select the Wi-Fi network on the device you want to connect. A prompt will appear on the other device sharing the password. Tap Share Password, then tap Done. How to share a Wi-Fi password on Android devices Sharing a Wi-Fi password using an Android device can also be done without too much effort. There are two ways you can approach it: Create a QR code If your device is running an Android 10 system or higher, you can share your Wi-Fi password by creating a QR code. For that, go to Settings and access your device's Wi-Fi settings. Next, tap the gear or information icon next to the Wi-Fi network for which you want to share your password and select Share to generate a QR code. The person seeking Wi-Fi access can either scan the QR code using their device's camera or navigate to the device's Wi-Fi settings, select Add network, and then scan the code. Use the Nearby Share feature Access your device's Wi-Fi settings, select the network you wish to share, and then choose Nearby or Nearby Share. Your device will scan for nearby devices, enabling you to choose the one you want to share Wi-Fi access with. The recipient will receive a connection request on their device, which they'll need to accept to get access to the Wi-Fi network. How to share a Wi-Fi password on Windows devices As far as we are aware, there's no quick and easy method available to share Wi-Fi passwords from Windows devices at the moment. This means that to share your password with someone, you'll need to provide it to them directly or use third-party software for that. If you've forgotten your Wi-Fi password, you can retrieve it by following these steps: Go to Windows settings and choose the Network & Internet tab. Navigate to the Network and Sharing Center. Locate the network for which you want to share the password. Click on Wireless Network Properties. Access the Security tab and click on the Show characters box to see the password. Once you check what your Wi-Fi password is, you can share it with the intended recipient. Top security practices for sharing a Wi-Fi password Sharing Wi-Fi passwords through Apple or Android devices relies on operating system services, which may not be completely secure. Additionally, there are always instances where we need to share passwords with people using devices from different brands. In both cases, following specific security guidelines is essential to prevent password breaches. Here's what we recommend you should do. 
Firstly, when sharing a password, opt for secure methods like encrypted platforms or in-person exchange, avoiding the public eye or unprotected communication channels. Furthermore, ensure your Wi-Fi network is secured with WPA2 or WPA3 encryption protocols to deter unauthorized access. Avoid older and less secure encryption standards such as WEP. Additionally, consider generating temporary passwords for your guests. Many routers provide the option to create time-limited guest passwords, which is ideal for short-term visitors. Best tip — Use a password manager to store and share your passwords If, at this point in reading this article, you wish there was a single solution that would allow you to quickly and securely share all your passwords, including the one for Wi-Fi, let us assure you that such a solution exists, and it’s called NordPass. NordPass is a cybersecurity tool that enables you to store, manage, and share passwords, passkeys, credit card details, and other sensitive information with ease. Because it is encrypted, it ensures that storing and sharing passwords is much more secure, while its intuitive interface makes the whole process effortless. Moreover, NordPass features a Data Breach Scanner, a feature that allows you to check if your data, including your passwords, has been compromised in a breach. That way, you can stay informed about the security of your credentials and take immediate action when necessary. So, if the security of your Wi-Fi passwords and other credentials is important to you, try NordPass to be able to share them without second thoughts. Frequently Asked Questions Android and Apple devices run on different operating systems. Therefore, to share a Wi-Fi password from an Apple device to an Android one, you need to manually generate a QR code using a third-party platform that the Android user can scan. You can also opt for a more convenient solution and simply use a password manager to easily share the Wi-Fi password with any popular brand device. To transfer your Wi-Fi password to your new phone, you can simply check the password on your old device and then manually enter it on your new one. Alternatively, you can use a password manager like NordPass, which securely stores your Wi-Fi passwords and syncs them across all your devices. In other words, with NordPass, you can access your Wi-Fi password on your new phone without the need for manual input. The easiest way to revoke access to a shared Wi-Fi password is by changing the password on your router's settings, making the previously shared password invalid. If you've used a password manager to share the Wi-Fi password, you can simply delete or edit the shared item within the app to make it inactive. Whether your Wi-Fi remains secure when shared depends on how you go about the sharing process. For example, if you share your Wi-Fi password via unprotected online communicators or by writing it down on a piece of paper and passing it along, you risk compromising the password’s security. However, if you personally provide the password to your guests — away from the public eye — or use a password manager like NordPass to encrypt the shared information, your chances of compromise will be significantly reduced. Yes, you can remotely share your Wi-Fi password through different methods, but not all are secure. For instance, by using messaging apps or email, you risk the password being intercepted by hackers while in transit. 
However, with a password manager, you can securely share passwords from anywhere with an internet connection.
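Relating back to the QR-code method described for Android above, the code a phone camera scans is simply the network details encoded in a standard text format. Below is a small, hypothetical sketch using the third-party qrcode library; the SSID and password are placeholders, and special characters in real credentials would need escaping.

```python
import qrcode

ssid = "ExampleNetwork"        # placeholder SSID
password = "correct-horse-42"  # placeholder password

# Standard Wi-Fi configuration payload understood by most phone cameras:
# WIFI:T:<auth type>;S:<ssid>;P:<password>;;
payload = f"WIFI:T:WPA;S:{ssid};P:{password};;"

img = qrcode.make(payload)
img.save("wifi-share.png")
print("Scan wifi-share.png with a phone camera to join the network.")
```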
<urn:uuid:8ff9fea7-d336-4c98-a808-561ce9939ee5>
CC-MAIN-2024-38
https://nordpass.com/blog/how-to-share-wifi-password/
2024-09-08T19:40:33Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651017.43/warc/CC-MAIN-20240908181815-20240908211815-00798.warc.gz
en
0.937261
1,482
2.5625
3
Previous articles in this series provided a definition of Artificial Intelligence (AI) and a general overview of its use in Cybersecurity, described various types and subspecialties of AI systems and their respective functions, and described the many ways AI is being used in Cybersecurity today. Now we'll look at the three main functionalities of an Artificial Intelligence system that set it apart from other Cybersecurity tools. These three unique functionalities are as follows:
- AI can learn: One of the main themes that has been pointed out is that a good AI system can learn, with or without human intervention (of course, the latter is much preferred). The way it learns is that literally billions of pieces of data are fed into it via many intelligence feeds. Once this data is fed into it, the Artificial Intelligence tool can then "learn" from it by unearthing any trends or threat-vector-based attack signatures that have not been discovered previously. It can also learn from known trends. By combining the two, the AI system can make reasonably accurate observations or predictions about what the Cyber Threat Landscape will look like on a daily basis, if that is what it has been designed to accomplish. It is important to note that while this is done on a 24 X 7 X 365 basis, the data and information fed into it must be refreshed on an almost minute-by-minute basis. If this is not done, the AI system can lose its robustness very quickly and can literally become "stale". Also, based upon the datasets that are fed into it, a good AI system can make recommendations to the IT Security team on the best course of action to take in just a matter of minutes. In this regard, Artificial Intelligence can also be used as a vehicle for threat mitigation by the Cyber Incident Response Team. An AI system designed for the Cybersecurity Industry can also digest, analyze, and learn from both structured and unstructured datasets (this even includes the analysis of written content, such as blogs, news articles, etc.).
- AI can reason: Unlike other traditional security technologies, Artificial Intelligence tools can also reason and even make unbiased decisions based upon the information and data fed into them. For example, with very high levels of accuracy and reliability, an AI system can "… identify the relationships between threats, such as malicious files, suspicious IP addresses or insiders." (Source 1) In other words, an AI system can look at multiple threat vectors all at once and take notice of any correlations that may exist between them. From this, a profile of the Cyberattacker can be created and even used to prevent other new threat variants from penetrating the lines of defense. Very often, a Cyberattacker will launch differing attacks so that they can evade detection. For example, Cyberattackers have been known to hide their tracks after penetrating an IT/Network Infrastructure by covertly editing the system logs of the servers, or even by simply resetting the modification date on a file that has been hijacked and replaced with a phony file. These tricks cannot be detected by the standard Intrusion Detection Systems (IDSs) in use today; they can only be discovered as anomalies, and only if significant deviations can be found. But with the use of AI, these and other hidden commonalities can be discovered very quickly in order to track down the very elusive Cyberattacker.
Also, an AI system does not take a "Garbage In/Garbage Out" view of a threat vector. It tries to make logical hypotheses based upon what it has learned in the past. In fact, it has been claimed that Artificial Intelligence can respond to a new threat variant 60X faster than a human could ever possibly do. (Source 2)
- AI can augment: Again, as mentioned in the last subsection, one of the biggest advantages of Artificial Intelligence in Cybersecurity is that it can augment existing resources. Whether it is filling the void left by the labor shortage, simply automating routine tasks that need to be done, or filtering through all of the false positive warnings and messages to determine which of them are real, AI can absorb these time-consuming functions that can take an IT Security team hours to accomplish and get them done in just a matter of minutes. It can also be a great tool for conducting tedious research-based tasks, and it can calculate risk levels very quickly so that the IT Security team can respond to a Cyber Threat in just a matter of seconds and mitigate it quickly.

The Importance of Artificial Intelligence In Cybersecurity
To further substantiate the need for Artificial Intelligence in Cybersecurity, a recent study by Capgemini (entitled "Reinventing Cybersecurity With Artificial Intelligence") discovered the following:
- 64% of businesses feel that they need robust AI tools in order to combat the threat from Cyberattackers.
- 73% of businesses are now developing test cases for using Artificial Intelligence, primarily for Network Security purposes.
- 51% of CIOs and CISOs are planning to make extensive use of Artificial Intelligence as it relates to Threat Hunting and Detection.
- 64% of CIOs and CISOs claim that using Artificial Intelligence actually decreases the time it takes to respond to a particular Cyberattack, with the corresponding response time improving by 12%.
The top 5 use cases for Artificial Intelligence are as follows:
- Fraud Detection
- Malware Detection
- Intrusion Detection
- Calculating risk levels for Network Security purposes
- Behavioral Analysis
An overwhelming 56% of Cybersecurity Executives claim that their respective staffs are too overworked and overburdened, and an alarming 23% of these teams cannot even respond to Cyber threats as they occur.
- 48% of CIOs and CISOs say that they plan to increase their budget for Cybersecurity by at least 29% in 2020.
Note: It is important to mention that 850 CIOs and CISOs, as well as other security executives, were polled in this survey. Further details can be found in the Capgemini report. Overall, it appears that spending on Artificial Intelligence in the Cybersecurity Industry will grow exponentially in the coming years; in the United States alone, spending on Artificial Intelligence technologies is expected to reach $38.2 billion by 2026.
The main catalysts for this growth are as follows:
- The rise of interconnected devices brought on by the evolution of the Internet of Things (IoT)
- The overall growth rate of newer Cyber Threat variants
- Concerns about information and data leakage
- The increasing vulnerability of Wi-Fi networks
- The security risks posed by the various Social Media platforms (which include the likes of Facebook, Twitter, LinkedIn, Instagram, Pinterest, etc.)

Next up: How to prepare and deploy an AI system in your business
So far, this series has served as a deep dive into the many facets of Artificial Intelligence and its use in Cybersecurity. Next, you may be eager to deploy such a system to protect your business. This is often an overwhelming proposition. To help you take that next important step, the fifth and final installment of the series will offer practical guidance and a checklist that will prove useful as you consider deploying an Artificial Intelligence system.
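As a concrete, if tiny, illustration of the learning capability discussed in this article, here is a hypothetical sketch of unsupervised anomaly detection over simple network-session features using scikit-learn's IsolationForest. The feature values and contamination rate are invented for illustration; a production system would train on real, continuously refreshed intelligence feeds.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features: [requests per minute, bytes sent (KB), failed logins]
normal_sessions = np.array([
    [12, 340, 0],
    [ 9, 290, 1],
    [15, 410, 0],
    [11, 300, 0],
    [14, 380, 1],
])

# Train an unsupervised model on what "normal" traffic looks like.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(normal_sessions)

# Score new sessions; a prediction of -1 means the session is flagged as anomalous.
new_sessions = np.array([
    [13, 360, 0],      # looks ordinary
    [420, 9800, 37],   # bursty traffic with many failed logins
])
for features, label in zip(new_sessions, model.predict(new_sessions)):
    verdict = "ANOMALY" if label == -1 else "normal"
    print(features, "->", verdict)
```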
<urn:uuid:f5d91abd-4933-452f-b518-230b9ae897a2>
CC-MAIN-2024-38
https://platform.keesingtechnologies.com/the-role-of-artificial-intelligence-in-cybersecurity-part-4/
2024-09-10T02:05:25Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651164.37/warc/CC-MAIN-20240909233606-20240910023606-00698.warc.gz
en
0.950336
1,559
3.3125
3
Building Resilience to Reverse Engineering: A Study in CAPTCHA Cybercriminals have found several different workarounds for CAPTCHA, and while recent updates promise to revitalize it, the technology’s focus on “bot-like” behaviors leave it unprepared for modern and sophisticated threats. If you’ve ever signed up for a service or account on the internet, you’ve probably experienced a CAPTCHA. The bot detection system (which stands for the mouthful “Completely Automated Public Turing test to tell Computers and Humans Apart”) prompts users to enter codes, choose the right images in a sequence, or solve a math problem in order to prove that they’re human. CAPTCHA was an important initial step in distinguishing human and bot activity on the web — but for several reasons, it’s had trouble keeping up with determined attackers. Though companies are revamping the technology for a new era, CAPTCHA’s 17-year history is a good case study in how the incentive of profit and observable behaviors will allow cybercriminals to overcome most technological barriers. In this blog post, we explain why CAPTCHA can be reverse engineered, as well as alternative methodologies – like the ones we use at White Ops – that keep cybercriminals on their toes. A Brief History of CAPTCHA Reverse Engineering The first incarnation of CAPTCHA was born in 2000, when engineers at Carnegie Mellon came up with a simple way for internet users to prove they were human: look at an image featuring distorted text a computer couldn’t read, type it into the box, and pass through. Bots at the time were rudimentary and consistently failed these tests. Other than CAPTCHA farms that paid human workers to pass the tests, no workaround for the system existed until over ten years later. In 2013, researchers at Vicarious, a California-based AI firm with funding from both Amazon and Facebook, proved they could successfully train computers to crack CAPTCHA codes using neural networks. The networks were able to analyze the shapes in these warped images and correctly identify them as letters and numbers, enabling Vicarious’ bots to solve 90% of Google, Yahoo!, PayPal, and Captcha.com codes. In response to the Vicarious research, many companies phased out these tests in favor of more advanced ones like reCAPTCHA. These new versions relied on deeper passive analysis of a user’s behavior and browser information, monitoring for hallmarks of user activity like solving, navigation, and submission time to create a portrait of that user’s behaviors. Clicks that happened too fast, too slowly, or in a pattern unusual for that user were seen as possible indications of bot activity. Once flagged, the suspicious user was asked to complete an image-based test that would be harder for an AI bot to solve. But it was only a matter of time before cybercriminals figured out how to reverse engineer this solution, too. Some graduate students at Columbia University broke through reCAPTCHA in 2016 with success rates between 70.8% and 83.5%. These white hat hackers modified user-agent reputation, used reverse image searches to find keywords associated with the photos used in bot tests, then cross-referenced those keywords with the test’s prompt to select the appropriate images. For example, if a bot were asked to choose images with birds in them from the usual set of 9 photos, the bots would reverse-image search them, pick out the ones associated with the word “bird,” and select those images to pass the test. 
This ongoing arms race between CAPTCHA and bot developers demonstrates a simple principle of cybersecurity: as long as people have enough time on their hands, there’s no single test that criminals won’t eventually figure out how to pass. Why CAPTCHA Can Be Reverse Engineered Two aspects of CAPTCHA make the task of bypassing it pretty easy on hackers. First, there’s the tool’s relatively straightforward set of observable inputs; second, the adversary gets real-time feedback about the success of their efforts as soon as they finish. CAPTCHA presents hackers with a fairly clear objective — enter these characters, display these characteristics, or select these photos — and immediately lets them know whether their attempts to achieve that objective were successful. These two characteristics allow hackers to enter into a rapid “iteration loop,” continually fine-tuning their inputs until they get the result they want. That rapid iteration loop is especially vulnerable to hackers with access to machine learning technology. CAPTCHA also suffers from attempts to balance human accessibility with security. If a cybercriminal doesn’t want to task a botnet to solve CAPTCHAs, all he or she has to do is find a third-party accessibility plugin like CAPTCHA Be Gone, which solves CAPTCHA puzzles for blind or visually impaired people. So long as bot detection methods are visible to the user, bots will find a way to get around them. Building Resilience to Reverse Engineering While detection mechanisms such as CAPTCHA can be an effective deterrent for less sophisticated bots, they will never be a perfect defense against a motivated, highly advanced adversary. So how can a bot defense system be designed to avoid such reverse engineering? At White Ops we think of this solution across three dimensions: - Input depth and breadth: In order to reverse engineer a bot defense solution, an adversary must know and understand the entirety of inputs. Instead of picking out a few letters and numbers, our algorithm performs hundreds of tests and thousands of measurements, forming a complex surface area for an adversary to navigate. The algorithm also continues testing for the duration of a session, instead of simply providing one checkpoint that a user – human or bot – has to pass. These complicated tests don’t have to come at the expense of user experience either: White Ops passively collects thousands of data points (called signals) to test for bot activity, without once prompting the user to enter information or pass a test. Invisible reCAPTCHA doesn’t prompt most users either, but still presents suspicious traffic with a potentially solvable puzzle instead of continuously monitoring a session. - Polymorphism: Inputs need to change on a periodic basis to keep adversaries on their toes. For example, the Columbia researchers who cracked reCAPTCHA noted that “challenges were not created ‘on the fly’ but were selected from a small pool of challenges that were periodically updated.” This static (or semi-static) nature of these challenges makes them more susceptible to reverse engineering, because it gives the adversary more time to complete the process with a given set of inputs. - Feedback opacity: The internet provides cybercriminals with a massive trove of data to train machine learning models. Combined with rapid feedback, an adversary can use this information to quickly adjust models and observe results. Given enough cycles through the iteration loop, any form of detection can be broken. 
However, if feedback is withheld, the loop is broken, and the adversary has no way to determine which approaches are working and which aren’t. One of the greatest innovations in security was the silent alarm, not because it prevented thieves from stealing, but because it kept them from knowing whether they’d successfully avoided police detection. Similarly, feedback opacity and misdirection throws hackers off their game by keeping them in the dark about the success of their efforts. Even if hackers do manage to break through, they’ll have no idea how they did it because there’s no observable connection between the output they wanted and any of the thousands of inputs they’ve used to get it. As security specialists, we shouldn’t be shocked when, after putting all our time and effort into developing one test, our adversaries pour their time and effort into successfully passing it. We need to expand our approach to cybersecurity and force cybercriminals to work harder, continuously developing and improving our tactics – even when they appear to be working. CAPTCHA shows us that bots will eventually pass any tests we throw at them. What remains to be seen is just how many tests cybercriminals are willing to pass — and how many times they’re willing to pass it — in order to achieve a single objective.
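As a rough illustration of two of these dimensions (combining many passive signals and never revealing the verdict), consider the minimal Python sketch below. The signal names and weights are invented for this example and do not reflect any vendor's actual detection logic.

```python
# Minimal sketch of "many passive signals, no visible verdict". The signals
# and weights below are invented for illustration only.
SIGNAL_WEIGHTS = {
    "headless_browser_marker": 0.6,
    "impossible_screen_geometry": 0.3,
    "scripted_input_timing": 0.4,
    "mismatched_timezone_locale": 0.2,
}
AUDIT_LOG = []  # internal only; never exposed to the client

def score_session(signals):
    """Combine many passive observations into one internal score."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))

def handle_session(signals):
    verdict = "bot" if score_session(signals) >= 0.5 else "human"
    AUDIT_LOG.append(verdict)   # silent alarm: record the verdict, don't reveal it
    return "200 OK"             # identical response either way

handle_session({"headless_browser_marker": True, "scripted_input_timing": True})
print(AUDIT_LOG)  # ['bot']
```

Because the response looks the same whether or not the session was flagged, the adversary's iteration loop receives no usable feedback to train against.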
<urn:uuid:ea952351-dc39-4aa4-b119-e2c842d5890b>
CC-MAIN-2024-38
https://www.humansecurity.com/learn/blog/building-resilience-to-reverse-engineering-a-study-in-captcha
2024-09-10T00:18:55Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651164.37/warc/CC-MAIN-20240909233606-20240910023606-00698.warc.gz
en
0.947907
1,722
3.5625
4
MOOCs – Massive Open Online Courses – and other online learning platforms are currently competing with various college and university programs to give students and other ordinary individuals the chance to learn and hone various professional skills. And while no one might recognize this battle between these two different forms of learning and education, it is evident to everyone looking at the whole scenario that online learning is here to stay. While traditional education will always get preference over e-learning, it is needless to say that such a modern form of education is quite beneficial to students of all disciplines and professionals. Of course, one group of students is reaping more benefits than the rest – engineering majors. Whether electrical engineering or computer science, MOOCs and online learning platforms are doing wonders for engineering kids, both academically and professionally. Here is how. Online learning platforms have more engaging content and exercises. Be it interactive or animated lectures, easy-to-understand resources, etc.; these platforms have everything. Even homework is fun when your instructors and tutors use so much engaging content. Platforms like zyBooks use such resources to make their classes more appealing and exciting. Even programming becomes fun on zyBooks, thanks to the way they design every class. Even when you are stuck with a homework problem, you can always seek help from various online sources that can help you out with your zyBook answers. These online tutoring and homework assistance platforms use expert tutors to help you with your zyBook or any other MOOC homework. Under their guidance, not only will you be eager to understand tough engineering concepts like circuits, data structures, calculus, etc., but you will also enjoy the learning experience. That in itself is a feat that colleges and universities fail to achieve. Most MOOCs are designed so that the learning outcomes of those courses cater to the industry. As a result, it is safe to say that these online learning platforms provide industry-centric education to their students. And what is more exciting for engineering students is that tech giants are teaming up with these MOOC platforms to provide the best learning experience. Not only that, but they are also willing to hire successful graduates from these programs as interns and full-time engineers. Take Google, for example. The tech giant teamed up with a leading MOOC platform, Coursera, and provides courses on data science, programming, UX engineering, etc. All courses are taught by Google’s experts and can be accessed by students for a small fee. The best part about all this is that Google has assured that successful candidates from this course will have the opportunity to work with different tech giants across the world. And these companies are bound by contract to hire the candidates. How great is that? Not only are you learning new skills, but you are practically opening yourself up for various job opportunities. Unlike college, students do not have to take classes or attend seminars and lectures that they do not want. The only reason they do so in college is that they need to complete a set amount of credits and get good grades if they are to secure their engineering degree. However, a lot of these courses are uninteresting, not to mention unnecessary. With MOOCs and online learning, it becomes easier to stay focused on one particular area and only learn what you want to learn. 
The aim of online learning is to master a particular skill, not to earn credits or grades. Hence, students can work on their skill development while focusing only on the things they need to learn for their professional lives. They can even take courses their college majors would not allow, repeat a course as many times as needed, and learn whenever they want to. Given all these benefits, it is easy to see why engineering students should enroll in online courses. And it is not just engineering students who are benefiting. Students from other disciplines are also coming to see the perks that online learning and MOOCs offer. They, too, have good reason to join these online learning platforms and become part of a much-needed change in global education.
<urn:uuid:61d056b6-565b-4880-856b-a42c7114994a>
CC-MAIN-2024-38
https://electricala2z.com/tech/moocs-online-learning-benefiting-engineering-students/
2024-09-11T07:33:21Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651344.44/warc/CC-MAIN-20240911052223-20240911082223-00598.warc.gz
en
0.969824
833
2.734375
3
In recent years, hydrogen fuel has garnered significant attention as an alternative energy source as the global community seeks to combat climate change and drive economic growth through decarbonization. Though hydrogen has long been considered a promising yet underutilized resource, its unique properties make it a viable candidate for powering a broad range of applications. The question remains whether hydrogen fuel can completely replace fossil fuels to meet our energy needs. Despite its numerous advantages, the reality suggests that hydrogen will likely play a complementary role alongside other renewable energy sources rather than serving as a standalone solution. Advantages of Hydrogen Fuel One of the primary benefits of hydrogen fuel is its versatile production methods, which allow it to adapt to different energy needs and geographical contexts. Hydrogen can be produced through various processes, including natural gas reforming, biomass gasification, and electrolysis of water. Among these, green hydrogen—produced through electrolysis using renewable energy sources like wind and solar—is particularly advantageous as it offers a zero-emission solution. This adaptability makes hydrogen an attractive option for various sectors, from heavy industry to transportation. Moreover, hydrogen’s high energy density allows it to power heavy-duty machinery, vehicles, container ships, and even aircraft. This makes hydrogen an ideal substitute for fossil fuels in traditionally high-emission industries. Hydrogen-powered vehicles, for instance, can achieve substantial ranges with less frequent refueling compared to electric vehicles. The refueling process for hydrogen tanks is also relatively quick, akin to that of gasoline or diesel vehicles, which provides a practical advantage for commercial and long-haul transportation. These features make hydrogen a compelling alternative within the landscape of renewable energy solutions. Challenges and Limitations However, hydrogen fuel is not without its challenges and limitations. One of the most significant barriers to its widespread adoption is the lack of a comprehensive refueling infrastructure. While hydrogen fueling stations are gradually being developed, particularly in regions like California and parts of Europe, the current network is insufficient to support a large-scale shift to hydrogen-powered vehicles. The expansion of this infrastructure requires substantial investment and coordinated efforts from both public and private sectors. Additionally, the current methods of hydrogen production are not all environmentally friendly. A significant portion of the world’s hydrogen supply still comes from gray hydrogen production, which relies on polluting processes such as coal gasification and natural gas reforming. Transitioning to green hydrogen is essential for realizing the full environmental benefits of this alternative fuel. This shift necessitates significant advancements in renewable energy technologies and increased capacity for green hydrogen production, which, while promising, are still in the development stages. The Role of Hydrogen in a Diversified Energy Strategy Given these challenges, it is clear that hydrogen fuel alone cannot replace fossil fuels entirely. Instead, hydrogen will play a crucial role within a diversified energy strategy aimed at achieving global decarbonization. 
In conjunction with other renewable energy technologies such as solar, wind, and bioenergy, hydrogen can help reduce our reliance on fossil fuels and mitigate the impacts of climate change. The integration of hydrogen into the broader energy matrix involves continued efforts to develop infrastructure, increase green hydrogen production, and enhance the efficiency and affordability of renewable energy sources. Projects around the world are actively pursuing the production, distribution, and utilization of hydrogen fuel, highlighting its potential as a key component of the future energy landscape. For instance, initiatives in countries like Japan and Germany are focusing on building hydrogen-powered public transportation systems and industrial applications. These efforts underscore the importance of utilizing hydrogen as part of a multi-faceted approach to energy sustainability, rather than relying on it as a sole solution. In recent years, hydrogen fuel has gained considerable attention as a potential alternative energy source in the global effort to address climate change and promote economic growth through decarbonization. Although hydrogen has long been seen as a promising yet underutilized resource, its unique characteristics make it a viable option for powering a wide array of applications. The debate continues on whether hydrogen fuel can fully replace fossil fuels to meet our energy demands. Despite its many benefits, the current consensus indicates that hydrogen is likely to play a complementary role alongside other renewable energy sources rather than serving as a standalone solution. This belief stems from the understanding that while hydrogen can significantly reduce carbon emissions, it faces several challenges, such as production costs and infrastructure requirements, that prevent it from being the sole provider of energy. In essence, hydrogen’s role in the energy landscape will synergize with other sustainable methods, collectively working towards reducing our carbon footprint and achieving a cleaner, more sustainable future.
<urn:uuid:b0d02fda-58d5-4cee-8dfa-bb9c344bdbee>
CC-MAIN-2024-38
https://energycurated.com/energy-management/can-hydrogen-fuel-completely-replace-fossil-fuels-for-energy-needs/
2024-09-12T11:52:18Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651457.35/warc/CC-MAIN-20240912110742-20240912140742-00498.warc.gz
en
0.940073
924
3.6875
4
In recent years, the transition to clean energy has become more critical than ever, with environmental concerns and the urgency to combat climate change driving significant investments in renewable energy sources. One such investment is Ørsted’s substantial $4 million commitment to environmental research and development in Connecticut. This funding is aimed at advancing the Starboard Wind Project, promoting academic research, and fostering workforce development. The initiative places a strong focus on both UConn Avery Point and Southern Connecticut State University (SCSU), setting the stage for impactful environmental stewardship and economic growth in the region. Ørsted’s Strategic Investment in Connecticut’s Universities Ørsted’s $4 million pledge is strategically divided between UConn Avery Point and SCSU, with the two institutions receiving $2.5 million and $1.5 million, respectively. The decision to invest in these universities underscores Ørsted’s commitment to leveraging local expertise and fostering academic advancements. At UConn Avery Point, the funding will facilitate the creation of an interdisciplinary environmental research program spearheaded by the Department of Marine Sciences. This program is poised to explore the intricate relationships between offshore wind farms and ocean habitats, thus enriching our understanding of the environmental impacts and advantages of renewable energy. On the other hand, SCSU will channel its portion of the grant into researching biodiversity, climate resilience, and offshore wind energy. The university will monitor biodiversity across various offshore turbine, substation, and cable route locations to assess the ecological influences of these installations. Additionally, SCSU aims to explore innovative solutions for shoreline protection that not only enhance biodiversity but also improve climate resilience. These efforts are expected to yield critical insights for combating climate change and mitigating coastal erosion. The Starboard Wind Project: Driving Clean Energy Expansion The Starboard Wind Project stands at the forefront of Ørsted’s clean energy initiatives in Connecticut. This ambitious 1,184-megawatt project aims to deliver reliable and affordable offshore wind power to residents across the state. A key component of this endeavor is to reduce dependency on fossil fuels, thereby significantly lowering carbon emissions and promoting environmental sustainability. As part of the broader mission to transition to renewable energy, the project is set to play a crucial role in shaping Connecticut’s clean energy future. Beyond its environmental benefits, the Starboard Wind Project promises substantial economic advantages. It is projected to bring nearly $420 million in direct investment and expenditure to Connecticut, fostering job creation and stimulating local economies. If selected, the project is expected to create over 800 full-time jobs, particularly in areas such as the New London State Pier. These positions will likely span a range of sectors, including construction, operation, and maintenance, further bolstering the state’s workforce. The blend of economic uplift and sustainable development paints a promising picture for Connecticut’s renewable energy landscape. Enhancing Academic Research Through Interdisciplinary Efforts UConn Avery Point’s focus on interdisciplinary research is poised to make considerable contributions to the study of offshore wind energy.
The environmental research program will bring together marine scientists, engineers, and social scientists to investigate the multidimensional impacts of offshore wind farms. By examining ocean habitats and their interactions with wind turbines, researchers can provide valuable data that can inform sustainable development practices. These collaborative efforts are expected to yield a wealth of knowledge that will benefit both the scientific community and broader environmental policies. Moreover, social science researchers at UConn Avery Point will delve into the societal aspects of offshore wind energy. This includes evaluating the significance of workforce development and understanding public perceptions of renewable energy projects. These insights will be crucial for shaping policies and ensuring that the growth of the offshore wind industry aligns with community needs and expectations. The interdisciplinary approach ensures a holistic understanding of the environmental, economic, and social dimensions of offshore wind energy and underscores the importance of a well-rounded research initiative. Promoting Biodiversity and Climate Resilience at SCSU SCSU’s research initiatives are geared towards addressing some of the most pressing environmental challenges of our time. By studying the biodiversity around offshore wind installations, SCSU will gather essential information on the ecological impacts of these projects. This data will be instrumental in developing strategies that minimize negative effects on marine life, thereby fostering a more harmonious relationship between renewable energy infrastructure and ocean ecosystems. The focus on biodiversity is critical for maintaining balanced marine environments as the offshore wind sector continues to expand. In addition, SCSU’s exploration of climate resilience through shoreline protection infrastructure is pivotal for safeguarding coastal communities against the impacts of climate change. By identifying and implementing nature-based solutions that enhance biodiversity, SCSU aims to create more resilient coastal landscapes. These efforts not only help mitigate the effects of rising sea levels and storm surges but also contribute to the overall health of marine environments. The university’s commitment to these research areas underscores the interconnectedness of biodiversity conservation and climate adaptation. Workforce Development: Preparing for a Sustainable Future One of the key aspects of Ørsted’s investment is its focus on workforce development. Both UConn Avery Point and SCSU are set to launch projects that equip students with the skills and knowledge needed for careers in the offshore wind industry. These initiatives are designed to prepare the next generation of workers for roles in engineering, environmental science, and other related fields. As the demand for clean energy professionals grows, this emphasis on education and training becomes increasingly essential for the industry’s sustainability. For instance, UConn Avery Point plans to integrate workforce development into its interdisciplinary research program, ensuring that students gain hands-on experience and practical insights into the offshore wind sector. Similarly, SCSU aims to create pathways for students to engage in research and training that align with the demands of the renewable energy market. By nurturing a skilled workforce, these universities play a crucial role in driving the state’s clean energy transition. 
The collaboration between academia and industry highlights the importance of investment in human capital to support the renewable energy sector’s growth. Economic and Environmental Implications of the Starboard Wind Project In recent years, the shift to clean energy has become more crucial than ever due to rising environmental concerns and the pressing need to combat climate change. These issues have propelled significant investments in renewable energy sources. A noteworthy example is Ørsted’s $4 million investment in environmental research and development in Connecticut. This funding is intended to advance the Starboard Wind Project, support academic research, and encourage workforce development. Specifically, it focuses on enhancing programs at UConn Avery Point and Southern Connecticut State University (SCSU). The investment aims to create a fertile ground for substantial environmental stewardship and economic growth within the region. By bolstering academic research, the initiative not only nurtures innovation but also prepares a new generation of skilled professionals in the renewable energy sector. The workforce development component is particularly vital as it ensures that the necessary skills and expertise are cultivated locally, thereby strengthening the community’s capacity to contribute to and benefit from the burgeoning clean energy industry. Moreover, this initiative aligns with larger state and national goals for reducing carbon emissions and promoting sustainable practices. By focusing on educational and workforce growth, Ørsted’s commitment serves as a blueprint for how private investments can have a widespread impact, fostering both immediate and long-term benefits.
<urn:uuid:ec9c7378-814c-491c-879d-e78eeb287a78>
CC-MAIN-2024-38
https://energycurated.com/renewable-energy/how-will-orsteds-4m-boost-connecticuts-clean-energy-research/
2024-09-12T13:37:07Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651457.35/warc/CC-MAIN-20240912110742-20240912140742-00498.warc.gz
en
0.908207
1,498
2.703125
3
This week marks the 29th anniversary of Microsoft’s Internet Explorer, which debuted on August 16th, 1995, joining the fierce “browser wars” among online pioneers like Netscape and upstarts like Opera. Internet Explorer made its public debut as part of the Windows 95 operating system launch, kicking off nearly two decades as the dominant web browser across countless PCs around the globe. Out of the gate, Internet Explorer helped bring user-friendly internet access into the mainstream for millions. Its tight integration with Windows 95 provided a simple, one-click way to go online and browse the World Wide Web, which was still relatively new and novel in 1995. Internet Explorer put the internet front-and-center for Windows users. Rise, Dominance, and Downfall As it rapidly gained market share against Netscape Navigator throughout the late 1990s, Internet Explorer showcased technical innovations as well. Version 4.0 in 1997 introduced support for bolder multimedia web experiences through technologies like stylesheets, data binding, and DHTML. This allowed more dynamic and interactive content within web pages. The browser was also a key driver in establishing today’s common web standards and opened the door to enhanced online security through user data encryption. While imperfect, it raised the bar on critical web fundamentals. However, Internet Explorer’s dominance came at a cost, as Microsoft infamously leveraged its Windows monopoly to bundle and favor IE over third-party alternatives. This sparked legal battles over anticompetitive practices and inhibited the open web’s growth for years. Despite its pioneering role, Internet Explorer’s relatively stagnant development eventually led to it being superseded by more modern, standards-compliant browsers like Mozilla Firefox and Google Chrome in the late 2000s. By 2015, Microsoft effectively admitted defeat by introducing the new Microsoft Edge browser and slowly phasing out IE. On this 29th anniversary, we look back at how Internet Explorer democratized the internet and shaped the early web before being eclipsed itself. It was a ubiquitous presence that moved the needle, even as it became a case study in how a tech titan’s product can rise to dominance through anticompetitive means before falling behind more innovative alternatives. Internet Explorer may have faded, but its legacy is forever burned into web history.
<urn:uuid:1fbea21d-91f9-4942-8934-3dd45834cad4>
CC-MAIN-2024-38
https://nationalcioreview.com/articles-insights/tech-time-travel/tech-time-travel-microsoft-launches-internet-explorer/
2024-09-14T23:35:28Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651601.85/warc/CC-MAIN-20240914225323-20240915015323-00298.warc.gz
en
0.929293
474
2.75
3
It is not possible to list the thousands of security tools and technologies used by modern security organizations. However, here are some of the most common tools that are typically present in a mature security stack. A firewall is a network security device that monitors incoming and outgoing traffic, acting as a barrier between a trusted internal network and untrusted external networks. Firewalls use predefined rules to allow or block traffic based on factors like IP addresses, ports, and protocols, preventing unauthorized access and malicious traffic from entering the network. Intrusion Detection System (IDS) and Intrusion Prevention System (IPS) IDS is a security technology that monitors network traffic for signs of malicious activity or policy violations. If detected, it generates alerts for security personnel to investigate. IPS, on the other hand, is an active system that not only detects but also blocks or prevents malicious traffic in real-time. Both IDS and IPS can be host-based (focusing on a single system) or network-based (monitoring the entire network). Security Incident and Event Management (SIEM) SIEM solutions collect, aggregate, and analyze log data from various sources, such as firewalls, IDS/IPS, servers, and applications. They help organizations detect, investigate, and respond to security incidents by providing real-time monitoring, advanced analytics, and automated response capabilities. SIEM solutions also enable compliance with regulatory requirements through centralized reporting and auditing. Vulnerability Management is the process of identifying, evaluating, and addressing security weaknesses in an organization's IT infrastructure, software, and applications. This process involves continuous scanning, monitoring, and assessment of systems to detect possible vulnerabilities. Once vulnerabilities are identified, organizations prioritize and remediate them through patching, configuration changes, or other security controls. The main goal of vulnerability management is to reduce the likelihood and impact of successful cyberattacks by minimizing exploitable vulnerabilities in the environment. Attack Surface Management Attack surface management is the practice of identifying, mapping, and reducing the potential entry points (attack vectors) an adversary could use to compromise an organization's IT systems and data. This involves understanding and securing all components of the IT environment, including hardware, software, networks, cloud services, and third-party integrations. By minimizing the attack surface, organizations can reduce the risk of cyberattacks, lower the chances of successful breaches, and improve their overall security posture. Attack surface management includes activities such as continuous monitoring, threat modeling, secure configuration management, and proper access control implementation. Cloud Security Posture Management (CSPM) CSPM solutions help organizations maintain and improve their security posture in cloud environments by continuously monitoring cloud infrastructure, identifying misconfigurations, and providing recommendations for remediation. CSPM tools enable organizations to enforce security policies, assess compliance, and mitigate risks associated with cloud adoption. Threat intelligence refers to the collection, analysis, and sharing of information about existing and emerging threats, such as threat actors, tactics, techniques, and procedures (TTPs), vulnerabilities, and indicators of compromise (IoCs). 
Threat intelligence solutions help organizations proactively identify and mitigate risks, prioritize security efforts, and improve their overall security posture.
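To ground the firewall description above, here is a minimal sketch of first-match rule evaluation in Python. The rules, addresses, and default policy are invented for illustration; production firewalls are stateful and far more sophisticated.

```python
from ipaddress import ip_address, ip_network

# Illustrative rule table: allow or block based on source IP, port, and protocol.
RULES = [
    {"action": "allow", "src": "10.0.0.0/8", "port": 443, "proto": "tcp"},
    {"action": "block", "src": "0.0.0.0/0",  "port": 23,  "proto": "tcp"},   # no telnet
    {"action": "allow", "src": "0.0.0.0/0",  "port": 80,  "proto": "tcp"},
]
DEFAULT_ACTION = "block"  # deny anything no rule explicitly allows

def evaluate(src_ip, port, proto):
    """Return the action of the first matching rule, else the default."""
    for rule in RULES:
        if (ip_address(src_ip) in ip_network(rule["src"])
                and port == rule["port"] and proto == rule["proto"]):
            return rule["action"]
    return DEFAULT_ACTION

print(evaluate("10.1.2.3", 443, "tcp"))    # allow
print(evaluate("203.0.113.9", 23, "tcp"))  # block
```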
<urn:uuid:a6a6011a-2d89-4d0b-90c3-5a5742ed8e07>
CC-MAIN-2024-38
https://www.hackerone.com/knowledge-center/principles-threats-and-solutions
2024-09-16T07:14:53Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651676.3/warc/CC-MAIN-20240916044225-20240916074225-00198.warc.gz
en
0.924413
650
2.890625
3
Welcome to #WeekendWisdom number 49. This week we’re going to talk about Intrusion Detection Systems. What are intrusion detection systems? An intrusion detection system, or IDS, is either a device or a piece of software that sits on the network and analyses the data that flows across that network. It looks for signs that there might be an intruder, hackers or something that might be on the network or some piece of software that’s doing something malicious on the network. How does an IDS work? It can detect these by using techniques such as, like what an anti-virus application uses, it looks for a signature. So if a specific type of malicious software that is sitting on the network might be extracting data from your network. That has a certain behaviour. It might have a certain signature that the IDS can pick up on. Similarly, using techniques like machine learning an IDS might be able to look for anomalous behaviour. So if a database server is suddenly sending lots of data to another host or outside the network unexpectedly, that anomalous behaviour could be detected by the IDS and reported on. There also other devices which are called honeypots which can enhance an IDS. A honeypot will look like a very, very vulnerable device, maybe a vulnerable email server. If the hackers scan that and try and penetrate that honeypot that will trigger an alert because nothing should be scanning that device. So that’s it for this week. Let’s be careful out there and we’ll talk to you again next week. How can L2 Cyber Security help you? We offer a full range of training programmes, which can be delivered online or in-person*. L2 Cyber Security are also a partner of CyberRiskAware for online self-directed Cyber Security Awareness training and Phishing testing. Contact us for more information at email@example.com. *With appropriate social distancing and other health and safety measures adhered to.
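Returning to the anomalous-behaviour example from earlier in the post, here is a deliberately simplified sketch of the idea in Python. The hosts, baselines, and threshold are made up for illustration; a real IDS uses far richer models and many more signals.

```python
# Simplified illustration of anomaly detection: alert when a host's outbound
# traffic far exceeds its historical baseline. All numbers are made up.
BASELINE_MB_PER_HOUR = {"db-server": 120.0, "mail-server": 40.0}

def outbound_is_anomalous(host, observed_mb, factor=5.0):
    """Return True if observed traffic is more than `factor` times the baseline."""
    expected = BASELINE_MB_PER_HOUR.get(host)
    return expected is not None and observed_mb > factor * expected

if outbound_is_anomalous("db-server", 900.0):
    print("ALERT: db-server outbound traffic is far above its baseline")
```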
<urn:uuid:58750f69-8e5e-4084-abaa-b8063d91450c>
CC-MAIN-2024-38
https://www.l2cybersecurity.com/weekendwisdom-049-intrusion-detection-systems/
2024-09-16T07:07:56Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651676.3/warc/CC-MAIN-20240916044225-20240916074225-00198.warc.gz
en
0.935742
418
3.140625
3
What Is a Cloud Server? A cloud server is a virtual server (as opposed to a physical server) running in a cloud computing environment. It is built, hosted, and delivered via a cloud computing platform over the internet, and can be accessed remotely. Cloud servers possess and exhibit similar capabilities and functionalities to a typical physical server but are accessed remotely from a cloud service provider. Cloud servers offer a more flexible and scalable solution for hosting websites, applications, and managing data. They can be configured to provide varying levels of performance, security, and control, similar to those of a physical server. Unlike traditional servers, which are limited by physical capacities and upfront costs, cloud servers can be scaled on-demand to accommodate changes in traffic and processing needs, ensuring that businesses only pay for the resources they use. How Do Cloud Servers Work? Cloud servers operate across a network of physical machines, known as a cloud computing platform. This setup allows for the distribution of data and applications across multiple hosting environments. Through virtualization technology, a cloud provider can divide, allocate, and manage resources on these physical servers efficiently, creating a flexible and scalable virtual environment for end-users. Key Components of a Cloud Server - Virtualization Technology: The backbone of cloud servers, allowing multiple servers to operate on a single physical server. This technology optimizes resource utilization and provides isolation between different virtual servers. - Physical Server Infrastructure: A network of physical servers located in data centers around the world, providing the hardware resources needed to power virtual servers. - Management Software: Cloud providers use specialized software to manage the allocation of resources, maintain the health of the servers, and ensure the security of the data. Advantages of Using a Cloud Server - Scalability: Users can scale resources up or down based on their needs without the need for physical hardware changes. - Cost Efficiency: Pay-as-you-go pricing models mean users only pay for the resources they consume, eliminating the need for significant upfront investments in hardware. - Reliability: Cloud servers are hosted in redundant data centers - where multiple data centers function together to enhance the overall reliability and availability of services - ensuring high availability and minimizing the risk of downtime. - Flexibility: Cloud servers can support various operating systems and can be quickly reconfigured to meet the changing needs of businesses. - Accessibility: Users can access cloud servers from anywhere in the world, provided they have internet access, facilitating remote work and global business operations. Applications and Use Cases for Cloud Servers Cloud servers are incredibly versatile, catering to a wide range of applications and use cases across different industries. Their flexibility, scalability, and cost-effectiveness make them suitable for everything from hosting websites to running complex data analyses. Below are some of the most prominent applications and use cases: Cloud servers provide a stable, scalable, and cost-effective solution for hosting websites. They can easily handle traffic spikes and are ideal for businesses of all sizes, from small blogs to large e-commerce sites. 
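As a rough sketch of how the scalability advantage described above might be acted on, the following Python snippet shows a toy scale-up/scale-down decision driven by average CPU load. The thresholds and instance limits are invented for illustration; real autoscalers consider many more metrics.

```python
# Toy control loop behind "scale resources up or down based on need".
# Thresholds and limits are invented for illustration only.
def desired_instances(current, avg_cpu_percent, minimum=2, maximum=20):
    if avg_cpu_percent > 75:      # traffic spike: add capacity
        return min(current + 1, maximum)
    if avg_cpu_percent < 25:      # mostly idle: shed capacity and cost
        return max(current - 1, minimum)
    return current                # stay put inside the comfortable band

print(desired_instances(current=4, avg_cpu_percent=88))  # 5
print(desired_instances(current=4, avg_cpu_percent=12))  # 3
```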
Application Development and Testing Developers use cloud servers to develop, deploy, and test applications in a controlled, scalable environment. They can quickly spin up or down resources as needed, significantly reducing the development cycle and costs. Big Data Analytics Cloud servers are used to process and analyze large datasets in big data analytics. They provide the computational power necessary to run complex algorithms and data processing tasks, enabling businesses to gain insights from their data. Disaster Recovery and Backup Cloud servers offer solutions for data backup and disaster recovery. By storing data on cloud servers, businesses ensure it is replicated and can be quickly restored in the event of data loss or a catastrophic failure. Cloud servers can host virtual desktops, allowing users to access their personal desktop environments from any device, anywhere. This is particularly beneficial for remote work and global teams. IoT (Internet of Things) Cloud servers play a crucial role in the IoT ecosystem, providing the backend infrastructure necessary for collecting, processing, and storing data from IoT devices, as well as running the analytics and applications that leverage this data. These use cases highlight the versatility of cloud servers, making them a cornerstone of modern IT infrastructure. Their ability to adapt to different workloads and computing needs while offering cost efficiency and high availability makes them an invaluable resource for businesses and individuals alike. Challenges Associated With a Cloud Server Setup Transitioning to or implementing a cloud server setup can bring significant advantages, but it also comes with its own set of challenges. Addressing these challenges is crucial for organizations to fully leverage the benefits of cloud computing. Below are some key challenges associated with cloud server setups: - Security and Privacy Concerns: Ensuring the security of data in the cloud is paramount. Organizations must navigate issues related to data privacy, compliance with regulations, and the risk of data breaches. - Management Complexity: Managing cloud infrastructure requires a different set of skills compared to traditional IT setups. Businesses may face challenges in monitoring and managing cloud resources efficiently. - Cost Management: While cloud servers can be cost-effective, unpredictable or unoptimized usage can lead to unexpectedly high costs. Effective cost management strategies are necessary to prevent budget overruns. - Performance and Latency Issues: Depending on the cloud provider and the geographical location of the cloud servers, users may experience latency or performance issues, particularly for time-sensitive applications. - Data Transfer and Bandwidth Costs: Moving large volumes of data to and from the cloud can incur significant costs. Additionally, bandwidth limitations can affect the performance of cloud-based applications. - Vendor Lock-in: Dependency on a single cloud provider's technologies and services can lead to vendor lock-in, making it difficult and potentially costly for organizations to switch providers or deploy a multi-cloud strategy. - Compliance and Legal Issues: Organizations must ensure that their use of cloud services complies with all relevant laws and industry regulations, which can vary by region and may change over time. FAQs About Cloud Servers - How do you run a cloud server? 
Choose a provider, set up an account, select server specs, deploy the server, configure any relevant security, and install the software you want to run. After that regular server monitoring is advisable. - How much does it cost to set up a cloud server? Costs vary based on provider, configuration, data usage, and services, ranging from a few dollars to hundreds per month. - Why is it called a cloud server? It's termed "cloud" because, in common with other cloud services, cloud servers are accessible remotely through the internet, akin to how clouds are high above and accessible from anywhere. - Can cloud servers be used for gaming? Yes, they support gaming environments, offering scalable, high-performance platforms accessible from anywhere. - How secure are cloud servers? They use encryption, firewalls, and access controls for security, though effectiveness also relies on user practices and configurations.
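As one possible illustration of the "deploy the server" step in the first answer above, the snippet below uses AWS's boto3 SDK; other providers expose similar APIs. The region, image ID, and instance type are placeholders, and valid credentials would be required for the call to succeed.

```python
import boto3  # AWS SDK for Python; other cloud providers offer similar SDKs

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image
    InstanceType="t3.micro",          # small, inexpensive server size
    MinCount=1,
    MaxCount=1,
)
print("Launched cloud server:", response["Instances"][0]["InstanceId"])
```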
<urn:uuid:5aa6deed-1adf-4881-b8ac-911edf3aa40f>
CC-MAIN-2024-38
https://www.supermicro.org.cn/en/glossary/cloud-server
2024-09-17T12:51:06Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651773.64/warc/CC-MAIN-20240917104423-20240917134423-00098.warc.gz
en
0.91397
1,421
3.328125
3
As a company focused on powering a future shaped by growth, Frost & Sullivan is proud to join the nation in commemorating Juneteenth as the newest federally recognized holiday. NPR states, “Juneteenth is a holiday to commemorate the emancipation of enslaved Black Americans in the United States. In 1865, a Union General arrived in Galveston, Texas to share the news.” However, the Emancipation Proclamation was issued more than two years earlier. “Since then, the holiday has been widely celebrated. In fact, as of June 15th, 2021, the Senate passed a bill making Juneteenth a federal holiday. Juneteenth is also known as Emancipation Day, Jubilee Day, Freedom Day and Liberation Day.” Learn more and join us in recognizing the federal holiday via the resources below: - S.475 – Juneteenth National Independence Day Act - House Passes A Bill To Commemorate Juneteenth As A Federal Holiday - Juneteenth 2021 celebrations: What to know about the holiday - 9 Virtual and In-person Events to Celebrate Juneteenth Around the U.S. Frost & Sullivan – Diversity, Equity & Inclusion Alliance
<urn:uuid:8b7460f9-9a3d-41b5-bff1-0db081d80e29>
CC-MAIN-2024-38
https://www.frost.com/news/press-releases/frost-sullivan-commemorates-juneteenth-as-a-federal-holiday/
2024-09-18T14:44:58Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651899.75/warc/CC-MAIN-20240918133146-20240918163146-00898.warc.gz
en
0.95021
244
2.640625
3
Cybersecurity experts are discovering new kinds of cyberattacks more often as threat actors continue to evolve their hacking techniques. Security researchers from Texas A&M University and the University of Florida recently uncovered a new fingerprint-capturing and browser-spoofing attack that compromises users’ privacy and security. Dubbed Gummy Browsers, the attack harvests browser fingerprinting information without the victims’ knowledge. What is a Gummy Browsers Attack? According to the research, the Gummy Browsers attack primarily focuses on obtaining users’ fingerprint details by tricking them into visiting a hacker-operated website. The attackers then spoof the fingerprint to use it on other targeted platforms. The Gummy Browsers attack technique enables a threat actor to disrupt any web application that relies on browser fingerprinting. Once the attacker obtains a victim’s fingerprint, it can be leveraged to: - Bypass 2FA and MFA authentication - Spoof the user’s online fingerprint to steal their identity and commit fraud - Steal personal data by breaking into user devices Fingerprint Spoofing Methods The researchers revealed three methods that could be used to spoof users’ fingerprints online. These include: - Browser Settings and Debugging Tools – Attackers manipulate the browser settings and debugging tools that allow various attributes of the client device and browser to be altered. - Script Modification – Changing the browser properties by modifying the scripts embedded in the website before the data is sent to the web server. Risks of Stolen Fingerprints With the increase in fingerprint and biometric authentication procedures, stolen digital fingerprints have become one of the primary targets of cybercriminals. Threat actors even trade stolen credentials along with fingerprints on various darknet forums, allowing cybercriminals and affiliates to carry out scams and fraud. “Our results showed that Gummy Browsers can successfully impersonate the victim’s browser transparently almost all the time without affecting the tracking of legitimate users. Since acquiring and spoofing the browser characteristics is oblivious to both the user and the remote web server, Gummy Browsers can be launched easily while remaining hard to detect. The impact of Gummy Browsers can be devastating and lasting on the online security and privacy of the users, especially given that browser fingerprinting is starting to get widely adopted in the real world. In light of this attack, our work raises the question of whether browser fingerprinting is safe to deploy on a large scale,” the researchers said.
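To see why replayed attribute values are so effective, consider this simplified Python sketch of how a fingerprint might be derived. The attribute list is a small illustrative subset; real fingerprinting scripts collect far more (canvas rendering, installed fonts, audio context, and so on), but the principle is the same: an identifier computed only from reported values can be reproduced by anyone who has captured those values.

```python
# Simplified sketch: a fingerprint derived purely from reported attributes
# can be reproduced by replaying those attributes. Illustrative only.
import hashlib
import json

def fingerprint(attrs):
    canonical = json.dumps(attrs, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

victim = {
    "userAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "screen": "1920x1080",
    "timezone": "America/New_York",
    "language": "en-US",
}

attacker_replay = dict(victim)  # values harvested from the hacker-operated site

print(fingerprint(victim) == fingerprint(attacker_replay))  # True
```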
<urn:uuid:895e2d9d-c00a-4bb9-a77d-50beaef9b798>
CC-MAIN-2024-38
https://cisomag.com/beware-gummy-browsers-attack-captures-browser-fingerprints/
2024-09-19T22:08:30Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652067.20/warc/CC-MAIN-20240919194038-20240919224038-00798.warc.gz
en
0.904649
514
2.640625
3
Improper Neutralization of Special Elements used in an SQL Command ('SQL Injection') PHPGurukul Hospital Management System in PHP v4.0 has a SQL injection vulnerability in \hms\registration.php. Remote unauthenticated users can exploit the vulnerability to obtain database sensitive information. CWE-89 - SQL Injection Structured Query Language (SQL) injection attacks are one of the most common types of vulnerabilities. They exploit weaknesses in vulnerable applications to gain unauthorized access to backend databases. This often occurs when an attacker enters unexpected SQL syntax in an input field. The resulting SQL statement behaves in the background in an unintended manner, which allows the possibility of unauthorized data retrieval, data modification, execution of database administration operations, and execution of commands on the operating system.
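The following is a generic illustration of CWE-89, not the actual PHPGurukul code; it is shown in Python with the standard sqlite3 module for brevity, but the same contrast between string concatenation and parameterized queries applies to PHP prepared statements.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER, email TEXT)")
conn.execute("INSERT INTO patients VALUES (1, 'alice@example.com')")

user_input = "x' OR '1'='1"  # attacker-supplied value

# Vulnerable: the input becomes part of the SQL text, so the OR clause
# matches every row in the table.
vulnerable = conn.execute(
    "SELECT * FROM patients WHERE email = '" + user_input + "'").fetchall()

# Safe: a parameterized query treats the input purely as data.
safe = conn.execute(
    "SELECT * FROM patients WHERE email = ?", (user_input,)).fetchall()

print(len(vulnerable), len(safe))  # 1 0
```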
<urn:uuid:8f835a86-b0c9-4498-9903-e72e4be0792d>
CC-MAIN-2024-38
https://devhub.checkmarx.com/cve-details/cve-2020-22171/
2024-09-19T21:47:29Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652067.20/warc/CC-MAIN-20240919194038-20240919224038-00798.warc.gz
en
0.838685
160
2.546875
3
Entry-level medical billing and coding jobs sit within healthcare administration, a vast and complex field that relies on many integral components to operate smoothly. Medical billing and coding are two interrelated disciplines that play a crucial role in ensuring the efficient functioning of healthcare organizations. These responsibilities require individuals to act as gatekeepers of important data, and as such, they shoulder significant duties within the medical field’s economic ecosystem. This article will explore the nuances of medical billing and coding, including their key attributes, educational prerequisites, recognition and licensing, career projections, and daily challenges, as well as possible troubleshooting methodologies. Understanding Entry Level Medical Billing and Coding Jobs Demystifying the Core Concept of Medical Billing and Coding Jobs: A Focused Examination Medical billing and coding: the vital heart that keeps the circulation of the healthcare industry intact. This in-depth review aims to unravel the fundamental core concept behind medical billing and coding, a complex yet integral process that bridges the knowledge gap between healthcare providers, insurance agencies, and patients. Medical billing and coding is essentially the healthcare equivalent of translation: transposing treatments into universal code, which is then used to process insurance claims. All hospital or clinical diagnoses, treatments, procedures, and supplies are assigned specific codes, forming a universally accepted language that reflects the intricacy and nuances of modern healthcare. A key player in this dance of codes is the International Classification of Diseases (ICD), which is updated periodically to reflect the evolving state of medical science. The ICD allows diseases, signs, symptoms, abnormal findings, complaints, and external causes of injury or diseases to be represented in an easily translatable format, streamlining communication between various stakeholders. Let’s take a brief diversion into the realm of Current Procedural Terminology (CPT). These codes, developed and maintained by the American Medical Association, relate to specific medical, surgical, and diagnostic services. Just as ICD is the lingua franca of diagnoses, CPT is the dialect of procedures. Together, these codes form the syntax and grammar of the medical billing and coding language. An equally integral aspect of this system is the Healthcare Common Procedure Coding System (HCPCS). Overseen by the Centers for Medicare and Medicaid Services, this nomenclature system is predominantly used for outpatient and home health procedures, including non-physician services such as ambulance rides or prosthetics. The taxonomy established by ICD, CPT, and HCPCS is essentially an elegant shorthand, distilling myriad healthcare components into a concise, recordable, and universally decipherable form. This mechanistic aspect of the healthcare system ensures that the careful orchestration of care is appropriately rewarded and accounted for in the maze of insurance claims. Translating this coded language into insurance billing forms the next logical step in this intricate process. Immaculate and accurate billing is crucial, as disparities or errors can result in claim denial, causing delays and financial strains. The exacting rigor of this process therefore underscores the significance of the medical coder as a pivotal player in healthcare delivery.
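One way to picture the translation described above is as a small structured claim record. The sketch below uses two real, commonly cited codes (ICD-10 E11.9 and CPT 99213), but the claim layout and the validation check are heavily simplified for illustration.

```python
# Simplified picture of clinical facts becoming standardized codes on a claim
# line. The codes are real, commonly cited examples; the layout is not a real
# claim format.
claim_line = {
    "patient_id": "P-0001",
    "diagnosis_icd10": "E11.9",  # type 2 diabetes mellitus without complications
    "procedure_cpt": "99213",    # established-patient office visit
    "charge_usd": 125.00,
}

def basic_checks(line):
    """Tiny example of the kind of validation that helps prevent claim denials."""
    problems = []
    if not line.get("diagnosis_icd10"):
        problems.append("missing diagnosis code")
    if not line.get("procedure_cpt"):
        problems.append("missing procedure code")
    return problems

print(basic_checks(claim_line))  # []
```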
It’s evident that medical billing and coding is an intricate, dynamic field that forms the backbone of the healthcare reimbursement process. Its matrix-like complexity enables healthcare providers to convert medical treatment into an accepted universal language, subsequently resulting in a smoothly navigated insurance claim process and ensuring the continuity of healthcare. The admirable clinical-medical symbiosis facilitated by medical billing and coding, therefore, serves as an indispensable pillar upholding the edifice of contemporary healthcare. Job Description and Responsibilities As we delve further into the intricate details of a medical billing and coding job, it becomes apparent that the heart of this vocation revolves around three primary responsibilities: abstraction of diagnostic and procedural information, efficient record management, and diligent compliance with legal and regulatory requirements. Let’s explore these crucial aspects right away. First and foremost, the extraction of key diagnostic and procedural data from medical records is what fuels the labyrinthine machinery of insurance claims. Each patient visit generates a plethora of information: symptoms, diagnostic tests, diagnoses, and treatments – all of which need to be appropriately coded using the aforementioned systems of ICD, CPT, and HCPCS. These codes facilitate the interpretation, recording, and ultimately, the reimbursement of healthcare actions. This sophisticated task, full of potential pitfalls, is firmly in the hands of medical coders at the entry-level itself. Furthermore, efficient management of patient records is another cornerstone of entry-level medical billing and coding roles. Patient records must be organized methodically and scrutinized regularly for the sake of timeliness and accuracy. This process is integral to creating precise invoices, facilitating effective doctor-patient communication, ensuring accurate billing, and avoiding costly re-billing or claim denial occurrences. It is also noteworthy to mention that errors in patient data can affect patient safety itself – a testament to the grave responsibility resting on the shoulders of coders. Last but not least, adherence to legal and regulatory norms is a prerequisite that cannot be stressed enough. Healthcare delivery is an arena wrought with diverse and ever-evolving laws, regulations, and standards. Therefore, a cogent understanding and rigorous application of Health Insurance Portability and Accountability Act (HIPAA) laws, along with various federal, state, and insurance regulations, are non-negotiable mandates for medical coders. This adherence safeguards patient privacy, prompts fair reimbursement practices, and potentially wards off legal consequences. To adequately equip oneself for this intricate web of tasks, certain skill sets are non-negotiable. Aside from a thorough understanding of medical terminologies and coding systems, an eye for detail, analytical ability to interpret complex medical records, and aptitude to persist in a fast-paced, dynamic work environment are crucial to the successful execution of entry-level responsibilities in medical billing and coding. This job, it must be noted, is not merely an administrator’s role. It is an arbiter’s responsibility, standing at the intersection of healthcare delivery and financial sustainability, carrying profound implications for the continuity of patient care and fluidity of healthcare functions. 
It is, therefore, without an ounce of apprehension that one might state that the seemingly painstaking occupation of medical billing and coding is, in reality, a linchpin that holds the vast, intricate territory of healthcare provision together. Thus, individuals undertaking such a role must understand the magnitude of their functions and execute tasks with due diligence, fostering a culture of precision, efficiency, and integrity in their workspace. Education and licensing requirements After understanding the myriad aspects of medical billing and coding, one might ponder precisely what educational qualifications and certifications are needed to secure an entry-level position in this complex but indispensable field. The critical role of these specialists and their far-reaching implications on both the financial aspects and overall patient care necessitates rigorous training and proven competency. Typically, foundational knowledge in health sciences and medical terminologies is a prerequisite. High school graduates interested in this profession often embark on their journey by participating in degree programs like an Associate’s degree in Health Information Management or a Bachelor’s degree in Health Informatics. These programs offer comprehensive knowledge of health information systems, medical terminologies, physiology, anatomy, and pathophysiology, which lay the groundwork for medical billing and coding. Moreover, an introduction to health insurance and reimbursement, coding and classification systems, health care law, and ethics complement academic learning in these degree programs. Graduates from these programs can further hone their medical billing and coding skills and apply them outside the theoretical framework of the classroom. On the certification front, several recognized agencies like the American Academy of Professional Coders (AAPC) and the American Health Information Management Association (AHIMA) offer sought-after credentials. Germinal entry-level certifications include the Certified Professional Coder (CPC) and the Certified Coding Associate (CCA). These certifications require applicants to demonstrate mastery of diagnostic and procedural coding for a wide range of medical services. They also necessitate proving knowledge in medical coding guidelines and payment methodologies under the complex health insurance landscape. Acquiring advanced certifications like the Certified Professional Coder-Payer (CPC-P), the Certified Inpatient Coder (CIC), the Certified Coding Specialist (CCS), or the Certified Coding Specialist-Physician-based (CCS-P) could increase marketability and career advancement opportunities. However, these are more suited for specialists with substantial experience in medical billing and coding. Note that the field of medical billing and coding is undoubtedly challenging and can prove to be intricate and dynamic, yet it is profoundly rewarding. The educational qualifications and certifications required might seem daunting at first. However, they serve to validate the standard of effective and efficient healthcare delivery, ensuring a crucial translation bridge between healthcare providers and insurance companies is fortified with accurate, timely, and ethical practices. In essence, the rigorous journey to becoming a medical billing and coding professional equips aspirants with a skill set not limited to technical expertise in coding systems but extends to critical thinking, problem-solving, and a profound understanding of the healthcare industry. 
This journey, while demanding, ultimately leads to a role of immense importance and impact, serving as a testament to the merit and high standards demanded by this profession. Career Prospects and Salary After exploring the vast landscape of medical billing and coding, it is important to look at the career path and earning potential of this essential health sector profession. Starting a career in medical billing and coding requires a willingness to learn and adapt, as the field is constantly evolving. The first step is usually acquiring an entry-level position, such as a Medical Biller and Coder, Medical Coding Specialist, or Insurance Coder. This requires prior education and certification, as well as a solid understanding of the field of medical billing and coding. The intricate maze of diagnoses and procedural codes requires continuous updates and additions, keeping professionals on their toes. Employer requirements for these positions vary, but often require an associate’s or bachelor’s degree, preferably in health information management or a related field. AHIMA’s Certified Coding Associate certification is often the first step in gaining the necessary knowledge and skills. As professionals gain experience and advanced certification, opportunities for role expansion and specialization arise, leading to positions such as Medical Coding Auditor, Clinical Documentation Improvement Specialist, or even Medical Coding Manager. The certifications Certified Professional Coder (CPC), Certified Inpatient Coder (CIC), and Certified Coding Specialist – Physician-based (CCS-P) offer lucrative avenues to bolster career prospects. The Bureau of Labor Statistics (BLS) estimates the median pay for medical records and health information technicians, including medical billers and coders, at $42,630 per annum as of May 2020. However, this figure varies based on factors such as geographic location, level of education, expertise, certification, and the type of healthcare facility. Even with the challenges that come with the job, the rewards make it worth it. Medical billers and coders play an essential role in ensuring the seamless operation of healthcare facilities, which ultimately leads to the accessibility of quality patient care. It is a demanding yet rewarding profession that is critical to the global healthcare landscape. Real-world challenges and solutions As per the discussion above, it is quite clear that mastery of medical billing and coding not only demands a strong knowledge base and deft practical skills, but also a commitment to adapt and swim with the changing tide. In a profession where compliance and precision are highly valued, this deep dive into its fundamental aspects provides a prerequisite learning platform for aspiring professionals. It paints a vivid picture of what to expect as a medical biller and coder, with a special focus on the rewards and challenges that line the path. The aspiration to train as a medical billing and coding professional involves an appreciation for the details and a passion for making a difference in the realm of healthcare administration.
<urn:uuid:0d7f9aac-1c2a-4812-b8b8-aa77552fec56>
CC-MAIN-2024-38
https://cyberexperts.com/entry-level-medical-billing-and-coding-jobs/
2024-09-07T20:20:31Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650920.0/warc/CC-MAIN-20240907193650-20240907223650-00098.warc.gz
en
0.930231
2,461
2.515625
3
VR and AR technologies are now most actively used in medicine to prepare and perform surgeries. Three-dimensional data from CT and MRI scans help to create a virtual reality model of a patient and prepare for surgery taking into account risks and potential problems. Surgeons have been using these machines for several years to prepare for particularly complex interventions. For example, the VR-based Surgical Theater platform helps plan neurosurgical surgeries in the U.S. and Israel. Using this solution in 2021, Soroka Medical Center (Israel) performed the most complex surgery to separate twins with conjoined heads. Using Surgical Theater, doctors created interactive 3D and VR models that allowed them to study problem areas through a special headset. Next, the surgical procedure was designed. At the last stage, using SNAP (Surgical Navigation Advanced Platform), the created scheme was transferred to the surgical navigation system of the operating room. It should be noted that the operation to separate conjoined twins heads is extremely rare and has been performed about 20 times worldwide. VR technologies also help directly in performing surgeries. For example, using with the help of da Vinci surgical robots using virtual reality, more than 20 thousand operations have been performed in Russia by 2021. There are already da Vinci systems in Moscow, St. Petersburg, Novosibirsk, Ufa, Tyumen and several other cities. A typical configuration of such a complex consists of a video stand with light sources and cameras, a surgical console, as well as instrumental manipulators and an endoscope, which creates a three-dimensional image. This system greatly increases the precision of the surgeons' actions, allows to eliminate hand tremors, increase the accuracy of movements and maneuverability with the help of special filters. AR technologies are no less in demand. In particular, in May 2021, Nevada Spine Clinic performed a very complex spine surgery. The Medtronic Mazor X robotic platform and xvision AR headset were used for it. With the combination of these technologies, the surgery, which normally takes 6-7 hours, was completed in less than 2 hours. In 2020, Russia's first surgery using AR glasses was also performed in Stavropol Territory. Thanks to glasses the surgeon could view the patient's data: his CT scan, MRI, etc., without losing time. Experience Sharing and Training In 2016, a 360 online broadcast of the operation was performed for the first time. This practice has now expanded and gained popularity, as a large number of viewers can watch complex surgeries up close using VR glasses. This is useful both for teaching students and for passing on experience to colleagues. Nowadays, medical VR simulators are widely used in the world to train medical students and test their skills. In particular, a solution for training and examinations in laparoscopic surgeries has been developed by order of the Ministry of Health of the Russian Federation. According to a 2019 Harvard Business Review study, the overall productivity of surgeons who received VR training increased by 230% compared to colleagues who received traditional training. Rehabilitation of Patients after Strokes and Injuries Medical research has shown that during rehabilitation most patients have serious problems with motivation and engagement - only 30% of the required exercises are performed, which is very, very low. VR technologies make it possible to improve this figure with gamification. 
VR technologies are now being widely used to rehabilitate people after strokes and to restore limb mobility after injuries. Pompeu Fabra University (Spain) uses a solution that projects the patient's outstretched arms on the screen. Using sensors, the patient can control them as his own, with one drawback: virtual limbs move more precisely and quickly than real ones. Just 10 minutes of a session boosts the patient's confidence and speeds up recovery. In Russia, the Pirogov Centre uses the Devirta-Delphi solution for the same purpose: using sensors placed on the patient, a digital avatar is created who is then moved into the virtual world to perform exercises that restore motor function, such as swimming in the sea with dolphins. By removing themselves from reality, patients perform better and gain confidence faster. There are examples of the use of VR technology for the therapy of children with cerebral palsy. VR technology is being actively tested to combat a variety of phobias, including post-stroke syndrome. Virtual reality is safe and controllable. By immersing themselves in the simulation of disturbing moments or by coming into contact with the object of their fear, patients gradually get rid of their phobias. As for the real-world cases, Psious VR Therapy is an online platform that provides psychotherapists with tools to deal with various types of phobias, eating disorders, depression, and post-traumatic stress disorder. There are also the first practical results in Russia. For example, the Rewire.Education project assisting in adaptation and development of children and teenagers with autism through a VR application is actively operating. VR and AR solutions for Medicine in Testing and Research Phase In addition to the medical solutions listed above, there is ongoing research into the potential of VR and AR technology for the following purposes: - For anesthesia, or rather to reduce pain; - For treatment of dementia, nervous system lesions and mental disorders; - To help the blind and visually impaired; - To diagnose diseases, including Parkinson's disease; - To develop communication skills as well as empathy; - To simulate different conditions and illnesses, thereby enabling a better understanding of the patient's sensations; - For aesthetic medicine, allowing you to "try on" possible changes to your appearance. Outlook and Figures Markets and Markets research group estimates that the medical VR market is expected to grow to $5 billion by 2023 at a compound annual growth rate of 36.6%. According to US-based Goldman Sachs, by 2025, the use of VR and AR technology in medicine is expected to come in second place in the overall software market in terms of volume ($6.1 billion). The most promising uses of VR and AR technologies are the development of medical simulators for training doctors, creating surgical complexes, and solutions for patient rehabilitation. The roadmap of the Russian national program "Digital Economy" notes the possibility to reduce the number of disabled people among the able-bodied population by 7% through rehabilitation with the help of VR, as well as to reduce the number of medical errors by 50-80%.
<urn:uuid:989541ef-0767-49ea-bf5b-c090770f3da5>
CC-MAIN-2024-38
https://noventiq.az/en/about/news/ar-i-vr-dlya-meditsinyi-primenenie-na-praktike
2024-09-07T21:51:41Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650920.0/warc/CC-MAIN-20240907193650-20240907223650-00098.warc.gz
en
0.956771
1,311
3.203125
3
Malware attacks are on the rise. According to ITRC, in 2023, there were 3,205 publicly reported data compromises that impacted an estimated 353,027,892 individuals. The increasing digitization of businesses and whole industries over the previous few decades has emboldened black-hat actors and made organizations more susceptible to attack. The prevalence of malware attacks has skyrocketed since COVID-19 when many organizations moved their operations completely online. Many businesses don’t take malware attacks seriously until it’s far too late. Contrary to what many believe, malicious actors don’t just target organizations with deep pockets — they cast wide nets and exploit any vulnerabilities they find. For that reason, it’s important for every organization to understand the basics of malware attacks: What they are, the common malware types and what to do to prevent them from happening. What Is Malware & Why Is it Important to Understand? Malware, or malicious software, is a type of software designed to intentionally cause damage to computers, systems, servers and networks, typically (but not exclusively) to extract some form of monetary payment from the owner. It’s important for individuals and organizations to understand malware attacks. The incidence and success rate of these security breaches have been on the rise in recent years, making them one of the biggest — yet most preventable — expenses for businesses. The average cost of a ransomware attack in 2023 was $5.13 million. They can put enormous financial strain on organizations, erode trust in their brand and ultimately cause permanent damage that they may struggle to recover from. The Most Common Types of Malware There are numerous different types of malware, each with their own attack profiles, objectives and even end-goals. Here are the most common: This form of malware tricks unsuspecting users into downloading and configuring apparently legitimate software applications. Once the program has been executed, the trojan installs the malware into the host system and begins exfiltrating data and encrypting passwords. Spyware is one of the most harmful types of malware. In the event of an attack, users are often unaware that their servers have been infected. Spyware can enter a system in a variety of different ways, including an app installation, downloading malicious attachments or visiting a malicious website. Once inside, spyware runs in the background of the device, collecting reams of user information and relaying this to third parties. Ransomware attacks have grown exponentially in recent years, making them one of the most lucrative sources of income for cybercriminals. This style of cybersecurity breach happens in a number of ways, but the most common attack vector is through phishing. Unsuspecting users receive prompts to exchange personal information, usually through an email from a purported trusted source. Cybercriminals use this information to penetrate networks, encrypt sensitive data and, ultimately, demand huge ransoms from their victims. Among the least harmful types of malware, adware works by bombarding users with pop-up ads paid for by third parties. Although adware is typically more of a nuisance to users than a serious threat to their private information, failure to remove adware could lead to browser slowdown that may eventually render the device inoperable. Adware programs usually gain access to devices after a user downloads a malicious program. 
Computer worms infect devices by exploiting security vulnerabilities and replicating themselves to infect other devices. Unfortunately, even after the device owner has identified and removed the worm, most of the damage might already be done. The primary objective of the worm is to remain on the host server for as long as possible and infect as many other systems as it can, so even if users have recovered the initial device, many others in their network will likely already be infected. Fileless malware attacks are a more sophisticated means hackers use to penetrate systems. This malware type gains access to the host computer system through legitimate software or applications downloaded via a malicious link, usually through email. Fileless attacks operate strictly in memory, leaving no digital footprint and making it extremely difficult for security systems to detect them. How Malware Impacts Businesses In many cases, malware is more than a straightforward problem that businesses can quickly pay to have fixed and removed. At its worst, malware can infect entire networks and steal huge sums of sensitive data information, requiring costly interventions to remediate. Even when handled successfully, the impact of malware attacks can permanently damage a business’s operations, brand image and revenue flows. Here are some of the ways malware can affect organizations: One of the hallmarks of a malware infection from the user perspective is that devices and browsers become so slow they’re rendered almost inoperable. In worst-case scenarios, malware can make critical systems and data completely inaccessible, meaning employees are unable to use the devices and access the programs they need to do their jobs. Damages customer relationships Business relationships in the digital economy are founded on trust. Customers share huge amounts of their personal data with the brands they buy from, and that means data privacy is a top priority. When companies are subject to a malware attack, it demonstrates to customers that they aren’t taking the right precautions to protect data, potentially signaling to them that they’re careless with their private information. This can have a permanent effect on trust and brand reputation. Probably the most obvious and harmful effect of malware attacks, malicious actors often demand massive sums of cash to decrypt sensitive information. Although security experts strongly advise against it, many companies simply choose to pay the ransom and hope the problem goes away. Even if they heed the warning, they will often still have to pay huge amounts to recover lost data and undo the damage to their brand. Creates more vulnerabilities Most malware attacks work by either stealing sensitive information, gathering data in the background of regular browsing sessions, or encrypting account login credentials. Even if an organization keeps backups of its sensitive information (which is a key part of any recovery effort), malicious actors will often retain access to stolen information, which they could use to advance other nefarious purposes. How to Identify and Prevent Malware Attacks Understanding what a malware infection is and how it can affect businesses is the necessary starting point. Even more important, however, is knowing how to identify when an attack has taken place and what steps security analysts can take to maximize their organization’s resistance to security breaches. While specific identifiers don’t necessarily apply to all types of attacks, there are a few worth noting. 
These include: Adware infects devices and enables third parties to push advertisements to browsers. If you’re being bombarded with pop-up ads that’s a telltale sign your device has been infected by malware. Malware attaches itself to your device’s internal systems and uses them to fuel its malicious activity. Over time, this can cause your browser to slow substantially below normal levels. If your devices are operating much slower than normal or are frequently crashing, that could be a sign of malware. One of the most conspicuous indicators that you’ve fallen victim to a malware assault is receiving ransom demands from unidentified individuals asserting they’ve encrypted your data. It’s never advised to pay to resolve a ransomware attack as doing so encourages further criminal activity and there is no guarantee you will get your stolen data back. As PC users become more adept at identifying and preventing malware attacks, threat actors have developed more subtle ways to infect host computers. Any activity that seems especially unusual — like getting ads on government websites or an inability to remove certain software — is likely a sign of a malware attack. Losing account access In the event of a ransomware breach, cybercriminals typically pilfer login credentials and extort a ransom in return for decryption keys. If you find yourself locked out of vital accounts, such as banking portals, it’s probable you’ve fallen victim to malware infiltration. The better hackers get at penetrating devices, the more difficult malware detection is. Even if you don’t detect any unusual activity on your network or device, it’s still important to be on guard to ensure malware isn’t secretly stealing sensitive information in the background of your browsing session. Preventing a malware attack before it occurs is the optimal strategy. Although some organizations perceive malware prevention as demanding costly and extensive security measures, significant strides can be made through the implementation of essential best practices, alongside security systems. Successful malware attacks rely on gaining exclusive access to your data, so keeping backups is one of the most effective ways to thwart malware attacks. Make sure you back up your data as frequently as possible and, ideally, storing it in a secure, offsite storage location. In the event of a malware attack, you won’t have to rebuild your data infrastructure from the ground up, saving precious time, money, and stress. One of the most prevalent and easily manipulable avenues for attack is user error. Individuals, unaware of potential threats, may unknowingly interact with harmful elements, such as clicking on links or downloading files from seemingly familiar sources, thus inadvertently activating malicious software. It’s imperative to provide comprehensive education to all staff members regarding optimal practices, which encompasses instructing them on recognizing dubious emails or online solicitations and implementing more rigorous password protocols. Invest in a malware scanner Preventive measures such as maintaining backups and providing user education are crucial. Despite our best efforts, unforeseen events may occur. In such instances, it becomes imperative to have reliable security software in place to promptly detect malware attacks, enabling swift intervention to mitigate potential long-term harm. 
Configure a firewall
It's imperative to invest in a comprehensive firewall that blocks unauthorized users from accessing your systems, protecting against a wide range of malware attacks.
Malware attacks are on the rise. Businesses need to be prepared with the latest solutions and technology to enhance their security posture and protect their sensitive information from attack. And that starts with having the right team of cybersecurity professionals on your side. Fortra's Alert Logic's global security operations center (SOC) experts monitor your systems 24/7 and leverage a diverse range of data collection and analytics methods for rapid threat detection. The comprehensive coverage provided by our managed detection and response (MDR) solution utilizes a combination of people, processes and tools to reduce the likelihood of a successful attack. Schedule a demo today to get started.
<urn:uuid:196740b0-9182-4443-a5b0-089e9225e8e8>
CC-MAIN-2024-38
https://www.alertlogic.com/blog/what-is-malware/
2024-09-07T20:07:09Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650920.0/warc/CC-MAIN-20240907193650-20240907223650-00098.warc.gz
en
0.936992
2,142
3.109375
3
Over the past couple of decades, AI has evolved rapidly. From simple programs that could play chess to new software that can manage the most complex and time-consuming of tasks, artificial intelligence has come a long way. One particular subset of AI, dubbed machine learning (or ML), is quickly transforming the way business is done in a variety of industries, from customer service to supply chain management. Built on the idea of prediction and continuous learning, machine learning is helping businesses stay on top of trends and better mitigate disruptive events that can happen at the drop of a hat. To better understand the potential use cases of this emerging tech, it's beneficial to start with the simplest question: What is ML?
What is ML?
Machine learning (ML) is a subcategory of artificial intelligence that's becoming increasingly important; its entire functionality is based around prediction, interpretation, and learning. Essentially, ML allows algorithmic software to analyze patterns in data sets and make decisions without human input. For example, a computer scientist could give a machine learning algorithm a large data set to parse through, and that algorithm could then make smart decisions and recommendations based on the data. What's more, those algorithms can continually improve upon themselves and make even better decisions through human-enabled corrections. Of course, like any program built on artificial intelligence, the capabilities of machine learning and the recommendations it's able to make are shaped by the quality of the data it's been fed. Paired with quality data sets, along with analysts who know how to recognize and reduce bias, machine learning has enormous potential to transform the way people think about and do work.
Different types of machine learning
Machine learning encompasses an umbrella of several related learning methods. Depending on what kind of data analysts want to predict, they'll use one of four machine learning strategies. We touch briefly on the subject here, but if you'd like to learn more about the different types of machine learning, click here.
Supervised learning involves giving algorithms a labeled set of training data. The variables the analyst wants the algorithm to test are clearly defined, and both the input and output of the algorithm are specified. An easy example of supervised learning is classification, i.e. asking the algorithm a simple question such as: Is this a picture of a cat or a dog? By using past images fed to it of both cats and dogs (in this case, the labeled data), the algorithm can classify by attributes alone whether the image is a cat or a dog.
Unsupervised learning allows an algorithm to train itself on unlabeled data. The program will parse through data sets to look for meaningful connections and structures. In this method, neither the data labels nor the predictions the algorithm makes are predetermined. For businesses, one of the most applicable uses of unsupervised learning is clustering, or the ability to find natural groups within a subset of otherwise uninterpreted data. This can result in new data to help segment customer audiences, make recommendations, or analyze the demographics of your social media.
A mix of both supervised and unsupervised learning, semi-supervised machine learning involves a greater level of freedom for the algorithm.
Analysts can feed the algorithm a mix of labeled and unlabeled data, and the algorithm is then free to make connections and explore without any predetermined outcomes. The most common example of semi-supervised learning is text document classification. Instead of having to read through an entire document to classify it and help you find the specific document in search results, the algorithm can classify a text document based on smaller amounts of text. The labeled data (the text document) and unlabeled data (the contents of the document) together make this method semi-supervised.
In reinforcement learning, a user programs an algorithm to complete a particular task. As the algorithm works out the task at hand, the user can give positive and negative cues to help get it on track, while still allowing the program to choose what steps it takes to reach the final outcome. This is oftentimes the method people will use to teach machines to do multi-step processes with clearly defined rules. A real-world example of reinforcement learning being used is in medical diagnoses. In the healthcare world, doctors and nurses must go through specific protocols and procedures when diagnosing a patient; in many cases, algorithms can run through the same processes much faster, resulting in quicker diagnoses and faster care.
Advantages and disadvantages of machine learning
As with any emerging technology, there are a variety of benefits and drawbacks to the use of machine learning in any given context. While this list is by no means exhaustive, here are just a few common advantages and disadvantages of utilizing ML.
- Automation of tasks: Now more than ever, businesses are strapped for time, and machine learning can help give some of that precious time back. Instead of having teams work through low-value, repetitive tasks, ML can help take care of them, leaving teams more time to focus on high-value, creative and analytic tasks that drive ROI.
- Opportunity for continuous improvement: As our understanding of machine learning improves, so does the scope of its capabilities. In the next decade, expect to see ML being able to take on an even wider variety of tasks.
- Costly to use: The software infrastructure and manpower needed to create and train machine learning algorithms can be pricey, especially for smaller businesses. However, as AI is explored more and commercialized, these costs are projected to go down.
- Proper training takes time: In addition to high cost, it takes time to train ML programs accurately. Quality datasets must be collected and labelled, and the algorithm must be given enough time to learn and be corrected.
- Errors are unavoidable: Oftentimes with the type of immensely large data sets that machine learning algorithms are trained on, errors are bound to happen. And while dedicated data scientists can help remove and correct some of those errors, it can be difficult or almost impossible to remove all of them.
For example, the pandemic showed many businesses just how important it is to be able to quickly gather and analyze new data. And those that already had these capabilities in place were better able to adapt and pivot their business models accordingly.
Who is using ML today?
As mentioned earlier, the applications for machine learning are growing every day. From education to manufacturing and everywhere in between, ML is helping change the way businesses distribute tasks and analyze the data they collect. A few use cases are highlighted below:
- Education: helping educators identify struggling students to improve retention
- Manufacturing: real-time predictions and better condition monitoring
- Sustainable Energy: monitoring demand and optimizing supply
- Medicine: disease identification and risk analysis
- eCommerce: customer service, omni-channel marketing, and upselling
<urn:uuid:285e0c0d-3441-48a7-bdcb-374875a3963c>
CC-MAIN-2024-38
https://www.aipartnershipscorp.com/post/how-to-make-data-science-workflow-efficient
2024-09-10T04:40:46Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651196.36/warc/CC-MAIN-20240910025651-20240910055651-00798.warc.gz
en
0.942246
1,511
3.15625
3
In a world overrun by 24/7 information, it’s easy to get overwhelmed or misunderstand insights we glean. This holds true for businesses too, which must be sure they are making evidence-based decisions, developing the right strategies, and hitting goals. While data can be an invaluable resource for businesses looking to make better and smarter choices, it can be complex, too. That’s why one of the most important concepts for businesses to understand and use is data contextualization. With the help of data contextualization, you can analyze big data easily to identify patterns, trends, correlations, and gain valuable insights from the data you’ve collected. If your business is looking to better understand what your data is telling you, keep reading to learn more about data contextualization and how it can get you from data to insights in no time. What is contextualization? According to the proceedings of the 50th Hawaii International Conference on System Sciences, context is information about a certain entity that can be used to reduce the amount of reasoning required for decision-making that’s within the scope of a specific application. When applied to data analysis, contextualization helps to identify relevant information that can help determine patterns, trends, and correlations. With this data integration, you can provide context to users allowing for better interpretation of your data and enabling you to make smarter decisions. What is big data? Big data is a field that treats ways to analyze, systematically extract information from, or otherwise deal with data sets that are too large or complex to be dealt with by traditional data-processing application software. Though data with many categories or columns offer huge statistical power, it could still lead to a high false discovery rate. With that, big data challenges may include data storage, capturing data, search, sharing, querying, transfer, data source, and more. Seeing as it’s a complex concept, working with big data analytics to reduce the obstacles in obtaining data is vital. The power of data contextualization Adding context to data means including background information, patterns, trends, outliers, and more, to help a reader make sense of what the data is really saying. For example, in the retail industry, a reported drop in sales during a given month is not valuable without considering things like traffic patterns, previous benchmarks, holidays, and more. Once you have all of that information, a story begins to emerge. It could be that the drop in sales happened over a holiday weekend when most customers go out of town, and isn’t anything to worry about. Or it could be a troubling trend that requires attention. More generally, when data is properly contextualized, businesses can use it to guide customer relationships, improve marketing strategies, predict future economic trends, and manage risk. From there, you can employ data storytelling to create a story from the data analysis you’ve obtained. This allows people to understand complex information and use it to make decisions and take actions against issues. This is incredibly important for influential communications, as narratives and visuals are an effective medium that allows our brain to understand information better, remember it, and make informed decisions. 
The Bottom Line: Big Decisions Require Big Data Contextualization
When it comes to expansion and taking your business to another level, it's incredibly important to include big data analysis and contextualization in your efforts. With a better understanding of data – such as what it is and how contextualization can make it actionable – you can create impactful strategies and take the right steps to help your company grow and motivate your employees. Knowing the basics should help you make better decisions in the future, though working with big data analytics companies like Gemini Data is key for accurate, effective, and strategic data analysis. If you're interested in learning more about data contextualization and how it can help your business, reach out to us today. We help organizations construct a connected view of their business and can help solve your biggest data challenges, enabling you to understand and share data stories.
<urn:uuid:d061eec0-26db-4563-a8ea-1ed350b486bb>
CC-MAIN-2024-38
https://www.geminidata.com/the-ins-and-outs-of-data-contextualization/
2024-09-12T15:22:16Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651460.54/warc/CC-MAIN-20240912142729-20240912172729-00598.warc.gz
en
0.925751
827
2.671875
3
This summer has seen a series of IT security breaches in the government. On July 18, hacker collective Anonymous leaked a list of congressional staffers’ emails and passwords in an online message board, revealing more than 2,100 logins. Although a memo later revealed those emails were connected to past Hill employees, current users of the compromised email contact systems were urged to change their information. As recent as last week, three White House employees on the president’s social media outreach team had their personal emails hacked. The hackers sent malicious emails from these compromised accounts with fraudulent links, and more than 12 people were targeted in this attack. And to add to the headaches: There have been weekly headlines about China spying on U.S. intelligence and in June, former Booz Allen Hamilton contractor Edward Snowden revealed secrets about widespread U.S. surveillance programs. So, what gives: Are passwords with symbols, uppercase letters and seven-digit minimum standards a thing of the past? It seems so. This evolution stems from the convergence of several factors, said Charles Romine, director of the Information Technology Laboratory of the National Institute of Standards and Technology. For example, there has been a substantial increase in the number of actors seeking to compromise U.S. cybersecurity, and these adversaries also have more computing power than ever available to them. To keep up, passwords need to evolve to become more complex, Romine said. “Government has been moving toward longer and longer passwords, with more distributions of different characters,” he said. “By introducing passwords with higher levels of complexity, it is some assurance that a brute force attack will take a bit longer.” However, the human mind is only capable of so much. Most federal employees have up to 10 systems they log into, which can mean keeping track of multiple complex passwords. Or more so, if someone is using the same password for all system logins, they become even more vulnerable to attack. “There’s a human limitation you simply cannot overcome; it exceeds workers’ cognitive abilities,” Romine said. Some federal agencies have already started to think beyond traditional passwords to further secure their information. Paving the way, the Defense Department began using common access cards in 2001 for physical access into buildings. CACs are very small and very secure micro computer chips inserted onto a card. By 2006, CACs were used to access any computer in any DOD facility, which helped reduce the number of compromised accounts and computer intrusions by 46 percent. Currently, all federal employees have a personal identity verification card that grants them physical access to buildings. Neville Pattinson, who manages government programs and affairs at Gemalto, said the next move in password security will be on mobile devices. “There’s a strong case to protect all mobile devices, and the bigger question of how to do it,” he said. “One method is tucking it away on the SIM cards of phones. That will protect government employees emails on their on phones and mobile devices.” Substantial support has been thrown toward biometric password security, which has had a significant upswing in the past five years. However, according to Pattinson, biometric technology isn’t always reliable, and it’s very expensive to implement. 
Everyone can recall examples of biometrics passwords in action or sci-fi movies, and also how easily a villain can compromise them by forcing someone to scan their finger or eye to open a door or access a device. Pattinson said biometrics can be just as vulnerable as traditional passwords, and other technologies such as CAC can provide secure passwords. “The issue of password security is perfectly solvable if people are ready,” he said. “But people are more about ease of use than security of use.” NIST research has also shown high user dissatisfaction with complex passwords. Romine suggests there will soon be passwords using a combination of actors, possibly a biometric feature paired with a traditional password. At NIST, biometric research has been going on for decades. Romine said the lab has made considerable progress in helping the community advance biometric technology. As an example, he cited contactless fingerprint-recognizing technology: Instead of pressing your finger down on glass or a screen, you can simply wave your hand in front of the recognizer. Iris and facial recognition have also evolved considerably. Until that technology is available to agencies, the administration is doing what it can to keep up with the rapidly evolving threats. In February, President Barack Obama signed a cybersecurity executive order that aims to strengthen U.S. cyber defenses by increasing information sharing and developing security standards. The ball is now in Congress’ court. “Congress must act as well, by passing legislation to give our government a greater capacity to secure our networks and deter attacks,” Obama said in his State of the Union address.
<urn:uuid:0f8924ae-32b0-4042-9f58-d725b7314f47>
CC-MAIN-2024-38
https://develop.fedscoop.com/passwords-unlocked-the-future-of-security/
2024-09-16T09:14:49Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651682.69/warc/CC-MAIN-20240916080220-20240916110220-00298.warc.gz
en
0.965456
1,021
2.828125
3
It has been 21 years since IBM’s Deep Blue supercomputer checkmated chess champion Garry Kasparov, marking a historic moment in the development of artificial intelligence technologies. Since then, artificial intelligence has invaded everyday objects, such as cell phones, cars, fridges, and televisions. But the world economy seems to have little to show for the proliferation of smartness. Among advanced economies, productivity growth is slower now than at any time in the past five decades. National GDPs and standards of living, meanwhile, have been relatively stagnant for years. This situation poses something of a riddle: Previous waves of technical innovation have come with rising productivity and, in turn, leaps forward in economic growth and well-being. For example, once electricity became widespread in the United States in the 20th century, labor productivity started growing at an annual rate of 4 percent—almost four times higher than the current rate. There are two schools of thought about today’s productivity puzzle. On the one hand are techno-pessimists, such as Northwestern University professor Robert Gordon, who believe that today’s technologies are the issue. The six innovations that powered economic growth from 1870 to 1970—electricity, urban sanitation, chemicals, pharmaceuticals, the internal combustion engine, and modern communications technologies—the thinking goes, were simply more transformative than, say, Siri. On the other hand are techno-optimists who counter that today’s innovations—cloud computing, big data, and the “internet of things,” which are at the heart of the artificial intelligence revolution—are, indeed, transformative and that their benefits are already being enjoyed by firms and consumers around the world. The problem, scholars such as British economists Jonathan Haskel and Stian Westlake argue, is that national accounting statistics simply cannot capture those benefits. The concept of GDP first emerged in the 1930s to measure economies that were primarily devoted to the production of tangible goods. Intangible goods and services, by contrast, increasingly dominate today’s economies. If GDP figures properly tallied the intangible economy, the argument goes, then productivity growth would look much better. There is some truth in both theories; certainly, electricity changed the structure of work and home life in ways that Google Home has not. It is likewise true that GDP does not count free online services such as Google, Facebook, and YouTube that massively contribute to the well-being of consumers. But there might be a third, more straightforward, solution to the productivity riddle—one that even reconciles the other two. Simply put, the latest revolution is not showing up in national statistics because it has not yet really begun. […]
<urn:uuid:8200e9a9-fec3-4fff-a954-c2f3d0207831>
CC-MAIN-2024-38
https://swisscognitive.ch/2018/08/14/the-real-payoff-from-artificial-intelligence-is-still-a-decade-off/
2024-09-16T09:25:42Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651682.69/warc/CC-MAIN-20240916080220-20240916110220-00298.warc.gz
en
0.953242
545
2.875
3
Encryption features found in the Cisco IOS provide the ability to secure data communications by encrypting the payload of packets. Once encrypted, the contents of packets cannot be read by utilities such as network analyzers. While encryption provides the benefit of securing network communications, it also comes with a cost in the form of higher router CPU utilization. While a variety of data encryption techniques exist, Cisco routers provide the ability to secure data using two primary technologies – Cisco Encryption Technology (CET) and IPSec. CET is an older proprietary encryption method developed by Cisco, and has been phased out of the Cisco IOS as of version 12.1. IPSec is an IETF-standardized encryption method that was designed by a number of companies, including Cisco. Not only is IPSec an Internet standard, it also provides interoperable encryption between the equipment of different vendors. Encryption techniques are most commonly employed to securely transmit data over untrusted public networks like the Internet. For example, data encryption is used to implement what are known as Virtual Private Networks (VPNs), using the Internet rather than dedicated WAN links as a backbone to connect locations. Imagine a situation in which a company has two locations, each of which are connected to the public Internet using Cisco routers whose IOS images support IPSec. The company uses the IPSec capabilities of the routers to form a secure encrypted tunnel over the Internet. When a user from Office 1 attempts to communicate with a server in Office 2, data will be encrypted at the Office 1 router, sent over the Internet as a regular datagram (with an encrypted payload), and then decrypted at the Office 2 router. The end stations need not know about, or have any encryption capabilities. While the ability to encrypt traffic using Cisco routers is a useful feature, it can also have a considerable impact on router performance, especially CPU utilization. As a general rule, Cisco recommends that encryption not be configured on routers whose CPU utilization is already above 40%.
<urn:uuid:e6e3c4c2-cee1-4cd1-affb-394550edfb2c>
CC-MAIN-2024-38
https://www.2000trainers.com/ccda-study-guide/ios-encryption-features/
2024-09-17T14:33:27Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651800.83/warc/CC-MAIN-20240917140525-20240917170525-00198.warc.gz
en
0.941067
401
3.578125
4
What's more secure than shredding company documents? Why, reducing paper down to its very fibres, then constructing new pieces of paper from the remains, of course. What most would consider an industrial function is now available for the office. Printing giant Epson has unveiled a solution called PaperLab, essentially reducing a paper recycling plant down to a machine the size of a cubicle. The process, which takes waste paper and pumps out custom-grade sheets, skips the recycling bin and garbage collection completely, not to mention it's the most secure way possible to destroy documents. "PaperLab produces the first new sheet of paper in about three minutes of having loaded it with waste paper and pressing the Start button," Epson claims in an official statement. "The system can produce about 14 A4 sheets per minute and 6,720 sheets in an eight-hour day." What's more, the grade of paper can also be controlled. This ranges from the dimensions (A4 and A3 were mentioned, although we imagine Letter and other North American sizes will be available) to the thickness, so business-card-grade sheets are possible. Even coloured or scented paper is possible. The paper-making process usually takes a lot of water, around a cup of water for one single A4 sheet, but not PaperLab. Epson says that a dry process is in place, which means a smaller carbon footprint. As for converting waste into new paper, Ars Technica reports that an Epson patent describes a process of "crushing and defibrillating paper", then using air to de-ink the fibres. From there on, it's the binding process. In this stage, colour, flame resistance, or even fragrance can be added. Lastly, during what the company is calling a forming stage, a paper's thickness and dimensions are determined. It seems the only thing the machine does not do is print. Availability in Japan is in 2016, with other regions coming "at a later date."
<urn:uuid:20857928-5126-4ee4-a7a5-31b4fe276125>
CC-MAIN-2024-38
https://channeldailynews.com/video/epsons-new-machine-lets-offices-literally-make-their-own-paper
2024-09-20T01:10:47Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652073.91/warc/CC-MAIN-20240919230146-20240920020146-00898.warc.gz
en
0.949044
438
2.78125
3
Although decades of advances in miniaturization have yielded enormous performance gains for single processors, it now appears that this era is coming to a close. The industry has placed a big bet on future single-chip performance gains coming from increasing core counts. This will only be a winning wager if the software can be programmed to take advantage of parallel processors, and unfortunately, concurrent programming is difficult. Even experts in single-threaded programming often fail to appreciate that concurrent programs are susceptible to entirely new classes of defects, such as data races, deadlocks, and starvation. Avoiding these pitfalls requires deep reasoning about concurrency, which is difficult for humans, and is not made easier by mainstream programming languages that were not designed to handle concurrency. Consequently, concurrency errors frequently trip up even highly experienced programmers. For instance, in one case, a race condition (now fixed) in iOS 4.0-4.1 meant that any person with physical access to an iPhone 3G or later could bypass its passcode lock under certain conditions.
Concurrent programs and their problems have been with us for much longer than multi-processor machines, but concurrency defects of all kinds are much more likely to manifest on multi-processor (including multi-core) computers. On single-processor systems, threads typically run uninterrupted for reasonably large time quanta, and there is no truly simultaneous execution, which dramatically constrains the set of likely behaviors. As a result, when run on multi-processor systems, concurrent programs that run perfectly well on single-processor systems often manifest previously latent defects.
This paper describes some common concurrency pitfalls and explains how static analysis with CodeSonar® can help find such defects without executing the program. CodeSonar ships with a range of advanced checks for problems that can arise in concurrent programs. For example, it includes an innovative Data Race analysis that is paired with user interface functionality for understanding the interactions between different program threads. In addition to the included checkers, an extension API is provided, enabling users to add their own checks for software defects. Throughout this document, CodeSonar warning class names are rendered in an italic, sans-serif font: Null Pointer Dereference. If a warning class is disabled by default, the class name is marked with an asterisk: Recursive Macro*.
MULTI-THREADING COMPLICATES DEVELOPMENT
When multiple operations can execute concurrently, essentially everything becomes more complicated. One of the most significant complications is that instructions in multiple threads can be interleaved. The number of possible interleavings can be huge and increases enormously as the number of instructions grows. This phenomenon is known as the combinatorial explosion. If thread A executes M instructions and thread B executes N instructions, there are (M + N)! / (M! N!) possible interleavings of the two threads. Even the smallest threads have many possible interleavings – see Figure 1 for some examples. The number of possible interleavings for a pair of threads with twelve instructions each exceeds two million. In practice, the actual number of interleavings is reduced somewhat by constraints on execution order that are imposed by synchronization operations and the system scheduler; however, even with such constraints, real concurrent programs have astronomical numbers of legal interleavings.
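As an aside (this program is ours, not part of the original white paper), the quoted figures are easy to reproduce: the interleaving count is simply a binomial coefficient, and a few lines of C show how quickly it grows. The function and variable names are our own.

#include <stdio.h>

/* Interleavings of two threads running m and n instructions:
 * C(m + n, m) = (m + n)! / (m! n!), built up incrementally so each
 * intermediate value is itself a binomial coefficient and stays exact. */
static unsigned long long interleavings(unsigned m, unsigned n)
{
    unsigned long long result = 1;
    for (unsigned i = 1; i <= m; i++)
        result = result * (n + i) / i;
    return result;
}

int main(void)
{
    for (unsigned k = 2; k <= 12; k += 2)
        printf("%2u instructions per thread -> %llu interleavings\n",
               k, interleavings(k, k));
    /* The last line prints 2704156: the "over two million" figure
     * cited above for two twelve-instruction threads. */
    return 0;
}
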
Real-world software involves many more instructions than these examples and also tends to involve branching and other complications. Testing every possible interleaving is infeasible. To make matters worse, interleaving decisions are made by the system scheduler, with very little control given to the programmer or end-user. Not only is it infeasible to test every interleaving, it is very difficult to enforce any specific interleaving on a given execution. System scheduling is enormously complex, to the point of being effectively nondeterministic. This is especially true of deployed software that will run in environments outside the control of developers. As a result, the nondeterminism in multithreaded programs makes traditional software testing significantly less effective. In principle, a test can be written to reliably expose any defect in a single-threaded program (although generating a complete set of tests using standard methods can be incredibly expensive). For multithreaded programs, however, this is not generally the case because nondeterministic interleaving means that a single test can have many different behaviors, and it’s difficult or even impossible to force a particular behavior to occur on a given run of the program. Nondeterminism also reduces the generalizability of Proven In Use arguments. For single-threaded programs, a long track record of safe operation is a strong indication that the software will continue operating correctly, even if its environment is changed. For multithreaded programs, however, even tiny changes in the operating environment can lead to wildly different behavior, because previously unexercised interleavings occur. The nearly unbounded number of possible interleavings means that even aggressive stress testing is not necessarily an effective way to expose concurrency defects. CodeSonar, on the other hand, is effective at this difficult task because it can discover software defects without exhaustively exploring interleavings. It uses sophisticated symbolic execution techniques to reason about many possible execution paths and interleavings at once. These techniques find concurrency errors without executing the program at all. Thus, it must play an important role in multithreaded software verification. The consequences of interleaving, however, stretch far beyond testing and verification because the interleaved threads can actually affect each other’s behavior. Ideally, these effects are intentional and correct, but in practice, they sometimes involve race conditions – a class of problems that does not even exist in a single-threaded environment. The software community has devoted extensive efforts to developing techniques that eliminate these ill effects, including the more frequently used locks, semaphores, and message passing. Unfortunately, these techniques introduce problems of their own. They increase the size and complexity of the code base, making it harder for human readers to understand. They can be used incorrectly in ways that lead to processes or even entire programs failing to make progress. They can cause a needless slowdown when used unnecessarily, and fail to provide protection when accidentally omitted. CodeSonar provides important assistance in identifying and addressing these kinds of issues. A single-threaded worldview remains pervasive in software development, even in projects that have made the transition to multithreading. In some cases, developers do not think about multithreading at all. 
Other developers may be aware of multithreading and its attendant hazards but treat artifacts like semaphores, thread-safe libraries, and the volatile keyword as magical talismans for warding off concurrency bugs. Even experts usually do not have a sufficiently holistic understanding of the system to reliably spot concurrency defects. Many development practices implicitly consider threads individually and so do not account for all the issues that can arise when several threads execute simultaneously. Unit testing, for example, will not uncover all bugs caused by the interaction of multiple threads. Similarly, running multithreaded programs in a debugger is not an effective way to diagnose concurrency problems, because the debugger itself disrupts the operating environment, and only one interleaving can be explored per execution. Such practices remain appropriate and valuable for addressing "classical" software defects but must be augmented with techniques that take into account the true multithreaded nature of the system.
Development tools are also sometimes fundamentally based on a single-threaded worldview. For example, compiler optimizations are often based on the assumption that if the current thread does not modify a value, the value remains unmodified. But if more than one thread is running at once, this is not necessarily true. The definitions of C and C++ did not include any reference to multithreading until the 2011 revisions of their respective standards. Compiler and runtime support for these standards is still incomplete, which means that multithreaded programs are exposed to implementation-specific behavior to a much greater extent than single-threaded programs. As Boehm puts it, "Essentially any application must rely on implementation-defined behavior for its correctness." Eliminating all potential concurrency defects like data races and deadlocks is a good way to avoid bad implementation-specific behaviors. In the remainder of this paper, we describe software defect classes that are specific to multithreaded programs and demonstrate how CodeSonar can be used to find these defects, reducing the probability of their occurrence.
A data race arises when multiple threads of execution access a shared piece of data, and at least one of them changes the value of that data, without an explicit synchronization operation to separate the accesses. Depending on the interleaving of the two threads, the system can be left in an inconsistent state. Data races are especially insidious because they can lurk undetected indefinitely and only show up in rare circumstances with mysterious symptoms that are difficult to diagnose and reproduce. As a result, data races are a common source of errors in (well-tested) deployed software. At best, the presence of data races means increased development times; at worst, the consequences can be devastating. A data race in a computerized Energy Management System dramatically worsened the 2003 Northeast blackout by causing delayed and misleading information to be communicated to the operators. In an article titled Tracking the blackout bug, Kevin Poulsen notes that "[t]he bug had a window of opportunity measured in milliseconds." The chances of a problem like this manifesting during testing are infinitesimal.
A simple data race example is shown in Figure 2. A manufacturing assembly line has entry and exit sensors and maintains a running count of the items currently on the line.
The entry sensor controller increments the count every time an item enters the line, and the exit sensor controller decrements it every time an item reaches the end of the line. If an item enters the line at the same time that another item exits, the count should be incremented and then decremented (or vice-versa) for a net change of zero. However, computers implement increment and decrement as a sequence of simpler operations that first load the value from memory, then modify it locally, and finally store it back in memory. If the updating transactions are processed in a multithreaded system without sufficient safeguards, a data race can arise because the controllers read and write a shared piece of data: the count. The interleaving in Figure 2 results in an incorrect count of 69. There are also interleavings that result in an incorrect count of 71, as well as a number that correctly results in a count of 70. Data races like this are difficult to eliminate for several reasons: Rare occurrence means little chance of even noticing that there is a problem. If the problem manifests infrequently, it may never show up during testing. As noted above, the number of possible interleavings of two processes is enormous and interleaving decisions are not under user control. Testing every interleaving is simply impossible, and even if testers identify a small number of interleavings that merit inspection they will generally not have the means to enforce test executions with those interleavings. Data race diagnosis is difficult. Firstly, the symptoms can be perplexing. In the Figure 2 example, the running count will (probably) usually be correct, but sometimes too high and other times too low. Secondly, programmers unaccustomed to considering the particular pitfalls of multithreaded programming may spend a lot of time puzzling over the code before the possibility of a data race occurs to them. The effects of data races often seem impossible when the symptomatic code is considered in isolation; this sometimes leads developers to discard data-race-related bug reports as unreproducible. CodeSonar’s static analysis is especially helpful in this regard. It identifies data races by examining patterns of access to shared memory locations – that is, it focuses on the causes, not the symptoms. When a data race is identified, CodeSonar issues a Data Race warning that includes supporting information to aid the user in evaluation and debugging. The need for a developer to work backward from a particular symptom is eliminated, which reduces the overall debugging burden. We note here that CodeSonar also provides a File System Race Condition check. This is a different form of data race vulnerability in which a program calls a function that checks a named file and then later calls a function that uses the same named file. The source code assumes the file is the same at both times, when in fact another process may have changed the file between the ‘check’ and ‘use’. For example, an attacker could replace the original file with a link to a file containing confidential data. Eliminating data races can introduce new problems. Data races are typically avoided by using locks or other synchronization techniques to protect shared resources. However these can introduce performance bottlenecks that might prevent the program from taking advantage of the full potential of multiple cores, so programmers must exercise care in using them. 
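To make the interleaving hazard concrete, the sketch below gives a minimal C rendering of the assembly-line counter from Figure 2, first in a racy form and then in a mutex-protected form. It is illustrative only: the function names are invented here, and real sensor controllers would of course do more than adjust a counter.

```c
#include <pthread.h>
#include <stdio.h>

static int items_on_line;                              /* shared running count */
static pthread_mutex_t count_lock = PTHREAD_MUTEX_INITIALIZER;

/* Racy handlers: the read-modify-write is three separate steps, so two
 * threads can interleave between them and lose an update. */
static void *entry_sensor_racy(void *arg) { (void)arg; items_on_line++; return NULL; }
static void *exit_sensor_racy(void *arg)  { (void)arg; items_on_line--; return NULL; }

/* Protected handlers: the mutex makes the whole update atomic with respect
 * to the other thread, so no update can be lost. */
static void *entry_sensor_safe(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&count_lock);
    items_on_line++;
    pthread_mutex_unlock(&count_lock);
    return NULL;
}

static void *exit_sensor_safe(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&count_lock);
    items_on_line--;
    pthread_mutex_unlock(&count_lock);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    items_on_line = 70;
    pthread_create(&t1, NULL, entry_sensor_racy, NULL);
    pthread_create(&t2, NULL, exit_sensor_racy, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Almost always prints 70, but 69 or 71 are possible interleavings:
     * the defect is real even though most runs will not expose it. */
    printf("count after racy update:   %d\n", items_on_line);

    items_on_line = 70;
    pthread_create(&t1, NULL, entry_sensor_safe, NULL);
    pthread_create(&t2, NULL, exit_sensor_safe, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("count after locked update: %d\n", items_on_line);   /* always 70 */
    return 0;
}
```

Even this small fix illustrates the cost side of the trade-off: both handlers now serialize on a single lock, and the correctness of the count depends on every future access remembering to take that lock.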
In the worst case, they can lead to a different set of problems, namely deadlock, and starvation. In a deadlock, two or more threads prevent each other from making progress by each holding a lock needed by another. Figure 3 shows how a deadlock can arise with two locks used to protect two shared variables. In this example, there are multiple assembly lines that share a count of the total number of items currently under assembly and a second bad_items value recording how many finished items have failed quality control. One thread acquires the lock on the count, another acquires the lock on bad_items. Neither thread can now obtain the second lock it needs, so neither can carry out its operations and so neither will get to the point where it will release its lock. Neither update can be completed, and both threads are completely stuck. CodeSonar can help identify software at risk of deadlock by issuing Conflicting Lock Order warnings if the same locks can be acquired in different orders by different threads: the example in Figure 3 has this property. Eliminating all such cases is sufficient to ensure that the system cannot become deadlocked. The Nested Locks check is even more aggressive: a warning is triggered whenever a thread tries to obtain two or more locks. If each thread can only hold one lock at a time then deadlocks cannot arise. However, completely eliminating all lock nesting is an ideal that many real projects cannot attain: some users will disable this check and enable only Conflicting Lock Order. Even though either of these restrictions on locks is sufficient to eliminate deadlock, process starvation can still occur. A thread starves if it is waiting for a lock that is held by another thread for a very long time. The most common instances of this problem involve the lock-holding thread waiting for an event like a large disk read or the arrival of data from the network. Suppose our example manufacturing automation system includes a regular audit thread that examines all entry and exit records to ensure that the running count matches total items entering less total items exiting. The audit thread needs to hold locks on the count and on all sensors, so all updates must wait for the audit to finish. If the audit runs for a long time, updates can be significantly delayed. If it runs for too long, the next audit may manage to acquire all the locks and start running before the outstanding thread can make any progress. In the worst case, some or all of the updates may never have the opportunity to run. Static analysis can help find starvation problems by examining the set of all procedures called by a thread that holds some lock. CodeSonar users can add custom checks to this form. For example, if there is a function f( ) that can block or is known to have a long-running time, engineers can add a custom check that triggers a warning whenever f( ) is called by a thread that holds one or more locks. CORRECT USE OF SYNCHRONIZATION TECHNIQUES It can be tricky to write code that uses synchronization techniques effectively, and coding standards often impose restrictions on which techniques can be used and under which conditions. For example, the JPL Institutional Coding Standard for the C Programming Language does not permit task delay functions to be used for task synchronization. CodeSonar includes a suite of checks for the JPL coding standard, including a Task Delay Function check that issues warnings at any use of a function that has been identified as having this purpose. 
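The kind of code such a rule targets looks roughly like the following sketch. It is a hypothetical fragment, not taken from the JPL standard itself: taskDelay stands in for whatever delay primitive the target RTOS provides, and the device-driver helpers are invented for illustration.

```c
/* Hypothetical device-driver helpers; declarations only, for illustration. */
void start_conversion(void);
int  read_result(void);
void publish(int value);
void taskDelay(int ticks);                /* stands in for the RTOS delay call      */
void wait_for_completion_signal(void);    /* e.g. take a semaphore given by the ISR */

/* Discouraged: "synchronizing" with the peripheral by sleeping and hoping
 * the conversion has finished by the time the delay expires. */
void read_sensor_with_delay(void)
{
    start_conversion();
    taskDelay(10);                        /* a guess at the timing - fragile */
    publish(read_result());               /* may publish a stale value       */
}

/* Preferred shape: block on an explicit synchronization object signalled by
 * the completion interrupt, so timing changes cannot silently break the code. */
void read_sensor_with_semaphore(void)
{
    start_conversion();
    wait_for_completion_signal();
    publish(read_result());
}
```

Static checks of this kind flag every such call and leave it to a reviewer to decide whether the use is justified.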
A configuration parameter allows users to extend the list of known task delay functions as required. More generally, users can use the BADFUNC_* family of configuration parameters to extend the CodeSonar analysis by specifying forbidden synchronization functions whose use should trigger a warning. (The BADFUNC_* parameters are also useful for identifying functions – especially those in third-party code – that is or may be thread-unsafe. CodeSonar’s built-in Use of ttyname* check has this motivation.) CodeSonar is also ideally suited to identifying potentially risky patterns of synchronization function usage. - Unknown Lock: a lock or unlock operation refers to a lock that cannot be identified. - Missing Lock, Missing Unlock, Lock/Unlock Mismatch: an unlock (lock) operation in the body of some function does not have a corresponding lock (unlock) operation in the same function. This does not necessarily mean that the matching operation is not carried out, but keeping the lock and unlock operations in the same function ensures that the program is more human-readable and thus easier to maintain. - Double Lock, Double Unlock: the same resource is locked (unlocked) multiple times, which can have adverse effects on the resource or the locking infrastructure. Even if these effects are not experienced for a particular implementation, the doubled operation may indicate the existence of a previously unconsidered execution path. - Try-lock that will never succeed: indicates a redundant and possibly misleading try-lock operation. Users can add their own checks for risky usage patterns with the CodeSonar Extension API. Such checks could be based on local coding rules, or on the particular synchronization techniques used in a given project. Multithreading adds entirely new classes of potential bugs to those that must be considered by developers. At the same time, the non-determinism and sheer number of possibilities introduced by thread interleaving make it significantly more difficult to find bugs in multithreaded systems by testing and other traditional methods. The static analysis provided by GrammaTech CodeSonar supports development teams in addressing both of these issues. It provides checking and reporting for a range of concurrency-related problems without the limitations experienced by execution-based techniques or the oversimplification imposed by a single-threaded point of view. To learn more about CodeSonar, and for a free trial, contact GrammaTech. - JPL Institutional Coding Standard for the C Programming Language, 2009, Jet Propulsion Laboratory, California Institute of Technology JPL DOCID D-60411. - Boehm, H.-J., Threads Cannot be Implemented as a Library. In ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI). 2005. Chicago, IL: ACM. pp. 261-268. - Kevin Poulsen, Tracking the blackout bug. in SecurityFocus. April 7, 2004. - U.S.-Canada Power System Outage Task Force, Final Report on the August 14, 2003 Blackout in the United States and Canada: Causes and Recommendations. 2004.
<urn:uuid:95b466b6-628f-4c6c-9e29-b150dde3e0fe>
CC-MAIN-2024-38
https://codesecure.com/our-white-papers/finding-concurrency-errors-with-grammatech-static-analysis/
2024-09-20T01:12:54Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652073.91/warc/CC-MAIN-20240919230146-20240920020146-00898.warc.gz
en
0.931759
4,025
2.59375
3
The fire pump is at the heart of larger water-based fire suppression systems. In extensive fire sprinkler systems, the regular water supply is likely to be inadequate to maintain enough pressure to keep the water flowing in order to fight a fire. It is the fire pump which provides enough water to keep the pressure up and keep the water flowing at the necessary rate to be effective. The fire pump connects directly to the underground public water supply or to a private storage tank and can then pump this water directly to where it is needed in an emergency, at the rate that is required. Fire pumps are critical in large buildings where the main form of fire suppression is water based or uses an interconnected system, such as in high rises and warehouses. Fire pumps need to be routinely maintained by a preferred fire protection company. Depending on the type of fire pump, inspections are required on a weekly, monthly, or annual basis. All fire pumps need the following inspections at a minimum. A deficiency is identified during regular inspections when the devices and components do not meet acceptable standards. Here are a few commonly found deficiencies for fire pumps: If it has to do with protecting your workplace against fires and other catastrophic events, we probably sell it. More importantly, we understand how each product plays a role in your overall safety plan.
<urn:uuid:e9dbcb4a-1f0b-422d-8937-c229ae32d3c6>
CC-MAIN-2024-38
https://www.certasitepro.com/products-and-services/fire-pumps
2024-09-08T01:03:43Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650926.21/warc/CC-MAIN-20240907225010-20240908015010-00198.warc.gz
en
0.957205
263
2.859375
3
In a world where passwords can be cracked in seconds and hackers grow more cunning by the minute, a new line of defense is emerging—one that is a part of who we are. Enter biometric authentication — a cutting-edge technology that uses unique physical or behavioral traits to verify identity. From unlocking smartphones with a glance to securing sensitive data with a fingerprint, biometric authentication is revolutionizing how we approach cybersecurity. With the global biometric system market expected to hit nearly $68.6 billion by 2025, it’s clear that this technology is gaining momentum. But what exactly makes biometrics so powerful, and how secure is it in an era where privacy concerns are on the rise? Jump to where you need: - What is Biometric Authentication? - Main Types of Biometric Identification - The Power of Biometrics: Strengths and Advantages - The Security Challenges of Biometric Authentication - Passwords vs. Biometrics: Which is Stronger? What is Biometric Authentication? Biometric authentication refers to the process of identifying or verifying individuals based on their distinct biological characteristics. These can include physical traits such as fingerprints, facial recognition, and iris scans, or behavioral traits like voice patterns and typing rhythms. Unlike traditional methods that rely on knowledge-based security (e.g., passwords) or possession-based factors (e.g., security tokens), biometrics leverage unique human attributes that are inherently difficult to steal or replicate. This not only enhances security but also offers a seamless and convenient experience for users, removing the need to remember complex passwords or carry additional security devices. What sets biometric authentication apart from other security measures is its reliance on traits that are deeply personal and virtually impossible to forget. Our fingerprints, facial features, and voices are with us at all times, making them more reliable identifiers than passwords or keycards. The widespread adoption of biometrics in consumer devices like smartphones and laptops has brought this technology to the forefront of mainstream security practices. Main Types of Biometric Identification - Facial Recognition: Identifies individuals by analyzing their unique facial features. This technology is widely used in smartphones, credit card payments, and law enforcement. - Fingerprint Recognition: Uses a person’s distinct fingerprint to verify identity. Common in mobile devices, automobiles, and building security, this is the most widely adopted biometric authentication method. - Eye Recognition: Scans the unique patterns of the iris or retina for identification. Though highly accurate, it’s less common due to implementation challenges, often used in high-security environments like nuclear facilities. - Voice Recognition: Authenticates individuals by analyzing voice characteristics such as tone, pitch, and frequency. Frequently used in call centers for customer verification, especially in industries like banking. - Gait Recognition: Identifies individuals by their unique walking pattern. While not commonly used today, it’s expected to gain popularity as more advanced authentication methods emerge. - Vein Recognition: Utilizes infrared light to map the pattern of blood vessels in a person’s hand or fingers. Known for its extreme accuracy, vein recognition surpasses even retina/iris recognition in precision. 
The Power of Biometrics: Strengths and Advantages Biometric authentication offers a range of powerful advantages that make it a preferred solution for both security and convenience. By leveraging unique human traits, biometrics not only enhances protection against threats but also improves the overall user experience. Here are some key strengths of biometric technology: - Enhanced Security: Biometric authentication is based on unique physical traits, making it much harder to replicate or steal compared to traditional passwords or PINs. This provides an extra layer of security that protects sensitive information from unauthorized access. - Convenience and Speed: Biometrics eliminates the need to remember complex passwords or carry security tokens. Users can authenticate themselves instantly with a fingerprint, face scan, or voice command, making the process faster and more convenient. - Reduced Fraud: Since biometric traits are unique to each individual, they greatly reduce the risk of identity theft or fraudulent access. Even if a hacker obtains your password, they can’t replicate your fingerprint or facial structure. - Seamless User Experience: Many biometric systems operate in the background, allowing users to authenticate without disrupting their workflow. This frictionless experience leads to higher user satisfaction, especially in consumer devices like smartphones and laptops. - Scalability Across Devices: Biometric authentication can be easily implemented across a wide range of devices and industries—from smartphones and laptops to secure access in vehicles and buildings—making it a versatile and scalable solution. - Continuous Authentication: Some biometric systems offer continuous authentication, monitoring user behavior (like how they type or move) to ensure ongoing verification. This makes it harder for an unauthorized person to access systems even if they manage to bypass the initial security layers. The Security Challenges of Biometric Authentication While biometric authentication offers significant security advantages, it is not without its challenges. One of the primary concerns is the issue of data privacy. Biometric data, once compromised, cannot be easily changed like a password. If a person’s fingerprint or facial recognition data is stolen, it poses a lifelong security risk since these identifiers are permanent. Storing biometric information also introduces vulnerabilities, as centralized databases can be targeted by hackers. Even with encryption, a data breach involving sensitive biometric information could have far-reaching consequences, particularly in industries like banking or healthcare where data privacy is paramount. Another challenge is the potential for spoofing and hacking attempts. As advanced as biometric systems are, they are not completely immune to attacks. For instance, high-resolution photos or videos can sometimes be used to fool facial recognition systems, while fingerprints can be replicated using molds or prints left on surfaces. Additionally, legal and ethical concerns arise around the use of biometrics, such as the possibility of unauthorized surveillance and collection of personal data without consent. These concerns are also becoming more prominent as biometric technologies advance, raising questions about consent, data ownership, and the potential for intrusive surveillance. 
Each of these issues highlights the importance of implementing strong security measures around biometric systems to ensure that they provide not only convenience but also the highest level of protection. Passwords vs. Biometrics: Which is Stronger? When comparing passwords to biometrics, it becomes clear that biometrics offer a stronger and more secure method of authentication. Unlike passwords, which can be easily compromised through various attack methods, biometric authentication requires some form of the user's physical presence. This adds a critical layer of security, as biometrics are inherently tied to the individual's unique traits, such as their fingerprint or facial features, making them much harder to replicate or steal. Even if hackers gain access to other personal information, they cannot bypass biometric systems without being physically present or registered to the specific device. Passwords, while still widely used, are vulnerable due to the simplicity of common hacking techniques. Phishing attacks, for example, trick users into giving up their passwords by impersonating trusted entities, and such attacks are much harder to defend against when a password is all that protects an account. In contrast, biometrics significantly reduce the risk of such attacks because the authentication factor cannot simply be handed over or duplicated in the same way. The uniqueness of biometric data, such as fingerprints or facial features, makes spoofing far less common and much more challenging. While passwords are still a critical element of many security strategies, biometrics provide a far stronger defense, especially when used as part of multi-factor authentication, combining both convenience and enhanced protection. In the battle between passwords and biometrics, the latter clearly stands as the more secure and reliable option, offering unparalleled protection and better peace of mind in an increasingly digital world.
<urn:uuid:42a8f823-6f08-4978-8292-a704cea894ca>
CC-MAIN-2024-38
https://agileblue.com/unlocking-the-future-the-power-and-security-of-biometric-authentication/
2024-09-09T06:13:28Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651072.23/warc/CC-MAIN-20240909040201-20240909070201-00098.warc.gz
en
0.916394
1,596
3.203125
3
In our previous introductions to our product, we emphasized on how RidgeBot™ not only finds vulnerabilities of the system, but also uses proof of concept to test the vulnerabilities in order to find possible exploits. A natural question that arises would be the difference between vulnerabilities and exploits and why such difference matters in modern cybersecurity and sets RidgeBot™ apart from our competitions. A vulnerability is a weakness in a system, or some software in a system, that the attacker could potentially abuse to bypass the system’s security infrastructures. Often when we use a vulnerability scanner, we will see a myriad of vulnerabilities in the report. For example, in memory unsafe languages like C, a buffer that fails to check its boundary is a vulnerability, as attackers could potentially overwrite the memory spaces above said buffer. While vulnerability will pose a significant problem, as its definition suggests, it is merely a potential attack target, meaning that out of these vulnerabilities, only 3% will result in an exploit. Exploit by definition is the act of trying to turn a vulnerability (a weakness) into an actual way to breach a system. Unlike vulnerabilities, which pose a potential for adversaries to attack the system, exploits will cause real damage to the system, stealing valuable information and resulting in massive financial loss. In the above example, the adversary’s actions to actually use the vulnerability to overwrite memory fragments constitutes a buffer overflow exploit. With a clear understanding of the difference between the vulnerability and exploit, it is easier to set RidgeBot™ apart from the scanners. RidgeBot™ is an ethical hacking tool which performs real exploits, advanced iterative attacks and post exploitation activities. While a scanner is able to find most vulnerabilities in a system, it does little validation which results in a high false positive rate and an unrealistic risk picture. Many of said vulnerabilities are low risk, i.e. they are infeasible or even impossible for an attacker to exploit. But a scanner does not distinguish exploitable vs. unexploitable vulnerabilities: it will always recommend a thorough patching of all vulnerabilities it finds, which in a realistic setting would be highly time-consuming and inefficient. Therefore, security testing shall not just stop at “vulnerability scanning”, the “validation (a.k.a Exploit validation) is imperative under today’s cyber environment. In addition to vulnerability scanning, RidgeBot™ will run real PoC exploits, and additional iterative exploits with new information in order to verify the risk of a vulnerability. In the report, we call the exploited vulnerabilities “business risks” and prioritize their patching; we categorize said vulnerability as High/Medium/Low, and will patch them in this order. As a result, RidgeBot™ has zero false positive rate and saves valuable time for our users while guaranteeing the same level of security as an average vulnerability scanner.
<urn:uuid:78b98979-d357-4750-85ac-7036d4542ad7>
CC-MAIN-2024-38
https://ridgesecurity.ai/blog/scanner-vs-ridgebot-vulnerability-vs-exploit/
2024-09-09T05:20:57Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651072.23/warc/CC-MAIN-20240909040201-20240909070201-00098.warc.gz
en
0.929215
583
2.640625
3
High-end computing back in the day required large computers that used to take up a lot of space, adding to a lot of investment. Thin clients were introduced to move computing on a lightweight device from a full-sized PC. Thin clients, since then, have changed a lot. In today’s environment, thin clients are more straightforward, compact, windows-less, low-end systems fueled by Thin Client Operating Systems. Since these end-point devices do not have many spaces, they can’t hold sensitive data and applications. These devices are designed to perform computing over the cloud. Hence, all they need is an internet connection and a browser. Data from this computing is stored in data centers located in the cloud or on-premises. Its’ sole purpose is to display the result on a screen as close as possible to a standard PC. Thin clients first appeared in 1984 and were used to execute display server software. What is VDI Thin Client? A VDI (Virtual Desktop Infrastructure) thin client is a lightweight endpoint device that is specifically designed to connect to a virtual desktop environment. Unlike traditional desktop computers, a thin client relies on a server-hosted virtual desktop infrastructure to run applications and access files. It does not have a local operating system or storage capacity, as all the computing resources and data reside on the server. Thin clients are cost-effective and easy to manage since they do not require individual software installations or updates. They provide a secure and centralized approach to desktop computing, enabling users to access their virtual desktops from any location while minimizing the risk of data loss or security breaches. Thin clients are ideal for organizations seeking a streamlined and efficient solution for deploying and managing virtual desktop environments. History of Thin Clients Although the term thin client was formulated in 1995, this type of computing has its roots in multi-user systems. Multi-user systems are devices that multiple users can access. These systems have then evolved from executing command line interfaces to fully performing graphical computation (GUI). In the current environment, thin clients are used to perform minimal display functions and to offer a platform that is combined with the latest cloud technology, allowing its users to perform high-end computing. These systems are highly inexpensive, relying primarily on servers to perform evaluations. Citrix, Unix, and Raspberry PI are some of the most known developers creating the most efficient thin client systems. Businesses that support remote working environments can get the best out of slim client computing technology. This is where distributors like Raspberry Pi came into the picture by offering affordable and compatible devices. With the help of ACE VDI, enterprises can implement Raspberry Pi or other such systems to create a remote-friendly working environment. How Thin Client Works Thin Clients can be implemented via three different methods: Shared Terminals, Desktop virtualization, and browser-based approach. Shared terminals on thin clients consist of a common server shared across every user performing computing using a thin client. This limits the number of complex computations that can be achieved in a normal PC. Desktop Virtualization allows IT experts to host an actual desktop-like presence on a virtual machine. The resources in a VMs server are not shared resources yet are still located in a single remote server. 
The browser-based approach is a little different from the techniques mentioned above. In this framework, all the computation is done in a browser instead of on a server, which allows users to retrieve data and software on thin clients. Benefits of Thin Clients There are numerous thin client benefits that can be leveraged to create a remote yet secure environment without taking computing power away from users. - Less Expensive: As these clients do not include many of the costly components of a full PC, their cost drops dramatically. Without a hard drive and a GPU, the cost can be brought down significantly. - Easy to Scale: Thin clients can be easily scaled up or down as requirements change. Introducing an external hard disk, security tools, a GPU, or any other computational resource is easy and can be performed over the cloud as well. - Secure: Since there is no data storage or memory within thin clients, these systems are less vulnerable to external attacks and threats. - Less Power Consumption: Most thin clients are fanless. Some can even be held in the palm of a hand. With less memory and a smaller mechanism, lean clients generate less waste in the form of heat. - Easily Managed: Most software installation, updates, and file or data changes are carried out on a server. Moreover, these files are stored on a centralized server, making them easier to retrieve and manage overall. Thin Clients Vs Thick Clients The basic difference is that a thin client is a small device with no computing power of its own that relies on the host's resources, whereas a thick client is a full-fledged PC that does not depend on any other device for performance. Thin clients, being smaller, are cheaper, easy to handle, and mobile. Hence, these systems are a perfect fit for a work-from-anywhere environment. On the other hand, thick clients are bulky, contain all the CPU power, and hence are costlier and not so remote-friendly. There are many differences based on their size, functionality, and use cases:
Factors | Thin Clients | Thick Clients
Data Storage | On server, centralized data management | Local storage, can be scattered among multiple desktops
Network Latency | Requires high-speed network coverage | Can work in low-network areas
Deployment | Easier deployment, as all the clients are connected centrally | Expensive deployment, as multiple systems need to be calibrated separately
Security | Highly secure | Vulnerable to data leaks and external threats
Thin Clients Vs Zero Clients Thin clients are lightweight devices that rely on a server-hosted infrastructure to run applications and access files. They have some processing power and storage capacity but still offload most of the computing resources to the server. Thin clients require a thin client operating system and often support multiple protocols for connecting to virtual desktops. On the other hand, zero clients are even more simplified devices with minimal processing power and no local storage. They depend entirely on the server for computing resources and require no local operating system. Zero clients are purpose-built for specific virtual desktop protocols and are typically easier to manage and deploy compared to thin clients. While thin clients offer more flexibility and support for various protocols, zero clients provide a streamlined and cost-effective solution with lower maintenance requirements.
The choice between thin and zero clients depends on specific requirements, infrastructure, and management preferences in a virtual desktop environment. Thin Client Use Cases Due to the Covid-19 Pandemic, modern workspace started changing a lot. Industries began to embrace digital transformation at large and instilled cloud services within their operations. Security is on top of the IT admin’s mind in a remote working environment. Thin clients paved the path for teams and industries to be productive while maintaining security, regardless of location. Cloud-powered virtualization techniques provided a centralized infrastructure to IT admins, ensuring data security. And Thin clients are the end-point devices from which the users can have a similar experience working on an actual feature-rich, high-specification computer system. In addition to this computing, and with the help of the latest technology from NVIDIA and AMD, users can now have the same experience in streaming, watching multimedia, conducting video conferencing, and much more without hindering the notion of workplace experience. Business Process Outsourcing Call centers and BPOs are the backbone of any service that is offered around the world. The total revenue of the BPO universe across the world was recorded at $29bn by the end of 2021. Moreover, BPO is the biggest Client of Thin client systems. Since the Thin Client inception happened with multi-user devices, BPO has been the most optimum zone of operation for this type of system as they fulfill all the prerequisites of a BPO: - Security from external attacks - Centralized Data management - Multi-User Access Thin clients are stable and secure platforms, providing ample reasons for BPO clients to switch to a more affordable solution. Thin clients are designed so that more than one of their components can be broken down into individual parts. This saves cost and gives the power to scale up the system by introducing new components when needed. Hence, not only in the BPO sector, startups and manufacturing organizations can leverage slim client computation using thin clients. The Developing and designing industry can quickly implement thin clients that ensure security and allow their workers to perform their duty anywhere they want. The manufacturing industry can implement small-scale systems that resist debris and dust. These systems can be kept in ventless spaces as they are entirely fanless. Moreover, these systems can be configured for extreme weather conditions withstanding the shock, vibrations, extreme heat, and cold. Cloud services adoption in the healthcare industry saw a spike during the pandemic. While most industries shifted to remote working culture, the healthcare sector still needed to work to its full potential from its actual environment. Moreover, the data collected by the healthcare sector has always been quite critical. All these challenges can be mitigated with the help of Thin Client. Data Privacy can be protected with the help of thin clients since these devices are memoryless. All the data is stored on a centralized server and is encrypted to ensure security from any external threats. Thin clients also enable access to real-time analytics data from cloud servers. Thin clients in healthcare can be used in patients’ rooms, offices, self-check-in kiosks, and nurses’ stations. Recommended Read: VDI Use Cases: Top 6 Real World Benefits and Applications Thin Client’s use cases suggest that it will be one of the essential techs in driving digital transformation. 
These devices drive significant improvements, better security, and innovation in multiple industries. In this day and age, many startups are booming. Thin clients can support this coming wave of change. These systems cost considerably less than traditional PCs, take a lot less space, and come with less clutter. Hence, not only in BPO and healthcare, slim clients are also impactful in manufacturing, design, education, financial services, and other sectors. Integrated with the right managed DaaS benefits, thin clients can prove to be a full-fledged solution. VDI integrated into a thin client system can offer users a similar PC experience with the ability to perform powerful computing, including heavy graphical computing, over the web. Ace-powered VDI is a managed DaaS service with which industries can boost their productivity, enhance their environment, and promote a healthy working culture when it is integrated with thin clients.
<urn:uuid:ec5ce934-3a7c-4439-8684-e0521ae03745>
CC-MAIN-2024-38
https://www.acecloudhosting.com/blog/what-is-vdi-thin-client/
2024-09-12T18:33:09Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651491.39/warc/CC-MAIN-20240912174615-20240912204615-00698.warc.gz
en
0.932289
2,227
2.65625
3
The emerging concept of green IoT serves multiple purposes with the goal of improving the quality of life through technology and eco-friendly practices. IoT stands for the "Internet of Things" which comprise real-time internet-connected devices designed to collect specific data. Meanwhile, green IoT involves the use of sensors that monitor the environment to ensure the safety of natural resources such as clean water, air, soil and food. Green IoT Explained Green IoT encompasses energy-efficient techniques that IoT uses to lessen the greenhouse effects of emerging technologies. Everyone is fairly aware that environmental elements such as air and water are vulnerable to greenhouse gases. An often understated drawback of industrialization is the toxic chemical pollution it spreads across the planet. Even computers contribute to pollution in the form of end-of-life e-waste if they end up in a landfill. Federal state and local governments are investing in green IoT as a step toward cleaning up the environment. Technologies Used for Green IoT Growing environmental concerns are driving the development of apps designed to monitor environmental conditions. IoT devices that connect with smartphone apps are used to measure temperature, humidity, and wind speed along with the quality of air, water and soil. Many farmers have adopted IoT devices to monitor agricultural conditions. When combined with machine learning software, IoT data can lead to calculated weather and crop growth forecasts. Here are some key emerging green technologies associated with IoT: - Green RFID - Decreasing the size of RFID tags is a step toward sustainability since it helps cut waste. Designers of green RFID technology have made advancements in energy-efficient algorithms for tag estimation. - Green WSN - As an integral component of green IoT, a green wireless sensor network (WSN) is a series of multiple computing devices that share limited power and storage capacity. - Green Cloud Computing - Migrating business operations to the cloud opens up several possibilities for reducing your carbon footprint. You don't have to invest in as much expensive hardware, which likely contains toxic materials. An entire book can be written about how the cloud itself makes business activity more efficient and eco-friendly. The only major environmental concern about the cloud is many large data centers around the world are powered by fossil fuel energy. When cloud usage becomes congested, it can strain the traditional electric grid system. On the other hand, Apple, Microsoft, Amazon, Google and other large tech companies have invested heavily in large facilities powered by renewable energy. In 2022 Amazon became the largest buyer of renewable energy on the planet. The company has been on a green buying spree as part of its goal to be "net-zero carbon" by 2040. Environmental Advantages of Cloud Computing The cloud frees up your physical space since you can store digital documents in a secure data center. Cloud computing allows for remote work from home, reducing the need for morning and afternoon commutes across town. Another green advantage of the cloud is it can cut your energy bills by using less hardware, electricity and office space. Allowing employees to bring their own devices helps reduce the need to invest in computers that generally have a lifespan of 5-10 years before becoming e-waste or getting recycled. On one hand, hanging on to legacy computing systems gives you the most for your investment. 
On the other hand, old computing systems are the most vulnerable to cyber-attacks. The cloud offers a balanced solution in the sense it makes computing efficient as long as individual IoT devices on your network have their own or nearby computing capacity. Ultimately, the cloud cuts environmental waste and risk. Moving to the cloud means there's less need for paper, which helps preserve forests. Developing a cloud-based digital infrastructure reduces the need to store a growing stack of hard-copy documents. Since the cloud allows for low-cost communication, it also reduces the need to print marketing material for mass distribution. Social networks such as Facebook, Twitter and LinkedIn are powerful cloud services that help businesses cut the costs of marketing by connecting directly with patrons. It saves businesses from doing costly large print runs to promote themselves. The ink used in print is typically a toxic material, although many print shops are seeking greener solutions. How to Optimize Green IoT Eco-friendly solutions are possible for an organization the more it deploys IoT devices that monitor environmental activity. The three main qualities for electronic devices to be considered sustainable are how lean, efficient and durable they are. A machine learning program can scan volumes of data from sustainability studies and experiments to provide concise reports on which solutions deliver the most eco-friendly results. Amazon Web Services (AWS) has been a notable pioneer in making IoT more efficient and secure. Its platform called AWS IoT Core allows multiple IoT devices to securely interact with each other in the cloud. With the help of device communication protocols, the system allows users to publish data in the cloud and offers subscriptions for accessing the data. According to a study by 451 Research, AWS reduces its carbon footprint significantly in its data centers thanks to more efficient use of servers. Another reason why AWS IoT Core is considered a sustainable cloud solution is it uses less code on IoT devices, which requires less maintenance. The system is secure due to its zero trust policy and its robust Transport Layer Security (TLS) mutual authentication protocols that all AWS traffic is routed through. For companies seeking to keep data as localized as possible, Amazon offers AWS IoT Greengrass, a service that expands the capacities of AWS IoT Core. This more advanced monitoring system allows for greater flexibility with offline capabilities and managing data remotely. The service utilizes components that make coding more efficient, such as runtime installers and libraries. Essentially, electronic devices no longer need to be designed with code for one purpose. Challenges Facing Green IoT Today there are billions of IoT devices connected to the internet. During this decade the number is expected to surpass 40 billion. Many of these devices are designed to be compatible with 5G wireless networks, although there are still countless sensors operating on 3G and 4G. The advent of 5G makes it possible for companies to integrate machine learning technology with IoT. But cluttering up the cloud with too much big data can be a digital pollution problem that creates latency. So, the alternative when it comes to using IoT devices is edge computing, in which sensors or nearby nodes process the data instead of sending millions of packets over long distances to the cloud. The combination of 5G and green IoT allows for vast data monitoring with the help of edge computing. 
The challenge for data centers to shift to renewable energy and greener practices mainly involves financial barriers. Otherwise, many data centers and businesses of all types are on board with green energy. The rise of green IoT is changing the way companies of all sizes are running their businesses. A growing wave of organizations is expressing their concern about protecting the environment, making IoT significant for going green. Not only can tech developers contribute to a greener world, but so can organizations that embrace smart technology.
<urn:uuid:d84ed74e-3619-441d-bcdf-d64c708b8ddf>
CC-MAIN-2024-38
https://iotmktg.com/all-about-green-iot-how-it-makes-businesses-more-sustainable/
2024-09-16T12:10:44Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651697.32/warc/CC-MAIN-20240916112213-20240916142213-00398.warc.gz
en
0.944086
1,416
3.390625
3
The Channel Definition Format (CDF) is an outdated technology. It was primarily used in the late 1990s and early 2000s as part of Microsoft’s Internet Explorer 4.0 to enable web content to be viewed in a push-like fashion, similar to a news feed. However, with advancements in web technologies and the decline in popularity of Internet Explorer, CDF has largely fallen into disuse. The technology that has effectively replaced CDF is RSS (Really Simple Syndication) and Atom feeds. These technologies allow for the distribution of regularly updated information, such as blog entries, news headlines, audio, and video, in a standardized format. RSS and Atom feeds are widely used across the internet, supported by numerous web browsers, feed readers, and content management systems. They offer greater flexibility, compatibility, and ease of use compared to CDF. Both RSS and Atom feeds enable users to subscribe to content and receive updates automatically, a concept similar to what CDF aimed to achieve but with more widespread support and continuous development. These feeds have become the standard for content syndication on the web, making it easier for content to reach a broader audience without the need for users to repeatedly check websites for updates. In this article: 1. What was the Channel Definition Format (CDF)? Channel Definition Format was an open standard created by Microsoft for Microsoft Internet Explorer version 4 (and proposed as a standard to the World Wide Web Consortium) that defines a «smart pull» technology for webcasting information to users desktops. Based on the Extensible Markup Language (XML), Channel Definition Format (CDF) lets administrators create Active Channels for delivery of content through the users Web browser, and Active Desktop elements and channel screen savers for delivery directly to the users desktops. Channel content can be personalized, and delivery can be scheduled according to users needs and preferences. Using CDF also reduces server load and allows delivery of just the needed content, instead of requiring users to download large quantities of unnecessary content. 2. How it Worked Let’s consider the delivery of Web content to the user’s browser using Active Channels. A Web site can be made into an Active Channel through the addition of a CDF file. The CDF file is a simple text file that is formatted using XML. It forms a kind of table of contents of the logical subset of the Web site that comprises the Active Channel. A link is then created to the CDF file on the Web site. The user clicks the link to subscribe to the Active Channel and download the CDF file. The Active Channel then appears on the channel bar on the user’s desktop. The content for the channel is downloaded to a cache on the user’s system. Channel updates are accomplished by scheduled Web crawls, using either the publisher’s predefined schedule or a user’s customized one. Users can also receive updates to channels by e-mail. Some of the advantages of using CDF for the distribution of Web information to users include - Simplicity: Turning an existing Web site into a channel merely involves creating a CDF file with a text editor and creating a hyperlink to this file. - Structure: CDF describes how to logically group information in a hierarchical structure, independent of the content format. - Personalization: Standard Hypertext Transfer Protocol (HTTP) cookies can be used to deliver personalized information to users. 
- Administrator control: The administrator can control how much of the site can be downloaded by users. - User control: The user can use CDF to specify which portions of a site to download to his or her browser, instead of pulling a lot of content off the site and hoping that it contains the needed information. CDF is not true webcasting in the sense of Internet Protocol (IP) multicasting because it is a "pull" technology. True webcasting is supported by Microsoft NetShow for delivery of content using IP multicasting. 3. The CDF File CDF files were text files used for creating Active Channels, Active Desktop items, and channel screen savers for managed webcasting of content to users' desktops. CDF files are based on the Channel Definition Format (CDF) standard. CDF files provide a mechanism for allowing users to select the content they want to download from a Web site, and they let administrators schedule content for delivery to users' desktops. How CDF Files worked CDF files are used to convert existing Web sites into Active Channels without the need to change the existing site in any way. You simply create a CDF file using a text editor such as Microsoft Notepad and include it in your site. This will allow the content of the site to be webcast to users' browsers. The CDF file must be saved with the extension .cdf, and a link on your site should point to this file so that users can subscribe to the channel. A typical CDF file defines a channel hierarchy for the different Web sites making up the Active Channel. This channel hierarchy contains a table of contents for webcasting the content and consists of a top-level channel, subchannels, and actual content items (Web pages). The simplest format for a CDF file is a list of Uniform Resource Locators (URLs) that point to specific Web pages in the site. More advanced CDF files can contain information such as the following (a sample CDF file is shown after this list): - A map of the hierarchical structure of the URLs in the Web site - Logical groupings of different content items within a site that can differ from the observable link structure of the site itself - The title of each referenced Web page and a brief abstract of its contents - Information controlling the scheduling of content updates
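Putting those pieces together, a minimal CDF file for a small channel might have looked roughly like the example below. It is a reconstruction for illustration: the element names follow the CDF specification as commonly documented, while the URLs, titles, abstracts, and schedule are placeholders.

```xml
<?xml version="1.0"?>
<CHANNEL HREF="http://www.example.com/channel/welcome.htm">
  <TITLE>Example News Channel</TITLE>
  <ABSTRACT>Top stories, updated once a day</ABSTRACT>
  <SCHEDULE>
    <INTERVALTIME DAY="1"/>
  </SCHEDULE>
  <ITEM HREF="http://www.example.com/channel/headlines.htm">
    <TITLE>Headlines</TITLE>
    <ABSTRACT>Summary of today's top stories</ABSTRACT>
  </ITEM>
  <ITEM HREF="http://www.example.com/channel/sports.htm">
    <TITLE>Sports</TITLE>
    <ABSTRACT>Daily sports coverage</ABSTRACT>
  </ITEM>
</CHANNEL>
```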
<urn:uuid:55d4f85b-cbb7-40f6-be20-dea1d841ff1c>
CC-MAIN-2024-38
https://networkencyclopedia.com/channel-definition-format-cdf/
2024-09-19T01:43:39Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651944.55/warc/CC-MAIN-20240918233405-20240919023405-00198.warc.gz
en
0.885937
1,370
2.796875
3
There are many different types of WAN connections floating around in the market today. The decision to choose one connection over another will depend significantly on the services it offers and the requirements of the organization. The major WAN types and their properties are discussed in detail below: All WAN connections have to be provided by a telecommunications company, which is called the service provider. You cannot own the connections of a WAN unless you are the service provider, and this is the thing separating it from a LAN. Due to this, the type of connection you get on a WAN will be quite different from the one on a LAN. These network connections have also evolved over the years in terms of the properties they provide and the requirements of the organization. T1/E1: The service provider offers various technologies to stay connected over a wide range of distances. T1 is the most common type of connection used these days in the United States, and the same in Europe is termed an E1. Through this, you can connect any two parts of the country and then utilize the connection over time. You just need to pay for the connection, and once you have done so, it's your connection and you are the boss of it. The bandwidth of this channel can also be divided into up to 24 channels. T3/E3: Use of this technology can cost you some big bucks. If you need a line with much greater capacity than a T1, for example because the organization demands larger bandwidth between one data center and another, you can accomplish this with a T3 connection in the United States or an E3 connection in Europe. The T3 connection provides the equivalent of about 28 T1 connections together and a speed of 44.736 Mbps, whereas an E3 connection provides the equivalent of about 17 E1 connections together with a speed of 34.368 Mbps. DS3: The DS3 technology makes use of digital signals (DS) in combination with a T-line circuit to overcome bandwidth limitations, offering a capacity of around 44.7 Mbps. This technology provides higher bandwidth and is a powerful combination of 672 DS0s (a single 64 Kbps connection channel is a DS0) combined in a single connection, and that connection is called a digital signal 3 or DS3 connection. OCx: The speed of the OCx connection is the highest of them all. The OC here signifies the optical carrier, and the x signifies the relative speed of the link. It may sound weird, but the x here just goes on increasing and increasing. The initial speed for this OC connection was 50 Mbps, and OC-3 connections soon became available in the market with a speed of 150 Mbps. To date, OC-3072 is in operation and provides a mind-boggling speed of 160 Gbps, but only very large organizations can afford such connections. SONET: SONET is a type of protocol which can transmit data at speeds of 150 Gbps by making use of fiber links but has to be controlled by atomic clocks. SONET stands for Synchronous Optical Networking and is very beneficial for networks which span many geographical regions because of the atomic clock mechanism used in it. It is generally used only by mega corporations for their data traffic. SDH: SDH stands for Synchronous Digital Hierarchy and is one of the standards similar to the SONET technology. SDH transfers data by making use of optical fibers along with LED or laser light.
The capabilities and speed of the SDH technology are quite comparable to SONET, and it is also controlled by atomic clocks. It was originally introduced by the European Telecommunications Standards Institute (ETSI), while SONET was defined by the American National Standards Institute (ANSI). SDH also provides 50 Mbps of bandwidth at STM-0. DWDM: DWDM stands for Dense Wavelength Division Multiplexing and is a type of optical technology introduced mainly to further increase the bandwidth available over fiber optic links. It functions by combining and transmitting many signals of different wavelengths simultaneously over a single fiber. A single fiber is in turn turned into many 'virtual fibers', and because of this technique, DWDM is capable of transmitting data at speeds of more than 400 Gbps. Satellite: If some of you don't have access to coaxial cables, there is no need to worry. The satellite hookup technique is here to help you out. This is one of the most dependable and economical services, and you only require a dish antenna, along with professional instructions or an installer, to locate the satellite with the help of the dish antenna. Once this step is achieved, you can use the signal for downloads from the internet. Uploading, though, becomes more of a challenge here, as there may not be a high-power transponder, so you can use a dial-up connection over your regular telephone line, or some providers may also give you a DSL line for more upload bandwidth. ISDN: ISDN is rarely seen in real life, and there is almost no possibility that you will encounter it in use these days. ISDN has two important variants, i.e. ISDN BRI and ISDN PRI. ISDN BRI uses a layer 2 protocol and communicates over two data channels and a controlling channel. The channels responsible for communication are the bearer channels, and the one responsible for control is the delta channel. Each of these bearer channels can carry 64 Kbps of data, and the delta channel can carry 16 Kbps for better data control. ISDN PRI came out much later, is very similar to a T1 line, and contains 23 bearer channels and one delta channel, each with a speed of 64 Kbps. Cable: The cable companies now provide you a path to internet access, beginning with connecting your computer to a cable modem. The modem is configured by the company and is recognized at the central office, called the headend. From there on, the company is your official service provider, guiding you for internet access. DSL: This is an inexpensive option for home users and small organizations which provides good bandwidth at cheaper rates. DSL stands for digital subscriber line; the most common form of this technique these days is asymmetric DSL, or ADSL. The line for such a connection is already there in the form of telephone lines, so this connection is a cheaper way to access the internet. Cellular: Cellular technology now includes the latest 4G technology in the market and can provide speeds of 100 Mbps, which is faster than many other options, and it can be used as a backup line or as the primary internet option as well. Many of the companies also provide provisions these days to set up your own private network and give access to others as well, and GSM along with CDMA are prominent methods of setting up a cellular connection.
WiMAX: It stands for Worldwide Interoperability for Microwave Access and is a telecommunications protocol that can be used for many applications, such as broadband connectivity, and it allows you to use the network at a much greater distance than traditional Wi-Fi. It is also cost-effective and can deliver speeds of up to 40 Mbps. LTE: It stands for Long Term Evolution and is widely accepted as an international standard. It was still being rolled out at the time of writing, but it offers around 100 Mbps over wireless links for mobile phones, computers and PDAs. It is a standard 4G technology and an enhancement of the 3G standards. HSPA+: HSPA is an acronym for High Speed Packet Access, a wireless broadband standard used by many vendors for internet access from PDAs and phones. It is also known as Evolved HSPA and provides data rates of roughly 22 Mbps up and 84 Mbps down. Fiber: Fiber optic cable is an essential element of networking where higher bandwidth and reliable connections are needed. Many companies offer fiber connections directly to the user's desktop; these can deliver internet speeds of about 150 Mbps, although availability is limited. Dialup: The major benefits of this technology are its low cost and availability, because almost everyone has a telephone line these days, but it does not provide the bandwidth we crave in modern computing. It runs over the plain old telephone service (POTS), also known as the public switched telephone network (PSTN). PON: A passive optical network (PON) is a point-to-multipoint fiber-to-the-premises architecture that allows a single optical fiber to serve many premises, which can be either businesses or homes. With the advancement of the technology, more and more customers are using PON to connect to the internet. Frame Relay: Many organizations have sub-locations spread over an extensive geographic area and want to connect all of those locations together. This can be done either with dedicated lines, which can be quite expensive, or by using special routers and switches through which any point can be connected to any other. The technique is called frame relay because it makes use of Layer 2 frames that are relayed between switches and routers. ATM: Here, ATM is the acronym for Asynchronous Transfer Mode, a protocol introduced after Ethernet that provides a more reliable way of transmitting data than Ethernet. It was developed in the 1980s for data and voice applications. It uses fixed-length cells instead of variable-length frames for greater efficiency and has continued to become faster. Deciding on the best type of WAN connection requires knowledge of the properties of these connections. Comparing the properties of these connections against your requirements can be quite beneficial, and those properties are described below: Circuit Switching: In a circuit-switched service, a dedicated path is set up for the duration of the communication; dial-up lines are the classic example. Once a connection is established across the network switches, all of the data traffic is sent over that physical connection until it is terminated. Packet Switching: Packet switching is very different from circuit switching, because each individual data packet can take an entirely different route to the final destination during the transmission process.
Newer packet-switching techniques make use of frame relay and virtual circuits to avoid errors and improve efficiency. Speed: The speed of any connection depends on the available bandwidth along with the achievable data throughput. A single T1 connection does not guarantee fast speed; performance depends on several factors. The bandwidth of the slowest link determines the speed of the network, and factors such as the Ethernet switches in use and the building's construction also affect the overall speed of the network. Transmission Media: The transmission media is the carrier over which communication is carried out. The media can be glass, copper or even wireless. The bandwidth requirement is the main factor that determines which media is needed, along with considerations such as EMI, the distance the data will travel before reaching the next router, and security; these are generally taken care of by the service provider. Distance: Distance is a measure of how far the data needs to be sent. Fiber optic cable is generally the ideal choice for transmitting data over long distances because it has a much lower attenuation rate than copper cable, since the signal is carried by light rather than electricity. You should be able to recognize the various types of WAN technologies, know their advantages and disadvantages and the major areas in which they are used. You should also be able to compare the various WAN properties and understand the differences between them. These, then, are the main technologies and the factors affecting them that you will need to take into account before choosing a connection; the short sketch below illustrates how the channel counts described earlier translate into aggregate bandwidth.
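The channel arithmetic behind the T1, T3/DS3 and E1 figures quoted above can be checked with a few lines of Python. The numbers are the commonly quoted nominal rates, and the sketch deliberately ignores framing overhead.

```python
# Illustrative sketch of the digital hierarchy figures quoted above.
# Nominal numbers only; framing overhead is ignored.
DS0_KBPS = 64            # one 64 Kbps channel (DS0)
DS0_PER_T1 = 24          # a T1 multiplexes 24 DS0s
DS0_PER_T3 = 672         # a T3/DS3 multiplexes 672 DS0s

t1_payload_mbps = DS0_PER_T1 * DS0_KBPS / 1000   # 1.536 Mbps of payload
t3_payload_mbps = DS0_PER_T3 * DS0_KBPS / 1000   # 43.008 Mbps of payload

print(f"T1 payload  : {t1_payload_mbps:.3f} Mbps (quoted line rate 1.544 Mbps)")
print(f"T3 payload  : {t3_payload_mbps:.3f} Mbps (quoted line rate 44.736 Mbps)")
print(f"T1s per T3  : {DS0_PER_T3 // DS0_PER_T1}")              # 28
print(f"E1 line rate: {32 * DS0_KBPS / 1000:.3f} Mbps")          # 2.048
```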
<urn:uuid:512d3c08-75b2-4967-b992-7e41065900d2>
CC-MAIN-2024-38
https://www.examcollection.com/certification-training/network-plus-wan-technology.html
2024-09-10T13:44:12Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651255.81/warc/CC-MAIN-20240910125411-20240910155411-00098.warc.gz
en
0.954755
2,501
2.796875
3
As awareness and attempts to slow overdoses swell, Big Data tools are set to play a large role in better tracking opioid prescriptions and enabling federal, state and local health agencies to more effectively allocate resources to communities in need. On March 19, the White House revealed its Initiative to Stop Opioid Abuse and Reduce Drug Supply and Demand, which seeks to address the driving forces of the opioid crisis through several means. The White House officially declared the epidemic a national emergency in October in response to rising death rates. Opioid abuse led to more than 63,000 deaths in 2016 — a mounting number that is overwhelming public health agencies, healthcare organizations and states alike. Now, the White House has set its sights on expanding education and treatment opportunities while reducing prescriptions to cut back on opioid addiction. But this isn’t the first attempt the federal government has made to better track, understand and quell the issue. The Health and Human Services Department hopes to control the epidemic through Big Data, following the lead of several states that are calling on data to better track prescriptions. HHS Could Call on State Data to Track Opioid Prescriptions President Trump’s proposed fiscal 2019 budget for HHS would allocate $10 billion in funding to address the crisis. Part of the proposal would “require states to monitor high-risk billing activity to identify and remediate abnormal prescribing and utilization patterns that may indicate abuse in the Medicaid system,” HHS Secretary Alex Azar told members of Congress last week, Health Data Management reports. Azar added that HHS could piggyback off of data from states’ internal prescription drug monitoring programs. PDMPs seek to spot doctor shopping by establishing a database of federal controlled substances that doctors and pharmacists can use to check patients’ medication histories alongside their use of other drugs. Certain states, such as Florida and Kentucky, have already seen the results of implementing these programs. Florida saw a 52 percent drop in deaths related to oxycodone overdoses from 2010-2012 after implementing a PDMP, for instance, and Kentucky has also seen a significant drop in controlled substance use, according to the Centers for Disease Control and Prevention. Following the lead of successful states, others, such as Virginia and Missouri are well on their way to establishing their own PDMPs. “[Prescribers] can see at a glance whether they want to slow down [prescriptions],” David Brown, the director of the Virginia Department of Health Professions, told Williamsburg Yorktown Daily, regarding a new tool that the state is implementing as part of its PDMP. He added that other clinicians are lined up to employ the technology as well. “It gives you a tool to show you if the person is doctor shopping or is already addicted. It will tell you if prescriptions are being dispensed at different pharmacies.” Meanwhile, providers are moving forward with their own goals and solutions. Utah-based Intermountain Healthcare, for instance, is leveraging its electronic health record system alongside new sources of data and its growing health IT infrastructure to cut prescriptions by 40 percent. EHR data could be the answer for health systems looking to cut prescriptions. 
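The doctor-shopping checks that PDMPs perform can be illustrated with a deliberately simplified sketch. Everything here is hypothetical: the record fields, the thresholds, and the idea that a simple count is enough. Real PDMP criteria are more involved and vary by state.

```python
from collections import defaultdict

# Hypothetical, simplified PDMP-style check: flag patients whose controlled-
# substance prescriptions involve many distinct prescribers or pharmacies.
# Thresholds and records are illustrative only.
prescriptions = [
    {"patient": "P001", "prescriber": "D10", "pharmacy": "PH1", "drug": "oxycodone"},
    {"patient": "P001", "prescriber": "D11", "pharmacy": "PH2", "drug": "oxycodone"},
    {"patient": "P001", "prescriber": "D12", "pharmacy": "PH3", "drug": "hydrocodone"},
    {"patient": "P002", "prescriber": "D10", "pharmacy": "PH1", "drug": "oxycodone"},
]

MAX_PRESCRIBERS = 2
MAX_PHARMACIES = 2

by_patient = defaultdict(lambda: {"prescribers": set(), "pharmacies": set()})
for rx in prescriptions:
    by_patient[rx["patient"]]["prescribers"].add(rx["prescriber"])
    by_patient[rx["patient"]]["pharmacies"].add(rx["pharmacy"])

for patient, seen in by_patient.items():
    if len(seen["prescribers"]) > MAX_PRESCRIBERS or len(seen["pharmacies"]) > MAX_PHARMACIES:
        print(f"{patient}: review recommended "
              f"({len(seen['prescribers'])} prescribers, {len(seen['pharmacies'])} pharmacies)")
```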
A new study published last month in the Journal of General Internal Medicine, for instance, shows that electronic health record data could be leveraged to create predictive models to better forecast which patients might be at risk for opioid abuse. “Our model accessed EHR data to predict 79 percent of the future [chronic opioid therapy] among hospitalized patients,” the study notes. Chronic opioid therapy is how the authors define chronic opioid use. “Application of such a predictive model within the EHR could identify patients at high risk for future chronic opioid use to allow clinicians to provide early patient education about pain management strategies and, when able, to wean opioids prior to discharge while incorporating alternative therapies for pain into discharge planning.” HHS Opioid Code-a-Thon Highlights Big Data Winners Beyond current tools, HHS is also encouraging the production of new technologies that take advantage of Big Data to help quell the rising tide of opioid addiction. In December, HHS held its first opioid crisis code-a-thon, which saw participation from nine teams and 300 coders. They were given access to government data with the aim to build tools that could help prevent opioid abuse, enable better access to treatment and encourage healthy clinical use. “When the government holds a code-a-thon, there’s an admission in society that we have hit a roadblock, that we need help, and we go out to the polity to help us,” HHS Chief Technology Officer Bruce Greenstein said at October's Connected Health Conference, MobiHealthNews reports. “Maybe it’s not unique around the world, but it’s certainly rare when a government admits that it’s stuck and needs help from its people. And this is one of the most participatory forms of the relationship between a government and its polity, for maybe one of the most important, pressing questions and problems in our society today.” The team that took home the $10,000 prize in the prevention track created a data visualization tool that helps users — addicts, public health specialists and others — locate drug takeback programs. "We looked at CDC data and we found that, for the most part, individuals that abuse opioids are not obtaining them from drug dealers. The majority are obtaining them from their families. Over 70 percent of individuals that abuse opioids obtain them or borrow them or steal them from their families,” said Taylor Corbett, the spokesperson at the team presentations for the winning company, Visionist, MobiHealthNews reports. “Our hypothesis was, individuals are getting these meds because they’re left over from an operation or something like this. What can we do to get those drugs out of circulation?" The winner in the treatment track, meanwhile, used data to model and predict overdoses so that public health agencies could better track and predict overdoses and better stock and allocate resources. The move toward Big Data is part of a cultural move by HHS to better use and understand data in overall operations and to deliver better care to the U.S. population.
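The study's actual model is not reproduced here, but the general shape of an EHR-based risk model can be sketched with scikit-learn, assuming it is installed. The feature names, the synthetic data and the logistic-regression choice are all illustrative assumptions, not details from the paper.

```python
# Minimal sketch of an EHR-based risk model. Features and labels are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.integers(0, 2, n),    # prior opioid prescription (0/1) - hypothetical feature
    rng.integers(0, 15, n),   # days of opioid supply at discharge - hypothetical feature
    rng.integers(0, 2, n),    # chronic pain diagnosis code present (0/1) - hypothetical
    rng.integers(18, 90, n),  # age
])
# Synthetic outcome loosely tied to the first two features, for illustration only.
y = (0.8 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(0, 0.5, n) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print("AUC on held-out data:", round(auc, 3))
```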
<urn:uuid:9fa9f829-aad5-4ce3-88f8-28948c18fd0b>
CC-MAIN-2024-38
https://fedtechmagazine.com/article/2018/03/hhs-embraces-big-data-help-battle-opioid-crisis
2024-09-11T16:03:53Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651390.33/warc/CC-MAIN-20240911152031-20240911182031-00898.warc.gz
en
0.953008
1,280
2.671875
3
In today’s time, cyber security is improving lives and businesses in unimaginable ways. However, as the world grows more digital, the need for cybersecurity is more important than ever. With cyberattacks, scams, phishing, data breaches, and other online threats, the internet can be deadly for individuals or businesses who are unprepared. Due to this, global organizations, regardless of their business model, size, or industry, have started investing extensively in cyber defense and training. Despite the rising emphasis on making businesses cyber safe, there are still many widespread misconceptions about cybersecurity in the business world. In addition, it might be challenging to distinguish fact from fiction. So below are the top 10 cybersecurity myths that you need to avoid: - Hackers don’t target small businesses: For start-ups and smaller businesses, cybersecurity might require a big investment, and many decision-makers would rather use that money in other areas of the company. The myth that hackers do not target small businesses, however, is untrue. Due to their inadequate security measures, hackers primarily target small firms and take advantage of untrained workers for social engineering attacks. Additionally, smaller organizations are significantly more vulnerable to the long-lasting effects of cyberattacks than larger ones. - Cyber-Security Software or Antivirus is Good Enough: There are several possible ways to prove that this myth is wrong. Even while the majority of people believe they are more protected after installing security software, this is not actually the case. The clients’ defenses are useless because such security software providers’ servers are susceptible to hacking attacks. It’s crucial to carefully consider the kind of cyber-security software you choose. It is simple to pick an antiviral at random and then regret it afterward. So, choose reputable companies with more advanced security measures. Some of the best ones could cost you a dollar or two, but saving money now might wind up costing you a lot later. - I have a strong password, I am safe: Even though, having a strong password is essential, it is regrettably insufficient on its own. Using multi-factor authentication (MFA), which requires users to authenticate themselves using a second method like their phone or an app like Google Authenticator, is an excellent approach to add an additional layer of protection. Even if criminals are successful in obtaining usernames and passwords, MFA will prevent them from logging in without the “second factor.” - It is easy to spot phishing: The notion that phishing can be quickly identified is another crucial cybersecurity misconception. However, it is one of the most commonly used methods for stealing people’s personal information or gaining access to a system. Anyone could become a victim of malware because it can be so deceptively hidden in an email. Never assume that the links you click on can’t possibly be fake. Ensure that everyone on your staff is aware of the dangers of phishing. Through training, they can learn how sophisticated these scams can be and how simple it is to fall for one. - The only real concern is external threats: Internal threats are just as concerning as exterior ones because they are more challenging to defend against. 
There are three main types of internal threats: - Negligent insiders - Stolen credentials - Malicious insiders The best way to safeguard against these is to use monitoring and data loss prevention (DLP) tools while enforcing strict access permissions (and ensuring that employees can only access the data they need). - Security is the responsibility of IT professionals only: There is no magic wand available to IT professionals to ensure cybersecurity. They can identify and implement good processes and policies, but their effectiveness depends on all parties involved, including the staff. As a result, employees should undergo training on various cyberattacks and cybersecurity. If you're looking for the best cybersecurity training provider, get in touch with InfoSec4TC now. InfoSec4TC is one of the most reputable and trustworthy companies offering cybersecurity training programs, and they have professionals on staff who can assist you whenever you need it. - I will know straight away if my business is attacked: These days, this is rarely the case. Pop-up advertising and slow-loading browsers were once simple warning flags, but scammers have become more cunning now. Since hacking is a silent crime, it serves hackers' best interests to remain unnoticed for as long as they can: the longer they have access to your systems, the more data they can steal. - My data isn't worth anything: This is a myth. Given that so many people reuse the same login credentials for the majority of their services, including online banking, even if hackers obtain only usernames and passwords, the consequences can still be highly damaging for everyone whose data was breached. - Hacking Apple devices is impossible: There is a misconception that Apple products are impervious to online dangers; this is untrue. Users who believe their Apple products cannot be hacked are more likely to experience data loss. Apple products can and do get hacked. - Since I don't own a computer, I am immune to hacking: With so many of our devices connected to the internet today, hackers and scammers are no longer limited to targeting computers. Phones, routers, and even smart TVs are targets. We must confirm that all endpoints are secured. Knowing these common myths about cybersecurity will help you and your employees prepare for the future. However, it is crucial for every business, whether small or big, to teach their employees about cybersecurity and provide them with the best cybersecurity training. If you're looking for the best cybersecurity training provider, InfoSec4TC is the name you can count on. We are one of the leading online sites providing a comprehensive range of online courses and training programs, from IT basics to cybersecurity, with more than 150 courses in IT and cybersecurity. To know more about our services, explore our website right away or WhatsApp us at +971501254773.
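Myth 3 above stresses that a strong password needs to be backed by MFA; how credentials are stored matters as well. The following sketch uses only Python's standard library to show salted PBKDF2 hashing with a constant-time comparison. The iteration count and example passwords are illustrative choices, not a policy recommendation.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted PBKDF2-HMAC-SHA256 digest; returns (salt, digest)."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, expected):
    """Compare against a stored digest in constant time."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)

salt, stored = hash_password("correct horse battery staple")          # example only
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("hunter2", salt, stored))                       # False
```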
<urn:uuid:164cc0a1-c765-4f39-ae31-3999988d41c9>
CC-MAIN-2024-38
https://www.infosec4tc.com/10-cyber-security-myths-its-time-to-let-go-of/
2024-09-16T16:46:12Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651697.45/warc/CC-MAIN-20240916144317-20240916174317-00498.warc.gz
en
0.950419
1,265
2.71875
3
Get the highest-level security for your website with our affordable web security services. SSL Certificate Security Glossary This SSL Glossary Is A Compilation Of The Most Common Terms That Are Used In The World Of SSL128 Bit 128 Bit key is the length of a symmetric encryption key used for the encryption and decryption of data. An SSL Certificate with a key length of 128 Bit means it will have a possibility of 2128 combinations. That becomes almost impossible to hack. To use 128-bit key length, your server must be upgraded too to support 128-bit key length. 256 Bit encryption key is used for the encryption and decryption of data. Clearly, it can be said that this key length is more secure than a 128-bit key because it will have a possibility of 2256 combinations for someone to be able to crack that. Industry-wise, most of the certificate authorities are offering 256-bit encryption. 2048 Bit Encryption When speaking about the 2048-bit encryption, it is common to misunderstand it as the length of the encryption key; however, it is not an encryption key, but it is the size of the SSL certificate. The previous version of it was 1024-bit which has been denounced, and only key lengths 2048-bit and above are in use today. In an Asymmetric Encryption, the encryption and decryption of the data are done using two separate keys. However, they are connected mathematically. Those are the public key (visible to all) and the private key (only the owner has access to it). RSA and ECC algorithms are the most widely used Asymmetric Encryption. This technology validates whether the software uses a digital certificate issued by a certificate authority. It also confirms the publisher of the software and that it has not been altered. Business / Organization Validation A business or organization validation is one of the most important processes that any CA (certificate authority) performs before issuing an OV (organization validated) or EV (Extended Validated certificate). This validation basically includes the verification of the address, phone number and legal registration status of the organization for which an SSL certificate is requested. To validate these details the CA will use any government database, independent third-party online database, etc. The SSL requestor must ensure that their organization’s details are updated accurately on such platforms because their security certificate will be issued only under the verified organization details. It is a file signed and issued by a certificate authority, or it can also be self-signed. The file is issued to an organization or individual and it verifies that data exchange over the web browser or a device is from an intended source. The certificate details show the information about the entity that the certificate is issued for and the certificate’s specification. CA (Certificate Authority) A CA (Certificate Authority) is an entity in charge of issuing the digital security certificate. Before a CA can issue the certificate, they put the domain name and individual/organization details (depending on the type of certificate) under a thorough verification process fulfilling the CA/B Forum guidelines. This process induces trust in both the digital certificate owner and the dependent party (mostly the users/visitors). 
CA/B Forum (Certification Authority/Browser Forum) A voluntary body comprised of CAs (Certificate Authorities) and Web Browsers regulates the standards of SSL certificates and constantly governs any upcoming threat, and comes up with measures to safeguard the users from the same. An SSL certificate is not an individual element. The certificate authority strings the multiple pieces together to form a certificate. This model of multiple certificates combined is called a chain certificate. A chain certificate consists of three parts: leaf certificate (server certificate), Intermediate Certificate and Root Certificate. CRL (Certificate Revocation List) CRL is a list of all the revoked certificates. The Certificate Authority maintains the CRL. If the private key is compromised, the certificate can get revoked, and the browsers will no longer trust the certificate. If a certificate is revoked, it will no longer protect your website and it will be exposed to online threats. CSR (Certificate Signing Request) CSR stands for Certificate Signing Request, which is an encoded file consisting of the information provided by the certificate requestor, such as domain name, organization, locality, etc. It is an essential requirement to request an SSL certificate typically generated from the server where the domain name is hosted. But there are online third-party tools available as well to generate a CSR. A digital signature is an electronic signature that involves the use of a mathematical algorithm to sign. The purpose of having a digital signature is that the receiver comprehends the identity of the sender of any message or any data and knows that it is not tampered with. Digital certificates digitally signed by well-known and globally trusted CA (certificate authority) gain instant trust among the user because they can rest assured that the signature is authentic, and the data exchange is intact. These digital signatures cannot be copied by anyone else, which is one of the most significant advantages of having a digital signature. Digital Signature Algorithm (DSA) Digital Signature Algorithm (DSA) is a process of producing Digital Signatures based on mathematical expressions and is proposed by the National Institute of Standards and Technology (NIST) and the National Security Agency (NSA). DSA is used in 4 tasks: In the generation of key, Distribution of key, Signing and Verification of Signature. DSA does not have the ability to encrypt or decrypt any information. Digital Signature Standard (DSS) Digital Signature Standard (DSS) is a signature algorithm used to authenticate the signatory and the data. It was launched by the National Institute of Standards and Technology (NIST). The DSS functions by using a public key, a private key, a hash function, a random number k, and a global public key. E-commerce is an activity of buying and selling online services & products over the Internet. Such e-commerce websites require the highest level of security that is offered by an EV SSL certificate. ECC (elliptical curve cryptography) ECC is an encryption algorithm for SSL and is frequently called an alternative to RSA. It has gained popularity mainly because of its smaller key size. Also, in RSA, the private and public keys are both integers, but in ECC, the public key is a point on the curve, and the private key is an integer. The process of converting simple data into complex text is called Encryption. 
In the world of SSL, encryption is very necessary to deceive any person with ill intentions to read or steal your data. The data that is encrypted can be decrypted or deciphered using an appropriate key. With public-key encryption, anyone can encrypt data using the owner’s public key, but the private key remains with the owner that can decrypt the data. FQDN (Fully Qualified Domain Name) A fully qualified domain name is the exact domain name that gives its location on the DNS (Domain Name System). For example, mail.domain.com is an FQDN. HSTS (HTTP Strict Transport Security) is a web server directive that lets a site be contacted over HTTPS encryption. Only getting an SSL certificate cannot be enough sometimes because hackers may still find ways to reach your site over http://; That is why HSTS forces browsers and devices to use HTTPS if available even if a user type of http:// or www. Hypertext Transfer Protocol Secure (HTTPS) is a protocol for secure communication over the internet. Usually, suppose a web server has an SSL certificate installed for a particular domain, then on entering https:// before that domain name. In that case, the web browser will indicate that the website is safe to visit and the data exchange on the domain will be secure and encrypted. HTTPS Port / SSL Port A port is a point where the connection between a browser and a webserver will be established. There are different types of ports, but generally, port 443 is used for SSL. Port 80 is usually used to support non-secure http traffic. That is why port 443 is used by most websites to establish a secure HTTPS connection. Intermediate certificates are the connecting certificates between a server certificate and a root certificate. There is at least one intermediate certificate; however, there can be more than one too. MD5 (Message-Digest Algorithm) is a cryptographic hash function commonly used to verify whether a file has been altered or not. MD5 produces a checksum on both sets and then compares them to verify if the checksums are identical. However, MD5 has some flaws and is not recommended to be used with advanced encryption applications. Microsoft Exchange Server Microsoft Exchange Server is a mail server and calendar server, it is a product of Microsoft. Its mail server provides flexibility to send emails, calendaring, and tools customization. You can use the web application, Outlook, or your phone, it is that simple. Exchange servers have been helping around for more than 20 years and it keeps evolving over the years. Many businesses take advantage of Office 365 to streamline their mailing experience. MiTM (Man-In-The-Middle) attack means when an impersonator places himself in between a user and an application to eavesdrop with an intention to steal information such as login credentials, credit card details, etc. The easiest way to fall prey to MiMT is using WiFi connections that are not password-protected, using unsecure websites, using public networks at cafes, hotels, etc. to conduct sensitive transactions. Avoiding any such unsecured connections will help you not to fall prey to MiMT. Mixed Content means that a page is serving both secure and insecure elements. This totally negates the idea of SSL because even though you have installed the SSL certificate on your server, the browser will throw an insecure warning because of the mixed content warning. A multi-domain SSL certificate can be used when a user has more than one website to secure. 
For example, a multi-domain SSL certificate can secure both domain.com and domain.co.uk; Normally, the primary domain in a multi-domain SSL certificate is taken from the common name in the CSR (certificate signing request) and the additional domain that a user needs to secure is also known as SAN (Subject Alternative Name) which is generally asked to input during the generation of the SSL certificate order. PEM (Privacy Enhanced Mail) file format is the most used file format for certificate requests, certificates, and keys. It can be easily viewed with any text editor such as notepad. Extensions that they normally have are .crt, .cer, .pem, .key PFX stands for Personal Exchange Format, and it consists of the public key (SSL certificate) and its corresponding private key. It is in the format PKCS#12. Generally, the CAs (certificate authorities) issue the SSL certificate in PEM format because they do not have access to the private key. Therefore, the certificate requestor can use the PEM file (received from the CA) and the private key that they have and can create a PFX file; There are online tools available for this purpose. PKCS (Public Key Cryptography Standard) is a set of standards from PKCS#1 to PKCS#15. PKCS is based on an asymmetric cryptographic algorithm because it uses a public and private key. It was developed by RSA laboratories and backed by security developers around the world. PKI (Public key infrastructure) is a crucial part of the encryption process as well as they help in the authentication of the devices that are communicating. PKI combines various elements that form technologies such as software, hardware and procedures needed to create, manage, store, distribute, and revoke digital certificates. In the world of SSL, a private key holds very high importance in terms of security. A private key is generated with the CSR (certificate signing request) and must be preserved as it will be needed to install the SSL certificate. It should not be shared with anyone and if the private key is compromised, the SSL certificate can be revoked by the CA (certificate authority). In asymmetric encryption, a public key is accessible to anyone. Any data that has been encrypted with the public will only be decrypted by the corresponding private key. ‘Reissue’ is a term used when a user requests another security certificate despite an active certificate. The new certificate will not be a copy of the existing certificate but will have a different certificate serial number. There might be several reasons that a user might need to reissue a certificate; Most common is losing the private key and changing the server. The SSL certificates are made up of multiple certificates that the issuing CA (certificate authority) joins together, this is known as the Chain of Trust. In this ‘chain of trust’, the root certificate is at the base of the chain. RSA encryption algorithm is an asymmetric encryption algorithm, and it uses a key pair that is linked mathematically to encrypt and decrypt data. That means if a public key encrypts the data, then the private key can encrypt it and vice versa. To ensure maximum security, the minimum key length recommended by NIST (National Institute of Science and Technology) is 2048-bits. SHA-1 (Signature Hash Algorithm 1) is a Cryptographic Hash Function that has a message digest of 160 bites. SHA-1 was declared insecure in the year 2005 and major companies like Google, Mozilla, and Microsoft have stopped accepting SHA-1 SSL certificates. 
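Several of the terms above (RSA, private key, PEM) come together when a key pair is generated. The sketch below assumes the third-party `cryptography` package is installed; the key size, output file name and lack of passphrase protection are illustrative choices, not recommendations.

```python
# Illustrative sketch, assuming the third-party 'cryptography' package is installed.
# Generates a 2048-bit RSA private key and writes it out in PEM format.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

pem = private_key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.TraditionalOpenSSL,
    encryption_algorithm=serialization.NoEncryption(),  # illustration only; protect real keys
)
with open("example_private_key.pem", "wb") as fh:        # hypothetical file name
    fh.write(pem)

public_pem = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)
print(public_pem.decode()[:60], "...")
```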
SHA-2 (Signature Hash Algorithm 2) is a family of the hashing algorithm. SHA-2 has replaced SHA-1 as it was declared insecure by NSA & NIST. SHA-2 has a drawback: few older devices or OS might not support the SHA-2 hashing algorithm. SHA-2 has six variants, SHA-224, SHA-256, SHA-284, SHA-512, SHA-512/224 and SHA-512/256. In each of these variants, the number represents the bit values. Shared SSL & Wildcard SSL Many web hosting companies share a single certificate among multiple clients. A wildcard SSL certificate will fulfil such a requirement and it becomes easier for the web host to manage multiple client sites using a single certificate. A wildcard can secure multiple sub-domains; for example, a wildcard certificate for *.hosting.com will secure all its first-level sub-domains like client1.hosting.com, client2.hosting.com, client3.hosting.com, etc. A shared or wildcard SSL certificate is not only limited to web hosts; any client or company can make use of this certificate if they want to secure multiple sub-domains. For web hosting companies, it is important to make sure that the server displays the correct SSL certificate issued for the domain name. SNI (Server Name Indication) permits a server to host multiple SSL Certificates for multiple domain names under a single IP address, which is most known as shared IP. In the process of SSL handshake, SNI will add the domain name of the server as an extension in the ‘Client Hello’ message and by this, the server will know which domain name to present while using shared IP. Therefore, the server will display the correct SSL certificate against the exact domain name. SSL (Secure Sockets Layer) SSL stands for Secure Sockets Layer. The SSL certificate encrypts the data that is transferred between the web browser and a web server. The SSL certificate will only function if it is correctly installed on the web server where the domain name is hosted. It is recommended to use an SSL certificate that is issued by a globally trusted CA (certificate authority) because such an SSL certificate will be compatible with all the leading web browsers like Chrome, Firefox, etc. An SSL Handshake is a process of exchanging details to establish a connection between a client and a web server. Basically, it is a process where the client verifies the SSL certificate that is installed on the server by initiating a message with their details such as TLS version, and cipher suite and the server replies with the same details of the server. The client then uses the server's public key and sends another message which is further decrypted by the server using the private key; once this cycle is completed, the SSL handshake is established successfully. An SSL key is also known as a Private Key. Most commonly, the SSL Key is stored on your server that is because when you create a CSR (certificate signing request) on your server, it will also generate a corresponding SSL Key and will be stored on the same server on which the CSR was generated. Now, when the SSL certificate is installed on the server, it will match with the private key that was created with the CSR and this proves the legitimacy of the SSL certificate. The SSL Key is an essential component in the entire SSL process and hence it should never be shared with anyone. If you lose the SSL key, you might not be able to install the SSL certificate on your server. SSL proxy performs Secure Sockets Layer encryption (SSL) and decryption between the client and the server without anyone detecting its presence. 
It controls the SSL traffic in order to conduct a secure exchange of data between a client and a server. TLS (Transport Layer Security) is a security protocol. It replaced SSL in 1999; however, because SSL was very widely popular, TLS is still referred to as SSL. UCC SSL Certificate UCC (Unified Communications Certificate) is a multi-domain SSL certificate that allows you to secure multiple domains and sub-domains all together in a single SSL certificate. This was specifically designed for Microsoft Exchange and Office Communications servers. Verification / Validation Is a process where the certificate-issuing authority puts your domain name and/or organization under a vetting process to ensure the full legitimacy of the details furnished by the certificate requestor. Vulnerability is a flaw that can allow cybercriminals to exploit and break into your system. The cybercriminal can run malicious code, steal your sensitive data, or even install malware in your system. Complexity, connectivity, poor password management, software bugs, etc. are a few of the many causes of vulnerability. Any flaws caused due to these would allow an attacker to target and time an attack appropriately. Vulnerabilities Assessment is a process to carefully review the security weakness in the system. It identifies if there are any vulnerabilities in the system, then assigns the level of severity of the vulnerability and recommends the remedy. External and Internal vulnerability scans, Environmental scans, etc., are some of the tools that can help with a vulnerability assessment. A proper vulnerability assessment can reduce the risk of your system falling to cyber threats. An online system to look up the information related to a domain name. The WHOIS shows information such as the organization name, registered email of the domain, address, etc.; However, some of this information may be hidden from the public depending on the domain name owner’s choice. X.509 Certificate is a digital certificate based on a standard globally renowned International Telecommunication Union X.509 standard in which the format of PKI certificates is defined. These certificates are almost everywhere, and we encounter them daily while using a website, device, and mobile application. X.509 certificate standard is applied in SSL and HTTPS authentication and encryption, in code signing, document signing, client authentication, etc.
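Many of these entries (the TLS handshake, port 443, SNI and X.509 certificates) can be observed directly with Python's standard library. The sketch below opens a TLS connection and prints basic certificate details; the host name is a placeholder.

```python
import socket
import ssl

# Illustrative sketch using only the standard library: perform a TLS handshake on
# port 443 (sending the host name via SNI) and inspect the server's X.509 certificate.
host = "example.com"                              # placeholder host
context = ssl.create_default_context()            # validates against trusted root CAs

with socket.create_connection((host, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()
        print("Negotiated protocol:", tls.version())   # e.g. TLSv1.3
        print("Subject            :", cert.get("subject"))
        print("Issuer             :", cert.get("issuer"))
        print("Valid until        :", cert.get("notAfter"))
```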
<urn:uuid:04b76413-5d9b-4ec5-af25-411159fec36d>
CC-MAIN-2024-38
https://certera.com/support/glossary
2024-09-19T05:18:50Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651981.99/warc/CC-MAIN-20240919025412-20240919055412-00298.warc.gz
en
0.928151
4,039
3.234375
3
Every data center has an invisible ceiling that limits the amount of IT equipment, servers, storage, and network devices that it can handle. This is known as the capacity of the data center. To make the most of data center floor space there has been a shift towards high-density computing. Data centers must guarantee that localized cooling capacity is enough to match non-uniformly distributed heat loads. Non-uniform because a data center is a “living breathing animal” some racks may be working at higher loads than others at different times of the day. The original Mechanical Electrical Plumbing (MEP) architecture and data center location choice have already substantially determined a data center’s power and cooling capability. The combination of power, cooling, and floor space creates an unseen barrier that leads to the concept of stranded capacity. Because of the system’s design or configuration, stranded capacity is the capacity that cannot be used by IT loads. It’s incredibly expensive for data centers to fail to meet the operating and capacity needs of their initial design, and getting your data center green is a challenge. Stranded Cooling Capacity Stranded cooling capacity is the most expensive type of stranded capacity since it refers to any part of the mechanical system that is working but not contributing to cooling IT equipment. Wasted cooling energy is one of the consequences of stranded capacity and consequently money. But just because you can’t see the issue doesn’t imply it doesn’t exist. With that in mind, here are the effects of stranded capacity a data center manager should be concerned about: Effects of Stranded Cooling Capacity - There Is More Stranded Capacity In Most Data Centers Than You Might Think The data clearly revealed that there was 3.9 times more cooling capacity than IT load at the last 45 sites analyzed by Upsite. This means a lot of fan horsepower is being used unnecessarily, and money is being spent on cooling equipment that is not needed. - Low Cooling Unit Temperature Set Points Can Strand Capacity Manufacturers rank their cooling units based on conventional return-air conditions, which are typically 75°F with a relative humidity of 45%. However, because most sites run their cooling units at lower setpoints than recommended, the rated capacity cannot be met. Because the cooling unit’s cooling capacity reduces at lower return-air temperatures, this results in a very costly situation of more cooling units working. A typical 20-ton (70 kW) cooling unit, for example, has a total capacity of 20 tons (70 kW) at a return-air temperature of 75°F and an RH of 45%. The same 20-ton cooling unit, however, has a sensible cooling capacity of only 17 tons at a 70-degree return-air temperature and 48% Rh (59.7 kW). - High Relative Humidity Can Strand Capacity Condensation can build on cooling unit coils in some IT settings due to high relative humidity (RH%). Moisture condensing on cooling unit coils produces heat, which uses some of the cooling capacity of the unit, stranding capacity that could otherwise be utilized to lower the supply air temperature to IT equipment. - Misplaced Perforated Tiles Can Strand Capacity In your computer room, misplacing perforated tiles reduces cooling capability. Perforated tiles or grates, for example, can be placed in an open area or a hot aisle to allow valuable conditioned air to escape the raised floor plenum. The amount of air lost through these tiles is insufficient to keep IT equipment cool. 
Unused conditioned air is referred to as stranded capacity. - Unsealed Cable Openings Can Strand Capacity Unsealed cable apertures, like misplaced perforated tiles, release bypass airflow, stranding cooling capacity since conditioned air escapes via these cable gaps and cannot be utilized for IT equipment. Stranded capacity can be recovered without spending loads of money. All that must be done is basic and effective management and controls adjustments. These simple steps will recover stranded cooling capacity and lead to energy savings. How To Avoid Stranded Capacity? To avoid stranded capacity you must first identify the limiting factor and modify the capacity of the remaining 2 elements to rebalance each of the 3 defining parameters. - Cooling as limiting factor If cooling capacity is not able to efficiently cool the power load, the data center’s PUE will suffer; systems will become overheated and some of the available power will be wasted as heat output. Underfloor cabling with limited space for cooling can contribute to stranded power capacity. - Space as limiting factor When physical space within a data center facility has been exhausted, modular e-houses are an efficient solution to facilitate continuous upscaling of power capacity. Additionally, the introduction of low footprint infrastructure within the data center can help to optimize the white space and eliminate wasted square footage. - Power as limiting factor Overbuilding of data center capacity is a major issue and is largely due to the notion that data centers need to be armed with enough capacity to meet unforeseen demand. In reality, this can lead to excessive stranded capacity which can be very costly. It is important to right-size your data center to support optimal operating efficiency where cooling, power, and space are in balance. Preventing Stranded Capacity With AKCP Monitoring As you can see, stranded capacity in a data center can come from a variety of places. They could account for a significant portion of the site’s cooling capacity on their own. They can add up to a significant loss of resources and money when added together, thus they must all be handled to effectively maximize a room’s cooling efficiency and capacity. Reducing stranded capacity will increase the amount of potential load that can be efficiently cooled as well as the amount of money that can be saved by making necessary airflow management (AFM) changes. The Cabinet Analysis Sensor (CAS) features a cabinet thermal map for detecting hot spots and a differential pressure sensor for analysis of airflow. Monitor up to 16 cabinets from a single IP address with the sensorProbeX+ base units. The Wireless Cabinet Analysis Sensor is also available using our Wireless Tunnel™ Technology. Differential Temperature (△T) Cabinet thermal maps consist of 2 strings of 3x Temp and 1x Hum sensor. Monitor the temperature at the front and rear of the cabinet, top, middle, and bottom. The △T value, front to rear temperature differential is calculated and displayed with animated arrows in AKCPro Server cabinet rack map views Differential Pressure (△D) There should always be a positive pressure at the front of the cabinet, to ensure that air from hot and cold aisles is not mixing. Air travels from areas of high pressure to low pressure, it is imperative for efficient cooling to check that there is higher pressure at the front of the cabinet and lower pressure at the rear. 
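A minimal sketch of the ΔT and ΔP checks just described, with made-up sensor readings and illustrative alarm thresholds; real limits depend on the site's design and the sensor vendor's guidance.

```python
# Hypothetical readings and thresholds; real limits are site-specific.
front_temp_f = {"top": 68.0, "middle": 67.2, "bottom": 66.5}   # cold-aisle side
rear_temp_f  = {"top": 97.5, "middle": 91.0, "bottom": 86.0}   # hot-aisle side
front_minus_rear_pressure_pa = 4.2     # differential pressure across the cabinet

MAX_DELTA_T_F = 25.0                   # illustrative per-position limit
MIN_POSITIVE_PRESSURE_PA = 0.0         # front should stay at higher pressure

for position, front in front_temp_f.items():
    delta_t = rear_temp_f[position] - front
    flag = "ALERT" if delta_t > MAX_DELTA_T_F else "ok"
    print(f"{position:>6}: dT = {delta_t:5.1f} F  [{flag}]")

if front_minus_rear_pressure_pa <= MIN_POSITIVE_PRESSURE_PA:
    print("ALERT: negative pressure at cabinet front (hot/cold air likely mixing)")
else:
    print(f"front-to-rear pressure OK (+{front_minus_rear_pressure_pa} Pa)")
```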
Rack Maps and Containment Views With an L-DCIM or PC with AKCPro Server installed, dedicated rack maps displaying Cabinet Analysis Sensor data can be configured to give a visual representation of each rack in your data center. If you are running a hot/cold aisle containment, then containment views can also be configured to give a sectional view of your racks and containment aisles. Cabinet Analysis Sensor connects to AKCP sensorProbe+ base units. Extendable up to a maximum of 15 meters cable length, you can monitor multiple cabinets from a single IP address. Up to 16 sensors can be connected to a single SPX+. The latest generation of sensorProbe devices, in a form factor that allows for 1U, 0U, and DIN rail mounting. A low-profile design that is economical on cabinet space. sensorProbeX+ comes in several standard configurations or can be customized by choosing from a variety of modules such as dry contact inputs, IO’s, internal modem, analog to digital converters, internal UPS, and additional sensor ports. - Every sensorProbeX+ is equipped with Ethernet, Modbus RS485, EXP, and BEB communications. - Compatible with sensorProbeX+ EXP and BEB units, expand the capabilities of your device. - 1U rackmount brackets, Tool-less 0U mounting, or DIN rail mounting options. - Notification by SNMP, Email, SMS (requires optional cellular modem), built-in buzzer. - Compatible with a wide range of AKCP Intelligent sensors. - Start with base configuration and build up your device with the modules you need. - Up to 80 virtual sensors.
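The derating example earlier (a 20-ton unit falling to roughly 17 tons of sensible capacity at lower return-air temperatures) and the cooling-to-load ratio can be put into a back-of-the-envelope calculation. All figures below are illustrative assumptions, not measurements from any real site.

```python
# Back-of-the-envelope sketch of stranded cooling capacity. Illustrative numbers only.
TON_TO_KW = 3.5                      # approximate conversion for cooling tons

units = 4                            # e.g. four 20-ton CRAC units installed
rated_tons_per_unit = 20.0           # at 75 F return air / 45% RH
derated_tons_per_unit = 17.0         # at 70 F return air / 48% RH (per the example above)

it_load_kw = 75.0                    # hypothetical measured IT load

rated_kw = units * rated_tons_per_unit * TON_TO_KW
derated_kw = units * derated_tons_per_unit * TON_TO_KW

print(f"rated cooling   : {rated_kw:.0f} kW ({rated_kw / it_load_kw:.1f}x the IT load)")
print(f"derated cooling : {derated_kw:.0f} kW")
print(f"capacity stranded by low set points: {rated_kw - derated_kw:.0f} kW")
```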
<urn:uuid:bc95e262-cfd7-4191-bc76-73d446c01639>
CC-MAIN-2024-38
https://www.akcp.com/blog/effects-of-stranded-capacity-in-data-centers-and-how-to-avoid-it/
2024-09-09T13:01:40Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651098.19/warc/CC-MAIN-20240909103148-20240909133148-00298.warc.gz
en
0.917459
1,775
2.65625
3
The oil train that exploded in West Virginia met the industry's voluntary 2011 safety standards, but a growing number of accidents has the Obama administration considering tougher rules for tank cars. A look at what's rolling on U.S. rails: OLDER TANK CARS "Legacy" tank cars have a shell that is seven-sixteenths of an inch thick, and no shields to protect their front and back from ruptures. They also lack protective fittings on top where oil is loaded, although some older cars do have an outer jacket, seven-sixteenths of an inch thick, for extra protection. The government wants to phase out these "legacy DOT-111 cars." The Railway Supply Institute said 28,300 of the cars were carrying crude oil and 29,300 were carrying ethanol last year. To meet the industry's 2011 standard, some "1232 cars" were retrofitted with a thicker, half-inch thick shell; "half-height" shields to protect each end from crumpling; and special top fittings to reduce the risk of ruptures. However, they don't have an outer jacket. Newer 1232 cars have the same thinner shell as the legacy cars, but they have an outer jacket, a full-height shield and special top fittings protection. POSSIBLE NEW RULES Draft regulations sent to the White House budget office for review are not public, but possible requirements considered by the Department of Transportation include a steel shell nine-sixteenths of an inch thick, an outer jacket and a thermal layer in between them to prevent overheating; extra protection for top and bottom outlets; full shields in front and back; systems to prevent cars from rolling over; and electronically controlled brakes that could stop all cars in a formation at the same time rather than sequentially. Oil and rail industries said the brakes alone could cost up to $21 billion for minimal benefits.
<urn:uuid:c0061b4a-22f3-4530-be8c-b8a861e79f32>
CC-MAIN-2024-38
https://www.mbtmag.com/quality-control/news/13213076/white-house-considering-oil-train-safety-upgrades
2024-09-09T11:22:33Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651098.19/warc/CC-MAIN-20240909103148-20240909133148-00298.warc.gz
en
0.959568
391
2.703125
3
For organizations looking to standardize software deployments across platforms, cut back on overhead costs and enhance scalability, (server) virtualization and containerization are the top two approaches in use today.Both virtualization and containerization technologies employ the use of self-contained virtual packages and can help IT administrators become more agile and responsive to evolving business demands.However, the two concepts can be a tad confusing. Read on to get a better understanding of how these technologies operate and which is the most suitable for different use cases. What Are Containers? Containers may be described as packages that include everything needed to run a single application or micro service. This includes its dependencies and run-time libraries. Containers allow the application to be run quickly and reliably from anywhere, right from desktop computers to physical and virtual servers and even the cloud. The container is abstracted away from the host operating system (OS). Each container shares the OS kernel with other containers rather than including its own full OS. Access to underlying OS resources is limited. As such, a containerized application can be run on different infrastructure types such as the cloud, virtual machine, bare metal and so on, without the need for refactoring the application for each IT environment. Since containers typically share the machine’s OS kernel and don’t have the overhead of having a full OS within each container, they are often regarded as being more “lightweight” than virtual machines (VMs). How Does Containerization Work? Containerization may be defined as a type of OS virtualization wherein applications are run within isolated user spaces (containers) that all share the same OS kernel. It is the encapsulation of an application and the environment required to run it so it can be efficiently and consistently run across several different computing platforms. Containerization is emerging as the preferred approach for software development and DevOps pipelines. The creation and deployment of applications are faster and more secure with containerization. When code is developed using traditional methods in a specific computing environment and transferred to a different computing platform, it can often result in errors. However, containerization effectively eliminates this problem by encapsulating the entire application code along with its related libraries, dependencies and configuration files required for it to run. Pros and Cons of ContainerizationLike every technology out there, containerization has its fair share of advantages and disadvantages that you must take into consideration. Pros of Containerization - One of the major advantages of containerization is that it provides a fast and lightweight infrastructure for you to run your applications. The relatively lightweight containers are more flexible, and you can create and move them more quickly than VMs. - Containerization supports policy-based optimization. You can use an automation layer to locate, auto-migrate and execute on the best platform. - Containerization is helpful in lowering your software development and operational costs. - Containerization also provides greater scalability. Compared to VMs, many more containers can be created and run on a physical server since they don’t require a full OS to be included in each container. In addition, monolithic applications can be broken down into smaller micro services using containers. 
You can then scale and distribute the individual features. Cons of Containerization - One of the major drawbacks of containerization technology is that it requires a significant amount of work to set up in an organization so that it performs efficiently. - Since the technology is relatively recent, the required application support and dependencies still remain insufficient. - It’s hard to find qualified container developers. - Containers share the host OS kernel. That said, in the event of the kernel becoming vulnerable, all the associated containers would become vulnerable too. - Container technology can be more expensive in terms of application development costs. What Problems Do Containers Solve? As per a forecast by GartnerInc., the global container management revenue is estimated to witness strong growth from $465.8 million in 2020 to a sizable $944 million in 2024.Gartner also predicts that by the year 2022, north of 75% of global businesses will likely be running containerized applications in production. The containerization technology provides tremendous portability across computing platforms and environments. It allows the developers to write the application once and then run it anywhere they like. Being a key component of the private cloud, containers are fast emerging as a game-changer for many businesses. Private cloud has become the favored approach for organizations to deliver the flexibility and control required while also enabling efficient consumption of multiple cloud services. What Are Virtual Machines? A virtual machine (VM) may be described as a virtual environment where each VM is a complete virtual computer with its own guest OS, virtual memory, CPU, storage and network interface. VMs function as software-defined virtual computers running on physical servers. Usually referred to as a guest, a VM is created within a physical computing environment called a “host.” Multiple VMs can share the resources of a single host such as memory, network bandwidth and CPU cycles and run concurrently. However, each VM will have its own OS and operate independently of other VMs that might be located on the same host. How Does Virtualization Work? As the foundation of cloud computing, server virtualization enables more efficient utilization of physical computer hardware. You can utilize the full capacity of a physical machine by running multiple VMs on a single server. Server virtualization is executed by running a virtual instance of a computer system within a layer, called a hypervisor, that is abstracted from the actual hardware. The hypervisor allocates hardware resources, such as CPUs, memory and storage, to each VM. Server virtualization allows running more than one OS on a single computer system at the same time. The global virtualization software market is estimated to see a compound annual growth rate (CAGR) of nearly 30% over the next two years. There are four types of virtualization: - Server Virtualization – With over 90% of businesses in Europe and North America using it, server virtualization is the most common type of virtualization.Server virtualization segregates one physical server into several isolated virtual server instances, as described above. - Network Virtualization – Network virtualization allows for the creation of abstract versions of physical network resources, including firewalls, routers and switches, within separate layers of the virtual network. 
- Storage Virtualization – Storage virtualization abstracts, aggregates and manages multiple physical storage resources to make them look like a single, centralized storage pool. The storage resources can be from different vendors and networks.
- Desktop Virtualization – Creates a virtual version of the workstation, along with its operating system, that can be accessed remotely.
Pros and Cons of Virtualization
Let's discuss some of the major advantages and disadvantages of virtualization.
Pros of Virtualization
- One of the key benefits of virtualization technology is that it enables efficient hardware utilization. You can create multiple virtual instances on the same hardware and reduce hardware costs.
- Increased uptime and availability is another upside of virtualization. With capabilities such as fault tolerance, storage migration, live migration, distributed resource scheduling and high availability, VMs allow IT to quickly recover from unforeseen outages.
- Virtualization helps lower IT operational costs since it requires a smaller number of hardware servers and associated resources to achieve the same level of scalability, availability and performance. This means less time managing and maintaining hardware resources.
- Backup, duplication and recovery are relatively easier and quicker with virtualization. With real-time data backup and mirroring, there is negligible data loss and quick recovery from the last saved state that was mirrored on a separate virtual instance.
Cons of Virtualization
- With the initial setup cost of storage and servers being higher than usual, the high initial investment is one of the major downsides of virtualization.
- In order to implement and manage a virtualized environment, you need to train your IT staff or hire experts that are well-versed in virtualization technologies.
- Testing is critical to ensure your systems work flawlessly in a virtualized environment.
What Problems Are Solved With Virtual Machines?
Organizations today often require many servers in different physical locations, each operating at their highest capacity, to drive efficiency and ROI. As such, it has become a standard practice to use virtualization to increase the utilization of computing resources. The key idea behind virtualization was to boost the efficiency of IT systems. Virtualization helps boost IT scalability, flexibility and agility while reducing operational costs. In addition, availability of resources, increased performance, automated operations and workload mobility are some of the reasons why virtualization has become mainstream in the IT industry.
The 2020 State of Virtualization Technology report predicts that the rate at which enterprises adopt virtualization technology will grow significantly by 2021. While 75% of enterprises are likely to adopt application virtualization, nearly 69% are expected to start using desktop virtualization.
Containers vs. Virtual Machines: What's the Difference?
In this section, we'll discuss the key distinguishing factors between the two technologies. The diagram below shows that VMs each have their own "Guest" OS and sit on top of the hypervisor layer. Each VM has its own binaries and library files. Containers, on the other hand, may share binaries and libraries and don't contain an OS. There's a container engine in place of the hypervisor.
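To make the container engine idea more tangible, here is a minimal sketch using the Docker SDK for Python. It assumes a local Docker engine is running and the `docker` package is installed; the image name is only an example.

```python
import docker

# Connect to the local Docker engine (the "container engine" described above).
client = docker.from_env()

# Start a throwaway container from a small public image, run one command,
# capture its output, and remove the container once it exits.
output = client.containers.run(
    "alpine:3.19",                       # example image; any small image works
    ["echo", "hello from a container"],
    remove=True,
)
print(output.decode().strip())
```

Because the container reuses the host kernel, the whole create-run-remove cycle typically completes in well under a second once the image is cached, which is what the speed row in the comparison table below refers to.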
Here's a comparison table:
| Factors | Server Virtualization | Containerization |
| Security & Isolation | More secure since it provides full isolation from the host OS and the other VMs | Doesn't offer as strong a security boundary as a VM but provides lightweight isolation from the host and other containers on a process level |
| Compatibility | Can run any OS inside the VM | Can run only on the same OS as the host |
| Networking Considerations | Utilizes virtual network adapters | Utilizes an isolated view of a virtual network adapter, consequently providing a lower level of virtualization |
| Virtualization level | Hardware-level virtualization | Operating system virtualization |
| Operating system needs | Each VM runs a complete OS | Each container shares the kernel of the OS |
| Speed | Startup time spans minutes, resulting in relatively slow provisioning | Startup time spans milliseconds, resulting in quicker provisioning |
When Should You Use Containers?
Containers might be the right choice if you're looking to cater to your short-term application needs. Since containers are portable and can be set up and started up quickly, they can help elastically scale your applications to align with demand. Examples include event-driven video streaming, web service delivery, insurance claims or online order fulfillment. You should opt for containerization when your priority is to maximize the number of applications you're running on a minimal number of servers. However, containers have the limitation of not having dedicated storage, processing and operating system resources of their own. Containers are well-suited for packaging micro services and building cloud-native apps.
When Should You Use Virtual Machines?
VMs are the best option for businesses that need to run multiple applications that require the comprehensive functionality and support of a dedicated OS. VMs are best suited for applications that you need to use for extended time periods and run within a virtualized environment that is more versatile and secure. Virtualization is better suited for housing traditional, legacy, monolithic workloads, provisioning infrastructural resources, running one OS inside another and isolating risky development cycles.
Monitoring Your Virtual Environment
When it comes to monitoring and managing your virtual servers, make sure that your endpoint management tool provides complete visibility into your virtual environment so you can quickly identify issues and resolve them. Kaseya VSA discovers the two most popular virtualization infrastructures, VMware and Microsoft Hyper-V, and includes hosts and VMs on its Network Topology Map. Learn more about Kaseya VSA. Request a demo today!
<urn:uuid:f9d92593-36b0-4468-8315-caacad939dee>
CC-MAIN-2024-38
https://www.kaseya.com/blog/containers-vs-virtual-machines-vm/
2024-09-13T00:37:37Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651506.7/warc/CC-MAIN-20240913002450-20240913032450-00898.warc.gz
en
0.921924
2,488
2.890625
3
The concept of 'absolute security' in the context of cyberspace might be unattainable, considering the relative nature of security. Amidst the global turbulence in 2020, malicious actors have found low-hanging fruits in AI, IoT, cloud systems, and 5G, targeting enterprises worldwide. However, international organizations have been agile and adapting innovative mechanisms to thwart online vulnerabilities. Malicious actors today apply a high degree of sophistication in security breaches. Hence, organizations must resort to equally robust measures to address online threats. The following figures of the last year help one gauge the weight of online vulnerabilities.
- Between January and April 2020, cloud-based attacks witnessed a 630% rise.
- In 2020, it took organizations around 207 days on average to detect a security breach.
- While 25% of data breaches occur due to espionage, as much as 71% are motivated by financial gains.
- March 2020 witnessed a 148% rise in ransomware attacks.
Vulnerabilities From Emerging Technologies And Concepts In 2021
The pandemic in 2020 compelled a significant section of employees to work from home. As this trend continues in 2021, global organizations are required to reassess the vulnerabilities among home networks too. In the past, government and corporate networks were high priorities for malicious actors. However, as the digital landscape keeps changing, personal computing assets have evolved as high-value targets. Unlike corporate networks, personal computers do not enjoy the protective cover to prevent sensitive data loss. However, global corporate networks continue to be a highly prioritized target for the attackers.
Below are more details on how technological sophistication is fostering the new age cybersecurity issues:
- IoT And Online Vulnerabilities
The digitized world presently includes more IoT devices than what existed a decade ago. It naturally exposes personal as well as corporate networks to online threats. The danger now encompasses almost all industries, including healthcare, farming, production, banking, etc. For instance, farmers get assistance from AI and satellites to enhance their productive activities. Modern agriculture equipment communicates directly with networks, data centers, and satellites. A large-scale cyberattack on farming can jeopardize the food supply of the entire nation.
- 5G Technology
Evolving technologies in the field of mobile networks have also made these systems more susceptible to online threats. In the past, monitoring point traffic involved fewer hardware modules. However, circumstances have significantly evolved with the inception of 5G technology. Since 5G is decentralized, it is imperative to deploy security and monitoring techniques on many gadgets. The increased bandwidth in 5G connectivity also enhances the scope for malicious actors. With larger bandwidth, these networks involve more IoT devices that remain connected. Consequently, organizations deploying 5G technology need scalable security measures to ensure the computing environment's integrity.
- Deep Fake Techniques
Over the years, video editing techniques have significantly evolved. It has resulted in the inception of a process called 'deep fake.' It is a kind of video manipulation that makes it difficult to detect whether videos have been altered. Besides, the expansion of the new 5G technology makes such vulnerabilities more pronounced.
Malicious actors are likely to impersonate reputed organizations with these videos to obtain the victims' trust and credentials. Given that more than half the US population counts on social media for news, such manipulated videos can extensively impact their beliefs and decisions.
- Drones And AI
Intelligent systems like drones backed by AI and IoT present a multilayered threat to organizations. These gadgets reduce privacy as commercial drones can overcome physical borders. Therefore, malicious actors can use drones to capture information or passwords through spying. Presently, forward-thinking organizations are deploying professionals to thwart this risk from drone technology. Besides, they need to address the conspiracy theories based on smart vehicles. Remote manipulation of such vehicles exposes organizations and individuals to physical threats.
How Should Organizations Prepare For The Next Generation Of Vulnerabilities?
Corporate houses are already familiar with the term 'cyber maturity'. To secure your organization against the new-age online threats, you need to be proactive enough. Here are specific security measures that would prove valuable.
- Identifying Cybersecurity As A Tactical Priority: Amidst the digital transformation, enterprises need to shield themselves against online threats. They should prioritize the hazards along with other elements in the risk profile of the organization. The cybersecurity strategy must involve both the CISO and the CEO. A seamless combination of crisis and risk management can help the organization address phishing and information theft issues. The enterprise should also consult experienced cybersecurity professionals and develop recovery roadmaps to respond to possible crises.
- Maintaining Cyber Hygiene: Any organization must cultivate cyber hygiene in their respective work environments. As long as the staff keeps opening phishing links or using weak passwords, one cannot avoid the threats. The degree of cyber literacy of the employees largely determines safety. Presently, organizations need to educate their staff with the necessary guidelines to boost their immunity against phishing and social engineering mechanisms.
- Enhancing Professional Competence: To secure the IT systems, organizations must hire professionally competent cybersecurity experts. The developers and technical specialists need to undergo intensive sessions of training to mature in their respective fields. With hands-on experience in attack simulations, these experts can streamline their profiles. With better professional competence, they can enhance the cyber maturity of an organization.
As a concerned business owner, one should understand the entire range of parameters constituting cyber maturity. Outsourcing cybersecurity responsibilities to established service providers can also be a logical move, but at the end of the day, you own the ultimate accountability. Any organization requires a cyber-resilient set-up to thwart the new-age vulnerabilities. A careful evaluation of the upcoming threats can prompt you to deploy the respective control measures for mitigating them and protect the confidentiality, integrity, and availability of the data.
<urn:uuid:58d1b5c4-fdfe-4bb1-a18b-6ee02de4a4e5>
CC-MAIN-2024-38
https://guptadeepak.com/evolving-threat-landscape-with-emerging-technologies/
2024-09-15T12:29:12Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651630.14/warc/CC-MAIN-20240915120545-20240915150545-00698.warc.gz
en
0.937025
1,176
2.5625
3
Three new ransomware types have been discovered in only two weeks' time. They are based on Hidden Tear and EDA2 code, which are the first open source ransomware created for educational purposes.
Hidden Tear and EDA2 are dangerous open source ransomware
Open source ransomware is the latest hit for malicious users. Three new ransomware strains have been detected by security researchers that are based on the Hidden Tear and EDA2 designs. These open source ransomware were originally intended for educational purposes. However, criminal programmers have modified their source code for attack purposes.
Campaigns using the spinoffs RANSOM_CRYPTEAR.B and RANSOM_MEMEKAP.A were detected by security researchers. One of the main factors that lead to the popularity of the new strains is the ease of modification. As they are based on the Hidden Tear and EDA2 open source code, the criminals don't need to have a lot of skills to modify the behavior of the tools. The source code of both Hidden Tear 2 and EDA2 was taken down from public servers, but they are still widespread across various hacker networks.
A prime example is the KaoTear (RANSOM_KAOTEAR.A) ransomware, which is based on Hidden Tear and uses the filename kaoTalk.exe with the KakaoTalk icon to disguise itself. KakaoTalk is a popular messaging application in South Korea that has 49.1 million users worldwide. Upon infiltration, it encrypts files and displays a ransom message in Korean.
The two other famous examples that use Hidden Tear and EDA2 code are the Pokemon GO and Fsociety ransomware. All three variants use similar tactics and prove to be a formidable enemy for security experts and victims. As the base code is open source, we expect to see more advanced ransomware attacks in the near future.
<urn:uuid:94f0784f-47e5-4db6-b6a1-008d7b878ec3>
CC-MAIN-2024-38
https://bestsecuritysearch.com/open-source-hidden-tear-eda2-ransomware-variants-latest-threat/
2024-09-18T02:45:33Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651835.68/warc/CC-MAIN-20240918000844-20240918030844-00498.warc.gz
en
0.948252
381
2.65625
3
The clone protection scheme relies on three different parameters for verifying fingerprints on a virtual machine: the virtual MAC address, CPU characteristics, and UUID of the virtual image. If one or more of the characteristics are changed, the clone protection is triggered.
Virtual MAC Address
Each virtual machine has a unique virtual MAC address assigned. Within a network, each virtual machine must possess a unique MAC address. If a user clones a virtual machine and installs it on a second computer within the same network, working on either the original or the cloned virtual machine will be impractical as the two machines will constantly cause network collisions.
CPU Characteristics
In centrally managed virtual infrastructures (also called server based virtualization), hardware clusters can be virtualized. In this environment, the virtual infrastructure does not always utilize a single, fixed set of physical hardware resources. Instead, it utilizes a shared pool of resources. For the most common types of clustered environments, where live migration capabilities are typically required, a requirement usually exists for different hosts in the cluster to have identical CPU characteristics. Solutions such as VMware vCenter Server provide the ability to enable CPU masking to improve compatibility for the high availability and fault tolerance virtualization features. CPU masking allows host machines with different CPU characteristics to be used in the cluster, while providing common (masked) CPU characteristics across all hosts in the cluster. Therefore the CPU characteristics do not change when a virtual machine migrates across the hosts in a cluster. This enables licensed applications to continue working when migrated from one host to another within a cluster. However, this type of environment is restricted to a limited subset of CPU types. In addition, the migration can only be performed when the target computer contains a physical CPU whose capabilities match or exceed the characteristics of the virtual CPU.
UUID of the Virtual Machine
This is used as a means of unique identification of the virtual machine with the majority of virtual machine technologies. The UUID consists of a 16-byte (128-bit) number. Each virtual machine is assigned a different UUID.
When checking the fingerprint for cloning, Sentinel LDK examines all of these characteristics. If one (or more) of these characteristics does not match the characteristics in the fingerprint of the license, Sentinel LDK prevents the protected software from consuming any licenses from the container. Thus, the combination of these parameters in the fingerprint provides protection against cloning. (See the table that follows.)
Comparison results (rows list the characteristics compared; each remaining column is one comparison outcome):
| Virtual MAC Address | Identical | Different | Not relevant | Not relevant |
| CPU Characteristics | Identical | Not relevant | Different | Not relevant |
| UUID | Identical | Not relevant | Not relevant | Different |
| The license container is... | enabled | disabled | disabled | disabled |
How to Clear the "Cloned" Status for a Product License?
In the event that a Product license is disabled because it has been identified as "cloned", contact Blancco Technical Support.
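To make the comparison logic above concrete, here is a small illustrative sketch in Python. It is not Sentinel LDK's actual implementation; the field names and sample values are invented for the example. The point is simply that all three characteristics must match the stored fingerprint for the license container to stay enabled.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Fingerprint:
    mac_address: str     # virtual MAC address of the VM
    cpu_signature: str   # (possibly masked) CPU characteristics
    uuid: str            # UUID of the virtual machine image

def license_container_enabled(stored: Fingerprint, current: Fingerprint) -> bool:
    """The container stays enabled only if every characteristic matches."""
    return (
        stored.mac_address == current.mac_address
        and stored.cpu_signature == current.cpu_signature
        and stored.uuid == current.uuid
    )

# Example: a cloned VM typically receives a new MAC address and a new UUID.
original = Fingerprint("00:50:56:aa:bb:cc", "GenuineIntel-6-55-masked", "4217-01")
clone    = Fingerprint("00:50:56:dd:ee:ff", "GenuineIntel-6-55-masked", "4217-02")

print(license_container_enabled(original, original))  # True  -> enabled
print(license_container_enabled(original, clone))     # False -> disabled ("cloned")
```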
<urn:uuid:7b047c09-6b9b-4534-adba-1b063724b2ab>
CC-MAIN-2024-38
https://support.blancco.com/pages/viewpage.action?pageId=11403285
2024-09-18T02:46:15Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651835.68/warc/CC-MAIN-20240918000844-20240918030844-00498.warc.gz
en
0.863977
621
3.09375
3
Module 1: Overview of Hacking
What is Hacking exactly?
Ten years ago, if you had talked about penetration testing to a 12-year-old, what answer would you have gotten? Hacking is trying to go through a wall whose door is not open, using drilling or hacking or beating your head against it in hopes that you don't get hurt. Back then, even our answer was just to see whether you could destroy the wall or not, and the 12-year-olds didn't know JACK SQUAT, so they didn't say anything.
Nowadays, ask a 12-year-old the same question. Those little demons have matured. They would say that hacking is trying to go through a wall of security in the computer system and see if it breaks, the same way as beating the Great Wall of China with a baseball bat if you are not well versed in pen testing, or a bulldozer if you know what you are doing… Well, I don't think he is gonna say THIS!!
This is a mouthful. Most people are so superstitious that the term "hacking" causes their attitude to go like… "∀∗™™¡∼†„"… DID YOU UNDERSTAND ANY OF THAT??? Me neither. They are sooo panicky in their pants towards the word hacking that they think it literally means terrorist. I mean, "GROW SOME BALLS."
Now, hacking was mostly used for destructive purposes. But it can also fix the system. Take the following scenario, for example: "Anita had a big issue with her system. She asked Sunita to help her out. Sunita is a computer software engineer. She says OK… She logs on to Anita's system remotely from her own system and fixes the software issues, and BAM, Anita's PC is fixed."
Now look closely at the above scenario. Under normal conditions, no one is allowed to even touch the PC remotely, but here what Sunita does is fix it remotely. Now, technically, this accessing-the-PC-remotely thing is illegal and is still considered a taboo. But here life got a little bit easier for Anita as she got her PC fixed.
Most of the notions that drive people to take up pitchforks against ethical hackers, or penetration testers to you nice people, come from social media. They have made us out to be such culprits. That's not to say these sorts of testers are all good people… because they are not… really. A hacker can further exploit your dumb asses; they can make some lives and completely destroy some. They can bring down nations and start world wars, but those are other types of computer people. GAWDD!! It would be so easy if I could actually use the word.
Now let's discuss: how many types of pen testers are there?
Types of Hackers
The media have portrayed black hat hackers as rogues. Nowadays they are more accepted than they were. Their job is to access the secure system, find the vulnerability, and then make a report of those vulnerabilities and find a way to tackle it. And sometimes, catch bad guys if you have a good day and you are lucky.
Suicide Hackers – They are not ethical hackers; they are straight-up hackers and then destroyers. Their life depends upon this act, and their last wish tends to be a weapon of ultimate destruction in the digital world.
In the next blog I am going to discuss "Why Ethical Hacking??" and will share some hack basics with you.
<urn:uuid:2e74a940-1fe4-46ea-bf3c-14ca91d2c30b>
CC-MAIN-2024-38
https://www.cyberpratibha.com/blog/what-is-hacking/
2024-09-18T02:22:44Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651835.68/warc/CC-MAIN-20240918000844-20240918030844-00498.warc.gz
en
0.964697
763
2.921875
3
VoIP (Voice over Internet Protocol) is a technology that allows people to make voice calls using the internet instead of traditional phone lines. With VoIP, we can make phone calls from our computers, smartphones, tablets, and other internet-connected devices. This technology has become increasingly popular in recent years, and for good reason.
One of the main benefits of VoIP is its cost-effectiveness. Traditional phone lines can be expensive, especially for businesses that need to make a lot of long-distance calls. With VoIP, we can make these calls for a fraction of the cost, and sometimes even for free. This can result in significant savings for individuals and businesses alike. Additionally, VoIP often comes with features that are not available with traditional phone lines, such as video calling and call recording.
What is VoIP?
VoIP stands for Voice over Internet Protocol. This technology allows us to make voice calls over the Internet instead of using traditional phone lines. With VoIP, we can make calls from our computers, smartphones, or other internet-connected devices.
How VoIP Works
VoIP works by converting analog voice signals into digital data packets that can be transmitted over the internet. These data packets are then converted back into analog voice signals at the receiving end. Encoding and decoding are the processes of converting analog signals into digital data and vice versa.
To make a VoIP call, we need a VoIP service provider and a VoIP-enabled device such as a computer, smartphone, or IP phone. The VoIP service provider handles the call routing and provides the necessary software and hardware for making and receiving calls.
Benefits of VoIP
VoIP offers several benefits over traditional phone services. Some of the key benefits include:
- Cost savings: VoIP calls are typically cheaper than traditional phone calls, especially for long-distance and international calls.
- Flexibility: With VoIP, we can make and receive calls from anywhere with an internet connection. We can also use multiple devices to make and receive calls.
- Advanced features: VoIP offers several advanced features, such as call forwarding, voicemail, video conferencing, and more.
VoIP uses several protocols for transmitting voice data over the internet. Some of the commonly used protocols include:
- Session Initiation Protocol (SIP): SIP is a signaling protocol used for initiating, maintaining, and terminating VoIP calls.
- Real-time Transport Protocol (RTP): RTP is a protocol used for transmitting audio and video data over the internet.
- H.323: H.323 is a protocol suite used for video conferencing and multimedia communications over IP networks.
In conclusion, VoIP is a technology that allows us to make voice calls over the internet. It offers several benefits over traditional phone services and uses several protocols for transmitting voice data over the internet.
When implementing VoIP, it is vital to consider the necessary infrastructure. This includes a reliable internet connection, sufficient bandwidth, and quality of service (QoS) settings. We recommend using a router that supports QoS to prioritize VoIP traffic and prevent call quality issues. Additionally, it is essential to have a backup power source, such as a battery backup or generator, to ensure that VoIP services remain available during power outages.
Choosing a VoIP Provider
Selecting a VoIP provider is a crucial decision.
We recommend researching various providers to find one that meets your specific needs, such as pricing, features, and customer support. It is also important to consider the provider's reliability and uptime guarantees, as well as their security measures.
VoIP Security Considerations
Security is a top concern when implementing VoIP. We recommend using encryption to protect sensitive information, such as call recordings and voicemail messages. Additionally, it is crucial to use strong passwords and two-factor authentication to prevent unauthorized access to VoIP accounts. Regularly updating software and firmware can also help prevent security vulnerabilities.
Troubleshooting Common VoIP Issues
Despite proper planning and implementation, VoIP issues may still arise. Some common issues include poor call quality, dropped calls, and echo. We recommend troubleshooting these issues by checking network connectivity, adjusting QoS settings, and updating firmware. If issues persist, contacting the VoIP provider's customer support team can help resolve the problem.
Overall, implementing VoIP requires careful planning and consideration of various factors. By ensuring a reliable infrastructure, selecting a reputable provider, implementing strong security measures, and troubleshooting common issues, businesses can successfully implement VoIP and enjoy its many benefits.
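As a small footnote to the protocol list above, here is a rough sketch of the fixed 12-byte RTP header defined in RFC 3550, built with Python's standard struct module. The payload type and sample values are illustrative assumptions, not taken from any particular VoIP product.

```python
import struct

def rtp_header(seq: int, timestamp: int, ssrc: int,
               payload_type: int = 0, marker: bool = False) -> bytes:
    """Pack a minimal RTP header: version 2, no padding, no extension, no CSRCs."""
    byte0 = 2 << 6                                   # version bits only
    byte1 = (int(marker) << 7) | (payload_type & 0x7F)
    return struct.pack("!BBHII", byte0, byte1,
                       seq & 0xFFFF, timestamp & 0xFFFFFFFF, ssrc & 0xFFFFFFFF)

# A 20 ms packet of 8 kHz G.711 u-law audio (payload type 0) advances the
# timestamp by 160 samples per packet.
header = rtp_header(seq=1, timestamp=160, ssrc=0x12345678, payload_type=0)
print(len(header), header.hex())                     # 12 bytes, network byte order
```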
<urn:uuid:96951db9-6a7f-435c-b824-b9b82a742749>
CC-MAIN-2024-38
https://www.cspinc.com/voip-the-ultimate-guide-to-internet-telephony.html
2024-09-10T20:54:27Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651318.34/warc/CC-MAIN-20240910192923-20240910222923-00298.warc.gz
en
0.926591
1,347
3.390625
3
Cybersecurity has undergone significant transformations, forcing organizations to rethink their defense mechanisms. Historically, security models operated on a trust but verify basis, presuming that everything within the organization's network could be considered safe. This traditional perimeter-based approach has been challenged by the dynamic nature of modern cyber threats, leading to the adoption of a more stringent model: Zero Trust.
Now you have probably heard of Zero Trust security models. But you might be hesitant to overhaul your organization's entire cybersecurity infrastructure for a concept that still seems new or unclear. Traditional security models, based on the principle of "trust but verify," may have served well in the past when network boundaries were clear and well-defined. However, with the rise of cloud services, remote work, and increased mobility, these models are facing new challenges that they weren't originally designed to handle.
The Flaws of Traditional Security Models
Historically, cybersecurity relied heavily on perimeter-based security models that assumed everything inside the network could be trusted. This approach has substantial vulnerabilities:
- Internal Threats: Once inside the perimeter, malicious actors could move laterally with little resistance.
- Perimeter Erosion: With the adoption of cloud-based assets and services, defining and securing a perimeter has become increasingly difficult.
Such vulnerabilities expose organizations to significant risks, highlighting the need for a paradigm shift in how security is approached.
What is the Zero Trust Security Model?
Recognizing these insufficiencies, the cybersecurity industry is shifting towards the Zero Trust model, which operates under the principle of "never trust, always verify". Zero Trust doesn't just fortify the perimeter—it eliminates the concept of a perimeter altogether. Instead, it requires verification at every step, aligning security closely with modern business practices and technology use.
You wouldn't let someone into your house if you didn't know who they were, right? The Zero Trust security model is built upon the belief that threats can come from anywhere, so trust must always be earned and never assumed. This approach marks a deliberate move from inherent trust to a model where consistent verification is foundational, treating every attempt to access the system or data as a potential threat until proven otherwise.
Zero Trust Security Models: Core Principles
Principle 1: Verify Explicitly
- Implement Multi-Factor Authentication (MFA) to ensure that users are who they claim to be by requiring multiple forms of verification (something you know, something you have, something you are).
- Use device health checks to determine whether a device is secure enough to access resources. This includes checking for up-to-date security patches, whether the device is jailbroken/rooted, or if it has encryption enabled.
- Employ context-aware access controls that take into account the user's location, device, network, and behavior patterns to make real-time decisions on access requests.
The benefit of explicitly verifying every access request is that it dramatically reduces the risk of unauthorized access. By assessing multiple factors before granting access, you're not only ensuring the identity of the requester but also the security posture of their device and the context of their request.
This layered approach to security protects sensitive resources from breaches effectively, even if one factor (like a password) is compromised. Principle 2: Least Privilege Access - Conduct regular access reviews to ensure users have only the permissions they need to perform their current roles. Remove unnecessary privileges that may have accumulated over time (privilege creep). - Implement Role-Based Access Control (RBAC) or Attribute-Based Access Control (ABAC) to dynamically assign permissions based on the user’s role or attributes. - Utilize just-in-time (JIT) access and privilege elevation for situations that require temporary access to sensitive resources, ensuring that users have elevated permissions only when necessary and for a limited time. Granting users the minimum level of access needed to accomplish their tasks minimizes the attack surface and the potential damage from both external and internal threats. If a user’s account is compromised or an insider turns malicious, the impact is limited because they can’t access information or systems irrelevant to their duties. This principle is foundational to a strong security posture that focuses on damage control as much as prevention. Principle 3: Assume Breach - Segregate network segments to contain breaches within isolated zones, preventing lateral movement across your IT environment. - Implement strong detection capabilities, including anomaly detection, to identify unusual access patterns or behaviors that could indicate a breach. - Establish a comprehensive incident response plan that includes clear roles and responsibilities, communication strategies, and recovery procedures to swiftly address and mitigate breaches. The “Assume Breach” mentality shifts the focus from merely trying to prevent intrusions to also preparing for how to detect, respond to, and recover from them. It acknowledges the reality that organizations are constantly under threat, and some attacks may succeed. By planning for the inevitable, you ensure that when a breach occurs, its impact is minimized, operations can quickly return to normal, and data integrity is preserved as much as possible. This approach strengthens resilience against cyber threats, ensuring continuity and trust in your organization’s ability to protect its assets and stakeholders. Zero Trust Architecture (ZTA) Zero Trust Architecture isn’t just a concept; it’s a comprehensive blueprint designed to fortify modern digital environments against cyber threats. Here’s a breakdown of its essential components, explaining what they are and their role in the architecture: - Identity & Access Management (IAM): At the heart of Zero Trust security models is the ability to manage and verify who is trying to access your network. IAM is the framework that ensures only the right individuals can access the right resources at the right times for the right reasons. It’s about knowing and controlling identities, both human and non-human, within an organization. - Micro-segmentation: Think of your network as a series of compartments rather than a single open space. Micro-segmentation divides the network into smaller, secure zones, making it easier to control access and limit the spread of any potential breaches. Each of these zones requires separate access permissions, ensuring that an attacker cannot move freely throughout your network after breaching a single point. 
- Multi-factor Authentication (MFA): Beyond passwords, MFA requires one or more additional verification factors, adding a critical layer of security. This could mean a text message with a code, a fingerprint, or a face scan—anything that ensures that the person requesting access is who they claim to be. - Encryption: Essential for protecting sensitive information, encryption secures your data both when it’s stored (at rest) and when it’s being transmitted (in transit). Even if data is intercepted or accessed without permission, encryption makes it unreadable and useless to those without the key. - Security Orchestration, Automation, and Response (SOAR): This component handles how security alerts are managed and responded to. SOAR streamlines the processes involved in detecting, investigating, and mitigating potential security threats, making it easier for organizations to manage their security posture with efficiency and speed. Zero Trust Model and The Cloud The integration of Zero Trust security models becomes crucial with the shift towards cloud technologies. In the cloud’s shared responsibility model, both service providers and clients play active roles in security. Service providers manage the cloud infrastructure’s security, while clients are responsible for safeguarding their data within that infrastructure. This model highlights why adopting a Zero Trust approach is essential—since security responsibilities overlap, assuming trust in any component can create vulnerabilities. Zero Trust mitigates these risks by enforcing strict verification, no matter where the data resides or who is requesting access. This ensures a higher level of security that aligns with the complex, distributed nature of cloud services. Zero Trust Security Models: A Strategic Necessity The shift to Zero Trust security models is no mere trend. It is a fundamental change in how businesses approach cybersecurity. Its growing importance in an increasingly interconnected world cannot be overstated. Because of this importance, many businesses are turning to IT specialists to help support their cybersecurity infrastructure and posture. Innovative Integration understands the role of a secure IT environment for growing small and medium-sized businesses who are seeking continual growth. From managed services to data security and recovery, and from data center modernization to application mobility, Innovative Integration has the IT support solutions! Ready to learn more? Contact us today to see for yourself the Innovative difference!
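As a closing illustration of the "verify explicitly" and "least privilege" principles described earlier, here is a simplified, hypothetical policy check in Python. The roles, resources, and rules are invented for the example; a real Zero Trust deployment would sit behind an identity provider and a policy engine rather than a hard-coded dictionary.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    role: str
    mfa_passed: bool
    device_patched: bool
    network: str          # e.g. "corp", "home", "unknown"
    resource: str

# Hypothetical role-to-resource mapping (least privilege: only what the role needs).
ROLE_PERMISSIONS = {
    "finance-analyst": {"billing-db"},
    "support-engineer": {"ticketing", "kb"},
}

def allow(request: AccessRequest) -> bool:
    # Verify explicitly: identity, device health, and context on every request.
    if not request.mfa_passed or not request.device_patched:
        return False
    if request.network == "unknown":
        return False      # unfamiliar context: deny (or require step-up authentication)
    # Least privilege: the role must explicitly grant access to this resource.
    return request.resource in ROLE_PERMISSIONS.get(request.role, set())

print(allow(AccessRequest("ana", "finance-analyst", True, True, "home", "billing-db")))  # True
print(allow(AccessRequest("ana", "finance-analyst", True, True, "home", "ticketing")))   # False
```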
<urn:uuid:8cb02c70-175a-4130-aa28-828b85278710>
CC-MAIN-2024-38
https://innovativeii.com/understanding-zero-trust-security-models/
2024-09-12T02:59:36Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651420.25/warc/CC-MAIN-20240912011254-20240912041254-00198.warc.gz
en
0.927906
1,764
2.65625
3
Students are spending more time online, and it’s a trend that will undoubtedly continue even in a post-pandemic world. Likewise, they are also doing more learning online, both in and outside of the classroom. For example, completing homework often requires online research and collaboration with other students in a cloud app like Google Docs or Word. For this reason, K-12 cyber safety programs are critical for keeping students safe online. For example, administrators should be planning for ways to monitor school-provided technology for student suicide digital signals as part of their broader student suicide prevention program. What is a digital signal? It’s anything that someone posts on a digital platform that can be identified and categorized. For example, if a student sends an email to another student describing the fact that they are depressed and want to kill themselves, an automated system can identify the intent of the email and categorize it as a student suicide digital signal. Digital signals can exist in text, video, and/or image files. And, it’s important for district admins to understand that these signals aren’t only being shared via instant messaging or social media. Unfortunately, students aren’t only using school-provided technology in the ways they’re supposed to. School admins we work with are finding a variety of inappropriate and harmful digital signals in district Google and Microsoft domains. In these cases, the digital signals reveal toxic online behavior where they’re planning to or are inflicting harm on themselves and/or others. Student digital signals can point to discussions of self-harm, thoughts of suicide, cyberbullying, discrimination, substance abuse, threats of violence, sexual exploitation and/or inappropriateness, and more. School IT teams are finding student safety risk signals in school-provided emails, shared drives, files, and chat apps. Research shows that adolescents are likely to talk about suicidal thoughts, but they also use digital media such as social networking sites, blog posts, instant messages, text messages, and emails. Further, they found an increase in the number of suicide digital signals over the four years of their study. They concluded that using digital outlets to convey distress may become even more common. To be clear, technology isn’t a silver-bullet for preventing student suicides. And K-12 IT admins are not mental health counselors. The first people to know about a student’s suicidal issues are usually going to be their peers, family members, teachers, and/or other trusted individuals in their lives. It is important for schools to train faculty, parents, and students on how to identify potential suicide risk signals and respond appropriately as part of their suicide prevention program. However, integrating technology into your district’s student suicide prevention program makes a huge difference for students in crisis and for your community as a whole. Your IT team is in a unique position to monitor for potential student suicide digital signals. You can then establish a process for reporting risk signals to building professionals who are equipped for counseling students in crisis. Here are five digital signals that schools we work with find that could prevent a student suicide. Cyberbullying is a big problem around the globe. 60% of parents who have children in the 14 to 18-year-old age range report that their children are being bullied, and 82.2% of that cyberbullying is happening at school. 
School leaders are well aware of the need for cyber safety in schools, but don't always think beyond their content filter and blocking students from harmful websites. New technology that can handle cyberbullying monitoring is available, and schools need to monitor internal locations, like Google Docs, Slides, chat apps, emails, etc., for harmful content and behavior.
When school leaders talk about cyberbullying, they relate stories of how students use school systems to harass one another. Stories we've heard from their own cyberbullying detection experiences include:
- Students using Google Slides to message each other. They used white lettering to make the files look like blank pages.
- Most risk alerts come from Gmail conversations between students—meaning they're using school-provided email to have personal discussions.
- Students are deleting text, renaming files, and moving files around shared drives to create different versions of a file and make detection difficult without an advanced alerting system.
It's important to note that bullying doesn't directly correlate to suicide. But, research shows that children and young adults under the age of 25 who are being cyberbullied are twice as likely to harm themselves and exhibit suicidal behavior. Interestingly, the people who cyberbully others are also at higher risk. Therefore, detecting and addressing bullying behavior early on is an effective way to reduce potential suicidal outcomes in the future. Not to mention the many other benefits it will have for all involved students' wellbeing.
Student self-harm and suicide are two different things, but they are related. Self-harm refers to students who hurt themselves in a number of ways, including cutting and/or burning themselves, misusing alcohol and drugs, or hitting themselves against walls or with weapons. Digital self-harm is another form of self-harm that is relatively new and not well understood by child psychologists. The intent of self-harm is to release difficult feelings, not to end a life. However, research shows that about 65% of students who self-injure will also become suicidal at some point. It's important to recognize that when a student self-harms, it makes it easier for them to think about suicide. They have "practiced" harming themselves, which reduces the inhibition they would typically feel about taking their own life.
According to the CDC, anxiety and depression in children is a big problem. In children aged 3-17 years old, 4.4 million have diagnosed anxiety and 1.9 million have diagnosed depression. The American Academy of Child and Adolescent Psychiatry (AACAP) states that most children and adolescents who attempt suicide have a mental health issue, usually depression. Therefore, spotting depression and anxiety signals is critically important to preventing suicide. Students with anxiety or depression may send out the following types of signals:
- A preoccupation with death and dying
- Fear of being away from their parents
- Worrying about the future and bad things happening
- Panic reactions such as heart pounding, trouble breathing, feeling dizzy
- Feeling extremely sad or hopeless
Many parents think their children are guilty of excessive online browsing, but there comes a point where it can be a real problem. There is an illness called Internet Addiction Disorder, and children are as prone to it as adults. Many students do a lot of online browsing without becoming addicts, but excessive online browsing can signal the beginning of a real problem.
For a student who has anxiety or depression, they may spend an unreasonable amount of time browsing to find bad things happening in the world. Or, they may focus on finding violent or destructive content online. That type of signal could certainly mean the student is moving toward more destructive behavior. Social and Emotional Learning (SEL) is a hot topic this year, as many students are returning to classrooms after an extended period of social distancing. There are indications that SEL can positively impact school violence, bullying, depression, anxiety, and other student safety concerns. We recently hosted a podcast discussing Social and Emotional Learning (SEL) in K-12 schools with Eileen Belastock, the Director of Technology and Information at Nauset Public Schools. During our discussion, Eileen shared how schools can incorporate something as simple as using a Google Form to do a daily check-in on how students are feeling. If your school does the same, or are looking to incorporate it into your SEL program, the form can be set up to collect responses in a Google Sheet. That way, admins can monitor responses to spot cries for help and potential suicide risk signals. This could be done manually and/or by using cyber safety artificial intelligence technology to flag potential risks. As mentioned previously, IT admins are not trained counselors. You can’t be expected to directly help a student in crisis. However, IT teams do effectively partner with student resources in the schools to help provide a window into what is going on in digital platforms. As a member of the IT team, here are a few things you should know about student self-harm and suicide signals. Student self-harm and suicide are different but related. As mentioned earlier, while these are two different problems, self-harming behavior is linked to an increased risk of future suicidal ideation, and sometimes action. That is why self-harm detection in school-provided technology is critical. Monitoring must cover images and text. While students will write about harming themselves or taking their own lives, they may also post images that illustrate one activity or the other. The images are just as much a signal as the text. IT should coordinate with school resources. You’re in a unique position to spot problems in a space that others are mostly blind to. It’s critical that you partner with those people who are trained to counsel students to develop a process defining how you’ll work together. You need to know who to alert when a problem is spotted, and that person needs to know how to manage the issue before you have an irreversible situation on your hands. Student suicide digital signals are typically a call for help. Schools need to develop a way to detect those cries for help online so they can intervene in situations, and help the student improve their mental health. With the right people, training, tools, and processes, you can help students live long enough to find their way in the world.
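As one illustration of the kind of automated flagging mentioned in the SEL check-in example above, here is a deliberately simple, hypothetical sketch. The phrase list and form fields are made up; a real system would rely on a vetted risk lexicon or a trained model, and every flagged item would be routed to trained counseling staff rather than handled by IT.

```python
# Hypothetical phrase list and check-in responses, for illustration only.
RISK_PHRASES = ["hurt myself", "kill myself", "want to die", "no reason to live"]

def flag_responses(responses: list[dict]) -> list[dict]:
    """Return check-in responses that contain a potential risk phrase."""
    flagged = []
    for row in responses:
        text = row.get("how_are_you_feeling", "").lower()
        if any(phrase in text for phrase in RISK_PHRASES):
            flagged.append(row)
    return flagged

checkins = [
    {"student": "A", "how_are_you_feeling": "Tired but okay"},
    {"student": "B", "how_are_you_feeling": "I want to hurt myself"},
]
for row in flag_responses(checkins):
    print("Escalate to counseling staff:", row["student"])
```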
<urn:uuid:d949ff6a-13b1-4f5d-9572-b6e4be7703f0>
CC-MAIN-2024-38
https://managedmethods.com/blog/student-suicide-digital-signals/
2024-09-12T03:45:17Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651420.25/warc/CC-MAIN-20240912011254-20240912041254-00198.warc.gz
en
0.959102
2,001
3.359375
3
Password Management Doesn't Have to Be Painful Sensitive information is leaked enough as it is, and a secure password is at least one line of defense. October 15, 2014 What is it about passwords? They mean well yet they are the bane of the digital realm. We can’t live without them, and it’s certainly a mistake to willingly create an insecure password. Sensitive information is leaked enough as it is, and a secure password is at least one line of defense. There are a lot of tips and tricks to managing a bevy of secure passwords for the dozens of sites we need to login to. Some are best avoided, some are downright stupid, and some will save you a lot of headaches. Let’s look at the pros and cons of a few different options. Using the same password Do I need to take time to explain this one? By using the same password for every site you visit, you’re making it pretty darn easy for people to access the sensitive information you keep on various sites. If someone gets one password, they have them all. It’s so easy to do and yes, we’ve all done it, but it’s not smart. You should not sacrifice security for convenience. Using a pattern I have a friend whose password is just a pattern on his keyboard. His pattern creates a random, long, and secure series of numbers, letters, and special characters that he doesn’t have to remember as long as he knows the shape he’s drawing on the keyboard. It’s a cool idea, but there are two problems. One is that this pattern won’t work on a tablet or smartphone since the characters don’t line up the same way. Two it’s really only good for creating a secure password for one thing unless you can somehow remember a new pattern for each login, and that’s just as tough as remembering the characters themselves. Writing passwords down on paper There are more secure and less secure ways to write down your passwords. For instance, a sticky note on your monitor is not secure at all, but keeping a notebook with your passwords locked in a safe is not the worst thing in the world. The trouble is you need a lot of passwords each day so your notebook probably won’t be locked in a safe. Instead, it will probably be within reach because you’ll probably use it a lot. Whatever the case, writing passwords down is not a solution to the password problem since it’s just too easy for someone to find. Unless… Writing passwords down in a spreadsheet Writing passwords down isn’t a great idea unless you write them down in an Excel spreadsheet or other document that you encrypt. This means that one password protects all of your other passwords, but this document isn’t in a third party cloud like it would be with some password managers. This method allows you to share the master password with other administrators who need access to the list of passwords. In fact, a lot of IT pros on the Spiceworks forum use this simple method quite effectively. The trouble with this method is that forgetting the master password means not having access to anything in the document. You’ll have to reset every password if that happens. Using a password formula With this method, you basically create a formula with a few variables that change for each site you need to login to. This way you can have a formula with lots of special characters that’s complex, but one that also changes for each site you go to so you never have the same password twice. Unless someone discovers both your formula and how you select the variables for each site, you’ll have a secure, unique password across the board. 
Here’s an example of what one might be: You can see that there are three variables bolded: X, Y, and Z. There are a number of ways to determine what these variables will be, but let’s just say you take the first three letters of the website you’re on and use those as your variables. If you were on Amazon.com, you’d take the first three letters, AMA, and plug that in for your variables, giving you: The password above received a “Best” rating from Microsoft’s password checker. This method is my favorite because you don’t put all your passwords in one place, none of them are written down, and each one is secure and unique to each site. But here’s the bad news: it might not work for every site. Since different websites have different requirements for passwords, your formula might not work in some cases. A space, for example, is a special character in some places but isn’t something you can use at all for others. Just because you have a super secure, moldable password doesn’t mean it works across the board. You might end up with a few passwords that don’t match your formula, which you might have to reset every time you need to login—a massive pain. Worse yet you could end up writing them down, which we already know isn’t smart. This method also might not work at all if you’ve got multiple people using the same passwords, as an IT department might have. Using a password manager Password managers like LastPass and KeePass make it ultra-simple to manage the dozens of passwords you use, and even let you share them if multiple users need access to the same accounts. As you may know, password managers let you save lots of password for lots of sites, all of which live inside a “vault” behind one super-secure master password. Here’s a cautionary tale, however. When I used LastPass, I couldn’t bear the idea of forgetting my master password, so I wrote it in a notebook I kept somewhere safe. We know now that this is dumb, but at the time I thought it was too long and complex to commit to memory without a lot of effort. One day I needed to log back into LastPass after I was somehow logged out of the browser extension (I used it with Chrome), but I couldn’t remember the giant password and then I couldn’t find the notebook I wrote it in. After looking everywhere for a few days, I assumed the notebook had been stolen or that I dropped it somewhere—it was nowhere to be found. One security feature of Last Pass is that you can’t recover your master password if you lose it. The only way to get into your account is to have the password. Forget it and you’ve got to delete the account completely. When you’re too dense to remember the only password you need (like me), you have to delete your account and change every password inside one by one since you probably can’t remember those either. Plus, if you did write your password down (because it’s long and complicated) and lost it, whoever might find that has access to all of your accounts, and that thought alone is terrifying. My sob story aside, password managers are the best method if you need multiple people to have access to many of the same passwords or if you’re certain you can remember your master password. It is, after all, the last one you need. Dealing with passwords, for now Passwords won’t be going away soon, at least not until we have thumb scanners or other biometric equipment built into systems. For now, it’s best to just keep secure passwords and find a way to commit them to memory so they never exist anywhere but in your head. 
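To illustrate the formula idea described earlier, here is a hypothetical sketch in Python. This is not the author's original formula (which isn't shown here); the fixed characters are invented, and the variables are simply letters taken from the site name.

```python
def site_password(site: str) -> str:
    """A made-up formula: fixed characters wrapped around three site-derived letters."""
    x, y, z = site[0].upper(), site[1].lower(), site[2].lower()
    return f"Tr4!{x}_9{y}#mQ{z}&7"

print(site_password("amazon"))  # Tr4!A_9m#mQa&7
print(site_password("github"))  # Tr4!G_9i#mQt&7
```

The same caveat noted earlier applies: some sites reject particular characters or lengths, so a formula like this will occasionally need site-specific exceptions.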
If you need multiple people to have access to the same systems, a password manager is probably the way to go. For IT admins, I've heard excellent things about KeePass, which is an open-source and completely free way to store and manage passwords.
Casey Morgan is the marketing content specialist at StorageCraft. U of U graduate and lover of words, his experience lies in construction and writing, but his approach to both is the same: start with a firm foundation, build a quality structure, and then throw in some style. If he's not arguing about comma usage or reading, you'll likely find him and his Labrador hiking, biking, or playing outdoors -- he's even known to strum a few chords by the campfire. [email protected]
<urn:uuid:15e59f68-0286-47eb-b3ad-3396b828ddea>
CC-MAIN-2024-38
https://www.itprotoday.com/archived-assets/password-management-doesn-t-have-to-be-painful
2024-09-16T22:52:24Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651714.51/warc/CC-MAIN-20240916212424-20240917002424-00698.warc.gz
en
0.955213
1,773
2.578125
3
AI technologies have the potential to revolutionize data centers, driving automation, efficiency, predictive maintenance, resource optimization, intelligent data management, and enhanced security, while facilitating the adoption of edge computing. By harnessing the power of AI, data centers can operate more intelligently, efficiently, and securely in the era of big data and complex computing requirements. Here are some of the ways AI is expected to revolutionize data centers:

Automation and Efficiency

AI technologies can automate numerous tasks within data centers, streamlining operations and improving efficiency. AI-powered systems can optimize energy usage, cooling, and overall resource allocation, leading to reduced costs and increased performance. Intelligent automation can also enhance server provisioning, maintenance, and troubleshooting processes. In short, this improved efficiency translates into faster response times, reduced downtime, and increased availability of services for customers.

AI also enables data centers to adopt predictive maintenance practices. By analyzing vast amounts of data from sensors and equipment, AI algorithms can detect patterns and anomalies, allowing potential equipment failures or performance degradation to be predicted before they happen. This proactive approach optimizes maintenance schedules and improves overall reliability, so customers experience minimal downtime and continuous availability of services.

AI can likewise optimize resource allocation within data centers. By continuously monitoring workloads, traffic patterns, and user behavior, AI algorithms can intelligently allocate computing resources, storage capacity, and network bandwidth in real time. This dynamic resource allocation ensures efficient utilization of available resources, enabling data centers to handle fluctuating demands and optimize performance.

Predictive maintenance also helps data centers reduce the costs associated with reactive repairs and unplanned downtime. By identifying potential issues in advance, data centers can plan and allocate resources more efficiently, optimizing maintenance efforts and reducing the need for costly emergency repairs. These cost savings can be passed on to customers, contributing to more competitive pricing and better value for their investment.

Intelligent Data Management

Data centers deal with vast amounts of data, and AI can help manage and analyze this data more effectively. AI algorithms can categorize and tag data, making it easier to search, retrieve, and analyze. Intelligent data management systems can compress, deduplicate, and archive data efficiently, optimizing storage space and reducing costs for customers. Additionally, AI-powered analytics can uncover valuable insights and patterns within the data, enabling data center operators to make data-driven decisions and optimize operations. Customers can leverage these insights to make informed business decisions, identify opportunities, and gain a competitive edge in their respective industries.

AI also has the potential to strengthen data center security measures. AI algorithms can continuously monitor network traffic, identify potential security threats, and respond in real time. Traditional security systems often generate numerous false positives, overwhelming security teams and leading to alert fatigue.
AI-based security solutions have the potential to reduce false positives by applying advanced analytics and machine learning models that learn from historical data to detect the anomalies and patterns associated with cyberattacks, refining threat detection accuracy. By minimizing false positives, customers can focus their resources on investigating genuine threats and responding to them promptly.

Edge Computing and AI

The rise of edge computing, where processing and data storage occur closer to the source of data generation, is being facilitated by AI. AI algorithms deployed at the edge can process data locally, reducing latency and bandwidth requirements. This allows for real-time decision making, enhanced responsiveness, and improved performance in a variety of customer applications, and because sensitive data can be processed where it is generated, customers can have greater confidence that it remains secure and protected.

In addition, AI edge computing brings processing power and intelligence closer to the data center's edge, minimizing the latency associated with transmitting data to remote cloud servers. This reduced latency enables faster response times and near real-time insights and actions. Customers experience quicker interactions with AI-powered applications and services, leading to improved user experiences and increased productivity.

Scalability and Flexibility

AI can enable data centers to scale and adapt more efficiently. AI algorithms can analyze historical data and predict future workload demands, allowing data centers to allocate resources dynamically based on changing needs. This flexibility ensures optimal resource utilization while maintaining performance and responsiveness, and it means customers can deploy AI-powered solutions seamlessly and adjust them to meet their evolving business requirements.

In the end, customers benefit from AI technologies in data centers through improved efficiency, cost reduction, enhanced performance, reliable service delivery, faster issue resolution, and the ability to scale and adapt to changing needs. AI-powered automation and optimization not only enhance the operations of data centers but also translate into tangible benefits for customers, ensuring better service quality and a more satisfying experience.
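To make the anomaly-detection idea that underpins both predictive maintenance and smarter security alerting a little more concrete, here is a minimal sketch using scikit-learn's IsolationForest on simulated server telemetry. The sensor names, values, and contamination setting are invented for illustration and are not taken from any particular data center platform.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated healthy telemetry: inlet temperature (C), fan speed (RPM), power draw (W).
baseline = np.column_stack([
    rng.normal(24, 1.5, 5000),
    rng.normal(9000, 400, 5000),
    rng.normal(350, 25, 5000),
])

# Learn what "normal" looks like; contamination is the expected share of outliers.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(baseline)

# Two new readings: one typical, one consistent with a failing fan (hot, slow, power-hungry).
new_readings = np.array([
    [24.5, 9100.0, 355.0],
    [31.0, 4200.0, 410.0],
])
scores = model.decision_function(new_readings)  # lower score = more anomalous
flags = model.predict(new_readings)             # -1 = anomaly, 1 = normal

for reading, score, flag in zip(new_readings, scores, flags):
    status = "ANOMALY - schedule an inspection" if flag == -1 else "normal"
    print(f"reading={reading} score={score:.3f} -> {status}")

The same pattern applies whether the rows are temperature and fan-speed readings or per-flow network statistics: train on a baseline of normal behavior, score new observations, and surface only the ones that fall well outside that baseline, which is precisely how false positives get trimmed.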
<urn:uuid:86fe4c28-3f31-471c-9b9e-8fa3b48b9fd9>
CC-MAIN-2024-38
https://www.dpfacilities.com/blog/the-perfect-marriage-of-ai-and-data-centers
2024-09-18T03:45:38Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651836.76/warc/CC-MAIN-20240918032902-20240918062902-00598.warc.gz
en
0.910347
936
2.75
3
MPLS - VPN label

An MPLS L3 VPN packet carries more than one label, and one of these is the VPN label. The VPN label, sometimes referred to as the "inner label," plays an essential role in this service.

What is a VPN Label?

When a packet is forwarded within an MPLS network, it's labeled with one or more labels. In the context of MPLS L3 VPNs, there are usually two labels:

- Outer Label: This is the transport label used for forwarding the packet across the service provider's core network. It ensures that the packet reaches the correct egress router.
- Inner Label or VPN Label: This is the label that's specific to a particular VPN. It's used by the egress router to determine the correct VPN to which the packet belongs and the next hop within that VPN.

Use of VPN Label:

- VPN Segregation: The VPN label helps differentiate traffic from different VPNs. Even if two customers use the same IPv4 or IPv6 address range, their traffic remains isolated because the VPN labels are different.
- Routing: The VPN label is used to look up the next hop for a packet when it reaches the egress PE (Provider Edge) router. This ensures that the packet is correctly forwarded to its destination within the VPN.

Operation of VPN Label:

- Assignment: When a PE router learns a route from a VPN, it assigns a unique VPN label to that route. This label assignment is then communicated to other PE routers using MP-BGP (Multiprotocol Border Gateway Protocol).
- Encapsulation: When a packet from a VPN customer enters the MPLS network at a PE router, the router pushes two labels onto the packet. The inner VPN label identifies the destination VPN route, and the outer label identifies the egress PE router.
- Transit: As the packet travels across the MPLS core (P routers), only the outer label is considered. The P routers simply swap or pop the outer label based on their MPLS forwarding tables, but they remain unaware of the VPN label.
- Decapsulation at Egress PE: Once the packet reaches the egress PE router, the outer label has served its purpose and is removed. The egress PE router now looks at the VPN label to determine the appropriate VPN next hop and any additional forwarding treatment. After this decision, the VPN label is removed, and the packet is forwarded toward its final destination within the VPN.

The VPN label in MPLS L3 VPN ensures traffic segregation and correct forwarding for individual VPNs. It allows multiple customers with potentially overlapping IP addresses to coexist within a single MPLS network without any IP address conflicts. The MPLS core remains agnostic to the customers' individual routes, ensuring scalability and simplicity.
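As a rough illustration of the mechanics described above, here is a small, self-contained Python sketch that mimics the push, swap, and pop behavior of the two-label stack. The label numbers, VRF names, and table contents are invented for the example; in a real network the transport labels come from a label distribution protocol such as LDP or RSVP-TE, and the VPN labels are advertised via MP-BGP rather than hard-coded in dictionaries.

# Toy model of MPLS L3 VPN forwarding. The ingress PE pushes two labels,
# P routers swap only the outer (transport) label, and the egress PE pops
# the outer label, then uses the inner (VPN) label to select the VRF.

# Egress PE's VPN label table (populated from MP-BGP advertisements in reality).
VPN_LABEL_TABLE = {
    100: ("CUSTOMER_A", "203.0.113.10"),   # VPN label -> (VRF, CE next hop)
    200: ("CUSTOMER_B", "198.51.100.20"),  # a different customer, its own next hop
}

# Transport label tables for two P routers along the path (in-label -> out-label).
P_ROUTER_TABLES = [
    {3001: 3002},
    {3002: 3003},
]

def ingress_pe(ip_packet, vpn_label, transport_label):
    """Push the VPN label first, then the transport label on top of it."""
    return [transport_label, vpn_label, ip_packet]

def p_router(label_stack, table):
    """Swap the outer label only; the VPN label underneath is never examined."""
    outer, *rest = label_stack
    return [table[outer], *rest]

def egress_pe(label_stack):
    """Pop the outer label, then use the VPN label to pick the VRF and next hop."""
    _outer, vpn_label, ip_packet = label_stack
    vrf, next_hop = VPN_LABEL_TABLE[vpn_label]
    return f"deliver {ip_packet!r} in VRF {vrf} toward {next_hop}"

packet = "IPv4 10.1.1.5 -> 10.2.2.9"
stack = ingress_pe(packet, vpn_label=100, transport_label=3001)
for table in P_ROUTER_TABLES:
    stack = p_router(stack, table)
print(egress_pe(stack))

Even if CUSTOMER_A and CUSTOMER_B announced overlapping prefixes, only the VPN label (100 versus 200) decides which VRF entry the egress PE consults, which is exactly the segregation property described above; the P routers in the middle never look past the outer label.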
<urn:uuid:9519b031-9e03-412c-90a8-141554efb7cc>
CC-MAIN-2024-38
https://notes.networklessons.com/mpls-vpn-label
2024-09-20T18:31:21Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701419169.94/warc/CC-MAIN-20240920154713-20240920184713-00398.warc.gz
en
0.906294
567
3.46875
3