Researchers Moritz Lipp and Daniel Gruss of the Graz University of Technology and Michael Schwarz of the CISPA Helmholtz Center for Information Security have devised a new side-channel attack that affects AMD CPUs. The group was among the teams that first discovered the Meltdown and Spectre vulnerabilities, whose successful exploitation can allow threat actors to steal sensitive information from the memory of a vulnerable system.

The new attacks leverage timing and power measurements of prefetch instructions; the researchers point out that these variations can be observed from unprivileged userspace. “We discover timing and power variations of the prefetch instruction that can be observed from unprivileged user space. In contrast to previous work on prefetch attacks on Intel, we show that the prefetch instruction on AMD leaks even more information.” reads the announcement published by the experts.

The researchers demonstrated their attack technique with multiple case studies in real-world scenarios. They presented the first microarchitectural break of the (fine-grained) exploit-mitigation technique KASLR on AMD CPUs, monitored kernel activity (e.g., whether audio is played over Bluetooth), and established a covert channel. The team also demonstrated how to exfiltrate data from kernel memory using simple Spectre gadgets in the Linux kernel.

The flaws are collectively tracked as CVE-2021-26318; according to AMD, the medium-severity issue impacts all of its chips. However, the chipmaker doesn't recommend any mitigations because the attack scenarios presented by the researchers do not directly leak data across address space boundaries. The researchers reported their initial findings to AMD on June 16th, 2020, and the remaining parts on November 24th, 2020. AMD acknowledged the findings and provided feedback on February 16th, 2021.
The research paper also includes countermeasures and mitigation strategies for the presented attacks.
Email has undoubtedly taken over from physical mail when it comes to keeping in contact with friends and family. But certain things, like postcards or parcels, can't be sent over the internet. Millions of Americans rely on the United States Postal Service (USPS) to deliver goods. The USPS has been a reliable form of delivery for decades. No matter where you're sending a letter throughout the U.S., it gets there quickly. Well, it used to. It should only take a few days for a package to reach its destination, but that is going to change soon, and not for the better. Here is why first-class mail is now going to move more slowly to its destination.

Here's the backstory

Depending on where you live, you will get your mail at varying frequencies. According to reports, around 20% of first-class mail arrived late to recipients last year. Things will only worsen. Starting October 1, the USPS will be implementing new service standards, and these changes will hurt deliveries. USPS said around 30% of first-class mail volume will be slowed down. So, where you expect a package to take about two days to be delivered, it could arrive in about a week. But why is this happening? Somewhat ironically, it is part of the USPS's Delivering for America plan. The 10-year roadmap is aimed at reducing the agency's massive debt burden and streamlining some processes. This includes reducing operating hours of post offices, raising prices for customers and hiking postage fees over the holidays. But what will impact delivery times of parcels to far-away places is the reduction in airplanes used by USPS, which will be offset by using more road-based transport like trucks.

Here's what you can expect

First-class mail is classified as standard-size, single-piece letters and envelopes. According to USPS, around 39% of these will be delivered in three to five days. The remaining 61% won't be affected by the new standards. Smaller and lightweight parcels that fall into the first-class package service will see 32% of goods delivered in four to five days; the rest will be delivered in a maximum of three days. If you have a magazine subscription, there is a 9% chance that your monthly reading will arrive later than usual. How late? You might have to wait up to five days. At other times, magazines and newspapers will be delivered in about two days. You might not have noticed, but the price of stamps and postage has already gone up by a few cents. In August, the price of a first-class stamp went up by 3 cents to $0.58.
After adjusting for sociodemographic factors, chronic health conditions and smoking history, the study published in the Journal of Gerontology: Medical Sciences found that people with low muscle strength are 50 percent more likely to die earlier. “Maintaining muscle strength throughout life—and especially in later life—is extremely important for longevity and aging independently,” said lead researcher Kate Duchowny, who recently completed her doctorate in epidemiology at the U-M School of Public Health. Duchowny said a growing body of research has indicated that muscle strength may be an even more important predictor of overall health and longevity than muscle mass. Additionally, hand grip strength specifically has been found to be inversely related to mobility limitations and disability. However, despite being a relatively simple and cost-effective test, grip strength measurement is not currently part of most routine physicals, Duchowny said. “This study further highlights the importance of integrating grip strength measurements into routine care—not just for older adults but even in midlife,” said Duchowny, who is now a postdoctoral fellow at the University of California, San Francisco. “Having hand grip strength be an integral part of routine care would allow for earlier interventions, which could lead to increased longevity and independence for individuals.” Duchowny and colleagues analyzed data from a nationally representative sample of 8,326 men and women ages 65 and older who are part of U-M’s Health and Retirement Study. Grip strength can be measured using a device called a dynamometer, which a patient squeezes to measure their strength in kilograms. Researchers used “cut-points,” or thresholds, to define levels of strength. For example, muscle weakness was identified as having a hand grip strength less than 39 kg for men and 22 kg for women.
Duchowny said those thresholds were derived based on the nationally representative sample, something she says is unique to this study. Based on their data, 46 percent of the sample population was considered weak at baseline. By comparison, only about 10 to 13 percent were considered weak using other cut-points derived from less representative samples. “We believe our cut-points more accurately reflect the changing population trends of older Americans and that muscle weakness is a serious public health concern,” Duchowny said. “Many aging studies—not just those on muscle strength—are conducted on largely white populations. However, as the U.S. population becomes increasingly diverse, it is critical to use nationally representative data for these types of studies.” More information: The Health and Retirement Study: hrsonline.isr.umich.edu/ Provided by: University of Michigan
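For illustration, the cut-points above can be expressed as a tiny classification sketch; the 39 kg and 22 kg thresholds come from the article, while the function and names are hypothetical, not the study authors' code.

```python
# Illustrative sketch of the study's weakness cut-points; the thresholds
# (39 kg for men, 22 kg for women) are from the article, everything else
# is a hypothetical example, not the researchers' actual code.

WEAKNESS_CUTPOINTS_KG = {"male": 39.0, "female": 22.0}

def is_weak(grip_kg, sex):
    """Classify muscle weakness from a dynamometer reading in kilograms."""
    return grip_kg < WEAKNESS_CUTPOINTS_KG[sex]

print(is_weak(35.0, "male"))    # below the 39 kg cut-point -> True
print(is_weak(25.0, "female"))  # above the 22 kg cut-point -> False
```

Note the comparison is strict: a reading exactly at the cut-point is not classified as weak in this sketch.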
Condition monitoring and Predictive Maintenance (PdM) form the foundation of what we now know as Industrial IoT (IIoT) or Industry 4.0. It's common to assume that both terms mean the same thing, but in reality they are very different. Condition monitoring involves monitoring a specific parameter (vibration, temperature, pressure, etc.) to identify any significant changes from the norm which could indicate potential equipment failure. Predictive maintenance, on the other hand, is the process of analyzing data to identify patterns and predict any issues before they occur. It's important to note that an integral part of condition monitoring is providing the data that can then be used for PdM. Maintenance is a major concern when developing and manufacturing a product. Consider all the resources involved to ensure that these assets are running smoothly and at optimal efficiency. For machine operators and factory personnel, asset repairs consume unnecessary resources, increase overall operational costs and present a serious impediment to efficient operations. Consequently, this means higher labor costs, reduced customer satisfaction and a competitive disadvantage. How is Bluetooth technology involved? So, how does Bluetooth play a role in condition monitoring and PdM? Bluetooth technology has emerged as a wireless connectivity solution because it's low cost, provides easy remote management of assets and reduces the wiring costs associated with cabling. Furthermore, with a wireless connectivity solution, equipment becomes smart equipment, which allows us to continuously monitor the health of the equipment and plan ahead for any maintenance issues. To provide a little background on predictive maintenance, it's important to discuss how sensors play a role. The sensor is the first and most important component of the entire system. There are different types of parameters that a sensor can measure.
The most common ones are vibration, temperature, pressure, and current; Figure 1 below shows a sensor that measures such parameters. An effective monitoring system requires both hardware and software components: condition monitoring sensors and gateways (the hardware) that handle data processing and transmission, and a secure cloud server (the software) that handles data storage and analytics. The sensors are placed on the machines, as seen in Figure 1, and a wireless gateway (seen below in Figure 2) collects the data from the sensors and sends it to a management system for further analysis. If the sensors detect any machine anomalies, the user receives an alert and can perform maintenance if needed. The depth of data collected provides manufacturers with extremely valuable information that can be leveraged to make informed business decisions and reduce the impact of equipment breakdowns, unscheduled maintenance, failures, etc. This is the core of predictive maintenance and one of the main factors driving IIoT. By taking a proactive rather than reactive approach, factories worldwide will become more efficient and productive, reduce unplanned downtime and minimize long-term costs. The future of Bluetooth technology Industries such as IIoT, education and healthcare are using Bluetooth technology because of its ease of management, wireless connectivity and low cost. Bluetooth-powered IoT can be a big differentiator for companies looking to reduce manual tasks and enhance operational efficiency. Organizations with a well-planned IoT strategy and an understanding of Bluetooth technology use cases will gain a competitive edge and improve their bottom line.
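As a rough sketch of the sensor-to-alert flow described above, the following flags readings that deviate sharply from a trailing baseline; the vibration values, window size and sigma threshold are illustrative assumptions, not part of any vendor's gateway or cloud API.

```python
# Hypothetical condition-monitoring sketch: flag a sensor reading as an
# anomaly when it deviates more than `sigma` standard deviations from
# the mean of the trailing `window` readings. All values are illustrative.
from statistics import mean, stdev

def detect_anomalies(readings, window=10, sigma=3.0):
    """Return (index, value) pairs for readings outside the normal band."""
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sd = mean(baseline), stdev(baseline)
        if sd > 0 and abs(readings[i] - mu) > sigma * sd:
            alerts.append((i, readings[i]))
    return alerts

# Steady vibration around 2.1 mm/s, then a sudden spike at the end.
vibration = [2.1, 2.0, 2.2, 2.1, 2.0, 2.1, 2.2, 2.0, 2.1, 2.2, 9.5]
print(detect_anomalies(vibration))  # [(10, 9.5)]
```

In a real deployment the gateway would stream readings continuously and forward alerts to the management system rather than printing them.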
The current technological advancements are poised to revolutionize the world that we live in. Machines and everyday objects can now communicate with each other, and IoT systems are the driving force behind all this incredible change. IoT has the potential to incorporate data-driven decision making into every aspect of human activity. The network of sensors and actuators in an IoT system is blurring the lines between the physical and digital worlds. But IoT Isn't an Entirely New Concept We have already witnessed its adoption in our lives. Industries have been utilizing the technology to monitor machines and sense changes in the environment. Hospitals are using it to keep closer tabs on vital human signs. Internet-connected devices and appliances are taking care of our homes, offices and schools. And soon, autonomous cars will be governing the roads. The technology is quickly expanding to encompass every angle of our lives. The vision of entire smart cities is no longer a distant reality. According to Deloitte Insights, this model will help improve the quality of life, economic competitiveness and sustainability. It is, in fact, the internet which is laying the foundation for the vast applications of IoT systems. It's facilitating new forms of collective intelligence that are supporting the automated world. However, education is one of the sectors where this technology is still in its infancy. It's only now that educational institutions are striving to implement IoT methods. The Importance of Modernizing the Education Sector Despite significant technological progress over the past few years, the education sector has been rather slow to adopt smart classroom technology and change its teaching methods. Don't get me wrong. The syllabus has been dramatically updated. The digital generation focuses more on STEM (science, technology, engineering and mathematics) courses. But the process is still the same.
We all understand that education reduces poverty, boosts economic growth and increases income. But utilizing up-to-date and modern teaching techniques can bolster enormous progress for any country. Therefore, the latest tech developments should be integrated into the educational system. Otherwise, it’s at risk of falling behind. Modernization of the education sector can increase the productivity of both the individual and the nation. As the quality of education improves, so too does the acumen of the students. A recent survey from Microsoft and YouGov stated that 60 percent of parents were optimistic about the role of smart classroom technology in their child’s life. More impressively, 86 percent said that the use of tech in schools, such as computers and educational software, was beneficial to their child’s education. Classrooms Are Changing Gone are the days when teachers reprimanded students for passing notes. Now it’s more so for using a mobile phone in class. And, as electronic devices are becoming commonplace in schools, teaching methods are aiming to be more interactive and applicable to the real world. The one-size-fits-all approach no longer holds true. Advancements, such as IoT, AI and VR, are altering typical ideologies. They’re bringing change into the mainstream. Schools and universities alike are embracing a whole new world of concepts. How Can IoT Systems Be Implemented in a Classroom? IoT can reform the educational system. Inter-connectivity is linking previously disconnected offline objects and devices. The applications here are numerous, beginning from classrooms and extending beyond its walls. IoT enabled education solutions range from smart boards to school security applications to managing a comfortable physical environment to study in. 
A technically-advanced classroom can be equipped with the following IoT systems:

- Interactive whiteboards
- Automated attendance tracking systems
- Student ID cards
- Temperature and environmental sensors
- Smart, efficient lighting and predictive maintenance for infrastructure
- Smart HVAC systems
- Wireless door locks and lockdown protocols
- Security systems

As an extension of the system, connected schools could also provide smart buses. They could inform parents when a child is dropped off at home or provide students with internet access, allowing them to consume content en route. What Are the Benefits of IoT in Educational Institutions? 1. Active Engagement Education must be interactive in order to be effective. As the technology behind the Internet of Things is also interactive, it only makes sense to combine the two and reap double the rewards. Students can learn more by receiving constant feedback on their work. Interactive learning material promotes a child's interest, passion and creativity. For example, the use of supplemental study aids can make dissections more accessible to students of all ages everywhere. 2. Extends Current Resources IoT systems require an array of devices from which data is collected. Smart learning software and IoT apps permit students to collaborate through their phones and tablets. Tech-enabled books that are enhanced with connected pens can allow students to mark various portions of text. IoT aids in bringing activities to the classroom that would have been impossible before. Students can now experience a more technical side of a course. For instance, simulation and models can help explain topics such as planetary movements or the development of weather conditions. 3. Greater Efficiency This smart classroom technology can help ease a teacher's workload. By automating specific tasks such as taking attendance or grading tests, a teacher can focus on the more essential aspects. This is only the beginning of connected technology.
A school in Eastern China has gone so far as to install facial recognition technology. It allows teachers to monitor how attentive students are in class. If a student seems confused or distracted, the teacher can intervene sooner. 4. Enhances the Quality of Instruction The use of the internet and modern devices can encourage the improvement of current lesson plans. More interestingly, whiteboards could record everything discussed in class and facilitate access to courses from any location. Smart microphones may even recognize a teacher assigning homework and update students' planners automatically. As teaching shifts away from traditional textbook material, students may be able to relate better to real-world situations. References to real-time problems and real people can improve the quality of education. 5. Expansive Learning A school based on IoT does not necessarily constitute an ordinary classroom full of students reading from a book. It fosters learning over the internet through group activities, discussions, webinars and debates. 6. Promotes Interactive Teaching IoT is altering the dynamics of the principles of education. An instructor no longer merely explains a topic; they interact with various components of a classroom. Activities that require constant feedback, like coaching and academic training, help boost an individual's skills. IoT cultivates task-based instruction where students learn by doing and teachers assist when needed. 7. Identifies Weaknesses A common problem in classrooms is that a weak or shy student falls behind. A child that does not respond promptly may require one-on-one attention. Teachers can design interactive learning modules to match such a student's pace, and lessons could be personalized to build a stronger base. The sooner the child gets back on track, the quicker they can perform alongside their classmates. 8.
Aids Students With Disabilities IoT and AI can help disabled students obtain an education as well. Microsoft, a company applauded for its contributions to technology, is backing the modernization of classrooms to cater to children with special needs. It has developed a symbol-based communication app, Snap + Core First, to help children with speech and language impediments express themselves. 9. Empowers Better Security Measures Smarter schools can implement stringent security measures, primarily through student IDs. Some may call it an encroachment on privacy, but it's one effective way to monitor a student's location. Though it may be a while before IoT systems in classrooms enter the mainstream, their future looks promising. There's no denying the robust benefits the technology has to offer. Implemented in the right way, it enables a considerable amount of innovation in the education sector. It can help create a system to educate the next generation of tech-savvy students. The use of real-time data will provide valuable insights to students, parents, faculty and the administration. With IoT, the possibilities are endless. The educational institutions of the future will offer greater security, increased access to students, more interactivity and better learning.
The company claims that users with two-factor authentication (2FA) see a 50% decrease in account breaches. Combating the ever-growing risk of compromised accounts, last year Google started auto-enrolling millions of its users in 2FA, or two-step verification as Google calls the security measure. There's plenty of reason for the mass use of 2FA: last October, Google's anti-hacker team warned 14,000 Gmail users about a phishing campaign carried out by Fancy Bear, a Russian government hacking group.

The company expanded the 2FA program to include additional steps users need to access their accounts; 150 million Google users and 2 million YouTube creators had 2FA auto-enabled. "As a result of this effort, we have seen a 50% decrease in accounts being compromised among those users," Guemmy Kim, Google's director of Account Security and Safety, said in a blog post.

The company also states that it aims to reduce reliance on passwords by adopting security keys. User passwords can be stolen and sold in batches of millions; it's much more challenging to collect the same number of security keys without anyone noticing. 2FA and multi-factor authentication (MFA) dramatically reduce the risk of a successful phishing attack, since obtaining a victim's password is no longer enough. A 2019 study by Google showed that adding a recovery phone number to a Google Account can block up to 100% of automated bots, 99% of bulk phishing attacks, and 66% of targeted attacks observed during the study. Meanwhile, Microsoft claims that MFA can increase the level of security by a staggering 99%. According to the Cybersecurity and Infrastructure Security Agency (CISA), MFA increases security because if one authenticator is compromised, unauthorized users will not meet the second authentication requirement.
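As a sketch of why a stolen password alone no longer suffices, here is a minimal RFC 6238 time-based one-time password (TOTP) generator using only the Python standard library; it illustrates the general mechanism behind authenticator-app codes, not Google's actual implementation.

```python
# Minimal RFC 6238 TOTP sketch (standard library only). This shows the
# general mechanism behind 2FA authenticator codes; it is NOT Google's
# implementation, and real deployments add rate limiting, clock-drift
# windows, and secure secret storage.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, step=30, digits=6):
    """Derive the time-based one-time code from a base32-encoded secret."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10**digits
    return str(code).zfill(digits)

# RFC test secret ("12345678901234567890" in base32) at t = 59 seconds.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59))  # 287082
```

Because the code rolls over every 30 seconds and is derived from a shared secret, a phisher who captures only the password still cannot complete the login.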
The crystal ball is somewhat cloudy, but here are my thoughts on user interfaces and their adoption.

User interfaces on computing devices

Alphabetically, these are the practical computer-interface options we know today:

- Heads-up Display (HUD) – Military displays have been based on HUD technology for decades. The basic concept is to provide see-through information that is available within the area of vision without the need to look around.
- Motion sensing – Motion allows the user to direct through body motions; you can lump the joystick and mouse into this category, but, preferably, Motion is done without manipulating a physical device.
- Projection – A key component of HUD, it could enhance or replace displays, especially on mobile devices that can be difficult to read due to their small size. Projection, combined with Motion, will get interesting when you can gesture within a larger image projected onto a nearby surface.
- Speech recognition with text-to-speech (TTS) – Older technologies (a blind friend has had both since the late '80s), but computer processing is now robust enough to support Speech for mainstream use.
- Touch displays – Touch has been around since the early 1990s, but it wasn't until a few years ago that manufacturing costs of touch displays decreased enough to assist the widespread adoption of mobile devices. Touch simplifies the user interface by removing the need for separate keyboards (and mice), but generally mimics the function of a keyboard when inputting significant amounts of text.
- Type – I'd define this as old-school typing on a separate keyboard, usually with a mouse to assist; we can't seem to get rid of this one since it is so inexpensive and since most (all?) computers still support its use.

Some examples with their approximate costs:

- Google Glass – Combines HUD with Speech in an eye-glass format; $1,500.
- Microsoft Surface table – Touch with Projection on a table-top surface; just $8,400.
- Nintendo's Wii – Maybe not so new, but Motion for game consoles that was revolutionary in the mid-2000s; about $130.
- Keyboard plus mouse – Older than dirt, but you can get both for under $15.

Adoption of user interfaces within the generational divide

In terms of adopting new interfaces, I think that much depends on your age group:

- Younger folk (less than 30 years old) take naturally to the newest and fastest; they'll still Type via Touch (reluctantly, usually by abbreviating wherever possible), but HUD, Motion and Projection are their future. (Not quite so sure about the use of Speech in this group; do people under 30 talk to others on their phone or do they only text one another?)
- Mid-range (call it 30 to 55 years old) people can adapt, but it gets tougher as you advance (age-wise) within this group. I figure these folk Speak, Type and Touch, but would be willing to migrate to other options if they are easy to deploy and inexpensive to own. Full-size keyboards and mice will remain (and, hopefully, die) with this group.
- Older (over 55) folk are less adaptable, but can cope with current technology. Switching platforms is a challenge, even if the interface is conceptually easier to grasp and use. Some can learn how to use other options, but I suspect most will stay with what they know: Touch and Type.

From my experience:

- I have had computing experience since high school. While training my dad on Microsoft Windows, I was struck by the amount of effort required to transfer knowledge; the concepts were tough for my dad, who had no computing background, to assimilate.
- My son, who grew up with graphic-intensive video games, has a broad grasp of current technologies and flexible fingers; he always looks pained when demonstrating basic touch-screen usage to me on my mobile phone. (It doesn't help that I can barely see the screen and that my thumbs tend to stray away from their intended targets, especially in portrait mode.)
Basically, you can teach an aging human a new interface, but it takes some work.
Transmission Control Protocol/Internet Protocol (TCP/IP) tracks the address of nodes, routes outgoing messages, and recognizes incoming messages. Current networks consist of several protocols, including IP, Internetwork Packet Exchange (IPX), DECnet, AppleTalk, Open Systems Interconnection (OSI) and LLC2. This wide diversity of protocols results from application suites that assume their own particular protocols. Collapse from this wide variety is inevitable, but users will only be able to reduce this diversity, not eliminate it. Most users will collapse networks into two main protocols: IP and IPX. Installed-base applications and the pain of change will prevent a total reduction to a single backbone protocol.
Sustainable Data Centers: The Path to Net Water Positive In 2019, our largest data center in Carrollton, Texas consumed 13.3 million gallons of water onsite through its hybrid air- and water-cooled system. In 2020, informed by the results of our Water Risk Assessment, which indicated that Central Texas is a high water-stress region, we upgraded the facility to a 100% water-free cooling design. This had the impact of slightly raising the average Power Usage Effectiveness (PUE) from 1.37 to 1.39 while reducing the onsite water use by 65% to 4.6 million gallons used for landscape irrigation, fire system maintenance, and domestic water for Carrollton’s 60,000 ft2 of office space. Only a portion of this 4.6 million gallons is actually consumed (the irrigation), and the rest is discharged to the water treatment works, but for our case study we counted it all as water consumption to be conservative. This 65% decrease looks great in theory, but we wondered if the supply chain water for the extra electricity would mean that the total water consumed by the facility stayed the same or even increased due to the upgrade. Using the World Resource Institute’s emissions factors for water consumption in the local electrical grid, we estimated the total water consumed (onsite and for electricity generation) by Carrollton in 2019 and 2020. While we discovered that supply chain water greatly outstripped the water used onsite, Carrollton’s overall water use still decreased by more than 5 million gallons between 2019 and 2020 due to its switch to a water-free cooling design. This result challenges the conventional wisdom that consuming water for cooling saves total water, at least in today’s supply chain. Moreover, a new renewable electricity source that we invested in during 2020 will cover an estimated 70% of Carrollton’s power needs with renewable solar electricity beginning in 2021. 
Based on the WRI's tool, solar electricity has a water-consumption factor of zero, thus reducing our energy supply chain water consumption by 70%. As you can see in the adjacent table, the total water consumed at Carrollton in 2021 will be less than a third of the 2019 consumption, demonstrating how onsite water-free cooling enables a truly water-free future for this facility. Also in 2020, we purchased Water Restoration Credits to offset our onsite water use at Carrollton, restoring 20% more water than we consumed in order to achieve our net positive water designation. From here, it's easy to imagine a future in which the facility uses 100% renewable electricity, delivering the full promise of net zero carbon with net positive water data centers.
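The percentages above can be sanity-checked with simple arithmetic; the gallon figures come from the case study, while the calculation itself is an illustrative check, not CyrusOne's methodology.

```python
# Back-of-the-envelope check of the case-study figures. The gallon
# numbers come from the article; the arithmetic is illustrative and
# ignores the supply-chain totals, which the article does not quantify.
onsite_2019_mgal = 13.3  # hybrid air/water cooling, 2019
onsite_2020_mgal = 4.6   # after the water-free cooling upgrade, 2020

reduction = 1 - onsite_2020_mgal / onsite_2019_mgal
print(f"Onsite reduction: {reduction:.0%}")  # ~65%, matching the article

# WRI assigns solar a water-consumption factor of zero, so a 70%
# renewable share removes 70% of the electricity supply-chain water.
renewable_share = 0.70
print(f"Supply-chain water remaining: {1 - renewable_share:.0%}")
```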
New research conducted by Kaspersky Lab experts has revealed that a number of the components that make up smart cities, such as digital kiosks, interactive terminals and speed cameras, are vulnerable to cyber-attacks. The findings of the firm’s research shed light on how the digital components aimed at making life safer and more convenient for citizens actually pose a certain degree of threat to their data and safety. Despite appearing different outwardly, ticket terminals in cinemas, bike rental terminals, service kiosks in government organisations, booking and information terminals at airports and infotainment terminals in city taxis are all either Linux-based, Windows-based or Android-based. What separates these kiosks from ordinary devices is the special kiosk-mode software that serves as their user interface. An attacker who manages to access functions that are hidden from the average user has a number of opportunities to compromise the system, in much the same way that an ordinary PC can be compromised. By gaining access to a virtual keyboard and mouse pointer, it is possible for an attacker to launch malware, get information on printed files, obtain the device's administrator password or carry out any number of malicious actions on its systems. Denis Makrushin, security expert at Kaspersky Lab, said: “Some public terminals we’ve investigated were processing very important information, such as user’s personal data, including credit card numbers and verified contacts (for instance, mobile phone numbers). Many of these terminals are connected with each other and with other networks. For an attacker, they may be a very good surface for very different types of attacks – from simple hooliganism, to sophisticated intrusion into the network of the terminal owner. “Moreover, we believe that in the future, public digital kiosks will become more integrated in other city smart infrastructure, as they are a convenient way to interact with multiple services.
Before this happens, vendors need to make sure that it is impossible to compromise terminals through the weaknesses we’ve discovered." Another security expert at Kaspersky Lab, Vladimir Dashchenko, offered further details on the firm's findings regarding speed cameras, saying: “In some cities, speed control camera systems track certain lines on the highway - a feature which could be easily turned off. So, if an attacker needs to shut down the system at a certain location for a period of time, they would be able to do that.” “Considering that these cameras can be, and sometimes are, used for security and law enforcement purposes, it is really easy to imagine how these vulnerabilities can assist in crimes like car theft and others. It is therefore really important to keep such networks protected at least from direct web access.”
When Eben Upton and his colleagues introduced the first Raspberry Pi more than four years ago, they hoped the small, basic and cheap computer would convince more people to get into computer science and that maybe they would sell about 10,000 units. Now the Raspberry Pi Foundation is manufacturing multiples of that every day and recently passed the 10 million mark. To mark the occasion, Upton and other foundation members decided to roll out an offering that may encourage even more people to start programming. The group on Sept. 8 released the official Raspberry Pi Starter Kit, which comes with everything from the latest version of the Raspberry Pi computer card to an optical mouse to the foundation’s official “Adventures in Raspberry Pi” book, all delivered in an official case. Not bad for a group whose primary hope was to jump-start an interest in technology in younger people and grow the number of those applying to study computer science at the University of Cambridge in England. The money from every Raspberry Pi board sold goes to fund the foundation’s ongoing engineering work and such educational outreach programs as Code Club and Picademy. “By putting cheap, programmable computers in the hands of the right young people, we hoped that we might revive some of the sense of excitement about computing that we had back in the 1980s with our Sinclair Spectrums, BBC Micros and Commodore 64s,” Upton wrote in a post on the foundation blog. “At the time, we thought our lifetime volumes might amount to ten thousand units—if we were lucky. 
There was no expectation that adults would use Raspberry Pi, no expectation of commercial success, and certainly no expectation that four years later we would be manufacturing tens of thousands of units a day in the UK, and exporting Raspberry Pi all over the world.” The first computer was launched to give students, makers and others an easy-to-use system for developing projects or, in the case of students, to learn how to build technology and programs. It has since taken off, and has helped fuel what’s now known as the maker movement, where people take tools like Raspberry Pi and use them as the basis for developing new devices and systems. Other tech vendors also are getting into the space—for example, Intel for the past several years has been equipping makers and enthusiasts with such development boards as Galileo and Edison, and most recently aired the first season of “America’s Greatest Makers,” a reality TV show on TBS where makers compete to develop the most interesting product based on the chip maker’s Curie technology. Upton is hoping the Raspberry Pi Starter Kit will get even more people interested in technology, device making and programming. Included in the kit is a Raspberry Pi 3 Model B board, an 8GB NOOBS (new out of the box software) SD card, a 2.5A multiregion power supply, a 1-meter HDMI cable, an optical mouse, a high-performance keyboard, and the Raspberry Pi Foundation book. “This is an unashamedly premium product: the latest Raspberry Pi, official accessories, the best USB peripherals we could find, and a copy of the highest-rated Raspberry Pi book,” Upton wrote in the blog post. The kit, which will cost $131, can be ordered online now in the UK and will be available in other regions within the next few weeks.
Steps To Stop Hackers Breaching Security by Emails and More

Cybercrime is the fastest growing crime in the USA – it’s true. In 2021, digital incidents cost businesses of all sizes US$200,000 on average. Moreover, the worldwide cost is expected to reach US$6 trillion by the end of 2021. That’s trillion, with a T. Sadly, small businesses are a frequent target of hackers – receiving 43 percent of all attacks – and most are largely unprepared to defend themselves. As the Coronavirus pandemic has increased business reliance on remote workforces, it has also provided a basis for large-scale spam campaigns. These tend to use the crisis as a pretext or cover to spread ransomware, install banking malware and direct users to fraudulent webpages about COVID-19. For these reasons, it has become more important than ever to know how to determine if you’ve been hacked and, if so, what you should do about it.

What Is Hacking? A Quick Definition

Hacking is the process by which an unauthorised party attempts to find and exploit weaknesses in a computer system, device or network in order to gain access. The aims of hackers can include stealing information, identity theft, ransomware, vandalism, bringing down major websites, protest, destruction of the system itself or simply personal amusement.

How To Know If You Have Been Hacked?

Emails are a common target for hackers. According to Symantec’s 2017 Internet Security Threat Report, 1 in 131 emails contained malware in 2016, the highest rate in 5 years. Many email hacks start out with forged “phishing” emails. These appear to be legitimate email messages from a reputable company or friend, but contain links or attachments that download harmful malware onto your computer. It’s important to know the signs of a phishing email so you can avoid being hacked. Example: have you ever received an email from “PayPal” asking you to update your account information?
If so, bad news: in all likelihood, it was a fake. Curiously, as careful as the phishing scams are to appear technically plausible, they are often betrayed by sloppy spelling and poor grammar. Weird email addresses are another tell. Be wary of anything purporting to be from PayPal but which has an address like email@example.com or firstname.lastname@example.org. If you have clicked on one of these phishing emails, you may start to notice suspicious activity on your accounts. Some of the more common accounts that get compromised include bank accounts, social media sites and apps on your local PC that contain sensitive information. The signs of hacking to watch out for with respect to online platforms are:
- Emails from service providers alerting you to suspicious activity or an unknown device accessing your account
- Friends receiving social media invitations from you that you didn’t send
- Online passwords that no longer work
- Missing money from online accounts
- Notifications that your credentials have been compromised in a password dump.

For attacks on your local PC, things to look out for include:
- Ransomware messages
- Unwanted browser toolbars
- Fake antivirus messages
- Redirected internet searches
- Unexpected software installs
- Your antivirus program being disabled.

For most attacks, your immediate course of action should be to change the passwords on your local PC and affected online services. For financial accounts, make sure you check for unauthorised charges and notify the financial institution so that appropriate blocks can be placed on your accounts and associated credit cards. One of the most serious kinds of attack is ransomware. In this scenario, the threat actor hacks into your PC and takes all of your important files hostage (usually by encrypting them) until you pay a fee – usually in Bitcoin or some other form of cryptocurrency – to unlock/decrypt your data.
It is estimated that about 50 percent of victims pay the ransom, incentivising hackers and making this attack one of the most often attempted. If ransomware does infect your system, you may have very little time to act before it begins encrypting all of your files. Unless you have a backup plan in place, you will have no way of retrieving the files without paying the ransom. Even if you do pay, it might not do any good. An article from Forbes cites statistics that only 8 percent of businesses and people who pay ransomware demands actually get their data back.

Protecting Yourself From Being Hacked

In cybersecurity, as in so many things, prevention is better than the cure. Some steps you can take to protect yourself from ransomware and other forms of online attack are:
- Using strong, unique passwords for all your accounts. The key to creating strong passwords is to mix several different types of characters, numbers and special symbols. And, in general, a long and simple password – such as mittensisthenameofmycat321 – is often better than a short and complex one, such as &tT%6gu7. Avoid using the same password across several sites or accounts and do not write your passwords down. Instead, you can utilise a password manager to help you create complex passwords, as well as store your passwords securely.
- Using two-factor authentication (2FA). Where available, this adds an extra layer of security to your accounts by requiring a special PIN or password from a physical device (such as an SMS system or an authenticator app on your smartphone) when you log in from a new device. 2FA is especially important for financial and online shopping accounts.
- Keeping your computer software up to date by installing all security updates and patches. Updates often fix vulnerabilities that allow ransomware and malware to infect your system.
- Only installing software from reputable companies and downloading apps from official app stores.
Hackers often release fake versions of popular software to infect your computer. In figures from 2017, more than 3 million malicious apps were detected in the Asia-Pacific region alone.
- Having an antivirus program installed with spam filters turned on at all times when browsing. While antivirus software isn’t perfect – its hit ratio has been found to be 50-75 percent – it is, nevertheless, better than nothing as a first line of defence.
- Having a firewall protecting your corporate network.
- If you suspect that an email is a phishing email, delete it. Do not click on any links or open any attachments.
- If you’re ever asked for personal information via an email or social message from a friend or colleague’s account, talk to them first before giving away anything possibly sensitive.
- Backing up all your files and important documents to a separate storage device that is not connected to your computer. Do this regularly and periodically test it by restoring the backup. If you do have a ransomware attack, you’ll be able to easily roll back to a clean system.

Hacking is becoming more prevalent and pervasive, costing businesses large and small an increasing amount of money each year. While there are steps you can take to minimise your risk, they may be no match for a determined and smart hacker. Internet 2.0’s solution offers best-in-class protection and is managed by a team of ex-military and ex-intelligence cyber experts. What this means is you can just turn to the experts rather than hiring dedicated staff to maintain your security. If you’d like to understand how you can protect your business, contact Internet 2.0 today for a confidential conversation.
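The advice above that a long, simple password often beats a short, complex one can be illustrated with a rough entropy estimate. This is only an illustrative sketch, not a vetted strength meter: it multiplies the password's length by the bits implied by the character classes present, which is exactly why length dominates.

```python
import math
import string

def password_bits(password):
    """Crude entropy estimate: size of the character pool implied by the
    classes present, raised to the password's length (expressed in bits)."""
    pool = 0
    if any(c in string.ascii_lowercase for c in password):
        pool += 26
    if any(c in string.ascii_uppercase for c in password):
        pool += 26
    if any(c in string.digits for c in password):
        pool += 10
    if any(c in string.punctuation for c in password):
        pool += len(string.punctuation)
    return len(password) * math.log2(pool) if pool else 0.0

# The article's own examples: long-and-simple vs short-and-complex.
long_simple = password_bits("mittensisthenameofmycat321")
short_complex = password_bits("&tT%6gu7")
print(long_simple, short_complex)
```

Even though the short password draws from all four character classes, the 26-character password scores far higher, matching the article's recommendation.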
A dependent field is an entity field that creates a parent-child relationship between 2 entities. If you set the dependent field from one entity (the child) as the primary field of another entity (the parent), the data in the dependent field shows the data from the associated primary field. A dependent field is similar to a foreign key in a relational database.

There are 2 entities: Customers and Orders:
- In the Customers entity, CustomerID is the primary field.
- In the Orders entity, CustomerID is a dependent field that is configured with a relationship to CustomerID in the Customers entity.
- In the Orders entity, the CustomerID field shows the associated value from the CustomerID in the Customers entity.

About This Page

This page is a navigational feature that can help you find the most important information about this topic from one location. It centralizes access to information about the concept that may be found in different parts of the documentation, provides any videos that may be available for this topic, and facilitates search using synonyms or related terms. Use the links on this page to find the information that is the most relevant to your needs.

foreign key, key, field, dependent field, relationship
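The Customers/Orders example maps directly onto a foreign key in a relational database. Here is a minimal sketch using SQLite; the table and column names follow the example above, while the sample data is invented for illustration.

```python
import sqlite3

# Parent/child relationship: Orders.CustomerID is the "dependent field"
# referencing the primary field Customers.CustomerID.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce the relationship
conn.execute("CREATE TABLE Customers (CustomerID INTEGER PRIMARY KEY, Name TEXT)")
conn.execute("""CREATE TABLE Orders (
    OrderID INTEGER PRIMARY KEY,
    CustomerID INTEGER REFERENCES Customers(CustomerID))""")

conn.execute("INSERT INTO Customers VALUES (1, 'Acme Corp')")  # invented data
conn.execute("INSERT INTO Orders VALUES (100, 1)")

# The dependent field resolves to the parent's data via a join.
row = conn.execute("""SELECT o.OrderID, c.Name
                      FROM Orders o JOIN Customers c USING (CustomerID)""").fetchone()
print(row)  # (100, 'Acme Corp')
```

With `PRAGMA foreign_keys = ON`, attempting to insert an order for a nonexistent CustomerID raises an integrity error, mirroring how the parent-child relationship constrains the dependent field.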
Viruses, if left on your computer, can be detrimental to the overall performance of your computer. This is why most viruses are considered a major security risk. Is your computer operating slower than normal? Is it encountering a lot of errors when you open and close programs or documents? If so, then your computer may have a virus. So how do you prevent viruses from infecting your computer? Install a reliable, effective virus scanner and run a scan for viruses regularly. Also, you should ensure that your virus scanner is updated and operating fully. But there are certain viruses that you can unknowingly invite into your computer. Virus hoaxes are new and becoming more common. If you read information and articles on the Internet, then you may be aware of virus hoaxes. So what are they? Virus hoaxes are spread using emails that are designed to make you believe that your computer is infected with a virus even though it isn’t. Tricky, right? Virus hoaxes are quite clever. They will normally tell you that your computer is infected with a virus. Not only that, the email or message will advise you that certain files need to be deleted. Usually, a list of instructions will specify how to delete this file which is essential to the optimal performance of your computer. Once these files are deleted, your computer may not turn on or function normally. Virus hoaxes work extremely well because most people are worried about viruses attacking their computer. This is how they lure you into believing the hoax. Most of these hoaxes will expose your computer to a virus or they may tell you to download a virus scanner to help repair your computer. But they will do exactly the opposite. These virus scanners will simply add more viruses to your computer which will cause additional damage. Delete or ignore any emails that tell you that your computer is infected. In most cases, it is a hoax virus email just waiting to infect your computer with a virus. 
Hoaxes can also pop up on websites that claim to scan your computer for viruses. Avoid these sites as much as possible. It is impossible for these sites to scan your computer unless you download a trusted scanner from an official website, such as McAfee or Norton. Also, if you ever receive an email telling you that you need to send the message on to other people, then you will know that it is a hoax. This is called a chain letter. To ensure that virus hoaxes don’t work their way into your computer, be aware of any messages or emails that you may receive and read them very carefully. Your computer’s life depends on it.
Imagine having a digital version of yourself that can represent you and your thoughts in the real as well as in the virtual world. Fascinating, right? The advancement in the field of artificial intelligence has resulted in the emergence of a new technology called “Digital Me” (DM). Digital Me is an AI agent or a digital avatar that can digitize the knowledge of each person. This digital avatar of a person could take part in certain activities (digital or real-life) to improve the person's productivity. Since all the data or knowledge of a person is stored digitally, it becomes practically immortal. The learning process of a DM depends on artificial intelligence (AI) and human intelligence (HI) to transfer data from a carbon-based environment (the human brain) to a silicon-based environment (system/machine storage). The combination of AI and HI enables the DM agent to constantly learn from the actions of people and update its knowledge. The information stored digitally can be used for various applications.

What makes a Digital Me agent unique?

In the current state of AI technology and applications, the people who access or use AI applications are called “users”, whereas every person who uses Digital Me technology is called a “contributor”. This is because the person builds his own avatar, which reads and learns from his actions and behavioral aspects. In terms of AI-powered service-based applications such as chatbots, there are either predefined answers or no answers depending on the queries or service requested. For instance, AI cannot tackle medical queries as different symptoms can lead to different medical scenarios. In this case, a DM agent can provide information by drawing on personal opinions from various people. A DM agent can not only learn from open-source documents but can also actively learn from human beings and the surrounding environment.
These specific characteristics of Digital Me technology could make it many times more powerful than ordinary AI, while also making it a very unique technology.

Applications and use cases of Digital Me technology

A Digital Me agent can store, process and statistically analyze the collected data, which can be used effectively to increase a person's productivity. With the help of a DM agent, a person can be assisted in the following ways to improve productivity:
- Assist in performing common tasks.
- Take over repetitive tasks and reduce the efforts of a person.
- Reduce the cost of communication.

As mentioned above, a DM agent collects and stores the knowledge of a person in a digital format. These massive amounts of digital information can be analyzed to gain insights into a person’s behavior, preferences and needs, and even to predict future actions based on the collected information. Below are some of the applications of Digital Me.

Quick data access

The information collected by the DM agent can be displayed in a graphical timeline format. This can help call up specific events to gather necessary information from a particular timeline. Even if the user can remember only partial information, the DM agent can provide clues from collected data to recall the event. This data is also accessible from practically anywhere at any time, which makes it available for use in real time across the globe. Any other DM agent that needs to make use of another’s intelligence can greatly benefit from this, as DMs are practically never offline.

Dynamic data search

By analyzing the user’s DM history, the DM agent can automatically provide the necessary information based on the task that is being performed by the user. Rather than querying the information, the DM agent will use the previously collected information as the source to provide the user with the required information.
Intelligent data storage

Users can store all their data in DM agents rather than remembering it all themselves. For instance, a meeting can be persistently stored in DM: the DM collects and stores all the data provided by the individual participants of the meeting. These data can later be accessed by the user during the next meeting to recall the events of the previous one.

Data and performance tracking

With the help of a DM agent, the user can keep track of his work. Information such as the number of hours worked and the tasks undertaken can be recorded by the DM agent. This can help the user allocate his time more effectively. In a workplace environment, provided that the employees allow access to their DM agents, the employer can keep track of the number of working hours, tasks completed and other related information about the employees.

Design and implementation of Digital Me technology

A Digital Me system is an intelligent database server with an API (Application Programming Interface) for recording data and utilizing the recorded data. The components that record the data are called loggers. These loggers record data such as the user's actions or the surrounding environment and send the collected records to the person's own DM server. The records stored in a DM server are called events. These events could be any activity performed by the user, such as reading documents or content from a computer screen, or measurements of heart rate from a smartwatch. The loggers have to be explicitly installed by the user. This enables the user to keep track of the events that are being recorded by the DM system by checking the dashboard of the system's web interface. The events recorded by the loggers are then utilized by software components called applications. An application provides the user with a graphical user interface (GUI) where the data can be viewed and manipulated.
Applications are of two types, namely local applications and connected applications. Local applications run on users' machines, while connected applications run on servers and connect information from various DM servers.

Implementation of a Digital Me server

The implementation of a DM server involves a lot of complex technical processes. The DM server is programmed in Java using the Spring framework, and the core components of the DM software include the API interface, database and search engine functionalities. The function of the API is to support the addition of new data and the viewing, modification and deletion of existing data. The API also supports filtering data, that is, retrieving data from a specific timeline. The data filtering/extraction feature extracts data by identifying important phrases from documents and visual features from images. This enables the user to access the required data seamlessly. The DM database is focused on running locally (on users' laptops/computers), and the search engine functionality identifies events and elements with textual contents.

Challenges associated with Digital Me Technology

The core concepts of Digital Me technology are data mining, deep learning, reinforcement learning, NLP (Natural Language Processing) and cloud technology. The challenges of these technologies are considered to be the challenges in adopting Digital Me technology.

Job and Career Opportunities in Digital Me Technology

Since the very foundation of Digital Me is highly technical, with deep concepts like deep learning, machine learning and big data analysis, the job roles associated with this technology are complex and require detailed knowledge in all of the above-mentioned fields. Some of the job roles are mentioned below:
- Big data engineer
- Data scientist
- Machine learning engineer
- Research scientist
- Data analyst
- Big data solution architect
- Deep learning research engineer

Digital Me technology is very much in its beginning stages.
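The logger/event model described above can be sketched in a few lines. The class and method names below are invented for illustration; the article's actual DM server is a Java/Spring service behind an HTTP API, not this in-memory store.

```python
import time

# Hypothetical sketch of a DM event store: loggers push events in,
# and the API-style timeline() method filters them by time window.
class DigitalMeStore:
    def __init__(self):
        self.events = []  # every record sent by any logger

    def log(self, source, kind, payload, ts=None):
        """A logger records one event (e.g. a document read on screen,
        or a heart-rate sample from a smartwatch)."""
        self.events.append({
            "ts": ts if ts is not None else time.time(),
            "source": source,
            "kind": kind,
            "payload": payload,
        })

    def timeline(self, start, end):
        """Retrieve events from a specific timeline, as the article
        describes the API's filtering feature."""
        return [e for e in self.events if start <= e["ts"] <= end]

dm = DigitalMeStore()
dm.log("screen-logger", "document", "quarterly-report.pdf", ts=100)
dm.log("smartwatch", "heart-rate", 72, ts=200)
events = dm.timeline(150, 250)  # only the heart-rate sample falls in range
```

A real implementation would persist events in a database and expose `log` and `timeline` over HTTP, but the data flow (logger, event, timeline filter) is the same.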
There are still no details about various aspects of this technology, but as far as we know, it is really going to bring massive changes to the digital world. Similar to this technology, Gartner has predicted that a few other technologies, such as composite architecture, algorithmic trust, formative AI and the evolution of new advanced materials, will also play a major part in shaping the future of the digital world, thus improving the real world that we live in.
In order to appreciate the need for virtual LANs, let’s consider how LANs would be built without switches using hubs only. As you are aware hubs are layer 1 devices without any intelligence and they typically relay the frame received on one port to all other ports regardless of the type of frame. As a matter of fact they don’t care what the content of the frame is and the same treatment is given to unicast, multicast, and broadcast frames. As a result, the set of devices connected to a hub are in the same collision domain which means two devices connected to a hub cannot transmit at the same time without causing a collision. The Carrier Sense Multiple Access / Collision Detection (CSMA/CD) mechanism of Ethernet is at work in networks built with hubs. As all devices are in the same collision domain there is performance degradation as more and more devices are connected to the same hub and more collisions start to take place. Let’s assume we have five physical LANs in our organization: Engineering, Finance, Management, Marketing, and Sales, each belonging to one department, that need to be connected to the same router which provides Wide Area Network (WAN) connectivity. Here is how this network can be built using hubs alone.

Figure 7-4 Physical LANs

Please note that enterprise networks do not use hubs any more and are built exclusively with switches. But analyzing the above network built with hubs would enable us to appreciate the benefits switches bring. First there is one hub for each physical LAN and all devices in that physical LAN are cabled to the same hub while each hub itself is connected to a separate interface on the router. Also, there is one IP subnet for each physical LAN and any device that is connected to that LAN has to have an IP address in that IP subnet. The router interface on a certain physical LAN also has an IP address in the IP subnet for that LAN. The hosts in the physical LAN have the router’s IP address set as their default gateway.
In this design, if you need to add another device to a physical LAN, say Engineering, you simply connect it to the Engineering hub and assign it an IP address from the IP subnet for the Engineering LAN. There are some shortcomings in this design. First there are limitations on where you can physically place devices on a certain LAN due to the limited maximum cable length supported by Ethernet cabling standards. Let’s assume there is a new employee in the Engineering department that needs to be connected to the Engineering LAN but there is no physical space in the Engineering department to make room for the new employee. There is plenty of space in the Sales department and the new Engineering employee is made to sit in the Sales department instead. Now, the Sales department is located in another corner of the building and it is not possible to connect the new Engineering employee to the Engineering hub due to the simple fact that the distance exceeds the maximum cable length permissible. The new Engineering employee is instead connected to the Sales hub but this has some undesirable side effects. The Engineering employee is now on the Sales LAN and he can access all resources on the Sales LAN, like servers which are meant to be visible only to the Sales people. It is a security issue as organization policies may prevent employees from other departments from having access to Sales documents and data. Also the new Engineering employee would be cut off from resources on the Engineering LAN which may prevent him from effectively doing his job. In the coming sections we will see how virtual LANs in networks built with switches instead of hubs provide means to prevent problems like this yet enabling physical mobility. The design I just described, though obsolete today, has worked well for several years despite its limitations. In an Ethernet LAN, a set of devices that receive a broadcast sent by any other device is called a broadcast domain.
As we learned in the last section, a switch simply forwards all broadcasts out all interfaces, except the interface on which it received the frame. As a result, all the interfaces on an individual switch are in the same broadcast domain. Also, if a switch connects to other switches too, the interfaces on those switches are also in the same broadcast domain. On switches that have no concept of virtual LANs (VLANs), the whole switched network is one large flat network comprising a single broadcast domain. A VLAN is simply a subset of switch ports that are configured to be in the same broadcast domain. Switch ports can be grouped into different VLANs on a single switch, and on multiple interconnected switches as well. By creating multiple VLANs, the switches create multiple broadcast domains. By doing so, a broadcast sent by a device in one VLAN is forwarded to all other devices in that same VLAN; however the broadcast is not forwarded to devices in the other VLANs. VLANs provide bandwidth efficiency because broadcasts, multicasts and unknown unicasts are restricted to individual VLANs and also provide security as a host on one VLAN cannot directly communicate with a host on another VLAN. Because a trunk link can transport many VLANs, a switch must identify frames with their associated VLANs as they are sent and received over a trunk link. Frame identification assigns a unique user-defined number to each frame transported over a trunk link. This VLAN number is also called the VLAN ID and as each frame is transported over a trunk link, this unique identifier is placed in the frame header. As each switch along the way receives these frames, the identifier is examined to determine to which VLAN the frames belong and then is removed. The VLAN ID field contains a 12-bit value and as such the range of possible VLAN IDs is 0 – 4095. The VLAN IDs of 0 and 4095 are not used and the usable range of VLAN IDs hence is 1 – 4094.
By default, all ports on a Cisco switch are assigned to VLAN 1. VLAN 1 is also called the management VLAN, and control plane traffic belongs to VLAN 1. Best practices recommended by Cisco dictate using a separate IP subnet for each VLAN. Simply put, devices in a single VLAN are typically also in the same IP subnet. Layer 2 switches forward frames between devices on the same VLAN, but they do not forward frames between devices in different VLANs. In order to forward frames between two different VLANs, you need a multilayer switch or a router. We will cover this in more detail in a later section of the chapter. Now, let’s see how we can build the same network using switches instead of hubs and what benefits switches bring to us. Table 7-1 lists the VLAN IDs corresponding to organizational departments and the IP subnets associated with each VLAN. There is a different IP subnet assigned to each VLAN, and if you look carefully, the third octet of the IP subnet number is the same as the VLAN ID. This is just an arbitrary choice to make the design easy to understand and let us focus on concepts rather than the specific numbers used. The router has a Wide Area Network (WAN) connection and provides two important functions: WAN connectivity and inter-VLAN routing, which enables communication between two different departments or VLANs. As you can see in Figure 7-6, devices belonging to VLANs 20 and 40 are connected to more than one switch. VLANs remove the restriction of having to connect devices belonging to the same LAN to the same device while still providing traffic isolation at layer 2. In other words, VLANs can span multiple switches, with switch ports belonging to the same VLAN existing on different switches. This is one of the several benefits switches bring to local area networks.
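The one-subnet-per-VLAN convention means that whether two hosts can reach each other at layer 2 reduces to whether they share an IP subnet. A small illustration with Python's standard ipaddress module; the subnet numbers below are invented, following the third-octet convention just described:

```python
import ipaddress

# Hypothetical subnets: VLAN 10 -> Engineering, VLAN 20 -> Sales.
SUBNETS = {
    10: ipaddress.ip_network("10.0.10.0/24"),
    20: ipaddress.ip_network("10.0.20.0/24"),
}

def needs_router(host_a: str, host_b: str) -> bool:
    """True if no single subnet contains both hosts, meaning a router
    or layer 3 switch must forward traffic between them."""
    a, b = ipaddress.ip_address(host_a), ipaddress.ip_address(host_b)
    return not any(a in net and b in net for net in SUBNETS.values())

print(needs_router("10.0.10.5", "10.0.20.7"))  # → True  (inter-VLAN)
print(needs_router("10.0.10.5", "10.0.10.9"))  # → False (same VLAN)
```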
Also, if we want to add another device to, say, the Engineering VLAN and we want to locate the new user somewhere other than the Engineering department, this can be accomplished simply by connecting the new device to the nearest switch and assigning the switch port to the Engineering VLAN. Compare this with the similar situation in the network built with hubs and you will appreciate that the network becomes more flexible without sacrificing security or traffic isolation. The router provides inter-VLAN routing in this scenario, but the same functionality can also be achieved by using a layer 3 switch like the Cisco 3560. Table 7-1 VLANs Corresponding to Organizational Departments |VLAN IDs||Names||IP Subnet| Figure 7-6 Switches Using VLANs Remove Physical Boundaries One of the tasks that a system administrator has to perform while creating VLANs is to assign switch ports or interfaces to each VLAN. There are two ways switch interfaces can be assigned to VLANs, and this gives rise to two different types of VLANs: static and dynamic. In the case of static VLANs, each switch port is statically assigned to a specific VLAN, and any host connected to that switch port is automatically part of that VLAN. These VLANs are called static because individual switch ports are permanently allocated to specific VLANs. VLANs in this case are tied to switch ports and not to what is connected to those switch ports. In the case of dynamic VLANs, however, the hardware addresses of all host devices are entered into a database so the switch can assign VLANs dynamically any time a host is connected. In this case the VLAN is tied to the hardware (MAC) addresses of hosts and not to switch ports. A host whose MAC address is tied to a certain VLAN would be part of that VLAN regardless of which switch port it is connected to.
Static VLANs are easier to create than dynamic VLANs because there is no need to document the hardware addresses of all hosts that might be connected to the LAN and then store them in a database on the switch. I will cover both static and dynamic VLANs in the coming sections. Static VLANs are the most common and straightforward way to create VLANs, and they are also more secure than dynamic VLANs. The security hinges on the fact that VLANs are manually associated with switch ports by switch configuration, and these VLAN associations are always maintained unless the port assignment is manually changed. Static VLAN configuration is pretty easy, and it works really well in most enterprise networking environments where user mobility is limited and controlled. End points typically consist of desktop computers, printers, and servers that are permanently cabled to switch ports. It is fitting to associate VLANs with switch ports in such an environment. Dynamic VLANs base VLAN assignment on hardware (MAC) addresses, protocols, or even applications. Let’s consider a specific case where MAC addresses have been entered into a centralized VLAN management application and you connect a new node to a switch. If the node is attached to an unassigned switch port, the VLAN management database can look up the hardware address and both assign and configure the switch port into the correct VLAN. This makes configuration and management a lot easier, because if a user moves to a new location and plugs the node into a new switch port, the switch will simply assign it to the correct VLAN automatically. This is possible because VLAN assignment is tied to the hardware addresses of nodes rather than to physical switch ports. But this ease of VLAN management and user mobility comes at a cost: you have to do a lot more work up front, noting the hardware addresses of all nodes that are supposed to be connected to the LAN and setting up the VLAN database accordingly.
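The dynamic case just described is, at its core, a lookup keyed by MAC address rather than by port. The sketch below is purely illustrative; real deployments used dedicated membership servers (Cisco's VMPS, for example), and the addresses and VLAN numbers here are made up:

```python
# Hypothetical MAC-to-VLAN database maintained by the administrator.
VLAN_DB = {
    "00:1b:44:11:3a:b7": 10,  # an Engineering host
    "00:1b:44:11:3a:c9": 20,  # a Sales host
}

DEFAULT_VLAN = 1  # unknown hosts fall back to the default VLAN

def assign_vlan(mac: str) -> int:
    """Dynamic VLAN assignment: the VLAN follows the host's MAC address,
    not the switch port it happens to be plugged into."""
    return VLAN_DB.get(mac.lower(), DEFAULT_VLAN)

# The same host gets VLAN 10 no matter which port it connects to.
print(assign_vlan("00:1B:44:11:3A:B7"))  # → 10
print(assign_vlan("aa:bb:cc:dd:ee:ff"))  # → 1
```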
What is Scrum? Even if you have never heard a thing about it, by the end of our definitive guide, you’ll know everything you need to know about the what, how, and why of this powerful agile methodology. Let’s dive in. What does Scrum mean? The term “Scrum” comes from rugby. It was first used in the context of product development in a 1986 paper published in the Harvard Business Review titled “The New New Product Development Game.” The authors, Hirotaka Takeuchi and Ikujiro Nonaka, saw that traditional frameworks for project management no longer suited the fast pace of development. “What we need today,” they concluded, “is constant innovation in a world of constant change.” They described how massive companies like Honda and IBM were finding success by letting small groups of autonomous teams take on projects without rigorous oversight. They likened this “holistic method” to a Scrum in rugby, where “the ball gets passed within the team as it moves as a unit up the field.” A lot has changed since then. But the world has only become more fast-paced and hyper-competitive. And Takeuchi and Nonaka’s model of small, autonomous teams that deliver complete products is still at the heart of Scrum as it’s practiced today. How does Scrum work? Scrum is a process framework that uses small teams to build products that continuously improve. Like Kanban, XP, and other agile methodologies, Scrum employs lightweight, iterative, and integrated development. It’s a mindset and approach rather than a specific technique. In other words, Scrum lays out a set of working relationships that help people collaboratively manage complex projects. These Scrum relationships are broken down into artifacts, ceremonies, and roles. Instead of hierarchy, Scrum promotes self-organization. It leaves the teams to own the execution of their own work. 
This is why it’s so important for Scrum team interactions to be grounded in the three pillars of transparency, inspection, and adaptation: - Transparency: Significant aspects of the project must be visible to those responsible for outcomes. Good Scrum teams share information constantly. - Inspection: Every Scrum event is an opportunity to analyze progress and processes for potential problems and improvements. - Adaptation: Adjustments are made accordingly. All of this might seem a bit sterile, but during meetings, the question of progress can get personal. It’s easy to let emotions and disagreements cloud your ability to inspect and adapt. Great teams use the five Scrum values—commitment, courage, focus, openness, and respect—to build the strong relationships they need to enable open communication. Scrum team roles - Self-organizing: Individuals decide how to accomplish their work, as opposed to getting instruction from above. - Cross-functional: The Scrum team is made up of all the skills and capabilities needed to complete the work. Think back to Takeuchi and Nonaka’s rugby metaphor and how Scrum teams pass the ball within the team as they move up the field. That means they shouldn’t need to ask anyone outside the team for direction or support to deliver a useful product increment. What it doesn’t mean, of course, is that the Scrum team is completely cut off from the company. For example, the product owner is responsible for interfacing with stakeholders outside the team to figure out what the team should work on. They are the best-positioned to know which features and fixes will add the most value. The development team is responsible for completing the product increment. Developers work with the product owner to make sure they are choosing product increments that will add value, but they alone decide how to get it done. The Scrum master facilitates Scrum events and makes sure the development team is clear of blocks and distractions. 
Scrum masters help the team improve their processes and are responsible for communicating the results to outside stakeholders. Here’s a simple way to think about the roles and responsibilities of the Scrum team: Scrum teams work in short time-boxed periods known as sprints (typically two to four weeks) to create product increments that are potentially releasable. So, the average Scrum team has 10 workdays to select, complete, and integrate a product increment. To maintain this fast pace, they use the Scrum artifacts mentioned above to track the sprint input (what to work on) and output (what gets done). Side note: Keeping the team on track also means using the right Scrum software. Let’s go over those artifacts in more detail. The first is the product backlog: a list of all desired features, fixes, and maintenance that the team could work on. Because sprints are short, Scrum teams can only work on a few items from the product backlog at any time. The product owner is responsible for prioritizing the product backlog so that the team is choosing from the highest-priority or valuable backlog items. Together with the development team, the product owner regularly refines the backlog (also known as backlog grooming) to keep it up to date. At the beginning of the sprint, the development team chooses the backlog items that will become the sprint backlog, then plans how to get this work done by breaking each item down into tasks. By the end of the sprint, the team should have completed the sprint backlog items. This constitutes the product increment. Product increments are supposed to be useable, additive (building on previous iterations of the product), and potentially releasable. They are fully integrated and meet the definition of “done” as determined by the team. Now that you know the Scrum artifacts and roles, it’s time to take a closer look at the sprint. Tip: Looking for a way to keep your team on the same page? 
Scrum boards are designed to provide a real-time visual snapshot of the sprint ahead. There are four formal Scrum ceremonies that break down the work of the sprint. Regardless of the sprint's length, these events provide its basic structure and important moments for the team to inspect and adapt its processes. The sprint begins with sprint planning, when the entire team comes together to choose items from the prioritized product backlog that they can reasonably expect to finish. No two backlog items are the same, so the product owner works with the development team to understand each item’s “size” and the team’s capacity to avoid overloading them. It is then up to the development team to break items down into tasks. Usually it’s enough to decompose just the first few days’ worth of work by the end of sprint planning. To inspect the sprint as it progresses, the team meets for 15 minutes at the beginning of each day. This Scrum ceremony is known as the daily Scrum, when the development team comes up with a plan for the day ahead. The goal is to cover: - What each person has done in the last 24 hours - What they plan to do in the next 24 hours - Any issues or concerns they have about completing the sprint backlog This routine ceremony helps team members with different skill sets coordinate their work. It brings dependencies to light and regularly checks that the team is still working toward an integrated, functioning product increment. (Although there are edge-case reasons for skipping, most teams hold daily Scrum every day.) And transparency is key. While the development team leads daily Scrum, the Scrum master helps keep that communication open and focused. It’s up to them to make sure that only the development team participates and that everyone is comfortable sharing problems. Tip: Scrum masters need to have certain soft skills to promote healthy team interaction. Here are the Scrum master interview questions you need to ask to find the right candidate.
At the end of the sprint, the team inspects the product increment during sprint review. Stakeholders from outside the team may be invited—at the product owner’s discretion. This is the time to present the product and discuss the work of the sprint: has the “done” product increment delivered the expected value? Did backlog items turn out to be the size the team expected? Based on this knowledge, the Scrum team adapts the product backlog and talks about what to do next. This generates feedback and ideas that are useful for the final Scrum ceremony: the sprint retrospective. Held after sprint review but before the team begins the next sprint, this ceremony is prime time for the team to inspect and adapt its processes. The retrospective agenda is distinct from that of the sprint review in that the team focuses on itself, not the product. A successful retrospective helps the Scrum team identify and implement new and improved ways to work. It’s about what went well, what caused problems, and the team’s relationships, processes, and tools. Which of these can be done differently to make the next sprint more productive and fun? Here’s where the Scrum master’s soft skills and creative ideas for retrospective exercises shine through. It’s their job as facilitator to keep finger-pointing to a minimum and brainstorming as open and engaging as possible. The end goal of the retrospective is for the Scrum team to agree on several concrete improvements they can carry forward into the next sprint. Retrospective tools can help track commitments and feedback. Tip: Running your first retrospective? Here’s an example agenda you can copy from one of our most productive retros at FYI.
Through a Scrum master certification. Choosing the right one for you might depend on your personal goals or a prospective employer’s requirements. Scrum training courses are another good option for people looking to learn more. Some are geared toward total beginners; others for advanced practitioners looking to enhance their knowledge in a particular domain. There are also a ton of great Scrum books out there. (And not-so-great ones you can avoid by skipping right to this list of the nine books every Scrum master should read.) Tip: Interested in learning more about scaling Scrum, specifically? Check out my post on how to run Scrum of Scrums to coordinate multiple teams. Final thought: Scrum is about… …making work more enjoyable. At its core, Scrum is about making sure teams are working together on valuable projects in a sustainable way. Burnout is real. And over the long term, it’s guaranteed to sap motivation and productivity. It’s no accident that whenever I talk about Scrum with someone who has really made it work for their company, the words “fun,” “exciting,” and “proud” come up a lot. The rest should fall into place.
Updated on July 13, 2022 As network infrastructure becomes more complex, many IT admins and DevOps engineers wonder when to use a RADIUS server and when it doesn’t make sense to use one. In order to understand the use cases of RADIUS, we should take a step back and get a grasp on how IT networks have evolved over time. RADIUS Server and Dial-Up The concept of RADIUS first appeared with dial-up networks a long time ago. RADIUS was what authenticated, authorized, and accounted for user access to networks. The protocol was often used by ISPs to enable access to the internet when modems and dialing in were still relevant. In fact, RADIUS was in use even before the idea behind Microsoft Active Directory came to pass in the 1990s. Microsoft Active Directory and the Domain Controller In those early years following the introduction of Active Directory, a key concept began to take shape. That concept was the domain controller and its role in controlling access to Windows-based IT resources. As Active Directory took off and found a home in more enterprises, the architecture behind networks became clear. A user inside the network logged into their Windows machine and was immediately granted access to their Windows-based IT resources. In these early days, VPNs were introduced for remote workers, and when attached to the network, those workers authenticated against AD. It comes full circle when you realize that often, VPNs were back-ended by RADIUS to provide authentication at that layer and then enable AD authentication. How RADIUS Was Used Over time, RADIUS found its niche as the protocol, or translation layer, from networking equipment such as VPNs, routers, switches, and more to the core identity provider (IdP) within an organization, often Active Directory. The reason Active Directory served as the IdP in most organizations was that IT networks were generally on-prem and Windows-centric.
Stemming from the fact that IT networks were on-prem, there was really one path for remote workers into the network — VPN. As a result, the RADIUS server was largely limited with regard to the benefits it provided organizations. For a detailed look at the history of RADIUS, how it’s commonly used, and ways to implement it, take a look at our article: What is the RADIUS Protocol?. RADIUS Benefits Expand with WiFi and Cloud That all started to change with the introduction of WiFi and the cloud. As networking infrastructure shifted and users became more mobile, different approaches to the authentication process started to necessitate change. While WiFi environments could be authenticated with a shared SSID and passphrase, IT admins realized that simply wasn’t secure enough. At the same time, more mobile users made VPNs much more popular, which embedded RADIUS servers further into the mix. Then, as data centers and wireless network infrastructure continued to become more popular, the idea of user authentication for these IT resources was important to address. Further, with new security models such as Zero Trust Security appearing, RADIUS server implementations have increased dramatically. Additionally, innovations including hosted Cloud RADIUS solutions appeared on the market and effectively turned RADIUS implementation into a turnkey task. These cloud RADIUS platforms simply require a VPN, WiFi access point, or other networking solution to point their authentication path to the RADIUS endpoint. Then, the SaaS RADIUS provider handles the rest of the integration and management work. Utilize Cloud RADIUS Without an On-Prem Server For many IT organizations, a cloud RADIUS service can help them dramatically step up their network security without the heavy lifting of learning and implementing on-prem FreeRADIUS servers. But, does it come as a standalone service or how can IT organizations deploy it? A key component of JumpCloud’s Open Directory Platform is Cloud RADIUS. 
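In FreeRADIUS terms, "pointing" a WiFi access point at a RADIUS endpoint usually means registering the access point as a client with a shared secret. A hedged sketch of a clients.conf entry (the name, address, and secret are placeholders, not values from any real deployment):

```
# clients.conf -- register a WiFi access point as a RADIUS client.
# The client name, IP address, and secret below are illustrative only.
client office-ap {
    ipaddr   = 192.0.2.10            # the access point's address
    secret   = example-shared-secret # must match the AP's RADIUS config
    nas_type = other
}
```

The access point is then configured with the RADIUS server's address, port (typically 1812 for authentication), and the same shared secret.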
Because the directory platform backends the RADIUS component, implementation, integration, and ultimately network security are each easily achievable via the hosted RADIUS service. And because Cloud RADIUS is a part of the directory, you get identity management, Google Workspace/Microsoft 365 integration, system management via GPO-like policies, Cloud LDAP, single sign-on, and much more all rolled into one cloud-based, open directory service. Try JumpCloud Free If you’re ready to secure your network with a cloud RADIUS server that provides interoperability and Zero Trust security, sign up today for a JumpCloud account. It’s free for up to 10 users or devices. If you’d like additional information, feel free to consult JumpCloud’s Knowledge Base, or drop us a line.
Floods are the most common and expensive natural disaster in the United States, with 90 percent of all national disasters involving some sort of flooding and flash floods causing approximately 200 deaths each year. Flash flooding typically occurs within the first six hours of a heavy rainstorm or after a dam or levee breaks causing a sudden release of water, leaving community leaders and emergency managers with very little time to react. This limited time before an immediate flood threat is why communities must train and prepare prior to the storms and flooding, and why the Department of Homeland Security’s Science and Technology Directorate (S&T) is developing a variety of tools responders could use to enhance community resilience. Part of the mission of National Preparedness Month is to foster conversations and share information on what S&T has learned. Currently, FRG is holding a National Conversation on Flood Resilience to gather firsthand testimonies and best practices from those who have lived through the worst of floods. Details that can assist future flood response preparation include answers to questions such as the following: - What type of data and information do first responders need to have at their fingertips to create and maintain effective flood response plans with emergency management partners and why? - Where do emergency managers currently find this data, how do they maintain and share it, with whom and how often is it exchanged? - When emergency managers are certain a flood emergency will occur, what type of support do they need, when and how do they implement their plan? Road conditions, weather predictions, power status, and the state of communications are top-tier essential elements of information for any hazard. - How can responding agencies motivate community residents to either evacuate or take protective action as a flood emergency nears?
To what extent are emergency managers currently able to obtain this information during the evacuation preparation phase and how do they receive it? How can emergency managers most effectively communicate information to residents? The First Responders Group (FRG) is working with the Federal Emergency Management Agency (FEMA) and the National Weather Service to help communities mitigate harm and bolster resilience. While FRG cannot control the weather, it can develop the tools needed by responders on the front lines to help stand up to the challenges floods present, including evacuation of community residents, providing support services, and restoring infrastructure. Through its five-year Flood Apex Program, FRG is homing in on how to best serve first responders and others who have specific data and information needs related to flood preparedness, mitigation, response, and recovery. “The goals of the Flood Apex Program are to save lives, decrease uninsured losses and reduce property damage,” said FRG Director Daniel Cotter. “FRG hopes to achieve this by increasing access to community, regional, and national data and information sources; analytical tools; and other resources that may help everyone make better flood resilience decisions.” “We’re working directly with partners who have experience in all aspects of mitigating and responding to flood events,” Cotter continued. “We want to learn from those who have dedicated their careers to developing innovative approaches and solutions, and for community practitioners tasked with implementing local solutions.” Currently, the Flood Apex program is working towards a concept of a National Flood Decision Support Toolbox (NFDST), which aims to serve everyone involved in flood resilience: first responders, emergency management, public works, critical infrastructure experts, real estate agents, insurance agents, home builders, and individual community residents.
“The Flood Apex Program envisions three phases with each informing and enabling future projects. It will culminate in the development of the NFDST, translating science into actions that reduce risk exposure in high-risk communities,” said FRG Program Manager Denis Gusty. “When developed, the NFDST will be transitioned to FEMA to assist federal, state, local, tribal, and territorial users in making investment decisions related to floods.” Knowing what type of support first responders need from the NFDST is crucial. Help FRG make the right research and development decisions on flood response by using the National Conversation on Homeland Security Technology to answer the questions posed in this article. In this particular situation, less is not more! Feedback from those in flood-prone areas is essential to guiding FRG’s work.
30 Essential Terms of Project Management Published on Thu 21, 2020 Knowing project management jargon is the very first step, both in preparing for standard project management exams and in working on or managing a project. While acquaintance with PM terms is a primary step, not knowing the language of project management can easily lead to miscommunication and eventually to critical losses during project execution. So, let’s review the project management glossary to identify some standard terms that you may come across or should know as a project manager or as an associate working on any project. 1. Activity: PMBOK V6 defines it as the smallest portion of a project used in planning, tracking, and control. It is also identified as an amount of work performed that converts inputs into appropriate outputs. 2. Assumptions: Assumptions are factors listed while dealing with the statement of work. They contribute to ensuring the validity and results of a project. 3. Backlog: The backlog is essentially everything that needs to be done to ensure the completion of a project. It is a prioritized list of tasks derived from the requirements and roadmap of the project. 4. Baseline: It is a fixed reference point against which all the actions and tasks of project progress are compared. 5. Business Case: It captures the reasoning for initiating a project and answers the question, “Why are we undertaking this project?” The business case generally outlines the problem statement, potential investments, and the benefits and impact of the project. 6. Work Breakdown Structure: It is a deliverable-oriented breakdown of a project into smaller components. The main objective of the WBS is to organize the work into manageable sections. 7. Milestones: These are tools to mark specific points along a project timeline and describe sets of related deliverables.
Milestones are one of the components of the Gantt chart and are mainly used as anchors that may signal a project start, the end of a phase, etc. 8. Stakeholder: Stakeholders are people who are engaged in the project and are influenced by it. They can be from within the organization or from outside it. 9. Work Plan: It creates a clear path to the desired outcome. It is essentially a project management plan that identifies and describes the steps needed to achieve the goals. 10. Change Management: It is a written document that holds the set of activities and procedures that need to be undertaken by various roles to control and deal with changes. 11. Gantt chart: Named after Henry Gantt, it is a bar graph that visually represents a project plan over time. A Gantt chart typically shows the status of tasks and who is responsible for them. The chart helps with planning, scheduling, and managing relationships between tasks. 12. Dashboard: A project management dashboard displays key performance indicators of a project along with crucial problems that require further attention. 13. Deliverables: A deliverable is an element of the output of a project and of the activities and tasks within the scope of the project. 14. Dependencies: Dependencies are the interlinking relationships between preceding tasks and succeeding tasks. 15. Kickoff Meeting: It is the first meeting between the client and the project team and sets the course and tone of the project. 16. Agile: Agile is a project management approach that focuses on iterative progress and delivery and starts delivering business value from the very beginning of the project. 17. Minimum Viable Product: It is a product development methodology where the team moves towards the end product by adding the minimum number of features needed to satisfy early adopters in each iteration of the project. 18.
Mission Critical: The term identifies anything as being critical to the success of a project. While it can be associated with an activity or deliverable, it can also be associated with a whole project. 19. Contingency Plan: It is the plan B that comes into action when the primary plan does not work. Contingency plans provide solutions to critical risks that may have disastrous effects. 20. Cost Estimation: Cost is an essential factor for any project. As the name suggests, cost estimation is the method of calculating the entire project cost. 21. Network Diagram: A network diagram is the graphical representation of all the activities, time durations, roles, responsibilities, relevant interrelationships, tasks, and the workflow scheme of your project. 22. Critical Path Method: The critical path is the longest-duration path through a network diagram, which determines the shortest time in which the project can be completed. 23. Dummy Activity: It is a simulated activity of zero duration. Dummy activities are created for the sole purpose of demonstrating a specific relationship and path of action on the arrow diagramming method. 24. PMO: The project management office is a department in an organization that is in charge of overseeing project management in the organization. It can be an internal or external department with varying levels of control over projects. 25. Earned Value: It is an approach that monitors the project plan, actual work, and work value to identify whether the project is on track. It shows how much of the budget and time should have been spent for the current amount of work done. 26. Feasibility Study: It is the assessment that determines the viability of a project along various lines like legal and technical feasibility. 27. Organizational Structure Planning: It is a hierarchical structure of the organizational framework and is used for project planning, resource management, etc. 28.
Risk Management: It is the process of identifying possible risks that may arise during a project and measuring their potential impact on different aspects of the project. 29. Scope Creep: Also referred to as requirement creep, it is the change in project scope at any point after the project begins. 30. Sprints: Sprint is identified as a set period of time during which a specific task or set of tasks/activities are completed and reviewed. So, there it is. You know are now acquainted with 30 essential terms of project management. However, a word of caution is in order. This is not an extensive list, and there is much more to learn. If you are interested in extensively diving into acquiring project management training, Certification Planner brings to you digital courses that you can take up from the safety of your home. Choose from , , and certification, depending upon your career requirements and secure your professional career. Visit us at to explore additional training solutions or drop your requirement at , and we will reach out to you. Happy Learning!
The IT world is full of myths and legends circulated via email or simply spread by word of mouth. These legends are not the infamous hoaxes or chain letters, but assumptions that certain things are true when they usually aren't. However, they are so difficult to prove that they are accepted as true without any evidence whatsoever. And these strange myths also exist in the IT security world. One of them, based on an accepted fact, is being increasingly refuted: that creators of malicious code are good programmers. Some time ago, when viruses were in their prehistoric era, this was true. For a program to multiply automatically, without users realizing or primitive security programs detecting it, it must have been created by a good programmer. Programmers needed a wide knowledge of systems, the options they offered, and a huge capacity to innovate. However, nowadays, these programmers are no longer the "stars" of IT coding. Malicious code is becoming coarser, less innovative, and sloppier.

The Gaobot.AAF case

The statement that creators of malicious code are poor programmers (or at least, not as good as we think) is not unfounded, as there are methods for scanning programs to see how they were built. One of them, widely used for its visual results, offers a graphic representation of the components of a program. These graphs are lines that relate each sub-routine of the code, so that a simple and well-built program returns a simple and clear graph. However, a program without any internal organization and without adequate systemization produces an extremely complex and disorganized graph. What's more, two similar programs produce similar graphs. PandaLabs, Panda Software's malware detection laboratory, has used these graphs to establish the similarities between different variants of a malicious code, since calls to the same function in different programs are shown graphically.
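As a rough illustration of the idea (not PandaLabs' actual tooling), a call graph can be treated as a set of caller-callee edges, and two programs can be compared by how many edges they share. Variants built from the same leaked source share most of their structure; unrelated programs share almost none. The function and edge names below are hypothetical:

```python
# Illustrative sketch: comparing programs by their call-graph structure.
# Each graph is a set of (caller, callee) edges.

def jaccard_similarity(edges_a, edges_b):
    """Jaccard index of two edge sets: |A intersect B| / |A union B|."""
    union = edges_a | edges_b
    if not union:
        return 1.0  # two empty graphs are trivially identical
    return len(edges_a & edges_b) / len(union)

# Hypothetical call graphs: two bot variants and an unrelated program.
variant_a = {("main", "spread"), ("main", "connect_irc"), ("spread", "scan_net")}
variant_b = {("main", "spread"), ("main", "connect_irc"), ("spread", "scan_net"),
             ("main", "log_keys")}
unrelated = {("main", "render"), ("render", "draw_frame")}

print(jaccard_similarity(variant_a, variant_b))  # 0.75: likely related variants
print(jaccard_similarity(variant_a, unrelated))  # 0.0: no shared structure
```

A high score between two samples suggests one was derived from the other's source, which is exactly the pattern observed across the Gaobot family.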
When PandaLabs analyzed a bot (Gaobot.AAF), they were surprised by the result: not only because it was spectacular (they called it the "Death Star" due to its resemblance to the space station from Star Wars) but because of its strange complexity. Why is this strange drawing returned? Simply because the original source code of the Gaobot bot family was released to malicious code writers, and each one created a new variant. But these variants were not optimized, and therefore each variant was more complex. Instead of demonstrating that they were good programmers, all the creators of the Gaobot variants did was prove that their supposed in-depth knowledge was a myth and that they are simple apprentice thieves who copy others' code.

The "undetectable" viruses

Another widespread myth, fed by many false email messages, is that there are viruses (worms, Trojans, etc.) that no security solution can detect. And unfortunately, even though this is not true, this myth is sometimes repeated as fact. A recent news article reported that a student had created a Trojan that recorded images from his classmates' webcams and then blackmailed them with the recordings. It was said that the Trojan was "undetectable." Yet the authorities created a system to detect and eliminate this code, which contradicts that claim. So, is it undetectable or not? The problem lies in the difficulty of detecting a particular Trojan. The majority of manufacturers of antivirus solutions depend on samples of malicious code to develop a detection and disinfection routine. For this to happen, two circumstances must arise:

1. The malicious code arouses suspicion in a user. If no message is displayed and no special action is carried out on the computer that makes the user realize something strange is happening, the system will remain infected, as a sample will not be sent to the laboratories for analysis.

2. The malicious code must have a certain rate of propagation.
This increases the probability of affected users notifying the laboratories of the appearance of the code. In the case of this Trojan, neither of the two circumstances arose. Like most Trojans, it did not show any messages or leave any clues that could give it away. What's more, as it was distributed to very few computers (only the hacker's classmates), it did not arouse any suspicion. This is an example of the malware situation today: small-scale and well-hidden samples. Therefore, antivirus companies will not detect it, as the report says. However, this statement is incomplete: it will not be detected until it has been discovered. In spite of this, this problem only arises with old malicious code detection systems. These systems rely exclusively on stored data about known malicious programs and do not incorporate any other detection mechanisms. Therefore, anything that is not stored in their signature database will be considered valid. More modern technology for combating malicious code prevents these problems: instead of clinging to previous knowledge of malicious codes, it seeks them out by analyzing their behavior. Therefore, a program that tries to carry out a malicious action on a computer will be blocked, not because it can be identified, but because of the action it was going to perform. While users continue to trust partial and outdated solutions to detect viruses and other malicious programs, they cannot adequately protect their computers; for them, "undetectable malicious codes" will continue to exist, instead of simply "dangerous programs unidentified up until now."
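The contrast between signature-based and behavior-based detection can be sketched as follows. This is an illustrative toy, with hypothetical action names, not a real scanning engine:

```python
import hashlib

# Hypothetical signature database: hashes of samples already analyzed in a lab.
KNOWN_SIGNATURES = {hashlib.sha256(b"old_worm_binary").hexdigest()}

def signature_scan(payload: bytes) -> bool:
    """Old-style detection: flags only samples already in the database."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_SIGNATURES

# Behavior-based detection: flags programs by what they try to do,
# regardless of whether the binary itself has ever been seen before.
SUSPICIOUS_ACTIONS = {"capture_webcam", "log_keystrokes", "disable_antivirus"}

def behavior_scan(observed_actions) -> bool:
    return any(action in SUSPICIOUS_ACTIONS for action in observed_actions)

new_trojan = b"never_before_seen_trojan"
print(signature_scan(b"old_worm_binary"))              # True: known sample
print(signature_scan(new_trojan))                      # False: "undetectable" to signatures
print(behavior_scan(["open_file", "capture_webcam"]))  # True: caught by its behavior
```

The new Trojan slips past the signature check entirely, yet the moment it tries to capture the webcam, a behavior monitor would block it — which is the article's point about "undetectable" being a property of outdated scanners, not of the malware itself.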
Strong password security is essential for online consumers these days. It is required to mitigate the chances of data breaches or cybercrimes caused by weak account passwords and the sharing of account security credentials. It has been observed that people use their birth date or a pet's name to create online account passwords. Whether a normal computer user or a professional, around 77% of online users create tenant passwords from phone numbers, dates of birth, a mother's name, etc. They don't take the stage of password creation seriously, which makes it easier for hackers to attempt brute-force attacks and get successful results too.

Recipe for Strong Password Security

"Tastes fantastic, simple to prepare": this principle should be kept in mind when an individual decides to create an account password. Online users must follow the mantra "change your password every 180 days." Some educational companies enforce password expiration standards to encourage faculty, students, and staff to change their passwords on a predefined schedule, and organizations in other fields can enable multi-factor authentication approaches to mitigate the password challenges. Below are basic tips to achieve strong password security for an online account:

- Prefer the Use of a Passphrase – Nowadays security experts suggest a "passphrase" instead of a simple password. Such a phrase should be comparatively long, with a minimum of 20 characters, and consist of seemingly random words combined with special symbols, uppercase letters, numbers, and lowercase letters. Users have to think of something that is easy to remember but hard to guess.

- Make the Password at Least 12 Characters Long – The longer the password, the better. Cybersecurity experts have noted that longer passwords are tougher for attackers to crack.
It becomes complicated for them to target an account if it has a password with more than 12 characters. Criminals tend to give up on and ignore accounts that hold long security passwords, which are difficult to guess.

- Activate Multi-Factor Authentication – Even if hackers succeed in accessing a target's account, there is no need to panic. Some cloud service providers offer multi-factor authentication, which is deactivated by default. Account holders have to activate this feature in their tenant to secure it, even if the account credentials get leaked.

- Ensure That Devices Are Secured – One of the strong password security standards is to secure the devices where account passwords are stored. Many people are in the habit of saving their passwords on portable electronic devices. If these devices are not secured properly, there are big chances of data leakage. Therefore, don't take any risk; simply enforce passwords on the devices that hold account credentials.

- Involve Symbols, Numbers, etc. – A password made up only of simple numbers, lowercase letters, or uppercase letters is easy for cybercriminals to guess. They try possible combinations to crack the target's account password, and no one knows when their guesses will turn out correct, leading to an unexpected bad day for the account holder. Thus, don't take a chance! It's better to be aware at an early stage and create a password with special symbols, numbers, and uppercase and lowercase letters.

- Never Post It in Plain Sight – This may seem obvious, but research has shown that many users post their passwords on sticky notes on their PCs. A completely bad idea! If you cannot remember an account password, write it down on a note that is invisible to unauthorized computer users.
Only one authentic person should be allowed to see that note; no one else should be permitted.

Strong Password Policy Can Only Be Achieved by Humans

Neither a machine nor an automated app can help human beings create their account passwords. It is the major role of each individual to secure their tenant password and store it safely wherever they wish. Strong password security can only be achieved when every person in the cyber world is serious about password protection. The basic tips above are enough to achieve strong password security; the only thing required is to be aware and act accordingly.
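The passphrase and complexity advice above can be sketched in code. A minimal illustration using Python's standard-library secrets module; the wordlist here is a tiny placeholder, and a real generator should draw from a large curated list:

```python
import secrets
import string

# Illustrative wordlist only; use a large curated wordlist in practice.
WORDLIST = ["correct", "horse", "battery", "staple", "orbit", "lantern",
            "pixel", "mango"]

def make_passphrase(n_words: int = 4) -> str:
    """Random words joined with symbols and a number: long but memorable."""
    words = [secrets.choice(WORDLIST).capitalize() for _ in range(n_words)]
    return "-".join(words) + str(secrets.randbelow(100))

def meets_policy(password: str) -> bool:
    """At least 12 characters, with upper, lower, digit, and symbol."""
    return (len(password) >= 12
            and any(c.islower() for c in password)
            and any(c.isupper() for c in password)
            and any(c.isdigit() for c in password)
            and any(c in string.punctuation for c in password))

print(meets_policy("password123"))      # False: too short, no upper or symbol
print(meets_policy(make_passphrase()))  # True
```

Note the use of secrets rather than random: the former is designed for security-sensitive randomness, so the generated phrase is not predictable from the generator's state.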
Many business practices change over time, but few have the potential to radically alter how we live and work. Cloud computing is one innovation that's not only here to... In today's tech-powered marketplace, companies need options in order to stay on top of computer and IT trends. Whether it's in the form of data collection, storage, security, or infrastructure needs, managing these fast-moving pieces can be tricky. Cloud computing helps to address some of these challenges by offering flexible, on-demand services to meet a variety of software needs. Although cloud computing is practical and modern, it does introduce certain security risks. In this post we'll take a closer look at what cloud computing is, how to address known security risks, and how to prepare for your organization's cloud needs.

What is Cloud Computing?

Cloud computing is the process of delivering standard and traditional computing services over the internet. Traditional systems such as computer storage, databases, networks, intelligence hubs, and other systems are monitored and maintained in the cloud rather than in an on-site or physical space. At MicroTech Systems, one of the questions we often receive from clients is, "How reliable is cloud computing?" The reliability of cloud computing often depends on factors that you can actually control, which should bring peace of mind. Reliability comes in the form of factors like:

- Reputable and trustworthy service providers
- Reliable IT management structure
- Secure data storage solutions
- Fast and safe responses to security incidents

How Does Cloud Computing Work?

Primarily, cloud computing works by transmitting information over a secure internet connection. For cloud computing to work effectively, information (in the form of data) must be able to travel back and forth from one computer system to another.
Because cloud computing takes place over the world wide web, important information such as files, drives, applications, and analytics can be accessed by users all around the world. This accessibility and uptime provides greater flexibility, especially to businesses. Although the cloud is decentralized (meaning that data isn't usually stored on-site), businesses can benefit from hosting service providers that manage the cloud's infrastructure and maintenance on their behalf.

When to Use Cloud Computing

Cloud computing can and should be used in any circumstance in which greater flexibility, access, or storage is needed. This is a popular option for:

- Dispersed corporate businesses
- Remote work models
- Public or shared communication channels
- Social technology (related to sources of entertainment)
- "Big data"

As cloud computing grows in size and popularity, more individual and personal use cases will continue to appear. This growth implies that average users will come to expect the flexibility and ease of use associated with cloud computing.

Main Risks of Cloud Computing

Even though cloud computing seems to be the way of the future, there are several risks and vulnerabilities it can expose. Many of these risks are the result of poor planning or failure to act on known cybersecurity information and best practices. Below are a few common cloud computing risks that you should be aware of, no matter what stage you're at in your cloud computing journey.
- Reduced level of direct data control – Cloud computing puts another organization in charge of your data, which underscores the need to hire or use the right provider
- Risk of unauthorized use – Greater flexibility means a higher number of unauthorized access attempts by threat actors and cybercriminals
- Gaps in credentials – As more companies migrate to remote models, verifying appropriate credentials is more important than ever
- Loss of data – When operating in the cloud, there is always a chance that vital data could be lost, stolen, or otherwise compromised

How to Address Cloud Computing Risks

There are certain steps you can take to ensure that the cloud computing risks you face are relatively low and manageable. If you're a business that serves customers (or regularly collects consumer data), always prioritize the protection of that data as part of your holistic cloud computing strategy. To minimize cloud computing security risks, take the following steps:

- Only hire or use a reputable cloud computing service provider with known experience
- Stay active and informed in your company's cloud computing agreements
- Provide in-depth employee training to anyone working from cloud-based systems
- Adopt a comprehensive access and security policy (particularly around credentials)
- Be transparent with data storage protocols
- Regularly reassess cloud computing policies to stay up-to-date with current trends, threats, and mitigation strategies

Does Your Company Need Cloud Computing?

Recent data shows that enterprise companies are moving quickly to cloud-based models, either through hybrid or multi-cloud strategies. In fact, up to 70% of enterprise companies plan to increase their cloud computing budgets over the next two to three years. As you consider whether your company could benefit from cloud computing, it's important to weigh potential costs, resources, and growth goals.
Cloud services are proven to help you expand your growth potential, increase communication, and stay current with industry-specific software solutions. Adopting a cloud-based business model is one way to stay competitive in a crowded marketplace. Yet having the resources and skill to respond to cloud computing risks is another factor entirely. As a Boise tech company or small business, we know how valuable it is to have a local connection for your cloud computing needs and questions. We encourage you to reach out today to MicroTech Systems for an evaluation and overview of our reliable cloud computing services.
The US Federal Aviation Administration (FAA) is teaming up with CNN, BNSF Railway and drone-maker PrecisionHawk to study the usage of drones in urban environments, beyond previously allowed operations. CNN is planning to use drones to take journalism to the next level by filming in urban areas, while BNSF is likely to use the flying devices to inspect its railroads across hundreds of miles, and PrecisionHawk will be using the devices to collect data on crops and agriculture operations. FAA Administrator Michael Huerta said: "Even as we pursue our current rulemaking effort for small unmanned aircraft, we must continue to actively look for future ways to expand non-recreational UAS uses. This new initiative involving three leading U.S. companies will help us anticipate and address the needs of the evolving UAS industry." The FAA also introduced an app, B4UFLY, which will provide drone users with vital information including the safety and legal regulations required to fly drones in a particular area. The app will provide a 'status' indicator regarding the area of proposed flight, along with information on the parameters that drive the status indicator, interactive maps with filtering options, and contact information for nearby airports. Huerta added: "We want to make sure hobbyists and modelers know where it is and isn't okay to fly. While there are other apps that provide model aircraft enthusiasts with various types of data, we believe B4UFLY has the most user-friendly interface and the most up-to-date information."
Cybersecurity as a career has increased in popularity over the last few years, which explains why more students and enthusiastic learners have enrolled in cybersecurity training. As of now, there is a surge in demand for security professionals in the digital space, with the Federal Government predicting that cybersecurity is becoming more important than ever. With more businesses migrating online, cyberattack threats have simultaneously increased. This reflects the growing demand for cybersecurity professionals, which is expected to grow by 31% by 2029. That said, interested learners can consider cybersecurity bootcamps for their training.

What are Cybersecurity Bootcamps?

Cybersecurity bootcamps are among the many ways that cybersecurity students can advance their skills and expertise. Bootcamps offer full- and part-time studies where students gain the technical skills and certifications required to enter the cybersecurity world. Unlike college degrees that take four years, cybersecurity bootcamps are short, lasting between 12 and 14 weeks, and intense. The short, intense study period trains students to quickly identify, mitigate, and resolve cybersecurity incidents. Training in bootcamps combines theory and hands-on learning. Different cybersecurity bootcamps focus on varying cybersecurity roles, including cybersecurity analyst, penetration tester, cybersecurity engineer, compliance analyst, system security administrator, cybersecurity consultant, and more.

What to Consider when Choosing a Cybersecurity Bootcamp

Cybersecurity bootcamps operate differently than a typical college education. College programs focus on philosophical teaching, while bootcamps focus on imparting skills through hands-on experience. This applies to coding and data science bootcamps as well. Since cybersecurity bootcamps take a short period, training is more rigorous than the typical college learning process.
Also, since bootcamps teach different topics, you should check the specific cybersecurity topics covered before enrolling. Research what's expected of you before joining the bootcamp, or learn more about them with Cybint's Ultimate Bootcamp Guide. To find a suitable bootcamp, consider the following pointers:

Are There Any Skills Required?

The prerequisite cybersecurity knowledge required when enrolling depends on the program that you choose. However, most cybersecurity bootcamps targeting beginners, such as those with slight networking, IT, and operating systems knowledge, only require a high school diploma. Absolute beginners should choose their course carefully. Given the tight curriculum, evaluate whether you can master the coursework within the training period sufficiently to get a job after graduating. If you doubt your ability, it is prudent to start familiarizing yourself with the course material before joining the bootcamp. Depending on the topic of the bootcamp, beginners should also consider taking foundational lessons on security and coding. For interested persons with some cybersecurity background, intermediate programs are the best option. To join intermediate programs, you should have prior IT experience, networking certifications, and familiarity with cybersecurity architecture.

What are the Admission Requirements?

Different cybersecurity bootcamps have varying admission requirements that students should satisfy before being accepted. The requirements are meant to weed out students who are not passionate or committed enough to withstand the rigorous learning. Note that cybersecurity bootcamps do not conform to national or state education requirements. Beginner programs generally have minimal admission requirements. On the other hand, high-level programs have stringent admission requirements; they may begin with skill tests and interviews before students are admitted.
You may also be asked to produce a college degree or baseline certification for advanced programs.

What Certifications Do You Get After a Cybersecurity Bootcamp?

You should also consider the certifications offered by the organizers after graduating. Certifications are the only proof that cybersecurity students without college degrees can use to get employment opportunities. Common certifications offered by cybersecurity bootcamps include:

- Certified Information Systems Security Professional (CISSP)
- Certified Information Security Manager (CISM)
- Certified Information Systems Auditor (CISA)
- CompTIA Security+

While cybersecurity bootcamps will give you 90 percent of what you need to begin a career in cybersecurity, enroll in additional courses to stand out from other job applicants.

What Do Cybersecurity Bootcamps Teach?

As mentioned, cybersecurity bootcamps focus on hands-on experience and skills. Therefore, learning focuses on completing assignments, working on labs, and troubleshooting security problems. However, coursework varies slightly depending on the specific topic that you choose. For instance, beginner courses teach broad concepts such as system administration, maintaining computer systems, networking basics, and more. Intermediate courses teach ethical hacking, incident response, security auditing, and SIEM administration. Advanced classes are geared towards penetration testing, advanced infrastructure, Python programming for security, and more.

So... Are Cybersecurity Bootcamps Worth It?

Cybersecurity bootcamps equip learners with cybersecurity skills that are immediately transferable to the real world. Employers also currently rely on bootcamps for talent. Students with a genuine interest in cybersecurity can apply for financial aid through GI Bill benefits or non-profit programs. That said, the answer as to whether cybersecurity bootcamps are worth it is a resounding yes.
Currently, employers prefer bootcamp graduates for their practical experience. Besides high chances of landing a job, cybersecurity employees command high salaries. To learn more about the opportunities in the cyber field, download our free infographic on the Cybersecurity Career Landscape.
The most important carriers in the US are working with the Federal Communications Commission to develop appropriate tools to fight cellphone theft. One of the projects consists of building a common database of mobile phones that are reported lost or stolen. The carriers will then block access to these phones on their networks. The stolen phones will not receive voice or data service on the networks, which will decrease their resale value. Over time, it's hoped this will in turn decrease thefts. Julius Genachowski, chairman of the Federal Communications Commission, negotiated the creation of the database, convinced that new methods are needed to fight smartphone theft. "New technologies create new risks. We wanted to find a way to reduce the value of stolen smartphones," said Genachowski. Currently, the theft of mobile devices is one of the fastest growing infractions in the US. The database of stolen cellphones has been welcomed by police associations. One of these pressure groups, The Major Cities Chiefs Association, which has members in the US and Canada, last month published a proposal urging the Federal Communications Commission to convince the carriers to disable stolen mobile devices. source: WSJ
One of the things you probably do the most when on the internet is using a search engine. These websites scan millions of pages for your query terms, all within a matter of seconds. Businesses spend time and money on search engine optimization, a set of methods by which they can improve their ranking in search engine results. A better ranking in search engines can translate into more web traffic for businesses, and in turn more revenue. From a hacker's point of view, this poses an opportunity. Instead of legitimate sites reaching the top of the results page, what if a compromised site were to appear first? This could make the life of the hacker even easier, with those visiting the page potentially unaware that they are the victim of an attack; this is the reality when we examine search engine poisoning.

What is search engine poisoning?

In search engine poisoning, a cybercriminal will use the way search engines operate to try to carry out an attack of some kind. In the same way that businesses try to optimize keywords on their pages to attract visitors, so too could cybercriminals; by keeping up with current trends, a hacker could create a page relevant to a trending topic that is receiving lots of searches, like a new product or a developing story in the news. The result of this could be a seemingly innocuous search result that bombards the user's machine with malware when opened.

How can I protect myself?

Search engine poisoning can be difficult to protect against, especially since hackers can easily adapt to, or even fool, search engine algorithms. For example, advancements in technology have made it possible for the operators of these websites to tell whether their site is being accessed by an automated search engine process or by a genuine user. Where a search engine might see a perfectly legitimate site, the genuine user could be shown the compromised version of the site.
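One rough way to probe for this kind of cloaking is to fetch the same page twice, once with a crawler User-Agent and once with a browser User-Agent, and compare the two responses. A simplified sketch of the comparison step (the fetching is omitted, and the HTML strings and the 0.8 similarity threshold are illustrative):

```python
from difflib import SequenceMatcher

def cloaking_suspected(crawler_html: str, browser_html: str,
                       threshold: float = 0.8) -> bool:
    """Flag the page if the version served to the crawler differs
    substantially from the version served to a real browser."""
    similarity = SequenceMatcher(None, crawler_html, browser_html).ratio()
    return similarity < threshold

honest = "<html><body>Genuine product review with helpful tips.</body></html>"
cloaked = "<html><body><script src='//evil.example/payload.js'></script></body></html>"

print(cloaking_suspected(honest, honest))   # False: both audiences see the same page
print(cloaking_suspected(honest, cloaked))  # True: crawler and user see different pages
```

Real pages vary legitimately between fetches (ads, timestamps), so a production check would normalize the HTML and tune the threshold; the point here is only the two-fetch comparison idea.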
Search engines update their algorithms and security features periodically, but since they may not even realize a page is compromised, it's important to take your security into your own hands. Using anti-virus or anti-malware software can help detect a website affected by search engine poisoning, but other hallmarks of compromised pages, like pop-ups, redirects, and overly persistent ads, can also be used to identify an infected site. You can also compare the page to the times you may have accessed it before; if it looks significantly different or somehow altered, it could be compromised in some shape or form. Most browsers feature various safety measures to prevent obviously compromised sites from displaying at all; enabling these in your browser preferences can help prevent you becoming a victim of search engine poisoning as well. It's important to stay abreast of ways you or your business might fall victim to a cyberattack. If you're looking for a partner with whom to develop a comprehensive cyber security strategy, reach out to us at Cyber Sainik today!
Excel spreadsheets can be a great way to collect information. However, how valuable that data is depends greatly upon the conformity of the data inputted. One way to keep the data uniform is to control the data that is entered by inserting drop-down lists with preset selections that users can choose from. This is especially helpful when multiple people are entering data. This post discusses how to control the data that can be entered into certain columns by adding drop-down lists with preset options. The following video demonstrates how to accomplish this: How to Control Data Entered into Cells using Drop-Down Lists in Excel

Inserting a drop-down selection into a column or set of cells forces users to choose from the preset list, allowing the creator of the spreadsheet to control the data a user can enter. This can prevent users entering incorrect data, or data in a format that is unusable to the creator of the spreadsheet. Some quick examples of where preset data can be helpful are:

- Listing states in a 2-digit format - prevents full state names, misspellings, and slang like "Cali".
- Having yes or no answers be "Y" or "N" - prevents variations like yes, yeah, yah, nah, nope, etc.
- Using "New" or "Returning" for customer type - prevents different versions of the same word like recurring, previous, repeating, etc.
- Zip codes - to prevent typos when you are collecting from a smaller demographic where the zip codes can all be accounted for.
- Any other question where you have a specific set of answers you want users to choose from.

The examples above show several variations of answers users might enter if asked to fill in an answer. Obviously there are others, which proves how important it can be to control data for consistency so you can use the data more effectively.
Creating drop-down selection lists in Excel

Before creating a drop-down selection list in Excel, first decide which column(s) you want to add selections to as well as what the selection options will be. Planning this ahead of time will significantly reduce the amount of time it takes to create the drop-down lists.
- Open a new or existing spreadsheet in Excel.
- (Optional) If the spreadsheet is new, create headings for the columns.
- Click to add a new sheet that will be used to define the preset data used in the drop-down lists and name the sheet something that makes sense.
NOTE: Storing the list options in another tab helps protect the data set up for these lists.
- In the newly created sheet, enter all the answers you want to appear as options in the drop-down list for a column or group of cells in another sheet.
NOTE: Be sure to enter the answers in the order you want them to appear in the list. For this example, we are tracking customer satisfaction with the order process.
- Highlight the cells where you want to insert the drop-down list.
- Click on the Data tab and click on "Data Validation".
- In the Data Validation box, select "List" under the Allow: heading on the Settings tab.
- Click in the Source box to select the preset answers. Click on the spreadsheet tab where you listed the answers and highlight all of them.
- Click "OK" in the Data Validation box to save these settings. If you toggle back to the main sheet, you can now select the options you set from the drop-down list that appears next to each cell.
- Continue this process to add all of the necessary preset drop-down lists.
- Verify each drop-down list in the other sheet works as expected.

(Optional) Hide the sheet with the list of options

If desired, you can hide the sheet where you created the preset list answers. This can help prevent users from accidentally modifying the data in each list.
To hide the sheet:
- Right-click on the title of the sheet at the bottom of Excel and select "Hide" from the pop-up menu.

If you ever need to update the answers in a list, remove a list, or add a list, you can do this easily by unhiding the sheet with the list answers.

To unhide a sheet:
- Right-click on the name of an existing tab.
- Select "Unhide" from the pop-up menu.
- Click on the name of the sheet to unhide and click "OK".
- The sheet will automatically be visible again.

Creating drop-down lists allows you to control what answers users can enter in certain columns. This helps provide better data because it controls the formatting of the data, which can make it much easier to filter through and analyze. Additionally, the sheets with the preset answers to the drop-down lists can be hidden so users do not accidentally modify this data. If needed later, these sheets can be unhidden so the data can be updated, added to, or removed. As always, knowing how to manage the format of data ensures the data is uniform, and therefore, more valuable.
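If you need to generate such spreadsheets programmatically, the same setup (options on a separate, hidden sheet, plus a List-type Data Validation rule) can be scripted. Below is a minimal sketch using the third-party openpyxl library, assuming it is installed; the sheet names, answers, and file name are made up for illustration:

```python
from openpyxl import Workbook
from openpyxl.worksheet.datavalidation import DataValidation

wb = Workbook()
main = wb.active
main.title = "Responses"
main["A1"] = "Satisfaction"

# Preset answers live on their own sheet, as in the steps above.
lists = wb.create_sheet("Lists")
for row, answer in enumerate(["Very satisfied", "Satisfied", "Unsatisfied"], start=1):
    lists.cell(row=row, column=1, value=answer)

# List-type validation pointing at the options sheet, applied to A2:A100.
dv = DataValidation(type="list", formula1="=Lists!$A$1:$A$3", allow_blank=True)
main.add_data_validation(dv)
dv.add("A2:A100")

lists.sheet_state = "hidden"  # optional: hide the options sheet
wb.save("survey.xlsx")
```

Opening survey.xlsx in Excel shows the same drop-down arrows next to cells A2:A100 that the manual steps produce.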
https://blogs.eyonic.com/how-to-control-data-entered-into-cells-using-drop-down-lists-in-excel/
The term IoT or Internet of Things refers to a network of physical objects or "things" embedded with electronics, software, sensors and connectivity that enables them to achieve greater value and service by exchanging data with the manufacturer, operator and/or other connected devices. Each thing is uniquely identifiable through its embedded computing system but is able to interoperate within the existing Internet infrastructure.

The benefits to the industry are numerous. DHL and Cisco published a white paper indicating that IoT will deliver $1.9 trillion in value to SCM and logistics. It is also estimated that by 2020, 50 billion devices will be connected to the internet, compared to 15 billion today. According to a recent Internet Project survey and study done by Pew Research Center, a large majority (83 percent) of technology experts and engaged Internet users agreed with the notion that the Internet/Cloud of Things, embedded and wearable computing (and the corresponding dynamic systems) will have widespread and beneficial effects by 2025.

IoT solutions have a crucial role to play in transforming the logistics industry. Some of the benefits of introducing IoT in the 3PL industry include:
- Collaborating information
- Ambient intelligence
- Resource optimization
- Process automation

There are different types of IoT devices available in the market. Some of the more commonly used devices for transportation and contract logistics include:
- Telematics devices
- Wearables like glasses and wristbands

The potential uses of IoT-enabled solutions across various facets of the transportation industry are listed below. In this industry, shipment "in transit visibility" is an area of critical interest to all customers. But with current technologies, LSPs face a challenge in providing visibility into the exact location of in-transit shipments to their customers.
In warehousing, 3PLs face challenges in the day-to-day operations of their warehouses such as:
- Stock turnaround ratio
- Resource productivity
- Manual intervention in SCM
- Stock integrity in a warehouse
- Technology transfer/upgrade

IoT-based solutions can help resolve these issues. For example, RFID and sensors can minimize issues in relation to the visibility of in-transit shipments in warehouses, which means that a company can provide end-to-end visibility of the shipments across its supply chain. In smart warehouses, wooden pallets with RFID sensors can be used for visibility and tracking while at the same time helping to calculate warehouse slotting and optimization.

Vertical solutions around perishables and pharmaceuticals are one of the fastest growing segments of the business. For tracking perishable and time-bound shipments, airlines have introduced smart air freight pallets and ULDs so that real-time updates can be shared by carriers with the LSPs. In ocean freight, wireless temperature sensors and data loggers are placed in containers to monitor the temperature. If there is any change/variance in the temperature, an alert is sent to the captain of the ship. This enables a quick check and resolution of the issue impacting the cargo.

HCL has developed unique solutions for logistics companies, especially in the areas of fleet management, critical order processing, warehouse utilization, and warehouse slotting. Do contact HCL if you need such solutions. Once IoT is implemented by 3PLs, tracking and tracing will become a lot faster, and in warehouses with connected pallets, stock-taking accuracy will get much higher. Vehicle optimization and maintenance scheduling can also be carried out quickly and efficiently in the transportation sector.
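The container temperature alert described above is, at its core, a threshold check over streaming sensor readings. A minimal sketch of that logic (the container IDs, the 2-8 °C band, and the readings are invented for illustration, not from any real deployment):

```python
# Sketch of the reefer-container alert logic: flag any reading
# outside the allowed band. Values below are illustrative only.
ALLOWED_RANGE = (2.0, 8.0)  # allowed band in degrees Celsius

def check_reading(container_id: str, celsius: float, alerts: list) -> None:
    """Append an alert if a reading falls outside the allowed band."""
    low, high = ALLOWED_RANGE
    if not low <= celsius <= high:
        # In a real system this would notify the ship's crew / LSP.
        alerts.append(f"{container_id}: {celsius:.1f}C out of range")

alerts = []
for cid, temp in [("CONT-1", 4.2), ("CONT-2", 9.7), ("CONT-1", 3.9)]:
    check_reading(cid, temp, alerts)

print(alerts)  # only CONT-2 breaches the band
```

A production version would read from real data loggers and push notifications, but the decision rule stays this simple.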
https://www.hcltech.com/blogs/internet-things-intelligent-logistics
Do you wonder if technology has given us more information than needed? Perhaps, but sometimes it is very helpful, if not invaluable. There is so much good, solid data on the Internet that researchers can use it with great comfort and assurance. Almost gone are the days when academics were stuck in libraries searching old books for scraps of data to use in scholarly articles. All of the available information is also helping the general public. What’s happening? Obviously, searches have become infinitely easier and quicker. Also, something else is happening: information asymmetry is falling by the wayside.

What Is Information Asymmetry?

Many of you may not have heard of this somewhat novel phrase. Economists have known about it for a long time. Here’s a definition from Wikipedia: “Information asymmetry occurs when one party to a transaction has more or better information than the other party.”

Even if you think that you’re unfamiliar with this phenomenon you have, no doubt, had experiences in your life when you were the victim of information asymmetry. For instance, when you’ve purchased a new car, did you feel comfortable and empowered when you started to negotiate and haggle over price? Or, did you feel sort of out of control, not knowing exactly how much you should be paying for a car?

Here’s another example: You’re in your doctor’s office waiting to be called. You’ve got a list of concerns about which you know next to nothing. You’re actually a victim of information asymmetry.

How about this: You’re trying to find information on a stock or a particular industry. You go to your broker who, for some reason, tries to change your focus to another stock or industry. You aren’t sure what to do because you’re suffering from information asymmetry.

One more example: You are looking either to get a mortgage or to refinance your present mortgage.
You go to your bank or mortgage broker, who tells you the proposed loan is “the best you can possibly get, given your circumstances.” You have no idea whether the proposal is competitive or whether it can actually hurt you.

Technology to the Rescue

I’m sure that you know where I am going with this. To a very decent extent, information asymmetry has been shattered by technology. Why is that? The Internet can give you reams of information on an almost infinite variety of subjects and topics. The only prerequisite, and it’s an important one, is that you have both some basic searching skills as well as discretion and common sense about the source of the articles, studies, information, etc., that you find on the Internet. Given that you have a basic grasp of how to effectively search for a topic, and you have the wisdom and discretion to sense what’s real and factual so that you can sort out the amateur sites from the professional ones, you’re there!

Asymmetrical Situations That Can Be Avoided

Let’s look at the examples that I gave above and go through each of them. They should not be a major problem for you — you can be empowered and take advantage of these situations.

Buying a car. You should not feel powerless when you walk into an auto dealership to purchase a new car. There is so much information on cars — their quality, prices, and performance — that you should walk into the dealership fully armed with the necessary data and make a very wise purchase. There is simply no reason why you should be at a disadvantage in such a situation.

A visit to the doctor’s office. This one’s a little tricky. Certainly the Internet won’t qualify you to become a physician. There’s no doubt about that. However, there is a tremendous amount of good, sound data available to make you a very informed patient who can work with your doctor to find a rational and safe solution to your health problems.
During an annual visit with my doctor, I showed him the results of a European study on a common condition that affects many of us. Actually, the doctor told me that he had read the study, but he then told me that one of his patients found it on the Internet, printed it out, and gave it to him! A great case of asymmetry shattered.

Purchasing a stock. Some people blithely take the advice of brokers without doing independent research when purchasing a stock. I’ve done that, and I’ve gotten hurt by my laziness. There’s just too much financial information on companies and industries for you to go to your broker uninformed.

Getting a mortgage. So many of us have approached our banks as victims — victims either of misinformation or no information. With so much information on mortgages, that no longer has to be the case. In fact, one of the companies taking advantage of information asymmetry is a company called LendingTree.com. LendingTree presumably works with certain banks that provide competing offers to the prospective borrower. Even in this case, there is plenty of information available for you to check out whether or not you’re getting the best deal available.

Markets Are More Efficient Today

Many economists have written about the fact that the more we know about a sector, for example the stock market, the more efficient it is, meaning that it is working in a well-oiled manner to the benefit of both sides to a transaction. So, whether you are buying a car, visiting the doctor’s office, buying a stock, or getting a mortgage, you can deal in a much more efficient and fair environment than you were able to several years ago, before the widespread use of the Internet. This makes the vendors of goods and services more likely to be honest — they’re forced to be by the mere mass of good information out there. As a result, the purchasers are better served by getting better prices. All in all, quite a good situation.

Theodore F. di Stefano is a founder and managing partner at Capital Source Partners, which provides a wide range of investment banking services to small and medium-sized businesses. He is also a frequent speaker to business groups on financial and corporate governance matters. He can be contacted at email@example.com.
https://www.ecommercetimes.com/story/information-asymmetry-shattered-by-technology-54380.html
Impact of drone delivery on sustainability and environment

How Drones Are Helping Save the Environment

The word 'drone' (drones are frequently referred to as UAVs, or unmanned aerial vehicles) often brings to mind an image associated with warfare, owing to the heavy use of drones by the military. Nevertheless, this is a myth! There are numerous ways that drones are being used in commercial operations outside this domain. They are being used more and more in other ways in society: they are helping aid workers, and even ecologists have started putting these devices to use towards sustainable development. Their capability to access, inspect and gather data from an area of concern in minimal time and at minimal cost is almost incomparable. The use and applications of drone technology are being extended to make the devices active tools in humanitarian work and safeguarding the environment. Used suitably and with the correct regulation in place, drones can help improve work carried out in the field of sustainable development. Various examples of drones in development projects show the possibilities of using them in the field of humanitarian aid and environmental protection.

Renewable Energy Projects

Aerial technology is proving to be an emerging trend in renewable energy projects. A great example of this is solar farms. Drone inspections are keeping large-scale solar energy projects successfully running around the world. The thermal mapping ability of this technology is assisting solar companies in assessing anomalies quickly and carrying out rectification to efficiently manage production of solar energy. Simply put, drones are helping solar projects kick off and get going. The thermal imaging camera helps relay footage back to a team on the ground. Using a thermal drone one can map an area, spot invasive vegetation and dust on the solar panels and other problems that might occur. It can also detect defective panels that are overheating.
Often people relate mapping to real estate or agriculture: developing an in-depth view of an area that can be used for planning and modelling. But mapping can go beyond land to industrial emissions. Advanced sensor technologies in drones, coupled with AI and special software platforms, are capable of tracking invisible gases from above.

Deforestation and Wildlife Conservation

Drones are also being used to tackle illegal logging and industrial deforestation. The pandemic has proved that the natural world is something that should be cared for and preserved. And that includes wildlife. Drones can also monitor banned logging activity and map landscapes in their entirety. This allows conservation teams to keep an eye on the development of the land over time, and ensure that farms, plantations and poachers don’t upset wildlife and the environment. Today illegal poaching is causing certain animals to fall to the point of near-extinction, including elephants and tigers. Besides tracking poachers, drones are also deployed to count populations of animals. Drones are also helping with animal conservation efforts in many countries. In interior and inaccessible locations, this is usually a labour-consuming and costly process, but drones enable local conservationists to keep a finger on the pulse of local wildlife with ease.

It is well known that drones are very useful in agriculture. They help farmers to evaluate crop health and yield from above in a more cost-effective and efficient way. They help farmers make informed decisions on the use of pesticides and fertilizers. Drones not only match the manual labour that goes into agricultural work but perform better in terms of cost and time. By empowering farmers to use their land as efficiently as possible, drones can help to keep the necessary industry of agriculture free from waste and environmentally damaging practices.
Water is one of the most precious commodities around the world, but significant quantities are lost on a daily basis through leaking and broken pipes. Armed with infrared cameras, drones are being used in hot and remote locations to spot leaks in underground water pipes in the desert.

Research done by an environmental scientist at Lawrence Livermore National Laboratory shows that delivery drones can lessen the burden on the environment and deliver smaller packages faster than trucks, with delivery trucks accounting for a significant share of transport-related greenhouse gas emissions.

So, drone technology and its applications are ever growing and are offering multiple and dynamic technological and social impacts. Filling the gap between satellites and ground surveying, drones enable fast, remote access to a particular area, allowing activists, scientists and aid workers to compile reports, analyse situations and obtain data about a region, issue or disaster faster and more accurately than having teams of people on the ground, while also saving time. Beyond monitoring and observation, drones can also proactively be used to help get medical supplies to those in need, plant trees and monitor deforestation.
https://drone-technology.ciotechoutlook.com/cxoinsight/impact-of-drone-delivery-on-sustainability-and-environment-nid-7306-cid-171.html
The cache feature of a search engine is a useful feature that allows you to access a website even if it has been taken offline or removed completely from the internet. Security firm Aladdin has reported that an attack on a university originated from a 'poisoned cache': an infected page that had been taken offline for a certain amount of time, but survived in a search engine cache. Although the name of the search engine was not revealed, it could only be one of the big three: Microsoft, Google or Yahoo. More worrying is the fact that the difference in URL (the cached pages have Google's URL) means that traditional list-based URL filtering systems cannot weed out those pages. Depending on whether the cached page is popular or not, the malicious code could remain online for months, infecting even more web users. Legally, the search engines could be liable to prosecution because they have become the largest 'legitimate' source of storage for malicious code. A simple solution could be for Google to look for strings of malicious code within their cache and remove them as they are found.
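The "simple solution" suggested above — scanning cached pages for known strings of malicious code — amounts to signature matching. A toy sketch of the idea (the signatures and page content below are invented placeholders, not real malware indicators):

```python
# Toy signature scan over cached page content.
# SIGNATURES are illustrative placeholders, not real malware indicators.
SIGNATURES = [
    "eval(unescape(",          # common obfuscation pattern
    "document.write('<iframe", # injected hidden iframe
]

def is_suspicious(cached_html: str) -> bool:
    """Flag a cached page if any known signature appears in it."""
    return any(sig in cached_html for sig in SIGNATURES)

page = "<html><script>eval(unescape('%68%61'))</script></html>"
print(is_suspicious(page))  # True
```

Real scanners are far more sophisticated (obfuscation handling, heuristics, sandboxing), but substring signatures are the simplest starting point a cache operator could apply.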
https://www.itproportal.com/2007/12/10/search-engines-cache-place-where-danger-lurks/
Researchers from Tomsk Polytechnic University found that polymer and hybrid silica-coated micro-capsules are more efficient in genome editing when applying CRISPR-Cas9. In the future, this development will significantly simplify and increase the efficiency of genome editing, which could help treat previously incurable inherited diseases, such as Alzheimer's, hemophilia and many others.

Clustered regularly interspaced short palindromic repeats / Cas9 (CRISPR-Cas9) is a revolutionary genome-editing technology with huge potential for research and clinical applications. According to Prof. Boris Fehse of the University Medical Center Hamburg–Eppendorf, bacteria have an adaptive immune system ensuring recognition and eradication of viruses if they try to infect them more than once. For this purpose, bacteria incorporate short sequences of the viral genome into their own genome and utilize them as a template to synthesize short complementary RNAs that recognize a phage genome using the key-lock principle.

CRISPR-Cas9 procedures present a safety challenge: existing delivery methods are highly toxic, and when they are used, some of the edited cells die. To solve this problem, the researchers decided to use 2-2.5 micrometer polymeric and hybrid silica-coated (SiO2) capsules. These are loaded with genetic material and delivered to the target cell, which absorbs them through micropinocytosis, a process in which cells capture relatively big particles. Thus, microcapsules enter the cytoplasm of cells and release their contents, while the shell itself gradually dissolves.

The researchers say delivery efficiency with these microcapsules is achieved due to low toxicity. The study showed that over 90% of cells survive after transfection. Thus, the percentage of edited cells became significantly higher in comparison with transfection using liposomes. The results allow the scientists to conclude that the microcapsules represent promising vectors for the application of genome-editing tools.
The next stage, according to the researchers, will be the application of CRISPR-Cas9 using microcapsules to deliver genetic material in in vivo studies. More information: [nanomedicine]
https://areflect.com/2017/10/06/hybrid-silica-coated-microcapsules-increase-the-efficiency-of-genome-editing/
The whole world is in a crisis. A small virus has changed the scenario and also the way humans used to live. The current situation is nothing less than a disaster. Months have passed since this disaster began. However, this has not altered the will to combat it, and disaster management is being done with full-fledged determination. Technology has transformed lives, and its use in the whole process cannot be ignored, especially if we consider our own reliance on it. Technological advancements have made everything look so easy. Artificial Intelligence and IoT are two such advancements that are being utilized to facilitate us.

'We can't stop a disaster but definitely can manage it.' Even when writing this statement there is a feeling of insecurity, because such has been the level of control humans have managed to take over the happenings around them. So, in this blog, we will look at the use of technology to combat a disaster like the spread of the novel Coronavirus. The emphasis will be on IoT. To understand it better, we shall start with a brief description of IoT.

What is IoT and how is it transforming lives?

The Internet of Things (IoT) describes the network of physical objects—“things”—that are embedded with sensors, software, and other technologies to connect and exchange data with other devices and systems over the internet. These devices range from ordinary household objects to sophisticated industrial tools. The past few years have seen IoT emerging as the technology of the century. People can connect their everyday devices—kitchen appliances, thermostats, baby monitors, cars—to the internet via embedded devices. This has made communication seamless between people, processes, and things. Using low-cost computing, the cloud, big data, analytics, and mobile technologies as a medium, human intervention has been minimized. This has made physical objects capable of data collection with the least intervention.
This development was not achieved alone; other technologies were used to make the IoT experience more convenient. Access to low-cost and low-power sensor technology, connectivity, cloud computing platforms, machine learning and analytics, and conversational artificial intelligence (it may interest you to take a look at our blog on AI and IoT as one technology) are such useful technologies. IoT has benefited a lot of industries like manufacturing, healthcare, automotive, transportation and logistics, retail, the public sector, etc. So, in this blog, we will look at the role of IoT in disaster management with the help of examples from the COVID-19 crisis.

How IoT helped countries combat the widespread disease

“Disaster management is one of the key use cases for IoT in India given our vast diversity and complexity in our geography and hence the varying levels of vulnerability to both natural & other disasters. The power of real-time information availability together with real-time analytics associated with IoT can be a game-changer in planning for prevention and response to disasters. I sincerely believe that such efforts on generating awareness will go a long way in the creation of sustainable and useful solutions in the Long Run.”
-Dr. Neena Pahuja, Director General, ERNET, Ministry of Communications and Information Technology, Government of India

The role of IoT in disaster management

To better understand the topic we must start with a small brief of disaster management. Disaster management aims to mitigate the potential damage from disasters, ensure immediate and suitable assistance to the victims, and attain effective and rapid recovery. These objectives require a planned and effective rescue operation after such disasters. Different types of information about the impact of the disaster are, hence, required for planning an effective and immediate relief operation.
Now looking at the requirements in disaster management, like spreading awareness and planning rescue operations, IoT has proved a great help.

4 ways in which IoT helps

The Internet of Things offers disruptive potential in the prevention, preparation, response, and recovery phases of disaster management. We should look at them one by one:

IoT can be a game-changer in the prevention of disasters through the following:
- Vehicles using telematics
- Water levels using sensors
- Sensors to detect wildfires, tornadoes, earthquakes, cloudbursts, and volcanic activities

IoT has the potential to streamline preparation efforts:
- Use of sensor technology to address real-time stock and supplies replenishment, spares planning, and automated indent processing
- Asset track and trace
- Use of complex event processing for notification of an action based on capturing streaming sensor data, resulting in predictive resource deployment

IoT can facilitate response planning and actions through:
- Use of sensors to monitor the movement of key personnel
- Using NFC for geofencing and perimeter fencing
- Situational awareness and incident management through streaming data, unstructured data handling, predictive analysis, big data, complex event processing, and social media analytics

IoT can be a great enabler for recovery efforts and activities through:
- Use of sensor technology for identification and authentication of beneficiaries
- Use of smart cards and RFIDs for relief disbursal
- Creation of a virtual logistics network that allows hub operators and others to monitor traffic towards and within a hub in real-time and facilitates communication between all involved parties

After going through all the ways IoT can be used in disaster management, we should have a look at the benefits too.

Benefits of IoT in Disaster Management

The following are the key benefits of the application of IoT in disaster management:
- Agencies gain a clear picture of operations with real-time visibility of data.
- Agencies can extract current and historic data from multiple sources and transform it into rapidly accessible, actionable intelligence for faster and better-informed decisions.
- It helps in creating a single, federated information hub. Agencies can build an information backbone that all parties – including government agencies, NGOs, infrastructure operators, and the community – can contribute to and work from.
- It increases collaboration and interoperability. It allows all stakeholders to work together more effectively by creating consistent and shareable workflows, processes, forms, and plans that address disasters and emergencies of all kinds.
- Agencies gain by leveraging cutting-edge technology, through harnessing the power of Big Data, cloud computing, mobile technology, and sophisticated yet intuitive analytics to streamline and optimize all emergency management processes.
- Using sensors, M2M communication, and other IoT tools makes it easier to recover from the loss incurred by disasters and also helps in preparation.

Recent examples of disasters are the Coronavirus pandemic, the Amazon rainforest fire, and the Australian bushfires. The pandemic is unprecedented; however, the latter two examples are disasters that take place quite often, and hence they can be prevented with the use of sensors that can sense a sudden rise in temperatures. For the current situation, the Coronavirus pandemic, the Indian government came up with the Aarogya Setu app, which has benefited the government and people equally. Governments are using the data for contact tracing, while people can keep an eye on the situation of the spread of Coronavirus. So, this is how IoT helps in the whole process of disaster management.

This article may have cleared the doubt about the use of technology in our lives. Not only small tasks, but something massive like disaster management too requires the help of technology. IoT is one such advancement that is making lives better.
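Geofencing, mentioned above as a response-planning tool, reduces to a distance test against a known perimeter. A rough sketch using the haversine great-circle formula (the centre coordinates and 5 km radius are made-up illustrative values):

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def inside_geofence(lat, lon, center=(28.6139, 77.2090), radius_km=5.0):
    """True if the point lies within radius_km of the fence centre."""
    return haversine_km(lat, lon, *center) <= radius_km

print(inside_geofence(28.62, 77.21))  # near the centre -> True
print(inside_geofence(28.90, 77.60))  # well outside -> False
```

In practice the fence check would run against live GPS or NFC check-in data, triggering alerts when tracked personnel or assets cross the boundary.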
https://www.analyticssteps.com/blogs/4-ways-which-iot-helps-disaster-management
The Health Insurance Portability and Accountability Act (HIPAA) is one of the most important pieces of legislation in the American healthcare industry. Enacted by Congress in 1996 and signed into law by President Bill Clinton, HIPAA was originally designed to address the issue of health insurance coverage for people who were between jobs. Without HIPAA, individuals who found themselves in these circumstances would be left without health insurance, and potentially unable to pay for critical healthcare.

HIPAA’s role stretches well beyond the provision of healthcare for people between jobs. Today, HIPAA is synonymous with data protection laws. Many of its provisions were designed to improve the experience of patients in the healthcare system and introduce a nationwide standard of patient data storage and protection. Existing laws were deemed inadequate to deal with the increasing use of technology in the healthcare industry and the looming threat that hackers pose to personal data. It is now one of the most important data privacy and protection laws in the US.

Many types of businesses in the healthcare industry are required to comply with HIPAA regulations, including healthcare providers, health plans, healthcare clearinghouses, and business associates of HIPAA-covered entities. These organisations are expected to be familiar with every aspect of HIPAA legislation; the fines for violations are hefty, and ignorance is not deemed an acceptable excuse for a violation.

HIPAA legislation can be complex. One area which causes much confusion is the HIPAA record retention requirements. HIPAA makes a distinction between medical and HIPAA-related non-medical records, which must be treated separately. Here we shall discuss HIPAA’s requirements regarding the retention of each type of record.

HIPAA’s Privacy Rule does not stipulate how long medical records should be retained. Therefore, there is no official HIPAA medical record retention period.
Each State has its own laws which cover the retention of medical records, and there is no nationwide standard. There is great variation in what is deemed an acceptable period of time to retain medical records, not only between states, but within states for different types of healthcare providers. For example, Florida requires physicians to retain medical records for a set period after the last patient contact, but hospitals must retain them for seven years. North Carolina requires hospitals to retain a patient’s records for much longer: eleven years after the patient was discharged, or until the patient turns thirty if they were admitted as a minor.

Although HIPAA’s Privacy Rule does not include medical record retention requirements, it does have requirements regarding the manner in which the data is stored. The covered entity is required to apply appropriate administrative, technical, and physical safeguards to protect the medical records for whatever period they are being retained, and ensure that they are disposed of in a secure manner. Administrative safeguards include policies and procedures designed to manage information access within the organisation and train the workforce in HIPAA compliance. Physical safeguards require the physical protection of data such that it may not be accessed by unauthorised individuals. This may include workstation and device security, and is often the most straightforward yet effective set of security measures. Technical safeguards include controlling access to computer systems and the protection of communications containing PHI which is being transmitted electronically.

Each State may have further requirements regarding the storing of medical records. In general, medical records should be stored in a system which allows for the records to be accessed and retrieved promptly (by individuals authorised to do so) should they ever be needed.
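A minimal sketch of how the technical safeguards described above (controlled access plus an audit trail of record views) might look in code. Every name here is hypothetical and for illustration only; it is not drawn from any HIPAA text or real system.

```python
from datetime import datetime, timezone

# Hypothetical record store and authorised-user list, for illustration only.
AUTHORISED_USERS = {"dr_smith", "records_admin"}
RECORDS = {"patient-001": "<encrypted record blob>"}
AUDIT_LOG = []  # technical safeguard: every access attempt is logged


def fetch_record(user: str, record_id: str) -> str:
    """Return a stored record only to authorised users, logging each attempt."""
    allowed = user in AUTHORISED_USERS
    AUDIT_LOG.append((datetime.now(timezone.utc), user, record_id, allowed))
    if not allowed:
        raise PermissionError(f"{user} may not view {record_id}")
    return RECORDS[record_id]
```

A real system would add encryption at rest and role-based permissions; the point of the sketch is that prompt retrieval and access control can coexist, with every attempt (allowed or denied) leaving an audit entry.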
HIPAA-Related Non-Medical Records

Unlike medical records, HIPAA does have requirements about how long HIPAA-related non-medical records should be retained. Section 164.316(b)(1) of HIPAA requires that organisations: “(i) Maintain the policies and procedures implemented to comply with this subpart in written (which may be electronic) form; and (ii) if an action, activity or assessment is required by this subpart to be documented, maintain a written (which may be electronic) record of the action, activity, or assessment.”

According to Section 164.316(b)(2)(i), the required documentation must be retained for six years from the date of its creation, or the date when it last was in effect, whichever is later. For example, if a policy is implemented for a year before being revised, a record of the original policy must be retained for at least seven years.

The types of HIPAA-related non-medical documentation covered by these Sections may include, but are not limited to:
- Security risk analyses
- Breach notification documentation
- Employee sanction documentation
- Business associate agreements
- Notices of privacy practices and policies
- Contingency plans for disasters
- Log records for viewing of PHI
- IT system reviews
- Physical security maintenance and update records

For non-medical records, HIPAA preempts State laws if they require shorter retention periods. However, a State may require longer periods of retention; in these cases, the State law must be followed. For organisations that work in multiple states, it is worthwhile checking the laws in each individual state regarding the retention of HIPAA-related non-medical documents.

Summary of HIPAA Retention Requirements

Once one understands the distinction between medical and HIPAA-related non-medical records, the requirements are generally quite straightforward; for medical records, States set the law, but for HIPAA-related non-medical documents, a minimum of six years is required. However, there are some exceptions.
For example, some employers may be required to keep records indefinitely due to the requirements of the Employee Retirement Income Security Act and the Fair Labor Standards Act. It is worthwhile for organisations to perform a thorough audit of the data which they hold, ranging from patient and employee records to policies and procedure documentation. A thorough understanding of which laws are relevant in each case is vital to ensuring full compliance with HIPAA’s record retention policies.
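The six-year rule (retain until six years from creation or from when the document was last in effect, whichever is later) can be sketched as a small date calculation. The function name is illustrative, not part of any HIPAA text, and leap-day edge cases are ignored in this sketch.

```python
from datetime import date


def retention_end(created: date, last_in_effect: date) -> date:
    """Earliest date a HIPAA-related non-medical document may be destroyed:
    six years from creation or from when it was last in effect,
    whichever is later (per Section 164.316(b)(2)(i))."""
    def plus_six_years(d: date) -> date:
        return d.replace(year=d.year + 6)
    return max(plus_six_years(created), plus_six_years(last_in_effect))


# A policy created on 2015-01-01 and retired on 2016-01-01 must be kept
# until 2022-01-01, i.e. seven years from creation, matching the example above.
```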
Enterprises, educational institutions and local government offices are all responsible for the security of the data they store and share. In the UK, specific legislation like the General Data Protection Regulation (GDPR) is enforced by regulators at the Information Commissioner’s Office (ICO), and companies who are found to have failed in their obligation to safeguard data can face heavy penalties. Fortunately, detailed information regarding data security responsibilities is widely available, along with specialist solutions designed to protect information. In this blog, we’ll look at some of the most common questions companies have regarding data security, accompanied by straightforward answers to alleviate concerns.

1. What kind of data needs greater protection?

Under the GDPR, there are some kinds of personal data that are deemed especially sensitive and are classified as “special category”. This information concerns or reveals ethnic or racial origin, religious and political beliefs, genetic and biometric data, and health data, among other areas. Data handlers processing this kind of information should give serious consideration to how and why this data is used and ensure that it is only used when necessary. Companies must also safeguard business-critical data that can be exploited, such as financial details that, if disclosed, could result in theft and loss, or private deals and agreements.

2. Can you share data with another organisation?

If your firm has a valid reason, it can share data with another enterprise or entity. However, to comply with the GDPR, companies must have a lawful basis for data sharing and keep a record of it. If the data belongs to a data subject, they must have given their consent for their information to be shared with a third party. Additionally, when the information is shared it must be adequately secured, ensuring that no unauthorised entity can interact with it.

3. What is the best way to secure data?
Cybersecurity experts agree that the most effective way of keeping data secure is to use encryption software. Once applied, data encryption will scramble the contents of a message or file, ensuring that no one can read it without authorisation. The creator of a message or file holds a private encryption key that gives them access to view, alter, or delete the content, and can issue a decryption key (a “decryptor” for short) to those they wish to grant access. Once a file or message is encrypted, it can be shared, sent, or stored on a server or cloud-based storage vault while remaining entirely secure. Companies that encrypt the sensitive and confidential data they share and store can avoid heavy fines from data regulators. The ICO recognises the use of data encryption as evidence that an operation has taken sufficient measures to secure data.

4. How long should data be kept on file?

While there are no precise rules regarding how long information can be kept, for security reasons, personal data should only be held for as long as it is needed. When data is no longer required, it must be destroyed securely. A clear record should always be kept of when data was destroyed, and evidence of its deletion retained. While electronic data can be deleted from devices, servers and backups, physical data files must be shredded.

5. What makes a strong password?

Despite their limitations, passwords are still the most common credentials used to limit access to data. However, to remain effective, they must be tough to crack but easy to recall. While a series of random characters of mixed case, numbers and symbols might be impossible for a threat operator to guess, it will be equally difficult for a user to remember. In such cases, a user will either save the password to their local device or keep a record of it on file where they can cut and paste it as required. Either way, the result is a data security vulnerability.
The latest advice from the UK’s National Cyber Security Centre (NCSC) is to construct passwords from three unrelated words. Simpler to remember, these passwords remain difficult for attackers to deduce. Passwords should always be issued by IT departments rather than users, and should be changed regularly. Multifactor authentication should also be activated on password-protected areas and accounts to add a secondary layer of data security.

6. What actions should you take when data security is compromised?

If your team identifies that there has been a breach in data security, you must immediately conduct a risk assessment. Find out what type of data was disclosed and how sensitive the information involved is. If the information exposed belonged to specific data subjects, they must be informed of the breach and advised on the potential risks involved. For example, if their email addresses and telephone numbers were compromised, you can warn them to be wary of suspicious messages and calls. You must also report the incident to the ICO. Most firms will have a 72-hour window to assess and report the data security breach. The ICO will want to know how and when the breach occurred, when it was first identified and what measures have been taken to prevent the same incident happening again. It will also need to know the extent of the breach, the type of data involved and what data security measures were in use at the time of the incident.

Experts in data security

At Galaxkey, we provide businesses, local authorities, universities, and schools with the tools necessary to protect the data they retain and share. Our secure workspace has zero back doors and never stores passwords, while our suite of email tools features handy options like time-out and recall. We also offer digital document signing and data encryption solutions that ensure all content kept or used by your operation has three layers of powerful protection and is always completely traceable for compliance.
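The NCSC’s three-random-words guidance mentioned in the answer to question 5 can be sketched with Python’s secrets module. The word list below is purely illustrative; a real generator would draw from a large dictionary file.

```python
import secrets

# Illustrative word list; a real deployment would use a large dictionary.
WORDS = ["anchor", "velvet", "copper", "lantern", "meadow",
         "quartz", "harbor", "thistle", "ember", "willow"]


def three_random_words(separator: str = "-") -> str:
    """Build an NCSC-style passphrase from three unrelated words,
    using a cryptographically secure random source."""
    return separator.join(secrets.choice(WORDS) for _ in range(3))
```

A result such as "copper-meadow-thistle" is long enough to resist brute-force guessing while still being easy for a user to recall without writing it down.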
To access our data security solutions today, contact our team and start a free 14-day trial of our cutting-edge toolkit.
Is your business prepared for a cyber attack?

Data breach. Hack. Ransomware. Phishing. These are the terms that strike fear into the heart of any IT manager. And, if you’re a business owner, you fear them too. But you can prepare yourself to prevent a cyber attack, and to better withstand one if it does happen. Here’s what you need to know.

What is a cyber attack?

A cyber attack is any incident in which an unauthorized party gains access to your computer network or information. There are multiple ways to do this, and various outcomes, but they all have this in common: it’s expensive, disruptive and unsettling when your business is the victim. According to the 2019 Hiscox Cyber Readiness Report, the most common concern for companies with respect to a cyber attack is that their company email addresses will be compromised, which can lead to phishing and spearphishing attacks. These types of attacks allow hackers to gain access to your network by sending a malware-infected email that looks like it came from another employee, or to steal funds directly by posing as a vendor and sending an email requesting payment. The second most concerning type of attack is ransomware. Hackers infiltrate a company’s computer network. They may lock down the system, preventing the company from doing business. Or they may threaten to expose sensitive information such as customer data or information on products that have not yet been released. The hacker demands a ransom from the company in exchange for releasing their hold on the system.

How likely is an attack?

Among U.S. businesses, 53% experienced a cyber attack in the last year, up significantly from the 38% who reported an attack the previous year. Over a quarter (27%) of companies experienced four or more attacks in the past 12 months. The average cost of a single cyber attack in the U.S. is $73,000. The cost for all attacks suffered by the average U.S. business in the past 12 months is $119,000.
Why would hackers bother attacking a small company?

Hackers are opportunistic, and will often cast a wide net just to see what they catch. In many cases, small businesses have fewer controls in place, making them easier to hack. And many businesses are at risk due to the security shortcomings of vendors or suppliers with whom they do business. Fifty-seven percent of U.S. companies said they experienced a cyber attack due to an incident in their supply chain.

What can businesses do to protect themselves?

There are a number of things that companies can do to try to stay one step ahead of cyber criminals, but the most important is to use your employees as a ‘human firewall.’ Alert and well-educated employees are your best defense against hackers.

Prevent attacks before they happen. Have one person in the company who is dedicated to cyber security. Make sure that security is a priority and that this is communicated from the highest level of the organization. Train all employees in best practices like how to identify a phishing attempt and how to create strong passwords.

Detect an attack early to minimize damage. Many types of cyber attacks rely on human interactions to spread a virus throughout the organization, so the sooner you see it, the sooner you can stop it, and the less damage you will have. Make sure everyone in the company knows to report anything they see that might be suspicious. Even if it turns out to be nothing, it’s better to say something than not.

Mitigate the impact on your bottom line. Given the frequency of attacks on businesses of every size, in every industry, experiencing a cyber incident is not a matter of ‘if,’ but ‘when.’ Be prepared by backing up your data regularly and storing it offsite so you can restore it quickly. And invest in cyber insurance for your business.
Hiscox cyber insurance can not only protect you from the financial costs associated with a cyber incident, but also includes resources to help you educate your employees and manage the logistics of reporting a breach and repairing your reputation so you can get back to doing business.
Dr Bernard Parsons, CEO at Becrypt, looks closer into how every organisation can prepare for, prevent and even learn from cyber threats using digital forensics.

The significance of activities such as incident response planning and digital forensics may for many seem only relevant for organisations that work in the most security-conscious sectors. However, I believe that a rounded appreciation of good cybersecurity practices is valuable, if not critical, for all organisations. It is important that, in any size or type of organisation, if a security incident should occur, those charged with responding and investigating are prepared to follow a structured, effective and informed process. Spending a small amount of time thinking through how well an IT environment’s configuration and security controls may support a forensics exercise, in the event that an organisation suffers a breach, can have a significant impact on the cost and disruption experienced when one actually does occur. Being prepared could be the deciding factor for the subsequent longevity of an organisation or individuals within it.

Both physical and digital forensics have the same fundamental goal: to prove exactly what happened during a given event period, and to attribute actions to a specific individual, allowing effective and appropriate response. They both rely on the acquisition and analysis of data in a timely fashion, and in a manner that allows the provenance of the data to be confirmed. There are many proposed methodologies for digital forensics, but generally, they can be condensed into the same five steps.

Gather human intelligence

Clarify the time and date boundaries
A modern network generates thousands of events every minute, which means that, before undertaking any investigative action, it is important to narrow down where to look.

Find out who is involved
The crux of any investigation, this requires detailed questioning of those who reported the event.
Questions such as: ‘When did you first spot it; how long was it a problem/did it go on for; is it still happening; who is involved?’

Ascertain which machines are affected
You can identify from the users which machines have been affected. However, this may not represent the only area that needs investigation; remain open minded.

Identify what actions have been taken since the discovery
In any digital forensic investigation, once you interact with the environment it automatically changes and the evidence is altered. It is important to understand what actions people have taken (or tried to take) and work from that point. Be prepared to eliminate ‘false positives’. Disproving facts with evidence is equally as useful as proving a theory during an investigation.

Plan your approach

Prioritise your targets
In a digital environment events happen very quickly. Identify and prioritise the areas where you can get valuable evidence, working from the most volatile environment to the most stable.

Keep it legal
Ensure that legal guidelines are followed. If you don’t follow procedure, evidence may be inadmissible in a court of law, should the need arise.

Allocate resources and skillsets
Ascertain whether you have the right people to conduct the investigation. You will need experts for your hardware and software configurations to ensure that valuable evidence is not inadvertently compromised. External agents could provide an unbiased alternative.

Balance value against cost
There is a cost associated with any work, and so a sanity check is vital. Balancing the proportional effort, cost and risk to the business is essential.

Document and sign your evidence
Everything that is captured must be documented exactly, dated and signed because as evidence is touched, it is immediately changed. This ensures that a clear audit path is kept.

Capturing the data
Any work carried out on data should be on copies only, always preserving the integrity of the original data.
Keeping a strong chain of custody ensures that the master copy is kept intact and remains the ultimate reference point.

Use cryptographically verifiable data
When data is captured and recorded it will always have a ‘hash’: its unique identification number. Any copies taken will also have the same hash reference.

Analyse the evidence

Make a timeline of events
Data from multiple sources may have different time stamps; by compiling the data together you can build a complete picture. Matching the evidence over the time period also helps to identify corroborating evidence.

Analyse the data
From the timeline of events it is important to work systematically, hypothesising and running tests to prove/disprove any theories. Additional corroborating evidence may be required.

Report on your findings
At the end of the investigation your report needs to be understandable and contain only defensible data. The report will need to explain findings in a way that makes sense to non-technical people. The report must be factual, presenting data, dates and events that have happened, and it must be impartial. As well as the summary report, it is also important that all relevant data is compiled in an additional appendix. For serious cases, investigative experts will need to review the data to corroborate the facts that you have presented.

By following these five steps your digital forensic investigation and subsequent report is more likely to meet the stringent requirements of courts and industrial tribunals, and provide valuable information to the business and people affected.
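The hash-based integrity check described above can be sketched in a few lines of Python; hashlib’s SHA-256 stands in here for whichever algorithm a given lab or jurisdiction mandates.

```python
import hashlib


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large evidence images fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def copy_matches_master(copy_path: str, master_hash: str) -> bool:
    """A working copy is trustworthy only if its hash matches the master's."""
    return sha256_of(copy_path) == master_hash
```

Because any alteration to the copy changes its digest, comparing against the recorded master hash proves (or disproves) that the working copy is a faithful duplicate of the original evidence.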
Fighting Bullying With Technology

Attitudes toward bullying have come a long way since the days when people shrugged their shoulders and considered it an inevitable rite of passage.

By Teresa Meek / 9 Feb 2016

Today, bullying has expanded from the bus stop and the playground to social media, and cyberbullying is recognized as a serious problem that can cause psychological trauma for boys and girls from kindergarten through high school. Bullying victims are two to nine times more likely to consider suicide than other kids, Yale research shows, and some have followed through on their impulses. Kids can be vicious on the Internet, where they don’t have to face their victims, and they don’t always outgrow the behavior. In fact, while physical bullying peaks in middle school, cyberbullying continues through high school, and boys feel less sympathy for victims as they age, according to DoSomething.org. In high school, 160,000 kids skip school every day because they are afraid of bullying, a National Education Association report says. But kids are too ashamed to admit it. By age 14, less than 30% of boys and 40% of girls will talk to their peers about bullying.

Solving the bullying problem requires a coordinated effort by parents, teachers and kids themselves. According to the National Bullying Prevention Center, school-based prevention programs decrease bullying by up to 25%. But what about after school? If some of today’s innovative programs catch on, the very technology that enables cyberbullying may also contain the seeds of its destruction.

League of Legends, the world’s most popular videogame, is a breeding ground for abusive behavior, its owner Riot Games admits. The company decided to do something about it, hiring scientists and designers who used machine learning to find effective ways to make players more civil. Results amazed the company. It now uses software to monitor for abuse, and has uncovered millions of incidents.
The system delivers feedback to users within five minutes, and offending players are penalized during or after games. A full 92% of offenders have not repeated their behavior after being caught. The company believes its model could be used by many other applications, and is open to sharing its data.

Sometimes a nudge is all it takes to stop an impulsive teen from bullying. That’s what 15-year-old Trisha Prabhu found when, inspired by the suicide of an 11-year-old Florida girl who was bullied on social media, she developed an app to discourage bullying. She did research to learn the best way to approach teens, and learned that the prefrontal cortex of adolescent brains isn’t fully developed, making young people prone to impulsive behavior. Her app alerts them to the potential harm hateful messages could cause and asks them to think twice before posting. Prabhu was skeptical about her app at first, fearing image-conscious teens would consider it “stupid.” But it worked: in a test of 300 kids ages 12 to 18, 93% changed their minds and decided not to post offensive remarks.

Other anti-bullying apps on the market include STOPit and Bully Block, which encourage kids to report bullying to adults anonymously, and Bully Box, which enables reporting and also points kids to outside help for counseling and prevention. Twitter, often used as a platform for bullying and inciting others to violence, announced changes to its platform to reduce cyberbullying, including widening its definition of violent content and policing violators more vigilantly. The National PTA offers a full menu for combatting bullying, including tools to help you get started, discover your community’s specific problems, build a team and develop positive social media messaging.
Mold can sneak into data centers rapidly, and it understandably causes distress for data center managers. Mold takes only 24-48 hours to develop after exposure to moisture, and incorrect temperature and humidity make a suitable habitat for it. The problem can be more threatening when seasonal weather introduces humidity and moisture to the equation. Neglect it, and your facility and equipment will be in danger. Thus, temperature and humidity monitoring should be a critical practice for data center managers. This will help you control the damages and losses associated with mold growth.

Companies consider their data centers to be the heart of their business operation. With the amount of data stored there, protecting it from any potential threat is a major priority. However, the inertia of day-to-day operation, and assiduity in protecting software from viruses, makes them neglect their unseen enemy. Little do they know, an organism too small for the naked eye to see can halt their data center’s operation.

What Causes Mold?

Molds are micro-organisms that live on almost every surface. They usually grow in humid areas without adequate airflow to dry up the moisture. That can be behind walls, on glass, on the ceiling, or even on equipment such as circuit boards. Together with yeast and mildew, mold is classified as a fungus. Molds typically consist of a network of threadlike filaments that infiltrate the surface on which they grow. They may appear black, green, gray, brown, or as other discoloration of the affected surface.

The role of mold is to break down dead organic matter and return the nutrients to the environment. Molds require food to grow, and that food is anything organic. For instance, a mold grows on your data center’s glass wall not because it eats the glass, but because of the dirt on it, which the mold can eat. Molds reproduce through microscopic spores. These are usually found outdoors but are brought inside by sticking to our shoes and clothes.
Additionally, many spores are so small that they can be carried in even by a gentle flow of air. The number of mold spores may vary from season to season and from day to day. Finally, humidity and temperature are the key environmental parameters that enable the growth of mold in the data center. When the relative humidity and temperature thresholds are not maintained, molds can grow within your facility, causing damage and loss.

The Impact of Mold Growth on Data Centers

The impact of mold is mostly associated with human health: headaches, skin rashes, and asthma are common effects. For mission-critical facilities such as data centers, the effects are less commonly known, but they are undeniably serious.

Molds can cause structural damage to the physical facility. As mold thrives on any organic material, any part of the facility structure is prone to mold damage, be it concrete, wood, or glass. When mold growth is tolerated, it weakens the structure, causing walls, floors, and ceilings to collapse. In the event of a collapse, data center operation will be interrupted and repair costs will increase.

Molds can grow on equipment for as long as conditions remain favorable. This often occurs when moisture lingers before drying out. After absorbing moisture from the air, plastic insulation such as that on printed circuit boards can change in electrical properties and degrade. Mold on a circuit board can also lead to short circuits. Additionally, most electronic equipment has internal fans used for cooling. As the fans move air through the equipment, they also suck in dust particles. The combination of dust particles with moisture is great nourishment upon which hungry mold spores feast. If not addressed early enough, the accumulation of mold will damage the equipment. And needless to say, equipment damage leads to interrupted business operations.
The Importance of Temperature and Humidity Monitoring

Cleaning your data center to keep it free from dust and dirt is a sure way of preventing mold growth. However, cleaning alone is not enough. Monitoring environmental factors such as temperature and humidity is also vital to achieving a mold-free data center.

Maintaining the balance between hot and cold air has always been the foremost foundation of data center uptime. Temperature monitoring is mostly associated with protecting equipment from overheating, and data centers comply with the ASHRAE standard to ensure that every device has sufficient cooling. Yet these are not the only risks of temperature. Molds can also devastate your facility and equipment if the temperature is not regulated.

In general, mold grows well in warm temperatures, ranging from 60 to 80 degrees Fahrenheit. When cold surfaces are exposed to hot and moist air, condensation occurs, because the cooler air is incapable of holding as much moisture. This condensation provides the perfect condition for mold to develop and grow.

Oversized HVAC units, for instance, can also lead to a condition that is ideal for mold growth. Units that are too large for the room are installed in the belief that “the bigger, the better”. This can have a counter-effect called short cycling: the air conditioner runs briefly, then turns off again, so the air retains its moisture, which promotes mold. Monitoring the consistency of the temperature therefore matters for mold prevention. By avoiding the “hot then cold air” cycle, the data center does not become a place that can host mold.

Humidity is complementary to temperature: if the temperature changes, so does the humidity. Moisture is the easiest requirement for mold growth to satisfy, and it comes in two forms: 1) water leakage from pipes and air-conditioning units, and 2) humidity in the air. Of these two, the humidity is more concerning.
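The condensation mechanism described above can be made concrete with the Magnus dew-point approximation. The constants are standard textbook values and the function names are illustrative; neither comes from the article.

```python
import math


def dew_point_c(temp_c: float, rh_percent: float) -> float:
    """Magnus approximation of the dew point in deg C (valid roughly 0-60 C)."""
    a, b = 17.62, 243.12
    gamma = (a * temp_c) / (b + temp_c) + math.log(rh_percent / 100.0)
    return (b * gamma) / (a - gamma)


def condensation_risk(surface_temp_c: float, room_temp_c: float,
                      rh_percent: float) -> bool:
    """Condensation forms when a surface is at or below the air's dew point."""
    return surface_temp_c <= dew_point_c(room_temp_c, rh_percent)
```

At 25 C and 60% relative humidity the dew point is roughly 16.7 C, so a chilled surface at 15 C would sweat while a 20 C rack panel would stay dry.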
Mold growth caused by water leaks can be anticipated by determining the potential locations of a leak, whereas moisture caused by humidity in the air is always a puzzle for data center managers. Data centers maintain a relative humidity of 20-55% for optimal performance and reliability. For mold prevention, the relative humidity should likewise stay below 70%; if the humidity goes beyond this threshold, your data center is at risk.

Wet, humid summer air also affects the relative humidity of the facility, as does rainfall. Mold spores attach themselves to damp surfaces. When it rains, or even when the air is particularly humid, surfaces outside the facility can become damp, allowing mold spores to latch on and grow.

It is impossible to control mold with temperature alone; humidity monitoring is also critical. Data center managers need to ensure that the humidity of the facility does not get high enough to encourage mold growth.

Temperature and Humidity Sensors in Mold Prevention

Mold grows faster than you can imagine, so the key to saving data centers from mold damage is prevention. Using temperature and humidity monitoring sensors is an advantage: they help regulate the temperature and humidity of the data center more efficiently. A sensor works by monitoring the physical environment over a specific period and converting the readings into electronic data for recording. Sensors send operators alerts when these factors reach a threshold, so mold can be stopped before it even forms on any part of the facility. This helps ensure that the environment does not contain the combination of conditions that will promote mold growth. However, it is important to note that sensors should be placed in strategic locations inside the facility to achieve the best accuracy.

AKCP Wireless Sensor

The AKCP Wireless Temperature and Humidity sensor ensures that environmental conditions are within required parameters.
It logs and graphs data over time and sends real-time alerts when user-defined sensor thresholds are exceeded. It can be used as a data logger, with data buffered and synchronized to the gateway when in range.

For data centers, safety and sustainability are critical. Investing in the safety of the data center should be part of the development strategy: it helps protect costly equipment and facilities, sustaining the business in the long run. With the AKCP Wireless Temperature and Humidity Sensor, you get a mold-free, well-performing data center.

Wireless Temperature and Humidity Monitoring

Ensure that your environmental conditions are within required parameters, log and graph data over time, and receive real-time alerts when user-defined sensor thresholds are exceeded. Use as a data logger with data buffered and synchronized to the gateway when in range. An IP66-rated enclosure provides weatherproofing for use in outdoor environments. Mount with DIN rail, wall hanger, or cable tie / pipe clamp. Pair with any AKCP Wireless Tunnel™ gateway.
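The threshold-alerting behavior described above can be sketched in a few lines. This is an illustrative assumption, not AKCP's actual firmware: the 70% RH mold threshold comes from the article, while the temperature band and the function name are made up for the example.

```python
# Hypothetical sketch of threshold-based environmental alerting.
# The 70% RH mold threshold is from the article; the temperature
# band and function/variable names are illustrative assumptions.

MOLD_RH_THRESHOLD = 70.0   # percent relative humidity
TEMP_RANGE = (18.0, 27.0)  # degrees Celsius, assumed allowed band

def check_reading(temp_c, rh_percent):
    """Return a list of alert strings for one sensor reading."""
    alerts = []
    if rh_percent >= MOLD_RH_THRESHOLD:
        alerts.append(f"HUMIDITY ALERT: {rh_percent:.1f}% RH exceeds mold threshold")
    if not (TEMP_RANGE[0] <= temp_c <= TEMP_RANGE[1]):
        alerts.append(f"TEMPERATURE ALERT: {temp_c:.1f} C outside allowed range")
    return alerts

# A warm, humid reading triggers both alerts; a normal one triggers none
print(check_reading(29.5, 78.2))
print(check_reading(22.0, 45.0))
```

In a real deployment the readings would arrive from the sensor gateway rather than as function arguments, and the alert would be an email or SNMP trap rather than a print.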
On Aug. 14, 2003, the largest blackout in American history affected the northeast region of the U.S. and eastern Canada as a result of a generator failure at FirstEnergy Corp. in Akron, Ohio. About 10 million people in Ontario, Canada were affected, as were about 40 million in the U.S. Experts estimate that outage-related losses were between $4 billion and $10 billion, and that several factors contributed to the disaster, including inadequate disaster preparedness and software deficiencies. Imagine how much money would have been saved if the systems at FirstEnergy had all been in place, with hardware, applications and the network infrastructure all aligned.

Anatomy of a Disaster

Specifically, a disaster is any unplanned event that disrupts "business as usual." This makes a disaster really a business issue rather than a technology issue. So, in order to successfully manage a disaster, a preparedness plan should be in place. According to a survey conducted by Gartner, two out of five companies that experience a catastrophic event or prolonged outage never resume operations. Of those that do, one in three goes out of business within two years as a direct result of that outage or event. The conclusion: 60 percent of businesses affected by major disasters are out of business within two years.

Rather than hypothesize about potential disasters individually, an overall plan should address two major factors. The first is the recovery time objective (RTO): how quickly must lost data be recovered after a disaster? Some systems might not need to be recovered immediately, while others must be brought back online as soon as possible. The second is the recovery point objective (RPO): to what point do the systems and information need to be recovered? Can a loss of time or a loss of data be tolerated? If the last transaction were lost, could it be recovered in another way? For each aspect of a business, the RTO and the RPO must be identified.
Once these two factors are determined, planning falls easily into place. The first element to identify is what needs to be recovered in the event of a disaster. Accounting applications, customer relationship management, financial systems and production management systems must all be brought back online. Although running and restoring key operations is imperative, other services and data, including email, voice mail, access to the intranet or the internet, and forms, licenses and other business information, must also be accessible.

The next question is when lost data needs to be recovered. The answer determines recovery priorities. In most businesses, customer-facing functions and communications are imperative: in the event of a disaster, businesses must be able to communicate with their customers and employees. Without this capability, it becomes exponentially more difficult to recover from the disaster or crisis. Personnel must know who to turn to in the event of a crisis, and they must also know what is expected of them.

The next question is who will conduct the recovery. Every individual in the company must be aware of his or her responsibilities for disaster recovery, and of the time in which their tasks should be completed.

The last element is how recovery will take place. Unfortunately, most businesses cannot justify the cost of a fail-over hot site, in which data instantaneously flips over and becomes available at an alternate location. Therefore, they must identify what kind of solution their budget will allow, given the RTO and RPO. Each level reflects the time needed to recover certain data and how the data will be stored. The cost will be higher and the technology more advanced for data that must be recovered in a short period of time; consequently, it costs less to recover data over a longer period of time.
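To make the what/when questions concrete, a simple inventory can record both objectives per system and order recovery work by the tightest RTO first. The system names and targets below are illustrative assumptions, not figures from the article:

```python
# Hypothetical RTO/RPO inventory used to order recovery priorities.
# System names and hour targets are illustrative assumptions.

systems = [
    {"name": "order-processing", "rto_hours": 1,  "rpo_hours": 0.25},
    {"name": "email",            "rto_hours": 8,  "rpo_hours": 4},
    {"name": "intranet",         "rto_hours": 48, "rpo_hours": 24},
]

# Recover the systems with the tightest recovery time objective first
recovery_order = sorted(systems, key=lambda s: s["rto_hours"])
for s in recovery_order:
    print(f'{s["name"]}: restore within {s["rto_hours"]}h, '
          f'tolerate at most {s["rpo_hours"]}h of data loss')
```

An inventory like this also makes the cost trade-off explicit: the entries with single-digit RTOs are the ones that justify the more expensive replication technology.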
To successfully implement a solution, a plan must be drafted and the RTO and RPO of each business application identified. The next step is the installation of the technology, along with procedures and documentation of the plan. Following that, a test run is required for each application and business function, during which staff are cross-trained in recovery operations. Finally, tests must be performed at least once per year to ensure that the backup procedures are functional, that all the technologies are still compatible and that employees are still familiar with the proper procedures.

Data recovery processes are imperative to have in place in case a disaster strikes, but testing those processes is just as important. Just as the military practices its drills to maximize effectiveness, a data recovery effort must be rehearsed so that it runs smoothly. The procedures should therefore be tested at least once per year to ensure that everyone knows his or her responsibilities and who to communicate with. Drills are also necessary to ensure that, if any systems or procedures have changed, all components of a recovery effort will still be successful and error-free.

Although risk management can certainly be a costly venture, more and more companies are taking the proper precautions to ensure that, in the event of a disaster, they are prepared for seamless data recovery. If recent headlines have taught us anything, it is that companies can never be too safe when it comes to protecting their data. Lastly, ensuring that communication, training and procedures are in place will determine whether a company fails or succeeds at disaster recovery.

Bill Abram is president and founder of Pragmatix, a diversified IT company that builds custom database and web-enabled applications. He can be reached at 914-345-9444 or via e-mail at [email protected].
The coronavirus pandemic has profoundly changed the way we live our lives. While more and more employees were already being allowed to work remotely, nobody expected the shift to occur as abruptly as it did. As cyber criminals sought to capitalize on the crisis, businesses were scrambling to strengthen their security posture and adapt to the new paradigm. Now that most companies have become accustomed to the new norm, it is a good time to reflect on some of the things we can learn (or have already learned) about securing our systems and data in a remote working environment.

There are many similarities and parallels between the ongoing pandemic and the security threats that businesses face, both in how victims are infected and in the techniques used to contain the damage. While this article focuses on the cyber-security lessons we can learn from the ongoing health crisis, we could argue that there are also lessons governments could learn from cyber-security. As an example, take the five stages of incident response: preparation, detection and analysis, containment, eradication, and recovery. In the context of our response to the pandemic, did we have clearly defined protocols for dealing with each of these stages? Probably not.

Lesson 1: Every second counts

There was a significant delay between the time when the coronavirus was first identified and when authorities across the globe (specifically the Chinese authorities) issued the first public health warning and started taking action to prevent the virus from spreading. While we can speculate about the exact origins of the virus, it is likely that those directly or indirectly responsible hesitated to inform the relevant authorities. Likewise, companies often fail to report security incidents for fear of how disclosure would affect their reputation.
However, in a situation where an attacker is able to move laterally through our network, every second that we fail to identify and respond to the incident counts.

Lesson 2: Don't wait for disaster to strike before taking action

The UK Government was once a global leader in preparing for pandemics. Yet with the exception of the foot-and-mouth disease outbreak in 2001 and the Ebola crisis in 2014, outbreaks such as COVID-19 are exceptionally rare. As a result, the UK Government became complacent, and funding for the Global Health Strategy, launched in 2008, dwindled. The result was that the UK fared no better, if not worse, than the rest of the world in dealing with the pandemic. The obvious lesson is to always be prepared and expect the worst. Instead of waiting for disaster to strike and then scrambling to figure out the best way to respond, we must ensure that our cyber-security programs are well funded and that we have an up-to-date, tried and tested incident response plan (IRP) in place to deal with security incidents in a controlled and predictable manner.

Lesson 3: Make sure that everyone is aware of the symptoms

As part of the public health warnings issued by governments following the initial outbreak of the coronavirus, citizens were made aware of the symptoms, which include a high temperature, a continuous cough, and a loss of or change to the sense of smell or taste. When it comes to security incidents, the symptoms to look out for might include anomalous file and folder activity, failed login attempts, suspicious outbound network traffic, unusually slow internet speeds, and so on. As with COVID-19, all relevant personnel must be given a list of these symptoms, and the list must be clearly visible and regularly brought to the forefront of their attention.
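One of the "symptoms" above, repeated failed login attempts, can be surfaced with a simple log scan. The log format, IP addresses and threshold below are illustrative assumptions, not a real product's output:

```python
# Hypothetical sketch: flag source IPs with repeated failed logins,
# one of the incident "symptoms" listed above. Log lines, field
# layout and the threshold are illustrative assumptions.

from collections import Counter

log_lines = [
    "2021-03-01T10:00:01 FAILED login user=admin ip=203.0.113.7",
    "2021-03-01T10:00:03 FAILED login user=admin ip=203.0.113.7",
    "2021-03-01T10:00:04 OK     login user=alice ip=198.51.100.2",
    "2021-03-01T10:00:05 FAILED login user=admin ip=203.0.113.7",
]

FAILED_THRESHOLD = 3

failures = Counter()
for line in log_lines:
    if " FAILED " in line:
        ip = line.rsplit("ip=", 1)[1]
        failures[ip] += 1

# IPs that meet or exceed the failed-login threshold
suspects = [ip for ip, n in failures.items() if n >= FAILED_THRESHOLD]
print(suspects)
```

A real monitoring pipeline would also bound the check to a time window so that three failures spread over a month do not trigger an alert.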
Lesson 4: Know the rules, and follow the rules

An important lesson we can learn from the lockdown is that there are always people who either don't know what the rules are or simply choose not to follow them. Of course, some were not happy with the rules, and some would argue that there were times when disobeying them was necessary. In the context of cyber-security, however, it takes only one seemingly harmless act of disobedience to bring the entire company network to its knees.

Viruses spread quickly and in ways we can't predict, which is why citizens around the world were advised to stay at home. In situations where that was not possible, they were advised to wear a mask, maintain a distance of at least two meters from other members of the public, and wash their hands regularly, using sanitizer where possible. Many of these protocols have analogues in protecting ourselves against cyber-attacks. For example, in the context of work-related activities, employees should be advised to only interact and share information with people within their organization, ideally people with whom they have a direct relationship. Employees should also be advised against opening or forwarding emails that might contain malware.

Another lesson we can draw from the social distancing guidelines relates to network segmentation. Network segmentation is the process of dividing a network into smaller, isolated sections. One of the main benefits of isolating network traffic is damage control: it helps prevent attacks from spreading from one system to another. Likewise, if one of the network segments is compromised, it gives the administrator time to figure out what has happened and respond accordingly.

In the context of keeping our systems and data secure, it is important that we always wear a mask.
In other words, we should ensure that all devices have appropriate anti-virus tools installed and that all servers (and devices) have a firewall and some sort of threat detection solution in place. In preventing the spread of both the coronavirus and potentially malicious applications, the zero-trust methodology applies: organizations shouldn't automatically trust anything inside or outside their perimeters.

Lesson 5: The threat landscape is always changing

Few people expected there to be different strains and mutations of the coronavirus. It was naturally assumed that once a vaccine had been found, and enough citizens had received the jab, the virus would eventually start to disappear. A common misconception held by organizations is that once they have identified, contained, and eradicated a security incident, the problem is solved and requires no further attention. However, just like a virus that is able to mutate, cyber-criminals are constantly changing their tactics in order to gain access to our networks. In some cases, they will install a backdoor, which they can use to regain access to the network at a later date. Now, with AI being used to create malware that can mutate to evade even the most sophisticated anti-malware solutions, it has never been more important to stay ahead of the curve, install patches in a timely manner, and leverage the latest threat detection technologies on the market.

If you'd like to see how the Lepide Data Security Platform can help you to improve your data protection and compliance standing, schedule a demo with one of our engineers or start your free trial today.
According to consumer fraud reporting research on internet scams, scam complaints and reports increased by more than 30% from 2008 to 2009 alone. More disturbing yet is the fact that more than 65% of these scams originated from the US, followed by the UK at 10.5% and Nigeria at 7.5%.

A line from a recent movie goes: "the most persistent virus is an idea. Once in, it is almost impossible to eradicate. It is so powerful that it can change the world…" Well, an army of 'ideas' is indeed spreading like wildfire on Facebook, and more often than not, they are scams. Like real-life viruses, these scams spread from network to network through friends and acquaintances who fall for them. So if you notice your friends, or even your own Facebook account, posting weird links to web pages promising a hilarious, shocking or scandalous video, or freebies like an iPad, free Facebook credits or a Subway gift card, chances are that account has already been compromised. Here are some of the most common Facebook scams.

Phishing / Identity Theft

This type of scam hijacks your Facebook account by luring you to a web page with a fake Facebook log-in form, or with malware that installs a keylogger (a malicious program that captures usernames and passwords on your PC). Upon gaining control of your account, the scammer will contact your friends and attempt to scam them in turn, either by pretending that you're in trouble and need some money, or by posting messages and links that will compromise their accounts as well.

Malware / Spyware Infested Links

A scam that lures Facebook users into clicking on a link, or pasting a code into their browsers, that triggers a download of malicious programs such as worms, viruses, Trojans and keyloggers onto their computer.
These malicious programs can then be used to collect personal data, hijack online accounts (bank, PayPal, email, Facebook), send malware-infested links to email and social media contacts, or take control of the user's computer to perform click fraud or take part in DDoS attacks.

Likejacking

Yet another Facebook scam that on the surface seems relatively harmless. This type of scam lures Facebook users into clicking on a link accompanying a message that friends have 'liked'. The link takes the user to a page where they are asked to perform an action, such as clicking on a button confirming they are over 18. This action in turn activates code that automatically posts a message that you have also 'liked' that subject on your wall, spreading the scam to your network. While the action itself appears relatively harmless, Sophos technology consultant Graham Cluley warns that it could be adapted as a method of delivering malware through social media networks like Facebook.

Subscription Scams

A scam that lures users into unknowingly subscribing to a service that automatically charges their mobile phone accounts or credit cards. This is usually accomplished by taking Facebook users to a page that requires them to perform a series of actions culminating in the user handing over his or her mobile phone number or credit card number.

419 Advance Fee Scams / Romance Scams

This type of scam involves convincing Facebook users to send money: to collect a lottery prize, to buy a non-existent product, to join a get-rich-quick or residual income scheme, or even to help a Facebook 'friend' or 'lover' in distress.

How to Identify a Facebook Scam?

Facebook scams may appear in different forms, but they have a number of things in common that should help you identify them:

1. Paste a Code in Your Browser: Anything in Facebook that prompts or lures you into pasting a code or URL into your browser is a sure sign of a scam.
This is because pasting such code into your browser will run a JavaScript command on your account (which is against Facebook policies, as it could contain malicious code) or direct you to a malware-infested page.

2. Upgrade Your Flash Player or Download a Program: Similarly, clicking on any link in Facebook that prompts you to upgrade your Flash player or download a program is a clear indication of a scam in progress. As in the previous example, performing this action will download a host of viruses, malware and other malicious programs onto your computer.

3. Post Links to Other Pages and Invite Your Friends: Anything in Facebook that requires you to perform this action before you can claim a prize, view a video, join a group, or do anything else you want to do is a big red scam flag. This type of action turns you into a Facebook 'virus' that widens the reach of the scam.

4. Fake Log-In Pages: Fake Facebook log-in pages are hard to distinguish from the real one. You can spot them by checking the URL that appears at the top of your browser; pop-up log-in pages are also fakes. A simple way of avoiding phishing scams is to only use the www.facebook.com address when logging in.

5. Requests for Private / Confidential Information: Another dead giveaway of a Facebook scam is any application, quiz, poll or form that requires you to provide confidential information, such as your mobile number, credit card number or social security number, before you can view your results or claim something. Such information should never be disclosed within the Facebook network. Submitting it will lead to automatic subscriptions to mobile phone services or to your credit card being used by scammers.

How Do These Scams Spread?

Having learned what the popular Facebook scams are and how to identify them, the next thing you need to know is how these scams spread.

1. Wall Posts: The News Feed is the most common medium of Facebook scams.
Once your friend is infected, the link and message automatically generated by the scam appear in your news feed. Clicking on the link that comes with the message will in turn infect your account and post the scam message and malicious link to your friends' news feeds.

2. Chat / Private Messaging: Scam messages and links also spread through chat and private messaging. Moreover, some of the recent 419 advance fee scams and identity scams use Facebook's chat and private messaging features to solicit money from the victim's friends.

3. Groups and Pages: Scams also spread through viral groups and pages which promise something in return for inviting your friends to join. These will typically require you to invite your friends before you can join yourself; sometimes they will also require you to paste a code into your browser or perform a task. Examples include the profile spy group (which promised to release a Facebook profile-spy service as soon as the required minimum number of members was met), the group against Facebook charging its users fees, and the Facebook free credits group. The massive following of these groups, as well as the high level of interest of their members, makes it easy for group and page admins to spread their scams.

4. Rogue Applications: Under the Facebook marketing strategy, third-party applications are allowed and can pull profile information from users. While it's true that Facebook security teams police these third-party applications and shut down those that violate its policies, many rogue applications still manage to get through. An example of this is the quiz application that requires users to input their mobile phone numbers in order to view their scores.

5.
Fake Events: Fake events in Facebook work much like scam groups and fan pages, in the sense that they lure participants into performing a set of actions like inviting their friends and posting links on other pages. An example of this is the "Get a New Facebook Theme" event.

How to Protect Yourself From Facebook Scams

Given the dangers of social media as presented above, here are some practical tips on staying safe from Facebook scams.

1. When in doubt, DON'T CLICK that LINK! This holds true for any internet activity, such as using your email, surfing the net or using your instant messenger. Be on the lookout for weird status messages, weird private messages, emails, etc. Always ask your contact whether he or she really posted a link before clicking on it. Be extra careful when you see links attached to messages designed to arouse your curiosity or greed, as this is a standard operating tactic for scammers. Some examples: "hey do you know your profile pic is posted in <link>", "98% Of people FAILED this EYE TEST! Will you? <link>", "99% of people can't watch this video more than 25 seconds <link>", "hilarious video <link>", "I won a free iPad <link>", etc.

2. Keep your anti-virus / anti-spyware programs updated! Just like viruses in the real world, computer viruses, along with spyware, malware, keyloggers and worms, constantly adapt. It's a never-ending virtual arms race between anti-virus programmers and virus makers, and the only thing protecting your computer and your data from new malware and viruses is your anti-spyware / anti-virus daily update.

3. Limit Online Surfing to Trustworthy Sites: Trustworthy sites are those with a long history of good reputation and service on the internet. An online tool that will help you determine the trustworthiness of a site is the Google PageRank tool. Generally, the higher the PageRank of a website, the more reputable it is.
You can also use the PageRank tool to determine whether a Facebook log-in page is a phishing site or not. The main log-in page of Facebook, www.facebook.com, has a PageRank of 10/10; phishing sites will have a PageRank of 0/10.

4. Optimize Your Privacy / Security Settings and Limit Information Posted on Facebook: Make sure that your Facebook privacy settings are set to the highest level by limiting those who can view your profile info, email address, birthday, home address, pictures and updates to friends only or, in some cases, by setting these to private. Check and limit the application settings to include only the applications you trust. (You can check how trustworthy applications are by reading the reviews on their application pages and by doing a Google search on them and on the reputation of the companies that made them.) You can edit application settings by clicking on the appropriate drop-down option under the account button. Here you can check which applications you have granted additional permissions to, which applications you have authorized, which you have added to your profile, and so on. You can also view additional details on how each application uses your profile info by clicking the edit settings button. Limit the information you post on your profile: don't post sensitive information such as your home address, mobile number or home phone number if you want to make sure such info cannot be used against you and your friends by scammers.

5. Change and Optimize Your Facebook Password: If you have noticed your Facebook account doing weird things, like posting links on your wall without your permission or sending private messages to your friends, change your Facebook password immediately after updating your anti-virus programs and doing a full system scan. Use a password that cannot be easily cracked by making it long and by including numbers and special characters.
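The password advice in point 5 can be sketched as a simple strength check. The exact rules (minimum length, required character classes) are illustrative assumptions, not Facebook's actual policy:

```python
# Hypothetical password-strength check reflecting the advice above:
# make it long and include digits and special characters.
# The minimum length and rule set are illustrative assumptions.

import string

def looks_strong(password, min_len=12):
    """Return True if the password meets the length and mix rules."""
    return (len(password) >= min_len
            and any(c.isdigit() for c in password)
            and any(c in string.punctuation for c in password)
            and any(c.isalpha() for c in password))

print(looks_strong("fluffy"))                   # too short, no digits/symbols
print(looks_strong("c0rrect-h0rse-b4ttery!"))   # long, mixed character classes
```

Length does most of the work here; a long passphrase with a couple of digits and symbols resists guessing far better than a short "complex" password.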
Lastly, always be on the lookout for new Facebook scams and how they work and spread by joining the Facecrooks fan page and by reading the Scam Watch section on our website (www.facecrooks.com). Facecrooks updates its members daily about trending Facebook scams, what they do, and how you can deal with the threat. By keeping up with new scams and passing along the warnings and alerts to your friends, you can do a lot to keep Facebook a scam-free social network.

SocialSafe helps you to create your library of you. It's the safest place for your online life: downloaded to your computer, auto-organised and instantly searchable. Supports most major social networks.

PRIVATE WiFi® is a personal VPN that encrypts everything you send and receive. Don't access Facebook from a public WiFi hotspot without it.

BitDefender Safego is a Facebook application you can install that will scan your News Feed and help keep you safe from scams on Facebook.
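Returning to point 4 under "How to Identify a Facebook Scam" (fake log-in pages), the host-name check described there can be sketched as follows. The helper name and the spoofed example domain are illustrative assumptions:

```python
# Hypothetical sketch of the log-in URL check from point 4:
# only trust a log-in page whose host is exactly Facebook's.

from urllib.parse import urlparse

def is_real_facebook_login(url):
    """Return True only for facebook.com / www.facebook.com hosts."""
    host = urlparse(url).hostname or ""
    return host in ("www.facebook.com", "facebook.com")

print(is_real_facebook_login("https://www.facebook.com/login"))
print(is_real_facebook_login("http://facebook.com.evil.example/"))
```

Note that the second URL starts with "facebook.com" but its actual host is "facebook.com.evil.example", which is exactly the trick a phishing page relies on and exactly what a substring check would get wrong.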
Blockchain has been around for a long time now and, more than that, its possible, practical and theoretical applications have been upending previously existing technologies across the board. Although still in its nascence, it has proven useful to nearly every industry on the planet. Similarly, IoT, or the Internet of Things, is in its initial phases, still blooming, but the stir it has created in the world is unparalleled (or comparable only to blockchain). These two have become the epitomes of technology, and the minute they come together, they produce an endless network of interconnected devices sharing the entire world's data among themselves and with the external environment, functioning without the intervention of humans.

To define these two simply: the Internet of Things is the internet of everything. It is every device you can put the word "smart" in front of: your phone, TV, lights, air conditioner, coffee machine, anything and everything. It is an interconnected web of hardware devices that are able to communicate and share data among themselves and with the external environment, and to function without human intervention. Multiple predictions have been made on the subject; consider how many IoT-connected devices are forecast to be installed worldwide by 2025.

Image Source: Statista.com

In contrast to IoT's interconnected network of hardware devices, blockchain is a digital, decentralized, encrypted ledger, distributed through filing systems that allow the creation of immutable, real-time records. These two have proven themselves to be the future of technology. In theory and in practice, the applications of blockchain and the Internet of Things together are breaking the set norms and the limits of what technology can do.

How does Blockchain fit into IoT?
Theoretically speaking, merging blockchain and IoT (the "Blockchain of Things") will give us a verifiable, secure and permanent method of recording data processed by almost any smart device today. The result will be an endless network of interconnected devices sharing the world's data among themselves and with the external environment, functioning without the intervention of humans. This sounds like something taken straight out of a sci-fi movie, as if it will end up becoming a powerful AI-like system capable of guiding and regulating our entire lives.

Going by the stats: a Gartner study published in March 2017 revealed that blockchain will add business value of over $176 billion by the year 2025, later exceeding $3.1 trillion by 2030. And Forbes forecast in a 2017 roundup that the IoT market will grow from $157 billion in 2016 to $457 billion by 2020.

Practically speaking, despite an advancement this big, security remains a huge concern in the IoT market, since it exposes multiple devices and a huge amount of data to breaches. To take an example, the privacy of data in sectors such as healthcare and finance is paramount; but if IoT is allowed to regulate these two industries, and multiple devices within the ecosystem are allowed to exchange data, the entry points for hackers will also multiply, putting data security at risk.

The challenges faced in IoT include the following:

- Exploitation of IoT devices by cybercrime: software attacks by malware and viruses, and physical attacks through unauthorized device control.
- Network attacks, such as denial of service and wireless vulnerability exploitation.
- Encryption attacks.

Here enters blockchain technology. The fact that there are numerous kinds of attacks and threats keeping IoT far from safe is testimony enough that it needs an external security layer to function.
Because IoT devices are proliferating while lacking the necessary authentication standards to keep user data safe, it becomes almost imperative to integrate blockchain into the IoT ecosystem. The following are a few ways in which blockchain can help IoT, turning it into the Blockchain of Things:

- If IoT devices are directly connected with blockchain technology, it will help in keeping track of the history of all connected devices for troubleshooting purposes.
- Although the initial costs of combining the two technologies can be high while they are still maturing, the long-run costs can be reduced, since the need for intermediaries can be cut to zero. Blockchain development companies are working towards a more sustainable model in this respect.
- A distributed, decentralized ledger can eliminate single points of failure in the IoT ecosystem, protecting all the devices' data from collusion and tampering. Blockchain provides a distributed system for sharing records of data across a decentralized network of stakeholders.
- Blockchain also enables device autonomy through smart contracts, providing independent identity, data integrity, peer-to-peer communication support, and safety through the removal of technical inefficiencies.
- Blockchain can be deployed to track sensor data and prevent duplication by harmful data sources, opening the way to embedded business terms for automating interactions between the system's nodes.
- Blockchain's hash-based security and identity verification can be crucial for closing IoT's security loopholes.
- Blockchain can also provide consensus and agreement models, in the form of smart contracts, for mitigating threats.
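As a minimal illustration of the hash-based integrity idea in the bullets above, sensor readings can be chained with SHA-256 so that tampering with any earlier reading invalidates everything after it. This is a toy sketch under assumed data structures, not a real distributed ledger (there is no consensus, no network, no signing):

```python
# Hypothetical sketch: chain IoT sensor readings with SHA-256 so
# that modifying an earlier reading breaks verification of the chain.

import hashlib
import json

def add_block(chain, reading):
    """Append one reading, hashed together with the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"reading": reading, "prev": prev_hash}, sort_keys=True)
    chain.append({"reading": reading, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(chain):
    """Recompute every hash and check the links between blocks."""
    prev_hash = "0" * 64
    for block in chain:
        payload = json.dumps({"reading": block["reading"], "prev": prev_hash},
                             sort_keys=True)
        if block["prev"] != prev_hash or \
           block["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev_hash = block["hash"]
    return True

chain = []
for temp in [21.5, 21.7, 22.0]:
    add_block(chain, temp)

print(verify(chain))        # untampered chain verifies
chain[0]["reading"] = 99.9  # simulate tampering with an old reading
print(verify(chain))        # verification now fails
```

In an actual blockchain deployment the same linkage is combined with replication and a consensus mechanism, which is what makes the record tamper-evident across mutually distrusting parties rather than on a single machine.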
A merger of these two technologies — the Blockchain of Things — will disrupt the existing ways of working in industries such as manufacturing, healthcare, shipping and finance, simplifying business processes and thereby improving customer experience.

What are the challenges faced in IoT and blockchain integration?

Having seen that integrating blockchain with IoT can build trust, reduce costs and accelerate transactions, let us look at the major challenges this integration faces and, more importantly, how they can be reduced or eliminated. Like any other technology, IoT and blockchain integration has its bottlenecks: scalability, security, interoperability, multiparty collaboration, and so on. Let us look at these one by one.

- Scalability: The most urgent technical challenge in DLT and IoT convergence is scaling the requirements of services and security over a dynamic network of devices. A decentralised consensus mechanism can buy us a great deal of information neutrality, authenticity, security and fault tolerance, but problems such as the cost of processing every transaction across a vast network of nodes, the cloud-based architecture of blockchain, limited bandwidth and traditional data storage structures all expose the shortcomings of DLT and IoT implemented on their own. In other words, DLT is not a cure for IoT's scalability problems; rather, scalability will determine when, how and in what scenarios IoT and DLT converge.
- Security: DLT architectures such as blockchain promise data security, but security remains a challenge in a shared device network such as IoT. A business should not only be concerned with protecting data (contacts, files and so on) but should also work towards maintaining privacy and authentic identity and preventing data theft.
Introducing blockchain alongside IoT imposes new design considerations across the stack.

- Interoperability: The ability to securely interconnect multiple networks — a challenge in both the IoT and DLT (blockchain) domains. Complexities such as integrating private and public blockchains, designing permissions and data access across multiple blockchains, integrating multiple open-source platforms and ensuring common compliance standards all stand in the way of interoperable IoT and DLT deployments.
- Multiparty collaboration: Standard business relationships are competitive, interdependent and shared in nature. The IoT market, meanwhile, is product-based, but current business needs are pushing it towards a data-driven, service-based approach, which makes collaboration around blockchain and IoT paramount — for multiparty integration with major systems, security and permissions testing across parties, and designing and implementing a shared framework that complies with regulations. Blockchain will demand multidisciplinary integration to define new laws and liability frameworks, rather than merely requiring estranged participants to come together.

Blockchain in IoT – use cases

From afar, the ideas around combining IoT and blockchain may look like future technology, but the combination is already here and has been implemented quite successfully across a large range of things. According to IDC, 20% of all IoT deployments will enable blockchain-based solutions by 2019, and banks such as ING, HSBC and Deutsche Bank are deploying proofs of concept (PoCs) to validate blockchain technology. The applications of blockchain and IoT are endless, so let us look at how the two together are disrupting different business systems.
Following are some of the blockchain and IoT use cases:

- Supply chain and logistics

A global supply chain network involves multiple stakeholders — from raw material providers to processors to packagers — which makes end-to-end visibility complicated. Multiple stakeholders also mean the chain can extend over months, with a multitude of payments and invoices, making delivery delays frequent. These challenges in supply chain management call for a solution that draws on features of both IoT and blockchain.

For instance, blockchain and IoT can be combined to improve the reliability and traceability of the network. IoT sensors — GPS, motion sensors or other connected devices — provide precise details about the status of shipments, and this information can then be stored on a blockchain. Once stored, it is accessible in real time to every stakeholder listed in the smart contract, and payments and invoices can be prepared accordingly at every node.

Golden State Foods (GSF), a well-known food products manufacturer and distributor, is currently working with IBM to optimise its business processes using IoT and blockchain. Recording sensor data on the blockchain ensures any problematic reading is flagged before it becomes a big issue, and the integration gives GSF an immutable, transparent ledger accessible to different stakeholders in real time.

- Automotive industry

Blockchain and IoT integration can disrupt many areas of the automotive industry, such as fuel payment, autonomous cars, smart parking and automatic traffic control.
Automotive companies can connect their IoT-enabled sensors to a decentralised network, enabling multiple users to exchange crucial information quickly and easily. For instance, NetObjex has exhibited a smart parking solution combining blockchain with IoT. The integration eases the process of finding a vacant space in a parking lot and automates parking payments using crypto wallets — blockchain-secured IoT payment is the new way to go. NetObjex collaborated with PNI, a parking sensor company, to incorporate real-time vehicle detection and find available parking spaces; the IoT sensors calculate the parking charges for the duration of use, and billing is done through crypto wallets.

- Smart homes

IoT-enabled "smart" devices play a crucial part in our day-to-day lives. One common example: IoT lets our home security systems be managed remotely. But the traditional approach to IoT home security is centralised and thus lacks the required security; introducing blockchain into this system can enhance home security manifold. Telstra, an Australian telecommunications and media company, provides smart home solutions along these lines.

- Sharing economy

The sharing economy, or collaborative consumption, is a concept based on sharing. Start-ups such as Airbnb and Snapgoods are good examples of what the sharing economy is, and it has unarguably become a widely accepted concept around the world. Blockchain can help by enabling decentralised shared-economy applications to earn revenue. Theoretically (still), this means an Airbnb apartment that rents itself out — and that conceptualisation is not far off, because Slock.it is pursuing it by integrating blockchain with IoT.
Slock.it plans to develop a Universal Sharing Network (USN) to form an online marketplace of connected things. With the USN, almost any object can be rented, sold or shared without an intermediary. As the vision goes, third parties such as manufacturers or retailers will be able to upload any object to the USN without seeking permission, with data secured and transparency maintained through smart contracts.

These were some of the real-life use cases where these two technologies have merged and given the world their best — blockchain for IoT has made the technology world a safer, more secure place. Beyond the use cases, there are companies working on genuinely innovative blockchain-IoT projects:

IOTA – A new transactional settlement and data transfer layer for IoT, based on a DLT called the Tangle. It overcomes the inefficiencies of current blockchain designs and establishes new ways of reaching consensus in a decentralised peer-to-peer system. In other words, it is a cryptocurrency designed for IoT, with some major advantages over blockchain: scalability, full decentralisation, modularity and zero fees. Unlike Bitcoin, the most popular cryptocurrency, it uses the Tangle rather than a blockchain, and it solves Bitcoin's scalability and transaction-fee problems by requiring the sender of each transaction to perform a small proof of work that approves two earlier transactions.
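The Tangle's core rule — each new transaction must approve two earlier ones — can be sketched in a few lines. This toy model is an illustration only (the names and the random tip selection are assumptions, not IOTA's actual weighted-random-walk algorithm); it just shows how the approval DAG grows without miners or fees:

```python
import random

def new_transaction(tangle, data, rng):
    """Append a transaction that approves two earlier transactions.

    `tangle` is a list of dicts; each entry records which two prior
    transactions (by index) it approves. The genesis approves nothing.
    """
    if not tangle:
        approves = []                # genesis transaction
    else:
        # Toy tip selection: pick two earlier transactions at random.
        approves = [rng.randrange(len(tangle)) for _ in range(2)]
    tangle.append({"data": data, "approves": approves})

rng = random.Random(42)
tangle = []
for i in range(6):
    new_transaction(tangle, f"sensor-reading-{i}", rng)

for idx, tx in enumerate(tangle):
    print(idx, tx["approves"])       # every non-genesis tx approves two priors
```

Because issuing a transaction *is* the act of validating two others, throughput rises rather than falls as more devices join — the property that makes this design attractive for IoT.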
Chain of Things – "Chain of Things (CoT) is a general intelligent hardware infrastructure platform based on multi-chain blockchain technology." Its vision is to bring more trusted identification, transactions, interactions and transfers to an IoT intelligent-hardware base, serving various business entities with customisation.

Atonomi – "Atonomi provides a security protocol and infrastructure to enable billions of IoT devices to have trusted interoperability for data and commerce."

IoT Chain – Developed as a lite operating system built on blockchain concepts and implementing PBFT (Practical Byzantine Fault Tolerance), DAG (Directed Acyclic Graph), SPV and CPS technology, allowing data to be layered and stored in a decentralised manner and protected by the combined power of millions of IoT nodes in the network. This hybrid approach creates a decentralised IoT network of unparalleled speed and security that lets users keep sovereignty over their data.

As we move forward, the future seems closer than ever. Looking at the technologies already put into practice using the internet of things and blockchain is as mesmerising as it is exciting. As suggested at the start of this article, we are on our way to a technology that will let every hardware device operate on its own, without any human intervention. One wonders what it will be like to add AI to IoT and blockchain.
If your network operations are slow and inefficient, you could be having issues with latency. Latency measures how long it takes for data to move from one place to another. There are many reasons why you could be experiencing network latency — propagation time, transmission delays, and processing delays are all common causes — and when latency occurs, it negatively affects the performance of your network.

To ensure your network is transmitting data quickly and reliably, check network latency often and as accurately as you can. Using a tool designed specifically for measuring latency can help you check latency reliably, understand why it is occurring, and figure out how to stop it. In this article, I'll review my top four network latency monitoring tools. My personal top pick is SolarWinds® Network Performance Monitor (NPM), which has a 30-day free trial available for download.

What Is Latency in Networking?

A network's latency is the total amount of time it takes for sent data packets to reach a recipient, measured in milliseconds (ms); a commonly recommended ceiling for network latency is 125 ms. Latency is usually determined in one of two ways:

- Round Trip Time (RTT): How long it takes a data packet to reach its destination and travel back
- Time to First Byte (TTFB): How long it takes the recipient to get the first byte of data after the client initiates a data transfer request

It's critical to measure your network latency regularly, especially if you think your network operations are running slower than usual. To check network latency, you can open a command prompt on Windows and type tracert followed by your destination of choice. Once the tracert command runs, you'll see a list of all routers on the path to the destination, each followed by time measurements in milliseconds. Each hop's times are round-trip measurements to that router, so the figures for the final hop approximate your total round-trip latency to the destination — don't add the hops together, since each measurement already includes the time spent at all previous hops.
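To sketch how you might read latency out of tracert output programmatically — the sample output below is illustrative, and real tracert formatting varies by platform and locale — the final hop's round-trip times can be extracted like this:

```python
import re

# Illustrative tracert-style output (not captured from a real trace).
SAMPLE_TRACERT_OUTPUT = """
  1     2 ms     1 ms     1 ms  192.168.1.1
  2    12 ms    11 ms    13 ms  10.0.0.1
  3    24 ms    26 ms    23 ms  203.0.113.9
"""

def final_hop_latency_ms(tracert_output):
    """Return the average round-trip time of the last hop, in ms.

    Each tracert line already shows cumulative round-trip times to
    that hop, so the last line approximates the total latency.
    """
    hops = [re.findall(r"(\d+) ms", line)
            for line in tracert_output.strip().splitlines()]
    last = [int(ms) for ms in hops[-1]]
    return sum(last) / len(last)

print(final_hop_latency_ms(SAMPLE_TRACERT_OUTPUT))  # ~24.3
```

Note what the function does *not* do: it never sums across hops, because hop 3's "24 ms" already includes the time through hops 1 and 2.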
You could also use other methods to measure network latency, including:

- Ping: Records the latency of response packets
- Traceroute: Tracks each hop a data packet makes on its journey
- OWAMP: One-Way Active Measurement Protocol
- TWAMP: Two-Way Active Measurement Protocol

These methods only give you latency figures in milliseconds — they can't help you determine the root cause of latency or troubleshoot pressing latency problems. If you want to understand your network latency and address slow network operations, you need a network latency test tool that also enables ongoing latency monitoring.

How to Reduce Network Latency

To reduce network latency efficiently, you should invest in a network latency monitoring tool. This software is designed to track network response time and perform quick, accurate, and reliable latency tests, and many tools can also help you understand the root sources of your latency issues, so you can figure out why they're happening and how to start troubleshooting.

It's important to invest in a tool built to track critical latency metrics across your network, including up/down status or video and call activity. Many tools let you correlate network activity with availability, which helps you identify the root sources of latency failures. You can also compare key network performance metrics with other important metrics, such as bandwidth utilization, to correlate latency with overall network performance.

Tracking, measuring, and understanding your network latency is critical to reducing slow connections and promoting faster, more reliable data transmission. Many latency tools provide performance reports, real-time alerts, and search or filter functions for improved latency monitoring.
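Ping-style tools reduce a series of RTT samples to a handful of summary statistics. A minimal sketch — note that the jitter definition used here, the mean absolute difference between consecutive samples, is one common simple choice among several in use:

```python
def latency_summary(rtts_ms):
    """Summarize round-trip-time samples the way ping's footer does.

    Jitter here is the mean absolute difference between consecutive
    samples -- one common simple definition among several in use.
    """
    diffs = [abs(b - a) for a, b in zip(rtts_ms, rtts_ms[1:])]
    return {
        "min": min(rtts_ms),
        "avg": sum(rtts_ms) / len(rtts_ms),
        "max": max(rtts_ms),
        "jitter": sum(diffs) / len(diffs) if diffs else 0.0,
    }

samples = [24.1, 25.9, 23.8, 31.2, 24.4]   # RTTs in milliseconds
print(latency_summary(samples))
```

A single outlier (the 31.2 ms sample) barely moves the average but dominates the jitter — which is why VoIP monitoring, discussed later, tracks jitter separately from latency.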
It's important to find the tool that works best for you, your organization, and any collaborators with whom you share data.

Recommended Tools for Monitoring Network Latency

1. Network Performance Monitor (NPM) (Free Trial)

Network Performance Monitor (NPM) is an extensive network performance and latency monitoring tool designed to go beyond the basics. NPM can actively measure response time for over 1,200 applications across your network, letting you calculate network response time and categorize issues by risk level, relation to business function, and more.

Leveraging easy-to-read dashboards, NPM helps you identify network traffic in real time and make key latency discoveries — the root sources of latency, how user experience is being affected, and whether issues stem from a network-wide or single-application problem. Along with measuring and helping you reduce network latency, NPM is designed to keep scaling its monitoring as your network grows.

Another notable NPM feature is the Quality of Experience (QoE) dashboard, which helps with latency monitoring, testing, and tracking response time, so you can identify areas for improvement in your network and run targeted optimization efforts. The QoE dashboard is also designed to pull metadata directly from virtual and physical servers, so you can see accurately how latency affects end users. For more timely information, you can enable customizable alerts in NPM and target latency issues before end users experience negative side effects. Download a 30-day free trial of NPM today.

2. VoIP & Network Quality Manager (VNQM)

VoIP & Network Quality Manager (VNQM) is a network tool designed to keep you ahead of critical VoIP (voice over internet protocol) quality issues and end-user complaints.
VNQM can monitor latency alongside call quality metrics, quality of service (QoS) metrics, and network-wide performance insights, and you can correlate these metrics to identify the root cause of latency, failures, and more. VNQM's visual interface provides pictorial representations of packets exchanged across your networks, plus a global snapshot of all call detail records (CDRs), call management records (CMRs), and IP SLA operations, making it easy to find, view, and understand critical latency monitoring metrics.

VNQM also lets you search, filter, and display critical latency metrics through the SolarWinds PerfStack™ dashboard, which makes it easy to identify critical latency issues and streamline troubleshooting. You can enable real-time alerts to find out when critical latency thresholds are exceeded, so you become aware of pressing latency issues and can work to stop them and prevent similar ones from occurring. VNQM is also designed to use VoIP statistics to visualize site-to-site latency and performance. A 30-day free trial of VNQM is available for download.

3. ManageEngine NetFlow Analyzer

ManageEngine NetFlow Analyzer is designed to give you a holistic view of network latency, bandwidth, and traffic patterns by combining network traffic analysis and flow monitoring. It provides real-time visibility into latency issues and lets you dive deep into critical details. You can view these real-time statistics in dedicated snapshot views, designed to give you an instant overview of latency by device, application, or other network interface. Along with this overview, NetFlow Analyzer lets you drill down into interface-level details across the platform.
This gives you more information on the root causes of latency problems so you can take proactive steps toward reducing network latency. You can predict and plan for latency by building in-depth latency insight over time — using long-term reporting, ManageEngine tracks both current and historical latency. NetFlow Analyzer offers many kinds of reports, including latency and bandwidth usage reports, which display granular details of applications, end users, and conversations so you can better understand where latency occurs. Alongside reports, NetFlow Analyzer provides threshold-based alerts to warn you when latency is likely. You can download a 30-day free trial of ManageEngine NetFlow Analyzer here.

4. Site24x7 Network Monitoring

Site24x7 is a network monitoring tool built to track network latency, traffic, and bandwidth by analyzing network traffic flows. This gives you visibility into the largest traffic generators, so you can spot latency trends and patterns in network traffic. Site24x7 has a built-in health dashboard displaying real-time latency data, which you can use to identify the root sources of latency issues and save important data for historical analysis.

Along with identifying network activities to spot latency trends, Site24x7 can track VoIP metrics including latency, jitter, and packet loss. You can also track the latency of your VPN devices using Site24x7 Network Monitoring and help protect your network by monitoring its paths and connections. Site24x7 supports latency thresholds, sending a notification when latency, bandwidth, or traffic exceeds certain limits; you can configure the threshold limits yourself and block the responsible application, port, or IP address.
While Site24x7 Network Monitoring offers threshold-based alarms, it does not provide capacity planning reports. A 30-day free trial of Site24x7 Network Monitoring is available for download.

Don't Overlook Network Latency

It's important to measure network latency quickly, reliably, and accurately so you can reduce latency issues at their root sources. Review the list of network latency monitoring tools above and consider which one is the right choice for you and your collaborators. My personal recommendation is SolarWinds Network Performance Monitor (NPM) for its network latency monitoring features; you can download a 30-day free trial.
The following coverage maps depict expected maximum information rates while operating a Kymeta u8 terminal mounted horizontally on a vehicle or vessel, on the Kymeta broadband network. The following coverage maps depict the expected maximum information rates while operating a Kymeta u8 GO at broadside on the Kymeta Broadband network. (Scroll down for an explanation of broadside coverage, if you’re unsure). To determine the gain of an ESA terminal, we must know its scan angle. The scan angle is one of two angles that define the direction of the satellite beam (known as the boresight vector) relative to the antenna in a spherical coordinate system (see below). To calculate ‘Signal to Noise Ratio’ (SNR), we need only consider theta, the scan angle. In fixed applications, the broadside vector is normally pointed at the satellite, so the angle between the broadside and boresight vectors (theta) is zero. In mobility scenarios, however, the antenna is typically mounted horizontally, so that the broadside vector is always vertical, but since the boresight vector must always point at the satellite, theta is positive (unless the satellite is directly overhead). Since less antenna area (aperture) is available to the beam in the case of a positive scan angle, the effective gain of the antenna is reduced. The reduction in gain with increasing theta is called the cosine (or scan) roll-off. Copy courtesy of Kymeta’s ‘Link Budget Calculations‘ PDF.
Bug bounty programmes have become a popular technique for code reviews, used either in conjunction with, or instead of, penetration testing. Rather than relying solely on internal teams to review code for vulnerabilities, organisations can use bug bounty programmes to outsource the testing to external researchers.

The origin of bug bounty programmes can be traced back to 1988, when the hacker community testified before the US Senate about internet vulnerabilities. Since then, bug bounties have become a formal process, and client organisations can comply with the ISO standard for vulnerability disclosure (ISO/IEC 29147:2018). Bug bounty programmes are similar to "wild west" wanted posters, in that financial rewards are paid to external parties — in this case for the successful identification of vulnerabilities in code.

The advantage of bug bounty programmes over an in-house penetration testing team is that the talent pool is much larger, bringing in a wider skill set and allowing continuous reviews as the code is developed. "We think it is really difficult to have 100% confidence that you have found everything, even with internal controls," says Adrian Ludwig, chief information security officer for Atlassian. "This is why bug bounty programmes become a really valuable way to tap into resources that you could not [otherwise] bring to bear before you ship."

One of the key advantages of bug bounty programmes is the diversity of thinking they provide, which goes beyond a range of skill sets: different cultures tend to have different problem-solving techniques, and there may be local factors that the code's developers are unaware of. "Researchers in one country may have one version of the product that is localised in a local language and as a result they pick up on different behaviour," says Ludwig. However, an over-reliance on bug bounty programmes should be avoided.
As Luta Security CEO and bug bounty pioneer Katie Moussouris recently warned in an interview with Computer Weekly, bug bounties should not be considered a replacement for penetration testing. Ludwig observes: "The reality of security assurance is that there is no silver bullet. There are a lot of different technologies; all of them are getting better over time. But it is really important that you deploy as many overlapping technologies as possible."

Despite the advantages, bug bounty programmes carry inherent challenges that need to be addressed for them to be fully effective and worthwhile. A poorly implemented programme can swamp development teams with notifications, resulting in missed vulnerabilities and a harassed workforce. That said, these challenges can be mitigated as follows:

1. Perform a pre-review pass with machine learning tools

Before initiating a bug bounty programme, and especially when the project is already in development, organisations should take the time to review the code using machine learning tools to identify any quick-fix bugs. Not only will this reduce the number of bounties to be paid, it will also allow researchers to focus their time on searching for the more subtle and less obvious bugs. "Machine-based tools are a good way to speed everything up during a first pass, so the human has more time for the deep dives and to look at the more unusual things," says Colin Tankard, managing director of Digital Pathways.

2. Enact strict versioning rules

One of the key routines a development team should adhere to is a strict versioning system, ensuring the latest version is the one shared for review. Complications can arise if the development team is working on one version of the code while the review team is examining another.
As well as being inefficient, conflicting versions of code cause confusion when researchers reference parts of the code that are no longer there, or have been substantially changed. Organisations need to ensure their development teams adhere to management-led, organisation-wide versioning procedures, and confirm that the latest versions of the code are published for review, so that everyone is working on the same code. "We have experienced where the code is being developed by an internal team and it only takes somebody to pull the wrong version or not to upload it in time, for it to start to become out of sync," says Tankard.

3. Introduce bug bounties sooner rather than later

Ideally, like conventional internal testing, bug bounty programmes should be implemented while the code is being developed. This allows bugs to be spotted sooner and resolved with minimal impact on the overall coding. Waiting until development is nearly complete risks uncovering deeply embedded flaws that require significant changes to the code — changes that could have been much smaller had the flaws been discovered sooner.

4. Scale up the bounties

One of the greatest challenges when first introducing a bug bounty programme is that organisations can be inundated with results, which can be overwhelming and demoralising for a development team. To mitigate this, organisations should start with lower-priced bounties, reducing the number of bugs likely to be reported in the early stages of the programme. As the programme continues, and the more easily discovered bugs have been found, organisations can start to increase the bounties, incentivising a greater number of researchers to spend more time examining the code for subtler bugs.
"We try to guide the bug bounty programme to areas that are particularly interesting or complex, by looking at how much we pay," says Ludwig. "If a product is not getting as much attention as we would like, we increase the amount we will pay for specific issues."

5. Gradually increase the number of researchers reviewing the code

Organisations can also consider opening the programme to a limited number of researchers initially, preventing a surge of reports about the more obvious bugs. As with restricting the initial rewards, organisations can increase the number of researchers examining the code for potential vulnerabilities as the programme matures. An extension of this is to completely change the researchers examining the code once the number of bug reports starts slowing down, allowing new people with different backgrounds and skill sets to examine it. "The idea is to slowly ramp up not only the customer development but also the researchers," says David Baker, chief security officer of Bugcrowd. "You train both sides of the marketplace to provide value to each other."

6. Curate the results

Often the bugs found will duplicate what other researchers discover at the same time. Rather than feeding results directly to the development team — and distracting them with the same bugs reported repeatedly — organisations should ensure the feedback is curated. It is also important to check that submissions cannot introduce vulnerabilities into the code. These could occur as genuine mistakes but equally, especially in fintech industries, could be deliberate malfeasance by malicious actors wishing to subvert the code. "We score bug reports using the Common Vulnerability Scoring System, which is an industry-standard way of scoring those issues, and have a service-level agreement process," says Ludwig.
"There is a rate at which issues must be resolved, based on the severity of the issue; we bracket them as low, medium, high or critical issues."

7. Have the bug bounty programme form part of the development cycle

Having a bug bounty in place is one thing, but the feedback needs to be disseminated in an orderly and concise fashion. "Two main challenges exist when integrating bug bounties into an existing cycle. First, a team doesn't know when new reports appear, so it's not possible to plan proactively," says Tim Douglas, an operations engineer at DuckDuckGo. "Second, the reported vulnerabilities can be severe, becoming the top priority and delaying other work." Rather than bombarding development teams with endless notifications as bugs are found — distracting them and potentially causing further bugs to be created — organisations should incorporate the feedback into team meetings to ensure the information is adequately disseminated. Although feedback can be distributed as a regular email detailing all of the discovered bugs, introducing it in a meeting promotes discussion and encourages creative thinking. The key point is to keep the development team regularly updated with the latest feedback in a concise format, without unnecessary distraction; highlighting which parts of the code should be prioritised allows the team to work more efficiently.

8. Ensure reported bugs are resolved swiftly

Not only can it be costly for organisations to pay repeatedly for previously identified bugs that have not yet been fixed, but researchers can be left feeling frustrated and unappreciated when they keep finding the same bugs. A strict development cycle ensures any reported bugs are swiftly resolved, without interrupting the developers with endless notifications.
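The severity bracketing and SLA process Ludwig describes can be sketched as follows. The score bands are CVSS v3's qualitative severity ratings; the SLA durations are purely illustrative assumptions, not Atlassian's actual targets:

```python
from datetime import date, timedelta

# CVSS v3.x qualitative severity bands (score floor -> label).
SEVERITY_BANDS = [(9.0, "critical"), (7.0, "high"), (4.0, "medium"), (0.1, "low")]

# Hypothetical resolution SLAs per bracket -- tune per organisation.
SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

def severity(cvss_score):
    """Map a CVSS base score to its qualitative severity bracket."""
    for floor, label in SEVERITY_BANDS:
        if cvss_score >= floor:
            return label
    return "none"

def resolution_due(cvss_score, reported_on):
    """Return (severity, due date) for a reported vulnerability."""
    label = severity(cvss_score)
    if label == "none":
        return label, None
    return label, reported_on + timedelta(days=SLA_DAYS[label])

print(resolution_due(9.8, date(2024, 1, 1)))  # critical, due a week later
```

Driving the fix queue from a table like this — rather than from the order reports happen to arrive in — is what keeps a steady stream of submissions from derailing the team's planned work.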
A code review meeting at the start of the week, discussing bugs that need to be resolved, allows development teams to plan when and how the bugs are resolved. Also, scheduling time for debugging, or assigning part of the development team to focus on this, can further help with resolving bugs before the next version goes live. “Even if you have been penetration-tested, if you do not have a means by which to take a vulnerability presented to the development team to have fixed and prioritised according to its criticality, any pen test or bug bounty is not going to be effective,” says Baker.

9. Maintain project confidentiality

One of the main benefits of relying on internal penetration testing is that it reduces the likelihood of exposure to malicious actors. However, with the appropriate security protocols, these risks can be mitigated. Rather than using an open bug bounty platform, where anybody can view the code, organisations can use a closed bug bounty platform, in which only certain researchers are invited to take part. Researchers can be vetted by being positively identified, passing a background check, and/or signing a non-disclosure agreement. While restricting entry to a bug bounty programme can limit one of its core benefits, namely the wide skill-set that comes with a broad range of researchers, it does allow an organisation to put controlling measures in place to maintain project confidentiality and security.

10. Ongoing review, rather than short-term assessment

Ideally, bug bounty programmes should provide ongoing review throughout the lifecycle of the product, rather than a short-term assessment. Because many online applications have ongoing support and receive regular patches against new vulnerabilities, the code evolves and expands in ways that were unforeseen by the original development team.
These updates have the potential to create inadvertent vulnerabilities, which organisations may be unaware of if they do not have a bug bounty programme. Furthermore, having a bug bounty programme as part of the update cycle allows for the code to be constantly reviewed by external parties. This, in turn, allows organisations to focus more of their resources on development projects, thereby allowing new products to be developed faster. Having these new products available on the market sooner will allow organisations to cover the costs of their ongoing bug bounty programmes for their existing products. Bug bounty programmes allow organisations to better manage their own resources and expand the potential talent pool of security researchers examining their code. However, these programmes need to be carefully managed and curated, so that development teams are not overwhelmed. The key is for organisations to remain open to constructive criticism and accept that they will never – internally – spot every vulnerability. “One of the things companies hang up on is inviting random researchers to look at their security. They want more control of the situation,” concludes Ludwig. “When you have a bug bounty programme, you have some definition of what that engagement should look like. It actually ends up being more transparent, but also more controlled.”
A checksum is a value used to verify the integrity of a file or a data transfer. In other words, it is a sum that checks the validity of data. Checksums are typically used to compare two sets of data to make sure they are the same. Some common applications include verifying a disk image or checking the integrity of a downloaded file. If the checksums don’t match those of the original files, the data may have been altered or corrupted. A checksum can be computed in many different ways, using different algorithms. For example, a basic checksum may simply be the number of bytes in a file. However, this type of checksum is not very reliable, since two or more bytes could be switched around, causing the data to differ while the checksum stays the same. Therefore, more advanced checksum algorithms are typically used to verify data. These include cyclic redundancy check (CRC) algorithms and cryptographic hash functions such as MD5 (note that MD5 is no longer collision-resistant, so stronger hashes such as SHA-256 are preferred where deliberate tampering is a concern). It is rare that you will need to use a checksum to verify data, since many programs perform this type of data verification automatically. However, some file archives or disk images may include a checksum that you can use to check the data’s integrity. While it is not always necessary to verify data, it can be a useful means of checking large amounts of data at once. For example, after burning a disc, it is much easier to verify that the checksums of the original data and the disc match, rather than checking every folder and file on the disc. Both Mac and Windows include free programs that can be used to generate and verify checksums. Mac users can use the built-in Apple Disk Utility, and Windows users can use the File Checksum Integrity Verifier (FCIV).

What does this mean for an SMB or MSP?

MSPs and SMBs should be aware that a file’s hash can be used to verify its integrity and to determine whether anything malicious has been done to the file.
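That verification can be sketched with Python’s standard hashlib module (the file path and expected hash are illustrative; in practice the expected value comes from the vendor’s download page):

```python
import hashlib

def file_digest(path: str, algorithm: str = "sha256") -> str:
    """Compute a hex digest of a file, reading in chunks so large files fit in memory."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, expected_hex: str, algorithm: str = "sha256") -> bool:
    """Return True only if the file's digest matches the published checksum."""
    return file_digest(path, algorithm) == expected_hex.lower()
```

A matching hash only proves the file matches the published value; if an attacker controls the page that publishes the hash, both can be forged, which is why the advice below suggests comparing the hash reported by multiple independent sources.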
MD5 hashes can be used before executing files to see whether a file has been tampered with prior to its execution and installation. This is done by looking up a file’s published MD5 hash (tip: check and compare the hash reported by multiple websites) and comparing it to the hash of the file you downloaded. This can validate that the new file you downloaded hasn’t been tampered with. It’s important to always be sure you’re installing safe applications or files on your devices. This can be extended to patches from vendors, to validate their file integrity as well. In addition to the recommendations above, you can also check website reviews, the application’s country of origin, or the reputation of the developers. Each of these can incrementally improve your trust in the downloaded file before installing it on your computer.

Additional Business Cybersecurity Recommendations

The recommendations below will help you and your business stay secure against the various threats you face on a day-to-day basis. All of the following suggestions can be accomplished in your company by hiring CyberHoot’s vCISO services. For a vCISO proposal, please email firstname.lastname@example.org.
- Govern employees with policies and procedures. All companies need a password, acceptable use, information handling, and written information security policies (aka: WISP) at a minimum.
- Train employees on how to spot and avoid phishing attacks. Adopt a learning management system like CyberHoot’s product to teach employees the skills needed to become more confident, productive, and secure.
- Test employees with phishing attacks to practice. CyberHoot’s phish testing allows businesses to test employees with believable phishing attacks and put those that fail into remedial phish training.
- Deploy critical cybersecurity technology, including two-factor authentication on all critical accounts. Enable email SPAM filtering, validate backups, and deploy DNS protection, antivirus, and anti-malware on all your endpoints.
- In the modern Work-from-Home era, make sure you’re managing personal devices connecting to your network by validating their security (patching, antivirus, DNS protections, etc.) or prohibiting their use entirely.
- If you haven’t had a risk assessment by a 3rd party in the last 2 years, you should have one now. Establishing a risk management framework in your organization is critical to addressing your most egregious risks with your finite time and money.
- Buy Cyber-Insurance to protect you in a catastrophic failure situation. Cyber-Insurance is no different than Car, Fire, Flood, or Life insurance. It’s there when you need it most.

All of these recommendations are built into CyberHoot’s product and/or vCISO services. With CyberHoot you can govern, train, assess, and test your employees. Visit CyberHoot.com and sign up for our services or email email@example.com for a free consultation. Do it today as you never know when an attack will occur. At the very least, continue learning by enrolling in our monthly Cybersecurity newsletters to stay on top of current cybersecurity threats, vulnerabilities, and breaking news. To learn more about Checksums, watch this short 3-minute video:

CyberHoot does have some other resources available for your use. Below are links to all of our resources, feel free to check them out whenever you like:
- Cybrary (Cyber Library)
- Press Releases
- Instructional Videos (HowTo) – very helpful for our SuperUsers!

Note: If you’d like to subscribe to our newsletter, visit any link above (besides infographics) and enter your email address on the right-hand side of the page, and click ‘Send Me Newsletters’.
ITC Secure’s Chief Operating Officer, Fleur Chapman, explores why gender diversity is so important for the future of cyber security and how the industry can make progress in becoming more inclusive for women. Cyber security is one of the most sought-after technology skills today, with 59% of organisations indicating that they would find it challenging to respond to a security incident due to a shortage of skills, according to a recent World Economic Forum report. One major reason that limits the talent available is a shortage of women entering the sector, with many making the assumption that male-dominated teams have always been the norm in computing and security technologies. However, this is far from the truth. When we look back at history, women have been at the centre of computing technology since its inception, programming giant leaps for humankind. From Ada Lovelace, who wrote what is widely regarded as the first computer program, for Charles Babbage’s Analytical Engine, to Margaret Hamilton, who led development of the on-board flight software behind NASA’s moon landing. Not to mention Bletchley Park’s codebreaking operation, which was credited with cracking the Nazis’ ‘Enigma’ code during World War II, using the same abilities cyber security analysts use today: logic, methodology, and data analysis. This team was made up of nearly 10,000 people, of whom 75% were women. There is also a misconception that, to begin a career in cyber security, you must have a technical background. In reality, many of the best security professionals bring fresh perspectives and a range of soft skills from non-technical backgrounds, from English or psychology majors to artists, writers, and stay-at-home mums! Just as the saying “anyone can cook” goes, so too can anyone get started in cyber security if you have a passion for learning. After all, cyber security is really about learning how technology (and people) work.
Again, taking inspiration from female role models in history: Hedy Lamarr, the Austrian-American actor, was a largely self-taught inventor in her spare time. She co-invented a frequency-hopping method to control torpedoes remotely without the risk of signal tracking or jamming. Naval ships were using the technology by the 1962 Cuban Missile Crisis, and the invention later informed technologies such as Bluetooth and Wi-Fi! These are just some of the rich examples of invention, strength, and achievement from female role models that dispel common gender myths and serve as a source of inspiration to those considering a career in cyber security.

Why diversity matters

Today’s threat landscape is constantly evolving and attacks are increasingly complex – ranging from more sophisticated ransomware and supply chain threats, nation-state attacks on critical infrastructure, overlooked vulnerabilities at the firmware level, and more. Solving the cyber security challenges of today requires a holistic approach and, therefore, building a cyber security workforce needs to involve holistic thinking. Work wisdom tells us that diverse teams create diversity of thought, fresh outlooks, and better outcomes that can lead to changes in the status quo.

Bridging the gender gap

There has been some improvement over recent years in female representation within the cyber security workforce globally. The Global Information Security Workforce Study from (ISC)² estimates that this figure has grown to 24% today, versus only 11% in 2013. However, while this is positive news, there is still work to do to bridge the gender gap within the industry. So how can we make progress as an industry?

1. Encouragement for girls to understand cyber security at an early age

Cyber security is an exciting and diverse career that needs more exposure!
The lack of diversity in STEM fields has been well-discussed, but it’s time to bring this topic home by showcasing interesting opportunities for those with a knack for cyber security – starting at school level. Cyber security programmes need encouragement from both educational institutions and industry professionals if we want our younger generation exposed enough to know what great careers are waiting for them when they leave school or start working.

2. Visibility of female role models, including more female-specific awards and events

The next generation of cyber security professionals will be more likely to choose a career in the field if they see people that look like them succeeding. Visible female role models can educate younger generations and help demystify this profession for young women and men alike – showing how broad it is, with many different skills sought after, covering not only traditional ‘technology’ skills such as product development and engineering, but also skills such as human behaviour, effective communication, business impact, risk management, and project management.

3. Establishment of industry-wide female mentoring and coaching

Mentoring and coaching programmes can be a vital part of diversity efforts for cyber security professionals. These types of initiatives help to retain female employees, provide support when it becomes time to expand career options, or simply help people feel more confident at work because they’re surrounded by others who understand what life is like outside the office walls as well! A number of groups and projects are already in place that are good sources of inspiration.

4. Commitment to building an inclusive environment for all

An inclusive environment is important for any organisation because it allows people from different backgrounds, regardless of gender, race, sexual orientation, disability, or religion, to thrive in the workplace.
And to do this, four areas should be considered:
- Emphasis and support for the business case for diversity and inclusion at the leadership level.
- Unconscious bias training that helps people recognise and detangle their own prejudices.
- Intentional practice of inclusive leadership to create a safe team environment where all employees can speak up, be heard, and feel welcome.
- And finally, leadership accountability, where inclusion is a core value of the organisation and not just a check-box exercise.

By making progress in these areas as an industry, we can start to make cyber security a more diverse and inclusive industry. This will not only help to plug the estimated 2.7 million-person skills shortage, but it will also lead to organisations that are more successful and innovative. At ITC, while our gender ratio of 26% is above the industry average, advocating for and improving all areas of diversity (including gender) is an ongoing commitment for our business. If you’re reading this and want to explore a career in cyber security, take a look at our careers page, which includes open roles right at the bottom of the page. We’d love to hear from you!
Rest assured, there is no global conspiracy to bug you with update notifications. As you may have noticed, unpatched software enables a large proportion of cyberattacks, which is why developers are constantly fixing vulnerabilities in their programs, and why you’re constantly getting alerts about updates. Update the software, patch the vulnerabilities, foil the crooks. To learn more about the situation, we investigated user attitudes about updates in two dozen countries. It turned out that every other person we surveyed is inclined to click “Remind me later.” That being the case, here’s a handy list of the five most important types of software to update — the ones worth tearing yourself away from work or play.

1. Operating system

The operating system is the shell within which all programs on your computer or mobile device run, so security problems here can have very serious consequences. By exploiting a vulnerability in the operating system, cybercriminals can encrypt your data and demand ransom for it, mine cryptocurrency on your hardware, intercept your payment details, discover materials for extortion, and more. Operating system attacks are some of the most massive and destructive attacks out there. For example, through a vulnerability in Windows, WannaCry and NotPetya ransomware compromised hundreds of thousands of computers worldwide, leading to losses in the billions of dollars (read more about it in our history of ransomware post). The Windows updates that addressed the vulnerability — the updates that would have thwarted the attacks — had long been available for download at the time of both the WannaCry and NotPetya outbreaks. Tracing and fixing vulnerabilities in operating systems is an ongoing process, so updates should be regular. This applies to both computers and mobile devices.

2. Browsers

Browsers, too, can give attackers access to a device.
For example, cybercriminals can inject a malicious script into website code for drive-by attacks; victims need only open a Web page to pick up the malware. The creators of an exploit for Chrome carried out such an attack, using a browser vulnerability to download a Trojan to victims’ computers. Although Chrome’s developers quickly released an update that patched the vulnerability, users who put off installing it remained easy prey. Don’t forget about preinstalled browsers such as Safari or Edge. Even if you opened one only once to download Firefox or Chrome, it is still there. Some attacks harness programs that are simply in the system, regardless of whether you use them. Users of iOS and iPadOS versions older than 14.2 had to reckon with a bug in the Safari engine that allowed attackers to run other programs.

3. Office productivity software

We are forever viewing and editing documents, so it should come as no surprise that cybercriminals often use bugs in the Microsoft Office and Adobe suites for attacks. For example, cybercriminals used a vulnerability in Microsoft Word’s DDE feature to download Locky ransomware to victims’ devices. A ransom demand followed, with a threat to destroy or publish confidential data. A short while later, Microsoft released a patch. The moral of the story: to keep your files, reputation, and money safe, update office software the moment you can.

4. Bank apps

Financial apps are among the juiciest targets for cybercriminals because a successful attack gets them directly inside the victim’s wallet. Banks understand that, of course, and constantly update their apps to improve protection. The main thing is to install updates as soon as they become available.

5. Antivirus software

It should go without saying that you also need to keep your security software up to date. New Trojans and viruses appear every single day; in the second half of 2020 alone, Kaspersky security products detected attacks by 80 million unique malicious objects.
To keep you safe from cyberinfections, your antivirus protection needs regular and timely updates. Your antivirus utility probably already updates itself by default, which is both convenient and security-centric, but just in case, check the settings to make sure. Both antivirus software and the malware databases on which it relies need regular, automatic updating. If you want your computer and smartphone to serve you and protect your personal data for a long time, it’s important to consider protection at every level, including timely updates of software, in particular:
- Operating systems,
- Browsers (all of ’em),
- Office productivity software,
- Bank apps,
- And, of course, your security solution.

What to do while installing updates

One last thing. We are used to having digital devices at our beck and call, so a computer or smartphone being out of commission while installing a system update can be rather discombobulating. But there is more to life than technology, if you can believe it. Think about things you’ve wanted to do, if you could only find the time. Breathe some fresh air? Call your parents or kids? Get some exercise? Click that “Update” button, and get yourself busy elsewhere. It’s a win-win. To make exercise more interesting, we asked popular blogger, yoga teacher, and former elite gymnast Shona Vertue to develop a special workout routine, one that’s just the right length to keep the body active during a “medium-scale” update of an operating system. The exercises are designed especially for infoworkers — people with sedentary occupations are prone to posture problems. It’s important to update yourself too, not just your software.
Nowadays, internet data cabling has become an important part of our daily life. As technology advances quickly, older Ethernet patch cable versions are being superseded by newer ones for a better experience. For example, Cat6 cable is an advanced version of the older Cat5. If I upgrade to Cat6 cable on an existing Cat5 network, will it work? Read through this post and find out the answer.

What Is Cat5 Cable?

Cat5 cable is a twisted pair cable for computer networks. Consisting of four twisted pairs of unshielded copper wire, Cat5 is a multimedia cable for carrying signals to transmit data, voice and other information communication services. It is widely used in broadband access projects such as broadband user premises networks. The cable standard provides performance of up to 100 MHz and is suitable for most varieties of Ethernet over twisted pair. Cat5 is also used to carry other signals such as telephony and video. The specification for Category 5 cable was defined in ANSI/TIA/EIA-568-A, with clarification in TSB-95.

What Is Cat6 Cable?

Cat6 cable is a standardized twisted pair cable for Ethernet and other network physical layers that is backward compatible with the Category 5 cable standards. Similar to Cat5, Cat6 cable is made up of four twisted pairs of copper wire. Compared with previous cable versions such as Cat5 and Cat5e, Cat6 Ethernet cable offers better performance of up to 250 MHz and supports speeds up to 10 Gbps (over runs of roughly 55 metres or less). Cat6 cable must be properly installed and terminated to meet specifications. The cable must not be kinked or bent too tightly (the bend radius should be at least four times the outer diameter of the cable). The wire pairs must not be untwisted, and the outer jacket must not be stripped back more than 12.7 mm.

Can I Use Cat6 Cable on a Cat5 Network?

Of course: Cat6 cable can work on a Cat5 network. It is backward compatible with previous specifications, which means it can be used effectively on a Cat5 network.
From the introduction above, the major components of Cat6 cable and Cat5 cable are similar. The difference between Cat5 and Cat6 mainly lies in their electrical specifications, or signal transmission capabilities. Category 6 cable has better specifications than Cat5 or Cat5e, enabling it to support faster data transmission when installed with compatible devices. In fact, it’s typical to use newer cabling types when upgrading a physical network infrastructure, even though the hardware is still using older standards. This is how a network admin can get newer cable installed in preparation for a future time when newer hardware will be deployed.

Things You Need to Know

The major differences between these cables are their capabilities when put into use. Nevertheless, both Cat5 and Cat6 cables use the same end piece, known as the RJ-45 connector, which plugs into the Ethernet jack on a computer, router, etc. They can be plugged into the same ports. Therefore, Cat6 cable works on a Cat5 network. However, Cat5 cable can’t be used on a Cat6 network, since a Cat6 network places higher requirements on cabling performance and capabilities, which Cat5 cable cannot meet.
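The comparison can be summarised in a small sketch; the figures below are the commonly cited bandwidth and maximum Ethernet link speeds for each category (Cat5e is added for context, and the helper function is illustrative):

```python
# Commonly cited specifications per twisted-pair cable category.
CABLE_SPECS = {
    "cat5":  {"bandwidth_mhz": 100, "max_speed_mbps": 100},     # 100BASE-TX
    "cat5e": {"bandwidth_mhz": 100, "max_speed_mbps": 1_000},   # 1000BASE-T
    "cat6":  {"bandwidth_mhz": 250, "max_speed_mbps": 10_000},  # 10GBASE-T, ~55 m runs
}

def supports(category: str, required_mbps: int) -> bool:
    """Check whether a cable category can carry the required link speed."""
    spec = CABLE_SPECS.get(category.lower())
    return spec is not None and spec["max_speed_mbps"] >= required_mbps

# Cat6 on a 100 Mbps Cat5-era network: fine, it is backward compatible.
print(supports("cat6", 100))     # True
# Cat5 on a 10 Gbps network: the cable cannot carry the required speed.
print(supports("cat5", 10_000))  # False
```

This is the backward-compatibility point in code: the faster cable always satisfies the slower network’s requirement, but not the other way around.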
The Ultimate Guide to Network Infrastructure

Networks are everywhere today: Bluetooth personal area networks, local area networks in the home and office, wide area networks connecting remote offices, the cellular networks carrying mobile voice and data, and the big one that combines them all, the internet. Network infrastructure describes the many components enabling networks. This includes hardware such as modems, hubs, switches, routers, gateways and repeaters; software services such as monitoring and management tools and the operating systems running on the hardware devices; and even the media carried on and transmitted through the network.
By now the global economic impact of the COVID-19 virus, commonly known as the coronavirus, has become apparent. While hand sanitizers may be enjoying 15 minutes of fame, many businesses are witnessing a huge decline in their bottom line. The travel industry alone has endured a loss of more than USD$10 billion in just the first quarter of 2020. In Canada, gas prices declined steeply within days. As the world comes to understand the global impact of the virus, the cybersecurity industry has to be on high alert. Most of the information and misinformation is being passed through digital media outlets, and researchers will be conducting and storing their clinical data about the coronavirus and possible vaccines in digital data management software. This has opened up many avenues for cyber-scammers to exploit. The fear of the coronavirus has created a desperation for information about the virus and its effects, resulting in readers letting their cybersecurity guard down. Scammers can easily create malicious code disguised as sourced information about the virus. Instead of becoming easy targets, there are four things users can do to ensure safety on the internet.

Trust government sources

The best source of information is the local government and their online websites. They will be able to provide the most up-to-date information about the virus and any lockdowns or closures affecting their citizens.

Check your sources

It is always important to check the reliability of the source of information. Although Facebook and Instagram posts tend to be the ones that gain the most clout, the information passed through them is not necessarily moderated or fact-checked. It is best to look for reliable sources with information dispensed by experts, such as the World Health Organization (WHO), health ministers, doctors and nurses.
Report misinformation

It is not just important to access reliable information; it is also important to ensure that myths and misinformation about this very threatening virus are not spread. By reporting posts that convey false advice to the masses, the threat against cybersecurity can be further reduced. In addition, this practice will establish the need for social media corporations to better moderate the problem of "fake news". When posts are reported, it also puts the onus on the social media companies for failing to police information that affects their users' health and safety.

Practice safety during online shopping

Since many stores are running out of stock for essentials, many consumers might turn to online shopping for their needs. When making purchases on sites like Amazon or eBay, ensure that the seller is either the company itself or a seller whose credibility you have checked through their ratings and reviews. It is always good practice not to provide information to websites that do not look reliable. One easy way to confirm whether a site is trustworthy is to check the address bar: browsers mark secure sites with an "https:" prefix or a lock symbol next to the web address. It is extremely critical to stay vigilant and hold corporations accountable for cybersecurity, instead of allowing fear and desperation to overtake safety and health.
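The address-bar check can also be automated when vetting links in bulk, for example in a simple mail or chat filter. A minimal sketch in Python (the URLs are made up; this only checks that a link uses HTTPS, and says nothing about who owns the site):

```python
from urllib.parse import urlparse

def uses_https(url: str) -> bool:
    """Return True if the URL's scheme is https (i.e. the connection is encrypted)."""
    return urlparse(url).scheme == "https"

links = [
    "https://www.who.int/",          # encrypted: passes this basic check
    "http://covid-cures.example/",   # plain http: flag for closer scrutiny
]
suspicious = [u for u in links if not uses_https(u)]
print(suspicious)  # ['http://covid-cures.example/']
```

Note that HTTPS alone does not make a site trustworthy; phishing sites also use HTTPS, so this check is only a first filter before the reputation checks described above.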
A quick search on “the problem of journalism today” returns results that speak to issues of balance and bias, trust in the press, and, of course, the unabating and pervasive problem of “fake news.” An article published in The Washington Post reads, “The crisis in local journalism has become a crisis of democracy,” and other sources ponder, “Does journalism have a future?” The answer could lie in blockchain, the technology that is usually reserved for circles discussing cryptocurrency or supply chain integrity. But applying blockchain technology to journalism may very well generate the results that the public is demanding in terms of content integrity and a decentralized publishing platform.

What Is Blockchain?

Blockchain can be an elusive technology for the average person to understand. A great (and very simple) example of what blockchain technology does is offered by The New York Times, with the author of the article asking you to look at a coin dancing on the screen in front of you. Now, note that this coin, which is being displayed on your screen and therefore cannot be touched or held by you, is a crypto coin. The coin, unlike a real coin, doesn’t have a physical presence that you can hold. But, like a real coin, it does represent value. Every movement that the coin makes, and the movement of every other coin like it, is being constantly recorded and updated in a shared ledger. The name of this ledger is… blockchain! If you’re a person who benefits from more specific, technical details, then a more precise definition of blockchain, provided by Blockgeeks, is: “A time-stamped series of immutable records of data that is managed by a cluster of computers not owned by any single entity. Each of these blocks of data…is secured and bound to each other using cryptographic principles.”

What Is Blockchain Used for?

Blockchain is usually used for making and tracking money, but it has a variety of applications.
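Before looking at applications, the “bound to each other using cryptographic principles” idea from the definition above can be sketched in a few lines of Python (illustrative only; real blockchains add consensus, signatures, and a peer-to-peer network on top of this):

```python
import hashlib
import json
import time

def make_block(data: str, prev_hash: str) -> dict:
    """Create a time-stamped block whose hash covers its contents AND the previous block's hash."""
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def chain_is_valid(chain: list) -> bool:
    """Recompute every hash; editing any earlier block breaks all the later links."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if block["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("genesis", "0")]
chain.append(make_block("article v1 published", chain[-1]["hash"]))
chain.append(make_block("correction issued", chain[-1]["hash"]))
print(chain_is_valid(chain))        # True
chain[1]["data"] = "tampered text"  # quietly rewrite history...
print(chain_is_valid(chain))        # False: the tampering is detectable
```

This tamper-evidence is exactly the property that makes the technology interesting for content integrity: a published record cannot be silently altered after the fact.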
For example, blockchain technology is being used in supply chain management to record product status at each stage of production, allowing a company to see where each unit of raw material or product comes from. It’s a way to provide transparency, tracking efficiency, logistical improvements, and information. The most prominent use of blockchain, though, is still cryptocurrencies. Blockchain enables the existence of various cryptocurrencies by confirming transactions involving the currencies without a central authority; everything is peer-to-peer.

The Use of Blockchain in Journalism – What? How Does that Work?

If you’re scratching your head at how blockchain could be used in journalism, you’re not the first person to do so. The idea is relatively new and has raised a lot of questions about the range of uses of the technology and its suitability for very different applications. With that in mind, experts have found that blockchain technology may be used by media organizations in three key ways:
- To create auditable, verifiable, and transparent database solutions that can be referenced;
- To create cryptocurrency-based business models; and
- To access public data that’s secured in blockchain-based systems.

Why Does it Matter?

As stated above, the problems associated with journalism today are rampant. One of the most pressing issues is fake news – news that is fabricated outright and has no basis in reality, or news that is labeled as fake but is actually true, though perhaps without indisputable data behind it that’s accessible on a large scale. Many claim that this wouldn’t happen if news weren’t being published for profit. Even Elon Musk has stated that he wants to fix journalism and media mistrust. Indeed, there is a journalism crisis at large.
The three uses of blockchain technology by media organizations may provide benefits in a number of different ways, including:
- Offering tools to control the origin and authenticity of content;
- Creating an automated content management marketplace;
- Creating opportunities for companies to incentivize users for content creation;
- Making it easier to charge less for content;
- Allowing for news to be paid for with cryptocurrencies;
- Creating news that is verified and tamper-proof;
- Verifying journalists and holding their organizations accountable;
- Reducing the need for ads and the ad-driven business model; and
- Helping to spur innovation for independent newsrooms.

There’s More to the Story

Blockchain technology may indeed offer benefits when applied to journalism – being able to track sources and verify publications is clearly valuable. But some experts warn that lauding the benefits of blockchain technology doesn’t tell the full story. To be sure, an article in The New York Times titled “Alas, the Blockchain Won’t Save Journalism After All” raises concerns about people’s confusion surrounding blockchain technology and the process of buying tokens, a culture in which people buy into blockchain technology only if they believe they can make money from it, and a lack of blockchain relevance to the average consumer. The Columbia Journalism Review raises another concern: whether the journalism community can address the inequities of uneven token distribution and ask the hard questions needed to ensure that the technology isn’t accessible only to a few, and that it is used for the benefit of the public interest. These issues are very real, and perhaps daunting, but not insurmountable.
Part of the Solution, but Not All of It

For now, repairing and restoring trust in journalism is likely to require cultural shifts driven by a combination of remedies, including public demand for transparency, deliberation and thought in the way consumers choose news, and trustworthy reporting that goes beyond just getting the “scoop” and gets to the heart of influence in America. Blockchain may be part of that solution, and it does show promise in ameliorating the problems of trust, monetization of news, and ethical governance. However, until consumers become more familiar with the technology and more willing to invest in it – which will come with trust and understanding – it’s unlikely to be a viable solution on its own to the problems journalism is facing. In the meantime, we can all agree that we citizens must stop the “spin and fake news” and the overall dissemination of disinformation from news sources via social media.
The Advent season marks the beginning of the Christian year across many western churches in the United States. Its length varies from 22 to 28 days, starting on the Sunday nearest St Andrew’s Day and encompassing the next three Sundays, ending on Christmas Day. (Advent Sunday is the fourth Sunday before Christmas Day.)

First Sunday of Advent 2014 starts on Sunday, November 30, 2014.

It is uncertain when exactly the celebration of Advent was first introduced in the Christian church. Some sources say that Advent began on November 11 (St Martin’s Day) at some time in the fifth century, in the form of a six-week fast leading to Christmas. Advent was reduced to its current length at some stage in the sixth century, and the fasting was later no longer observed.

Advent was originally a time to reflect and prepare for Christmas, similar to how Lent is a preparation for Easter; Advent has sometimes been referred to as the Winter Lent. In recent times the restrictions that Advent brings to Christians have become more relaxed.

Advent traditions spread from Europe to the United States, especially the Advent calendar, which became very popular in the United States after World War II as American military personnel and their families who were stationed in Germany brought them home and made them part of the pre-Christmas traditions. Some people credit President Dwight Eisenhower with helping the tradition of the Advent calendar spread in the United States during the 1950s.

What do people do?

Many Christians in the United States attend a church service on the first Sunday of Advent and may engage in activities such as special prayers and contributing ideas on enhancing peace. Many Advent traditions are observed in the United States in the prelude to Christmas Day. For example, the Advent wreath is becoming increasingly popular in the United States; the wreath can be seen in various churches across the nation around this time of the year.
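Incidentally, the dating rule stated earlier – Advent Sunday is the fourth Sunday before Christmas Day, which is also the Sunday nearest St Andrew’s Day – is simple enough to compute. Here is a short Python sketch:

```python
from datetime import date, timedelta

def first_sunday_of_advent(year):
    """Advent Sunday: the fourth Sunday before Christmas Day."""
    christmas = date(year, 12, 25)
    # Days back to the Sunday strictly before Christmas
    # (Python weekdays: Monday=0 ... Sunday=6).
    back = (christmas.weekday() + 1) % 7 or 7
    fourth_advent_sunday = christmas - timedelta(days=back)
    return fourth_advent_sunday - timedelta(weeks=3)

print(first_sunday_of_advent(2014))  # 2014-11-30, as noted above
```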
Advent calendars of all designs are also given as gifts at this time of the year. The calendars feature openings in the form of windows or doors that are numbered to count the days to Christmas. Calendars may contain chocolates, toys, or candy and are given to children as a fun way to observe the Christmas countdown. Some traditional Advent calendars show 24 days, but many show 25, with the last opening on Christmas Day.

The church year begins on September 1 in many eastern Christian churches, so Advent begins at a different time from when it starts in the western churches. The eastern equivalent of Advent is called the Nativity Fast, which runs for 40 days.

Personal Note: I may not practice, but will guide my children, and perhaps one day…. one day – Jermal
DevOps is an ideal way of carrying out app development that enhances app performance for business and allows users to share feedback. The increasing adoption rate of DevOps is widening the job options for DevOps learners.

Critical DevOps Lifecycle Phases for Software Integration

The term “DevOps” is formed from two distinct words – Development and Operations – and describes an agile relationship between the two. DevOps is a process practiced by development and operations engineers from the beginning of design through production support. You cannot get DevOps right without understanding its lifecycle. The continuous DevOps lifecycle has seven phases in total, which we will discuss below:
- Continuous Development
- Continuous Integration
- Continuous Testing
- Continuous Monitoring
- Continuous Feedback
- Continuous Deployment
- Continuous Operations

IT engineers can achieve a lot more with the correct implementation of the DevOps lifecycle. It makes the app production process more effective and more secure. If you are in IT engineering, you should know that DevOps is the future of production lifecycles and promises significant operational efficiencies for large organizations.

First, Understand What Exactly DevOps Is

The modern marketplace needs rapid product development based on customer feedback and requirements, so that businesses can respond instantly to market shifts. This eliminates the need to wait a year to add extra features in the next release, and it puts pressure on the IT team to release new versions quickly. DevOps is a practice that enables a single team to handle the whole app development lifecycle: development, testing, deployment, and operations. DevOps aims to shrink the system's development lifecycle while delivering features, fixes, and updates frequently, in close alignment with business objectives. It is a software development approach through which developers can build premium-quality software quickly and reliably.
Let’s Discuss Each of the Phases of the DevOps Lifecycle

1. Continuous Development – The continuous development phase of the DevOps lifecycle includes the ‘planning’ and ‘coding’ of the software. The vision for the software project is set out by the developers during planning, and they then start developing the code for the app or software. There are no DevOps tools made specifically for planning, yet there are numerous tools that help maintain the app code. The code can be written in any established language and is maintained with version control tools; maintaining the code this way is referred to as Source Code Management. Widely used tools here are SVN, Git, CVS, and Mercurial for version control, and JIRA for planning and issue tracking. Some developers may also use tools like Ant, Gradle, or Maven in the continuous development phase to build/package the code into an executable file that can be used in any of the next phases of the lifecycle.

2. Continuous Integration – The continuous integration phase starts after the development of the application. It includes several steps, such as planning the tests that will be performed in the next phase and reviewing the code to ensure it produces the outcome required by the initial project documentation. Continuous integration is a software development practice in which developers commit changes to the source code frequently; this is a routine task for them, performed on a daily or weekly basis. Every change is then built, which enables early detection of bugs, if present. Building the code includes several things along with compilation: code review, unit testing, integration testing, and packaging.

3. Continuous Testing – In the continuous testing phase, the QA team continuously tests the developed software for bugs. For this phase, the testing team uses automation testing tools like Selenium, JUnit, and TestNG. These testing tools allow QAs to test multiple code-bases efficiently in parallel, leaving no room for flaws in the functionality.
QAs can use Docker containers in this phase to simulate the test environment. Automation testing with Selenium and TestNG generates reports, and the whole testing phase can be automated using a continuous integration tool such as Jenkins. For instance, suppose a developer has written Selenium code in Java to test the app. This code can be built using Maven or Ant, and once built, it is used for UAT (User Acceptance Testing). Developers save time and effort with automation, since the tests execute automatically rather than being run manually. Moreover, report generation is a major advantage.

4. Continuous Deployment – In the continuous deployment phase, the code is deployed to the production servers. It is critical that the correct code is deployed on all servers. Configuration management and containerization tools help achieve continuous deployment. Configuration management is the act of releasing deployments to servers, scheduling updates on all servers, and keeping the configurations consistent throughout the servers. Containerization tools are also important in the deployment phase; Docker and Vagrant are widely used at this stage. These tools help produce consistency across the development, test, staging, and production environments, and they help in scaling instances up and down quickly.

5. Continuous Monitoring – Continuous monitoring is the most important stage of the DevOps lifecycle. In this stage, developers continuously monitor app performance and record all the vital information related to the use of the software. This recorded information is later processed to verify proper app functionality. System errors such as an unreachable server or low memory are identified and resolved in this stage, which helps developers reach the root cause of an issue. Continuous monitoring maintains the security and availability of DevOps development services.
Network issues are also resolved in this stage. Developers use tools such as Splunk, the ELK Stack, Nagios, Sensu, and New Relic to closely monitor app performance and the servers, along with the overall health of the system.

6. Continuous Feedback – Application performance is consistently enhanced by analyzing the final output of the product. Continuous feedback is a vital phase in which customer feedback is a key asset that helps enhance the current software product and release new versions at a faster rate based on the responses.

7. Continuous Operations – DevOps operations are based on continuity, with a fully automated release process. With DevOps, it is clear that you can build a software product that delivers efficient results and grows your count of potential clients.
You've heard them called virtual assistants, digital personal assistants, voice assistants, or smart assistants. Operated by artificial intelligence, technologies such as Siri, Alexa, Google Assistant, and Cortana have become ubiquitous in our culture. But what exactly do they do? And how seriously should we take them? While all the tech giants want us to use their smart assistants all the time, what do they offer us in return? And how do we keep our information and conversations safe?

Each of these smart assistants is limited by the platform and the devices they are running on. I shouldn’t expect Amazon’s Echo to give me step-by-step directions to the nearest pizza place...or should I? Here's what you need to know about smart assistants and the real value (and danger) they provide.

Getting started

If you're looking to purchase a smart assistant, it's best to take a beat and think about what it is you really need from it. Do you want to be able to control appliances or other devices in your home at the sound of your voice? Do you want to be able to look up valuable information without having to reach for your phone or boot up your computer? Are you looking for some virtual company for yourself or your kids?

While all these virtual assistants have a wealth of information at their disposal, it requires some getting used to in order to make optimal use of their possibilities. In addition, they each have their specialties, so take a good hard look at which technology is the best fit for your needs. And while you are shopping around, do not ignore the security implications—both what's possible today and what could come to pass.

Understanding voice commands

While smart assistants have come a long way since the early days of Siri, they all have a common flaw: a less-than-accurate reading of voice commands.
When the smart assistant is unable to understand a voice command, its AI faces a dilemma: act on instructions it isn't sure of, or risk annoying the owner by asking them to repeat the question or instructions too many times. This brings the risk of the assistant misunderstanding the given instructions and taking unwanted actions as a result. We have covered this subject before, focusing on some of the vulnerabilities that researchers were able to uncover in smart assistants’ voice commands. The possible consequences can range from slightly annoying mistakes to ridiculous behavior, such as sending a recorded conversation to all your contacts.

Improvements in voice command technology are being made each year, as more precise algorithms are created to better adapt to complex vocal signals. Machine learning is credited with significantly improving voice recognition, but there's a long way to go before smart assistants can hear and process requests with the accuracy of human beings.

Kids and smart assistants

Do you let your kids/grandchildren play with your phone? Notwithstanding the fact that at a certain age our grandchildren probably have a better understanding of the phone than we do, we must warn against unsupervised usage of smart devices by young children. This absolutely extends to smart assistants, which children who are home alone can access by simple voice command. Parental controls are available for most of these devices, and some smart assistants have even been developed specifically for kids and allow parents to easily access search history. Unlike phones and other screen devices, smart assistants are screen-less and encourage more human-like interactions.
Experts are cautiously optimistic about the effects of smart assistants on children, but we hesitate to fully endorse the technology's safety for kids, especially considering some of the security vulnerabilities inherent in the software, which we will cover below.

Cybersecurity

Most of us, and I’m including myself here, love to show off what our latest gadgets can do. So we may be tempted, without thinking it through, to give control over our IoT devices to our smart assistants. Under normal circumstances, this shouldn’t be a problem—but circumstances are not always normal. Devices get lost, stolen, hacked into, and otherwise compromised by less-than-well-meaning individuals, which can be troublesome, to say the least, when those devices are in control of your domestic appliances.

One such abnormal circumstance is a cybersecurity attack method that researchers have investigated called "dolphin attacks." These are ultrasonic audio waves that are hard for humans to hear, but that the smart assistant would interpret as a command. To protect yourself from these types of attacks, you would have to turn the smart assistant off until you need it, or introduce a confirmation protocol for certain commands, which would alert the human to the fact that the assistant has received a command of some sort. For convenience, the protocol could be set to work only for sensitive operations. One could compare this to a 2FA for a certain subset of commands.

By using virtual assistants to do our online shopping, we also run the risk of these technologies and their parent companies learning potentially sensitive facts about us, such as payment information and product ordering history. This information is then stored in the cloud, where security is in the hands of the operator of the smart assistant or their cloud provider.
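A confirmation protocol of the kind suggested above could be sketched as follows; this is only an illustration, and the command names and the set of "sensitive" operations are purely hypothetical:

```python
# Sensitive commands require an explicit second step before they execute,
# so an inaudible "dolphin attack" command cannot act on its own.
SENSITIVE = {"unlock_door", "purchase", "send_message"}

def handle(command, confirmed=False):
    """Execute a command, gating sensitive ones behind confirmation."""
    if command in SENSITIVE and not confirmed:
        return "confirmation required"
    return f"executing {command}"

print(handle("play_music"))                   # executing play_music
print(handle("unlock_door"))                  # confirmation required
print(handle("unlock_door", confirmed=True))  # executing unlock_door
```

Everyday commands stay frictionless, while the small set of dangerous ones always requires the owner's deliberate approval—the 2FA-style trade-off described above.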
And with the growth of smart assistant usage, you can imagine the growing interest of malware authors looking for associated vulnerabilities and bugs they can abuse for personal gain. In fact, the weaponization of IoT is just starting, but we expect it to grow quickly, as there is little security in place to stop it.

Paying attention

Another important thing to keep in mind is that we humans are not as good at multitasking as we like to think. Even as your virtual assistant reads out your email to you, your brain gets distracted enough to stop performing tasks that require your full attention. You could end up either missing the point of the email or spicing up your family dinner with something inedible.

Eavesdropping

Even though this study showed no conclusive results about apps listening in on our conversations, we should realize that by using voice-activated assistants, we have implicitly given them permission to eavesdrop on us. They are designed to pay attention and wait for an activation command. And how often do we realize this and turn them off when we don’t want or need them to listen? (My guess would be close to never.) If we are honest with ourselves, we don't think to do this—plus, how inconvenient is it to constantly boot up a device every time you want to use it, especially one designed to interact seamlessly with your life? As a consequence, they are always on standby and therefore always listening.

At least it's funny

On the bright side, we have been introduced to a whole new dimension of humor, thanks to the snarky writers behind smart assistants' programmed responses.
- We can let the virtual assistants talk to each other and see what develops.
- We can introduce smart assistants as guest stars in comedy TV shows. Who remembers The Big Bang Theory's Raj meeting Siri “in the flesh”?
- We can even try to make them tell us jokes.

Don’t become a victim of your smart assistant.
Use the parts of it that give personal value to you and your quality of life, and tighten up security on parts you don't need. Think about what information you trust your assistant with and who could be behind the scenes. And remember: just because it's a new, fun technology doesn't mean you have to have it.
Optical fibers are not usable for outside installation if they are not protected properly. The loose tube is one of the most widespread methods of protecting optical fibers from environmental stresses and climatic changes.

Optical fibers are protected by a secondary coating – additional protection applied over the bare fiber – in a process also known as buffering. Optical fibers are protected by many different methods; the most common are putting the fibers in a loose tube, applying a tight coating, or ribbonization. Here we will discuss the loose tube process.

The loose tube making process is important in the sense that there are many practical difficulties associated with it. The first is the control of excess fiber length inside the loose tube. Called EFL by the industry, the excess fiber length has a vital role to play in the process of stranding loose tubes into a fiber cable and also in the temperature characteristics of the cable.

Bare optical fiber used for telecommunication purposes, complying with one of the ITU-T recommendations G.651, G.652A, G.652B, G.652C, G.652D, G.653, G.654, G.655A, G.655B, G.655C, G.655D, G.655E, G.656, G.657A, or G.657B, has a nominal diameter of 250 micrometers. Optical fibers are colored so they can be distinguished when used in bundles.

The loose tube is a plastic protective cover; the fibers sit inside the tube, which is usually filled with jelly. To reduce installation time where water penetration threats are minimal, a fully dry cable design, free from jelly inside the tube, is also current practice. A dry loose tube can contain water-blocking threads or polymer powder inside along with the fibers.

Materials for making a loose tube include the plastics PBT (polybutylene terephthalate), PP (polypropylene), PC (polycarbonate), and Nylon-12. PBT is the most widely used plastic for the loose tube.
PBT has a very low coefficient of thermal expansion while retaining the required tensile strength and elongation properties. Nylon is also a superior material, but it is comparatively much costlier than PBT; it is estimated that Nylon-12 is around 3 times costlier than PBT. Polycarbonates are used for dry tube designs.

Loose tubes can accommodate fiber counts from 1 to 72 as the case may be, but up to 12 fibers per tube is industry practice and the generally accepted norm, both from a process point of view and from an installation convenience point of view.

The thickness of the tube depends on the crush resistance the cable will experience, and it also varies with the outer diameter of the tube. For a 2-fiber loose tube, a 1.7mm outer diameter and 1.1mm inner diameter is suitable. Similarly, a 4-fiber tube can have a 1.8mm outer diameter and 1.2mm inner diameter, and 6 fibers can be put into a tube with a 1.9mm outer diameter and 1.3mm inner diameter. 8-fiber tubes have a 2.0mm outer diameter and 1.35mm inner diameter. Currently, many manufacturers use a tube with a 2.2mm outer diameter and 1.4mm inner diameter for 12 fibers. The old practice was to use a tube with a 2.5mm outer diameter and 1.7mm inner diameter, which gave good clearance for 12F bundles.

Loose tubes are designed in a variety of ways. One common way is to put the loose colored fibers into the tube, where they are protected by the jelly injected during the tubing process. Water-blocking yarn or powder can also be placed inside the tube. A typical diagram of a 12F loose tube is shown below:

Loose tubes can accommodate fiber bundles as well, which increases the fiber count capacity of the loose tube. Fibers are made into bundles of 12 or 24 before they enter the tube in the loose tube line. The bundles are tied together with one or two binder yarns of different colors. In that way, up to 6 or 8 bundles can be inserted into a loose tube.
In most cases, manufacturers are limited to putting two bundles into a tube to make a 24F loose tube. A sample diagram of a 48-fiber bundled loose tube with 4 bundles of 12 fibers is shown below:

The loose tube can accommodate ribbon fibers too. Ribbonized fibers are placed into a relatively larger loose tube. This way, if we put 6 numbers of 12F ribbon into a tube, we can manufacture a 72F loose tube. BSNL, the state-owned telecom operator in India, designs high-count ribbon cables using this technology. The BSNL design for 576F high-count metal-free optical fiber cables uses 72F loose tubes, each manufactured by putting in 6 numbers of 12F ribbons. Each ribbon is printed for identification. A typical diagram of a 72F loose tube with 6 numbers of 12F ribbons is shown below:
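Returning to the tube dimensions quoted earlier, the implied wall thickness is simply half the difference between the outer and inner diameters. The short sketch below tabulates this using the figures given in this article:

```python
# Fiber count -> (outer diameter, inner diameter), both in mm,
# as quoted in the dimension discussion above.
TUBES = {
    2:  (1.7, 1.1),
    4:  (1.8, 1.2),
    6:  (1.9, 1.3),
    8:  (2.0, 1.35),
    12: (2.2, 1.4),
}

def wall_thickness(od_mm, id_mm):
    """Tube wall thickness: half the OD-ID difference."""
    return (od_mm - id_mm) / 2

for fibers, (od, inner) in sorted(TUBES.items()):
    print(f"{fibers:>2} fibers: OD {od} mm, ID {inner} mm, "
          f"wall {wall_thickness(od, inner):.2f} mm")
```

The table shows why the 12-fiber tube is the thickest-walled of the lot: a larger tube needs a proportionally heavier wall to keep the same crush resistance.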
Updated: Dec 3, 2021

Fuzzing is a method of testing any area that accepts user input, whether intended or unintended. It involves presenting the input area with a large variety of invalid, unexpected, or random data, with the intention of causing the system to act in an unexpected way. Doing this can indicate that there are potential attack paths available against the system being targeted.

Types of fuzzing

There are various styles of fuzzing beyond sending a single hand-crafted request, most notably automated fuzzing, where many requests are sent to the system rather than just one. This is done with the same goal in mind; however, it allows for greater coverage and automated methods of testing against the target system.

As an example of fuzzing a web application, a string mixing quote characters, angle brackets, and SQL metacharacters could be presented to the application. Such a fuzzing string contains a variety of symbols intended to trigger different effects depending on how well secured the application is, what software is in use, and so on. In the event the application is vulnerable to Cross-Site Scripting or SQL injection, for example, the resulting page may give some indication of these vulnerabilities and allow the person investigating the application to dig further into how to exploit the issues found.
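To make the automated style concrete, here is a toy fuzzer in Python. The target function is hypothetical – a stand-in for whatever parser or request handler is being tested, deliberately written to crash on an unbalanced single quote – and a real fuzzer driving HTTP requests would be far more sophisticated:

```python
import random
import string

def target(data):
    """Hypothetical fragile parser: chokes on an unbalanced quote,
    the kind of flaw that symbol-heavy fuzzing strings expose."""
    if data.count("'") % 2 == 1:
        raise ValueError("unbalanced quote")
    return len(data)

def fuzz(rounds=1000, seed=42):
    """Throw random symbol-laden strings at the target and collect
    every input that makes it raise an exception."""
    rng = random.Random(seed)
    alphabet = string.ascii_letters + "'\"<>;=()-"
    crashes = []
    for _ in range(rounds):
        data = "".join(rng.choice(alphabet)
                       for _ in range(rng.randint(1, 20)))
        try:
            target(data)
        except Exception:
            crashes.append(data)
    return crashes

print(f"{len(fuzz())} crashing inputs found")
```

Each collected crash is an input worth investigating by hand, exactly as the article describes: the fuzzer finds the suspicious behavior, and the tester digs into whether it is exploitable.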
REST (Representational State Transfer) is a simple way of transferring content using the HTTP protocol. Unlike SOAP, REST does not need to wrap content in XML, and a REST service can be provided and consumed using a simple URL. The REST software architecture involves clients (consumers) and servers (providers); the consumer initiates requests, and the provider processes the requests and may deliver information back to the consumer.

The consumer of a REST web service points to a single service, identified by a URL. For example, a provider makes their service available through a URL query line; the consumer sends data (a request) to the URL and may receive a response from the provider. A typical REST URL might look like this:

http://localhost:8080/OrderStatus?ParmA=1&Parm2=B

HTTP Request Methods, Status Codes, and Header Fields

In a REST exchange, several types of information are communicated. The request method tells the provider what operation to perform:
- GET: Retrieves data/resource
- PUT: Updates data/resource
- POST: Creates data/resource
- DELETE: Deletes data/resource

When a consumer sends a request, the provider returns an HTTP status code showing how the request was handled. The code falls into one of several classes: Informational (1xx), Successful (2xx), Redirection (3xx), Client error (4xx), or Server error (5xx).

HTTP header fields are components of the message header of requests and responses used in HTTP. They define additional information about the request to, and the response from, the service provider. This additional information may be sent along with the status codes and may include required authentication info, such as a user name and password, or just additional information about the requester. Headers can be used to override common headers or to create custom ones. Examples of common header fields include Cookie, ETag, Location, Referer, DNT, and X-Forwarded-For.
In the CIC Studio's Web Service Consumer object (for REST), this HTTP header information is used in the related Request and Response actions.

Advantages & Disadvantages

Generally, REST is easier to set up and implement from a provider's standpoint; however, it's important to consider both the pros and cons.

Pros:
- Easier to build and maintain

Cons:
- Lacks security standards
- Tied to HTTP
- Point-to-point communication (as opposed to distributed computing)
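As a small illustration of the pieces above – the query-line URL and the status-code classes – here is a sketch using only the Python standard library. It builds and parses a URL like the example and maps status codes to their classes; an actual consumer would then send the request with an HTTP client:

```python
from urllib.parse import urlencode, urlsplit, parse_qs

def build_url(base, **params):
    """Assemble a REST-style URL with an encoded query line."""
    return f"{base}?{urlencode(params)}"

def status_class(code):
    """Map an HTTP status code to its response class."""
    return {
        1: "Informational", 2: "Successful", 3: "Redirection",
        4: "Client error", 5: "Server error",
    }[code // 100]

url = build_url("http://localhost:8080/OrderStatus", ParmA=1, Parm2="B")
print(url)  # http://localhost:8080/OrderStatus?ParmA=1&Parm2=B

# The provider can recover the parameters from the query line.
print(parse_qs(urlsplit(url).query))  # {'ParmA': ['1'], 'Parm2': ['B']}

print(status_class(200))  # Successful
print(status_class(404))  # Client error
```

Note how the entire request is expressed in the URL itself – this is what makes REST services easy to provide and consume, and also why they are tied to HTTP.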
Data theft and data breaches have become the most common forms of cybercrime, and with the advent of complex SaaS applications, they are getting even more frequent. Sooner or later you may realize that you could be the next target of a cyberattack and start worrying about the security of your data. Penetration testing (aka pentesting) is crucial for any business: a business owner who never performs penetration testing is left in a vulnerable position. Penetration testing will help you find the different vulnerabilities in your system and fix them. You can identify the security flaws in your system and prevent hacking or cyber-attacks, and it also helps you find the different things you can do to fix those security flaws. But what exactly is Penetration Testing? Penetration testing is a process of identifying vulnerabilities in an information system and attempting to exploit them. The goal of a penetration test is to determine whether a system can be compromised. Penetration testing is a method of validating the security of an information system by simulating or emulating an attack on the system or software. Pentesting is used to identify security vulnerabilities and weaknesses in the system or software and to determine the extent of damage a malicious user could cause. Why should I undergo Penetration Testing? Penetration testing helps to identify any problem that a cyber-criminal can exploit. Its most important benefit is that it improves the system's security by identifying weaknesses. Penetration testing provides your organization with an external view of the security of your applications, with a higher level of detail and focus than an internal vulnerability assessment would provide. It is the best way to determine the security of a network or an application.
It ensures that the network or an application has been tested for flaws and vulnerabilities. Who performs Penetration Testing? Based on the size of the organization, penetration testing is performed by two parties: - Internal security team - External penetration testing provider The main difference between them is the perspective. An external penetration testing provider tests your security systems from outside, from the perspective of a malicious hacker. An internal security team member takes a more internal view, testing the security systems from inside, from the perspective of an insider who could damage the system. Most organizations opt for an external penetration testing provider to test the security of their network infrastructure and applications, while many companies, especially large organizations with multiple locations, also rely on an internal security team to perform penetration tests. How often should I perform Penetration Testing? Penetration testing is a highly effective method for evaluating the security of your IT infrastructure and network. Yet, this method is one of the most neglected and underused solutions for managing the security of a network and application. Every penetration test is different and should be treated as such. However, penetration testing should be performed regularly to ensure consistent IT and network security management. Smaller businesses that offer essential services may only need to perform penetration tests once a year, while larger companies that provide more valuable services may want to perform penetration tests more often. How much does a Penetration Test cost? The purpose of penetration testing is to provide business leaders with a solid understanding of the security posture of their network and to test the effectiveness of existing network security controls.
The cost of a penetration test is driven primarily by the following factors: - Size of the company - Complexity of the application or network - The pentest approach: black box, grey box, or white box - Scope of work - Level of expertise The pentest market is highly competitive, with a vast range of pentest companies offering their services for a wide variety of prices. Pricing primarily depends on the factors enumerated above. However, one might expect a fee within the range of $4,500 to $6,500 for simple applications or networks and around $10,000 to $15,000 for more complex networks or applications. Do I need permission from my Cloud Service Provider before performing a Pentest? The short answer is no: you don't need permission from your Cloud Service Provider (CSP) before you can perform penetration tests on your applications. Earlier, Amazon Web Services and Microsoft Azure required permission, but now all 3 top cloud providers (AWS, GCP, and Azure) allow cloud penetration testing. However, there are specific boundaries to what a cloud penetration tester can probe, while the rest remains out of bounds for pen-testing. I have an in-house Security Team. Why should I choose an External Pentest Provider? Choosing an external pentest provider can significantly benefit your organization, even if you already have an internal team. External pentest providers can provide you with a much more in-depth analysis of your security. This can be highly beneficial if you are a small organization with limited resources to dedicate to your pentest. It also provides you with an independent analysis, allowing you to view the results from an objective point of view. An external pentest provider can act as an extra layer of security for your internal security team. If your internal team is already stretched thin, an external pentest provider might be a good fit for your business.
An external pentest provider can also act as a second set of eyes and give you the peace of mind that your network and application are safe. How long does it usually take for a complete Pentest? A complete pen test takes about 1-3 weeks on average. This may extend for a few more days if additional tasks are requested. If a pen test is performed for an organization with an extensive scope, the test might take more time; conversely, an organization with a smaller scope will need less time. As a rule of thumb, the time it takes to perform a complete pen test depends on an organization's security requirements. Note that this time frame is only an average. For instance, if your organization is just starting out and you only want to test the security of a single application, it will probably take only a day or two. If your organization is a large corporation with complex network infrastructure, the pentest will probably take a week or two. Can a Pentest help me in achieving Compliance? Yes, a penetration testing report is one of the major requirements for most compliance standards such as PCI DSS, NIST, etc. A penetration testing report describes the vulnerabilities found in the system, how they were discovered, and the risk associated with them. A penetration test can become complicated if the right people are not involved and an agile approach is not adopted. Astra has the skills and expertise to seamlessly integrate penetration testing, vulnerability assessments, and security management into your existing processes as a trusted partner of leading organizations. Astra's penetration testing is completely compliance-friendly, be it NIST, PCI DSS, or others. What would be required from my end for a Pentest? A third-party pentest is an assessment of your security performed by an external party. Before you engage in one, make sure you understand what your third-party vendor is planning to test.
It is essential to know what assets will be tested and whether the vendor will scan your internal and external facing systems. Also, here are a few other things to keep in mind before starting a pentest: 1. Prepare a penetration testing contract. 2. Decide which environment will be scanned. If production is to be tested, inform your customers. 3. If you are undergoing white box testing, share relevant accounts and access controls with the team. 4. Authorize IP addresses for the automated scanners that third-party pentest teams will be using. 5. Discuss the timeline, pentest methodology, and reporting guidelines before engaging. What are the Top 3 Penetration Testing Tools? Automated penetration testing tools are among the best ways to keep your company safe and secure. Here is a list of the top 3 penetration testing tools organizations use worldwide. 1. Astra's Vulnerability Scanner Astra's Vulnerability Scanner offers more than 3000 tests that can test your application thoroughly. The test cases are based on OWASP Top 10, CWE Top 25, SANS Top 25, NIST 800-53, PCI DSS, HIPAA Security Rule, FISMA, GLBA, ISO 27001, etc. 2. Vega Vulnerability Scanner Vega Vulnerability Scanner is a free and open-source web security scanner and web security testing platform for testing the security of web applications, developed by the security company Subgraph. 3. OWASP ZAP OWASP ZAP is an open-source security tool for finding vulnerabilities in your web applications. It is designed to be used by people with a wide range of security experience and is ideal for developers and functional testers who are new to penetration testing. Why is Astra a trusted pentest provider? Astra is a pentest provider with a team of highly skilled professionals who help organizations find vulnerabilities in their web apps and security systems.
Our team of ethical hackers and penetration testers serves organizations of all shapes and sizes. Astra has the experience and the technical knowledge to proactively protect our clients' assets. Our assessment includes a broad range of tests and services that cover the complete security chain, from web application attacks and web services to networks. Astra is one of the only penetration testing companies to provide a complete security audit with more than 3000 tests at a very pocket-friendly price. Our penetration testing services are based on the latest industry standards, and we use the best tools and methods (including our proprietary tools and methodologies) on the market. Exploitable vulnerabilities are one of the most significant risks to any organization. Hackers can exploit an unsecured server, weak application passwords, unvalidated session IDs, and other security vulnerabilities to gain access to your systems and steal sensitive information, which is why performing regular pentests is essential. Astra's Pentest Suite is a go-to security solution for all your security needs.
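As a small illustration of the kind of automated probing that the scanners discussed in this FAQ perform, here is a minimal TCP connect scan in Python. The host, port, and function name are invented for the sketch, the demo targets only a listener it creates itself, and such scans should only ever be run against systems you are authorized to test.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    # A bare-bones TCP connect scan: a port is reported open if a full
    # TCP connection to it succeeds. Real pentest scanners do far more
    # (service fingerprinting, timing control, UDP probes, etc.).
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means connected
                open_ports.append(port)
    return open_ports

# Demo against a listener we control, so no third-party system is touched.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
listener.listen(1)
demo_port = listener.getsockname()[1]

found = scan_ports("127.0.0.1", [demo_port])
print(found)  # contains demo_port, since our listener is accepting
listener.close()
```

This is also why pentest providers ask you to authorize their scanner IP addresses in advance: from the network's point of view, a scan looks like a burst of connection attempts.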
What Is Low-Code Development? Low-code development is a form of software development that offers an easy-to-use front-end interface that makes it possible to plot a series of commands, often using underlying application programming interfaces (APIs) to connect data among multiple tools, as well as tying into existing infrastructure, whether in the cloud or onsite. The concept of low-code development, a term coined by Forrester in a 2014 report, is beneficial in many ways. First, it opens development up to broader categories of people, including those without a traditional background in programming. Second, it allows for a faster development process, making it possible to do more in less time. “The ability to easily build applications won’t make companies devote fewer resources toward the initiative,” says Seth Robinson, CompTIA’s senior director for technology analysis. “They will actually want to devote more resources so that they can leverage all the potential.” Low-code development can take a lot of forms; for instance, it can be used as a method of application development, but also as a way to manage servers and other pieces of digital infrastructure, extending concepts such as infrastructure as code. What Are the Best Low-Code Solutions for Banking? A number of low-code applications and platforms exist for different use cases, offering benefits for several industries, including banking. Some of the most common tools for low-code include: - Power Apps by Microsoft: Because it’s plugged into Microsoft’s ecosystem of apps, including the Power BI analytics platform, Power Apps has become popular as a tool that supports quick development of back-end business applications. Its ability to integrate with Microsoft Azure is also seen as a benefit. - App Engine by ServiceNow: Originally rooted in IT services but evolving into a more all-purpose low-code tool, App Engine has sparked interest among some highly regulated government agencies.
How Can Low-Code Development Improve Banks’ Digital Offerings? For the financial industry, low-code could allow a fresh influx of agility in areas that haven’t been quite so easy to serve. With many banks and their underpinning financial systems still reliant on legacy technology, such as COBOL, a 1950s-era programming language, banks may find themselves unable to move as quickly as they’d like when other industries are well positioned to respond swiftly to changes in the consumer climate. In a recent blog post, Lucy Brown, business applications industry lead for financial services with Microsoft, notes that financial institutions could find significant opportunity in leaning away from legacy technology where possible. “On average, financial institutions can spend up to 70 percent of their IT budgets on maintaining and servicing core systems and legacy applications,” Brown writes. “But by allocating resources this way, organizations reduce their time and ability to consider new innovations. At the same time, they risk delivering a poor customer experience with outdated systems.” She recommends embracing modern interfaces, including on mobile devices, that “seamlessly integrate with any existing business applications, workflows and processes,” which can help legacy infrastructure feel newer. How Can Low-Code Development Minimize IT Backlogs? The true benefit of low-code development for banks lies in how much it can do and how quickly it can do it. Given the complexity of most financial institutions, traditional programmers might need years to optimize platforms for modern operations. With 88 percent of IT departments saying workloads have increased, according to a Salesforce study, low-code’s arrival is not a moment too soon. By expanding its reach to more staff, low-code development allows for faster turnaround, which could allow IT departments to make short work of initiatives that otherwise might not be a priority. 
A recent Forbes article noted that banks are applying technology from companies such as Salesforce to transform manual processes that typically take weeks to complete into operations that can be done in just minutes; for example, researching a client before a meeting, which might require data from dozens of legacy systems. Low-code is also good for managing the occasional curveball, such as the need to change business processes to handle a singular event like the Paycheck Protection Program. According to SiliconANGLE, the critical need for the program, and the speed at which it was launched, required banks to complete in weeks what they might have taken months or years to do normally. “The only way a company could build such an app so quickly was by leveraging low-code functions — which enable people to create applications through graphical user interfaces instead of traditional hand-coded programming — in the underlying platform,” writer Jason Bloomberg explains.
Digital Twin Technology: Where Are We Now? Digital twin technology as it applies to IIoT is still in its early stages, but the fiercely competitive nature of the industries most suited to reap its benefits continues to drive innovation, analysts said. The high-stakes oil and gas, aerospace and defense, transportation and manufacturing sectors likewise can make the types of investments in digital modeling and analytics that are relatively inexpensive compared to the cost of developing a refinery or new jet engine. “It’s imperative to know what’s going on, when something is going to break or is performing improperly,” said Ian Hughes, Senior Analyst, Internet of Things, for 451 Research. “Having more accuracy to then put it all together and do more interesting processes with the data, with machine learning and AI, to get a few extra percent improvement … it’s a no-brainer.” The digital twin, which Gartner defines as “a digital representation of a real-world entity or system,” differs from a 3D model or a save-as version of a system architecture that you can sandbox. Instead, a digital twin collects real-time or near-real-time data from IoT sensors, and once it has that data, it analyzes the information to improve products and systems. Whereas a 3D model captures an object’s geometry, a digital twin incorporates data from connected sensors and potentially physics engines as well. A digital twin can be used to demonstrate how discrete components in a system interact with one another. As of the first quarter of 2019, according to Gartner research, 24% of organizations with IoT technologies in production or IoT projects in progress use digital twins. Another 42% plan to use digital twin technology within the next three years.
Their popularity, again according to Gartner, lies in the digital twin capabilities that “significantly decrease the complexity of IoT ecosystems while increasing efficiency.” Beyond physical meters and dials by which manufacturing information is traditionally gleaned, the digital twin approach can “gather data from sensors, process the data using analytics, and use it for predictive maintenance and optimization,” said Scott Raynovich, principal analyst, Futuriom. Among practical industry scenarios, the most common are wind turbines, energy rigs and aircraft engines. And as IoT technology proliferates, Raynovich said, the digital twin approach will become more common across industries to provide insight into existing operations and “things,” and envision new opportunities and model “new things.” In principle — but in limited practice — digital twin technology can incorporate data from an array of systems, said Alexandra Rehak, practice leader, IoT, with Ovum Consulting. “Digital twin is far from a mass market technology at this point,” she said. “It’s complex, not something off the shelf.” To be sure, Rehak added, the cost-benefit analysis of deploying a digital twin is an important consideration for enterprises. Most early use cases are in automotive design and manufacturing, she said, where you have an intricate manufacturing process and a complex product that is constantly redesigned to include new features and technology.
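The core loop described above (mirror streamed sensor readings into a model, then run analytics over them for predictive maintenance) can be sketched in a few lines of Python. The turbine-bearing field name and the temperature threshold are assumptions invented for this sketch, not values from any real deployment.

```python
from statistics import mean

class TurbineTwin:
    # A toy digital twin of a wind-turbine bearing. The sensor field name
    # and the 80 C threshold are assumptions made for this illustration.
    TEMP_LIMIT_C = 80.0

    def __init__(self):
        self.temps = []

    def ingest(self, reading):
        # Mirror each streamed sensor reading into the twin's state.
        self.temps.append(reading["bearing_temp_c"])

    def needs_maintenance(self, window=5):
        # Analytics over the mirrored history, not just the latest dial value.
        recent = self.temps[-window:]
        return bool(recent) and mean(recent) > self.TEMP_LIMIT_C

twin = TurbineTwin()
for temp in (78.0, 81.0, 84.0, 86.0, 88.0):  # simulated sensor stream
    twin.ingest({"bearing_temp_c": temp})

print(twin.needs_maintenance())  # True: the rolling average exceeds the limit
```

The point of the sketch is the difference from a static 3D model: the twin's state is continuously fed by the physical asset, so questions like "is this bearing trending toward failure?" are answered from live data.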
Remote Code Execution (RCE) allows attackers to execute malicious code on systems and devices, regardless of their location. Here’s what you need to know about this attack, how it works, and how to prevent it. Also known as remote code evaluation, RCE is part of the broader group of arbitrary code execution (ACE) attacks. It is a way to remotely inject and execute code in a target machine or system using the internet, local (LAN), or wide area networks (WAN). This code can gain access to a web server or application, completely control or compromise it, open backdoors, seize, modify or destroy data, install ransomware, etc. This attack exploits the possibility of executable code being injected into a string or file and executed or evaluated. This can be due to user input not being validated and allowed to pass through the parser of the programming language – a situation often not intended by developers. Injected code is usually in the programming language of the targeted application. Such languages may include PHP, Java, Python, Ruby, etc. Depending on the flaw that attackers exploit, they will typically acquire the privileges granted by the process they are targeting upon executing the code. For example, if attackers inject code as a user, they will seize user privileges. For this reason, RCE is frequently followed by attempts to escalate privileges and gain control at an administrative or root level. Unfortunately, such elevated privilege also allows attackers to hide the attack more skillfully. Yet even without elevated privileges, a remote code execution vulnerability has the potential to cause serious harm. Other examples of remote code execution vulnerabilities include CVE-2021-21972. Impact of the RCE attack The impact of a remote code execution attack can vary between simply gaining access to an application and entirely taking it over.
Some of the main types of results of an RCE attack include: - Access to an application or server: initially, attackers gain access to functions in the vulnerable application due to a vulnerability. They can use this to inject and run underlying web server commands. - Privilege escalation: gaining access to the web server and executing commands means that attackers may be able to achieve greater privileges and control over the server. This is particularly dangerous if internal vulnerabilities at the server level are present. - Access to data: using an RCE vulnerability, attackers can gain access to and steal data stored on the server. - Denial of service: attackers can disrupt and crash the whole service and other services hosted on the server by executing specific commands. - Ransomware and cryptomining: installing various types of malware is also a possibility if code can be executed on the server. Such malware could be crypto mining or cryptojacking software that uses the server’s resources to mine cryptocurrency. A remote code execution vulnerability also opens the door to ransomware and to having the whole server taken over and held hostage. These are only some of the possible impacts of an RCE attack. Depending on the case’s specifics and the presence of other vulnerabilities, hostile parties may be able to inflict further damage, making this attack highly dangerous. How does remote code execution work? There are different ways of performing a remote code execution because it can target different layers of a server. A common way of executing an RCE is through injecting code and gaining control over the instruction pointer, which allows an attacker to point execution toward the next instruction/process. Code can be injected in different ways and places, but once the code is in place, the attacker must “point” execution toward it for it to run. The code itself may be in the form of a command, a script, or something else.
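The unvalidated-input path described above can be sketched in Python. The "calculator" functions here are hypothetical, but the pattern (untrusted input handed straight to the language's own evaluator) is the classic injection flaw:

```python
import ast

def calculate_unsafe(expression):
    # Anti-pattern: untrusted input reaches the language parser directly.
    # An input like "__import__('os').system('...')" would run as code.
    return eval(expression)

def calculate_safe(expression):
    # ast.literal_eval accepts only Python literals, so injected
    # function calls raise an error instead of executing.
    return ast.literal_eval(expression)

print(calculate_unsafe("2 + 3"))    # 5, but arbitrary code would run too
print(calculate_safe("[1, 2, 3]"))  # literals are fine
try:
    calculate_safe("__import__('os').getcwd()")
except ValueError:
    print("injection attempt rejected")
```

The same shape recurs in every language on the list above: PHP's `eval`, Java expression evaluators, and template engines all become RCE vectors when user input reaches them unvalidated.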
The above, in essence, is how an RCE attack is performed. The type of attack, its attack vector, and how precisely it is executed may differ. For the most common RCE attack types, see below! Remote code execution attack types These are the most common remote code execution attacks and their basic scenarios. Type confusion This type of vulnerability arises when an object, resource, or variable is allocated using one type but is then accessed using another, different from the initial one. This results in bugs and logical errors due to the mismatch in the type and properties of the accessed resource. Attackers exploit this vulnerability by including code in an object allocated with one pointer but read with another. The result is that the second pointer triggers the injected code. For example, an SQL injection can be due to type confusion in some cases. Insecure deserialization To transfer pieces of data over a network, they are serialized – i.e., converted into binary. Then, they are deserialized and converted back into objects to be used at their destination. By formatting user input in a particular way, attackers can create an object that turns into executable dynamic code once it is deserialized. Buffer overflow and buffer over-read Buffer overflow is also known as out-of-bounds write and buffer overrun. Together with buffer over-read, it refers to memory safety vulnerabilities related to the memory partition known as a buffer. The purpose of a buffer is to temporarily store data while it is moved from one place to another. An application frequently uses buffer memory for data storage, including user-provided data. With buffer overflow, an attacker will rely on flawed memory allocation due to, for example, the lack of bounds-checking measures. This can result in data being written outside the buffer’s bounds and overwriting memory in adjacent buffer partitions.
Such overwriting can damage or destroy important data, cause a crash, or lead to an RCE triggered through the use of an instruction pointer security vulnerability. This can be tied to the buffer over-read scenario, in which the reading of data goes beyond a buffer’s boundaries and into memory stored in adjacent partitions. RCE attack examples Some of the most significant and most dangerous vulnerabilities, and the attacks they have enabled, have involved RCE. Log4j RCE vulnerability Log4Shell (CVE-2021-44228) is a remote code execution vulnerability in Log4j, a popular Java logging framework estimated to affect millions of devices worldwide. It has been called “the single biggest, most critical vulnerability ever.” While it had existed since 2013, it became known in November 2021 and was publicly disclosed in December of that year. The vulnerability allows attackers to execute arbitrary Java code on servers, opening the door for crypto mining, creating botnets, and injecting ransomware. WannaCry is a ransomware cryptoworm attack that used an RCE exploit called EternalBlue, which, in turn, allowed it to deploy the DoublePulsar tool to install and execute itself. The attack targeted Microsoft Windows systems. After installation, the worm encrypts data, and attackers demand ransom. EternalBlue targeted a security vulnerability in Microsoft’s Server Message Block (SMB) protocol. This vulnerability allowed attackers to inject and remotely execute code. These are only two of the most famous RCE vulnerabilities and the attacks they enabled. The Common Vulnerabilities and Exposures (CVE) system regularly lists new entries of vulnerabilities that carry the potential for an RCE attack. How to detect and prevent remote code execution RCE attacks constitute a severe threat because they can involve a host of approaches and exploit many different vulnerabilities. Moreover, new vulnerabilities constantly appear, making it challenging to prepare thoroughly.
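The insecure-deserialization pattern described earlier can be demonstrated with Python's pickle module. The payload below makes a harmless os.getcwd() call, standing in for the hostile command a real attacker would embed:

```python
import os
import pickle

class Payload:
    # pickle calls whatever __reduce__ returns while *loading* the data.
    # A real attacker would return (os.system, ("malicious command",)).
    def __reduce__(self):
        return (os.getcwd, ())

malicious_bytes = pickle.dumps(Payload())

# The victim believes it is merely loading data, yet the embedded
# call executes during deserialization:
result = pickle.loads(malicious_bytes)
print(result)  # the current working directory: proof the call ran
```

This is why deserializing untrusted input with formats like pickle, or with Java's native serialization, is considered an RCE vector in itself: the act of loading the data is enough to run the attacker's code.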
However, there are several measures that you can take to both detect and prevent RCE attacks. Regular security updates Organizations frequently fail to act on the latest threat intelligence and apply patches and updates on time. Attackers will, therefore, usually attempt to target old vulnerabilities as well. Implementing security updates to your system and software as soon as they are made available is highly effective in deterring many attackers. Monitor network traffic and endpoints to spot suspicious content and block exploitation attempts. This can be done by implementing some form of network security solution or threat detection software. Input sanitization and access control Adopt a zero-trust approach and always sanitize user input before using it. A thorough process of sanitization includes whitelists, blacklists, and escape sanitization. This will help filter out many code injection and deserialization attempts. In addition, network segmentation can help limit the impact of any code that does get through. Also, if possible, avoid using code evaluation functions and do not allow users to edit any content that has been parsed. Implement buffer overflow protection and other forms of memory management to avoid giving rise to vulnerabilities that are easy to exploit. Such protection will, for example, terminate the execution of a program when a buffer overflows, effectively disabling the possibility for malicious code to be executed. Bounds checking and tagging are other protection techniques that can be implemented to stop buffer overflow. What is a remote code execution attack? An RCE attack consists of injecting and executing malicious code in a system through the use of a vulnerability on some level of the system. This allows attackers to perform commands in the system, steal, damage, encrypt data, and more. How dangerous is an RCE attack? The impact of an RCE attack can be significant, depending on the vulnerability it has exploited. 
At a minimum, successful remote code execution will grant access to a system and its data. At a maximum, it can lead to a complete system compromise and takeover.
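A minimal version of the whitelist sanitization recommended above might look like this in Python. The allowed character set and length limit are assumptions for the sketch; in practice they should be tightened to whatever a given field actually needs:

```python
import re

# Whitelist: accept only short identifiers made of letters, digits,
# underscore, and hyphen. Anything else, including code-like input,
# is rejected outright before it can reach a parser or evaluator.
ALLOWED = re.compile(r"[A-Za-z0-9_-]{1,64}")

def sanitize(value):
    if not ALLOWED.fullmatch(value):
        raise ValueError("input rejected by whitelist")
    return value

print(sanitize("report-2023"))  # passes the whitelist unchanged
try:
    sanitize("__import__('os').system('id')")
except ValueError as error:
    print(error)  # injection-style input never reaches the application
```

Whitelisting is generally preferred over blacklisting for exactly the reason the example shows: instead of enumerating every dangerous character an attacker might use, it enumerates the small set of inputs the application actually expects.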
Planned Obsolescence and the Modern Product Life Cycle Planned obsolescence (otherwise known as designed obsolescence) is a hotly contested topic, and the term generates different responses. What is Planned Obsolescence and Why do Manufacturers Practice it? If you haven’t heard of the concept, planned obsolescence is the business strategy of producing consumer goods that become obsolete on a schedule, therefore requiring replacement. Manufacturers orchestrate this process by continually updating their design and feature offerings, discontinuing the supply of available spare parts, and creating their products from nondurable materials. Although the concept of a product that lasts for years is generally appealing to the consumer, the bottom line is that manufacturers don’t want their products to last in perpetuity; it wouldn’t be profitable. If you’re a brand-loyal customer, you’ll upgrade to the next model when it debuts. And there’s a certain cachet associated with getting your hands on the latest and greatest model before anyone else. That explains the lines around the Apple store on the eve of a new product launch. Computer manufacturers and car manufacturers follow similar methodologies. Some believe that planned or designed obsolescence is a sneaky practice intended to keep us buying. From a manufacturer’s and marketer’s perspective, planned obsolescence is about an ongoing commitment to innovation – consumers expect innovation. What is The Purpose of Planned Obsolescence? An article published in The Economist in 2009 (“Planned Obsolescence”) sums up the philosophy while describing tactics used by the computer industry: “New software is often carefully calculated to reduce the value to consumers of the previous version. This is achieved by making programs upwardly compatible only; in other words, the new versions can read all the files of the old versions, but not the other way round.
Someone holding the old version can communicate only with others using the old version. It is as if every generation of children came into the world speaking a completely different language from their parents. While they could understand their parents' language, their parents could not understand theirs."

Modern Technology and Product Lifecycle Management

The global electronics industry has grown dramatically. According to a white paper entitled "Electronic Part Life Cycle Concepts and Obsolescence Forecasting" (IEEE Trans. on Components and Packaging Technologies), the U.S. electronics industry grew at a rate of three times the overall national economy between 1990 and 2000. An interesting phenomenon has taken place within the industry: "many of the electronic parts that compose a product have a life cycle that is significantly shorter than the life cycle of the product." That mismatch, in turn, requires manufacturers to stay up to date on the parts that will continue to be available and those parts that will become obsolete during the manufacturing run. At the same time, companies contemplating new technology must make a series of essential determinations:

- How much should they invest in new technology, knowing it'll become obsolete?
- How much should they budget for the coming year for parts and products that they anticipate will become obsolete?

Regardless of the decisions they ultimately make, companies must stay on top of obsolescence management to keep the business moving efficiently.

Planning and Budgeting for Designed Obsolescence

Keeping track of all of those moving parts can be extremely difficult. Asset Panda dramatically simplifies the process by giving clients a powerful yet easy-to-use mobile platform that syncs with the cloud. With Asset Panda, clients have access to key dates and details in real time, accessible 24 hours a day, seven days a week.
With better and more efficient planning, companies can better budget for the coming year and avoid expensive surprises. Asset Panda's asset tracking software comes with free mobile iOS and Android apps, including a mobile barcode scanner. There's no need for any additional hardware; all you need is the smartphone or mobile device you already carry. Upload any asset, then add pertinent details, including supporting documents, photos, videos, and voice notes. From the palm of your hand, you can pull up an asset's:

- Complete maintenance history
- Exact location
- Check-in/check-out status
- The identity of the person who currently has it
- Its warranty and insurance information
- Lease/purchase information, among many other details

You're entitled to unlimited custom notifications, including alerts, to stay on top of critical dates surrounding obsolescence. You can add as many users as you'd like for no additional cost.

Learn How Asset Panda can Help With Planned Obsolescence

Asset Panda allows you to track your assets any way you want and brings everyone into the communication loop to enhance accountability and accuracy throughout your organization. Curb theft and loss while you save time, money, and significant frustration by centralizing all of your asset-related data. Asset Panda is also incredibly intuitive and user-friendly, with a streamlined interface and completely customizable features. We adapt to your unique needs now and well into the future; the app is flexible enough to incorporate new technologies as they become available. If you're ready to make better budgeting decisions and be a better steward of your resources, it's time to give Asset Panda a look. Give Asset Panda's asset management software a try with a free 14-day trial (no credit card required)! You'll receive full access to user guides, video tutorials, free mobile apps, and call-in and live chat support from our friendly Asset Panda support team.
REST (Representational State Transfer) APIs, sometimes also referred to as RESTful APIs, are an increasingly popular design style. REST APIs are built to take advantage of pre-existing protocols within an environment, most commonly HTTP for a Web API. The REST API design is more lightweight and is known for its vast flexibility in enabling modern business connectivity.

What is a REST API?

The easiest way to arrive at a REST API definition is to first understand what an API is. API stands for application programming interface; an API brings applications together to perform a designed function built around sharing data and executing pre-defined processes. Basically, it allows one piece of software to talk to another. REST APIs are one of the two most commonly deployed forms of API, the other being the Simple Object Access Protocol (SOAP) API. REST is a client-server architecture that is based on a request/response design. REST APIs have become increasingly popular as part of a Web Services approach. Developers use RESTful APIs to perform requests and receive responses through HTTP functions.

How Does a REST API Work?

A REST API works essentially the same way that any website does. A call is made from a client to a server, and data is received back over the HTTP protocol. Facebook's Graph API is an easy way to show the similarities between a REST API call and the loading of a webpage. Say someone wanted to pull up the Facebook page for YouTube, for example. That person would enter the URL as normal, www.facebook.com/youtube. If that person was a developer, instead of "www," he or she would enter "graph.facebook.com/youtube" and would receive a response to the API request that was made through their browser. The result of the request would show structured data, organized according to key-value parameters.
Sticking with the YouTube page as an example, that structured data would include how many likes the page had, how many accounts were following the page, and so on. Another important concept in the world of REST APIs is parameters. When sending a REST API request, someone can narrow the search request. Parameters modify a request with key-value pairs and filter the data that is received in a response. REST parameters specify a variable part of a resource, which is the data that is being worked with. The different types of API parameters include path, query (the most common kind), header, and cookie. Path parameters are a variable part of a URL path and point to a specific resource within the data. Query parameters are located at the very end of a URL path and can be either required or optional. Header parameters are added as part of the HTTP header of an API request. Cookie parameters are needed when a REST client must authenticate itself using cookies.

The Anatomy of a Request

Anytime a call is made to a server using a REST API, that is considered an API integration request. And while the outcome happens quickly and quite simply, there is actually a lot that goes into making that request. The endpoint of a REST API is a unique URL that represents an object or group of objects of data. Each API request has its own endpoint, which is what the HTTP client is directed at in order to interact with data resources. HTTP methods (which will be explained in further detail below) are an integral part of a RESTful API request. These methods – GET, POST, PUT, PATCH, and DELETE – correspond to creating, reading, updating, and deleting resources. REST headers contain metadata associated with every single REST API request. A REST header indicates the format of the request and response, and provides information about the status of the request.
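The pieces described above (endpoint, query parameters, headers) can be sketched with the Python standard library. This is an illustrative sketch only: the host comes from the article's Graph API example, and the `fields` values are made-up field names, not a documented Graph API call.

```python
from urllib.parse import urlencode, urljoin

BASE = "https://graph.facebook.com"  # host from the article's example

def build_request(path, params=None, headers=None):
    """Assemble the parts of a REST request: endpoint URL, query parameters, headers."""
    url = urljoin(BASE, path)
    if params:
        # query parameters: key-value pairs appended at the end of the URL path
        url += "?" + urlencode(params)
    return {"method": "GET", "url": url, "headers": headers or {}}

req = build_request(
    "/youtube",
    params={"fields": "fan_count,followers_count"},   # hypothetical field names
    headers={"Accept": "application/json"},           # a typical REST header
)
```

Note that `urlencode` percent-encodes the values (the comma becomes `%2C`), which is why query strings should be built with a helper rather than by hand.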
A REST API request also consists of data (also referred to as a "body") that usually accompanies the POST, PUT, and PATCH HTTP commands and contains the representation of the resource that will be created or modified.

Guiding Principles of REST

Representational State Transfer is known for its simplicity and communicates via HTTP protocols. REST is a client-server architecture, where the server and the client act independently of one another as long as the interface stays the same when processing a request and a response. The server exposes the REST API, and the client makes use of it. The server stores the information and makes it available to the user, while the client takes the information and displays it to the user or uses it to perform subsequent requests for more information. REST is designed to be stateless, meaning that anytime a client and server communicate, the request always includes all the information needed to perform it. Stateless also means that there is no session state on the server; any session state lives squarely on the client's side. If authentication is necessary, the client must authenticate itself every time it performs a request. REST is also cacheable, which means that the client, the server, and any connected intermediary component are all able to cache resources in order to improve performance. In a REST architecture, a layered system is a grouping of layers, with each layer having a designed function that it needs to perform. While the layers have their own responsibilities, they also interact with one another, and by doing so they create a hierarchy within the REST API architecture. A uniform interface within a REST API architecture is designed to allow the client to talk to the server in one specific language. This allows the application to evolve independently of its services, models, or any other function joined to the API itself.
Code on Demand

Unlike the other guiding principles of REST, code on demand is optional in a REST architecture, not mandatory. Code on demand allows code and applets to be transmitted through the API for use within the application. In its simplest form, code on demand makes clients more flexible because the server makes the final determination of how things will be done.

REST API Examples

REST APIs have grown increasingly popular recently as part of a Web Services approach. You might not even realize it, but many of the popular websites that you use today are in fact built using REST APIs. Some of the most common examples of REST APIs in use include Instagram, PayPal, Gmail, and Twitter. From a developer perspective, the GitHub REST API, Google Developers Maps APIs, and the Twilio REST API are popular APIs.

HTTP Request Methods

As outlined above, REST APIs are designed to perform requests and receive responses via HTTP functions. These are the five HTTP commands that REST is based on. The GET request is a nullipotent command that safely retrieves information; no matter how many times it repeats with identical parameters, the results will always be the same. The POST request asks the origin server to accept the entity enclosed in the request as a new subordinate of the resource. It can also update an existing entity.

PUT and PATCH Requests

The PUT request is an idempotent command that can create, update, or replace an entity. The PATCH request applies a partial update to an existing entity, modifying only the fields it specifies. The DELETE request is an idempotent command that requests that a resource be removed. One important detail to note is that the resource does not have to be removed immediately; the removal could also be asynchronous or a long-running request. There are a few approaches when it comes to REST API authentication. It's important to note that almost every REST API must have at least some form of authentication.
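The method semantics above (safety, idempotency, partial vs. full update) can be demonstrated with an in-memory sketch. This is not a real HTTP server, just an illustration of how the five verbs are expected to behave against a resource collection:

```python
import copy
import itertools

class ResourceStore:
    """In-memory sketch of REST method semantics (illustrative, not an HTTP server)."""

    def __init__(self):
        self._items = {}
        self._ids = itertools.count(1)

    def get(self, rid):
        # GET is safe: it never changes server state
        return copy.deepcopy(self._items.get(rid))

    def post(self, data):
        # POST is not idempotent: every call creates a new subordinate resource
        rid = next(self._ids)
        self._items[rid] = dict(data)
        return rid

    def put(self, rid, data):
        # PUT is idempotent: it fully replaces the entity
        self._items[rid] = dict(data)

    def patch(self, rid, changes):
        # PATCH applies a partial update, touching only the given fields
        self._items[rid].update(changes)

    def delete(self, rid):
        # DELETE is idempotent: deleting twice leaves the same end state
        self._items.pop(rid, None)

store = ResourceStore()
a = store.post({"name": "widget"})
b = store.post({"name": "widget"})   # second POST creates a second resource
store.put(a, {"name": "gadget"})     # full replace
store.patch(a, {"color": "red"})     # partial update
store.delete(b)
store.delete(b)                      # no error: repeat DELETE is a no-op
```

Calling `post` twice with identical data yields two distinct resources, while repeating `put` or `delete` leaves the store unchanged, which is exactly the idempotency distinction described above.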
Authentication verifies the credentials of a connection attempt between the sender and receiver. HTTP basic authentication is the simplest way to authenticate a REST API. In this scenario, an HTTP user agent provides a username and a password to verify its identity. Another option for REST API authentication is using API keys. In this approach, a unique generated value is assigned to every first-time user; every additional time that user tries to enter the system, the unique key is presented again to prove that they are in fact the same user as before. The other method is actually a combination of authentication and authorization, called OAuth. In this scenario, a user logs into a system, and the system requests authentication, normally in the form of a token. Next, the user's request is forwarded to an authentication server that either allows or rejects the authentication. From there, the token goes back to the user and then to the requester. The token can be checked by the requester at any time, and it carries a specific scope and longevity.

How to Test a REST API

Step #1 – Enter the URL of the API in the textbox of the tool.
Step #2 – Select the HTTP method used for this API (GET, POST, PATCH, etc.).
Step #3 – Enter any headers if they are required in the Headers textbox.
Step #4 – Pass the request body of the API as key-value pairs.
Step #5 – Enter the required content type (such as application/json).
Step #6 – Click the send button.

After clicking Send, there will be various responses to the REST API, which detail whether the API test was a success or failure. It's important to note the response code, response message, and response body. APIs are critical in spanning technical and business boundaries to deliver data, capabilities, and services wherever (and whenever) they're needed, but the design of APIs has shifted to the more lightweight and flexible varieties that are suited for mobile applications and geo-distributed networks.
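As a small illustration of the HTTP basic authentication described above, the `Authorization` header is just the base64-encoded `username:password` pair (the credentials here are made up). Note that base64 is an encoding, not encryption, which is why basic auth should only be used over HTTPS:

```python
import base64

def basic_auth_header(username, password):
    """Build the Authorization header used by HTTP Basic authentication."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return {"Authorization": f"Basic {token}"}

hdr = basic_auth_header("alice", "s3cret")  # hypothetical credentials
```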
Because of this approach, REST APIs continue to grow in popularity for mobile apps, social networking sites, and a variety of other offerings. Thousands of enterprises use REST APIs to generate business and grow their services, and REST API adoption will continue as one of the most efficient ways to enable the next generation of business applications.

Integrate with Cleo APIs

Cleo Integration Cloud combines service and technology to provide the most flexible and frictionless way to exchange B2B data. A cutting-edge, ecosystem-driven cloud integration platform focused on creating value at the edges of business networks, Cleo Integration Cloud offers an extensive set of REST APIs, adapters, and connectors for organizations looking to integrate cloud applications and cloud services and extend the power of their integration infrastructure. See a demo today.
When Should I Use an XACML Condition?

Targets are an easy way to define the scope of an authorization policy. Targets can be used in all three XACML structural elements (policy set, policy, or rule). Targets always follow an AND / OR / AND structure. For instance, with a target, it is simple to implement citizenship == Norwegian AND department == Sales OR citizenship == Canadian. With targets, you can only use simple comparison functions that compare an attribute to a single value, e.g. citizenship == Norwegian. Conditions are more powerful in the sense that they do not have a pre-set AND / OR / AND structure. They can also use any XACML function, so long as the final outcome of the condition is a boolean value. For instance, you could implement the following using a condition:

- Permit if the user's age is an even number

The mod() function cannot be used in a target, so you have to use a condition. Because conditions are so much more powerful, they require more care, and it is easier to make mistakes with them.

When Should I Use a XACML Condition?

There are a few use cases where conditions are important. The main driver for using conditions is the ability to implement relationships; relationships are in fact one of the main drivers for XACML and ABAC.

- Indirect relationship: user.department == doc.department
- Direct relationship: user.id == doc.owner

Most reasons for using a condition will be due to a relationship.

The Negative Use Case

If you want to have a policy that says "Permit if the user is not Canadian", then you would need to implement citizenship != 'Canadian'. However, XACML does not have a 'not equal' function. To achieve this functionality, you need to use a condition containing not(citizenship=='Canadian').

The Whitelisting / Blacklisting

Sometimes, you want to check whether a user is on a list, e.g. a VIP list or a banned-user list. To do so, you also need to compare two attributes.
This is a special form of a relationship, this time using stringIsIn:

- stringIsIn(stringOneAndOnly(user.uid), vipList)

Handling Messy Data

Sometimes the attribute values we get back are not quite clean and require some massaging, for example lower-casing them or prepending / appending data. XACML provides functions for some level of data normalization, and these functions can be used inside conditions. Conditions make XACML very flexible, and they enable many possibilities. Can you think of one yourself? Tweet us to discuss @Axiomatics. With conditions, XACML policies become extremely expressive and can cater to many authorization scenarios.
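To make the target-versus-condition distinction concrete, here is the logic of the examples above sketched in Python. This is purely illustrative: real XACML targets and conditions are expressed in XML, and the attribute names mirror the article's examples rather than any particular schema.

```python
def target_matches(user):
    # Targets: simple attribute-to-constant comparisons in an AND / OR / AND shape
    return (user["citizenship"] == "Norwegian" and user["department"] == "Sales") \
        or user["citizenship"] == "Canadian"

def condition_holds(user, doc, vip_list):
    # Conditions: arbitrary boolean expressions, including things targets cannot do
    return (user["department"] == doc["department"]   # indirect relationship
            and user["citizenship"] != "Canadian"     # the negative use case
            and user["uid"] in vip_list               # whitelisting (stringIsIn)
            and user["age"] % 2 == 0)                 # mod(): impossible in a target

request = {"citizenship": "Norwegian", "department": "Sales", "uid": "alice", "age": 32}
doc = {"department": "Sales"}
permitted = target_matches(request) and condition_holds(request, doc, vip_list=["alice"])
```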
Manufacturers, to keep up with the latest changes in technology, need to explore one of the most critical elements driving factories forward into the future: machine learning. Let's talk about the most important applications and innovations that ML technology is providing in 2022.

Machine Learning vs AI: What's the Difference?

Machine learning is a subfield of artificial intelligence, but not all AI technologies count as machine learning. There are various other types of AI that play a role in many industries, such as robotics, natural language processing, and computer vision. If you're curious about how these technologies affect the manufacturing industry, check out our review below. Basically, machine learning algorithms utilize training data to power an algorithm that allows the software to solve a problem. This data may come from real-time IoT sensors on a factory floor, or it may come from other sources. Machine learning includes a variety of methods, such as neural networks and deep learning. Neural networks imitate biological neurons to discover patterns in a dataset and solve problems. Deep learning utilizes multiple layers of neural networks, where the first layer takes raw data as input and passes processed information from one layer to the next.

Factory in a Box

Let's start by imagining a box with assembly robots, IoT sensors, and other automated machinery. At one end you supply the materials necessary to complete the product; at the other end, the product rolls off the assembly line. The only intervention needed for this device is routine maintenance of the equipment inside. This is the ideal future of manufacturing, and machine learning can help us understand the full picture of how to achieve it. Aside from the advanced robotics necessary for automated assembly to work, machine learning can help ensure quality assurance, NDT analysis, and localizing the causes of defects, among other things.
You can think of this factory in a box example as a way of simplifying a larger factory, but in some cases it’s quite literal. Nokia is utilizing portable manufacturing sites in the form of retrofitted shipping containers with advanced automated assembly equipment. You can use these portable containers in any location necessary, allowing manufacturers to assemble products on site instead of needing to transport the products longer distances. Using neural networks, high optical resolution cameras, and powerful GPUs, real-time video processing combined with machine learning and computer vision can complete visual inspection tasks better than humans can. This technology ensures that the factory in a box is working correctly and that unusable products are eliminated from the system. In the past, machine learning’s use in video analysis has been criticized for the quality of video used. This is because images can be blurry from frame to frame, and the inspection algorithm may be subject to more errors. With high-quality cameras and greater graphical processing power, however, neural networks can more efficiently search for defects in real-time without human intervention. Using various IoT sensors, machine learning can help test the created products without damaging them. An algorithm can search for patterns in the real-time data that correlate with a defective version of the unit, enabling the system to flag potentially unwanted products. Another way that we can detect defects in materials is through non-destructive testing. This involves measuring a material’s stability and integrity without causing damage. For example, you can use an ultrasound machine to detect anomalies like cracks in a material. The machine can measure data that humans can analyze to look for these outliers by hand. 
However, outlier detection algorithms, object detection algorithms, and segmentation algorithms can automate this process with much greater efficiency by analyzing the data for recognizable patterns that humans may not be able to see. Machine learning is also not subject to the same number of errors that humans are prone to make. One of the core tenets of machine learning's role in manufacturing is predictive maintenance. PwC reported that predictive maintenance will be one of the largest-growing machine learning technologies in manufacturing, with an increase of 38 percent in market value from 2020 to 2025. With unscheduled maintenance having the potential to deeply cut into a business's bottom line, predictive maintenance enables factories to make appropriate adjustments and corrections before machinery experiences more costly failures. We want to make sure that our factory in a box will have as much uptime as possible with the fewest delays, and predictive maintenance can make that happen. Extensive IoT sensors that record vital information about the operating conditions and status of a machine make predictive maintenance possible. This information may include humidity, temperature, and more.

ML Models Used for Predictive Maintenance

A machine learning algorithm can analyze patterns in data collected over time and reasonably predict when the machine may need maintenance. There are several approaches to achieve this goal:

- Regression models: these predict the Remaining Useful Life (RUL) of the equipment. Using historical and static data, manufacturers can see how many days are left until the machine experiences a failure.
- Classification models: these predict failures within a predefined time span.
- Anomaly detection models: these flag devices upon detecting abnormal system behavior.
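As a minimal sketch of the anomaly-detection idea above, a z-score threshold can flag sensor readings that deviate sharply from the rest. Real predictive-maintenance models are far more sophisticated; the vibration values here are made up for illustration:

```python
import statistics

def flag_anomalies(readings, threshold=3.0):
    """Return the indices of readings whose z-score exceeds the threshold."""
    mean = statistics.fmean(readings)
    stdev = statistics.pstdev(readings)
    if stdev == 0:
        return []  # a flat signal has no outliers to flag
    return [i for i, x in enumerate(readings) if abs(x - mean) / stdev > threshold]

# made-up vibration readings from an IoT sensor; index 6 is the abnormal spike
vibration = [0.51, 0.49, 0.50, 0.52, 0.48, 0.50, 2.75, 0.51]
alerts = flag_anomalies(vibration, threshold=2.0)  # → [6]
```

In practice such a rule would run over a sliding window of recent readings, and a flagged index would raise a maintenance alert rather than be inspected by hand.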
Thanks to the IoT sensors powering predictive maintenance, machine learning can analyze the patterns in the data to see which parts of the machine need to be maintained to prevent a failure. If certain patterns lead to a trend of defects, it's possible that hardware or software behaviors can be identified as causes of those defects. From here, engineers can come up with solutions to correct the system and avoid those defects in the future. This enables us to reduce the margin of error in our factory-in-a-box scenario.

Digital twins are virtual recreations of the production process based on data from IoT sensors and real-time data. They can be created as an original hypothetical representation of a system that doesn't yet exist, or they can be a recreation of an existing system. The digital twin is a sandbox for experimentation in which machine learning can be used to analyze patterns in a simulation to optimize the environment. This helps support quality assurance and predictive maintenance efforts as well. We can also use machine learning alongside digital twins for layout optimization, whether planning the layout of a new factory or optimizing an existing layout.

ML Models for Energy Consumption Forecasting

If we want to optimize every part of the factory, we also need to pay attention to the energy that it requires. The most common way to do this is to use sequential data measurements, which data scientists can analyze with machine learning algorithms powered by autoregressive models and deep neural networks.

- Autoregressive models: great for capturing trend, cyclicity, irregularity, and seasonality in power consumption. To improve accuracy, data scientists can transform raw data into features that help specify the task for prediction algorithms.
- Deep neural networks: data scientists use these to process large datasets and quickly find patterns of energy consumption.
These can be trained to automatically extract features from the input data, without the feature engineering that autoregressive models rely on.
- Neural networks for sequential data: RNNs (recurrent neural networks), LSTM (long short-term memory) / GRU (gated recurrent unit) networks, and attention-based neural networks, which use internal memory to retain information about previously inputted energy-usage data.

We've used machine learning to optimize the factory's production processes, but what about the product itself? BMW introduced the BMW iX Flow at CES 2022 with a special e-ink wrap that allows it to change the color (or, more accurately, the shade) of the car between black and white. BMW explained that "Generative design processes are implemented to ensure the segments reflect the characteristic contours of the vehicle and the resulting variations in light and shadow." Generative design is where machine learning is used to optimize the design of a product, whether it be an automobile, electronic device, toy, or other item. With data and a desired goal, machine learning can cycle through all possible arrangements to find the best design. ML algorithms can be trained to optimize a design for weight, shape, durability, cost, strength, and even aesthetic parameters. The generative design process can be based on these algorithms:

- Reinforcement learning
- Deep learning
- Genetic algorithms

Improved Supply Chain Management: Cognitive Supply Chains

Let's step away from the factory-in-a-box example for a bit and look at a broader picture of needs in manufacturing. Production is only one element. The supply chain roles from a manufacturing center are also being improved with machine learning technologies, such as logistics route optimization and warehouse inventory control. These make up a cognitive supply chain that continues to evolve in the manufacturing industry.
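Returning to energy forecasting for a moment: the simplest member of the autoregressive family mentioned earlier is an AR(1) model, x[t] = a + b * x[t-1], which can be fit by least squares with the standard library alone. This is a toy sketch (the hourly kWh readings are invented), not a production forecasting model:

```python
def fit_ar1(series):
    """Least-squares fit of an AR(1) model: x[t] = a + b * x[t-1]."""
    xs, ys = series[:-1], series[1:]          # pairs of (previous value, next value)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

def forecast(series, steps, a, b):
    """Roll the fitted model forward for the requested number of steps."""
    out, last = [], series[-1]
    for _ in range(steps):
        last = a + b * last
        out.append(last)
    return out

usage = [100, 102, 101, 103, 104, 103, 105]   # made-up hourly kWh readings
a, b = fit_ar1(usage)
next_hours = forecast(usage, 3, a, b)
```

Real power-consumption data would first be decomposed for trend and seasonality (the strengths of autoregressive models the text lists), with the residual fed to a model like this one or to a neural network.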
Warehouse Inventory Control

AI-powered logistics solutions use object detection models instead of barcode detection, thus replacing manual scanning. Computer vision systems can detect shortages and overstock. By identifying these patterns, managers can be made aware of actionable situations, and computers can even be left to take action automatically to optimize inventory storage. At MobiDev, we have researched a use case of creating a system capable of detecting objects for logistics. Read more about object detection using small datasets for automated item counting in logistics.

How much should a factory produce and ship out? This is a question that can be difficult to answer. However, with access to appropriate data, machine learning algorithms can help factories understand how much they should be making without overproducing. The future of machine learning in manufacturing depends on innovative decisions.
Have you ever wondered how your PC is identified from the billions of other devices out there? Just like every human being has specific physiological traits that make up who they are, your computer's IP address is somewhat unique to your machine and can say a lot about you. Therefore it is important that you are able to identify your own IP address when you need to.

Determining Your Own IP Address

Luckily, checking your IP address is as easy as going to the website TraceMyIP.com. Doing so will display a string of numbers that correlates to your IP address. Knowing your IP address is particularly important for your Internet browsing.

What Exactly Is an IP Address?

It might only look like a random string of numbers, but there's much more to an IP address. Gartner's IT Glossary defines an IP (Internet Protocol) address as such: A unique number assigned by an Internet authority that identifies a computer on the Internet. The number consists of four groups of numbers between 0 and 255, separated by periods (dots). For example, 126.96.36.199 is an IP address. As you might imagine, knowing how to identify IP addresses is important for a business owner who is responsible for maintaining control over a network of private, proprietary or sensitive information. Knowing how to identify an IP address allows you to see who has been accessing your network. These records are often stored in a log for you to review, and checking this log is a great way to see if there has been any suspicious activity on your network.

Who's That IP Address?

You can use an IP address to find out where a computer is coming from, like its country of origin and much more. Here are some red flags to look for in IP addresses:

- Countries with a reputation for harboring hackers.
- Your competition.
- Former employees.
- Foreign countries that your business has absolutely nothing to do with.
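The "four groups of numbers between 0 and 255" definition above can be checked programmatically. Python's standard-library ipaddress module validates an address and reports properties that matter when reviewing access logs, such as whether the address is in a private range (and therefore came from inside your own network rather than the public internet):

```python
import ipaddress

def describe_ip(text):
    """Validate a dotted-quad string and report a few log-review-relevant properties."""
    try:
        ip = ipaddress.ip_address(text)
    except ValueError:
        return {"valid": False}      # e.g. an octet outside 0-255
    return {
        "valid": True,
        "version": ip.version,
        "private": ip.is_private,    # private ranges (e.g. 192.168.0.0/16) stay inside your LAN
        "global": ip.is_global,      # publicly routable on the internet
    }

info = describe_ip("126.96.36.199")  # the example address from the text
```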
To find out information like this, you can easily copy and paste the IP address in question into a form found here: WhatIsMyIPAddress.com/ip-lookup. Granted, you can't expect too much from a free online tool - you won't get specific street names or usernames, for example - but you will still be able to find out quite a few details:

- The ISP and organisation's name.
- The IP's host name.
- The city (a best guess).
- The latitude and longitude of the location (a best guess).
- The area code for that region.
- Any known services running on that IP.

Why Bother Knowing Your IP Address?

Most hackers understand that they can be tracked down by authorities identifying their IP address, so advanced hackers will attempt to make it as difficult as possible for you to find out their identity. They do so by bouncing their signal from different IPs around the globe, making it borderline impossible to pinpoint their location. A hacker could potentially use a local IP address, but really be halfway across the world. It is for this reason that you should always be on the lookout for suspicious network activity from unrecognised IPs. If you want to optimise your network's security and your ability to respond to threats, you will want to use a comprehensive network security solution like BTA's, which is designed to monitor for suspicious network access. Also, by having BTA strategically monitor your access logs with our remote monitoring service, we can blacklist specific IPs so that they can never access your network again. In fact, a UTM solution from BTA even has the power to block entire countries where hackers spring up regularly. For more information about how we can protect your network from suspicious activity, give us a call today on 020 8875 7676.
Monthly Archives: May 2016

Before you can delete a VPC, you must terminate any instances that are running in the VPC.

In the previous step, you launched your instance into a public subnet - a subnet that has a route to an Internet gateway. However, the instance in your subnet also needs a public IP address to be able to communicate with the Internet.

When you launch an EC2 instance into a VPC, you must specify the subnet in which to launch the instance. In this case, you'll launch an instance into the public subnet of the VPC you created.

A security group acts as a virtual firewall to control the traffic for its associated instances. To use a security group, you add the inbound rules to control incoming traffic to the instance, and outbound rules to control the outgoing traffic from your instance.

In this step, you'll use the Amazon VPC wizard in the Amazon VPC console to create a VPC. The wizard performs the following steps for you:

In this exercise, you'll create a VPC and subnet, and launch a public-facing instance into your subnet.

Amazon Simple Storage Service (Amazon S3) is storage for the Internet. You can use Amazon S3 to store and retrieve any amount of data at any time, from anywhere on the web. You can accomplish these tasks using the AWS Management Console, which is a simple and intuitive web interface. This guide introduces you to…

This introduction to Amazon Simple Storage Service is intended to give you a detailed summary of this web service. After reading this section, you should have a good idea of what it offers and how it can fit in with your business. Topics: Overview of Amazon S3 and This Guide; Advantages to Amazon S3; Amazon…

Amazon S3 is cloud storage for the Internet. To upload your data (photos, videos, documents etc.), you first create a bucket in one of the AWS regions. You can then upload any number of objects to the bucket.
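The bucket-and-object model described above is essentially a two-level key-value store. The following in-memory analogue is purely illustrative - it is not the S3 API, and the bucket and key names are invented:

```python
class Bucket:
    """Toy model of an S3 bucket: object keys map to byte values."""
    def __init__(self, name: str):
        self.name = name
        self.objects = {}

    def put_object(self, key: str, body: bytes) -> None:
        self.objects[key] = body

    def get_object(self, key: str) -> bytes:
        return self.objects[key]

# Create a bucket, then upload any number of objects to it.
buckets = {}
buckets["my-photos"] = Bucket("my-photos")
buckets["my-photos"].put_object("2016/05/cat.jpg", b"\xff\xd8...")
print(buckets["my-photos"].get_object("2016/05/cat.jpg")[:2])  # b'\xff\xd8'
```

The real service adds durability, regions, access policies and versioning on top, but the put/get-by-key interaction pattern is the same.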
In terms of implementation, buckets and objects are resources, and Amazon S3 provides APIs for you to…

Amazon S3 is a simple key-value store designed to store as many objects as you want. You store these objects in one or more buckets. An object consists of the following: Key - the name that you assign to an object. You use the object key to retrieve the object. For more information, see…

By default, all Amazon S3 resources - buckets, objects, and related subresources (for example, lifecycle configuration and website configuration) - are private: only the resource owner, the AWS account that created it, can access the resource. The resource owner can optionally grant access permissions to others by writing an access policy. Amazon S3 offers access policy options broadly categorized as resource-based…

Amazon Simple Storage Service (Amazon S3) provides developers and IT teams with secure, durable, highly scalable cloud storage. Amazon S3 is easy-to-use object storage, with a simple web service interface to store and retrieve any amount of data from anywhere on the web. With Amazon S3, you pay only for the storage…

Source: docs.aws.amazon.com

Amazon Virtual Private Cloud (Amazon VPC) enables you to launch Amazon Web Services (AWS) resources into a virtual network that you've defined.

If you've already signed up for Amazon Web Services (AWS), you can start using Amazon EC2 immediately. You can open the Amazon EC2 console, click Launch Instance, and follow the steps in the launch wizard to launch your first instance.

Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the Amazon Web Services (AWS) cloud. Using Amazon EC2 eliminates your need to invest in hardware up front, so you can develop and deploy applications faster.

Description of OSI layers: the recommendation X.200 describes seven layers, labeled 1 to 7. Layer 1 is the lowest layer in this model.
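The seven X.200 layers can be captured in a small lookup table; the helper function is illustrative:

```python
# OSI reference model (ITU-T X.200): layer number -> layer name.
OSI_LAYERS = {
    1: "Physical",
    2: "Data Link",
    3: "Network",
    4: "Transport",
    5: "Session",
    6: "Presentation",
    7: "Application",
}

def layer_name(n: int) -> str:
    """Return the OSI layer name for a layer number, or 'unknown'."""
    return OSI_LAYERS.get(n, "unknown")

print(layer_name(1))  # Physical -- the lowest layer in the model
print(layer_name(7))  # Application
```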
The Trunk port between the router and switch had to be manually configured using sub-interfaces. Note, however, that the DATA vlan traffic and the phone VOICE vlan traffic for each host is carried over the same link (multiple vlan traffic over the same port).

Build the following topology in Packet Tracer. After testing our configuration, we will deploy it on devices.

Cisco CCNA - How to Configure a Multi-Layer Switch (Layer 3 Switch). Now that we have seen how a "router on a stick" works, we can introduce the Layer 3 switch. In the "router on a stick" topology, what if we could bring the router inside the switch?

Configuring your X Windows: no matter what desktop environment you choose, it is most likely that it will use the X Windows architecture.

Media: Linux installation can be done using a variety of different media. Each installation method has different pros and cons depending on the environment you have. Here are some examples:

Linux Uses: Linux is a pretty flexible operating system. Although it has gained a lot of credibility over the years as a stable server platform, it is also an excellent desktop platform. Databases, mail servers as well as many appliances can be installed.

Linux is a 32-bit open source operating system. It is based on the very popular Unix operating system and its code is freely available (thus explaining the "open source" label, as opposed to closed source, where the code is not available freely).

Amazon Simple Storage Service is storage for the Internet. It is designed to make web-scale computing easier for developers.

A heat sink is an electronic device that incorporates either a fan or a Peltier device to keep a hot component such as a processor cool. A fan is a hardware device that keeps the overall computer or a computer device cool by circulating air to or from the computer or component. The picture is an example of a fan on a heat sink.
A chipset is a group of microchips that are designed to work with one or more related functions. Chipsets were first introduced in 1986, when Chips and Technologies introduced the 82C206.

The BIOS and CMOS are oftentimes thought to be the same thing, but they are not.

Southbridge: the southbridge is an IC on the motherboard responsible for the hard drive controller, I/O controller and integrated hardware. Integrated hardware can include the sound card and video card if on the motherboard, USB, PCI, ISA, IDE, BIOS, and Ethernet.

Short for super input/output or Super I/O, SIO is an integrated circuit on a computer motherboard that handles the slower and less prominent input/output devices shown below.

Less commonly referred to as the Centronics interface or Centronics connector after the company that originally designed it, the parallel port was later developed further by Epson.

Originally known as the Analog-to-Digital port (A-to-D port), the game port, also known as a joystick port or game control adapter, is a 15-pin connector port first found on IBM computers in 1981.

If your computer is losing its time or date settings, or you are receiving a message such as CMOS Read Error, CMOS Checksum Error, or CMOS Battery Failure, the CMOS battery needs to be replaced. To do this, follow the steps below.

Alternatively referred to as a Real-Time Clock (RTC), Non-Volatile RAM (NVRAM) or CMOS RAM, CMOS is short for Complementary Metal-Oxide Semiconductor.

There are many types of expansion cards that can be installed in a computer, including sound, video, modem, network, interface card, and a number of others. In many cases, these expansion cards will fit in a slot in the computer called a PCI slot.
Result contains: identified critical functions and required resources. Identify the organization's critical business functions. Review the BIA: the BIA contains the prioritized list of critical business functions and should be reviewed for compatibility with the BC plan. Project Initiation and Management: develop and document project scope and plan.

Computer/Cyber Crime: CryptoLocker ransomware spreads via email and propagates rapidly. It encrypts various file types, and then a pop-up window appears to inform the user about the actions performed on the computer and demand a monetary payment for the files to be decrypted.

Computer as incidental to other crimes: involves crimes where computers are not really necessary for such crimes to be committed. Instead, computers facilitate these crimes and make them difficult to detect. Examples of crimes in this category may include money laundering and unlawful activities on bulletin board systems.

Source: MC MCSE Certification Resources

DoS (Denial of Service) - A DoS attack is a common type of attack in which false requests to a server overload it to the point that it is unable to handle valid requests, cause it to reset, or shut it down completely.

Physical Security - Physical security is just as it sounds: locks on the doors, cameras everywhere, and so forth.

User authentication is the verification of an active human-to-machine transfer of credentials required for confirmation of a user's authenticity; the term contrasts with machine authentication, which involves automated processes that do not require user input.

PKI (Public Key Infrastructure) - A public key infrastructure (PKI) is the combination of software,…

ACL (Access Control List) - An ACL is a table in an operating system or network device (such as a router) that denies or allows access to resources.
Application Layer vs. Network Layer - An application layer firewall works at the application layer of a protocol stack. (Source: MC MCSE Certification Resources)
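The ACL described above amounts to a lookup table of allow/deny rules. The sketch below invents a rule format for illustration, using the first-match-wins convention with an implicit deny at the end, as on many routers:

```python
# Each rule: (source_ip_prefix, resource, action). First match wins;
# anything that matches no rule falls through to an implicit deny.
acl = [
    ("10.0.0.", "/admin",  "deny"),
    ("10.0.",   "/admin",  "allow"),
    ("",        "/public", "allow"),  # empty prefix matches any source
]

def check(source_ip: str, resource: str) -> str:
    for prefix, res, action in acl:
        if source_ip.startswith(prefix) and resource == res:
            return action
    return "deny"  # implicit deny

print(check("10.0.0.7", "/admin"))   # deny  (first rule matches)
print(check("10.0.1.7", "/admin"))   # allow (second rule matches)
print(check("8.8.8.8", "/admin"))    # deny  (implicit deny)
```

Because rules are evaluated top to bottom, the more specific 10.0.0.x deny must precede the broader 10.0.x.x allow - the same ordering discipline real ACLs require.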
The number of online data breaches, as well as other forms of cybercriminal activity, has steadily increased over the past several years. In the months of April, May, and June of 2018, 765 million people worldwide were impacted either directly or indirectly by a data breach. With cyber-attacks occurring once every thirty-nine seconds, the number of people who are impacted by data breaches is expected to increase with each passing year. This is because more businesses are conducting business online and poorly secured internet-capable devices are used to connect to computer networks as well as the internet. In addition, cybercriminals have developed more advanced tools for penetrating supposedly secure computer networks. With these tools, cybercriminals are also able to remain undetected for longer periods of time after compromising a computer network.

As a business owner, it is important to take the security of your computer network seriously so as not to become a victim of a cyber-attack. In addition to using basic security measures such as antivirus and anti-malware software, it is also necessary to use a network intrusion prevention system (IPS). A network IPS not only protects your network during an attack; it also works to prevent an attack from being initiated in the first place. Discussed below are five of the top benefits your business stands to gain from using a network IPS to protect your business network from malicious cybercriminals.

The Top 5

1) IDENTIFY UNFAMILIAR HOSTS AND NETWORKS

At any given point in time, there are a number of devices connected to your business network. Using it for communication and other forms of data transmission is essential to your business. In addition, your business network may exchange data with other networks as information is transmitted to disparate geographical locations.
The devices connected to your network, as well as the other networks that you may be connected to, form attack vectors that can be used by cybercriminals to access and compromise your network. Using a network IPS, all devices connected to your network are identified and validated. Alerts are generated for unfamiliar devices or networks attempting to connect with your business network, as this may be indicative of a potential cyber-attack.

2) CONTROL ACCESS TO YOUR NETWORK

A major aspect of ensuring the security of your network is controlling the devices that have access to it. Only trusted devices should be allowed access to your network. In addition to allowing only trusted devices, access control also reduces the number of attack vectors that can be used by cybercriminals to target your network. You can use a network IPS to control which devices have access to your system and what kinds of administrative rights each device has.

3) NOTIFICATION OF POTENTIAL THREATS

Once a potential cyber threat has been recognized, it is essential that timely alerts are sent to the network administrator and the other parties involved in securing your network. The earlier these individuals are engaged, the sooner action can be initiated to mitigate or even prevent the cyber-attack. Using a network IPS offers you the ability to configure various types of network alerts depending on the needs of your business.

4) NETWORK TRAFFIC ANALYTICS

Knowing what kinds of traffic access your network, and from what sources, is important in ensuring its integrity as well as its security. It is therefore important to perform regular and frequent analytics of the traffic accessing your network; reports can be generated for further analysis, if necessary. Once this analysis is performed, you can control the kinds and the sources of data traffic that can get into your network. With a network IPS, you can easily analyze and regulate data traffic in and out of your network.
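The host-identification idea in benefit 1 can be sketched as a set comparison between the devices currently observed on the network and a known inventory. The device IDs below are invented examples:

```python
# Inventory of trusted device identifiers (e.g. MAC addresses).
known_devices = {"aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02"}

def detect_unfamiliar(observed):
    """Return device IDs seen on the network but absent from inventory."""
    return sorted(set(observed) - known_devices)

observed_now = ["aa:bb:cc:00:00:01", "de:ad:be:ef:00:99"]
for dev in detect_unfamiliar(observed_now):
    print(f"ALERT: unfamiliar device {dev}")
```

An actual IPS validates far more than an identifier (which can be spoofed), but the core alerting logic - compare observations against a trusted baseline and flag the difference - is exactly this.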
5) YOU SAVE MONEY USING A NETWORK IPS

Ultimately, using a network IPS will save your business money in the long run. An IPS goes a long way toward protecting your network against compromise. Businesses that become victims of data breaches often lose considerable amounts of money dealing with the fallout, such as the loss of customer trust, legal consequences, and rebuilding the network. In some instances, these businesses may never recover from a cyber-attack: 60 percent of small businesses fail to recover after a cyber-attack and, therefore, collapse.

The Bottom Line

At Cyber Sainik we have experts on hand ready to work with you in setting up an intrusion prevention system for your business. We work with you in determining your network configuration and develop a solution suited to your business needs. Contact us today for more information.
Blood Donation - Information & Importance of Blood Donation

History of Blood Donation

The earliest documentation of blood transfusion is found in the religious texts of many civilizations. The first documented demonstration of blood transfusion was between two dogs, by Richard Lower in 1665. Landsteiner discovered the ABO blood group system in 1901, one of the most important landmark discoveries in transfusion medicine.

In the 1970s voluntary donors were accepted as blood donors. Some of these donors were later found to be people engaging in high-risk activities, and the recipients were found to be suffering from liver diseases. This led to another discovery: hepatitis B transmitted by donated blood. Since then, testing for the hepatitis B antigen was implemented, and this, together with the cessation of paid donors, reduced the incidence of post-transfusion hepatitis. Further studies also led to the inclusion of tests for malaria, syphilis, AIDS, and hepatitis C, to make donated blood as safe as possible for the recipient.

What is blood?

One can almost say that blood is a magic potion which gives life to another person. Though we have made tremendous discoveries and inventions in science, we are not yet able to make the magic potion called blood. Human blood has no substitute. The requirement for safe blood is increasing, and regular voluntary blood donations are vital for blood transfusion services.

Why donate blood?

Did you know that someone in the US needs blood every two seconds? Donating blood is a phenomenal way of doing a good deed. More than 41,000 blood donations are needed each day, and because blood cannot be manufactured, the only way to supply this need is via generous blood donors.

Who can donate blood?

The eligibility criteria for blood donation are simple. A donor should be between 18 and 55 years of age, weigh 50 kg or above, and have a normal pulse rate, body temperature, and blood pressure. Both men and women can donate blood.
There are only a few conditions in which donors are permanently excluded. A donor with a history of epilepsy, psychotic disorders, abnormal bleeding tendencies, severe asthma, cardiovascular disorders, or malignancy is permanently unfit for blood donation. Donors suffering from diseases like hepatitis, malaria, measles, mumps, and syphilis may donate blood after full recovery, with a 3-6 month gap. People who have undergone surgery or blood transfusion may safely donate blood after 6-12 months. Blood is not taken from women donors who are pregnant or lactating, as their iron reserves are already on the lower side.

How much blood can be taken during blood donation?

Our body has about 5.5 liters of blood, of which only 350 ml to 450 ml is taken, depending upon the weight of the donor. The majority of healthy adults can tolerate the withdrawal of one unit of blood. The withdrawn blood volume is restored within 24 hours, and the hemoglobin and cell components are restored in 2 months. It is therefore safe to donate blood every three months.

What is done with the blood collected?

The blood is collected in sterile, pyrogen-free containers with anticoagulants like CPDA, or CPDA with SAGM. This prevents clotting and provides nutrition for the cells. The blood is stored at 2-6 °C or -20 °C depending on the component prepared. Donated blood undergoes various tests, including blood grouping, antibody detection, and testing for infections like hepatitis, AIDS, malaria, and syphilis, and before it reaches the recipient it undergoes compatibility testing with the recipient's blood.

Modern Blood Transfusion Practice

Modern blood transfusion basically deals with the optimal use of one unit of blood. One unit of whole blood is separated into components, making it available to different patients according to their requirements. Thus one unit of blood is converted into packed red cells, fresh frozen plasma, platelet concentrate, cryoprecipitate and granulocyte concentrate.
Another important practice is apheresis. This is the separation of only the desired component from the donor's blood, returning the remaining constituents to the donor. This technique is also used for removing pathological substances from patients. Withdrawal of blood for transfusion is regarded as a safe procedure now, and the blood donor has emerged as the single most vital link.

Health Benefits of Blood Donation

There are many benefits of blood donation for your health. Some are mentioned below.
- Blood donors have been found to be 88 percent less likely to suffer from a heart attack.
- Donating blood keeps the iron levels in your blood in balance. For each unit of blood donated, you lose about one-quarter of a gram of iron. You may think this is a bad thing, since iron deficiency may lead to fatigue, decreased immunity, or anemia, which can be serious if left untreated, but what many people fail to realize is that too much iron can be worse, and is actually far more common than iron deficiency.
- Repeated blood donations may help your blood to flow better, possibly helping to limit damage to the lining of your blood vessels, which should result in fewer arterial blockages.
- Blood donors are less likely to get heart attacks, strokes, and cancers, and people who volunteer for altruistic reasons (those who help others rather than themselves) appear to live longer than those who volunteer for more self-centered reasons.
- Repeated blood donations may reduce your risk of getting diabetes.
- The satisfaction that you get after donating cannot be expressed. It can only be felt. It is said that God loves a cheerful giver.
In the infancy of the internet, AOL Instant Messenger was a state-of-the-art means of communication, used as an early social platform and helping office workers get faster responses from colleagues. A few years on, the dot-com entrepreneurs stormed ahead with innovation after innovation, making early messenger services seem as obsolete as they had seemed ground-breaking a few years prior. In the last few years, though, chatbots have given us a glimpse into the future.

The Bad and the Good!

Millions of people already use platforms like Facebook Messenger and conventional text messaging every day - but chatbot technology is set to take the message format to the next level. Admittedly, chatbots have not been without their problems. Despite being the world's largest software firm, Microsoft saw its first steps into using this technology halted due to racist remarks generated by its chatbot, 'Tay'. What started out as something innocent and helpful turned into something rather more sinister when the chatbot started posting inflammatory and offensive tweets, as a result of gathering offensive comments that had been posted online.

Thankfully, that's not the case for the many chatbots that are making a positive impact on people's lives. In fact, Chatbots Magazine have stated that 2018 is the 'year of the chatbot', owing to huge advances and investment in chatbot technology by major firms. A great example is the Lark chatbot, which is already saving lives and improving health as a piece of technology that assists people who live with chronic diseases, and it has gathered a great reputation by doing so. By checking people have been eating the right food, or have taken their medication at the right time, support is offered by giving feedback which is comparable to having a nurse on hand - at a fraction of the cost, and with less intrusion.
This is just one example of the importance that chatbots can and will have in our lives; rather than being novelty items, they are extremely useful tools.

Cost Saving and Expenses

Reducing unnecessary staff is an example of saving time and effort through a chatbot, but it also saves money for organisations by reducing wage bills and avoiding many costly mistakes that can occur through human error. As well as these benefits, it is easier to monitor expenditure in terms of payroll access and expense processing, both of which can be made faster by giving and receiving employee feedback, to ensure all parties agree over monetary amounts. Expense claims can be a bugbear for many employees who want them processed as quickly as possible, but face delays due to archaic processing methods and outdated technology.

Consider this scenario. A salesperson has to stay away from home for two nights to meet with customers. They keep all their receipts in a safe place for when they get back to the office to make an expense claim. They fill in an expense form and post the receipts to the payroll or expenses department. There is then a long wait for the expenses to be approved and the money paid. It's a frustrating process for the employee, particularly if they are left struggling because they're out of pocket. Had they used a chatbot, it would have read the image, turned the text from the paper into digital text and automatically sent the info to their expenses software for manager approval, with a response back in minutes. They could also check and confirm the amount, saving precious time, without having to keep mounds of paperwork!

Appointment Scheduling Software

Chatbots work for a diverse mix of businesses, including makeup retailer and online brand Sephora. A historic brand, Sephora has moved from its origins on the Parisian high street to using an innovative chatbot through Facebook Messenger, adding value for its customers.
Already offering 'virtual makeovers' online through an augmented reality app which uses customer photos to show how products would look, the company are a good example of how technology can keep your business fresh. Their chatbot is used by customers to reserve appointments, currently for their US stores only, but with a far-reaching application for all businesses worldwide, and in every sector. By implementing scheduling software of this type, employees can manage themselves, arrange meetings and meet with customers - but the real benefit is that they can do all this whenever and wherever they are. The mobile and intuitive nature of chatbots means that appointments can be easily arranged in environments where using technology is not usually practical, for example on a building site. By using the chatbot, a few pressed buttons can do all the work, and chatbots can memorise several common commands, such as 'site survey' or 'boiler repair', which are most frequently used, meaning mobile workers can organise themselves mid-job if required.

An Interactive Online Holiday Booking System

Busy staff sometimes need a rest from the challenges the day throws at them, and they, their managers and the HR department all have limited time and energy for carrying out administrative tasks. When a break is required, booking a holiday doesn't need to be a challenge for anyone, as chatbots now have the functionality to do this in an instant. Imagine you are that staff member: after checking the dates and availability, your company chatbot can automatically send a message to one of your managers to authorise your annual leave, and the usefulness does not stop there. Once leave is authorised, it is now possible to book your actual holiday through a chatbot, selecting the best flights or using a questionnaire to establish the best kind of holiday for you. Many holiday companies like Kayak and Booking.com are already using chatbots to good effect, taking the hassle out of booking.
The traditional airline British Airways took the bold step recently of using emojis to operate its messenger chatbot, after finding that more people prefer communicating by emoji than by words. The so-called 'Emojibot' works by asking a chain of questions about the kind of holiday you are after, so a quick response can be made to speed up the search. This quick-fire rendition of a chatbot shows that as AI chatbots continue to develop, the time saved for businesses is going to be very significant, and bots a joy to use. The process of employees organising their schedule, booking annual leave, then booking a holiday for that break, shows that the multi-use nature of chatbots is hugely important for the future of business, and the present direction of technology; hugely benefiting businesses and their personnel management.

The Ultimate Attendance App

Attendance is a challenge and high priority for many employers. By using a chatbot, staffing levels can be checked through a series of questions. Staff can also virtually check in when they have arrived at work, training or customers' premises. This makes it simple to see how many staff are working, and what staff are doing; for example, who's in or out of the office.

Why We All Need a Chatbot

From a practical standpoint, chatbots are a great thing: saving money and time and (when programmed effectively) offering consistently useful help to both customers and employees. Having already been adopted successfully by many international companies, they do not seem to be going away. The chatbot can be a tool with a human voice, something more approachable than the software it drives - employees may find self-service procedures dull or complex, for example, but by adding the illusion of a human assistant, the customer journey is far smoother, and the results will speak for themselves.
Additionally, using a chatbot on a phone offers privacy to employees who wish to keep certain appointments confidential, such as medical appointments or an application for a promotion - things they may not feel comfortable having displayed on their desktop monitor or work laptop for their colleagues to see, and comment on. When a new technology emerges, consumers and business leaders want to know the benefit of purchasing the product, and what it does. With an AI chatbot, the question is almost 'what can't it do?' From organising, to engagement, to cost saving - a new era of technology has begun, and its possibilities are endless.

Hannah Jeacock, Research Director at MHR

Image Credit: Montri Nipitvittaya / Shutterstock
Circular Definitions: Quiz

Question: Can you spot the circularity among these three definitions?

Solution: a specific way of satisfying one or more needs in a context
Stakeholder: a group or individual with a relationship to a change or a solution

Answer 1: The definition of "need" references "stakeholder", whose definition references "solution", whose definition references "need". The definitions don't need to be circular. It's simply bad definition/glossary practice. How can the circularity be removed? "Need" should simply be defined as "problem, opportunity, or constraint". That's the essence of the concept. Whether it has "value to a stakeholder" is an assessment. Just because one person says something (a need) has value and another person says it (the need) does not (which happens all the time, by the way) doesn't mean the thing is not a need to either: (a) the person who has it, or (b) the person who says it has no value. The latter person would say "that need has no value". He/she just called it a "need", so how can it not be a 'need'?

Answer 2: There is actually a second circularity in the definitions: need –> value –> stakeholder –> solution –> need. Changing the definition of "need" as I propose above will fix that second circularity too.

"Need" is a very basic concept. If there were no need, there would be no reason to assess what value it has to what stakeholder (including the one who proposed it). If there were no need, there would be no solution, context or change (in the BA's world). It's a seed concept.

Principle: In developing a vocabulary it's best to start from terms whose definitions use no other terms … i.e., from seed concepts. Otherwise, circularities will make Swiss cheese of your brain.

www.BRSolutions.com
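This kind of circularity can also be found mechanically: treat each term as a node with edges to the terms its definition references, and search for cycles. The mini-glossary below is a simplified model of the article's example, not the exact definitions:

```python
# Each term maps to the terms its definition references.
glossary = {
    "need":        ["value", "stakeholder"],
    "solution":    ["need", "context"],
    "stakeholder": ["change", "solution"],
    "value":       [],
    "context":     [],
    "change":      [],
}

def find_cycle(graph):
    """DFS over the definition graph; return one cycle as a list, or None."""
    def dfs(node, stack):
        stack.append(node)
        for nxt in graph.get(node, []):
            if nxt in stack:                      # back-edge: cycle found
                return stack[stack.index(nxt):] + [nxt]
            found = dfs(nxt, stack)
            if found:
                return found
        stack.pop()                               # done with this branch
        return None

    for start in graph:
        cycle = dfs(start, [])
        if cycle:
            return cycle
    return None

print(find_cycle(glossary))  # ['need', 'stakeholder', 'solution', 'need']
```

Applying the article's fix - redefining "need" with no references to other glossary terms - empties its edge list, and the search then reports no cycle: a quick automated check that a vocabulary really does grow from seed concepts.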
You've entered the Cognigy.AI Lab. This content and example code is inspirational and not quality assured for production.

Getting the mindset

Let's take the common box analogy: say that we want to enlarge a box and won't care about best practices such as enlarging it with the same material that it is made of. You have to think outside the box to get the target dimensions, but there might be materials inside the box that you can use. If the box has a lid that you do not need, you can use it as material as well.

Imagine that you want something to exist. You either do not have the resources or do not want to spend the resources to build it from scratch. However, you have a decent enough substitute for your needed resources, which you could form, mold and twist until you have created your desired outcome.

Hacking software is usually a lot more forgiving than hacking hardware, since creating backups costs less time and effort; it is harder to create new boxes than to copy files on a hard drive. If something breaks, you simply go back and try something else. When trial and error is such a viable option, you do not necessarily need to have a clear path, just a clear goal.

Defining the box

When working with chatbot tools such as Amazon Alexa, Cognigy or Dialogflow, you define "intents" (definitions of what the user wants, with corresponding example utterances) and the chatbot tries to map one best-matching intent to a single user input. I wanted a way to map multiple intents to a single user input, and for it to be easily configurable.

So what does that mean? Usually you have intents such as "Where is the pizza I've ordered" or "I want to know the status of my Amazon package", with corresponding answers such as "Your pizza is being delivered" and "You can track your parcel on …".
The problem we want to fix here is that if a user says "Where is the pizza I've ordered and what is the status of my Amazon package", either the pizza or the package intent and answer gets triggered. Instead, we want both the pizza and the package intent to trigger, and the user should receive the answers for both.

Keeping it simple

Most NLP tools use example sentences to train their intents and build up a machine learning model from them to map user input to one of the intents. However, the example sentences can often be reduced to certain keywords and keyword combinations that are unique to that specific intent. If you think about it, "I want to order a pizza" and "I want to buy a car" are very similar sentences; the actual intent difference is only expressed by "pizza" and "car". If you group "order" and "buy" together and have an opposite key-phrase "sell", you can already distinguish between four intents:
- buy/order pizza
- buy/order car
- sell pizza
- sell car

Key-phrase intents excel as simple solutions for simple problems. However, they are ill-suited for "fuzzy" conversations such as small talk. They are still used here as they represent an easy-to-understand starting point if you are new to conversational AI. They also allow us to outsource the intents to a table.

Outsourcing the Intents

Setting up the Google Sheets

The multiple intent mapping that we want needs to map the key-phrase intents to answers/output phrases, and is therefore a two-dimensional matrix/table. Excel was the first thing to come to my mind, but I needed it to be accessible by the chatbot platform, so the Google Sheets API was the cheapest thing that fulfilled the requirements.
The intents would be simple key-phrase triggers, so one column of a sheet maps one intent:
- The first line contains the "intent name" or tag
- All following lines contain trigger key-phrases

The mapping sheet contains all the answers in the first column; the other columns all map the intents/tags:
- "t" means that the tag needs to be triggered
- "n" means that the tag is not allowed to be triggered
- An empty cell means that it does not matter whether the tag is triggered or not

This approach allows a lot of possibilities. Generally speaking, you want each line to have a unique combination of tags. If you have multiple lines with the same combination, all of them will be triggered by the same user input, in the order they appear in the sheet. In the example given, the answers from lines 4 and 5 can be triggered with one user input, as they do not exclude each other. If none of the tags is triggered, the default answer from line 6 is presented.

Connecting the Chatbot

Implementing the Cognigy connection

Cognigy has HTTP-Nodes that are able to retrieve the necessary sheets from the Google Sheets API, and its Code-Nodes can create Lexicons on the fly. With these tools it only required a bit of coding to get the logic into the project. The final Flow was very tiny, since nearly all of the conversational logic had been outsourced to the sheets file: all of the Nodes attached to the Once-Node import the Flow logic from the Google Sheets and store it in "cc.answers" at the first user interaction/the session start. The Think-Node allows the user input to be processed for Slots by the generated Lexicon. You can find the contents of the Code-Nodes on GitHub; the HTTP-Nodes use the following structure:

Thinking final thoughts

We have completely ignored Cognigy's Intents, which represent a large part of its core functionality, but achieved our goal in doing so.
One thing to keep in mind is that the principles proposed here could actually be used together with Cognigy’s Intents if necessary.
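As a closing illustration, the sheet's "t"/"n"/empty matching rules can be sketched in plain Python. The tags and answer texts here are illustrative stand-ins, not the actual Code-Node contents from the GitHub repository:

```python
# Each answer row pairs a rule dict with its output phrase:
# "t" = tag must be triggered, "n" = tag must NOT be triggered,
# "" = don't care. Every matching row's answer is returned, in order.
ANSWERS = [
    ({"pizza": "t", "parcel": ""},  "Your pizza is being delivered."),
    ({"pizza": "",  "parcel": "t"}, "You can track your parcel online."),
]
DEFAULT = "Sorry, I didn't understand that."

def answers_for(triggered_tags):
    out = []
    for rules, answer in ANSWERS:
        ok = all(
            (want == "t" and tag in triggered_tags)
            or (want == "n" and tag not in triggered_tags)
            or want == ""
            for tag, want in rules.items()
        )
        if ok:
            out.append(answer)
    return out or [DEFAULT]

# A combined question triggers both tags, so both answers come back:
print(answers_for({"pizza", "parcel"}))
```

Note how rows that don't exclude each other fire together, just like the sheet lines described above, and the default answer only appears when no row matches.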
https://support.cognigy.com/hc/en-us/articles/360017274920-Hacking-external-Content-into-your-Conversational-AI
We all know that cybercrime is on the rise. Organizations are also well aware of the risk to their reputation and the financial damage that cyber-attacks and data leaks can cause. Studies have found that up to 90% of all successful cyber-attacks and data leaks stem from some kind of human manipulation. But how do you measure human cyber-risk? AwareGO has just released the tool you need to measure and manage human risk in cybersecurity: the Human Risk Assessment.

What is human cyber-risk?

Human risk is everywhere, from general physical risk (like not wearing appropriate safety gear) to cyber risk and data leaks. If employees are unaware of cyber-risks, regularly bypass internal cybersecurity measures or disregard the organization's policies, that amounts to your human cyber-risk. Most organizations offer some type of cybersecurity training, often just to check a box and comply with data privacy regulations. What they've been missing is a way to measure human cyber-risk, to really know where they stand and manage the human risk in cybersecurity.

How NOT to measure human cyber-risk

Until now, the only tools available to measure human cyber-risk have been phishing simulations and USB-drop simulations. These are designed to show organizations how many employees fall for phishing emails or would connect an unknown USB drive to their computer. What these simulations don't show is a holistic overview of employee knowledge and behavior. There can be many reasons why a person does not click on an email or pick up a USB drive, such as being too busy in that particular moment. We've also heard of instances where phishing simulations were made too hard to spot, even using the organization's real domain, which resulted in malicious compliance from employees who refused to open any emails or respond to meeting invitations.
Measuring human cyber-risk must therefore be done in a way that does not set employees up to fail, and it must show real and actionable results that are not a fluke. So how do you measure human cyber-risk without that negative component?

A holistic solution to measuring and managing human risk in cybersecurity

For the last two years, cybersecurity and behavioral experts at AwareGO have been working on an overall solution to measure and manage human risk in cybersecurity. The solution is called the Human Risk Assessment, and it is finally available to all. The Human Risk Assessment is a holistic solution to measure, detect and manage human risk in cybersecurity. It is an interactive space where organizations can assess their employees' knowledge and behavior in a safe and positive environment. Employees get to see their own results in a granular way and get information on wrong and right answers, so that they are learning and becoming more aware in the process. The results can then be used to give training to the employees who need it, in the correct threat areas.

How to use results from the Human Risk Assessment

Depending on how security admins categorize participants, they can get granular results on various threat areas, categorized by geographical area, title within the organization, division and more. Once they have the results, they will know what type of training is needed and which groups should receive it. Cybersecurity training and managing human risk in cybersecurity become less of a guessing game and more of a data-based approach. We recommend sending out the risk assessment both before and after training, to see whether the training is working and which threat areas your organization needs to tackle first. The Human Risk Assessment is available as a stand-alone product because it works with any type of training. It measures both knowledge and behavior, so there is no guessing game when it comes to the results.
Each question is designed to measure every factor of human knowledge, behavior and awareness regarding different threat areas. This goes far beyond just phishing and USB drives, as the Human Risk Assessment covers multiple other threat areas, such as password handling, hybrid work, data security, social media behavior, physical security and more.

The value of cybersecurity

Cybersecurity teams often have a hard time showing the value of their work. With the Human Risk Assessment, they will be able to show the organization's cybersecurity score and where it is vulnerable. They will also be able to show the results of training in those threat areas and how continued awareness training and human risk management in cybersecurity benefit the organization. CISOs and their cybersecurity teams can now present visible results to the C-suite in a comprehensive way that demonstrates the value of their work. Because the Human Risk Assessment measures human cyber-risk in multiple threat areas, it will also be clear to both cybersecurity admins and executives that excelling in one area of cybersecurity is not enough to keep the organization safe. Human risk management is needed across all threat areas to minimize risk and keep the organization safe.

The human risk factor

As already stated, the vast majority of cyber-attacks stem from some type of human error or manipulation. That means that cyber criminals are increasingly turning their methods towards breaching humans instead of breaching firewalls and antivirus software. Humans can be the weakest part of any organization's cyber defense. But the human factor can also be turned into the organization's greatest asset when it comes to cybersecurity. A workforce that is aware of cyber-risks and trained to recognize the red flags of cyber-attacks could save your organization thousands of dollars – or more, depending on the size of the organization.
By measuring cybersecurity awareness, knowledge and behavior, you can manage human cyber-risk with the right training and awareness programs. The Human Risk Assessment is the best tool to do that and to eliminate guessing from your cybersecurity efforts. Sign up – no credit card or commitment needed. Our award-winning content is part of the package.
https://awarego.com/how-to-measure-human-cyber-risk-finally/
Web services are software systems designed to support inter-device communication over a network. Different types of network platforms are used for this communication. There are several protocols available in the OSI layer model, used at different layers for multiple functions such as data transfer, messaging, encryption and so on. They give a uniform way of communicating or exchanging information. Today we look in more detail at SOAP (Simple Object Access Protocol), an XML-based messaging protocol for the exchange of information among computers.

About SOAP Web Services

SOAP evolved as an object access protocol and was released as the successor to XML-RPC in June 1998. SOAP, an acronym for Simple Object Access Protocol, is a protocol used in web services for the exchange of data between two different hosts on a network. The XML format is used for data transfer over HTTP. In web services, SOAP lets applications written in different programming languages interact and provides a way of communication between applications running on diverse operating systems such as Linux, AIX, Windows and so on. SOAP acts as an application of the XML specification. SOAP can be used with a variety of messaging systems and can be delivered using a variety of transport protocols such as SMTP and HTTP, but initial usage focused on HTTP, which is the most popular method of data communication on the Internet. SOAP may also be used over HTTPS with either simple or mutual authentication.
Features of 'SOAP'
- Open standard protocol used in web services to communicate over the Internet
- Extends HTTP for XML messaging
- Provides data transport for web services
- Can be used as a broadcast protocol over a network
- Used to call remote procedures and for complete document exchanges
- Platform independent and supports many languages (hence language independent too)
- A SOAP message comprises an envelope, header and body element
- Enables client applications to connect to remote services and invoke remote methods

Pros and Cons of 'SOAP'

Pros:
- Recommended by the W3 consortium for web services and as an application programming interface to communicate with clients
- Lightweight communication protocol
- Platform independent: can run on Linux, Windows and macOS
- Uses HTTP, the default protocol for sending messages over the network, which is supported by all web applications
- Easy to build; no toolkits needed

Cons:
- Uses only the XML format in web services and doesn't support lightweight formats like JSON
- Slow parsing speed of XML, and the payload is large
- No security features of its own
- No state interface for remote objects in a SOAP client
- Limited to polling, with no event notifications; typically one client uses the service of one server
- Tends to suffer firewall latency when the firewall is analysing the HTTP transport

SOAP Message Structure and Working

A SOAP message is a simple XML document with the elements below:

Envelope – defines the start and end of the message; a mandatory element that contains the details of the SOAP message.

Header – carries optional attributes of the message, such as credential information (authorization), authentication, etc., used for processing the message either at the endpoint or along the way. This is an optional element.

Body – the XML data comprising the message to be sent; a mandatory element that contains the request and response information in XML format (the actual content of the message).

Fault – captures errors that occur during processing; an optional element.
It holds the status of SOAP messages and errors. Sub-elements of Fault:

| Sub-element of Fault | Description |
|---|---|
| <faultcode> | Identifies the fault code in the SOAP message |
| <faultstring> | Provides a human-readable description of the error |
| <faultactor> | Indicates who caused the fault during processing |
| <detail> | Application-specific error information in the body element |
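The envelope/header/body structure above can be sketched with nothing but the Python standard library. The payload names (GetPrice, Item) and the example fault are illustrative, not from any specific service:

```python
import xml.etree.ElementTree as ET

# SOAP 1.1 envelope namespace.
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_envelope(body_xml):
    # Wrap an application payload in a minimal SOAP 1.1 envelope.
    return (
        f'<soap:Envelope xmlns:soap="{SOAP_NS}">'
        "<soap:Header/>"
        f"<soap:Body>{body_xml}</soap:Body>"
        "</soap:Envelope>"
    )

def fault_string(envelope_xml):
    # Return the human-readable <faultstring>, or None if no Fault.
    # In SOAP 1.1 the Fault element is namespaced; its sub-elements are not.
    root = ET.fromstring(envelope_xml)
    node = root.find(f".//{{{SOAP_NS}}}Fault/faultstring")
    return None if node is None else node.text

request = build_envelope("<GetPrice><Item>Apples</Item></GetPrice>")
fault = build_envelope(
    "<soap:Fault><faultcode>soap:Client</faultcode>"
    "<faultstring>Item not found</faultstring></soap:Fault>"
)
print(fault_string(request))  # None - this message carries no Fault
print(fault_string(fault))    # Item not found
```

In a real client the envelope would be POSTed over HTTP (or SMTP) to the service endpoint; the sketch only shows how the mandatory and optional elements fit together.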
https://networkinterview.com/understanding-soap-web-services/
If you're searching for an RTU to monitor your specific network system, there are some important points to keep in mind so you'll end up with a perfect-fit device. One of these points is the transmission of data in your network - how will your RTU send information to your master station? Some of our clients can't choose which method of data transport they prefer, simply because they have to work with whatever their network structure is. For example, you may want to use Ethernet, but your remote site might be so distant that the only transport method available is data transmission over telephone lines. Getting to know the available transport methods is an indispensable step toward a well-informed decision, so let's dive in.

To define serial transmission, think about the transmission of data one bit at a time - one data bit after the other on the same wire. Most legacy RTUs transfer their information through a serial connection: the data is sent in sequence, one bit at a time, over a communication channel or bus. This serial binary data transfer between your RTU and master happens directly through an RS-232 or USB B port. A common protocol used for serial communication is MODBUS. This open protocol establishes the information exchange between remote and master by defining a message structure that the devices will recognize and use. This older type of transmission has now been largely replaced by Ethernet.

Using Ethernet in your Local Area Network, or just LAN, allows linked devices to access and share data between themselves and communicate with each other. This means your master station will receive data from your RTU via Ethernet over your LAN, because they'll be connected through it.
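MODBUS's message structure is simple enough to sketch. The example below builds a serial (RTU-framed) "read holding registers" request with the CRC-16 check that every Modbus RTU frame carries; the unit address and register range are illustrative, not tied to any particular device:

```python
import struct

def crc16(frame):
    """CRC-16/Modbus, appended to every Modbus RTU frame."""
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc

def read_request(unit, start_addr, count):
    """Build a 'read holding registers' (function 0x03) request frame."""
    body = struct.pack(">BBHH", unit, 0x03, start_addr, count)
    # The CRC is transmitted low byte first (little-endian).
    return body + struct.pack("<H", crc16(body))

frame = read_request(unit=1, start_addr=0, count=2)
print(frame.hex())
```

A handy property of this CRC: running it over a complete, valid frame (payload plus appended CRC) yields zero, which is how receivers verify the frame before acting on it.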
The type of protocol that your remote supports will determine the rules and specifications for sending and receiving data. This type of network covers a somewhat small area; however, you can use telephone lines or radio waves to connect multiple LANs over greater distances. If you have a system of LANs connected together, then you have a Wide-Area Network, or WAN. The protocol most often used for Ethernet data transfer is TCP/IP. There are several benefits to sending your alarm data over a TCP/IP network:

- TCP/IP is simply faster than a dedicated land-line (circuit-switched network), due to the many paths that data can take in a packet-switched network.
- TCP/IP features automatic recovery from any dropped or lost data. It can guarantee that the data is transferred without the need for any intervention by the user.
- Mediation of legacy gear to different computer systems: TCP/IP is compatible with any computer system, eliminating the need for proprietary systems for telemetry monitoring.

In summary, if you want higher communication and data transfer rates without recurring costs for a phone line - assuming your remote sites are able to support it - Ethernet is the way to go.

Do you have remote sites that are completely unmonitored because you don't have LAN? Frame Relay/T1 is a common solution for this problem in WANs worldwide. Since it can be cheaper and simpler than other connection types, it has become quite popular. This can, however, present a problem for network reliability management tasks. Monitoring equipment doesn't typically have native Frame Relay/T1 support, and therefore has no way of communicating T1 alarm monitoring data to a master from sites that are only connected via this transport. In response to this problem, some forward-thinking vendors have created new devices capable of Frame Relay/T1 data transmission without the need for any additional equipment. These advanced units are capable of "T1 alarm monitoring."
As an added bonus, they can also serve as T1 routers, providing Ethernet to other site devices using only your existing Frame Relay/T1 connection. T1 RTUs change the game of monitoring your outside-plant sites. Unlike traditional LAN RTUs, WAN RTUs can be deployed directly at sites that don't have LAN. Once you do that, you'll instantly have monitoring at your remote sites. If your remote sites are distant from both Ethernet and serial connections, an option is to purchase an RTU that reports alarms over its dial-up modem. POTS (plain old telephone service) is one of the most common methods of data transfer RTUs can support. This telephone network employs an analog signal for data transmission. Since they make use of leased telephone lines, dial-up connections can get very expensive. Sadly, for sites on mountain tops, for example, the phone company may charge you a large amount of money just to wire each site to POTS. Once the site is wired, they will charge you monthly service fees that can add up quickly. You need a voice system that can connect the sites across your network through existing LAN, without service fees or requiring new lines to be installed. VoIP - short for Voice over IP technology - is a great way to accomplish this. Using a VoIP system, you can send voice communications through your sites existing LAN connection. While there are many systems like this on the market, it's essential to find one that is durable and has all the features you need. Additionally, you want something that is easy for your techs to operate. Of course, it must be reliable for those that are hard to connect. Look for a VoIP system that can call station-to-station, but also has hoot 'n' holler or "all-call" functionalities. These are standard functions on many traditional (serial) order wire systems that your techs may be familiar with from earlier in their careers. 
One issue with VoIP technology is the tendency for multi-point voice communications to increase the load on your network. Broadcasting traffic slows the network by increasing network strain through the transfer of extra packets. This impacts everything else on your network. Do you plan to frequently host multi-site "conference calls" using VoIP? If so, look for a system that uses a dedicated hardware server for conference calls. This changes broadcast traffic to a much more efficient central-server model, reducing network load. You should also consider purchasing a system that can make outside phone calls. This is a surprisingly handy tool. If something breaks, your techs may need a way to notify outside agencies, even from a mountain top.

Many companies have implemented remote monitoring and control systems. Some companies find themselves in the unpleasant position of deploying updated revenue-generating core equipment at sites without access to a LAN uplink. Updating LAN infrastructure, especially across the vast geographical territories many companies cover, is an expensive and sometimes slow process. What transport options are available for these sites before LAN reaches them? What options does an on-site technician have for accessing the internet or company VPN portals? Cell phones and the networks they operate on have created some interesting transport options. Many companies have begun to adopt technology into their infrastructure that comes from the phone in their pocket. This form of transport is very effective because - as long as you're within reach of a cell tower - you can make use of existing networks. Cellular networks offer new transport possibilities for Remote Telemetry Units (RTUs) and master stations to communicate without traditional connections. How can equipment establish a connection and transmit this information through cellular towers?
The fastest solution is to use an external GSM modem. Many LAN-enabled monitoring or mission-critical devices are still fully capable of performing their job, but lack the LAN path that many technicians rely on to make their job more efficient. By adding an external GSM modem, they gain a wireless alternative that offers all the benefits of modern transport at a fraction of the cost of a full LAN upgrade. By upgrading to GSM-capable networks, businesses are also left with a back-up plan in case their LAN - or in some cases hardware - fails. This also allows them to have messages sent to the master station, notifications sent to individuals working in the field, or even an alarm response triggered across the country in a split second. Signals can be promptly sent across the internet through the cellular service's data packages, or to a designated person through an SMS message. The drawback of this method is that you'll have monthly costs, since you're using another organization's services. It's also possible that your sites are so distant that the nearest cell tower is hundreds of miles away - so there's no way you're able to catch any signal.

Imagine you are operating a water-level and temperature monitoring station in the most remote parts of Alaska. The nearest cell tower is hundreds of miles and several mountains away. The odds of receiving a signal are slim to none. This doesn't mean that communications aren't still necessary, though. You might be inclined to use radio transmissions, but due to the terrain and availability of service technicians, this route is also out. All the previous methods of communication are not an option for your specific case, but there's a solution. The one "for sure" connection still available is already above your heads. Satellites are a bond between the most remote sites and civilization.
With data speeds rivaling those of hardline connections, and the ability to link up with equipment in the most remote areas, satellites have become a staple that holds together some of the largest and most remote network monitoring solutions.

Now that you know the types of RTU data transmission, you might be wondering what your best option is, since you may have a complex system with different remote sites that support different transports. If you're in the middle of a transport migration, you might be unsure as well. It's understandable that you think you've got a big problem: you may need to install alarm monitoring now, but don't want to buy an RTU for your present method of transport when you know it'll be switched to another method in the near future.

For those scenarios, DPS offers flexible RTUs that can handle multiple transport methods. A good example is the NetGuardian 832A. This device can support three methods of data transfer: Ethernet, dial-up, and serial. This way, it'll be capable of working with whatever situation you have at your remote sites. If your problem is that your network is transitioning, this NetGuardian can support Ethernet, dial-up, and serial connections at the same time. As your system changes from one method of transport to another, you can still use this same remote. With the NetGuardian 832A's capabilities, you won't have to deal with numerous devices and screens to view all your remote sites' activity. As a matter of fact, you'll even be able to configure a standard set of alarms for each site. You won't have to go through the trouble of training your techs on several kinds of units either. Our RTUs help you get your job done in an easier and more effective way. So call us today (or send us an email!) to talk to one of our experts about your specific scenario, and we'll come up with the perfect-fit solution for you.

You need to see DPS gear in action. Get a live demo with our engineers. Have a specific question?
Ask our team of expert engineers and get a specific answer! Sign up for the next DPS Factory Training! Whether you're new to our equipment or you've used it for years, DPS factory training is the best way to get more from your monitoring. Reserve your seat today.
https://www.dpstele.com/blog/top-six-rtu-transport-methods.php
How many times have you seen passwords attached to monitors on sticky notes? How about people who use the password "password" or "123456"? With a lot of us having to work from home because of COVID-19, data security and privacy have become more important than ever, since we are not in the protective confines of an office and many of us may have to use our home computers. In 2020 we have a lot of great technology to access our computers, tablets and phones. You can access your phone with your face and your laptop with your thumb, but they are all still based on an initial password. We've all read stories about using strong passwords and how easy it is to guess people's passwords. The fatal flaw in the system is that we need something that isn't obvious, but that we can still remember.

One of the simplest methods of creating a more complex password is to use upper- and lower-case alphanumerics plus a symbol. There is a great site that can help you understand this. Go to http://howsecureismypassword.net/ and type in combinations of letters, numbers and symbols to see what it tells you. Another great site to help generate a stronger password is https://www.safetydetectives.com/password-meter/. These are not foolproof methods of choosing a password, but they will give you a good idea of what is secure and what's not.

Here are a few examples. If you use "password", a person or program can crack your password and access your information in seconds. If you add some symbols and use "pa$$word", it would take a desktop PC about 3 minutes to crack it using a brute-force attack. If you add a capital letter, a few symbols and a phrase after it to make it "Pa$$wordiseasy123", it will take more time to crack than the history of the universe. You can see that by adding some simple variety, the job of stealing your password becomes much harder.
Here are a few easy-to-remember tips for passwords:
- Don't use a simple word or phrase, like "password" or "123456"
- Use at least 10 characters, but preferably 12 or more
- Use upper- and lower-case letters, numbers and symbols in your password
- Use something that you can remember, so you aren't tempted to write it down
- Don't write your password on a sticky note and put it on your monitor

There are many systems - such as biometrics, smart cards and single sign-on systems based on SAML and OAuth - that are more sophisticated than passwords, but many of these still use passwords as their basis. Fortunately these are becoming more ubiquitous across computer systems and websites, but the simple password still rules. Until we come up with another authentication system as simple and ubiquitous as the password, we are stuck with them. Make sure you use a little common sense when choosing yours. Here are some more tips on choosing a strong password.
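The difference those tips make is easy to quantify: a brute-force attacker's search space is the alphabet size raised to the password length. A rough sketch follows; the guessing rate is an illustrative assumption, not a measured figure:

```python
# Rough brute-force estimate: search space = alphabet_size ** length.
GUESSES_PER_SECOND = 1e10  # assumed offline attacker speed (illustrative)

def years_to_search(alphabet_size, length):
    """Worst-case years to exhaust every possible password."""
    combos = alphabet_size ** length
    return combos / GUESSES_PER_SECOND / (3600 * 24 * 365)

# 8 lowercase letters vs. 17 characters drawn from ~94 printable symbols:
print(f"{years_to_search(26, 8):.6f} years")   # cracked almost instantly
print(f"{years_to_search(94, 17):.3e} years")  # effectively forever
```

The numbers make the point of the tips above: each extra character multiplies the attacker's work by the full alphabet size, which is why length plus character variety beats either one alone.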
https://en.fasoo.com/blog/please-steal-my-password/
Apple recently shared its 2019 Environmental Responsibility Report, which details the progress the tech giant has made over the past year and alludes to the sustainability goals the company will be working towards in the years to come. The report's release follows a string of electronic-scrap-related announcements the company has made this year, including the announcement of a research lab tasked with studying new electronics recycling processes. The new Austin, Texas-based 9,000-square-foot facility will "look for innovative solutions involving robotics and machine learning to improve on traditional methods like targeted disassembly, sorting and shredding." Apple reports that the lab will work with Apple engineering teams as well as academia to address and propose solutions to today's industry recycling challenges.

Apple SVP Lisa Jackson comments, "At Apple, it's simple. We apply the same level of innovation that goes into everything we create, design, power and manufacture to make things better for people and the planet. And we make it simple for customers and partners who share our passion to join us in this work. In a time where the threats facing our planet are too great to ignore, we are demonstrating that businesses must play a vital role. We are proud to do the hard work, to make the breakthroughs, and to tirelessly search for ways to ensure the better future for our planet that we all deserve."

Additionally, the company said that over the last year it further developed its iPhone recycling robot, called Daisy, giving it the ability to process six additional iPhone models. The robot can now disassemble 15 iPhone models, and it can process 200 devices per hour. Apple says this equates to each robot processing 1.2 million devices per year. Other recycling takeaways from the report include:
- Apple recycled more than 52,000 short tons of e-scrap in 2018 and refurbished 7.8 million devices.
- The company began using recycled cobalt sourced from iPhone batteries that were removed by Daisy. - The company introduced 82 product components that together used an average of 38 percent recycled plastic. - Daisy recovers small iPhone components that contain rare earth elements.
https://hobi.com/apples-sustainability-report-outlines-recycling-goals/apples-sustainability-report-outlines-recycling-goals/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00512.warc.gz
en
0.947528
437
2.609375
3
Business Intelligence dashboards and data visualization tools are slowly becoming an integral part of all businesses. There's an exponential growth in business data and numerous choices of visualization tools available, which help users to make the most out of the data. But these tools often bombard business users and decision-makers with dashboards filled with data points and analytics, which is overwhelming and difficult to understand. To make the data and insights more meaningful, they must be presented using data storytelling. This article will take you on a journey to understand the need for data storytelling and how it impacts businesses and helps them improve analytics. The Need for Data Storytelling The data analytics process consists of the following layers: Data collection → Data preparation → Data exploration/analysis → Data modelling/presentation → Data deployment First and foremost, the data is collected and prepared for analysis. It is then used to analyze and interpret deep insights from dashboards using advanced analytics in the data analysis or exploration stage. Once the analysis is ready, it is translated into a format that is consumable by the decision-makers in the data presentation or data modelling stage. The data deployment layer, also known as the last mile of analytics, includes the collaboration and consumption of data, wherein the insights generated from data are shared with stakeholders and consumed by business users to understand and act on the data. Business users and stakeholders often fail to get insights in the way they need them and must rely on analysts to understand the data, which leads to a lot of back and forth between analytics teams and their stakeholders. As data becomes a core part of organizations, everyone in the organization needs to know how to communicate the insights from data effectively.
Data storytelling plays an invaluable role in communicating the insights and helping business users to understand and act on their data better. To understand the concept of data storytelling in detail, read this blog: The art of Data-Driven Storytelling - What is it and why does it matter. The Role of Language in Data Storytelling “Humans are not ideally set up to understand logic; they are set up to understand stories.” - Roger C. Schank, Cognitive Scientist While the data tells you ‘what is happening’, when it is presented in the form of a story, it guides you to an understanding of ‘why it is happening’. This is all possible using Natural Language Generation (NLG), which translates structured data into written narratives in plain language using Artificial Intelligence. It assesses, analyzes, and communicates data-rich insights in the form of descriptive narratives. It uses the power of language to automatically transform complex information into simple narratives and bullet points, and helps business users to quickly grasp important insights from data. Data stories help users to decipher what is being communicated and help them make informed business decisions. NLG does it exponentially faster than humans and also eliminates human error, which adds a level of certainty to the analysis. Why is Data Storytelling Important for Businesses? There are many methods to communicate insights from data. However, according to a study, a mix of words and visuals drives more engagement than text-only blogs, articles, and write-ups. Harnessing the power of visual analytics and narrative analysis of insights can therefore help businesses to make better sense of their data. Data storytelling has a host of benefits, and when used correctly can help businesses in the following ways: 1.
Turn metrics into actionable insights Storytelling with data empowers businesses to better utilize important metrics by turning them into useful and actionable insights and presenting them in the form of data stories. It analyzes the key performance indicators (KPIs) that align with their core business goals and transforms quantitative data into result-driven narratives. An interactive Business Intelligence (BI) dashboard allows businesses to select the KPIs and uses visuals, charts, and graphs to form a narrative that brings the data to life. 2. Improve processes with plotting Data stories have a definite plot with a well-constructed beginning, middle, and end. Data storytelling tools, templates, and platforms already have preset themes and formats, which change the story and visualization based on the input data and drive the narrative in a way that the message is conveyed in the most effective way possible. Businesses can outline the plot and create a framework by taking into account the primary aim of the data-driven story or report and aligning it with their overall organizational strategy and goals. Populating the plot with the most relevant visualizations and KPIs can help businesses to increase the productivity and efficiency of their processes. 3. Boost communication and engagement It often gets difficult for companies to maintain an engaging conversation with their clients and stakeholders for a long time. The key to great communication and engagement is not just compiling all the information in a presentation, but conveying it in a way that is engaging and easy to understand for everyone. With data storytelling, companies can collect and analyze data and present insights in the form of engaging and easy-to-comprehend narratives and visuals. This way, they can better demonstrate the value of their services and build a lasting relationship with their clients and stakeholders. 4.
Create visual appeal With the attention span of humans constantly decreasing, it has become more important than ever to create visual appeal to capture their interest and make an impact. Data-driven storytelling uses visuals such as charts, graphs, heatmaps and narratives for communicating the insights to the end-user. The visuals and narratives enable clients to visualize the story behind their results, which can help in enhancing client reporting. Data-driven Storytelling Examples with Phrazor Phrazor leverages Natural Language Generation to help enterprises create unique AI-generated data stories. It analyzes the data, creates visualizations, augments the visuals using narratives, and presents the insights in the form of data stories. Let's take a look at a few narrative storytelling examples by Phrazor. 1. Talent Acquisition Report The talent acquisition department deals with various metrics related to the distribution of vacancies, number of open positions, hiring costs, and efficiencies of hiring teams. Data-driven stories help talent acquisition teams to effectively keep track of their performance and make necessary improvements to the efficiency and quality of hires. Take a look at the data story. 2. Online Store Performance Report Retail and e-commerce companies need to keep a constant check on their online store performance to drive their marketing and sales campaigns. Using AI-generated stories, they can track the critical metrics and KPIs about page views, competitor analysis, and revenue trends, presented in plain natural language. Here's how the data-driven story looks: To auto-generate such data stories and reports, check out Phrazor templates available for a range of business functions and industries. Data storytelling can undoubtedly help businesses to make the most of their data and communicate it better. Phrazor combines the power of language and storytelling with data to create compelling stories from complex datasets in real-time.
To know more about how Phrazor can help you become a data-driven business, get in touch with us.
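Commercial NLG engines like the one described above are far more sophisticated, but the core data-to-text idea can be illustrated with a minimal template-based sketch. The metric names and figures below are invented for illustration:

```python
def narrate_kpi(name, current, previous):
    """Turn a single KPI reading into a plain-language sentence."""
    change = current - previous
    pct = (change / previous) * 100 if previous else 0.0
    if change == 0:
        return f"{name} held steady at {current:,}."
    direction = "rose" if change > 0 else "fell"
    return f"{name} {direction} {abs(pct):.1f}% to {current:,}."

# A tiny "data story" assembled from structured data: KPI -> (current, previous).
metrics = {"Page views": (120_000, 100_000), "Revenue": (95_000, 100_000)}
story = " ".join(narrate_kpi(k, cur, prev) for k, (cur, prev) in metrics.items())
print(story)  # Page views rose 20.0% to 120,000. Revenue fell 5.0% to 95,000.
```

Real systems add trend detection, anomaly highlighting, and varied phrasing, but the pipeline is the same: structured metrics in, narrative sentences out.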
This is how FONES may help you improve your ICT Resilience Since the focus of FONES is not cyber security in general, but ensuring Switzerland remains supplied with crucial goods, from food and water through medicine to power, the standard has a slightly different focus and approach than most other cyber security standards. It isn't meant to compete with these, but rather to provide entry-level information and guidance while aiming at a high degree of protection. Its intended audience must, after all, be relied on by everyone else, even in crises. There are nine sectors of critical infrastructure that FONES is concerned with: Energy, Finances, Food and Water, Therapeutic Products, Transport, ICT, Waste Disposal, and Public Health, Safety and Administration. Information and communications technology as a sector is distinct from the ICT that the standard is about, and refers to service providers and other companies who work in the field. However, all fields are dependent on computers and networked devices for their operation – from the software used to manage the electrical grid, through industrial automation, to milking robots in the agricultural sector, digital threats can potentially affect all areas. Since each of the sectors has different threats and risks to contend with, FONES aims to provide implementation guidelines for each of the different fields in order to spur adoption. These additional standards are created in close cooperation with organizations in the field. Currently, they have produced three such documents: one for water suppliers in cooperation with the Swiss Gas and Water Industry Association, one for the food sector in cooperation with two retailers' associations, and one for the energy sector in cooperation with the Association of Swiss Electrical Companies. Implementation of these standards is voluntary, in the sense of an industry self-regulation, assisted by FONES.
In all cases, the standard is aimed at senior management and ICT officers, and presents a high-level view of the necessary steps. Before going into the steps necessary to increase resilience, the standard explains that risk assessments and risk management are a prerequisite. Only if you know what you need to protect against and what your threat model is can you take the right steps. It is also clarified that there is no such thing as absolute security and that resilience is also about coming back from being hit with an attack. Lastly, before getting into the actual model of the standard, there is also a reminder that effective cybersecurity measures answer the following questions: The actual measures of the standard are then organized in five functions: Identify is about comprehensively identifying all relevant systems, including external ones and ones in the supply chain, and having processes to keep that list up to date. Similar to the risk analysis, this is a necessary prerequisite for defensive measures to be most effective. It's not unusual in a network assessment or system review to find an old backup or a forgotten system that can then be used to attack further targets. Such situations show the failure of the Identify steps, as the risk wasn't dealt with because it wasn't properly tracked. Protect contains most of the steps commonly thought of as cybersecurity, but at a high, process-oriented level. Importantly, the measures in this function include both awareness and training and extend to suppliers and other third parties. For many companies, it might not be possible to audit their suppliers or mandate them to have certain training, but for the operators of critical infrastructure, these are important measures, in particular now that supply chain attacks are rising. Detect really is what it says on the tin. Watch for anomalies, monitor your systems continuously, make sure you are able to notice a breach fast.
Respond and Recover bring in elements from business continuity planning. These functions stress the importance of having plans for crises and the value of organized communications in bad situations. Not just internally for an efficient and effective response, but also externally, towards the public. This sort of PR is important for any company but is highlighted in these critical areas where poor public perception can lead to knock-on effects from a frightened public. Another important part of the Recover function is to ensure that learning from previous recoveries and improvement happens. On top of the measures in these functions, the FONES Minimum Standard also comes with an assessment system for judging your maturity. Tasks are scored from 0-4 with the following meanings:

| Score | Meaning |
|-------|---------|
| 1 | Partially implemented, not fully defined and adopted |
| 2 | Partially implemented, fully defined and adopted |
| 3 | Implemented, fully or largely implemented, static |
| 4 | Adaptive, implemented, continuously reviewed, improved |

Obviously, while a 4 is ideal, the risk management of a business determines the goals and targeted levels. An assessment using this tool does not generally identify specific vulnerabilities in a company's ICT infrastructure but weaknesses in their security posture and where they might or might not yet have reached their maturity goals. A full review would be a fairly major undertaking, as it would involve reviewing processes and specific technical steps affecting the company in many areas. However, as the process of implementing all measures would likely take a considerable amount of time, such a review could likely be done in multiple steps, as well. All in all, the FONES Minimum Standard for Improving ICT Resilience is a high-level overview of all the necessary measures a company may need to take to handle cybersecurity risks. It is very good for setting goals and understanding why all these measures are important, without overwhelming the reader with technical details.
At the same time, that makes it too vague to derive a specific implementation from, and the goals it sets are high enough that they might not be feasible for most SMEs, which is fitting given its target audience and only somewhat limits its usefulness for security experts. The FONES and their collaborators from the various private companies have come up with a well-researched document that can certainly help guide companies to greater resilience.
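To make the 0-4 maturity scale concrete, here is a minimal, hypothetical self-assessment roll-up over the standard's five functions. The task scores and the target level are invented for illustration, not drawn from FONES:

```python
# Hypothetical self-assessment: each FONES function maps to task scores (0-4).
assessment = {
    "Identify": [3, 2, 4],
    "Protect":  [2, 2, 3, 1],
    "Detect":   [1, 2],
    "Respond":  [3, 3],
    "Recover":  [2, 4],
}

def maturity_report(scores, target=3.0):
    """Average the task scores per function and flag gaps against the target."""
    report = {}
    for function, tasks in scores.items():
        avg = sum(tasks) / len(tasks)
        report[function] = (round(avg, 2), avg >= target)
    return report

for function, (avg, on_target) in maturity_report(assessment).items():
    status = "on target" if on_target else "below target"
    print(f"{function}: {avg} ({status})")
```

As the standard notes, the targeted level per function should come out of the organization's own risk management, not a fixed number as used here.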
Thank you for Subscribing to CIO Applications Weekly Brief Key Use Cases of Emotional AI in the Mental Healthcare Profile If the integration of emotional AI can offer much-needed assistance while also reducing the country's medical costs, then it is advantageous to invest in this technology as soon as feasible. Fremont, CA: Humans have frequently questioned the emotional range of future AI. In one film, the protagonist falls in love with an intelligent computer operating system personified by a female voice called Samantha, who does not reciprocate, causing heartbreak; in another, the protagonist ends up restarting life on Earth after a difficult journey, guided by his emotions and wit to comprehend things around him. However, the image of emotional AI is becoming clearer. Emotion recognition technology is also known as facial coding. AI can monitor a driver's emotional condition and level of tiredness using computer vision technologies. It might be useful in the workplace to assess the anxiety and stress levels of employees who have demanding tasks. • AI as Medical Assistants By analyzing patient records and providing reports based on the data, performing administrative responsibilities, and even aiding with diagnosis or intervention, emotion AI can free clinicians to spend more time with their patients. This allows patients to receive better and more personalized care based on their requirements, medical conditions, and preferences without having to reveal anything to the examiner. In addition, using speech analysis, the program can assist patients with mental health difficulties. Their emotions can also be better addressed and managed, even in a highly stressful or traumatic situation. • AI as Emotional Support A 'nurse bot' not only reminds elderly patients on long-term medical programs to take their medications but also communicates with them daily to assess their general health.
In addition, they make an effort to connect with those who have just been in an accident or who are depressed, to lift their spirits. Furthermore, it provides a clear picture of how a patient reacts to new treatments. • AI as Chatbot Seeking medical care can be difficult in a society where mental illness is stigmatized and taboo. Where physical accessibility is not practical, AI may help bridge the gap via chatbots. Researchers observed that individuals prefer talking to avatars over talking to therapists. Not only that, but these chatbots enable individuals to discuss their problems at any time of day or night, from anywhere in the world.
If you follow technology news, then it's almost impossible to avoid some mention of “the Internet of Things” or IoT, for short. With the proliferation of smart home devices ranging from lighting to garage door openers to thermostats to cameras, and the use of other smart devices in enterprises, the challenges and growth in IoT can be very difficult to pin down. Usually, when conjuring an image of a botnet poised to mount a massively distributed Denial-of-Service (DDoS) attack, the thought of a small army of malware- and virus-infected PCs and servers controlled by some shadowy, anonymized command and control (C&C) server comes to mind. However, the largest botnets and associated DDoS attacks recently have not been sourced from compromised laptops, servers, and smartphones. Attackers have found that it's much easier to compromise a vulnerable IoT device than to trick a user into clicking a malicious link or downloading a malware-infected file. The Mirai and subsequent Persirai botnets were composed almost entirely of compromised IoT devices which used the popular embedded-Linux distribution, BusyBox. These botnets were built via self-replicating (worm-like) malware which, once it infected a device, scanned the Internet for other vulnerable hosts. These botnets were not built overnight, as data shows the scans increasing over time with no attacks immediately thereafter. This infection pattern has the result of keeping the IP addresses of the infected IoT devices, or Thingbots, off many ISP and threat feed blacklists. Other vigilante Thingbots have also surfaced, such as Hajime, which seeks to inoculate IoT devices that use default administrator usernames and passwords. These botnets are built in the same way, but the vigilante attacker merely changes the username and password and leaves a note behind. These activities, while helpful, still do damage by locking legitimate users out of their own devices.
Patching these devices is often difficult, if not impossible, if the IoT device manufacturer is not actively maintaining the firmware. While a server or laptop running a popular operating system is easily updated, these potential Thingbots have tightly-controlled update mechanisms (if any at all). Attempting to independently update the embedded-Linux BusyBox could easily result in bricking the IoT device, as the dependencies between hardware and software are often quite brittle. To prevent IoT devices in your network (at home or in the enterprise) from becoming another Thingbot, follow a few simple steps.
- Know what's on your network. Maintain an asset inventory.
- Seek reputable IoT vendors (e.g. avoid bargain-bin Internet-connected security cameras.)
- Disable UPnP in home office routers.
- Avoid the use of port-forwarding or any-any firewall ACLs.
- Of course, never use default username/password combinations.
A few pro tips strictly for those in the enterprise:
- Monitor outbound traffic, highlighting any traffic sourced from IoT devices.
- Know the firmware levels in the asset inventory.
- Enable 2-factor authentication in addition to privileged user access controls.
- Use advanced WiFi security protocols to authenticate endpoints, where possible.
- Demand better security features and defaults from IoT vendors who market solutions to the enterprise.
With DDoS attacks escalating in an ongoing effort by attackers to overwhelm the infrastructure of enterprises and service providers alike, it is imperative that we all do our best to secure all the would-be Thingbots in the networks we maintain. A safer Internet is everyone's responsibility, and password maintenance tops the list of things we can all easily do to further that goal.
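The inventory-related tips above can be sketched in code. The following is a minimal, hypothetical audit over an asset inventory; the device records, default credential list, and minimum firmware versions are illustrative only, not drawn from any real product:

```python
# Hypothetical asset-inventory audit: flag IoT devices still using
# default credentials or running firmware below a known-good version.
DEFAULT_CREDS = {("admin", "admin"), ("root", "root"), ("admin", "1234")}
MIN_FIRMWARE = {"camera": (2, 1), "router": (1, 8)}  # illustrative versions

inventory = [
    {"name": "lobby-cam", "type": "camera", "creds": ("admin", "admin"),
     "firmware": (2, 0)},
    {"name": "edge-rtr", "type": "router", "creds": ("netops", "s3cret!"),
     "firmware": (1, 9)},
]

def audit(devices):
    """Return a list of (device, issue) findings for the inventory."""
    findings = []
    for d in devices:
        if d["creds"] in DEFAULT_CREDS:
            findings.append((d["name"], "default credentials"))
        minimum = MIN_FIRMWARE.get(d["type"])
        if minimum and d["firmware"] < minimum:
            findings.append((d["name"], "outdated firmware"))
    return findings

for name, issue in audit(inventory):
    print(f"{name}: {issue}")
```

In a real deployment, the inventory would be populated from discovery scans rather than hand-written records, but the point stands: you can only audit what you track.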
There have been many misconceptions about green development and the ICT industry, which are holding back progress on the ultimate goal of running completely sustainable businesses, for the benefit of the planet. This was the view of Huawei Carrier Business Group CMO Dr Philip Song, who highlighted the five biggest misconceptions about green development at the Huawei Day0 event. Green development has always been an ICT industry goal, but misconceptions have arisen over the last few years about the industry. The first major misconception is that the ICT industry will contribute greatly to increasing carbon emission levels, but according to a report from the Global Enabling Sustainability Initiative (GeSI), the ICT industry will contribute only 1.97% of carbon emissions by 2030 as technologies evolve. Song stressed how innovations in the ICT industry will enable other sectors to “significantly” reduce their carbon footprints. In the next eight years, the ICT industry has the potential to reduce global emissions by 20%, which is 10 times the carbon emissions of the ICT industry itself. Effectively, Song argued, this will “decouple” economic growth from emission growth. Huawei calls this ICT enablement, and it increases a company's 'carbon handprint' as opposed to its carbon footprint. Song pointed to an example of the ICT industry reducing carbon emissions through data back-ups. To upload and back up 100 petabytes of data, firms would upload onto a large disc array and physically transport it in a lorry, resulting in 300kg of carbon emissions. But using all-optical transmission technology in cloud computing, only 20kg will be emitted. If all data were backed up on the cloud using optical transmission, over 120 million tons of CO2 emissions could be saved annually, which is equal to planting 200 million trees, enough to afforest the whole of Europe.
The second misconception is an overemphasis on supply chain emissions from the production of network equipment such as base stations. Only 2% of carbon emissions come from manufacturing; however, 80% to 90% of emissions come from using the equipment. Song argued the focus should be on the use of equipment through innovative technologies. Another major misconception is the belief that solar and renewable energy power sources are the ultimate goal of being truly sustainable. There are incredible challenges in taking a power-intensive industry such as ICT to being 100% reliant on renewable energy, which is presently not sustainable, said Song. However, ICT supply chains are actually greener than assumed, and this is projected to improve. According to a GSMA report, the proportion of energy used by ICT supply chains will be 45% renewable (30% solar and 15% wind) by 2030. Song noted that more work is needed on the consumer side, as only 1% of telecom tower power sources were renewable in 2021. Song presented a three-layer architecture strategy to tackle this larger problem, involving the use of better technology in data centres, optical cables, and migrating users to 4G and 5G networks. To illustrate his point, Song showcased how if 10% of 2G/3G users migrated to 4G/5G, a 2% energy saving could be achieved. To further illustrate his point, he noted that with one kilowatt-hour of energy, 30 HD movies can be downloaded over a 3G connection, whereas 5G can be used to download 5,000 movies of the same quality. Overall, consumers will see the benefits of a faster connection and operators can save more energy. The fourth misconception is believing that making each piece of telecoms equipment efficient will result in overall energy efficiency, a line of thinking Song said is false. He argued this narrow-minded belief prevents companies from thinking of the wider picture for long-term sustainability.
Huawei proposed a jointly defined, unified energy indicator system to drive energy flows in tandem with information flows, which will result in energy savings for an entire network, instead of just a certain part. For example, if firms only considered the energy efficiency of wireless master equipment, they wouldn't be able to effectively complete the planning of another segment of the network, argued Song. The final misconception is that energy saving will massively hinder network performance. Song conceded there will be a small sacrifice in peak rates and some other indicators for vital sustainability targets; however, it is negligible. Operators can, for example, shut down some operations at night when data traffic is low to save power; this will admittedly lower download speeds, but only marginally, and still yield a good service for the consumers still active. Song concluded by stating everyone from all walks of society can benefit from a digitally enabled green future. The future should be shaped by more data “bits” and fewer “watts”.
How social engineering caused one of the biggest cyber attacks in 2020. Unpicking the high profile Twitter cyber attack Did you ever think that anyone would be able to hack Twitter? The major social cyber attack of 2020 proved that even high profile accounts aren't invincible against cyber criminals' devious tactics. Last July, over 130 Twitter accounts, including those of Barack Obama, Elon Musk, Bill Gates and Kanye West, were hacked in a sophisticated phishing attack. Twitter branded the incident a ‘coordinated social engineering attack’ and had to temporarily suspend all tweets as the incident was investigated. Like a digital police chase, the action unfolded fast. Let's take a closer look at what happened… On 15 July 2020, some Bitcoin-related Twitter accounts began requesting donations in cryptocurrency, in exchange for double the amount back. They said they wanted to ‘give back’ to the community and were aiming to trick naïve users into transferring funds. The hoax tweet soon spread to high profile celebrity accounts such as Kanye West and Kim Kardashian, while also targeting political and tech figures such as Joe Biden, Barack Obama, Elon Musk and Bill Gates. Twitter took action right away, temporarily suspending all verified users from tweeting. But what they failed to realise was that the cyber attackers had already gained access to Twitter's own internal admin tools. What caused it? When analysing the aftermath of a cyber attack, sometimes it's obvious what has happened. But often, because of the increasing sophistication of cyber criminals, the exact technique can be difficult to identify. In the case of the 2020 Twitter attack, cyber criminals orchestrated a series of social engineering attacks against Twitter staff. A phone spear phishing attack allowed hackers to gain access to employees' credentials. Once they had these, they could then use those credentials to impersonate staff and target additional employees.
These staff members had access to the internal administrative tools that Twitter can use to recover and reset accounts. Once they had access to these, they could reset the password for anyone! What would be the role of a cyber security professional in this incident? During the incident, Twitter responded by immediately locking down the affected accounts and removing tweets posted by the attackers. They also disabled the ability for verified accounts to send new tweets. In the wake of an incident like this, a Digital Forensics Analyst might be called upon to urgently respond to the case and act as a trustworthy agent to determine the reliability of evidence. They act as the detective of the cyber security world, piecing together information to understand what happened and how to move forward amidst chaos and panic. Think you could be a good Digital Forensics Analyst? Take our personality trait test to see which cyber security role would suit you best! What can we learn from this? When we think of a cyber attack, we often think of technological flaws and faults. The Twitter attack reminds us that cyber criminals will constantly exploit human vulnerabilities as well as system weaknesses. Remaining vigilant, alert and always questioning the credibility of a call, text, social post or email is crucial to protect yourself and businesses. Experience how cyber attacks just like this unfold with CyberStart Game! The challenges in CyberStart are based on real-world cyber security incidents and allow you to get hands-on experience in dealing with the aftermath of a cyber attack. Whether you want to be at the forefront of a digital investigation or improve your digital skills to be aware of sophisticated phishing attacks, CyberStart can help you progress with fun online learning games and challenges!
As DDoS attacks are among the most damaging experiences that can happen to online businesses, it is crucial to take the necessary steps to protect against them. Research states that small businesses can suffer damages of up to $120,000 per DDoS attack, while enterprise-level attacks can cost as much as $2 million. DDoS attacks are relatively simple to pull off, making them appealing to cyber criminals around the globe. As a result, experts believe that the total number of DDoS attacks will double from the 7.9 million seen in 2018 to over 15 million by 2023. Whether you're a small business or a huge multinational conglomerate, your online services—email, websites, and anything that faces the internet—can be slowed or completely stopped by a DDoS attack. In this article, we list the most common types and offer resources to protect against DDoS attacks. Most common DDoS attacks Fundamentally, DDoS attacks are a type of brute-strength attack. They overwhelm your services through sheer numbers. However, they are growing more sophisticated as technology improves. There are three basic categories of attacks:
- Volumetric attacks: High traffic is used to completely fill up a network's bandwidth
- Protocol attacks: Server resources are exploited and overwhelmed
- Application layer attacks: Specific vulnerabilities in web applications are exploited to crash the server entirely
There are various sub-types of distributed denial of service attacks that fall into these categories, depending on how much traffic is generated and what vulnerabilities are exploited. The most common ones include: HTTP flood: This application layer attack uses specially-crafted GET or POST requests to overwhelm the service. This type of attack often uses interconnected computers that have been taken over with the aid of malware. Essentially, the requests force the application to work as hard as possible for every single request, leading to a crash.
UDP flood: Instead of targeting something specific, this volumetric attack floods random ports on a network with UDP packets. The network host looks for a non-existent application there. When nothing is found, it replies with a “destination unreachable” packet. The result is a network that’s too busy responding to nonsense to reply to legitimate requests. SYN flood: TCP connection sequences involve a “three-way handshake” where hosts and requesters must bounce several responses back and forth. In SYN attacks, the requester never follows up on the host’s response, leaving the host system bound up waiting for acknowledgments that will never come. Ping flood: The oldest type of DDoS attack, this assault sends as many pings as possible. The goal is simply to use up as much of a server’s bandwidth as possible. Slowloris: Instead of trying to push as much traffic through a door as possible, this HTTP attack aims to stand in the door and block it as long as possible. The Slowloris program opens a server connection and sends partial HTTP requests to keep it open as long as possible. Doing so prevents the server from opening other connections. Famous DDoS attacks in recent history As mentioned earlier, even the largest businesses aren’t immune from denial of service attacks. Here are three of the most famous DDoS attacks in recent years. Learning from these experiences can create awareness and point you in the right direction to protect against DDoS attacks and avoid problems in the future. The AWS customer attack of 2020 One of the most recent large DDoS attacks targeted an AWS customer. The attack reportedly reached a maximum rate of 2.3 terabits per second — the second highest rate ever reported. The AWS service had some protections in place that helped mitigate the attack, but it was still a devastating assault. The GitHub attack of 2018 Another large attack in recent history was aimed at GitHub. The assault reached a maximum rate of 1.3 terabits per second and knocked GitHub offline for 20 minutes.
Luckily, the famous code-management service had a DDoS protection system in place, or the attack could have lasted much longer and had even worse consequences. The Google attack of 2017 Even internet giants like Google are vulnerable to DDoS attacks. In October 2020, the company disclosed an attack from September 2017 that peaked at an astonishing 2.54 terabits per second. This record-breaking attack – the largest one on record to date – was part of a campaign that lasted six months. Google’s systems were capable of handling the load, but it may be the only company in the world that could. Easy steps to protect your website against DDoS attacks Unless you have systems like traffic profiling and early threat detection already in place, it can be challenging to spot DDoS attacks. More often than not, you only find out when your website and services actually crash – which unfortunately is a little too late. You can’t prevent attacks entirely, but you can mitigate and protect yourself against the consequences of these assaults. Here are four effective ways to guard your website against DDoS attacks. Bullet-proof your network hardware configuration You can help prevent a DDoS attack by making a few simple hardware configuration changes. For instance, you can configure your firewall or router to drop incoming ICMP packets or block DNS responses from outside your network (by blocking UDP port 53). This will help protect against certain DNS and ping-based volumetric attacks. Deploy a DDoS protection service Your best chance of mitigating a DDoS attack is to deploy a DDoS protection service. Companies that heavily rely on digital services are particularly vulnerable, so extra help is most welcome. For companies like this, it is essential to use multi-layered protection, because some DDoS attacks may actually hide other, more dangerous security breaches.
At Mlytics, we use our own advanced DDoS protection technology and provide an easy-to-implement DDoS protection service that offers always-on, multi-layered protection against DDoS attacks. Plan for DDoS attacks in advance Planning for attacks before they happen enables you to respond quickly, before they actually start harming your website. A proper cyber security plan includes a list of employees who will respond to the attack. It should also outline how the system will prioritize resources to keep your most important apps and services online, which could keep your business from crashing. Finally, you can also plan how to contact the ISP that’s carrying the attack traffic, since they may be able to help stop it entirely. One of the most basic steps to protect against DDoS attacks is to make your hosting infrastructure “DDoS resistant”. Simply put, this means that you prepare enough bandwidth to handle traffic spikes that may be caused by cyber attacks. Do note, however, that purchasing more bandwidth alone is not a complete solution for mitigating DDoS attacks. Buying more bandwidth raises the bar attackers have to overcome before they can launch a successful DDoS attack, but you should always combine it with other mitigation techniques to fully safeguard your website. If you’re a business with an online presence, you should take steps to be protected at all times. A complete security solution that takes into account every potential threat can safeguard you and your business from dangerous security threats and cyber criminals. A DDoS attack can cause data breaches, take down your services, interrupt your daily operations, and even lead to the end of your company’s existence. If you’re working towards a comprehensive security solution, a DDoS protection service that can be easily implemented into your cloud infrastructure is imperative to help prevent web attacks.
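The “traffic profiling and early threat detection” mentioned earlier can be approximated very simply: compare current request volume against a rolling baseline and alert on large deviations. The sketch below is illustrative only — the window size and threshold multiplier are arbitrary assumptions, and real detection systems profile many more signals than raw request rate.

```python
from collections import deque

class SpikeDetector:
    """Flags a traffic sample that exceeds `factor` times the
    rolling average of the previous `window` samples."""
    def __init__(self, window=60, factor=5.0):
        self.history = deque(maxlen=window)  # recent requests/sec samples
        self.factor = factor                 # alert threshold (assumption)

    def observe(self, requests_per_second):
        # Baseline is the mean of past samples, before adding this one.
        baseline = (sum(self.history) / len(self.history)
                    if self.history else None)
        self.history.append(requests_per_second)
        if baseline is None:
            return False  # not enough data to judge yet
        return requests_per_second > baseline * self.factor
```

Feeding the detector one sample per second gives an early warning well before services actually crash, which is the whole point of profiling traffic in advance.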
Cynet’s 24/7 MDR with the latest security updates and reports. Download the CyOps Solution Brief. Written by: Eran Yosef njRAT, also called Bladabindi, is a remote access trojan used to control infected machines remotely. Because of its availability and its techniques, njRAT is one of the most widely used RATs in the world – it was first detected in 2013. The njRAT trojan is built on the .NET framework. This RAT gives hackers the ability to control the victim’s PC remotely. njRAT allows attackers to activate the webcam, log keystrokes, and steal passwords from web browsers. Bladabindi also gives hackers access to the command line on the infected machine. It allows the malicious actor to kill processes as well as remotely execute and manipulate files. On top of that, njRAT is capable of manipulating the system registry. When a PC is infected, the Bladabindi trojan collects several pieces of information about the PC it got into, including the computer name, operating system number, country, usernames, and OS version. Moreover, Bladabindi can target cryptocurrency wallet applications and steal cryptocurrency from PCs. For example, it can grab a bitcoin wallet and access credit card information, which is usually stored in cryptocurrency apps as a way to purchase cryptocurrency. After infecting a computer, the malware copies itself into “TEMP,” “APPDATA,” and “USERPROFILE.” It can also copy itself into .exe files to ensure that it will be activated every time the victim switches on their computer. The njRAT trojan has a few tricks up its sleeve to avoid detection by antivirus software. For example, it uses multiple .NET obfuscators to obstruct its code. Another technique that this malware uses is disguising itself as a critical process, blocking the user’s ability to shut it down and making njRAT hard to remove from infected PCs.
Bladabindi RAT can also deactivate processes that belong to antivirus software, allowing it to stay hidden. njRAT also knows how to detect if it is running on a virtual machine, which helps the attackers set up countermeasures against researchers. To spread, njRAT can detect external hard drives connected via USB. Once such a device is detected, the RAT will copy itself onto the connected drive and create a shortcut. Anti-Virus/AI Mechanism – Detection Engine The “Detection Engine” alerts refer to any alert generated by Cynet’s Next-Gen Antivirus or by Cynet’s machine learning mechanism. One such alert triggers when Cynet’s AV/AI engine detects a malicious file that was dumped on the disk. Another triggers when Cynet’s AV/AI engine detects a malicious file that was loaded into memory. Threat Intelligence Detection – Malicious Binary – Blacklist – Fast Scan Mechanism This type of alert is based on Cynet’s threat intelligence database, which contains millions of IoCs. The feed of our threat intelligence database includes third-party information (such as VirusTotal) as well as information derived from attack investigations, pre-defined suspicious patterns, and auto-analysis of large, intricate patterns. This alert triggers when Cynet detects a file that is flagged as malicious in Cynet’s internal threat intelligence database. Malicious Binary – ADT Mechanism This alert triggers when Cynet detects a file that is flagged as malicious in Cynet’s EPS (endpoint scanner) built-in threat intelligence database. This database contains only critical IoCs (such as IoCs of ransomware, hacking tools, etc.). When we first open njRAT, it asks us to choose the port we want the software to listen on. [We selected 4444] After this, we need to build our “server” file – a .exe file that will open a remote connection to the victim’s machine. If you find a value under HKEY_CURRENT_USER\Software\ whose name is a string of 32 characters and digits, you can be sure that it is njRAT.
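The registry indicator just described — a value under HKEY_CURRENT_USER\Software\ whose name is 32 characters and digits — lends itself to a simple heuristic check. The sketch below is a hedged illustration: the 32-character convention is a commonly reported njRAT trait, not a guarantee, and real triage would enumerate the live registry rather than take a pre-collected list of names.

```python
import re

# njRAT campaign identifiers are commonly reported as 32-character
# alphanumeric strings used as registry value/key names.
NJRAT_NAME = re.compile(r"^[A-Za-z0-9]{32}$")

def suspicious_registry_names(names):
    """Return the subset of registry value names matching the
    32-character alphanumeric pattern associated with njRAT."""
    return [n for n in names if NJRAT_NAME.match(n)]
```

Any hit would then be correlated with other indicators (Run-key persistence, uncommon TCP ports) before being treated as an infection.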
As you’ve entered all the information you want, hit the “build” button, and it creates the .exe file. Now, we need to deliver the server file to our victim. The njRAT trojan uses quite a few attack vectors to infect its victims. For example, the malware is known to target Discord users as part of spam campaigns. Another method is through a compromised website that tricks users into downloading a fake software product update, which in turn installs the njRAT malware to the PC. Once we deliver njRAT to our victim, and it infected the target device, the malware starts its malicious activity. A note will pop up in the victim machine and then all the options we will be available to us. If we want a remote desktop connection to the victim machine – we press remote desktop, and then it will open a window with access: To gather passwords from the web browser of the victim machine – we press get the password: If we want to open a file on the remote victim, press on Run file and explorer will open to choose a file to run. There are a few more options like: “njRAT is a remote access tool (RAT) that was first observed in 2012. It has been used by threat actors in the Middle East.” Application Window Discovery njRAT gathers information about opened windows during the initial infection. njRAT can launch a command shell interface for executing commands. Credentials from Web Browsers njRAT has a module that steals passwords saved in victim web browsers. Custom Command and Control Protocol njRAT communicates to the C2 server using a custom protocol over TCP. njRAT uses Base64 encoding for C2 traffic. Data from Local System njRAT can collect data from a local system. Disabling Security Tools njRAT has modified the Windows firewall to allow itself to communicate through the firewall. File and Directory Discovery njRAT can browse file systems using a file manager module. njRAT is capable of deleting files on the victim njRAT is capable of logging keystrokes. 
Modify Registry: njRAT can create, delete, or modify a specified Registry key or value. Peripheral Device Discovery: njRAT will attempt to detect if the victim system has a camera during the initial infection. Registry Run Keys / Startup Folder: njRAT has added persistence via the Registry key HKCU\Software\Microsoft\Windows\CurrentVersion\Run\. Remote Desktop Protocol: njRAT has a module for performing remote desktop access. Remote File Copy: njRAT can upload and download files to and from the victim’s machine. Remote System Discovery: njRAT can identify remote hosts on connected networks. Replication Through Removable Media: njRAT can be configured to spread via removable drives. Screen Capture: njRAT can capture screenshots of the victim’s machines. System Information Discovery: njRAT enumerates the victim’s operating system and computer name during the initial infection. System Owner/User Discovery: njRAT enumerates the current user during the initial infection. Uncommonly Used Port: njRAT has been observed communicating over uncommon TCP ports. Video Capture: njRAT can access the victim’s webcam. We have uploaded the “server” file to VT: njRAT on VT: To clean up an infected host, it is crucial to revert each of the steps taken by the payload of the attack. • Clean the Registry of any manipulated values. • Delete malicious child process instances from memory. • Block network traffic to any domain contacted throughout the attack. • Use Cynet’s built-in remediation options to delete the file and prevent it from spreading over the network. • Use Cynet’s built-in remediation option to disconnect the host from the network. • Investigate the incident according to the organization’s policy. The Cynet CyOps team is available to clients 24/7 for any issues, questions, or comments related to Cynet 360. For additional information, you may contact us directly at:
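Since njRAT Base64-encodes fields in its custom TCP protocol (per the technique list above), an analyst examining captured C2 strings can often recover readable data by splitting on a delimiter and attempting a decode of each field. The sketch below is illustrative only — the “|” delimiter and message layout are assumptions, since field separators vary between njRAT builds.

```python
import base64
import binascii

def try_decode_b64_fields(raw, delimiter="|"):
    """Split a captured C2 string on a delimiter (an assumption —
    delimiters differ between builds) and attempt strict Base64
    decoding of each field, keeping undecodable fields as-is."""
    out = []
    for field in raw.split(delimiter):
        try:
            out.append(base64.b64decode(field, validate=True)
                             .decode("utf-8"))
        except (binascii.Error, UnicodeDecodeError):
            out.append(field)  # not valid Base64/UTF-8; keep raw
    return out
```

Run against a packet capture, this turns opaque beacon strings into hostnames, usernames, and window titles an analyst can pivot on.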
Whether it’s schools, hospitals, or even critical infrastructure, ransomware is causing destruction in almost every sector of the economy. With each passing year, ransomware is becoming more widespread and more dangerous. and knowing how to prevent ransomware attacks is becoming more essential. Making matters worse, a mistake by almost anyone in an organization can lead to a ransomware infection. This means a basic understanding of cybersecurity is rapidly becoming as essential as knowing how to use email. Strong operational procedures are important, but hackers are continuously changing their methods. The best means of protecting against ransomware attacks is if all network users have a strong understanding of how and why ransomware attacks happen. This article will look at some of the most common ransomware attack vectors, and how to stop them. Protecting Against Ransomware Attack Vectors When building up your capacity to prevent ransomware attacks, it’s important to get your priorities straight. Some types of attacks are more common than others, so it’s best to start with the highest risk areas, and then work your way down. Of course, the ideal is to implement all of the upgrades at the same time, but that’s not always possible in every case. Social engineering attacks, also known as phishing attacks, are the most common way that a network gets a ransomware infection. Phishing works by exploiting human errors, making it hard to defend against using traditional cybersecurity measures. Phishing attacks typically work by tricking a network user into clicking a malicious link or downloading a malicious email attachment. Viruses have come a long way over the years— viruses are no longer delivered in .exe files, but can also appear in .pdf attachments and other less suspicious formats. Most of us know by now that we shouldn’t open strange attachments from unknown emails, but hackers are finding ways around that. 
In some cases, hackers will take control of the email account of a colleague in preparation for an attack. They may study the patterns of communication and even the style of writing of the person behind the account, and then use it to deliver an email or message that appears as part of a normal conversation. Hackers may use other tricks, such as SEO bombing to catch people searching high-volume terms, like those relating to holidays or major sporting events. Fake websites that look identical to the real ones may also steal usernames and passwords. This is one reason it’s very important to have separate passwords and security practices for work and personal use. Phishing is a complex subject, and one that highlights the need for more communication between cybersecurity experts and other network users. The best way to prevent phishing is to make sure everyone using a network is up to date with the best practices for handling links and attachments. A good way to accomplish this is to have monthly or quarterly meetings with cybersecurity experts to learn about the latest phishing trends, and how to identify and avoid attacks. Hiring employees who have some understanding of cybersecurity can also help to avoid mistakes. Cybersecurity may be the next “must have” item for your resume! - Stay up to date with the latest phishing trends. - Have a program in place for educating network users with anti-phishing techniques. - Don’t use the same passwords for personal and business accounts! - Use tools for detecting malicious links and attachments. Remote Desktop Protocol Remote Desktop Protocol (RDP) is another of the most common ransomware attack vectors. It’s not hard to see why: RDP basically allows someone to use a computer remotely as if they were sitting in front of it. It’s easy for hackers to scan the internet for open RDP ports and simply run brute force attacks on them. This means that many RDP attacks can be prevented simply by using strong passwords.
The first thing to check, however, is whether you need RDP at all. A huge number of computers have RDP enabled even though they don’t need it at all, and many of those use weak passwords. These practices make such machines “low-hanging fruit” for hackers. If you do need to use RDP, you can configure it so that only approved IP addresses can access it. Rate limiting also increases security by limiting the number of attempted logins a brute force attacker can make. Finally, multi-factor authentication (MFA) can also make things much more difficult for hackers. - Disable RDP on any machines where it’s not needed. - Use strong, unique passwords. - Limit access to “whitelisted” IP addresses. - Use rate limiting to reduce how many times a brute force attacker can guess your password. - Use multi-factor authentication (MFA). Although less common than phishing or RDP attacks, software vulnerabilities are the source of a significant number of attacks. In most cases, this can be prevented by avoiding outdated software and staying up to date with all software patches and upgrades. Every organization should have regular audits of its software suite, and compare it to databases of known vulnerabilities. This is important not just for preventing attacks, but also to avoid making yourself a target. Using out-of-date software is often a sign of lax security practices, so many hackers look for this when choosing their victims. - Conduct regular audits to make sure your software is up to date with all patches and upgrades. - Don’t use out-of-date software with known vulnerabilities. It’s natural to focus on preventing attacks completely, and this is the ideal. However, it’s important to “hope for the best, plan for the worst.” Some internal security measures can dramatically reduce the scope of damage hackers can do after they breach a network by preventing ransomware from spreading laterally.
Using MFA inside the network and exercising the “principle of least privilege” are some of the most important methods of achieving this. MFA can be a nuisance for employees, but it makes unauthorized access to critical parts of a network much more difficult. The principle of least privilege means giving every user on a network no more privileges than they absolutely need. Most organizations with good cybersecurity practices conduct regular audits to make sure the principle of least privilege is well implemented. Network monitoring services, either in-house or outsourced, can also make the difference between a minor breach and a catastrophic network shutdown. - Conduct regular audits to make sure your network conforms to the principle of least privilege. - Enable MFA on movement between different parts of a network. - Get professional network monitoring services to detect suspicious activity. Preventing Ransomware Attacks Requires Continuous Effort There is a lot more to preventing ransomware attacks than just buying premium antivirus software. It requires fundamental changes in business practices and continuous effort, which is one reason so many organizations neglect it. However, cybersecurity is one case where a little bit of prevention is worth a lot of cure. As burdensome as it may seem, the effort that goes into protecting against ransomware attacks is much less than the damage caused by a typical ransomware attack. Rather than focusing on specific security measures, your ransomware defense efforts should center around implementing good cybersecurity principles, like the principle of least privilege and network compartmentalization. Once these concepts become part of your regular workflow, cybersecurity can become second nature at all levels of your organization.
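As one concrete instance of the “tools for detecting malicious links” recommended in the phishing section, a link’s domain can be compared against the domains an organization actually trusts: a near-miss like “paypa1.com” is a strong impersonation signal. The sketch below uses plain edit distance; the threshold and trusted list are assumptions, and real deployments add curated feeds, punycode and homoglyph handling.

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def lookalike(domain, trusted, max_dist=2):
    """Flag domains close to, but not equal to, a trusted domain.
    Returns the trusted domain being impersonated, or None."""
    for t in trusted:
        d = edit_distance(domain.lower(), t.lower())
        if 0 < d <= max_dist:
            return t
    return None
```

Exact matches return None (they are the trusted site itself); only near-misses within the distance threshold are flagged for review.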
Data Governance Definition, Best Practices and Benefits Any organization with a data-driven strategy should understand the definition of data governance. In fact, in light of increasingly stringent data regulations, any organization that uses or even stores data should understand the definition of data governance. Organizations with a solid understanding of data governance (DG) are better equipped to keep pace with the speed of modern business. In this post, the erwin Experts address: - Data Governance Definition - Why Is Data Governance Important? - What Is Good Data Governance? - What Are the Key Benefits of Data Governance? - What Is the Best Data Governance Solution? Data Governance Definition Data governance’s definition is broad, as it describes a process rather than a predetermined method. So an understanding of the process and the best practices associated with it is key to a successful data governance strategy. Data governance is best defined as the strategic, ongoing and collaborative processes involved in managing data’s access, availability, usability, quality and security in line with established internal policies and relevant data regulations. It’s often said that when we work together, we can achieve things greater than the sum of our parts. Collective, societal efforts have seen mankind move metaphorical mountains and land on the literal moon. Such feats were made possible through effective government – or governance. The same applies to data. A single unit of data in isolation can’t do much, but the sum of an organization’s data can prove invaluable. Put simply, DG is about maximizing the potential of an organization’s data and minimizing the risk. In today’s data-driven climate, this dynamic is more important than ever.
That’s because data’s value depends on the context in which it exists: too much unstructured or poor-quality data and meaning is lost in a fog; too little insight into data’s lineage, where it is stored, or who has access and the organization becomes an easy target for cybercriminals and/or non-compliance penalties. So DG is quite simply, about how an organization uses its data. That includes how it creates or collects data, as well as how its data is stored and accessed. It ensures that the right data of the right quality, regardless of where it is stored or what format it is stored in, is available for use – but only by the right people and for the right purpose. With well governed data, organizations can get more out of their data by making it easier to manage, interpret and use. Why Is Data Governance Important? Although governing data is not a new practice, using it as a strategic program is and so are the expectations as to who is responsible for it. Historically, governing data has been IT’s business because it primarily involved cataloging data to support search and discovery. But now, governing data is everyone’s business. Both the data “keepers” in IT and the data users everywhere else within the organization have a role to play. That makes sense, too. The sheer volume and importance of data the average organization now processes are too great to be effectively governed by a siloed IT department. Think about it. If all the data you access as an employee of your organization had to be vetted by IT first, could you get anything done? While the exponential increase in the volume and variety of data has provided unparalleled insights for some businesses, only those with the means to deal with the velocity of data have reaped the rewards. By velocity, we mean the speed at which data can be processed and made useful. More on “The Three Vs of Data” here. 
Data giants like Amazon, Netflix and Uber have reshaped whole industries by turning smart, proactive data governance into actionable and profitable insights. And then, of course, there’s the regulatory side of things. The European Union’s General Data Protection Regulation (GDPR) mandates that organizations govern their data. Poor data governance doesn’t just lead to breaches – although of course it does – it also causes failed compliance audits, which require an effective data governance initiative to pass. Since non-compliance can be costly, good data governance not only helps organizations make money, it helps them save it too. And organizations are recognizing this fact. In the lead-up to GDPR, studies found that the biggest driver for data governance initiatives was regulatory compliance. However, since GDPR’s implementation, better decision-making and analytics have become the top drivers for investing in data governance. Other areas where well-governed data plays an important role include digital transformation, data standards and uniformity, self-service, and customer trust and satisfaction. For the full list of drivers and deeper insight into the state of data governance, get the free 2020 State of DGA report here. What Is Good Data Governance? We’re constantly creating new data whether we’re aware of it or not. Every new sale, every new inquiry, every website interaction, every swipe on social media generates data. This means the work of governing data is ongoing, and organizations without it can become overwhelmed quickly. Therefore, good data governance is proactive, not reactive. In addition, good data governance requires organizations to encourage a culture that stresses the importance of data, with effective policies for its use. An organization must know who should have access to what, both internally and externally, before any technical solutions can effectively compartmentalize the data.
So good data governance requires both technical solutions and policies to ensure organizations stay in control of their data. But culture isn’t built on policies alone. An often-overlooked element of good data governance is arguably philosophical. Effectively communicating the benefits of well-governed data to employees – like improving the discoverability of data – is just as important as any policy or technology. And it shouldn’t be difficult. In fact, it should make data-oriented employees’ jobs easier, not harder. What Are the Key Benefits of Data Governance? Organizations with effectively governed data enjoy: - Better alignment with data regulations: Get a more holistic understanding of your data and any associated risks, plus improve data privacy and security through better data cataloging. - A greater ability to respond to compliance audits: Take the pain out of preparing reports and respond more quickly to audits with better documentation of data lineage. - Increased operational efficiency: Identify and eliminate redundancies and streamline operations. - Increased revenue: Uncover opportunities to both reduce expenses and discover/access new revenue streams. - More accurate analytics and improved decision-making: Be more confident in the quality of your data and the decisions you make based on it. - Improved employee data literacy: Consistent data standards help ensure employees are more data literate, and they reduce the risk of semantic misinterpretations of data. - Better customer satisfaction/trust and reputation management: Use data to provide a consistent, efficient and personalized customer experience, while avoiding the pitfalls and scandals of breaches and non-compliance. For a more in-depth assessment of data governance benefits, check out The Top 6 Benefits of Data Governance. The Best Data Governance Solution Data has always been important to erwin; we’ve been a trusted data modeling brand for more than 30 years.
But we’ve expanded our product portfolio to reflect customer needs and give them an edge, literally. The erwin EDGE platform delivers an “enterprise data governance experience.” And at the heart of the erwin EDGE is the erwin Data Intelligence Suite (erwin DI). erwin DI provides all the tools you need for the effective governance of your data. These include data catalog, data literacy and a host of built-in automation capabilities that take the pain out of data preparation. With erwin DI, you can automatically harvest, transform and feed metadata from a wide array of data sources, operational processes, business applications and data models into a central data catalog and then make it accessible and understandable via role-based, contextual views. With the broadest set of metadata connectors, erwin DI combines data management and DG processes to fuel an automated, real-time, high-quality data pipeline. See for yourself why erwin DI is a DBTA 2020 Readers’ Choice Award winner for best data governance solution with your very own, very free demo of erwin DI. Data Governance Preparedness This white paper explores the importance of data literacy and intelligence and how the right data governance platform, including a modern data catalog, is essential to organizations in preparing for and recovering from a crisis.Get the white paper
Microsoft, Samsung, and IBM are the main Smart Cities leaders worldwide, while the United States, China, and the United Kingdom dominate the world of Smart Cities research. Although the definition of a smart city is still under development, experts around the world agree that Smart Cities are the quintessence of the Internet of Things: the use of sensors to generate data that can be communicated, integrated and analyzed to improve sustainability. And what are the factors that make a city “smart”? There are 10 key dimensions of urban life, including urban planning, public administration, the environment, international outreach, social cohesion, mobility and transportation, and the economy. The results show that New York is the world’s “smartest” city, followed closely by London, Paris and San Francisco. But those leading smart cities wouldn’t rank so high today if it weren’t for the previous work and innovations of many different companies. In this post, we analyze the companies and organizations that will shape the smart cities of the future. Top Organizations Leading Smart Cities Innovations Samsung stands out with 177 smart cities patents filed since 2010. Their research revolves around topics such as smart electronics, power consumption, and access networks. Top 10 Entities Leading the Smart Cities Innovations. Source: Linknovate.com NOTE: Since we are constantly updating our data, results may vary. Click on the link for more updated results. Top 5 Smart Cities Leaders Samsung has been the undeniable leader in Smart Cities innovation. However, Microsoft managed to surpass its efforts in 2017, due to its Smart Cities for All Toolkit for building accessible and inclusive cities. Also worthy of mention is IBM’s trajectory. Of the top 5 smart cities leaders, they have been the steadiest. They were among the first to make Smart Cities a reality and have been producing records steadily since 2010.
Countries Leading the Advancements in Smart Cities As is customary with emerging technologies and trends, the United States is the country leading the innovations. However, countries such as China, India, and Spain rank higher than usual. For example, Sterlite Tech (India) is leading the end-to-end design, development, and management of the first city in India, Gandhinagar, with citizen-centric Smart City services. Top Smart Cities Trends Smart Cities’ main industry trends comprise City Management, Sustainable Development and, of course, the Internet of Things applied to city monitoring. Fundamentally, startups and SMEs are the driving forces behind the research, innovations, and breakthroughs. As relevant examples, we find All Traffic Solutions (USA), which brings connected solutions for intelligent traffic management, and Libelium (Spain), which designs and manufactures wireless sensors for Smart Cities and the Internet of Things (IoT). Smart Cities Over Time When looking at the aggregated set of records Linknovate has collected for Smart Cities, we can see that the evolution of the topic draws a beautiful, exponential growth curve.
<urn:uuid:54b2675d-a352-40f3-bac1-b684983f2c65>
CC-MAIN-2022-40
https://blog.linknovate.com/smart-cities-leaders-you-must-know/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00712.warc.gz
en
0.866298
948
2.53125
3
Technology has become more powerful and portable, allowing a greater amount of information to be created, stored, and accessed. This shift in the information technology landscape (mobile, cloud, IoT, etc.) has made the collection and analysis of digital evidence a critical factor in investigating and solving virtually all types of crimes. Some serious Internet-related crimes, such as child exploitation offenses, need to be investigated quickly. In the United Kingdom, for example, the National Crime Agency recently estimated that about 140,000 of the nearly 3 million dark web accounts registered on child abuse sites are UK-based. Governments need to fund digital forensics to empower investigators and forensic examiners and grow their capabilities in response to the proliferation of digital crime. The Director General of the National Crime Agency (NCA) in the United Kingdom, for instance, recently asked for an extra funding boost of £2.7 billion over the next three years to keep up with the changes in serious and organized crime. If approved, the agency would receive an additional £650 million per year, doubling its budget for 2019/20. While every agency can benefit from increased funding, it is also important (perhaps especially for underfunded agencies) to adopt a more efficient and effective methodology for processing digital evidence. One of the most effective methods for improving digital forensic workflows is to develop an Early Case Assessment (ECA) methodology which leverages forensic triage at the forefront of investigations. As agencies adopt easy-to-use forensic tools, such as Mobile Device Investigator™ and Digital Evidence Investigator®, for triage and the initial collection and analysis of volatile memory (RAM capture) and critical evidence, front-line agents will be able to make on-scene decisions faster.
Making decisions at the scene helps officers improve their interactions with witnesses and victims while they quickly identify suspects and capture valuable data. In the UK, the number of offenders involved in serious and organized crime is estimated at about 181,000, more than double the size of the British Army. The NCA’s Director General, Ms. Owens, further said that serious and organized crime in the United Kingdom is corrosive and chronic, killing more people than war, terrorism, and natural disasters combined. Serious and organized crime affects UK citizens more than any other security threat, costing the country more than £37 billion a year, equivalent to about £2,000 per family. The offenders behind organized crime are indiscriminate, and they do not care what crime they get involved in as long as they are making a profit. Funding digital forensics programs can be cost-effective. The United Kingdom is a model country in the use of ADF digital forensic software, with the vast majority of United Kingdom digital media investigators and police agencies using ADF Digital Evidence Investigator® or Triage-Investigator® to triage and solve investigations beginning on-scene. Any authorized organization can contact us to request a free trial of ADF digital forensic software for computers, electronic devices or iOS and Android smartphones and tablets.
<urn:uuid:8f25d3b5-f027-45d4-a1ee-f09a9935a741>
CC-MAIN-2022-40
https://www.adfsolutions.com/news/building-efficiency-into-digital-forensic-programs
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00712.warc.gz
en
0.936224
613
2.734375
3
The famous fictional astronaut Buzz Lightyear once said: to infinity and beyond! Wise beyond his years, Buzz saw the potential of toys. In what’s become a disruptive innovation for traditional hard plastic dolls and board games, embedded toys are true game changers for the toy industry. For the young at heart, the idea of all the gadgets in today’s digital age can sometimes put a damper on nostalgia. The days of playing outside for nine hours with a deflated kick-ball and a jump rope might appear to be a thing of the past, but the spirit of play still lives on strong… even in electronic devices. Enter smart toys. Today’s intelligent toys come in all shapes and sizes, and while most are controlled using an external device (like a cell phone or tablet), a growing number of toys are being developed using embedded technology – no external control device needed. When most people think of programming, they think about typing in lines and lines of code. And when most people think about typing lines of code, they think it sounds extremely boring and/or intimidating. Adding intelligent toy technology and embedded Android into the mix makes programming more tangible. All of a sudden, different learning styles take more interest in programming because it’s easy to see immediate results through working with smart toys. Sphero 2.0 Robotic Ball by Orbotix Calling for kids to upgrade their play, the Sphero is roughly the size of a baseball but a lot more versatile. This robotic ball can travel 4.5 miles per hour, can be submerged for use in water games, uses Bluetooth technology, and is programmable. The benefits of the first features are obvious, but why is the device’s ability to be programmed important? Because, simply put, it means the limits and possibilities for play are bound only by imagination! Recognizing the need to adapt to the changing world of children’s toys, Lego created Mindstorms.
In addition to their iconic hard plastic building blocks of creativity, Lego now offers robotic connectors and functions to enhance the building experience. If you thought your imagination could run wild with traditional Legos, just imagine all the possibilities when building a scorpion controlled by your smartphone… DragonBot is an example of a growing trend in embedded Android toys: devices that actually build a personality. Essentially, a toy with a memory. The technology works to remember previous settings, actions and responses. Enter the growing world of creepy (and cute too). Developed by the Personal Robots Group at MIT, this little guy offers a truly unique, interactive experience to users. Even adults need to escape from reality once in a while. While we can’t always get away with building Legos, drones are becoming a popular adult alternative to the traditional toy. While drones sometimes make headlines for invading privacy and crashing into natural wonders, when used appropriately they are pretty darn cool. Designed for everything from aerial photography and videography to just plain cruising around in the open sky, drone technology is advancing rapidly, and many people would consider them to be a fun toy. The current state of intelligent toys is really only the beginning of what promises to be an exciting journey. The use of embedded Android, as well as other platforms, is rapidly expanding the capabilities of toys as we know them today. So strap in, playtime is only going to get more fun! Do you have an upcoming project and want us to help speed up your time to market?
<urn:uuid:6c4ece4b-57ab-4275-a6eb-034b32e60a58>
CC-MAIN-2022-40
https://hsc.com/Resources/Blog/Playing-Smart-Android-Toys-Other-Intelligent-Devices-That-Are-Transforming-Playtime-1
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00712.warc.gz
en
0.941193
996
2.640625
3
Over the last 100 years, large-scale manufacturing gained momentum and consumerism worldwide grew by leaps and bounds, resulting in more waste and impacting the ecological balance in more than one way. This means municipalities and local corporations have to spend a substantial part of their budget on solid-waste management. But addressing a problem with exponential growth demands an innovative solution and the employment of the latest technologies to deliver efficient, low-cost approaches. New technologies offer faster and more agile learning processes with iterative design, prototyping, and feedback cycles. Artificial intelligence (AI) can play a defining role in enabling this systemic shift. AI is a fundamental driver of the Fourth Industrial Revolution, using models and systems that mimic human intelligence functions such as reasoning and learning. The power of AI is that it allows us to learn faster from feedback, address complex problems effectively, and derive actionable insights from data. A growing number of experiments are being conducted to show how AI can accelerate the transition to a circular economy. The three governing principles of the circular economy are to design out waste and pollution; to keep products and materials in use at their highest value; and to regenerate natural systems. AI enables circular economy innovation to take place across many industries in three ways. Product design, prototyping, and testing: When you design for circularity, you have to design products and select components and materials that are conducive to disassembly, upgrade, and recycling. Added to this complexity of design considerations, we have a broad choice of materials, manufacturing techniques, and design options. AI- and ML-assisted design can accelerate this complex process through rapid prototyping, testing, and feedback, improving the overall product offering, lead times, and product development cost.
Develop new circular business models In the last decade, we have witnessed and participated in several circular economy models such as asset sharing or product-as-a-service. As we have become comfortable sharing cars and bikes, comfort with other shared models may follow, and several other industries are ripe for disruption by circular business models. The development of circular business models that can offer a compelling alternative to existing linear models demands the cohesive working of all functions, such as design, development, production, sales, marketing, pricing, customer support, and reverse logistics, unified by circular economy principles. Effective advancement and adoption of shared economy models in many industries require a robust, dependable infrastructure for handling reverse logistics and remanufacturing, and AI lends the technological strength for this. AI uses real-time insights and historical behavioral data from products and users to predict demand and pricing curves. It also serves well to increase product circulation and asset utilization. Further, AI empowers manufacturers to offer better predictive maintenance for large machinery and adopt smarter inventory management techniques. Optimize Circular Infrastructure A vital feature of a circular economy is that materials and products are used repeatedly rather than following the widespread use-and-throw model. Hence products must be reusable, repairable, remanufacturable, and recyclable. It becomes challenging when we are looking at reclaiming the nutrients from biological waste streams, as collecting, sorting, separating, and treating this material for redistribution is laborious and expensive. Similarly, winning back the material from used computers is tricky, as the waste streams are mixed and heterogeneous. Suppose we can build an effective channel that allows a homogeneous flow of materials and products.
In that case, our recovery levels improve exponentially, and we can efficiently sort the components that can be put to reuse or remanufacture. AI-assisted reverse logistics brings process improvements to material handling, the sorting and disassembly of products, recycling materials, and remanufactured components. For an AI-powered circular economy to succeed, and for organizations to use it as a reliable business model, it requires the commitment of all stakeholders. The following examples can inspire different industries and their leaders to explore further in this direction.
- Manufacturing design teams can use AI to improve and speed up the material selection process.
- Material scientists involved in the plastic packaging industry can use AI to develop solutions that allow an effective way to repurpose.
- Engineers and architects can use AI to optimize building design features based on circular design principles.
- By integrating AI technology and computer vision, we can automatically identify components or parts that require maintenance and reduce the lead time for spare part procurement.
- AI-driven robot systems can autonomously or semi-autonomously inspect and disassemble returned pieces of equipment.
- Advanced data analytics and AI are being used in the chip design industry to reduce the lead time of yield ramps and the number of iterations required to eliminate problems with new products.
- Researchers at Stanford have used AI and machine learning to screen more than 12,000 lithium-containing compounds against several criteria, such as stability, cost, and abundance of availability, to identify 21 solid electrolytes that could potentially replace the volatile liquids.
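A screening step like the Stanford example above can be sketched in a few lines. This is a hedged illustration with invented candidate names, properties, and thresholds, not the researchers’ actual method:

```python
# Illustrative sketch, not the Stanford pipeline: multi-criteria screening of
# hypothetical candidate materials, mirroring the kind of filter applied after
# an ML model has predicted each property. All names and numbers are made up.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    stability: float   # predicted stability proxy, higher is better
    cost: float        # relative cost index, lower is better
    abundance: float   # relative elemental abundance, higher is better

def screen(candidates, min_stability=0.7, max_cost=0.5, min_abundance=0.6):
    """Keep only the candidates that pass every criterion."""
    return [c for c in candidates
            if c.stability >= min_stability
            and c.cost <= max_cost
            and c.abundance >= min_abundance]

pool = [
    Candidate("A", 0.9, 0.3, 0.8),  # passes all three criteria
    Candidate("B", 0.6, 0.2, 0.9),  # fails stability
    Candidate("C", 0.8, 0.7, 0.7),  # fails cost
]
print([c.name for c in screen(pool)])  # ['A']
```

In a real pipeline the property values would come from trained models rather than being hand-assigned, but the downstream filtering logic is this simple.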
Other examples of using AI to optimize a system for a beneficial outcome include: using real-time traffic camera data to reduce traffic congestion in cities; optimizing energy usage for cooling the thousands of servers housed in data centers; using EVs for last-mile delivery; and integrating AI and autonomous network solutions to ensure that charge points and refueling stations can match electric vehicle demand. To realize the full potential of AI-powered circular economy solutions, one needs a thorough understanding of the maturity and limitations of AI. Executing solutions similar to those shared above will need a four-step framework that includes data collection, engineering, development, and refinement of the AI models. We also need access to relevant, high-quality data to train algorithms and to use as input data for AI applications. Above all, it is evident that for organizations to harness the power of AI to help reshape the economy into one that is regenerative, resilient, and fit for the long term, they need a trusted technology partner with the necessary know-how.
<urn:uuid:7662c5ef-b115-465c-8005-1a9997535790>
CC-MAIN-2022-40
https://saxon.ai/blogs/building-an-ai-powered-circular-economy/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00712.warc.gz
en
0.933344
1,208
3.1875
3
If you thought there was only one kind of phishing attack, you’d be wrong. There are a handful of types, and “vishing” is becoming increasingly common. To understand vishing, a definition of phishing itself is in order. Phishing is the fraudulent attempt to obtain sensitive information such as usernames, passwords, and credit card details (and money), often for malicious reasons, by disguising as a trustworthy entity in an electronic communication. Wikipedia, quoting the Handbook of Information and Communication Security. Most of the time when we think about phishing, we think about “electronic communication” as referring to email or the web. But there is another form of electronic communication fraudsters are using for phishing: voice telephone calls. That’s the “vishing”. The idea behind vishing is virtually identical to traditional phishing: use social engineering to try to convince the victim to disclose potentially confidential information. How Does It Work? Some preparation is required: one can purchase lists of names and corresponding phone numbers on the dark web fairly inexpensively (or so I’ve heard). It is also easy to copy the look of a website using tools such as wget, and a programmer can readily create a malicious backend for a site once the user interface has been copied. Now consider the perpetrator making a phone call to the potential victim. He or she may call the potential victim directly, or (and this seems to be a common characteristic of the attack) the caller’s caller-ID information may be spoofed to be something the callee is likely to answer. That could be a bank, a large retailer, or a computer hardware or software company. I routinely receive calls to my cell where the caller-ID matches my number’s area code and exchange. I generally ignore those calls, as I suspect they may be vishing.
If the potential victim answers, the caller could explain (potentially using the callee’s name) that there is an issue with an account, so the business (a bank, perhaps) has set up a special site to address it. The potential victim is directed to visit the site and change a password or answer a security question or two. This is not fundamentally different from an email phishing attack! There is no link to hover over to verify, but the caller can try to explain that away. What Can I Do? First and foremost, don’t go to the sites these people promote. If you need to change a password, follow a bookmark or type in the actual site URL. As I mentioned above, I also generally avoid answering telephone calls when I do not know the number. There are a couple of exceptions to this (e.g. clients with many numbers), but I try to follow it as a rule. Another option is to let everything go to voicemail - the attacker is less likely to be persuasive in a recording and may just decline to leave a message at all. There will likely be over one and a half million unique phishing attacks this year alone. I cannot guess how many will be vishing, but it will likely be a significant number, if the reports I have been reading are correct. Don’t be a victim: never go to a URL from a phone call or email; go to the official URL of the desired organization. Period.
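The “go to the official URL” advice above can be made mechanical. This toy sketch (the domain list is a hypothetical example, not any real bank’s) compares a URL’s hostname against domains you already trust, catching the classic lookalike trick of prepending the real name to an attacker’s domain:

```python
# Toy illustration of the "official URL" rule: before trusting a link received
# over the phone or by email, compare its hostname against domains you already
# know. The KNOWN_OFFICIAL set here is a made-up example.
from urllib.parse import urlparse

KNOWN_OFFICIAL = {"mybank.example.com", "retailer.example.org"}

def looks_official(url: str) -> bool:
    host = urlparse(url).hostname or ""
    # Require an exact match: "mybank.example.com.evil.net" must not pass
    # even though it starts with the real bank's hostname.
    return host in KNOWN_OFFICIAL

print(looks_official("https://mybank.example.com/reset"))           # True
print(looks_official("https://mybank.example.com.evil.net/reset"))  # False
```

This is essentially what a bookmark does for you automatically, which is why following a bookmark beats typing a URL dictated over the phone.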
<urn:uuid:1eed1f59-1ed7-4bb3-a326-cee680e0b8f1>
CC-MAIN-2022-40
https://www.learningtree.ca/blog/vishing-another-way-go-phishing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00112.warc.gz
en
0.944342
684
2.953125
3
Kubernetes is an open source container orchestration platform developed by Google for managing microservices or containerized applications across a distributed cluster of nodes. Kubernetes is highly resilient and supports zero downtime, rollback, scaling, and self-healing of containers. The main objective of Kubernetes is to hide the complexity of managing a fleet of containers. It can run on bare metal machines or on public or private cloud platforms such as AWS, Azure and OpenStack. Kubernetes follows a client-server architecture.
Main Components of the Kubernetes Master Server
etcd cluster – a distributed key-value store that holds Kubernetes cluster data
kube-apiserver – the central management entity that receives all REST requests for modifications to cluster elements
kube-controller-manager – runs controller processes like the replication controller (which sets the number of replicas of a pod) and the endpoints controller (which populates services, pods and other objects)
cloud-controller-manager – responsible for managing controller processes with dependencies on the underlying cloud provider
kube-scheduler – schedules pods (co-located groups of containers in which our application processes run) onto cluster nodes based on resource utilization
Main Components of the Kubernetes Node (Worker) Server
kubelet – the main service on a node, taking in new or modified pod specifications from kube-apiserver and ensuring that pods and containers are healthy and running
kube-proxy – runs on each worker node to deal with individual host subnetting and expose services
kubectl is a command line tool that interacts with kube-apiserver and sends commands to the master node. Each command is converted into an API call. Kubernetes is a complex system, and learning step by step is the best way to gain expertise.
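The controller behavior described above (for example, the replication controller keeping a pod at its desired number of replicas) follows a reconcile-loop pattern. This is an illustrative Python sketch of that pattern, not Kubernetes’ actual implementation:

```python
# Minimal sketch of the reconcile loop behind Kubernetes self-healing: a
# controller repeatedly compares desired state with observed state and acts
# to close the gap. Names and logic here are illustrative only.
import itertools

_pod_ids = itertools.count()

def new_pod() -> str:
    """Return a fresh, unique pod name."""
    return f"pod-{next(_pod_ids)}"

def reconcile(desired: int, running: list) -> list:
    """One reconcile pass: start or stop pods until counts match."""
    pods = list(running)
    while len(pods) < desired:    # too few replicas: start replacements
        pods.append(new_pod())
    while len(pods) > desired:    # too many replicas: scale down
        pods.pop()
    return pods

state = reconcile(3, [])          # initial rollout brings up 3 pods
state.remove(state[1])            # simulate a pod crash
state = reconcile(3, state)       # the next pass restores the replica count
print(len(state))                 # 3
```

The real kube-controller-manager works the same way conceptually, except the observed state comes from the API server and the actions are API calls rather than list operations.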
On this page we’ve compiled valuable Kubernetes tutorials from multiple sources – from the big players like Google, Amazon and Microsoft to individual bloggers and community members. Tutorials are classified into the following categories: The Kubernetes Bible for Beginners & Developers – Level Up Kubernetes is a powerful open-source system developed by Google for managing containerized applications in a clustered environment. Kubernetes has gained popularity and is becoming the new standard for deploying software in the cloud. Learn Kubernetes in a simplified way. The article covers Kubernetes’ basic concepts, architecture, how it solves problems, etc. Read the article on level-up.one Kubernetes Tutorial: Getting Started with Containers & Clusters | Codeship Kubernetes is a highly popular open-source container management system. The goal of the Kubernetes project is to make management of containers across multiple nodes as simple as managing containers on a single system. In this tutorial, you’ll learn about Kubernetes by deploying a simple web application across a multinode Kubernetes cluster. Learn Kubernetes in Under 3 Hours: A Detailed Guide to Orchestrating Containers By the end of this article, you will be able to run a microservice-based application on a Kubernetes cluster. You will learn how to run a microservice-based application on your computer, build container images for each service of the application, and deploy a microservice-based application to a Kubernetes-managed cluster.
- Kubernetes Tutorial | An Introduction To Kubernetes | Edureka
- Kubernetes Deployment Tutorial For Beginners
- A Tutorial Introduction to Kubernetes
- Learn Kubernetes Basics
- What is Kubernetes – Learn Kubernetes from Basics
- A Beginner’s Guide to Kubernetes
- Deploying a containerized web application | Kubernetes Engine
- Tutorial: Configuring a Kubernetes DevTest Cluster in DigitalOcean
- The 3 Kubernetes Essentials: Cluster, Pipeline, and Registry
- Kubernetes Tutorial For Beginner – Kubernetes Basics, Overview, Features
Kubernetes Basic Tutorials
- Blazing-Fast Log Management and Observability for Modern Apps | Scalyr – This tutorial explains the basics you need to know to get started with Kubernetes, including concepts and real code examples that will help you get a better idea of why you might need Kubernetes if you’re thinking about using containers.
- Getting started with Cloud Endpoints for Kubernetes with ESP
- How to Create, Delete, Scale, and Update the Pods of StatefulSets
- Architecting Applications for Kubernetes | DigitalOcean
- Using GPGPUs with Kubernetes | Ubuntu
- Kubernetes Deployment | The Ultimate Guide | Platform9
- How to quickly install Kubernetes on Ubuntu
Kubernetes AWS and Azure Tutorials
- Kubernetes on AWS: Tutorial and Best Practices for Deployment
- Set up a Production-Quality Kubernetes Cluster on AWS
- How to move from single master to multi-master in an AWS kops kubernetes cluster
- Kubernetes on Azure tutorial – Prepare an application – Azure Kubernetes Service
- Tutorial: Run Kubernetes on Amazon Web Services (AWS) — Heptio Docs
- Tutorial: Deploy Kubernetes on Hetzner Cloud + Ingress + OpenEBS Storage – stytex Blog
- How to Deploy Node js App to Google Container Engine Using Kubernetes | blog.fossasia.org
Kubernetes Python/Django Tutorials
- Running a Python application on Kubernetes – This step-by-step tutorial takes you through the process of deploying a simple Python application on Kubernetes.
- Deploy a simple Python application with Kubernetes – Make sense of Kubernetes with these step-by-step instructions on how to deploy a Python Flask application to the IBM Cloud Kubernetes Service.
- Running a Python Application on Kubernetes
- Quickstart: Deploying a language-specific app
Kubernetes with Other Frameworks: Ruby/Rails, Spring, Neo4j
- Kubernetes Tutorial: Running a Rails App in Kubernetes
- How to Use Kubernetes to Quickly Deploy Neo4j Clusters – Neo4j Graph Database Platform
- Migrating a Spring Boot service to Kubernetes in 5 steps
Kubernetes Monitoring and Prometheus Tutorials
- Introduction to Kubernetes Monitoring – Monitoring a Kubernetes cluster allows engineers to observe its resource utilization and take action when something goes wrong. This article explores what you should be monitoring and how to go about it with Rancher, Prometheus, and Grafana.
- How to Setup Prometheus Monitoring On Kubernetes Cluster
- Logging & Monitoring of Kubernetes Applications: Requirements & Recommended Toolset
- How to monitor Kubernetes + Docker with Datadog
- Kubernetes monitoring with Prometheus in 15 minutes
- Simple Kubernetes cluster metrics monitoring with Prometheus and Grafana
- Monitoring Kubernetes Architecture – DZone DevOps
Kubernetes on Windows and Mac
- Getting started with Docker and Kubernetes on Windows 10
- How to set up Kubernetes on Windows 10 with Docker for Windows and run ASP.NET Core
- Tutorial: Getting Started with Kubernetes with Docker on Mac
Kubernetes Tutorials in Other Environments
- Getting started with IBM Cloud Kubernetes Service
- Tutorial: Run a custom LAMP application on Kubernetes — Heptio Docs
- How to run HA MongoDB on Kubernetes
- Deploying Java Applications with Docker and Kubernetes
- SAP Tutorial Navigator | SAP
Older Kubernetes Tutorials, But Still Worth a Look
- Azure Kubernetes Service (AKS) | Microsoft Azure
- Tutorial: Getting Started with Kubernetes on your Windows Laptop with Minikube
Just a Few Steps Away from Mastering Kubernetes… Kubernetes is extremely powerful but has a steep learning curve. We hope this compilation of tutorials helps you take your next steps towards expertise in this important platform. You can help us grow and improve this list: Found a tutorial that should be removed? Have a new tutorial we should add? Let us know using the form at the bottom of this page. Aqua Security is the largest pure-play cloud native security company, providing customers the freedom to innovate and accelerate their digital transformations. The Aqua Platform is the leading Cloud Native Application Protection Platform (CNAPP) and provides prevention, detection, and response automation across the entire application lifecycle to secure the supply chain, secure cloud infrastructure and secure running workloads wherever they are deployed. Aqua customers are among the world’s largest enterprises in financial services, software, media, manufacturing and retail, with implementations across a broad range of cloud providers and modern technology stacks spanning containers, serverless functions and cloud VMs.
<urn:uuid:4fd3e406-a619-4bca-bcca-a1176db171f8>
CC-MAIN-2022-40
https://www.aquasec.com/cloud-native-academy/kubernetes-101/kubernetes-tutorials/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00112.warc.gz
en
0.761392
2,223
2.734375
3
“Use of Arabic online has increased proportionally with the increase in Internet users. In comparison, English remains essentially flat, 25% in 2013 and 28% in 2017, despite the increase in Internet use.” (Mideast Media) The concept of giving machines the ability to process human language has existed in the western world since the birth of the computer itself. We have been watching language and speech technologies unfold and strengthen for the past couple of years; while those are still steadily improving and gaining momentum, Arabic language and speech technologies are now battling to reach the level of maturity of their English counterparts in order to support the evident boom of Arabic language use online. “Arabic-speaking Internet users are expected to reach 197M in 2017.” Can you imagine how many Arabic online interactions that could mean? “88% of the Middle East population uses social networking daily” (Go-Gulf). If you take 88% of 197M, that gives you around 170M people making, on average, 2 interactions a day, for a total of 340 million interactions a day and 124 billion a year. However, the supply of technology available to capture, decipher and analyze this valuable data is not yet mature enough. Knowing that “89% of businesses are soon expected to compete mainly on customer experience, and that by 2020 85% of customer interactions will be managed without a human” (Gartner), adapting to the growing need for Arabic speech and language technologies and applications has become more than a necessity. Although many are working on delivering technologies and applications such as machine translation, information retrieval and extraction systems, as well as speech recognition and text-to-speech, the complexity and structure of the Arabic language is slowing down progress within the Arabic NLP domain.
The aspect that differentiates language processing from other forms of data processing is the essential knowledge of language it requires. The Arabic language has always been considered an interesting language due to its complex linguistic structure. However, when it comes to NLP and speech technology, this poses a major challenge: the complexity of the Arabic language makes it very difficult for machines to understand the intent or context of the language. There are many reasons why it is difficult to decipher the Arabic language, but the main ones are: 1. Dissimilar Arabic Dialects Levantine, Gulf and Egyptian Arabic are the main dialects found in the Middle East and North Africa. Although similar in sound, these dialects are extremely distinct from one another when it comes to spelling, vocabulary and grammar. This distinction makes it impossible to build NLP applications for the Arabic language in general, meaning that language models have to be built for each dialect, and it can be very difficult to combine them into one language model offering Arabic language solutions catered to all. 2. Spoken Colloquial vs. Formal Language Everyday spoken Arabic differs greatly from formal standardised Arabic. Creating language solutions for informal everyday spoken Arabic can be challenging, as it differs not only from one country to the next, but also from one generation to the other. 3. Grammar, Punctuation & Slang When it comes to online interactions, be it on social networks, reviews or even Google searches, people tend to shorten words or write in slang, making it close to impossible to extract context from these interactions or sometimes even make sense of them. Moving forward, go for an integrated approach With smaller amounts of data to process, you’ll most likely obtain the most value from human-driven practices, where you can maximize the accuracy of data processing through the rational judgment of the human.
You should consider machine-powered NLP applications and technology when operating at larger scales, dealing with large amounts of data, or for time-sensitive activities, for example if you need to respond to customers in a timely manner. Although not all Arabic natural language processing (ANLP) applications are advanced enough to process Arabic data with high accuracy, some areas and technologies are more mature than others. To overcome the downsides of these technologies, think of a more integrated approach: people and technology must work together. Train people to exploit the benefits of the technologies available in order to gain the maximum value from both. IST’s Language & Speech Innovation Center (LSIC) is currently working on advanced NLP applications and technologies, such as text-to-speech and sentiment analysis. For more information on Arabic speech and language technologies, contact us here. Source: IST Blogs
In our previous article we described the most recent and most sophisticated attack techniques used by ransomware, which today represents the biggest and most frequent threat for companies all around the world. As we explained, ransomware attacks accounted for 67% of all malware attacks in 2020, and it is expected that in 2021 a ransomware attack targeting a company will take place somewhere in the world every 11 seconds.

Measures of “cyber hygiene”
This type of attack is enjoying great success because it has proved extremely profitable for the perpetrators. And, since it is carried out remotely, the risk of being discovered is currently very low for cybercriminals. Nevertheless, ransomware attacks do not use any special ways of hitting their targets: as we explained in the above-mentioned article, the ways in which a ransomware attack can penetrate our computers are essentially the same as with many other types of attack. Therefore, the protection and prevention measures that we should adopt to avoid being the subject of a ransomware attack are the same as the ones we use to defend ourselves from any other type of cyberattack. In most cases we can talk about “cyber hygiene”, meaning actions that should become part of our daily practice, not least because these small habits may protect us not only from ransomware but from any type of attack.

Protecting yourself from ransomware: prevention
A lot of the preventive protection measures that we suggest below might appear elementary and obvious. But we should not underestimate them: in most cases, ransomware penetrates IT systems by exploiting human errors, sometimes even trivial ones. The most widespread attack technique, since unluckily it works and is easy for the cybercriminal to carry out, is still today the phishing email: this technique, which exploits social engineering, is used in over 50% of ransomware attacks.
Let’s then focus on how to protect ourselves from ransomware attacks.

» Never get carried away with “compulsive clicking”: in general, it is always better to spend a few seconds examining the email we have received, since most of the time phishing emails have something strange and unusual about them. We could notice this if only we had enough patience to observe the email before acting in a way that could be dangerous.

» Never open attachments of uncertain origin. If we have doubts about a suspicious email, it is advisable to ask the sender whether the email is authentic.

» Pay attention also to emails coming from known addresses: they could have been forged using a falsifying technique called “spoofing”.

» Enable the option “Show the filename extension” in the Windows settings. The most dangerous files have extensions such as .exe, .zip, .js, .jar and .scr. If this option is not enabled, we will not be able to see the actual file extension and can be more easily deceived.

» Disable autorun for USB devices and other removable devices and, more generally, avoid inserting these objects into our laptop if we are uncertain about their origin. This attack mode is known as “baiting”: it uses a bait targeting someone who has access to a given IT system (much like a Trojan horse). An external memory device, like a USB key or a hard disk containing malware that self-activates as soon as the item is connected to a computer, is deliberately left unguarded in a common place (like the company hall, canteen or parking lot). In most cases it is human curiosity that makes this bait work: the person inserts the unknown device into his or her laptop. As we discussed in another article, it was exactly through a USB key that attackers were able to sabotage the centrifuges of the Iranian uranium enrichment facility at Natanz, making them spin out of control and fail: the well-known Stuxnet attack of 2010.
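The extension checks described above can be automated on a mail gateway or endpoint. The sketch below is a hypothetical illustration; the risky-extension list follows the article and is by no means complete:

```python
import os

# Flag risky attachment names, including "double extension" tricks such as
# invoice.pdf.exe, which hidden-extension settings make look like a PDF.
# The extension list follows the article and is intentionally incomplete.
RISKY = {".exe", ".zip", ".js", ".jar", ".scr"}

def is_risky(filename: str) -> bool:
    return os.path.splitext(filename.lower())[1] in RISKY

def has_double_extension(filename: str) -> bool:
    root, ext = os.path.splitext(filename.lower())
    return bool(ext) and bool(os.path.splitext(root)[1])
```

For example, "invoice.pdf.exe" is flagged by both checks, while "report.pdf" passes both.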
Today this threat has become real, which is why some companies have adopted very restrictive policies and disabled the USB ports on laptops given to employees. In this way the USB port can be used to connect a mouse or charge a phone, but it cannot send or receive data. In other, more frequent cases this level of restriction is not reached, since it is difficult to make users understand and accept it. Nevertheless, it is always useful to train users to be careful when using removable devices and to make them aware of the risks they are facing.

» Disable the execution of macros in Office programmes (Word, Excel, PowerPoint). Office attachments that include malicious macros represent one of the most widespread attack techniques today: enabling the macro allows it to activate automatically and hence start the ransomware infection process.

» Always pay attention before clicking on banners or pop-up windows on unsafe websites. As we already explained, ransomware can hit us not only through phishing but also when visiting websites that have been infected, with a technique known as “drive-by download”.

» Always update operating systems and browsers. In general, it is good practice to apply straight away the security patches offered by the producers of the software we have installed. An updated browser is safer and represents a protection layer in itself, especially against “drive-by download” attacks (often used in “watering hole” campaigns), which can take place when you navigate compromised websites.

» Make sure that the plugins you are using (Java, etc.) are always updated. These plugins are known to be a preferred point of access for many cyberattacks. Keeping them constantly updated reduces the vulnerabilities that affect them (although it does not eliminate them completely).
» Use, when possible, accounts that do not have administrator rights: if an administrator account is violated, the attacker gains the same privileges as the administrator and can carry out more actions, resulting in greater damage. Vice versa, a non-administrator user has limited privileges, and the same limits will apply to the attacker. This is the basic “principle of least privilege”, which every company should adopt systematically for its users.

» Since phishing emails represent the most frequent attack technique, it is important to install effective and updated antispam systems that implement the SPF (Sender Policy Framework), DKIM (DomainKeys Identified Mail) and DMARC (Domain-based Message Authentication, Reporting and Conformance) protocols. They will not be able to block all phishing emails, but the best systems can nevertheless reach an efficiency above 95%.

» Pay attention to the use of the Remote Desktop Protocol (RDP): it represents an exposed port on the network which, if not necessary, should be closed. If instead we have to use it (especially now that smart working is so frequent), we must protect this access with strong passwords and, ideally, with two-factor authentication (2FA).

» Install antimalware (antivirus) software and keep it up to date. We must, however, be aware that “traditional”, that is, signature-based, antivirus guarantees fairly limited protection (not exceeding 50-60%), because it can be easily circumvented by polymorphic (i.e. modified) viruses.

» Instead, implement “User Behavior Analytics” (UBA) solutions on the corporate network (analysis of web traffic anomalies) together with IDS (Intrusion Detection System), IPS (Intrusion Prevention System) and EDR (Endpoint Detection & Response) systems. These tools are today the most advanced protection against ransomware.
It is known, in fact, that these malware exhibit a series of typical behaviors (access and writes to system folders, connections to external servers to download encryption files, etc.). UBA systems therefore analyze the behavior of every computer in the company and are able to detect anomalous events (such as above-average data traffic, access to IP addresses classified as malicious, or reads and writes to system folders that should not be used). On detecting abnormal and suspicious events, they can isolate the offending computer and block (or at least contain) the attack.

» Implement the use of sandboxes. A sandbox, which literally refers to a child's sand box, in IT vocabulary identifies a test environment isolated from the main system. It is used to develop and test applications and to execute operations that represent a potential danger to the system's integrity. These instruments are usually available in the UBA systems discussed above and allow suspicious files to be analyzed and run in an isolated environment, the sandbox, before they are opened in the main system, where they could cause damage.

» Adopt accurate backup procedures for your data. This is a fundamental, even vital, security measure: if, despite everything, ransomware manages to hit us, the only way to save ourselves is to have saved our data somewhere else. And it is important to back up your data often and completely. When no backup is available, you are left with the sole option of paying the ransom.

And finally, we should never forget that the weakest point in security is the human factor. It is therefore fundamental to train and inform users so that they will not fall prey to phishing attempts, the most used technique in these attacks. In practice, the human factor and user awareness are all too often underestimated.
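The behavioral-baseline idea behind UBA tools can be illustrated with a toy example: flag a host whose daily outbound traffic deviates strongly from its own history. Real products model far more signals; the threshold here is an assumption:

```python
import statistics

# Toy behavioral baseline: flag a host whose outbound traffic today is a
# strong outlier against its own recent history. The z-score threshold is
# an assumption; real UBA products combine many such signals.
def is_anomalous(history_mb: list, today_mb: float, z_threshold: float = 3.0) -> bool:
    mean = statistics.mean(history_mb)
    stdev = statistics.pstdev(history_mb) or 1.0  # guard against zero variance
    return (today_mb - mean) / stdev > z_threshold
```

A host that usually moves around 100 MB a day and suddenly sends 900 MB would be flagged; a day at 108 MB would not.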
In every cyberattack there is at least one human error: with most attack techniques, ransomware cannot act unless an action on our side allows it to! Simple antivirus systems are no longer sufficient and cannot guarantee full defence (because of the aforementioned phenomenon of polymorphism). Never underestimate the human factor: it is important to train employees at all levels. Unluckily, the errors or negligence of just one person can compromise the data of the entire company. To sum up: the first and best protection is always the user.

Giorgio Sbaraglia (https://www.giorgiosbaraglia.it), engineer, is a consultant and trainer on the topics of cyber security and privacy. He holds training courses on these topics for numerous important Italian companies, including the 24Ore Business School (read here). He is a member of the Scientific Committee of CLUSIT (Italian Association for Cyber Security) and an Innovation Manager certified by RINA. He is the scientific coordinator of the Master “Cybersecurity and Data Protection” of the 24Ore Business School. He holds DPO (Data Protection Officer) positions in companies and professional associations. He is the author of the following books: “GDPR kit di sopravvivenza” – “GDPR survival kit” (published by goWare), “Cybersecurity kit di sopravvivenza. Il web è un luogo pericoloso. Dobbiamo difenderci!” – “Cybersecurity survival kit. The web is a dangerous place. We must defend ourselves!” (published by goWare), and “iPhone. Come usarlo al meglio. Scopriamo insieme tutte le funzioni e le app migliori” – “iPhone. How to use it to its full potential. Let’s discover together all the functions and best apps” (published by goWare). He collaborates with CYBERSECURITY360, a specialised online magazine of the Digital360 group focusing on cybersecurity, and also writes for ICT Security Magazine, Agenda Digitale, and the magazine CLASS.
Artificial Intelligence (AI) and Machine Learning (ML) present a boon for businesses: technologies with the potential to help organizations make better predictions, create innovative services for customers, and deliver faster business outcomes. Finance teams, operations, customer success, and marketing departments all stand to benefit. That said, the overarching reason organizations face challenges and delays in bringing ML models into production is that models are different from traditional software, and most organizations don't yet have frameworks and processes for dealing with these differences. Let's take a look at how an MLOps platform can help with efficiency and collaboration.

What is MLOps?
MLOps is a practice covering a subset of the ML model life cycle that helps teams deploy, manage, and maintain machine learning models and drive consistency and efficiency across an organization. Similar to DevOps, a set of practices that integrates software development with IT operations, MLOps adds automation to streamline the orchestration of steps in the workflow that begins once a model is ready to go into production. A machine learning model's life cycle spans many steps, and typically they are all managed by different people across discrete systems that need to be connected. These systems are used for data collection, data processing, feature engineering, data labeling, model building, training, optimizing, deploying, risk monitoring, and retraining. And in each organization, different people and teams may own one or more steps.

In an ideal environment, ML models solve company problems and drive better decision analyses. In reality, only a fraction of these models enters production, and even then it typically takes months for a successful model to become active, according to Gartner. That's because the process of deploying a machine learning model into production is often disjointed.
Siloed teams of data engineers, data scientists, IT ops professionals, auditors, business domain experts, and ML engineering teams operate in a patchwork arrangement that bogs down the process.

Downfalls of MLOps
Part of the problem is that MLOps is still an emerging discipline, and in each organization different people perform the tasks that span it. In some organizations, data scientists are involved in nearly every step of a model's life cycle; in others, there may be discrete teams for each phase, or teams that own one or more areas. For organizations to realize the full value of ML, models need to be put into production quickly and at scale, and there needs to be a guide for handling MLOps that makes sense for an enterprise's goals and the structure of its team. Because of this, MLOps platforms are taking on an increasingly critical function in expediting the ML efforts of organizations. Platforms have the potential to deliver a blueprinted strategy for creating repeatable and streamlined processes, whether the industry is manufacturing, financial services, or any other. End-to-end platforms can save an enormous amount of time because of how many models can be deployed and monitored simultaneously while operating at the speed businesses need. The best MLOps platforms provide solutions for all ML stakeholders, so they can not only deploy and manage models at scale but also foster efficiency through collaboration and communication across the different people using the platform at different stages. Let's take a look at the four main aspects of a successful MLOps platform.

A Successful MLOps Platform
#1: A Collaborative Experience for All Stakeholders
Given that the key stakeholders, from data teams to engineers to risk auditors, tend to function in silos in many organizations, simplifying the process so that any user can perform a specific role leads to better outcomes for efficiently performing tasks.
Platforms that enable collaboration across an organization give teams the ability to quickly operationalize models, regardless of the tools data scientists used to create them. There is no longer a need to restrict any other user, such as machine learning engineers or IT teams: each platform user should be able to use the tools they already have and leverage their expertise with those tools. A single, collaborative interface that intuitively guides a user through the steps, abstracting away the complexity of the process, is a beneficial component of MLOps.

#2: A User-First, Modular Architecture
Given that many organizations may be handling MLOps differently, platforms that meet them where they are offer immediate value. A platform with a modular architecture provides organizations the flexibility to get up and running quickly by enabling each person to use the platform functionality they need, when they need it, rather than forcing them to operate in a linear fashion. For example, an organization may have data scientists with a preferred set of tools but without an easy way to deploy or monitor models in production. An MLOps platform designed with openness and users in mind will offer easy plug-and-play components, so each user can decide on the best cloud, database, repositories, and other components to use without having to make sweeping changes. Every company will implement the process of operationalizing models a bit differently, and a modular architecture enables MLOps teams to leverage their entire suite of tools and seamlessly bring specific components of the platform into their ML workflows.

#3: An Emphasis on Optimization
As models become larger and increasingly more complex, one of the challenges organizations often run into is a dramatic increase in hardware and computing needs.
Machine learning is, by definition, data-intensive and will cost organizations a lot of money without careful consideration of the infrastructure in place. Models that take a long time to reach production, coupled with rising hardware costs, are a recipe for tension with executives and leadership weighing ROI. MLOps platforms that can optimize models and present model performance and cost-saving data in a format that helps users make decisions based on the factors most important to them can alleviate some of the challenges organizations face as they ramp up ML modeling and production. As more companies deploy more ML models to more devices, on any cloud, on edge devices, or on-prem, the ability to optimize models will become increasingly important.

#4: An Ability to Continuously Monitor Models in Production
It is important for an MLOps platform to accelerate the process of bringing models to production. But once there, the real work begins, and platforms need to enable teams to continuously monitor risks, such as model performance and unstructured data, and quickly take action to mitigate operational and reputational risk. ML models are not static: they are trained and tested in controlled environments, but when deployed into production they make predictions based on real-world data, which can be quite different for a variety of reasons. For example, a model's performance or prediction accuracy can change. Models also experience different types of drift, such as data drift, when there is a significant change in buying patterns. This happened during COVID, for example, and led to former distribution patterns no longer being accurate.

Simplifying the Process
To help teams continuously monitor models in production, MLOps platforms should simplify the ability to:
1. Set alerts based on custom thresholds.
2. Provide quick at-a-glance access to key data points showing which models are failing.
3. Rapidly identify the root cause and take action.

Leveraging an integrated platform allows for the creation of a customized risk-monitoring plan before and after deployment. A comprehensive approach to mitigating risk includes evaluating uncertainties within the data to guide AI/ML teams along the right path.

Platforms Must Put Humans First
We are still in the early stages of figuring out how best to use ML in enterprises. Any MLOps platform should take a human-centered approach, meaning it is designed to provide users with the critical information they need, an intuitive way to complete their tasks, and the ability to collaborate and communicate with other stakeholders and colleagues. Platforms that put human workers first help build trust between people and ML. That trust eases the mind of the worker and allows the technology to perform many of the statistical tasks that aid these workers. The intentional design of such platforms will continue to focus on augmenting and amplifying human intelligence and delivering new opportunities to promote collaboration as AI and ML initiatives advance.
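The threshold-based alerts and drift checks described above can be sketched in a few lines. Everything here, the metric names, thresholds, and alert format, is an assumed illustration, not any particular platform's API:

```python
import statistics

# Hypothetical monitoring checks: alert when live accuracy drops below a
# custom threshold, or when a feature's live mean drifts far from its
# training baseline. Thresholds and names are illustrative assumptions.
def accuracy_alert(model: str, live_acc: float, min_acc: float = 0.90):
    if live_acc < min_acc:
        return f"ALERT: {model} accuracy {live_acc:.2f} < {min_acc:.2f}"
    return None

def drift_alert(model: str, train_mean: float, train_std: float,
                live_values: list, z_threshold: float = 3.0):
    z = abs(statistics.mean(live_values) - train_mean) / (train_std or 1.0)
    return f"ALERT: {model} data drift (z={z:.1f})" if z > z_threshold else None
```

A real platform wraps checks like these in scheduled jobs, dashboards, and alert routing, but the core logic is this simple comparison against custom thresholds.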
Offset printing is a common printing technique in which the inked image is transferred (or “offset”) from a plate to a rubber blanket and then to the printing surface. When used in combination with the lithographic process, which is based on the repulsion of oil and water, the offset technique employs a flat (planographic) image carrier. Ink rollers transfer ink to the image areas of the image carrier, while a water roller applies a water-based film to the non-image areas. Offset lithography is one of the oldest and still most common ways of creating printed materials. A few of its common applications include box packaging, folders, magazines, brochures, books and posters. Compared to other printing methods, offset printing is best suited for economically producing large volumes of high-quality prints. We utilize computer-to-plate systems as opposed to the older computer-to-film workflows we used in the '90s, which further increases our quality.

Advantages of offset printing compared to other printing methods include:
- Consistently superior image quality. Offset printing produces sharp and clean images and type more easily than, for example, wide format printing; this is because the rubber blanket conforms to the texture of the printing surface.
- Quick and easy production of printing plates.
- Offset printing is the cheapest method for producing high-quality prints in large volumes.

WIDE FORMAT PRINTING
Wide format, also known in the industry as large format printing, generally refers to computer-controlled printing machines (printers). Our printers support a print roll width of up to 120″ and a board size of up to 5′ x 10′ and up to 2″ thick; MEDIUS's capacities are considered super-wide or grand format. Our other large format printers utilize a lay-flat bed for rigid print mediums up to 2″ thick.
Wide format printing is used to print banners, poster boards, trade show graphics, wallpaper, decals, murals, directional signage, backlit film, vehicle wraps, floor graphics, backdrops, media sets and much more. Post-print finishing in the wide format department is extensive. Digital cutting plotters allow any desired shape to be cut out very precisely, and repeatably, on a roll. Cutting plotters use tiny knives to cut into a piece of material such as mylar or vinyl film. The cutting plotter is connected to a computer equipped with cutting design software; we send the necessary cutting dimensions to command the cutting knife to produce the perfect cut-out for stickers or any custom-shaped decals. For rigid materials that need to be cut to shape, we use a lay-flat routing machine equipped with spindles and several knife options for cutting vinyl, gator board, styrene, PVC or just about any other material that requires a custom cut.

Digital printing is a method of printing from a digital-based image directly to a variety of paper types. It usually refers to printing where lower-volume jobs from digital sources are printed using laser or inkjet printers. Digital printing has a higher cost per page than more traditional offset printing methods, but this price is offset by avoiding the cost of all the technical steps required to make offset printing plates. It also allows for on-demand printing, short turnaround times, and even on-press modification of the image or text (variable data) used for each impression. The savings in labor and the ever-increasing capability of digital presses mean that digital printing is reaching the point where it can match or supersede offset printing's ability to produce larger print runs of several thousand sheets at a low price. Digital printing is typically utilized to print brochures, flyers, pamphlets and mailers with variable images or data.
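The cost trade-off described above, a fixed plate/setup cost for offset versus a higher per-sheet cost for digital, can be made concrete with a simple break-even calculation. All prices below are invented placeholders for illustration, not actual quotes:

```python
# Break-even quantity between digital and offset printing: offset pays a
# fixed plate/setup cost but a lower per-sheet cost. Prices in cents are
# invented placeholders for illustration only.
def breakeven_quantity(offset_setup_cents: int, offset_per_sheet_cents: int,
                       digital_per_sheet_cents: int) -> float:
    # Solve: setup + q * offset_per_sheet == q * digital_per_sheet
    return offset_setup_cents / (digital_per_sheet_cents - offset_per_sheet_cents)

# Example: $300 setup, 5c/sheet offset vs 35c/sheet digital
print(breakeven_quantity(30000, 5, 35))  # 1000.0 sheets
```

Below the break-even quantity, digital is cheaper overall; above it, offset wins, which matches the rule of thumb that offset suits large volumes and digital suits short runs.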