My Own Stop Look and Listen Moment: How I botched a Job interview and I’m Ok with It.
My coaching program, Stop Look and Listen, is a 3-month program. But sometimes all three of those things, hereafter referred to in shorthand as SLL, can be done in a single moment. Here’s a story about when I did just that, and it saved me a lot of headache and embarrassment.

The Job Interview

I had a job interview last week for an accessibility position. I really wanted this job; it looked to be in my wheelhouse. Well, I overslept on the day of the interview and woke up 10 minutes before it was supposed to start. I got on the call after quickly getting cleaned up and groomed, and told the interviewer as much. He asked if I wanted to reschedule. I said no, let’s go for it. Well, then it all went downhill. I realized, to my horror, that I had not prepared for the interview. I had not read the description of the job in some time. When I started answering his questions about the job, you know, the usual questions like, “What are your strengths and why would you be a good fit for this job?”, I realized I was having a major panic attack. I asked him to hold on a moment while I got my breathing under control, and then, at that moment, I had to do my own SLL.

Stop Look and Listen

Stop. Dave, you are not prepared for this interview and you know it. Stop talking and breathe a few times. Look inside. Do you want to just try and bullshit your way through it? The guy already knows you’re grasping at straws and not at full energy capacity. Listen. Listen internally to your panic; you’ve never been good at the “fake it til you make it” mentality. Maybe this is not the time to do that. And also listen externally. This guy’s voice sounds kind of harsh, very professional and business-like, and he’s done this type of interview many times before. Having had that moment in literally a few seconds, I asked him if we could reschedule.

He said, “Yeah, you’re maybe not fully awake, that’s fine, go ahead and email …” I told him truthfully, “I know I’m blowing it and I really would like this job, thanks for your understanding.” Fast forward a few hours and several email exchanges later. Turns out, yep, I blew it. They sent me a message saying something like, “Upon further internal discussion, we’ve decided to go in another direction, good luck with your search.” I knew that rescheduling would be too good to be true. From their point of view, they probably thought, “Wow, if he just woke up, how responsible can he be with his time? And oh yeah, he was not prepared.” Maybe that wasn’t their thinking at all, but it would be mine if I were them.

So, what did I learn from using my own coaching philosophy on myself? First, it works. Second, I personally am proud of myself, because had I gone further in the interview, it would have just been more embarrassing and painful for both myself and the interviewer. Also, by doing Stop Look and Listen and then being rejected, I know that there are plenty of other opportunities out there. I also know to prepare for things more. That’s all on me, and I’ll be the first to admit my failings. But I don’t want to call them failings; they have to be learning experiences. So I’m curious: where do you do those momentary check-ins of SLL? Let me know in the comments, or, if you like the concept and ideas I’ve put forth, shoot me an email or schedule a sample coaching session on the calendar. My program takes a good deep dive into how to do all of these things to make you Stop, Look, and Listen.
https://medium.com/@dave-35461/my-own-stop-look-and-listen-moment-how-i-botched-a-job-interview-and-im-ok-with-it-fa7cc55bf835
['Dave Bahr']
2020-12-18 13:16:28.604000+00:00
['A11y', 'Technology', 'Job Interview', 'Disability', 'Coaching']
Diagnosing Covid-19 in Tears
A new study may seek to replace nasal swabs to diagnose coronavirus

Image by LhcCoutinho from Pixabay

Testing remains at the forefront of the fight against Covid-19. Several pressures, from politics to the reliability and availability of tests in the US, have hampered the effort to test people for Covid-19. Early on, the rollout of testing in the US was criticized by many experts as too slow to address the rise in cases. Even to date, the US falls short of the level of testing that many experts feel is needed to begin to contain the virus. Beyond political pressures, Covid-19 tests themselves have come under scrutiny for failing to provide reliable results. Tests that detect antibodies against SARS-CoV-2 have shown limitations in reliability, and may provide people with a false sense of security. Even with these limitations, testing is still one of the most critical public health efforts in containing an outbreak. Researchers are continuing efforts to develop convenient and reliable tests for Covid-19.

By now, most people have seen images of current Covid-19 tests. Social media has been inundated with videos of people succumbing to the insertion of a long nasal swab into their nose. This swab is intended to go deep beyond the nasal cavity into the nasopharynx, where respiratory viruses can be detected. These images, and people’s reactions to their own testing, have left people with the impression that Covid-19 testing is uncomfortable at best and painful at worst. Public appetite and demand from the scientific community for reliable and convenient Covid-19 tests have pushed some researchers to look elsewhere in the body to detect the virus. Researchers at the University of Minnesota Medical School have turned their attention to the eyes, and in particular the tears, to test for the virus. A team led by ophthalmologist Dr. Hossein Nazari has launched a study looking into using human tears as a possible test for Covid-19.

Why test tears for Covid-19?

Tears, produced by lacrimal glands deep within the eye socket, have a close anatomical relationship to the nasal cavity. Anyone who has shed tears in a time of sadness knows how quickly a runny nose follows tears. In fact, there is a physical structure that connects the tissue that lines the eye socket, called the conjunctiva, to the nasal cavity and allows for drainage of tears. Dr. Nazari’s team is investigating the idea that viruses can permeate the conjunctiva and be detected there. Their test would pull tears from the inner corner of the eye into a microcapillary tube through capillary action and analyze them for the presence of SARS-CoV-2. The test would rely on a technique commonly used in biomedical laboratories called real-time reverse transcription polymerase chain reaction, or RT-PCR. This assay uses synthetic primers to amplify segments of the SARS-CoV-2 genome, which can be detected in real time to determine the presence and quantity of viral load in the body. RT-PCR assays have some documented shortcomings, but remain one of the most reliable techniques for detecting the presence of viral genetic material.
https://deadlywarbler.medium.com/diagnosing-covid-19-in-tears-5fcf09b434ba
['Jesse Smith']
2020-06-26 16:50:34.984000+00:00
['Health', 'Medicine', 'Science', 'Covid 19', 'Technology']
The student entrepreneur who interviewed Stormzy about race and privilege
Recent graduate and former president of the African and Caribbean Society, Toni Fola-Alade, talks about advocacy, start-ups and fundraising for Nigerian fishermen — and looking forward to the day when he doesn’t have to talk about race.

I never expected that Stormzy, the Vice President of Malawi and the General Counsel of GE Africa would accept our invitation to the Motherland Conference, but they did. And I don’t think we expected to fill the Cambridge Union. That was just a fun bonus. The Conference came about after I’d attended a debate about race and educational privilege in my first year, hosted by Tiwa Adebayo (Sidney Sussex 2016). After the debate, a couple of us were hanging around and chatting about the issues raised. My friend Daniel Afolabi (Caius 2017) and I decided that there was no point complaining about this; we should do something about it. Someone made a joke about how cool it would be to get a rapper and an African President in a room together and everybody laughed. But as Daniel and I were walking back to College we thought: “there’s something in that”. We got to work with the amazing African and Caribbean Society committee to make it happen.

Toni interviewing Stormzy at the Motherland Conference 2018

Talking to Stormzy was an amazing experience. I’d had the privilege of being on the selection panel for the first Stormzy Scholars, so it was exciting to be interacting with the man behind this incredible initiative. We talked about everything from his music and personal story, to politics and mental health. He emphasised the role of foresight in everything he does: sensing trends, planning and positioning himself accordingly, and believing unwaveringly in the outcome. Outside of our Q and A, he was very gracious, incredibly witty and even obliged me by taking a selfie with the audience.
Selfie with Stormzy

Motherland was a new introductory event for people interested in learning more about the African continent or the Caribbean, but who didn’t really know where to start. We brought together people from culture, politics and business. We just wanted to see what it would look like for those ideas to kind of flow together. The morning after the event, I gave an interview on BBC Radio 4’s Today programme with Cambridge’s Vice-Chancellor, Professor Stephen J Toope. Afterwards, the Vice-Chancellor said he had a project he’d like me to be a part of. A couple of months later I was asked to sit on the Advisory Group for the enquiry into the University’s role in Legacies of Enslavement, which I accepted. It was brave of the VC to proactively tackle this sensitive subject, and I’d love to see this proactiveness ripple throughout the University. While I can’t talk on behalf of the Legacies of Enslavement board, I can share my hopes for the outcome of the work that is taking place.

Advocacy comes with the territory when you take on a role, as I did, like President of the African and Caribbean Society (ACS). ACS exerts a surprising amount of influence within the University and the media. This means that you can advocate for an improved experience for members of your community but also comment on wider issues. I still find it funny how much people are prepared to listen to a bunch of 19-year-olds in a university. But they are, and so it’s important to use this position, and the platform it gives you, well. Access is growing rapidly at Cambridge. Everyone I’ve met and sat on boards with genuinely cares about diversifying the University and is making great strides to do so. However, there’s still a need for an enclave where people with a cultural affinity can be together, especially given the inherent difficulties of being black at Cambridge. First and foremost, ACS serves as a community to make you feel valuable and welcome.
As a black student at Cambridge, there are things that you just have to accept. For example, there are artefacts that are valuable to your culture that are in the University. There is an awareness that the College you’re sleeping in has probably taken donations from the oppression of people that look like you, and the curriculum doesn’t necessarily reflect some of the knowledge that is inherently from the culture you come from. These are things that students campaign on but cannot always afford to labour on excessively, with the pressures of the degree. They are things you are conscious of as you study here. I would welcome seeing a commitment to diversifying the academics and promoting black research at the University. In terms of continuing to drive access, getting more black students into Cambridge is really important. I think increased financial support would make it possible for more black British students and especially students from African nations or the Caribbean to study here. I’ve recently co-written a paper that seeks to quantify the economic contributions of black Britons to the country, with Maro Okiti (Trinity Hall 2019), Tabitha Balogun (Jesus 2018) and Demi Akinjide (University of Bristol). We look at how the economy can be re-imagined so that it is inclusive for all British people. The paper also highlights the worsening effect that COVID-19 could have on the black wealth gap.

“Thank you DoGood Africa” Fishermen in Makoko by Oluwabusayomi Sunmonu

Economic empowerment is the overriding theme of everything I work on, whether it’s in Britain or in Africa. A couple of years ago, a mentor of mine asked if I would like to join the founding team of DoGood, a not-for-profit organisation he was setting up which would use crowdfunding to provide essential financial and technical support for social impact organisations in Nigeria and beyond. We ran a campaign and raised $8,000 (~ £6,223) in ten days.
This was used to buy 12 fishing boats, which we donated to the fishing community in Makoko, which is the largest slum in Africa. We’re now building schools out of plastic bottles, bringing electricity to rural communities and providing orphans with digital skills. Travelling to Makoko was such an impactful experience that I chose it as the subject of my dissertation. I wrote about the historical, political and economic factors that had prevented the slum from improving and how we could bypass these to help the community develop. Inspired by an internship in the economic team of the Office of the Vice President of Nigeria, at the end of my second year I co-founded a FinTech start-up with my friends, Nyasha Fraser-Yerro and Ayoola Oluwanusin. We focus on empowering small businesses in Nigeria. This involves building an ecosystem of products that will make running a small business easier, making it possible for the average Nigerian to build up prosperity for themselves.

Toni in Makoko by Oluwabusayomi Sunmonu

In March this year, just before lockdown began, I unexpectedly became CEO of our start-up, Nomad, and secured investment for a six-month incubation period for the business. This meant that for the next six months I was waking up at 4am to pray and go for a run, then I’d study from 5am–9am while the house was quiet, and then from 9am–5pm I would be working on building the business. It was a crazy few months, but it taught me a lot about adaptability and resilience. Graduating with a First class degree showed me that anything can be achieved with strategy and focus. I’m writing and self-publishing a book, The Game has changed, about studying at Cambridge, leadership and starting a business. I hope to be able to help young people develop skills for a dynamic world of work and the uncertainty of a rapidly changing economy. I’m passionate about representing my community and talking about African issues, but it’s not the entirety of who I am.
There are lots of things I’m excited about that I’m doing from a business and technology perspective. There’s a cool Toni Morrison quote: “The very serious function of racism, is distraction. It keeps you from doing your work. It keeps you explaining, over and over again, your reason for being… None of that is necessary.” Ultimately, I believe in a world where everyone gets an opportunity, where we can all enjoy peace and prosperity. I can’t wait for the day when I don’t have to talk about race, when I can just do my work and, hopefully, let my contribution to that vision speak for itself. This profile is part of our This Cambridge Life series, which opens a window on to the people that make Cambridge University unique. Cooks, gardeners, students, archivists, professors, alumni: all have a story to share. Interview: Charis Goodyear.
https://medium.com/this-cambridge-life/the-student-entrepreneur-who-interviewed-stormzy-about-race-and-privilege-c19a816ab314
['University Of Cambridge']
2020-10-23 13:20:42.618000+00:00
['Alumni', 'Cambridge University', 'Race', 'Technology', 'Startup']
Sneak-Peak into Autonomous Vehicles
If you have noticed, the technological advances of the past decade have mainly focused on Machine Learning, Artificial Intelligence, and Computer Vision. Given these advances, it is natural to want to automate the most basic requirement of all: transportation. Isn’t that quite exciting? Think about a time when we can travel anywhere without having to drive. Just get into the vehicle, choose a destination and you will be right there. Quite relaxing and effortless, right?

The Future

The race towards autonomous vehicles has already begun with developments in the electric mobility sector. There has been a significant shift in the automobile industry from IC engine vehicles to electric vehicles and, in the future, to autonomous vehicles. The baby steps towards an autonomous mode of transport have already been taken, in features such as automatic transmissions, automated parking systems and collision detection. One of the most important capabilities of an autonomous vehicle is vehicle-to-vehicle communication: an AV should be able to communicate with other vehicles, helping to prevent road accidents and allowing it to detect traffic and navigate accordingly. In simple words, an autonomous vehicle, also known as a self-driving car or driverless vehicle, is a mode of transport in which the vehicle senses its surrounding environment and acts accordingly, moving with little or no human effort. Autonomous driving systems span car navigation, localization, electronic maps and map matching, global path planning, environment perception (both radar and visual), perception of vehicle speed and direction, and vehicle control. A driverless car must be able to sense accurate data, detect other vehicles and control itself based on road conditions. Driverless vehicles require deep machine learning with many computational stages to enable the neural network to learn and execute the best course of action.

Goals to an Autonomous Vehicle

In your normal daily car, ordinary cruise control helps with long-distance driving, along with power steering and automatic gears. Adaptive cruise control, which keeps a safe distance between you and other vehicles or obstacles, and lane keep assist reduce driving fatigue. Adaptive cruise uses radars and cameras to automatically apply the brakes in slow-moving traffic and resume speed when the traffic clears. Lane keep assist will try to keep you in your lane but still requires the driver to be in control. The Toyota Corolla and Nissan Sentra offer intelligent cruise control for driver assistance.

Have you ever wondered what happens if you lose control over your speed? Level 2 automation can assist in controlling speed and steering even while the driver retains steering control. It provides steering-centering assist and maintains the distance between you and the vehicle in front during stop-and-go traffic. Tesla Autopilot, Volvo Pilot Assist and Audi Traffic Jam Assist are some examples of Level 2 autonomous capabilities.

Level 3 is a future technology in which vehicles are considered capable of driving themselves, but only under ideal conditions, on limited-access divided highways, and at certain speeds. A driver may still have to keep their hands on the wheel, but it is a step towards a fully autonomous car. The next-generation Audi A8 is expected to be the first to market with a Level 3 autonomous driving system. Level 4 autonomous vehicles are supposed to drive themselves to known destinations without human interaction. The day is not far off when we will see driverless cars on our roads!

At the highest level, vehicles should be capable of monitoring and maneuvering through all road conditions with no human intervention, eliminating the need for a steering wheel and pedals. The building blocks of driverless cars are on the road now. Google’s self-driving car project, Waymo, meets the requirements of collision control, cruise control and driverless systems. By 2030, self-driving cars are expected to create $87 billion worth of opportunities for automakers and technology developers. The biggest tech firms and car companies will end up investing in autonomous tech to lead the way to a different future.

If you like the idea of working on new-generation vehicles that are going to be driverless, this is the right career for you. To start down that path, you should understand autonomous vehicles, their working principles and their components. Join our self-paced course that takes you from the basics to an expert level on Autonomous Vehicles. The course enables you to build your own self-driving cars with technologies used by Google, Tesla, and Ford!
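The adaptive cruise behaviour described above (hold a safe following distance, brake in slow traffic, resume when it clears) can be sketched as a constant time-gap controller. This is an illustrative sketch only; the gains, clamping limits and the 1.8-second headway are assumptions of mine, not values from any production system.

```c
/* Constant time-gap adaptive cruise sketch. Commands an acceleration that
 * closes the error between the actual gap to the lead vehicle and the
 * desired gap (headway time * own speed), while also matching the lead
 * vehicle's speed. All parameter values are illustrative. */
typedef struct {
    double headway_s; /* desired time gap in seconds, e.g. 1.8 */
    double k_gap;     /* gain on gap error */
    double k_speed;   /* gain on relative speed */
} AccParams;

double acc_accel(const AccParams *p, double gap_m,
                 double own_speed_mps, double lead_speed_mps) {
    double desired_gap = p->headway_s * own_speed_mps;
    double gap_error = gap_m - desired_gap;             /* >0: too far back */
    double rel_speed = lead_speed_mps - own_speed_mps;  /* >0: lead pulling away */
    double a = p->k_gap * gap_error + p->k_speed * rel_speed;

    /* Clamp to comfortable acceleration and braking limits. */
    if (a > 2.0) a = 2.0;
    if (a < -3.0) a = -3.0;
    return a;
}
```

At the desired gap and matched speeds the command is zero; closing fast on a slower lead vehicle produces braking, which is the “automatically apply brakes in slow-moving traffic” behaviour in miniature.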
https://medium.com/nerd-for-tech/sneak-peak-into-autonomous-vehicles-54b37f34cd4b
['Rinika Paul']
2021-03-07 13:55:30.029000+00:00
['Technology', 'Autonomous Vehicles', 'Engineering', 'Future', 'Machine Learning']
4 Things About Software Development Today That Would Surprise the Coders of the Past
Computers have now been in our lives for decades — and for most of that time, we’ve been watching them and trying to predict the future. Some changes weren’t hard to anticipate. We knew that faster, more powerful hardware would drive languages up the ladder of complexity. We predicted the way that code would infiltrate the devices around us, replacing all-in-one personal computers. And we’ve spent years anticipating the seismic shifts of virtual reality and artificial intelligence. But other changes came as more of a surprise. Here are four things that the pioneering programmers of the past might not have expected.

1. Programming languages are still text

In 1978, Brian Kernighan wrote the world’s first “Hello World” program, not knowing how famous his code would become. His starter example is preserved as a museum piece:

The first hello / WikiCommons

Today, nearly a half-century later, you’ll have no trouble recognizing the syntax of this basic C code. Which tells us something: programming languages are conservative. Even though language churn is real, even though new APIs and frameworks spring up overnight, the mainstream shifts slowly. Object-oriented programming was invented with Simula in the 1960s. It rose to prominence with C++ in the 1990s, and — despite some worthy challengers — it’s still the dominant paradigm today. In fact, one of the most striking features of modern programming is that so many of the world’s most important languages share a similar C-inspired syntax, from C# to Swift, Dart to Go, Java to JavaScript. The more things change, the more they stay the same. Programming luminaries have predicted for years that the way we write programs would change. We’ve been expecting natural-English languages that anyone could use, then flowchart-style visual languages that wouldn’t have any text at all, then AI-powered languages that would write themselves.
Interesting experiments and niche tools aside, we’re still writing code in more or less the same way we did a generation ago.

2. Most software is cheap or free

Twenty years ago, installing a new piece of professional software meant spinning up a CD, clicking through a setup wizard, and watching one slow-moving progress bar after another. Some setup programs would cheekily suggest you use the wait to grab a coffee. Installing software also meant shelling out serious cash, assuming you wanted a legitimate license. Consider this fine software specimen, on which an empire was built:

Microsoft Office, 25 years ago

Office 95 could be yours for a couple of hundred dollars, if you qualified for the right promotion. But a business that needed a new, full copy of the professional version would be on the hook for $599. And that was back in 1995. In inflation-adjusted dollars, buying Office 95 today would cost an eye-watering $1,033. (At the time, businesses still saw it as something of a deal because it bundled together several applications in one package and underpriced its most serious competitor, Novell PerfectOffice.) For corporate developers, the story was even more expensive. Most Microsoft developers either purchased an MSDN subscription from Microsoft (for several thousand dollars) or worked for a company that did, because of the tremendous cost of buying operating systems, development tools, database software, and other tools separately. Even the much-loathed Microsoft FrontPage was introduced with a list price of $699.

Today, that model has died. What happened? First, open-source software won the development world. It had the most enthusiastic community, some of the world’s most valuable pieces of software, and a price that was impossible to beat (free). Then it spread to the business world, where companies are now more likely to pay a metered usage cost through something like Amazon Web Services or Azure than buy seat licenses.
And then it spread to the consumer world, with tech companies that give their software away and sell other people the chance to put ads in front of their audience. Some companies still survive on a software-for-money model (Adobe comes to mind, along with many game developers). But these tend to be small or mid-sized software companies. The big players like Microsoft, Google, and Apple make increasingly small amounts from software sales. Some, like Facebook, don’t even have a way to charge for their software.

3. Desktop apps are just a special type of web app

Ever since the internet first reached the public consciousness, sometime in the early 1990s, we’ve been expecting it to change the world. As a result, many of the transformations it triggered were small surprises, even though they tore down and remade fundamental parts of our world. If you’ve been around long enough, you’ve seen the decimation of the music industry, the remaking of commerce, the death of print media, and the rise of distributed social groups for sharing news and information. All of these shifts were monumental, but none of them was unanticipated. Another change that wasn’t too surprising was the steady erosion of desktop software in favor of web apps. Once it became possible to do something with JavaScript in a web browser — with universal compatibility and no installation process — it quickly became preferred. This process was repeated over and over again across different domains. Of course, desktop applications were still important for some capabilities and some professional niches… or were they?

Enter Electron. With the help of Node.js, Electron created a desktop shell wrapping the client web stack (JavaScript and the DOM) and the web server stack (with some extensions for plugging into operating system APIs). If that sounds like some serious overhead — you aren’t wrong. Even worse, every Electron app gets its own distinct copy of this heavyweight execution environment.
It sounds like a bloated nightmare, but computing power and memory were already so plentiful that developers could afford the overhead. The rest is history — Electron became a professional-quality tool suitable for building applications like Atom and Visual Studio Code. Of course, native desktop apps aren’t all dead. But Electron is thriving, and there’s a new wave of technologies coming that help developers hide desktop applications inside web applications. WebAssembly has broken the door down, leading the way to new projects like Blazor. The world has inverted. Back in the early days of the internet, we knew that the performance gap between desktop and web would shrink. We knew that the concerns of the day (the slow interpreted nature of the JavaScript language, the extra layer of the HTML DOM, the non-native graphics API) would become less important. Some of us even knew that every type of app would eventually be made to run in a web browser. But did anyone realize that the advantages of this development model — its broad reach, and the lure of using the same technology stack everywhere — would lead us to put desktop applications inside fake, virtualized web browsers and web servers? And that the cost of that design would become almost trivial?

4. We lost the security war

In the early 2000s, we were looking at new ways of computing — and steadily increasing security risks. Untrustworthy apps could install viruses. Hackers could crash websites, and steal credit card data, documents, and emails. Identity fraud was difficult to fight. Fast forward to today and the situation is… basically the same. Data is still stolen. Networks are still hacked. The only challenge of early-internet computer security that we’ve come close to solving is that of viruses, because we’ve created giant sandbox models for games and most consumer apps. But with universal global computing and the Internet of Things, the perimeter we need to defend has grown impossibly wide.
Security experts know that foolproof security is never possible. Instead, we strive for something called defense in depth — a layered sequence of defense measures that force attackers to thwart several systems to steal our secrets.

Defense in depth in a computer network

But here’s the problem. In the last two decades, we’ve failed to shore up the weakest link: the end user. Even though we’ve developed technologies for biometrics, we’re more likely to use facial recognition to index our Facebook pictures than to protect our data. There is still no practical way to prove identity between people. We have facilities for encryption that we rarely use on our own data or communication. And here’s something that would stun the security expert of two decades ago: mysteriously, text passwords with arbitrary rules about capitalization and special characters are still a thing. It’s true, of course, that security is an ongoing struggle, and that human nature is hard to change. But when billions of dollars can be stolen simply by asking, we’ve failed to protect our end users. And how can we explain when the world’s most technologically advanced country holds an election that’s influenced by a low-tech phishing attack?
https://medium.com/young-coder/4-things-about-software-development-today-that-would-surprise-the-coders-of-the-past-41e9a2e389ca
['Matthew Macdonald']
2020-04-13 15:23:43.153000+00:00
['Technology', 'Security', 'Software Engineering', 'Programming', 'Computer Science']
Role of Technology during Covid-19 Pandemic
Role of Technology during Covid-19 Pandemic The covid-19 Pandemic has wreaked havoc on the whole world. Our lives, both personal and professional have been greatly impacted. During this time of fear and uncertainty, our willingness to adapt to technology has been our lifeline. Pranav Mohan Jul 5·2 min read Covid-19 has directly impacted close to 1.45 billion students as schools and colleges have shut down. Most educational institutions have started to offer online courses to make sure that education is not disrupted due to the pandemic. The online grocery market has witnessed tremendous growth as people are preferring home delivery to stay safe. Online teaching and work from home is a blessing that comes due to technology and is the greatest solution that helps in social distancing. Work from home has ensured business continuity. But just like the sides of a coin, technology has its own minus points. We have become more dependent on technology during the pandemic. More exposure to mobile radiation is increasing the cases of eye problems like myopia. People are watching movies late at night which affects their sleep cycle which results in fatigue and sleepiness. People are falling prey to fake news which is one of the factors in the increase of Covid-19 cases. An increase in digital gadget use is causing reduced physical activities. Increased use of technology by children can cause low creativity. I do not mean to say that we should not use technology. But we should not let it rule over us. The pandemic has hit labour-intensive sectors like food, retail, logistics and manufacturing businesses. Robots are used to clean infected areas and for delivering food to quarantined people. Swab collection is conducted to help medical professionals. Thanks to technology, we can develop vaccines. Thermal guns are used to measure the temperature of people. Technology has helped in three medical activities- diagnosis, surveillance and prevention. 
Covid-19 has proved that technological innovations can help in managing a pandemic in a timely, systematic and calm manner. In the end, I would like to add that technology is steadily advancing and will undoubtedly continue to grow exponentially. It's we humans who have to adapt to changes in technology faster and continue to invest in building the technology systems for better preparedness. I hope you like this blog. Don't forget to share your suggestions in the comments section. Do share this blog. Visit my blog website: https://dailyboxchat.blogspot.com/
https://medium.com/@pranav-mohan365/role-of-technology-during-covid-19-pandemic-7b2c309fa157
['Pranav Mohan']
2021-07-15 04:25:56.771000+00:00
['Technology', 'Covid 19', 'Blog']
1,706
The Biggest Lie In Open Source
It Can't Be a Source of Income

Open source software is free. Therefore, its maintainers and creators can't make a living out of it. Wrong. At first glance, open source software is free for its users, but there is a lot more to understand before saying it can't be a valid source of income. Like with any digital product, making money is all about your business model and the marketing strategy behind it. If you're interested in making money from open source projects, here are some ideas for you to consider.

Selling professional services

This is the most common one. As I've said before, people tend to assume that because you've built a project and published it to the world, you need to support it 24 hours a day. That's not only untrue, but it's definitely a whole different area of work. So why not charge for it? In fact, why not charge for training as well, and even provide support for companies trying to use your free product? Those are what we call professional services (services meant for companies using your product). There are big examples of open source companies doing this exact thing, such as Red Hat, IBM, Hortonworks (around Apache Hadoop), and Percona (for their open source database).

Selling related content

How many books have you seen (or even read) about React or PHP? The books weren't free, were they? If you managed to build an open source project that people like and use, then you can make money by selling those people products that teach them how to use it. This is similar to the professional services model, but that one implies you're personally involved (thus allowing you to charge higher rates), whereas with products you can build cheap alternatives that are accessible to non-company users (i.e. developers trying to use your code). Even if you're not the project's author, you can benefit from its success by doing this exact same thing: you're building products around an open source project (just not your own).
We're talking about writing books about it, creating video courses for platforms such as Udemy, or even writing sponsored blog posts about these open source projects. Why not? Sometimes, authors will be willing to pay you money to write about their projects.

Donations

You can make money from people donating to your cause. Don't be afraid to ask for money. As long as it's done tastefully, it's definitely a valid income option, and if you've built a project that a big community is using, you'll be surprised at the results. Looking at projects such as Git, you'll see that they do receive donations from anyone interested in their cause. It's all about the reach your project has and the community built behind it. If it's big enough, then there is probably a way to make money out of it. There are many other ways you can go about building income from your open source work. It's just a matter of getting creative.
https://medium.com/better-programming/the-biggest-lie-in-open-source-de38f71aa88c
['Fernando Doglio']
2020-12-22 17:15:13.480000+00:00
['Open Source', 'Software Development', 'Technology', 'Software Engineering', 'Programming']
1,707
It’s Blockchain Baby, Okay Now Baby Steps
Disclaimer: This is NOT financial advice, nor do I endorse ALL of the principles, practices or viewpoints of the people who wrote/filmed/spoke this content.

So you want to get into #crypto aye 🤔 well ❗️Be Warned❗️ at times this market is akin to a roller coaster: up, down, even sideways. There are a few things you'll need to know before you get started, such as where the good resources are, who to listen to, what all the jargon means, how to get some crypto and finally the best possible way to secure your funds. In this article I aim to cover all this and more! On that note, let's get started, but first the bad news:

Bad News

You probably won't get rich investing in cryptocurrency 💸 You can lose money instantly and irrevocably by not paying close attention. There are no lambos on the moon 🏎🌚😑

Good News

The community involved with blockchain and cryptocurrency is amazing and hilarious. The possibilities for improving the lives of people and restructuring the current inequalities in our world are HUGE. There will be cats, sooo many cats. Also rainbows and cats riding llamas with spaceships, and all of this will be eloquently captured on T-shirts for your wearing pleasure 🐈🌈👕

One of the first places I tell people to start is Reddit. Now I know what you're saying: "Reddit is like a black hole, how will I ever find anything?" Well, there are these nifty little threads called "subreddits". Here are my favorites: r/blockchain, r/cryptocurrency and r/ethereum. I actually found a great analogy on Reddit while gathering resources for this article, explaining how blockchain transactions actually function. See, it's not that complicated, it's just magic 🧙🏽‍ Seriously though, it's incredibly important for people (YOU) to understand what's happening at the moment and what is at stake for the parties involved; make no mistake, big changes are headed our way.
There is a LOT of misinformation about crypto, so it's important to get your facts straight; this was recently published in the Mirror UK… Please don't be one of these people 🤦🏻‍

You Speak Blockchain 🗣

The following list will help you familiarize yourself with basic blockchain terms. For a deeper dive into terminology, or when you come up against words that make you say "what the eff", see The Decryptionary 🛠

Blockchain: an incorruptible digital ledger of economic transactions that can be programmed to record not just financial transactions but virtually everything of value, more on blockchain.

Node: a computer connected to the blockchain network using a client that performs the task of validating and relaying transactions.

Mining: the process used for securing the blockchain network and validating transactions; essentially 24/7 computer accounting called 'verifying transactions', more on mining.

Web3.0: many things, but primarily a way of referring to the complete reinvention of the web, more on Web3.0.

DApp: an abbreviated term for a decentralized application that runs on a peer-to-peer network, more on DApps.

DAO: an abbreviated term for decentralized autonomous organization: a business or organization whose decisions are made electronically by written computer code or through the vote of its members, more on DAOs.

Decentralization: as described by Webster, "the dispersion or distribution of functions and powers", but this definition has grown to encompass a field much wider than power; I recommend reading more on decentralization and really getting a feel for what it's all about.

Remember, the more you read the more you know (tweets don't count), and in this space the more knowledge you have the better informed you will be about trades, investments and changes in the market. Think of this as dipping your toe in the pool, because it barely scratches the surface.

Who's Who Round Here

Now that you know what's what, you need to know who's who. I like to start with the highly meme-worthy Vitalik Buterin, founder of the Ethereum blockchain. He's known for his amazing wardrobe and ruthless criticism of ICOs and other projects he does not support; see Twitter go wild. The BBC even made a nice "intro to crypto" video with him and some others from the Ethereum Project! For a slightly more technical intro with Vitalik, see this TechCrunch interview.

The second notable figure is Satoshi Nakamoto. Satoshi created the first digital currency, known as Bitcoin. Bitcoin currently has the largest market capitalization of any coin, an astounding $151,640,096,037 as of 04/26/2018; you can check this yourself at Coin Market Cap. The thing is that no one really knows who "Satoshi Nakamoto" is, because he has never publicly revealed his identity. Many people believe it is the man in the image to the left, but he fiercely denies it.
There's a slew of conspiracy theories about who he/they are, but for the sake of staying on track I'll just give you this one. There are plenty of other folks who gladly claim credit for their work though; for starters, Andreas Antonopoulos, Omar Bham AKA Crypt0News and Bettina Warburg on Twitter.

The next subject is storage and exchanges: what are they, which ones to use, and how to protect yourself from the gov'mnt, "ooops did I say protect yourself, I meant comply with" 😇 The first thing you need to know is that if you are using a website (aka an exchange) to store your coins, then you are putting them at risk. Exchanges are huge targets and they get hacked all the time. You want to hold your own keys, so I suggest getting a Ledger Nano S, which is a hardware wallet. This is by far the most secure way to store and hide your crypto; just make sure you back it up very well. You will get a string of 24 words that you need to store somewhere safe. Ideally you put them in 2 safe places!

Next up, markets: like I mentioned above, it's NOT a good idea to keep your coins on an exchange long term, but markets are necessary for trading/selling currency when you need to. I understand not everyone can just #HoDL for life, although I wish they would. The fastest way to get started is to open a Gemini account. Deposit cash, then buy ETH/BTC/… whichever you choose. Then, when/if you're ready to start trading, head over to Bittrex. You'll have to set up an account and register your identity and everything, so be prepared with your documents. Also, make sure you set up a 2FA security log-in, which is an additional log-in code that is usually sent to your phone or email. Remember, all these transactions are now being registered with the IRS, so you should "probably" consult a tax agent at some point.
If you want to go off the books and avoid exchanges, then something like localbitcoins.com is a good option, and once again Reddit is your friend: ask questions, read other people's findings, or even go out and find others who want to buy currency from you. This will serve 3 purposes:

#1 Helps you avoid paying taxes (which is illegal and I would never advise you to do… 🤫)

#2 Allows you to pull some profit from your investment, which reduces your exposure to loss.

#3 Brings others into the crypto space, which expands the market for all of us! Think about it…

At this point the question you need to be asking yourself is, "Why do I need to know this?", and the answer will vary wildly from person to person. I chose to get involved with crypto because at the time it seemed exciting and interesting, during a period when job satisfaction was low and desire for novelty was high. This might not be a good enough reason for you to go any further, but I recommend figuring out if you have one and writing it down. Things can move really fast here; with an unlimited amount of lambo memes and ICO scandals, it's important to remind yourself what you're here for. There are many schools of thought about what this decentralized movement is all about, so decide for yourself. The most powerful motivator of all is your own will power: if you believe in something, you can overcome great odds to achieve it.
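The glossary's "incorruptible ledger" and "mining" ideas can be made concrete with a toy hash chain. This is purely an illustrative sketch, not how Bitcoin or Ethereum actually work in full: each block stores the previous block's hash, so tampering with any block breaks every link after it, and "mining" here is just searching for a nonce that makes the block's hash meet a difficulty target.

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents deterministically.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def mine_block(prev_hash, transactions, difficulty=3):
    # "Mining": find a nonce so the hash starts with `difficulty` zeros.
    nonce = 0
    while True:
        block = {"prev": prev_hash, "txs": transactions, "nonce": nonce}
        h = block_hash(block)
        if h.startswith("0" * difficulty):
            return block, h
        nonce += 1

def valid_chain(chain):
    # Every block must point at the previous block's real hash.
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev"] != block_hash(prev):
            return False
    return True

# Build a tiny three-block chain.
genesis, g_hash = mine_block("0" * 64, ["genesis"])
block2, b2_hash = mine_block(g_hash, ["alice -> bob: 5"])
block3, _ = mine_block(b2_hash, ["bob -> carol: 2"])
chain = [genesis, block2, block3]
print(valid_chain(chain))   # True

# Tampering with an earlier block invalidates everything after it.
chain[1]["txs"] = ["alice -> mallory: 500"]
print(valid_chain(chain))   # False
```

On a real network, thousands of nodes each run this kind of validation independently, which is why a tampered copy of the ledger gets rejected.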
https://medium.com/coinmonks/its-blockchain-baby-okay-now-baby-steps-539f208da02c
['Yalor Ethernal']
2018-04-29 12:55:56.098000+00:00
['Cryptocurrency', 'Bitcoin', 'Ethereum', 'Blockchain Technology', 'Blockchain']
1,708
API3 Enterprise Development Report, August/September 2021
This is the first progress report for the Enterprise Development team for DAO Cycle #4. For our avid API3 readers, the Enterprise Development team's goal is to target large organizations for partnering with or adopting API3 technologies, while assisting enterprises in solving their use-cases and proofs of concept (POCs). These requirements pivot around bringing real-world off-chain data to the on-chain environment, which involves long sales cycles, multiple decision-makers, and higher levels of risk. An enterprise-first mindset is needed, whereby the enterprise's needs, concerns and monetization models are centre stage throughout the Enterprise Development process. In addition, the Enterprise Development team pursues highly strategic and important areas that touch all verticals in the API3 DAO. These relationships lay the foundation for growth and amplification while thinking holistically from data source through to dApp.

August Update

The first month of the cycle was focused on driving and nurturing relationships with large insurers and market data providers. Being patient while understanding the customers' needs and concerns has paid off in terms of additional interactions taking place over the month and into September. This indicates a growing exploration and understanding of how enterprises are entering the blockchain space. Concurrently, the start of the cycle resulted in identifying new opportunities to drive enterprise customer interactions. The Enterprise Development team remains agile while keeping scaling in mind, so that when the need arises, the team can scale in line with the required capacity while being nimble enough to add value in closing gaps internally and externally. The team has also provided cross-team support in closing high-ticket items and partnerships, and will provide negotiation facilitation and internal working groups across DAO streams.
The Enterprise Development team attended the API Days conference with the Open Bank Project, speaking about insurance and open banking. This resulted in a further interaction with a large automotive company.

The Open Bank Project

API3 has a long-term partnership with the Open Bank Project (OBP). This mutually beneficial partnership aids in growing traditional financial institutions' adoption of Web 3.0 technologies. We are also collaborating with OBP to bring Airnode-powered hackathons to these financial institutions, which assists in use-case development around connecting real-world data sources to dApps and smart contracts. Updates will occur when ready, due to the importance of such partnerships.

Over the next month (September)

Our enterprise customers have signaled that they would be interested in API3 providing end-to-end dApp development and data sourcing; however, API3's core business is in data sourcing and connecting the API economy through the Airnode, or first-party data oracle. For this reason, we will reach out to implementation partners that can help API3's enterprise partners build solutions, while powering these dApps and smart contracts with the API3 Airnode. Over the month we will therefore be looking to partner with at least five implementation partners through whom we can provide end-to-end convenience for enterprise customers, while bolstering the adoption and use of the API3 Airnode.

API3 is the only data oracle that is General Data Protection Regulation (GDPR) compliant, extending the envelope of data compliance. This is built for enterprise purposes while preserving and ingraining privacy-by-design in the Airnode. Given API3's focus on placing privacy and data control at the fingertips of the API provider, there is also a need at the enterprise level to investigate ISO27001. ISO27001 (formally ISO/IEC 27001:2005) is a specification for an information security management system (ISMS).
An ISMS is a framework of policies and procedures that includes all legal, physical and technical controls involved in an organization's information risk management processes. In a concerted effort, the Enterprise and Operations teams see this as an organization-wide augmenting activity for API3. This certification complements GDPR while adding a seal of approval on compliance standards, allowing enterprises to be more confident because an international standardization organization has vetted the quality assurance of the processes and the Airnode product. Furthermore, an exploration of a large cloud provider integration will be done, along with the process requirements, in order to further the accessibility and ubiquity of the Airnode as a plug-and-play solution for enterprise-grade deployments. The scoping and requirements phase will commence this month.

The Enterprise team has also tapped into venture capital networks for warm introductions in order to grow meaningful partnerships. In addition to warm introductions and networks, the team is focusing on customized outreach, uncovering enterprise needs and curiosities when it comes to data oracles. The team will also attend other virtual and face-to-face conferences in order to increase touch-points with enterprises.

Already thinking about the next DAO Cycle #5

There has been a shift in the organizational needs to drive enterprise adoption. In the next DAO cycle, there will be a consolidation of the business development team, with some team members focusing on enterprise development.

Is your company looking to build on blockchain?

If you are an enterprise looking to explore and delve into the blockchain space, we would be more than excited to do a use-case exploration session to see if we can work together. Reach out to us at the following email address: gio@api3.org, or you can also reach the API3 team on the API3 Discord and Forum! Know an API data provider that needs to join Web3.0? Click Here!
https://medium.com/@gio_lesna/api3-enterprise-development-report-september-2021-da09e065121
['Gio Lesna']
2021-09-16 14:31:50.001000+00:00
['Api3', 'Data', 'Api Gateway', 'Enterprise Technology', 'Oracle']
1,709
Midwestern Business Challenges and Opportunities We’re Following Into 2021
At Heartland Ventures, we often see things from a different perspective due to our work connecting large Midwestern companies with technology being developed on the coasts. Here are some trends we're following as we close one year and start another.

Sustainability is hot, but Midwestern businesses aren't moving as quickly as the national and international dialogue

Our small to medium Midwestern business partners seem to agree that they aren't incentivized at the moment to be first movers on sustainability measures. Most business-to-business operations aren't feeling the full force of consumer demand, and the financial cost of making a dramatic shift in business models remains uncertain. As policy debates heat up, it's likely that large enterprises will need to make the first moves and pave the way for the rest of America's economy. Change is coming; it's just a matter of timing.

"Made in America" is set for a huge comeback in Middle America

With a global pandemic, the rise of nationalism, and trade wars, American industry has seen a resurgence. We firmly believe that American manufacturing will grow stronger. The dropping cost of automation will enable American businesses to become cost-competitive again, and the connectivity of even the smallest operators will create supply for ever-increasing demand. This year, the risks of international multi-channel supply chains became blatantly obvious. The importance of supply chain transparency and knowing your suppliers (and your suppliers' suppliers) demands that businesses have contingency plans for even the most dire situations. Interstate has become safer than international.

"Can't find the right people" is a regular lament from SME employers

The demand for skilled workers is the number one "pain point" we hear about from employers in the Heartland. Structural unemployment, meaning the mismatch between the skills that workers in the economy can offer and the skills needed by employers, continues.
Smaller markets like Elkhart, IN, home to over 80% of recreational vehicle manufacturing, are not only fiercely competitive on product and distribution, but also have to draw from a limited labor pool. Many employers have been forced to nix normally accepted hiring standards, such as background checks and drug testing. We believe this leads to further incentives for automation in manufacturing facilities, an increase in training and upskilling for non-task-oriented work, and a dramatic shift in internal culture to promote employee retention and development. Turnover in some facilities has been as high as 400% per year, which doesn't help the employer who needs to continue production or the employee who needs a steady job.

Safety in the workplace is essential and now part of corporate strategy

Since late March, worker safety has vaulted to the top of the priority list for almost all American businesses. Whether you're a fast-growing technology company forced to WFH or an industrial corporation closely monitoring the health of thousands of in-person workers, you've been forced to make shifts and investments in procedures and protocols… quickly. StrongArm Tech, one of our portfolio companies, created the new Fuse Flex, which is smaller than a playing card but will keep millions of Industrial Athletes safer on the job. Health and safety leaders in many organizations now have a seat at the table when discussing corporate strategy and future planning, and we have heard that this will be a mainstay given the increased demands on operations and the need to scale up to meet them effectively. We anticipate that worker safety and optimizing conditions for peak performance will remain a key initiative for most industrial companies in 2021.

Privately held enterprises will move slower than publicly traded companies on diversity measures (but Heartland is here to help move them along!)
Nasdaq's newly proposed regulations signal that board diversity may no longer be optional for publicly traded companies. These are welcome actions. Diverse companies are statistically more likely to have greater financial returns than their less-diverse industry peers. Diversity of background, gender, ethnicity, race, and thought leads to creative problem solving and stronger companies. We believe adoption will likely be slower among privately held companies than among their closely watched public counterparts. In many cases, family-owned companies and single-shareholder companies are unlikely to make changes to their boards. However, with calls for racial justice, equity, and representation throughout the year in the US, customer voices and leveling policies will drive nationwide change in the future.

Flexible WFH options will become more universal post-pandemic and will force Commercial Real Estate to 'get creative'

The most obvious business challenge to emerge during the pandemic has been the necessity for many people to remain at home and set up home offices or virtual meeting spaces. It's tough to build a sense of camaraderie and energy outside of the traditional office setting. The Washington Post reports that 'head of remote work' is becoming a hot job title. Companies will need to grapple with all of this and devise ways to train, onboard, upskill, and inspire workers remotely well into next year and beyond. This will unquestionably have an impact on the commercial real estate industry, where many of our partners have indicated: "now is not the time to panic, but a great time to get creative." Large anchor tenants in buildings should remain, but we can also expect to see alterations to the way that space is leased, the length of these leases, the design and layout of the facilities, and the amenities offered.
Automation will continue to change how the world works and our future

Automation has already drastically changed our lives and how many people work across the world. It's not limited to robotics and heavy machinery: process automation and the digitization of manual workflows are becoming more apparent with our corporate partners and how they make their products. Since it's so difficult in many industries to recruit, train, upskill, and retain strong workers, employee time needs to be used as efficiently as possible to process orders and meet demand. At Heartland, we consistently look at deals that seek to automate almost anything, from insurance paperwork to ordering a pizza, and we expect more startups to find ways to automate business practices and optimize worker time.

E-commerce logistics hubs are sprouting up across the Midwest

The connectivity of the Midwest is a huge economic driver. Heartland's LPs are predominantly spread throughout Indiana and Ohio, both states with big logistics advantages. Indiana ranks first in shortest distance to the median center of the US population, and Ohioans can reach 60% of Canadian and American customers within a day's drive. This makes the Heartland a natural place to strengthen logistics and distribution hubs. This "Amazon-ification" is changing how people shop and purchase goods, and also the economies of many American communities. We're hearing from our construction partners that more e-commerce hubs will be popping up across the Midwest in the years to come, and traditional brick-and-mortar retail locations will partially transition into smaller distribution hubs for national brands.

What do Midwestern unicorns mean for markets outside of Silicon Valley?

The recent Root Insurance IPO and Olive AI's emergence as another Columbus unicorn have shed more light on VC markets outside of Silicon Valley.
These moves have brought legitimacy to tech development and investing outside of California and have shown the importance of proximity to your target customer base when scaling a company. A diaspora of sorts is underway from Silicon Valley: many entrepreneurs are starting businesses elsewhere, tech talent can now live and work anywhere, and VCs and companies feel as though they can build teams and operate in new home bases. The most apparent example of this trend is Elon Musk's move to Texas, along with Hewlett Packard and Oracle. Some of these moves relate to California's tax rates, but culturally and economically, the shift eastward and southward over time will change how startups are born and take off. We're glad to be positioned as the connector between coastal tech and the American Heartland as we start a new decade.
https://medium.com/heartland-ventures/midwestern-business-challenges-and-opportunities-were-following-into-2021-ccafae29e8cf
['Max Brickman']
2020-12-18 14:46:40.053000+00:00
['Trends', 'Leadership', 'Business', 'Innovation', 'Technology']
1,710
City Manager Marcus D. Jones signs digital alliance with Microsoft
Creating a workforce of the future

City Manager Marcus D. Jones shakes hands with Microsoft President Kate Johnson after signing the digital alliance agreement.

Charlotte is using technology to improve the lives of residents, businesses and visitors. The new agreement City Manager Marcus D. Jones signed with Microsoft President Kate Johnson not only aligns with our sustainability strategy but also develops us further as a smart city. "Inspiring innovation and applying data science to make better decisions makes resident quality of life better," said Jones. "We are excited to work with Microsoft and value their commitment to our sustainable city strategy."

What is a smart city?

Smart cities use technology like sensors to collect data and use that information to manage resources more efficiently. This connection between data, technology and everyday life is also known as the internet of things. Think of things like how we use existing traffic patterns to reduce congestion. The agreement makes Charlotte the second city in the nation to take such a comprehensive smart city approach and provides Charlotte with access to Microsoft technologies to improve city services, support digital literacy and promote science, technology, engineering and math (STEM) workforce development.

Aligning with our programs

The alliance ties directly into our workforce development and Career Pathways program, creating job opportunities for all in Charlotte. We're creating the workforce of the future by creating jobs in growing technology fields. The agreement gives us an opportunity to develop new skillsets and adopt new technology. The city has also committed to community building and improving upward mobility. In addition to existing programs like affordable housing and early childhood education, the agreement also supports digital literacy. With digital literacy, we help our community learn how to use new technology to improve their lives.
What we’ll be working on The City of Charlotte and Microsoft have identified five focus areas for pilot programs that will leverage technology and education to benefit the city’s residents, businesses and aspiring entrepreneurs. These focus areas are: Upward mobility for residents Smart transit systems Public Wi-Fi connectivity Public safety infrastructure, and Safer neighborhoods. Throughout this collaborative program, Microsoft will support the city through various volunteer initiatives and will provide hands-on technology training at these events throughout the city. The public-private initiative aims to increase digital skills along the full continuum of the educational pipeline.
https://medium.com/around-the-crown/city-manager-marcus-d-jones-signs-digital-alliance-with-microsoft-844c6662fb64
['Nicole Eaton']
2019-11-15 16:30:04.146000+00:00
['Employee Engagement', 'Smart Cities', 'Workforce Development', 'Technology', 'Government']
1,711
Key takeaways from my time in civic tech
One of many virtual talks with civic tech groups, this time with the USDS! Also, crab claws.

I was ecstatic to be visiting Washington D.C. for ten weeks starting in February 2020, contributing to a cause that impacted millions nationwide and learning within a group of like-minded individuals with similar goals ahead of us. I even had a 'bucket list' folder in my Chrome bookmarks bar full of 'hidden gems' I was going to cross off in every glimpse of free time I had during the summer days in D.C. We all know what happened in the coming months, but despite the Civic Digital Fellowship becoming virtual, many of these initial hopes would still hold true. I wrote these three questions in my journal the night before the Fellowship started:

1) What resources are being put into government technology to change the stigma in our generation that civics and technology are independent of one another rather than dependent?

2) What do the full-time opportunities or career paths look like after this, aside from the Presidential Innovation Fellows, and how do we apply for them?

3) How can I further change my mindset from checking boxes in life to making an impact?

I now have a greater understanding of these thoughts through the Fellowship: how 'impact' is defined by all of us, what civic tech is, and so much more. I was paired with Vote.gov, the site that provides information for voter registration. In anticipation of the 2020 federal election and National Voter Registration Day, our team was preparing for an influx of web traffic and wanted help in displaying information to the public. A typical week would consist of me attending meetings with the Election Assistance Commission, meeting with various internal teams (analytics, UX, development), updating branches for front-end development, and finally listening in on USAGov's weekly call center live sessions.
An important discovery through these live call center sessions was developing a sense of empathy for who we’re building our products for. Especially in the private sector, it’s easy to think of the 1% using specific products. But when you’re listening in on a woman from rural Kentucky voicing her troubles applying for her stimulus check, the problems get very real, very quickly. Here’s a list of takeaways that I, a new graduate technologist, have uncovered through my time here at the Civic Digital Fellowship. 1. Understand that there are hoops and ladders This title may as well just be labeled ‘bureaucracy’. Our generation must contribute to hacking the bureaucracy; we need our generation to build upon existing frameworks and navigate through years-old laws, tech stacks, and practices that are preventing technologists from iterating and innovating. What I once thought was a nifty, unique way of incorporating a popular civic election API into our product turned into several what-ifs — for instance, our team manager needed to ask the General Services Administration (GSA)’s legal team to check the terms of service for contradictions — until several weeks passed and the passion for the feature subsided on top of piling work. I imagine similar stories of creative attempts or ‘hacker mentality’ may be echoed throughout the field. A lesson I learned through my virtual summer was that working in civic tech is painstakingly difficult yet ultimately rewarding. Julia Lindpaintner (CDF ’18) graciously pointed me to the Slack channel called #why-we-do-this during our coffee chat; there, in the Technology Transformation Services (TTS) at GSA, we set aside work topics to focus on the purpose, no matter how trivial or small, of our work. #why-we-do-this GSA TTS slack channel I mean, in how many other professional places are you reminded of the impact of your work daily? 
My supervisor Jessica said she acknowledged these obstacles as well, but at the end of the day, wanted me to remember the impact of our work. Eddie Hartwig, Deputy Administrator of the U.S. Digital Service (USDS), compared fresh eyes in government to new workers coming onto a construction site and wondering “why are these unkempt wires scattered all over the grounds, and what are all these obstacles doing in my way?” DJ Patil, the former Chief Data Scientist of the United States, told us that working in government is “90% banging your head on the wall and 10% actually seeing results.” But at the end of the day “that small percentage of impact you do have” will be more memorable in retrospect than any other work done in the private sector or academia. The list of quotes could go on and on. 2. There is a breadth of talent in civic tech, you’re just not looking in the right places A subversive thought in my mind coming into this summer was “well, the work being done must be second-rate. Since I don’t hear about it publicly, that means that no talented, intelligent people are working on public-facing problems.” This was a brash overgeneralization. Everywhere I looked, there were individuals whose intellect and careers were immensely impressive. Song Hia, my mentor and a Product Manager at the NYC Mayor’s Office for Economic Opportunity, shared an entire document of resources with me, making me realize the impact that the marketing of large corporations/tech companies has had in drawing my generation towards them. I realized I needed a shift in perspective: talent doesn’t only come from the best-marketed companies or those with the most money. 3. Impact is hard to quantify, but in civic tech our impact is undeniable One of the common lines Chris Kuang, co-founder and Director of Operations at Coding it Forward, mentions in an introduction to a virtual call is how we Fellows are looking to do more than drive ad revenue or click-through rates. 
This is something I reflected on — that instead of building products or features that service the 1% of the 1% of the population, we could be making a different sort of impact, one larger yet harder to quantify, in solving problems that include more individuals. Take the Census Bureau, for example. In counting every single person in the United States, that team is servicing (at least in theory) every citizen in the country. Every single one! It is hard for me to fathom that I grew up with technology while a large percentage of the world’s population has yet to be introduced to technology or even the internet. Instead of building out SaaS enterprise software, why not build out products to service those in need before they get left behind? 4. Many of the same areas of work found in academia or the private sector are found in the public sector Commissions are working on policy for Artificial Intelligence, defense tech companies are hacking bureaucracy around Air Force software, and so much more. So many knowledgeable, qualified individuals are supporting initiatives to help push civic tech in the right direction in anticipation of accelerating technologies. They are allowing government, data, policy, and innovation to happen in a contemporary setting. Fun fact: One of the co-founders of Django, a popular web framework based on Python, is a full-time employee in the General Services Administration working on login.gov. 5. The same roles and processes that are popular within tech companies are also in civic tech Yes, there are product managers, designers, and developers within civic tech. The USA.gov team holds 10-minute daily stand-ups that follow agile methodology with sprints. The U.S. Digital Service (USDS), U.S. Digital Response (USDR), Code for America, Presidential Innovation Fellows, and virtually every civic tech organization is looking for talented individuals with tech backgrounds who can, simply put, get stuff done. 
By the end of the Fellowship, I was satisfied with my work. I was able to contribute to multiple facets of Vote.gov: analytics, design, and development. I can only imagine how the workplace must feel now in anticipation of the federal election. As for me, I’ve realized the value of performing large-scale, public-facing work. Civic tech is a community that will forever be in my mind, and will certainly be a place to contribute to in the future. It’s up to our generation to step up in anticipation of the acceleration of technologies (Moore’s Law is scary) in all areas — policy, software, hardware — before those who can’t understand them get left behind. Regardless of the circumstances, I’m glad this experience still happened and hope to see my fellow Fellows soon.
https://blog.codingitforward.com/key-takeaways-from-my-time-in-civic-tech-f3a205d35517
['Derek Tam']
2020-09-09 18:05:41.123000+00:00
['Civic Digital Fellowship', 'Internships', 'Civictech', 'Government', 'Technology']
1,712
Blockchain Revolution: Part One
Blockchain Revolution: Part One Understanding blockchain and its application Blockchain is an exhaustive topic and therefore we have decided to run a cycle of articles under this series named ‘Blockchain Revolution’. From understanding the concept to building your own ERC20 token, we will be covering the most critical and functional aspects of blockchain technology. Our aim is to help our readers understand the concept of blockchain and its application across sectors, build simple blockchain applications, and understand smart contracts and how they are powered. This is the first post of the series that introduces blockchain and its applications. What comes to your mind when you hear the word blockchain? — Bitcoin? Wrong, it’s a misconception that blockchain is bitcoin. Bitcoin is a cryptocurrency that is built on blockchain technology. Blockchain is one of the most talked-about subjects right now. While the integration of this technology is in its nascent stage, industry leaders believe that blockchain is set to transform economic and social systems in years to come. The global blockchain technology market, which is at over $500 million now, is predicted to grow to $2.3 billion by 2021. Let’s break it down and understand what blockchain technology is, how it works, and its applications across sectors. What is blockchain “The blockchain is an incorruptible digital ledger of economic transactions that can be programmed to record not just financial transactions but virtually everything of value.” — Don & Alex Tapscott, authors of Blockchain Revolution (2016). To put it simply, blockchain is a distributed ledger which anyone, anywhere can access over the internet. As the name indicates, blockchain is a sequential chain of records called blocks. Let’s understand what a block is. It is just a record of any data. Each block contains three main elements — data, the hash of the block, and the hash of the previous block.
https://medium.com/technology-nineleaps/blockchain-revolution-part-one-c57641eda327
['Abhishek Sengar']
2018-09-25 12:34:20.052000+00:00
['Blockchain', 'Disruption', 'Technology', 'Bitcoin', 'Revolution']
1,713
The 3 Technology Trends That Are Transforming Dentistry
Like other industries, dentistry is shifting rapidly according to the latest technological advances. These changes have the potential to not only improve patient results and satisfaction but to redirect the industry into completely new standards of operation. These are some of the biggest trends shaking up dental offices that professionals, patients, and investors can all get excited about. 1. Patient-directed e-commerce Companies like Warby Parker have already brought prescription healthcare items, such as glasses, online. Now, companies are developing ways to bring traditional dental services to the Internet, too. For example, orthodontics business Candid aims to allow patients to complete the braces process with just one initial dental check — everything else is done by mail and website. Other businesses are taking similar approaches to services like veneers. Going digital can be much less expensive, and it can give patients a wider choice of providers. It can also accommodate people who might move around frequently for work or who otherwise have trouble getting into a physical dentist's office. 2. 3-D and holographic imaging 3-D imaging, also known as cone-beam computed tomography, involves taking scans all the way around your mouth, jaw, throat, nose, and ears. It allows dentists to rotate images on the screen and examine a case from all angles. Subsequently, the providers can often better visualize your unique physiology so they don’t miss anything and can plan treatments in more customized ways. It also allows dentists to look at both hard and soft tissues at the same time, and it can help them better educate patients about physical needs and procedures. The technology is already in use, but it will definitely become much more widespread. A subcategory of 3-D imaging is holographic imaging, which can allow dentists to peel off virtual layers in physical space for measurement or assessment before starting work. 
Films can be stored with medical records, too, which can save space and money compared to stone or plaster models. Experts have projected that this technology will cross a revenue threshold of $3 billion between 2017 and 2024. 3. Lasers Lasers can be applied in dentistry for both hard and soft tissue procedures, such as handling cavities, biopsies, teeth whitening, or extractions. They can even be applied for more therapeutic options, such as nerve repair or as an analgesic. They can reduce the anxiety some patients feel from other tools (e.g., drills) as well as produce less pain, bleeding, or swelling. Lasers can be significantly more expensive compared to other equipment. A standard drill, for example, runs about $600, whereas a laser can cost several thousand dollars or even into six figures, depending on the laser’s capabilities. But as the technology becomes more available, the cost will likely decrease and make more widespread adoption an option in more clinics. As applications become more advanced and expanded, and as laser cutting becomes cleaner and more precise, the American Dental Association will likely start to provide formal approval in addition to the approvals the Food and Drug Administration has already started to offer. Bonus: Teledentistry Teledentistry can be basic, with patients and dentists doing physical consultations online. But the real gem is in remote procedures that incorporate robotics. The medical industry already uses visualizations and robotics for certain procedures, such as laparoscopic surgery. Applied to dentistry, a patient in one state or country could sit in a pre-prepared dental chair, and the dentist could control tools through the Internet or other systems to perform the procedure remotely from another state or country. As with e-commerce, robotic teledentistry opens up convenience and provider choice for patients. And while we may not be quite ready for this yet, it’s not as far off as it may seem. 
As technology changes, patients and providers have plenty to celebrate Technology never sits still, and dentistry, like other industries, is transforming as technology evolves. Professionals admittedly still have some big hurdles to overcome with the above trends, such as sorting out how dentists might work across state lines given licensure requirements. Although it might take some time for all of the training, regulatory, production, and funding considerations to come together, the benefits of the above trends should leave dentists and their patients highly optimistic for the future.
https://medium.com/@dr-steven-ghim/the-3-technology-trends-that-are-transforming-dentistry-bc301b2c6ace
['Dr. Steven Ghim']
2020-12-22 19:12:53.927000+00:00
['Trends', 'Dental Care', 'Dentistry', 'Technology', 'Dentist']
1,714
JavaScript Problems — Strings, Arrays and More
Photo by Susan Mohr on Unsplash As with any kind of app, there are difficult issues to solve when we write JavaScript apps. In this article, we’ll look at some solutions to common JavaScript problems. Count String Occurrences in a String We can use a regex to find all the occurrences of a given string with the match method. For instance, we can write: const str = "foo bar baz"; const count = (str.match(/bar/g) || []).length; We search for all occurrences of the 'bar' string in our str string. Repeat Character N Times Strings have the repeat method to repeat the string multiple times. For instance, we can write: "foo".repeat(10) to repeat 'foo' 10 times. We can also write: Array(11).join("foo") We need 11 slots since join puts the 'foo' in between the entries, which yields 10 copies. Therefore repeat probably makes more sense. Simplest Code for Array Intersection We can get the intersection of 2 arrays by using filter to return an array with the entries that are included in the other array. For instance, we can write: array1.filter(value => array2.includes(value)) If we have arrays array1 and array2 , then we use filter to return the entries of array1 that are also in array2 by using the includes method with array2 inside the callback. This returns a new array with the entries that are in both arrays. Remove CSS Class from Element with JavaScript To remove a class from an element, we use the classList property's remove method. For instance, we can write: element.classList.remove("foo"); Given that the element element has the foo class, we can call classList.remove with the argument 'foo' to remove it. Subtract Days from a Date We can subtract days from a Date instance with JavaScript. To do that, we can use the getDate method to get the day of the month. Then we subtract whatever number of days we want. For instance, we can write: const d = new Date(); d.setDate(d.getDate() - 5); We call setDate to change the date by changing the day of the month to the number we want. 
It doesn't have to be within the month range because it’ll automatically wrap. Check if an Object is a jQuery Object We can use the instanceof operator to check if an object is an instance of the jQuery constructor. For example, we can write: obj instanceof jQuery If it is, we can call jQuery methods on it. Create an Array Filled with Zeroes To create an array filled with zeroes, we can use the Array constructor with the fill method. For instance, we can write: new Array(len).fill(0); where len is the length of the array. fill fills the slots with 0. Check if a JavaScript Object is a Date We can check if a JavaScript object is a Date instance with the instanceof operator. For example, we can write: date instanceof Date This works if the object isn’t passed across frame boundaries. If a Date instance is passed across frame boundaries, we can write: Object.prototype.toString.call(date) === '[object Date]' We compare against the string representation of the Date instance. Force Refresh of a Page We can set the Cache-Control header in our server's response. For example, we can write: Cache-Control: max-age=86400, must-revalidate to set the cache to last 1 day, which is 86400 seconds. We can disable caching by using the: Cache-Control: no-cache, must-revalidate header. Check if an Array has a Specific String We can use the indexOf method to check if an array has a specific string. For instance, we can write: const arr = ["foo", "bar", "baz"]; const hasFoo = arr.indexOf("foo") > -1; to check if arr has the string 'foo'. A result of -1 means that the string isn’t in arr; otherwise, the index of the string in the array is returned. There’s also the includes method if we don’t care about the index. We can write: const arr = ["foo", "bar", "baz"]; const hasFoo = arr.includes("foo"); It returns true if the string is in arr and false otherwise. Photo by Terricks Noah on Unsplash Conclusion We can set the cache header in our web host to set when the cache expires. 
Also, we can check Date instances with built-in operators. We can get array intersections with array methods.
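Several of the snippets above can be checked together in one short, runnable sketch (plain Node.js; the sample values are my own):

```javascript
// repeat(10) and Array(11).join both yield 10 copies of "foo",
// since join places the separator between the 11 empty slots.
const a = "foo".repeat(10);
const b = Array(11).join("foo");
console.log(a === b); // true

// Count occurrences of a substring with a global regex.
const count = ("foo bar baz bar".match(/bar/g) || []).length; // 2

// Array intersection via filter + includes.
const inter = [1, 2, 3, 4].filter(v => [2, 4, 6].includes(v)); // [2, 4]

// A zero-filled array.
const zeros = new Array(3).fill(0); // [0, 0, 0]

// Subtracting days wraps across month (and year) boundaries automatically.
const d = new Date(2020, 0, 3); // Jan 3, 2020
d.setDate(d.getDate() - 5); // Dec 29, 2019
console.log(d.getFullYear(), d.getMonth(), d.getDate()); // 2019 11 29
```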
https://medium.com/javascript-in-plain-english/javascript-problems-strings-arrays-and-more-2af5d8ecba4a
['John Au-Yeung']
2020-06-28 18:17:43.323000+00:00
['JavaScript', 'Web Development', 'Software Development', 'Technology', 'Programming']
1,715
How We Read (Or Don’t Read) Online
If you’ve contributed any moderate amount of essays, articles, or writing to blogs or other content creation sites, then looked over the stats for those pieces, you’ve probably noticed a familiar pattern. On this site, the read ratio for most of my own work generally hovers around 30%. That is, most people only make it through about a third of what I’ve written before they venture off to other pages, links, and sites. Of course, this varies somewhat based on word count and other factors, but usually this figure remains fairly stable. Now before jumping to conclude that my read ratio is so low because my content isn’t that interesting or isn’t written that well (save the effort; I assure you, writers are almost always their own worst critics), there is some additional information worth considering. Our Online Reading Habits In 2008, Jakob Nielsen authored a fascinating article on a research paper by Harald Weinreich and colleagues that examined the time people spent on various pages online. Nielsen’s own earlier research into online reading patterns established as far back as 1997 that a vast majority of readers — 79% — selectively scan what they read on the Internet, whereas only 16% read word-by-word. Consistent with this, Weinreich and his co-authors found that on average users read about 28% of the text on their screens, though that number is realistically estimated to be closer to 20%. The study found the average amount of time spent on a page to be 25 seconds, but this likely also includes time we spend figuring out the page layout and navigation, as well as looking at images. Another study by Nielsen tracked the eye movement of 232 users across thousands of web pages and documented a common F-shaped reading pattern. Users start out reading the upper part of the page horizontally, skip down a bit and read horizontally again, before finally scanning the left side in a vertical movement. 
There are variations to exactly how many words people read and how linear their eye movements are, so the F-shaped pattern is not pixel-perfect but serves as a general representation of how many of us read online. The above research is all more than a decade old by now. Since then we’ve seen the rise of responsive web design, the popularization and standardization of search engine optimization guidelines, and other developments that have been aimed at improving user accessibility and retaining readership. These developments have come largely as a result of years of study into our online reading habits. Just last month, the Nielsen Norman Group published a newly updated report on how people read online. In the last decade, several other common reading patterns have been documented, the majority of which still show selective scanning and fragmented reading. Overall, the report upholds many of the findings from their previous work, with the general consensus remaining that “people rarely read online.” Reminders about not believing everything you read on the Internet may be good advice, then, partly because it turns out that much of what we do read isn’t read thoroughly at all, and bits of important information could well be lost along the way. Print or Digital? One argument that’s been going on ever since the web became a thing and started to be more widely available is the debate over reading print versus reading online. Although there may be fewer people reading print now than there were two or three decades ago, recent research like a 2016 study by the Pew Research Center has found that a majority of Americans read at least one print book per year, including a surprisingly high number (72%) of Millennials. A 2012 study by Daniela Zambarbieri and Elena Carniglia used eye movement analysis to compare how people read on their computer, tablet, and e-reader in contrast to reading from a print book, and the results showed very similar behaviors across all four. 
We should be careful what we conclude from this, though. It would be too quick and easy to use findings like these to argue that there are no real or important differences between these different types of media. In The Shallows, Nicholas Carr (no relation) discusses why online reading often is a more distracted, fragmented form of reading that fosters a more distracted, fragmented mode of thinking. The content we read on the web frequently has links (guilty), ads, menus, navigational tools, and even things like the recently popular pull-quotes that all shift and steer our attention in different ways. Perhaps it’s telling that one of our newer cultural narratives involves aimlessly browsing online or ‘falling down a rabbit-hole’ of pages, sites, links, and resources. This might sound too anecdotal, but as Ferris Jabr explains in an article on “The Reading Brain in the Digital Age” for Scientific American, there are numerous studies that have shown differences in how our brains approach digital versus print material. Some of the findings in them relate quite directly to the advantages of books as a physical medium, the way online material can scatter and divert attention, how the latter may impact our memory and comprehension, and more. Additionally, a recently published meta-analysis of some of the literature by Virginia Clinton suggests that one especially notable difference is that online readers overestimate their level of comprehension in comparison to print readers. In many respects, to read a book is to make a commitment. It often means setting aside other activities and finding the right time and space to give it your attention. A book is also far more resilient than most of the devices we consume digital content on. It can go almost anywhere with you, doesn’t need to be charged, it won’t break if dropped, and it can have something spilled on it without ruining it. 
I think in these and other ways, a book demands our attention and focus, and unlike much of the web, it won’t return that by dispersing our thought and attention, or diverting our eyes across a vast assortment of elsewheres. (Not) Writing for (Non-)Readers Originally I had pictured this article being titled, “Writing for an Audience That Doesn’t Read.” I scrawled it down on a yellow sticky note and stuck it on my desk along with a question: why would you want to? Lately I’ve been thinking about just how much content is out there on the Internet these days, how easily accessible it is, how anyone can create their own site or blog, and write and publish their own essays. I wonder and worry, like I’m sure many others do, about how this has affected the overall quality of what’s out there and what people are exposed to. And I can’t help but question if all the marketing strategies, algorithms, and other tools now used to promote an online presence and catch and hold attention spans are smart solutions to a general problem or if they’re making the problem worse. Even in the little over ten years that I’ve been putting my work online, I’ve seen some major changes. Not all have been bad, of course. But there is a fine line between fine-tuning your work in order to maximize your audience, clicks, likes, and shares, and fine-tuning it to be accessible, readable, and marketable without sacrificing quality or playing into and thus encouraging reading habits that create a harmful environment. At a certain point we must understand that in an age so regularly characterized by misinformation and “fake news,” we are not just mindless consumers of media, but the decisions we make and the habits we help form in ourselves and others are part of what shapes the form taken by media, too. That yellow sticky note has stayed on my desk for a few weeks now. I’ve been thinking about the question on it for almost that entire time. 
Some days I felt like it might come to nothing and briefly thought about throwing it away. But for some reason, I just couldn’t. One of the things I still carry with me from my religious days is a love for what’s sometimes called ‘wrestling with the text.’ In traditions that rely a good deal on religious documents, wrestling with the text is an exegetical exercise that can mean bringing out and comprehending the theological, historical, social, and moral significance of the text, but can also be about letting the text speak to you on a more personal level. This can be likened to the notion of ‘deep reading,’ but what I like about wrestling with the text is the implied sense of struggle and diligent effort. When was the last time you wrestled with something you read online? I don’t mean disagreed with it, disliked it, or that it gave you new information. I mean that it really made you think and reflect on things, that it spoke to you in a profound or meaningful way. I’ve had these experiences reading some things on the Internet, but they are few and far between. Wrestling with the text, whether digital or not, is a two-way road at minimum. I think this does well to answer my question. I write for an audience that doesn’t read because the struggle — the dialogue — is worth it. Because the fact of the matter is that even though I love my print books, I’ve read many things online word-for-word, and I’ve experienced some of the frustrations of writing and publishing online firsthand, I am still a member of that audience myself. Why did I change the title of this article in the end? The original just seemed a little unwieldy and not that catchy.
https://medium.com/swlh/how-we-read-or-dont-read-online-b0458d9c3c5
['Taylor Carr']
2020-05-16 00:37:45.224000+00:00
['Writing', 'Society', 'Culture', 'Technology', 'Reading']
1,716
Asking AI the Wrong Questions and Getting a Frightening Result Is Your Fault?
Asking AI the Wrong Questions and Getting a Frightening Result Is Your Fault? AI has incredible promise for the future, but that future is compromised by a lack of critical thinking on the part of programmers and trainers. Dr. Patricia Farrell · Jun 8 Photo by Yuyeung Lau on Unsplash AI has opened a world of possibilities much like a new Garden of Eden where we are the unknowing humans exploring an environment beyond our understanding. In this world we are creating, one central feature must be the fallibility of us humans. It is this flaw over which we must show power, growth, and new competence, and we must remember one thing: we cannot know the answers if we do not know the questions. Questions seem straightforward in our minds, and our egos blanch at the thought that we may have inexplicably missed a step somewhere in our building process vis-a-vis algorithms. But missteps are exactly what we are facing, and humility and better critical thinking are required to avoid untoward results. We are experiencing a new period of evolution. Critical thinking is not only a prerequisite to becoming adept at algorithms; it may have been lost somewhere in the mix. What is one method of increasing this necessary skill? The RED Model’s critical thinking skills framework has three main indicators: (1) Recognize Assumptions; (2) Evaluate Arguments; and (3) Draw Conclusions. The framework and its indicators are meant to help develop and measure critical thinking skills. An overview of the RED Model with examples of business tasks requiring critical thinking can be accessed here. The downloadable PDF also includes 50 ideas for improving your critical thinking. In addition to this model, there is a downloadable chart with questions/prompts that guide approaching any issue or problem. 
When an Algorithm Gets It Wrong The belief exists that computers and their algorithms are superior to humans because they can process so much information so quickly — faster than we could in months or even years. Therein lies a major problem of our belief system and our bias regarding algorithms. When an algorithm is wrong, the result can be catastrophic for some individuals, as we’ve seen. The algorithm at issue, Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), was designed to assess a defendant’s risk of recidivism — that is, the potential risk that the defendant will commit a crime in the future. An extensive analysis of the COMPAS methods and the outcomes was run by ProPublica. After multiple statistical analyses were performed on various portions of the algorithms as well as data sets that may have been used, the results were quite revealing. As might have been expected, “Black defendants were twice as likely as white defendants to be misclassified as a higher risk of violent recidivism, and white recidivists were misclassified as low risk 63.2% more often than Black defendants.” This is only one illustration of how misusing statistics can bring about life-changing calculations which militate against certain individuals. Photo by Eva Blue on Unsplash Valid or not, the COMPAS algorithm is used to provide quick, if not accurate, data on potentially violent offenders. It is a model for inefficiency, not efficiency. According to a presentation at a TED Talk by Peter Haas, the reason for its use in criminal justice situations to determine future rates of violence is the backlog of cases that must be dispatched quickly. Because the algorithm is privately held (and used in about 13 states for criminal justice decisions), the source code is not available for inspection by anyone because of trade secret law, raising concerns. Unbelievable as it may seem, no one knows what datasets were used to create COMPAS. 
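The disparity quoted above is, at its core, a difference in error rates between groups. As a hedged illustration (with made-up numbers — not the actual COMPAS data or methodology), a false-positive-rate gap can be measured like this:

```javascript
// Hypothetical records: did the person reoffend, and were they scored
// high-risk? (Illustrative data only.)
const records = [
  { group: "A", reoffended: false, highRisk: true },
  { group: "A", reoffended: false, highRisk: true },
  { group: "A", reoffended: false, highRisk: false },
  { group: "B", reoffended: false, highRisk: true },
  { group: "B", reoffended: false, highRisk: false },
  { group: "B", reoffended: false, highRisk: false },
];

// False positive rate per group: among people who did NOT reoffend,
// what fraction were nevertheless labeled high-risk?
function falsePositiveRate(rows, group) {
  const nonReoffenders = rows.filter(r => r.group === group && !r.reoffended);
  const flagged = nonReoffenders.filter(r => r.highRisk);
  return flagged.length / nonReoffenders.length;
}

console.log(falsePositiveRate(records, "A")); // ≈ 0.667
console.log(falsePositiveRate(records, "B")); // ≈ 0.333
```

A gap like this — group A’s non-reoffenders flagged high-risk twice as often as group B’s — is the kind of misclassification disparity the analysis above reported, and it only becomes visible when the data and code are open to inspection.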
The criminal justice system is only one example where algorithms can cause untold havoc in people’s lives. Other instances include trying to get a loan for a home, interviewing for a job, qualifying for government benefits, or even being stopped on the highway for a driving infraction. Haas asked, “Would you want the public to be able to inspect the algorithm that’s trying to make a decision between a shopping cart and a baby carriage or a self-driving truck, in the same way that the dog/wolf algorithm was trying to decide between a dog and wolf?” The latter distinction is notable since the algorithm made an error distinguishing between a dog and a wolf. The reason was that in the training sessions only wolves in snowy backgrounds were shown, so any dog-like animal not in a snowy background was classified as a dog. Critical thinking was missing here. Clever Hans, 1904, Wikipedia.org Public Domain Answers and Questions Asked In her book, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, Kate Crawford uses a well-known story of illusion and bias (the horse Clever Hans) when discussing AI. “The story of Hans is now used in machine learning as a cautionary reminder that you can’t always be sure of what a model has learned from the data it has been given. Even a system that appears to perform spectacularly in training can make terrible predictions when presented with novel data in the world.” Regarding facial recognition algorithms, Crawford questions the utility of incorporating criminal mug shots into the mix. The fact that mugshots are available to feed into datasets indicates a frenzy to create a dataset rather than to question what should go into it. Frantically collecting images for databases has turned into a mindlessness that is never questioned because the programs demand and the programs receive what they demand. 
Yet now it's common practice for the first steps of creating a computer vision system to scrape thousands — or even millions — of images from the internet, create and order them into a series of classifications, and use this as a foundation for how the system will perceive observable reality. These vast collections are called training datasets, and they constitute what AI developers often refer to as "ground truth." Is the completed algorithm from the Multiple Encounter Dataset and others ever questioned, and is there any weeding out of training images or data that may be in some way inappropriate to the mission? In fact, what is the mission, and what biases have been incorporated into the collection? Is scraping the internet for images the most appropriate method of gathering data? Could we even, at this stage, resolve these seminal issues? Who will be damaged by permitting these issues to go unresolved? What questions have not been asked and answered? Where have we failed? https://www.amazon.com/Patricia-Farrell/e/B001HMSWYQ
https://medium.datadriveninvestor.com/asking-ai-the-wrong-questions-and-getting-a-frightening-result-is-your-fault-a835ea679184
['Dr. Patricia Farrell']
2021-09-14 23:21:30.822000+00:00
['Programming', 'Technical', 'AI', 'Critical Thinking', 'Technology']
1,717
Why Learnapalooza Matters for The Future of Tech
Learnapalooza is a conference dedicated to 'The Future of Learning'. Now in its fifth year, the conference is headed by two experts in adult learning, Darren Nerland and Erin Petershick, who brought both experience and energy to the space and made this conference a game changer. With Learnapalooza sitting firmly in the Learning and Development space, I expected team-building exercises, inspirational quotes, and graphs on changing workplace demographics. If you're looking for a crash course on L&D buzzwords and trends, worry not: you'll get that here. But it is in all of the ways Learnapalooza goes beyond the slide decks, beyond the buzz, and straight into what it means to do meaningful work that makes it such a powerful event. I'm here to tell you exactly what that means for the future of tech. We're firmly in the Fourth Industrial Revolution now: computing, data science, and artificial intelligence are the new prime movers. To give you a sense of the pace, 77% of current jobs will require new technical skills, and over half of the highest-demand jobs on LinkedIn did not exist 10 years ago. There's no sign of this slowing down, and no one can quite predict what the world will look like in 2030. That's not all that much of a scary thought, if you boil it down. It means hiring based on ability to learn, not on the skill sets brought on arrival. It means building a workplace culture of learning that allows everyone to thrive. The new technologies coming in are much more complex and interactive than before. No modules, courses, or certificates can do the work of demonstrating knowledge of these new tools, and that's a problem.
A simple solution may be bringing workplace learning back to an apprenticeship style: when learning to use new tools and tech, a user will be more likely to retain both attention and information if the experience of its use connects the dots for them. Create a culture of coaching in your workplace for your team to thrive. We should also be building workplace cultures that thrive on self-directed learning. Carving out the time and space in your culture to make that happen allows for the diffusion of new knowledge across a team, and prevents burnout for the workers who are closest to the new tech being on-boarded. When you work on learning and development in tech, you're really getting to the heart of the issue in the field: working in tech means you never stop learning, whether it's a new coding language, a new software system, or just keeping up with the pace of today's digital innovation. We can only do it by taking the best learning and development practices out there and situating them in this new world where people move at the speed of tech. Learnapalooza exists to bridge that gap in a powerful way, and I can't wait to see how it changes the tech world in the years to come.
https://medium.com/@salkimmich/why-learnapalooza-matters-for-the-future-of-tech-9f85decc2588
['Sal Kimmich']
2019-05-06 03:22:53.649000+00:00
['Technews', 'Learning And Development', 'Technology', 'Future Of Work', 'Tech']
1,718
7 predicted UX trends for 2021
7 predicted UX trends for 2021 Predicted UX trends for 2021. VR headset and 3D objects by Freepik. To say that 2020 was a rollercoaster ride is an understatement. While it was a year of pure chaos from a health, social, and political perspective, it was also a fast-paced 12 months of product, innovation, and experience. With 2020 nearly behind us, let's check out the 7 predicted UX trends for 2021. 1. Digital health is trending As if we hadn't heard enough about COVID-19, the pandemic has also been the instigator of new digital health opportunities, including wearables, telemedicine, and artificial intelligence. In 2021, we can expect these behaviours to continue to trend. We have also seen some impressive self-monitoring applications being applied to wearables, providing instant data visualisations of steps, sleep, heart rate, or blood pressure. These sophisticated functionalities empower users to take charge of their own health and inform them on how to make lifestyle changes. The healthcare artificial intelligence software, hardware, and services market is predicted to surpass $34 billion worldwide by 2025 — source. Telemedicine design concept by Rabi Islam Telemedicine — the practice of doctors consulting patients virtually — increased significantly during the pandemic as people were encouraged to reduce their physical encounters. We can expect this type of remote interaction to continue to trend as health services look to distribute their skills and funds to the patients most in need. Then there's artificial intelligence. Precision medicine, genomics, drug discovery, and medical imaging will all be empowered by AI; take cancer treatments, for example: by using AI's pattern recognition, doctors can prescribe personalised treatment plans tailored to a patient's genetic makeup and lifestyle.
https://medium.com/@kairos-euriah/7-predicted-ux-trends-for-the-2021-1970c38838f1
['Kairos Euriah']
2020-12-22 17:16:54.014000+00:00
['User Experience', 'Visual Design', 'UX', 'Technology', 'UI']
1,719
Need-For-Speed in the Volterra Distributed Application Infrastructure
Creating a Virtual Kubernetes Cluster We will first need to create a new namespace. A namespace allows users to create and organize the different functions we will be running. Here, as you can see, my namespace is called maarham-test . In this namespace, we will create a virtual Kubernetes cluster that spans all the Regional Edges, and we will launch our application in all REs with various pods. We will fill out the name and the Virtual Site we will be using, which is ves-io-all-res . Next, you will create a deployment based on your configuration, which is described in a manifest file written in YAML. Here is the Deployment YAML file that I used:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: speedtest-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: speedtest
        image: maarham/maarham-speedtest:progress
        ports:
        - containerPort: 8080

You can see that I used an image that may seem unfamiliar; I will go more in-depth on that later in the blog. After this is created, the deployment will run, and then the pods will begin to run. We will then create a service in this cluster, also with a YAML file. Here is the Service YAML file:

apiVersion: v1
kind: Service
metadata:
  name: speedtest
spec:
  selector:
    app: nginx    # must match the pod labels in the deployment above
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080

We expose port 80 and target port 8080, which is the container port of the deployment. KubeConfig Method There is an alternative for this step. Instead of implementing the Deployment YAML file through the VoltConsole, we will apply a manifest file from our PC using kubectl with the cluster's kubeconfig in the terminal. Creating the cluster takes the same steps as before; after the cluster is created, however, the workflow is quite different. We will need to download the kubeconfig file of the virtual Kubernetes cluster.
This should download the file into your Downloads folder. Locate it and move it into the folder you are using for this demonstration with the command mv [KubeConfig File Name] [Location] . Once that is done, we will create the Deployment YAML file with vi deployment.yaml and paste in this manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: speedtest-deployment
  labels:
    app: nginx
spec:
  replicas: 6
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: speedtest
        image: maarham/maarham-speedtest:progress
        ports:
        - containerPort: 8080

It is the same as the last one, but we will be applying it in a different way: through the kubeconfig. If you do not know what that is, a kubeconfig file is used by kubectl , the CLI tool of Kubernetes, and allows the user to organize information about clusters; with it we can direct this deployment file to the specific cluster. To do that, we will use this command:

kubectl apply -f deployment.yaml --kubeconfig [KubeConfig file]

When this runs, the deployment will be created in the cluster and can be seen in the VoltConsole when you click on the virtual Kubernetes cluster. You will be able to see the deployment running, along with the pods running at each Regional Edge. We can do the same thing for the service. We will first create the Service YAML file with vi service.yaml and paste in this code:

apiVersion: v1
kind: Service
metadata:
  name: speedtest
spec:
  selector:
    app: nginx    # must match the pod labels in the deployment above
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080

It is the same Service YAML file as before, but this time applied with the kubeconfig. We will use this command to create the service within the cluster:

kubectl apply -f service.yaml --kubeconfig [KubeConfig file]

This can all be viewed in the virtual Kubernetes cluster in the VoltConsole.
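One thing worth double-checking before applying manifests like these is that the Service's selector actually matches the Deployment's pod template labels, and that targetPort lines up with a containerPort; a Service whose selector matches nothing silently routes to no pods. A minimal sketch of that sanity check, with the manifests written as plain Python dicts (mirroring the YAML above) so no YAML library is assumed:

```python
# Sanity check: a Service only routes to Pods whose labels satisfy its
# selector, and its targetPort must be exposed by some container.
# The dicts mirror the YAML manifests above.

deployment = {
    "kind": "Deployment",
    "spec": {
        "template": {
            "metadata": {"labels": {"app": "nginx"}},
            "spec": {"containers": [
                {"name": "speedtest", "ports": [{"containerPort": 8080}]}
            ]},
        }
    },
}

service = {
    "kind": "Service",
    "spec": {
        "selector": {"app": "nginx"},
        "ports": [{"protocol": "TCP", "port": 80, "targetPort": 8080}],
    },
}

def service_matches_deployment(svc, dep):
    """True if every selector key/value appears in the pod template labels
    and every Service targetPort is exposed by some container."""
    labels = dep["spec"]["template"]["metadata"]["labels"]
    selector_ok = all(labels.get(k) == v for k, v in svc["spec"]["selector"].items())
    container_ports = {
        p["containerPort"]
        for c in dep["spec"]["template"]["spec"]["containers"]
        for p in c.get("ports", [])
    }
    port_ok = all(p["targetPort"] in container_ports for p in svc["spec"]["ports"])
    return selector_ok and port_ok
```

For example, a selector of app: MySpeedtest against pods labeled app: nginx would fail this check, which is exactly the mismatch this kind of lint catches before kubectl apply.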
https://medium.com/@amanarham/how-i-utilized-the-volterra-environment-bdc311bb82e9
['Aman Arham']
2020-09-10 01:15:47.013000+00:00
['Company', 'Cloud Computing', 'Technology', 'Volterra']
1,720
Blockchain for Agriculture
Blockchain for Agriculture In the agriculture domain, self-executing smart contracts together with automated payments are poised to be a game-changer. The role of smart contracts especially in agricultural insurance, green bonds, and traceability is going to be very effective. Commercial Agriculture/Agribusiness interest in blockchain technology is rapidly growing. Increasingly, companies are recognizing how the emerging technology’s enhanced data management capabilities could create supply chain efficiencies and reduce friction in transactions. The agriculture sector stands to benefit from the technology’s potential to lower transaction costs, optimize logistics, increase traceability, and enhance food safety protocols. Agriculture joins a variety of industries that are using and developing blockchain applications to improve business transactions. Prominent technology companies such as IBM and Microsoft are already creating partnerships with global logistics companies and retailers to develop blockchain applications that more closely align links in the supply chain. Amazon, through its web services arm AWS, has just released a set of blockchain templates with network access control to increase blockchain usage without users worrying about the actual manual setup of a blockchain network. According to The Report Linker, the blockchain in food supply chains and agriculture ecosystem was estimated to be USD 60.8 million in 2018 and is projected to reach USD 429.7 million by 2023. The Dutch Ministry of Agriculture, Nature and Food Quality financed the first research project, “Blockchain for AgriFood” that has been proposed to explore blockchain implications for AgriFood. Pilot studies carried out by the CoBank Knowledge Exchange in 2018 indicate that blockchain technology could facilitate the tracing of food from the farm to the grocery store in just a few seconds. 
For the agricultural supply chain, blockchain technology promises increased efficiencies through enhanced data management, lower transaction costs, optimized logistics, more robust traceability, and enhanced food safety protocols. Prominent technology companies are creating partnerships with global logistics companies and retailers to develop blockchain applications that can be used for the efficient tracking and delivery of agricultural products. Blockchain will likely accelerate the agriculture industry's movement toward greater transparency and traceability from the field to the table and fork. This will bring opportunities and tools for farmers and the rest of the supply chain to combat food fraud and offer verified products to consumers. Blockchain also helps to keep tabs on abundant commodities and reduce cases of illegal harvesting and shipping fraud. The United Nations reports that food fraud costs the global economy around $40 billion per year through illicit trade. Agricultural insurance built on the blockchain, with key weather events and related pay-outs drafted in a smart contract linked to mobile wallets, and with weather data provided regularly by sensors in the field and corroborated by data from nearby weather stations, would facilitate immediate pay-outs to farmers in the case of drought or flooding in the field. However, the framework to support such innovations, such as high-quality data and enabling policies and regulations, should first be addressed in order to ensure the maximum efficacy of smart contracts. The process of designing, verifying, implementing, and enforcing smart contracts in traditional agricultural value chains is still a work in progress, with only a few pilot implementations to show proof-of-concept. Opportunities for Blockchain Technology in Agriculture Blockchain-based transactions are being piloted in many sectors, including the financial, manufacturing, energy, and government sectors.
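The parametric-insurance flow described above (sensor readings in, automatic pay-out when a weather trigger fires) can be sketched in ordinary code. The thresholds and amounts below are invented purely for illustration, and a production contract would run on-chain rather than in Python; the point is only that the pay-out rule is mechanical, so no claims adjuster is needed:

```python
# Sketch of a parametric crop-insurance rule: if recorded rainfall over the
# policy window falls below a drought threshold or above a flood threshold,
# a pay-out is released automatically. All figures are hypothetical.

DROUGHT_MM = 50      # hypothetical: total rainfall below this triggers pay-out
FLOOD_MM = 400       # hypothetical: total rainfall above this triggers pay-out
PAYOUT = 1_000       # hypothetical pay-out amount per policy

def settle(rainfall_readings_mm):
    """Given the season's sensor readings, return the pay-out owed (0 if none)."""
    total = sum(rainfall_readings_mm)
    if total < DROUGHT_MM or total > FLOOD_MM:
        return PAYOUT
    return 0
```

Because settlement depends only on the recorded data, the quality and tamper-resistance of that data (the sensors and the corroborating weather stations) is exactly the framework issue the paragraph above flags.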
They are also being used in relation to agriculture supply chains, land registrations, and digital IDs. Initially born out of a need for a more decentralized financial system (together with cryptocurrencies), this technology is finding innovative uses in a wide range of applications. A blockchain by design is cryptographically secure (although the content is not necessarily encrypted); it is a write-once, append-only, distributed, and decentralized system. It is important to note that the claim that the blockchain is the only mechanism to build trust, reduce costs, and accelerate transactions is not entirely true. Blockchain-based implementations still suffer from traditional challenges such as lacking or poor infrastructure, failures of interoperability, and other technical issues. Although the trend now is to try a blockchain-based implementation of traditional processes, in most cases this adds unnecessary overhead and does not yield any tangible benefits. What the blockchain does promise to deliver is a transparent, decentralized, secure transaction process, and it may reduce transaction costs. This brings us to the main question: which processes in the agriculture domain suffer from a lack of transparency, would benefit from decentralization, and are now affected by insecure transaction processes? Traceability and Auditability A major challenge for the agriculture sector is how to track and pay for the delivery of produce. These days the process depends on third parties to coordinate the delivery. The sellers usually have an agent who ensures that the goods are delivered safely, and buyers have an agent to recommend payment and audit the delivery. The involvement of multiple agents adds high costs to the system and makes the entire process time-consuming. With the blockchain, the whole process can be simplified to a single distributed ledger.
Commodity buyers can interact directly with the supplier, which speeds up the process and reduces the time to settle a payment. The companies can also save on agent fees, and farmers can receive a larger share of sales directly with a blockchain-based solution. With features like traceability and auditability, farmers can sell crops or food directly to the market without the need for intermediaries. Example: Walmart leafy green suppliers According to the Agricultural Data Coalition (ADC), a non-profit organization focused on connecting the data dots across food & agriculture, high-value crops that have more pressing needs for traceability are already doing this. Walmart announced last year that it would require all leafy green suppliers to implement its blockchain system. 2. Logistics and Payments Today, basically every medium-to-large farm operation knows that logistics is a big challenge in the agricultural supply chain. We are talking about dealing with large volumes of perishable products in uncertain conditions with lots of money on the line. There are also cases where supply can be uncertain, and that, of course, is a totally different ball game. Everything boils down to losing hundreds of thousands of dollars if things don't go as planned. So is there any good news? Well, of course there is! Implementing blockchain in logistics can go a long way toward simplifying deliveries. With smart contracts, farmers will be free from the long chains of intermediaries, and they'll get to rest easy knowing that the product will reach the end consumer in good shape. Moreover, smart contracts will eliminate unnecessary delay and ensure farmers get paid for their product on time. Even better, the solution can be configured to spread out payments to farmers throughout the year as opposed to paying them seasonally. 3.
Crop and Food Production With the help of smart farming, IoT sensors could fetch important information such as soil temperature, water level, fertilizer details, and more, and send it to the blockchain. Based on the data saved in the blockchain, smart contracts could trigger and execute specific actions. This will help enhance the quality of the farming process as well as the produced crops. 4. IoT and Quality Control Let us face it: the process of monitoring the quality of crops (right from harvest to delivery) has never been easy. It is a huge challenge for farmers and growers throughout the world. But the good news is that the power of blockchain technology can be harnessed in this regard. Interestingly, IBM is already working on IoT (Internet of Things) tools that make it possible for growers to monitor soil quality, irrigation, and pests in a precise and highly efficient manner. It is also worth pointing out that there are initiatives to leverage sensors to track the quality of stored crops over time. The ultimate goal of these applications is to automate and digitize just about everything that has to do with record-keeping and quality control, and most of these innovations are already happening in modern-day agriculture. For the most part, these sensors can gather data automatically in real time and provide quick and easy access to growers who need the information to perform various farm operations. 5. Data Security The question that still remains is that of data security. Here, the blockchain uses another age-old feature, namely cryptographic hashing, that permits data subjects to better manage and control who has access to their (personal) data.
Blockchain/DLT may also help accomplish the goal of data portability (including by incentivizing data sharing and automating payments for shared data), thereby supporting the establishment of a data economy envisaged by the General Data Protection Regulation (GDPR) (European Parliament, 2019, p. III). Undoubtedly, there are still several shortcomings in the technology that make it susceptible to the same risks as in existing centralized systems. Nevertheless, the above features of blockchain, together with the fact that it is also a programmable platform that permits new applications, including smart contracts, to be built on it, make Blockchain/DLT a highly attractive option for several use cases. Smart contracts execute money transfers without the need for any intermediary, especially when cryptocurrencies are adopted. Undoubtedly, the technology is still in its nascent stages of development and user adoption is relatively low due to the greater convenience (currently) of centralized systems. Nevertheless, Blockchain can significantly reduce transaction costs and the need for interventions by regulatory authorities. It has been estimated, for example, that “blockchain could facilitate global savings of up to US$6 billion per year in business transactions” (Addison and Lohento, 2018). Would the role of governmental and law enforcement agencies then be eliminated in a blockchain world? No. Instead, perhaps more accurate would be to say that their role, power and authority will become more distributed, transparent and/or susceptible to contradiction or verification by independent stakeholders. Further, the value of their certifications and regulations would depend on how much each individual player (e.g. buyer of seeds) chooses to rely on their authority. 
Accordingly, attributes like quality and reliability, which are currently considered 'objectively verifiable' based on the certification requirements of centralized authorities, may become more and more 'subjective', varying based on which 'node' any stakeholder considers, on personal judgement and experience, to be reliable, and which attribute (e.g. of the seeds being sold) is being sought by the buyer. The establishment of such a system is not, however, without its challenges, both technical and practical. More benefits and opportunities of the blockchain in agriculture: traceability throughout the value chain; financing and insurance for small farmers; facilitation of financial transactions in emerging economies; fair pricing through the whole value chain for all actors; emissions reductions and support for environmentally friendly initiatives; consumer awareness and increased consumer satisfaction; more informed consumer purchasing decisions; sustainable business and reduction of waste; decreased transaction fees and less dependence on intermediary services; transparent transactions and elimination of fraud; improved quality of products and fewer food-borne diseases; data accessibility while maintaining privacy according to regulations. Get the full book |Amazon |Gumroad | Bitcoin|
https://medium.com/swlh/blockchain-for-agriculture-5b0a0baa0aa3
['Tendai Tomu']
2020-10-26 15:20:02.047000+00:00
['Blockchain', 'Cryptocurrency', 'Agriculture Technology', 'Bitcoin', 'Farming']
1,721
Why We do This?
Photo by Joel Muniz on Unsplash The KitaJaga community is well-known for its peer-to-peer organization and its commitment to achieving food security, stabilizing basic supplies, and providing financial aid, alleviating the hardship people face during the COVID-19 pandemic. The platform hopes to save lives by allowing motivated individuals to help others on their own terms and at their own comfort level, and to give aid directly and effectively within their communities. We believe in the grassroots movement: individuals helping one another on a small scale, slowly building communities to tackle the bigger issues faced by our society. Our vision is to empower individuals and communities to help one another and build a better future for all of us. The figures above show the rate of hunger in Malaysia according to the Department of Statistics Malaysia and the Global Hunger Index (GHI). One out of every nine people on the planet still does not have enough to eat. According to the Global Hunger Index, Malaysia was placed 59th. We had a hunger rate of 13.5 percent in 2020, which was deemed to be at a 'moderate' level. It is projected to rise by the end of 2021 and reach a 'serious' level as a result of the ongoing effects of COVID-19. Aside from the hunger issue, the current crisis has resulted in high levels of unemployment across the globe as a result of the global economic downturn. Since 2020, the rate of unemployment has risen dramatically, and it is set to continue to rise through the end of 2021. Consequently, most businesses are unable to do business as normal and must lay off part of their employees to ease the financial strain placed on the organization. Malaysia is dealing with the same problem. The unemployment rate has risen, and many workers are unable to maintain or feed their families as a result of the lack of a consistent source of income.
Because of this, KitaJaga has taken the necessary steps to enable people to assist one another in a peer-to-peer manner by giving nutritional assistance and medicines, as well as establishing a path to peace, stability, and prosperity for those weathering the recent COVID-19 pandemic.
https://medium.com/kitajaga/why-we-do-this-235211b8fb31
['Muhammad Raffiq Mohd Khalil']
2021-08-17 03:40:39.431000+00:00
['Technology', 'Peer To Peer', 'P2p', 'Humanitarian', 'Volunteering']
1,722
Five Bullet Friday: Week 05
Five Bullet Friday: Week 05 If this is the first time you’re reading our newsletter you can subscribe here and guarantee you don’t miss out on important news. Komodo And Coinbene Announce Strategic Technology Partnership Komodo is pleased to announce a Strategic Technology Partnership with Coinbene, a major crypto exchange based in China. Coinbene has independently audited and validated Komodo’s Blockchain Security Service and will be officially endorsing it to at-risk coins and tokens listed on the exchange. The partnership will also explore opportunities for Coinbene to implement Komodo’s industry-leading atomic swap technology to make crypto trading more secure and more decentralized. We’re happy to be working with Coinbene to support their 2019 security initiative and to be making the blockchain space more secure for developers, exchanges, and investors alike. 51% Attacks Are a Growing Threat to Smaller Blockchains; Komodo May Be the Solution In the wake of several 51 percent attacks on relatively large blockchains including Ethereum Classic, Verge, and Vertcoin, BREAKERMAG takes a look at Komodo and our security solution, delayed Proof of Work. Also discussed is our partnership with Coinbene and a report from the Blockchain Transparency Institute on wash trading on exchanges. Exclusive: Komodo GM Explains How to Build on Bitcoin without Being Constrained by It Komodo GM Ben Fairbank explains how small altcoins can build on the security of the Bitcoin blockchain without being constrained by it in an exclusive interview with CCN. With Komodo’s Blockchain Security Service, any UTXO-based blockchain can get Bitcoin-level security to mitigate the risk of a 51% attack. This allows altcoin development teams to focus on innovation and delivering new tech, rather than monitoring their network and worrying about keeping their chain safe. Agama Wallet Update: What’s New in v0.3.3a? We have a new Agama release version 0.3.3a. 
This version fully supports dPoW Conf display in both modes for chains that are being notarized. All users are asked to update to this latest version soon because it contains some important fixes and security upgrades. Komodo joins Binance Info’s V Label Verification After sharing project-related information, such as news and progress reports, with Binance Info, Komodo was recently given the Binance V-Label Verification. This is a Binance-led transparency initiative to provide accurate information about blockchain projects to investors. Komodo is pleased to be participating in this effort, allowing us to keep investors and community members informed while also helping to promote increased transparency in the blockchain space. Check out our page here: Thank you for being a part of our community. Want to get these Five Bullet Fridays in your inbox? Subscribe here.
https://medium.com/komodoplatform/five-bullet-friday-week-05-1d610da177d4
["Ben O'Hanlon"]
2019-02-06 16:00:32.068000+00:00
['Blockchain', 'Newsletter', 'Cryptocurrency', 'Blockchain Technology', 'Fintech']
1,723
Gmail to PDF Converter — Convert All Gmail Emails & Attachments
There are many users searching for the best Gmail to PDF conversion software with which they can convert Gmail emails and attachments into PDF file format. Gmail is a widely used email client, while PDF is among the best file formats for saving any kind of document data. Google provides free cloud space to Gmail account users and also allows them to use other applications like Google Docs, Google Sheets, Jamboard, Slides, and other apps. However, the free space is limited; it can be extended through the licensed edition of the Google apps. Many users want to save their Gmail data on their computer system in PDF file format so that they can access their emails and other Gmail account details from any device without an internet connection. So, in this article, we will discuss a solution: the Birdie Gmail to PDF Converter software. With this software, any user, including non-technical users, can easily perform the conversion from a Gmail account to PDF file format without any error. We will view the complete process step by step and assess the usefulness of this tool. Steps for the Conversion Procedure from Gmail to PDF Step 1. Start the Gmail to PDF Converter software and open it. Step 2. Using the Add Account option, select the Gmail account and enter the login credentials. Step 3. Now choose the Gmail to PDF conversion options, such as File Naming options, Saving PDF options, Attachments options, Advanced Filter options, and Destination Path. Step 4. Now click the Next option and view the live status of the Gmail email conversion process. After this procedure completes, you will find your converted Gmail items in your chosen location on the computer system.
With this tool, you can easily convert all Gmail emails and attachments along with contacts, calendars, header details, and so on. Features of the Gmail to PDF Converter Software The tool contains many features through which users can get their Gmail data into PDF file format as required. It provides Saving PDF options through which users can save a separate PDF file for each Gmail email or a single PDF for all selected Gmail emails. It also allows users to choose the desired options for attachments of selected Gmail files. It provides four options related to attachments: embed attachments in the PDF file, save attachments in a separate folder, convert attachments to PDF file format, or append email attachments to PDF files. The tool contains a Preview section so that the user can perform the Gmail to PDF conversion process with only the desired Gmail data; in the same section, you can deselect those items which are not required for the conversion procedure. Conclusion With the help of this discussion, all users can understand the Gmail to PDF conversion procedure step by step. With this software, users can run the conversion on a bulk Gmail database with all attached items and other elements of the selected email files. Readers are advised to first download and try the free edition, which processes the first 25 Gmail emails in a single pass, to check whether the tool meets their needs; the complete Gmail database can be converted only with the licensed edition.
https://medium.com/@smithpeter9927/gmail-to-pdf-converter-convert-all-gmail-emails-attachments-6f9adca7cd9d
['Peter Smith']
2020-12-15 10:30:16.773000+00:00
['Gmail', 'Pdf', 'Software', 'Technology', 'Email']
1,724
Data Journalism Crash Course #4: Open Source Communities
Image by the author

Co-creation, collaboration, sharing, community. What do these words have in common? Their essence. Each of these terms is rooted in cooperation, in mutual aid, in joining efforts toward a common goal. And depending on where these principles are applied, the result can be efficient and highly profitable — if not financially, then at the very least in better use of time and human resources. For those working in digital development, the most encouraged form of collaboration is Open Source Software, also known as FOSS, an acronym for Free and Open Source Software. Open projects are great for anyone who wants to receive collaboration from others, learn from analyzing real projects, or get their “hands dirty” with true intellectual craftsmanship. Open source is a term that means exactly what it says. It concerns the source code of the software, which can be adapted for different purposes. The term was created by the OSI (Open Source Initiative), which uses it from an essentially technical point of view. Because it has no license cost, open-source software offers the opportunity for greater investment in services and training, ensuring a greater and better return on IT investments. In the vast majority of cases, these tools are shared online by the developers, and anyone can access them without restrictions. The term open-source, as well as its ideals, was developed by Eric Raymond and other OSI founders to present free software to companies in a more commercial way, avoiding an ethical and rights debate. The terminology “Open Source” appeared during a meeting that took place in February 1998, in a debate involving personalities who would later become references on the subject, including Todd Anderson, Chris Peterson, Larry Augustin, Jon “Maddog”, Sam Ockman, and Eric Raymond.
The acronym FLOSS, meaning Free/Libre and Open Source Software, is an aggregating way of applying the concepts of Free Software and Open Source to the same software, since the two differ only in argumentation, as mentioned before. The developers and supporters of the Open Source concept say that this is not an anti-capitalist movement, but an alternative for the software industry market. The collaborative model present in open source led authors' rights to be looked at in a new light. The creation of the Open Source Development Lab (OSDL) is an example of the great efforts made by several companies, such as IBM, Dell, Intel, and HP, to support the creation of open-source technologies.

OSI imposes 10 important points for software to be considered Open Source:

Free distribution. The program license must not in any way restrict free access through sales or even exchanges.

Source code. Of fundamental importance, the software must include its source code and must also allow distribution in compiled form. If the program is not distributed with its source code, the developer must provide a means to obtain it. The source code must be readable and intelligible to any developer.

Derived works. The software license must permit modifications as well as derivative works, and must allow them to be distributed, even after modification, under the same terms as the original license.

Integrity of the source code author. The license must clearly and explicitly allow the distribution of programs built from modified source code. However, the license may require that derived programs carry a name or version number distinct from the original program, depending on the preference of the code's developer.

Non-discrimination against persons or groups. The license must be available to any group of people and any individual.
Non-discrimination against areas of activity. The license must allow anyone from any field to use the program. It must not prevent, for example, a company from using the code.

Distribution of license. The rights associated with the software must apply to everyone to whom the program is redistributed, without the need to execute a new or additional license for those parties.

License must not be product-specific. The program must not depend on being part of another software distribution. If the program is extracted from a distribution, all parties must still receive the same rights as those granted with the original program distribution.

License must not restrict other programs. The license is not considered open source if it places restrictions on other programs distributed together with the licensed program.

Technology-neutral license. The license must allow the adoption of interfaces, styles, and technologies without restrictions; no clause in the license may establish rules on how these requirements are applied to the program.

EXAMPLES OF OPEN SOURCE COMMUNITIES

CREATIVE COMMONS

Creative Commons (CC) is a non-profit entity created to allow greater flexibility in the use of copyrighted works. The idea is to make it possible for an author/creator to allow wider use of their materials by third parties, without violating intellectual property protection laws. With a Creative Commons license, a composer can allow other artists to use some of his compositions by creating a mixture of rhythms, for example; a writer can make an article available and allow other authors to use it, whether by publishing it in other media, applying part of the content in a new text, or using the original with changes.
Thanks to the internet, this “collaborative spirit” has become much greater. The problem is that copyright protection laws are strict and often end up hindering the desire of many creators not only to give away their own materials but also to use the creations of others who also want to share their work. With Creative Commons, authors and creators can allow their works to be used in a much more flexible way. They can decide how and under what conditions their materials may be used by third parties. An example: a writer can allow anyone to use and change a text of his, except in commercial applications. Note that, in this case, the Creative Commons license gives more freedom of use to the work but does not remove the original author's ability to generate income: he may still charge for the use of the text in for-profit activities.

WIKIPEDIA

Almost 20 years ago, a discreet and humble way to disseminate and contribute to knowledge appeared on the internet. With the support of the Wikimedia Foundation, Wikipedia was born, and today it has more than 54 million articles in 309 languages, written by volunteer collaborators around the world. Virtually all articles can be edited by anyone who wishes to contribute, citing sources and references to enrich the information. Jimmy Wales and Larry Sanger were the creators of the project, which went public on January 15, 2001. The name Wikipedia came from the fusion of the Hawaiian word wiki (meaning fast, quick) with the English word encyclopedia. Because it is an open encyclopedia, many users and Internet users question the writing quality of the articles, the rate of virtual vandalism, and the accuracy of the information.
Many articles in the Wikipedia database contain unverified or inconsistent information; however, the digital encyclopedia and those who contribute to it seriously have their merits: in a 2005 comparison by the journal Nature, Wikipedia's scientific articles reached almost the same level of accuracy as those of the Encyclopædia Britannica. Wikipedia is often seen in academia as an inadequate source of information. Scholars point to its troubled environment of “editing wars”, in which contributors struggle to keep their own text while suppressing that of others, although they admit there is great interest and originality in the work. Nevertheless, the international literature seems to converge on the conclusion that Wikipedia's success shows that self-organized communities can build high-quality information products. Analyses of Wikipedia are increasingly detailed and critical, with no room for simplistic praise or rejection. Wikipedia is not only an online encyclopedia but also a common good, a commons. Its quality and maintenance depend on cognitive surplus, that is, on the free time of educated people. It is recognized among the most successful collaborative initiatives on the Web, built on trust among millions of contributors and readers and supported by standards that promote reliability and objectivity.

IF YOU WANT TO KNOW MORE:

WIKIPEDIA: The free encyclopedia. Florida: Wikimedia Foundation, 19 March 2017. https://en.wikipedia.org/wiki/Wikipedia:About

AIBAR, E. et al. Wikipedia at university: what faculty think and do about it. The Electronic Library, v. 33, n. 4, p. 668–683, 2015.

BRASSEUR, V. M. Forge Your Future with Open Source: Build Your Skills. Build Your Network. Build the Future of Technology. Pragmatic Bookshelf, 2018.

DALIP, D. H. et al. A general multiview framework for assessing the quality of collaboratively created content on web 2.0. Journal of the Association for Information Science and Technology, v. 68, n. 2, p. 286–308, 2017.
HERSTATT, C.; EHLS, D. Open Source Innovation: The Phenomenon, Participant's Behaviour, Business Implications. Routledge, 2018.
https://medium.datadriveninvestor.com/data-journalism-crash-course-4-open-source-communities-857cbf504b36
['Deborah M.']
2020-10-31 19:08:36.116000+00:00
['Journalism', 'Data Science', 'Data Journalism', 'Open Source', 'Technology']
1,725
Why Two-Sided Marketplace Startups Aren’t For Everyone
Why Two-Sided Marketplace Startups Aren’t For Everyone

Offering everything to everyone could result in offering nothing to anyone

image by katemangostar

2020 was the come-to-mainstream year for marketplace startups. There’s certainly good reason for the marketplace startup trend to continue — fueled by a perfect storm of user-friendly no-code options for prototyping, a quarantine-accelerated demand for digital business models, and the headline-grabbing IPOs of several success stories — Airbnb, DoorDash, and Vroom, just to name three. There’s never been a better time to at least try to build a market of buyers and sellers around almost any product or service. And truth be told, a lot of marketplace startups are finding success. The question is: Is that success sustainable? We’re still in the early stages of two-sided marketplaces (2SMs) as fertile ground for disruption. But their success will only be sustained if those marketplaces are built properly. To do that, they need to avoid the mistake I see most often: a marketplace should never try to be everything to everyone.

“A costly, painful, and time-consuming failure.”

That is a word-for-word quote from a fellow entrepreneur who watched his marketplace customer base splinter, fragment, and then fade away, done in by a problem he described succinctly in just two words: “Shitty vendors.” In 20-plus years of building startups and digital products, I’ve spent a good part of that time focused on marketplaces, from my first self-founded startup — writers network Intrepid Media — to my most recent project — a startup advice-on-demand app called Teaching Startup. In that time, I’ve learned that even if you choose the exact right product or service sector to disrupt, even if you develop amazing technology, and even if you perfect the customer experience, the success or failure of your marketplace startup will be determined solely by your sellers meeting the expectations of your buyers.
It doesn’t matter whether the vendor side of the marketplace is made up of providers, producers, or procurers: they have to be consistent, and the marketplace’s offering has to be thorough. Of course, vendor vetting is critical to that equation, and the deeper and more integrated a marketplace’s vetting process, the less likely its platform will buckle under the weight of poor quality, poor customer service, or even fraud. But to be a realist, vetting is a secondary problem, because no amount of vetting is going to solve a supply-and-demand problem. A marketplace can offer the most-vetted providers in the world, but if their products or services don’t fulfill the customer’s needs at an acceptable price point, the marketplace is already doomed. With marketplace startups, starting has become easy, but scale is everything. And the scale mistakes on the vendor side are usually made way before the vetting process is even considered.

Completeness: Those disengaged customers aren’t coming back

The most common mistake I see is when a marketplace sacrifices vendor depth for breadth. It’s the same difference in retail between your corner specialty store (depth) and Walmart (breadth). For whatever reason, probably in anticipation of attracting venture capital, most marketplace startup entrepreneurs want to build Walmarts right out of the gate. Their marketplace platform attempts to be everything to everyone within their chosen product or service sector, and they attract a lot of customers. But unfulfilled customers aren’t customers at all. In fact, they’re detractors. When a marketplace opens its doors to all customers and all vendors, the result is chaos, and the model almost immediately breaks. Instead, a marketplace must offer a compelling reason for a buyer to engage with its vendors. The 2SM platform does that by offering those buyers a value proposition they’re not going to find anywhere else. That offering starts with a narrowly defined product and a narrowly defined market.
And this is the part that’s most often skipped. Instead of trying to copy the incumbent market’s breadth of offering, a 2SM needs to pick apart that offering and decide whether each component is intended to provide value to the customer or merely to increase the margin for the vendor. The marketplace should only offer the value components. Then, rather than aiming for the widest selection of vendors to serve the largest possible customer market, a 2SM needs to start by onboarding select vendors who can offer exactly the right value to a very specific type of customer. Once the platform starts to see traction with that niche product in that niche market — and by traction I mean demand volume from customers that the platform can consistently meet with its vendor supply — then the offering is complete. Then, and only then, should the marketplace expand into less-niche variations of that product and wider segments of that market.

Variability creates friction, friction pushes away customers

The other side of the completeness equation is just as big a problem and even gets a little Catch-22: the more variability you offer the customer, the more friction you create for those customers. The good news is that friction can always be removed. But the mistake here is the same mistake inexperienced coders make: if you want to be able to debug your codebase, you need to introduce code changes in small chunks with lots of testing in between. When a 2SM rushes to offer a multitude of products and services, even if those products and services have a common thread within a given sector, the customer experience will vary wildly from product to product or service to service. This is especially true with services. Different providers execute with different strategies, and the marketplace platform not only has to provide the infrastructure to accommodate those differences, but should also constantly push its vendor population to conform to a consistent model whenever possible.
That means consistent pricing, time to delivery, fulfillment or execution, quality, and communication. Not only does offering variety in product or service type create more friction, but execution of the same product or service is going to be handled differently by different vendors. So not only should a 2SM onboard different types of vendors one at a time, it also needs to strategically onboard each of the first few vendors of each type. This means building a platform infrastructure flexible enough that the customer experiences none of the friction caused by two different vendor approaches to fulfilling the same customer expectation. Onboarding includes vendor vetting. There will be different vetting requirements for different types of vendors, and the vetting process will inevitably fail and need to be revised. It’s better to onboard a handful of vendors with a flaw in the vetting process than to onboard hundreds and not know where the process went wrong.

Balance between customers and vendors is the key

Sustaining a successful marketplace is a constant struggle to maintain an equal volume of supply and demand. The moral of my friend’s 2SM story is that scale without balance will crash on the customer side. In other words, shitty vendors don’t result in more customers, they result in more detractors. A 2SM will always need fewer vendors than it might think. And if that’s not the case, then the marketplace doesn’t have enough customers. Don’t build 2SMs like Walmarts, build them like the corner specialty store. Then, when that store is a raging success, do the digital equivalent of buying the building next door and expanding there too.

Hey! If you found this post actionable or insightful, please consider signing up for my newsletter at joeprocopio.com so you don’t miss any new posts. It’s short and to the point.
https://jproco.medium.com/why-two-sided-marketplace-startups-arent-for-everyone-9361140a0183
['Joe Procopio']
2020-12-21 12:24:00.787000+00:00
['Technology', 'Startup', 'Entrepreneurship', 'Product Management', 'Business']
1,726
Bharat Desai Biography | Net Worth | GujaratCelebs
Bharat Desai is not just a name but a whole brand. Along with his wife, Neerja Sethi, Desai founded Syntel in 1980. The multinational IT services and consulting company has become a phenomenally successful venture, making Bharat Desai one of the renowned billionaires in the United States of America. Together, the power couple were the masterminds of their success. So, let's read Bharat Desai's biography.

Early Life

Desai was born in Kenya into a family of Gujarati origin. His childhood was mostly spent in Mombasa and Ahmedabad. Bharat Desai completed his bachelor's degree in electrical engineering at the Indian Institute of Technology, Bombay. He also pursued an MBA in finance at the Stephen M. Ross School of Business. He went to the States as a programmer for TATA in 1976. Later, in 1980, he co-founded his own IT services organization, Syntel, with his wife, Neerja Sethi. The business was based in Troy, Michigan. Sethi stands as one of the richest businesswomen in America. At present, Desai is 68 and is the father of two children. Currently, they reside on Fisher Island, Florida, in the USA.

Business & Net Worth

Syntel was started up with an investment of $2,000 in 1980. In its very first year, it earned a total of $30,000, making the business a success. It is one of the first US-based IT services companies to employ a Global Delivery model. By 2005, Syntel had global development centers in the major Indian cities of Mumbai, Chennai, and Pune, as well as in Phoenix, Arizona. It also opened technology campuses in Chennai and Pune. Desai held the position of Chairman of Syntel. In 2018 the company merged with French IT provider Atos S.E. and began operating as Atos Syntel. The merger was valued at $3.4 billion. Before the merger, Syntel's annual revenue had grown to $900 million. Desai's name became more eminent when he appeared in Forbes' 'Richest in Tech 2016' and 'Forbes 400 2016'.
He was named in Forbes' 'Billionaires 2020'. As of 2020, his total net worth is $1.4 billion. Desai has joined the boards of multiple educational institutions, including the John F. Kennedy School of Government at Harvard University, Students in Free Enterprise (SIFE), and the Stephen M. Ross School of Business at the University of Michigan.

Sum Up

The power couple Bharat Desai and Neerja Sethi have raised the bar of triumph. Desai remains one of the most successful Indian-origin American billionaires. His achievements are something every rising entrepreneur dreams of. His journey and wit are recognized globally. If you are interested in more success stories of people around the world who are making India proud every day, scroll down the page. You can find biographies of influential people who aimed for and achieved their goals.
https://medium.com/@gujaratcelebs/bharat-desai-biography-net-worth-gujaratcelebs-1ce42adef3de
[]
2020-12-14 03:29:18.936000+00:00
['Technology', 'Biography', 'America', 'Michigan', 'Net Worth']
1,727
Can hackers hijack your clicks? — Clickjacking Attacks!
Not just flights but all your clicks could be hijacked too. The threat that's hidden right in front of you! The internet and technology have transformed not just our lives but those of cyber-criminals as well. With the advancement of technology, cybercriminals are constantly developing new techniques to trick victims for their own benefit. Clickjacking, formed from the two words "click" and "hijacking", works much like flight hijacking in that attackers trick the user into clicking content on a hidden website. It was initially described by Robert Hansen and Jeremiah Grossman. We often see in the movies how hijacking is done: armed attackers take over, or hijack, a flight to accomplish their objectives, making the innocent co-passengers their victims.

How does Clickjacking function?

Clickjacking is also known as a UI redress attack. It is conducted by exploiting UI weaknesses to add multiple transparent layers over a page in such a way that it looks like a legitimate website. When the user clicks on specific links, their clicks get hijacked and they end up interacting with a completely different, malicious website.

The basic idea behind how it's implemented

Attackers lure users with an attractive-looking page offering amazing deals. In the background, the attacker checks whether the user is logged into any critical sites such as banking applications. The bank's fund transfer page is displayed in an invisible iframe above the luring buttons. The victim visits the page and clicks a button or link, but in reality clicks on the invisible iframe, hitting the "Confirm Transfer" button instead of the attractive offer. Through such attacks, attackers most often target passwords, credit card numbers, and other valuable data they can exploit; the technique is sometimes also used to download malware onto the victim's device.

Where has the clickjacking attack been used?
Clickjacking has been used for various purposes in the past. Some of the most common include:

By rendering invisible elements over the Adobe Flash settings page, victims were tricked into turning on their webcam or microphone.

Worms were spread on famous social media sites like Twitter.

Diverting victims to malicious download links.

How to mitigate such an attack?

One of the best defenses is the use of HTTP security headers, which tell the browser how to behave when handling the site's content and add a layer of security against attacks like clickjacking.

1. X-Frame-Options

This is an HTTP response header, also referred to as an HTTP security header, that tells the browser how to behave when handling the site's content. It provides clickjacking protection by not allowing the page to be rendered in a frame. Three values are allowed for the X-Frame-Options header:

DENY — does not allow any domain to display this page within a frame. Famous sites using this value include Facebook and GitHub.

SAMEORIGIN — allows the page to be displayed in a frame on another page, but only within the current domain. Famous sites using this value include Twitter, Amazon, and eBay.

ALLOW-FROM URI — allows the page to be displayed in a frame, but only from a specific URI.

How to check for the X-Frame-Options header

You can check for the X-Frame-Options header in a few simple steps: go to your site, open the Network panel in Chrome DevTools, and search for X-Frame in the Headers tab.

i. Enable on Nginx

You can enable it in Nginx by adding this directive to the server block config:

add_header X-Frame-Options "sameorigin" always;

ii.
Enable on Apache

You can enable it in Apache by adding this directive to the config:

Header always set X-Frame-Options "sameorigin"

iii. Enable in IIS

You can enable it in IIS by adding this to the site's Web.config file:

<system.webServer> ... <httpProtocol> <customHeaders> <add name="X-Frame-Options" value="SAMEORIGIN" /> </customHeaders> </httpProtocol> ... </system.webServer>

2. Content Security Policy

The Content-Security-Policy HTTP header adds another layer of security: it allows developers to whitelist the domains that are allowed to embed their pages, as well as the domains from which resources such as scripts and fonts may be loaded.

3. Frame killing

Frame killing is the practice of inserting frame-killing JavaScript into pages that you don't want included in foreign iframes. It is mostly used to protect older browsers.
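To tie the two headers together, here is a minimal Python sketch of the framing decision a browser makes. The header names (X-Frame-Options, Content-Security-Policy, frame-ancestors) are real, but the framing_allowed helper, its arguments, and the simplified origin matching are illustrative assumptions, not a full CSP parser; real browsers implement far more of the frame-ancestors grammar.

```python
def framing_allowed(headers, embedding_origin, page_origin):
    """Sketch: may `embedding_origin` render the page in a frame,
    judging only by X-Frame-Options and CSP frame-ancestors?"""
    # HTTP header names are case-insensitive, so normalize them.
    h = {k.lower(): v for k, v in headers.items()}

    # In browsers that support both headers, the CSP frame-ancestors
    # directive takes precedence over X-Frame-Options.
    csp = h.get("content-security-policy", "")
    for directive in csp.split(";"):
        directive = directive.strip()
        if directive.startswith("frame-ancestors"):
            sources = directive.split()[1:]
            if "'none'" in sources:
                return False
            if "'self'" in sources and embedding_origin == page_origin:
                return True
            # Simplified matching: exact origin only, no wildcards.
            return embedding_origin in sources

    xfo = h.get("x-frame-options", "").upper()
    if xfo == "DENY":
        return False
    if xfo == "SAMEORIGIN":
        return embedding_origin == page_origin
    # No protective header at all: any site may frame the page,
    # which is exactly the condition clickjacking exploits.
    return True
```

Checking CSP first mirrors the precedence rule mentioned above: a page served with both headers is governed by frame-ancestors in modern browsers, with X-Frame-Options acting as the fallback for older ones.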
https://medium.com/digital-diplomacy/can-hackers-hijack-your-clicks-clickjacking-attacks-5dade6464bfe
[]
2020-08-03 12:02:55.515000+00:00
['Technology', 'Information Technology', 'Information Security', 'Cybersecurity', 'Awareness']
1,728
Technology Law: Turkey’s ICTA Regulated the Privacy Matters in the Electronic Communications Sector
Information and Communication Technologies Authority of Turkey ("ICTA") has recently issued the Regulation on the Processing of Personal Data and Protection of Privacy in the Electronic Communications Sector ("Regulation"), introducing critical arrangements in terms of standards of personal data processing, further obligations for operators, and additional opportunities provided to users/subscribers in the electronic communications sector. The Regulation introduces certain definitions, such as "explicit consent", in line with the Personal Data Protection Law numbered 6698 ("Law №6698"). Accordingly, explicit consent means any consent that is given with free will and based on information regarding a certain subject. The Regulation adopts further definitions, such as "user", meaning a real person or legal entity benefiting from electronic communication services, regardless of whether they are a subscriber, whereas a "subscriber" means a real person or legal entity who is party to a contract with an operator for the provision of electronic communication services under the Regulation. Thus, unlike previous regulations on personal data processing, the terms "user" and "subscriber" also cover legal entities by virtue of the Regulation.

Obtaining Explicit Consent

According to the Regulation, operators that provide electronic communication networks and operate infrastructure, or that provide electronic communication services under an authorization, are subject to several conditions when obtaining explicit consent from users/subscribers:

-The statement of explicit consent on a specific issue must be obtained before the transaction. General consents that are not limited to a specific subject or not related to the relevant transaction are not accepted.

-The statement of explicit consent must be given with free will.
-The user/subscriber must be informed of the scope, purpose, duration, and types of personal data and spatial data to be processed before explicit consent is obtained.

-The statement of explicit consent, including "yes/accept/consent", must be obtained electronically or in writing following this information.

-Records of statements of explicit consent must be retained pursuant to the provisions of the Regulation.

If operators violate these obligations, the ICTA is authorized to impose an administrative fine of up to 3% of the operator's net sales in the previous calendar year.

Exceptional Provisions Regarding Obtaining Explicit Consent

Under the Regulation, operators are not required to obtain explicit consent from users/subscribers for transactions establishing a subscription or providing basic electronic communication services or devices. On the other hand, a statement of explicit consent may be requested from the subscriber/user for additional benefits such as gift minutes, short message services, and data. Additionally, where traffic and spatial data are transferred to third parties, the following are subject to notification and a separate statement of explicit consent: (i) information on the scope of the transferred data, (ii) the full name and address of the transferee, (iii) the purpose and duration of the transfer, and (iv) the name of the country where the transferee is located, if abroad. The Regulation also stipulates that a statement of explicit consent cannot be combined with the acceptance of a contract or service, or with a statement of free will for marketing, communication, approval, confirmation, or similar legal transactions.

Security Issues

The Regulation elaborates operators' obligations regarding security issues and stipulates that they must retain transaction records of access to personal data and related systems for two years.
Moreover, the Regulation assigns additional responsibilities to operators, including ensuring the confidentiality, security, integrity, accessibility, and proper use of the data obtained within the scope of the services they provide, and holding them responsible even where the provisions of the Regulation are violated by parties they authorize.

Opportunities Provided to Users/Subscribers

The Regulation also establishes caller ID blocking. Operators are obliged to provide users/subscribers with the opportunity to hide their phone numbers, and to hide phone numbers for incoming calls, via a simple and free method. Operators are likewise obliged to give subscribers/users the opportunity to end an automatically redirected call through a simple and free method. The Regulation stipulates that operators shall provide the opportunity to revoke explicit consent through text messages, call centers, the internet, and similar methods, always free of charge, through the same or a simpler method.

Consents Obtained under Previous Regulations

The Regulation clarifies that consents obtained in accordance with the law before the effective date of this Regulation are deemed valid. However, if the processing of personal data obtained with consent before the effective date continues without the explicit consent of the data subjects, such processing must be stopped within one month following the effective date of the Regulation.

Ezgi Ceren Aydoğmuş
https://medium.com/@herdemattorneysatlaw/technology-law-turkeys-icta-regulated-the-privacy-matters-in-the-electronic-communications-sector-2328e7714243
['Herdem Attorneys At Law']
2020-12-07 08:20:49.383000+00:00
['Personal Data', 'Data Security', 'Technology', 'Turkey', 'Data']
1,729
6 tips for managing an effective CMS
1. Avoid the Failure to Assess Needs It’s easy to rush into selecting a CMS based on all the excellent features you hear about, without first considering whether those features fit your business and website needs. When running a CMS for your business, it is essential to have a dedicated website and specific business goals. Do your CMS needs include meeting existing needs, enabling you to attain projected growth, or both? Before matching the CMS to your website goals, ensure that it first meets your overall business goals. Consider your target audience, the expected result, and the time and resources you will require. By failing to examine what your business needs first, you’ll likely find yourself purchasing a CMS that does little or nothing for your website and business growth. Additionally, do not consider only your current goals when picking a CMS for your business. For instance, a writing service review website such as Pick the Writer may focus mainly on paper writing and not a lot of blogging. However, that doesn’t mean they can’t consider other channels to increase content marketing and the impact it could have on the business and budgeting.
https://medium.com/the-kickstarter/6-tips-for-managing-an-effective-cms-90133a4019d4
['Aleksey', 'Aleks']
2020-07-09 02:06:51.860000+00:00
['Technology', 'Marketing', 'Startup', 'CMS', 'Business']
1,730
Latest Technology Trends That Will Impact Businesses in 2021
Throughout 2020 we have seen some impressive innovations in the technological world, and we, the public, can be a fickle bunch when it comes to this business: many factors can affect our appetite. As I always note when addressing future technology, we need to look at the external factors that shape future outcomes. AI, 5G, autonomous vehicles, and blockchain held the top slots in 2019. 2020 saw the rise of automated processes, self-healing operating systems and databases, and a significant increase in personal data security. Data finally came onto the radar as a commodity with real value. The external impact of some high-profile media coverage (social network monopolization) has changed the direction of digital currency and data protection, and has given more weight to the ethics of usage. But with all this in mind, what should we plan for in 2021? Data Privacy: some significant improvements I sincerely believe that significant improvements are coming this year in how our data footprint is handled. With the EU’s General Data Protection Regulation (GDPR) as a primary influencer, more countries and states are now extending their data privacy laws. We will see users gain more power to govern whether, how, and where companies may use their data. This, along with data retention, will be high on the 2021 agenda! 5G: super-fast connectivity In 2019, 5G was the headline technology, but 2021 will be its year of maturity. Upgrades to infrastructure and greenfield installations are set up to meet the enormous boom in connected-device initiatives (38 billion devices are suggested to be in place during 2021). This will require faster and more efficient communications, which is likely why we are moving the management of such devices to the edge of our networks! SDGs have been heard! Sustainable Development Goals (SDGs) are finally getting the recognition they deserve, and it is about time!
In 2019, we saw a rise in awareness of accelerating climate change, natural disasters such as wildfires, and the contamination of our oceans. Global media coverage has been dramatic but successful in raising awareness of these issues. In 2021, we expect related sustainable technologies to be dominant (electric transport combined with smart-city approaches) and to reduce our carbon footprint. The technology behind these worthy targets is tried, tested, and mostly under-utilized! Automation is getting real! The growth of IPA (Intelligent Process Automation) will continue as complex automated tasks build on RPA (Robotic Process Automation) initiatives and expand laterally, not just vertically. We are going to start using more reference points to achieve broader automation. Healthtech, marketing automation (building on digital-twin analogies), and AI with a twist would be good examples of this (finally, dependable use cases). We will also see the rise of the bot (again), which will call certain technology roles into question. However, it is not all doom and gloom for technical and managerial functions, because we expect to see new automated-bot manager positions. Blockchain: is it a buzzword? That is the question! So, are we all over blockchain as a buzzword? Yes: 2021 will see the rise of many use cases with real business value as we gradually overcome the crypto stigma. Throughout 2020, we have seen many new and creative uses emerge, most of them custom tech ventures. Applications layered over this technology will emerge to showcase use cases that provide business value while requiring less in-depth skill sets. Oracle Intelligent Track and Trace is one example of this. Wait for more collaborations. With the proliferation of connected devices required to report data use and to manage diverse interconnectivity, we must look at how and where to connect!
Expect more strategic collaborations (e.g., Microsoft Azure and Oracle Cloud) that provide more straightforward solutions and allow for more seamless interactions between services. Most global technology providers are now working together, and this will only expand over the course of 2021, so let us plan for collaborative service delivery networks, which have to suit the customer, right? I am looking forward to this year’s technological evolution. In a month, at Oracle OpenWorld’s London launch, you can see what is in store for one of the biggest tech giants (Oracle), and it promises to showcase all of the above.
https://meetadeveloper.com/latest-technology-trends-that-will-impact-businesses-in-2021-d754e608e742
['Vishnu Narayan']
2020-12-24 07:35:31.550000+00:00
['5g', 'Sdgs', 'Business Development', 'Technology Trends', 'Blockchain']
1,731
My Perspectives of News Break. Sharing my personal experience and…
Probably you have heard a lot about the recent News Break mission for creators. It created a sensation among writers and freelancers. With an open mind, I joined the bandwagon because the offer was too good not to pursue. In this story, I want to share my experience, observations, and insights gained from fellow writers on News Break. The information I provide in this article is publicly available; I don’t share contractual information due to a confidentiality agreement. As you may know from my stories, I always stay neutral toward platforms, products, and services. I don’t condemn or endorse them; I see them as they are. Therefore, this story is not a sales or marketing pitch but information to empower writers, whom I care for deeply. Like many writers, I applied to News Break using an application form provided by one of our writers. My Initial Dilemma Even though I applied at the same time as many writers a month ago, my friends received approval within a week, and I was worried about why my application was taking so long. The reason turned out to be ridiculous. News Break had accepted my application and sent several email messages asking me to start in the program. However, all emails from News Break went to my Spam folder, along with 50,000-plus other messages. I am oversubscribed, and poor Gmail, like any mailing system, struggles with the diverse emails coming from every part of the globe. But I know it is protecting me. A wise friend advised me to check my spam folder. Bang! All messages from News Break were there. I felt stupid, of course, but I took immediate action and signed the creator program contract. The support team was responsive and resent it within 24 hours with clear instructions. I am grateful for the service they provided. The Publishing Process The publishing process is a breeze. News Break provided a link to my profile, and it took me only a few minutes to update it. The link to the publishing dashboard made it easy to start submitting my stories.
The dashboard is well designed and intuitive. You can create a story with ease, and the story board has the essential tools. Content Creator Guidelines I read the terms and conditions in the Content Creator Guidelines and Agreement, which are similar to those of other platforms. I cannot go into details for confidentiality reasons, but as public knowledge there are a few points that can be useful for new writers to consider. Your stories need to be a minimum of 1,000 words. As you may remember from my previous stories, it is 600 words for Vocal Media, and Medium does not have any restrictions. Each story must have at least one picture credited to its source. All platforms require this, and we are extremely sensitive about it in my publications. Extracts from other sources must be cited. Every platform on earth requires this. They expect original content written by you, not others. I have never seen a platform that accepts plagiarized content. Have you? As you can see, these requirements are similar across all platforms, which means you can transfer your writing skills to News Break easily. Perks for Creators There are many benefits for writers. You can find the incentives via this publicly available link. I will not go into details, as many writers cover them. For example, a comprehensive story by Tim Denning was published on Better Marketing, titled The Brutal Truth About News Break for Writers. Tim provides details in this story and compares both platforms neutrally. Even though some criticize Tim’s content, I agree with every point Tim makes in this article, based on my experience with News Break and Medium. As you may know, Tim is a superstar on Medium, LinkedIn, and several other social platforms. He is a source of inspiration for many of us. From my observations, Tim will shine on News Break as well, since he shares high-quality, high-impact stories that matter to individuals and society.
My Initial Experience with News Break Let me share how my stories performed within 24 hours. I submitted around ten stories on Saturday, and all of them were published within 24 hours. The stats on the profile do not refresh often (I guess every 24 hours), but the creator dashboard refreshes frequently and shows actual impressions, page views, and shares. My first story performed beyond my expectations. It was a story I had published on Medium and refined a bit to make it appealing to News Break readers. Here is the story, which received 32K+ impressions in a few hours. This is an outstanding performance for me, because when I published a similar story on Medium earlier this year, I received around 20 views and it was not distributed to topics. News Break, however, distributed it to the Beauty & Fashion topic. Only after I shared my Medium story with my circles on social media was it indexed by search engines and turned into reference material. I received around 5K views on Medium, but the views came through my own efforts. My story did not earn income, since external sources generated the traffic. By the way, receiving views and reads is more important to me than earning income, so I am still grateful to Medium for hosting my story. My second story on News Break also performed well: News Break distributed it to the Health topic immediately. The similar story I published on Medium was not curated, even though my discerning readers adored it. Why Medium did not curate that story remains a mystery to many of my readers. If you want to read my stories on News Break, you can follow my account. I write about health, mental health, leadership, technology, and content marketing. I will be posting many stories to empower writers on News Break and other platforms, as I did on Medium. Other Benefits for Creators As an appreciation for creators, News Break also gives creators an affiliate link to bring more high-quality writers to the platform.
If you haven’t applied yet, you can use the link they provided to me. This is the first affiliate link I have used in my stories, so I do it with full disclosure here. I add this writer application form not just to get points from News Break but for the convenience of our writers who want to apply for this program; finding the link on your own can be a challenge. It is entirely up to you whether to use it. Diversifying our writing portfolio is essential for our survival. I don’t know about you, but I don’t want to be exclusive to any writing platform. They can change their terms and conditions at any time. As you may know, I also started writing for Vocal Media, which brought me considerable benefits, as discussed in the story titled Vocal Is Not Just For Money. You can check my Vocal Media profile to read my insights aimed at specific communities. The only exclusive investments I make are in my mailing lists and my website. I cannot control or influence my readers on other platforms, but I can serve my long-term readers, who are fans of my content. They understand me and forgive the small mistakes I make, like everyone else does. By the way, News Break does not allow any calls to action in your stories. We cannot even add our mailing lists to our stories; however, we can add them to our profile page. If you have started writing for News Break, please remember to add a Follow widget, as it must be done manually. image screen capture by writer — follow DrMehmetYildiz on News Break I did not know about this feature and overlooked adding it to my previous stories, but one of my mentors reminded me of it today as a useful feature. How to Collaborate with News Break Writers To help News Break writers, I created a Quora Space where they can share links to their News Break stories. It is a public forum, and all News Break writers are welcome to participate. Please join the new Quora Space I created. One of the super News Break writers has already joined and shared her content.
For ILLUMINATION writers, I created a News Break channel on our Slack Workspace, where all writers can share their News Break stories. You can learn about our Slack Workspace from this story. I am impressed that many writers I follow on Medium write for News Break. One of the most inspiring ones is Matt Lillywhite, who has already gained 485K views, publicly visible on his great profile. I also found writers of our publication whom I closely follow, such as top writers Sinem Günel, Rose Bak, Devin Arrigo, Tim Ebl, John Cousins, and Jordan Mendiola, who submitted an article to ILLUMINATION-Curated about his News Break experience yesterday. Jordan is an inspiring writer who leads the way and shares his experience generously. I will compile many more stories in my publications and provide a reference point for our writers. What If Your Application Is Declined? It is common. News Break declined several applications, at least in my circles. I helped a few friends re-apply with relevant information, and some of them were accepted. My understanding is that you need to present yourself as an entrepreneur, not just a content producer. Your application should convince them that your content will be in demand and that you will be capable of bringing new readers to their platform. I believe that News Break wants to recruit top writers who can take responsibility for their content and audience. You need to walk your talk. If you need help, please contact me. I will help you with your re-application. Conclusion I have attempted to introduce the News Break Creator Program as an alternative for writers. Freelance writers need to diversify their writing portfolios. This post is not about condemning or endorsing any platform. Each platform has its mission, strategy, and goals, and we need to adhere to their rules to use their services. I enjoy writing on Medium, Vocal Media, and News Break.
I also write for a specific audience of my own who consume my specialized content, which cannot be found on public platforms. To achieve this, I use my mailing list and serve tailored materials to a segmented group of readers. Each platform has its pros and cons. The biggest pro of Medium for me is collaboration: Medium gave me the opportunity to connect with thousands of writers and readers, and I am grateful. I hope News Break and Vocal will do the same. I am optimistic and keep an open mind for opportunities. Thank you for reading my perspectives. I wish you the best in your writing career. Please always feel free to contact me when you need help; I am one message away from you. How to connect with me I established three significant publications supporting 6,500+ writers and serving 65,000+ readers on Medium. Join my publications by requesting access here. You are welcome to subscribe to my 100K+ mailing list, to collaborate, enhance your network, and receive technology and leadership newsletters reflecting my industry experience. I am on the ILLUMINATION Slack Workspace. I use Linktree to share my social platforms. Connect with me on News Break. Connect with me on Vocal Media.
https://medium.com/illumination/my-perspectives-of-news-break-3a8e82a8ffc6
['Dr Mehmet Yildiz']
2020-12-22 23:09:40.059000+00:00
['Writing', 'Technology', 'Self Improvement', 'Entrepreneurship', 'Freelancing']
1,732
How Blockchain Will Impact Academics?
Blockchain in the education sector In the current scenario of this digital world, the term Blockchain Technology has become a buzzword and an unavoidable technology in many industry verticals. Now blockchain plays a vital role in the education sector as well. Here, let us discuss: what are the challenges in educational institutes, and how will blockchain technology transform the academic world? Can’t wait? Let us jump into the discussion of blockchain in academics. Education is one of the main drivers of growth in fields like healthcare, agriculture, science, and almost all kinds of industry. It is a complex task to manage an academic organization, and there are lots of issues institutions face, from pre-kindergarten to Ph.D. students. Here are the main challenges in the education sector: 1. As the world has transformed, education has become indispensable. Billions of students enroll in various institutes each year, and it is a great task to maintain records of those students’ details as paper certificates. 2. When those documents are in paper format, it becomes a complex task to retrieve the particulars of a student who graduated 4–5 years back. It takes a lot of time to recover old data and documents. 3. Keeping students’ certificates secure is difficult, and it requires manpower. 4. The finance side of education also demands time and people to compute and maintain an organization’s accounts: the salaries of employees and staff, student fees, and the maintenance and operating costs of the institute. 5. Monitoring scholarships for students is also becoming hard, as there are ever more students and ever more scholarships supplied. These challenges push the academic sector into a difficult position.
This paves the way for the use of technologies in education, and the most important technology that helps education face all those challenges is “Blockchain Technology”. Enters Blockchain in Education The use of blockchain in the academic sector can bring a big rise within 2020. As the world is becoming smart and digitized, it is time to make the education system smart and digital too. This has in turn increased the number of organizations and colleges that provide online courses, digital certificates, etc. The use of blockchain in education has also created a way for accepting cryptocurrencies as payments/fees for courses in many institutions. This can result in a simple and transparent process for both students and institutions. There are many use cases of blockchain in education, and together they help build a healthy educational system. Here we discuss some benefits of using blockchain in education: 1. Replace paper records Using blockchain to keep the records of students, staff, and employees of an institute can reduce reliance on paper documents. It also helps avoid the risk of forgery or the prospect of losing records. 2. A cost-saving method As blockchain provides digital storage, the cost of paper and record printing is eliminated. The time and human effort needed to maintain those records are also saved. Thus, the use of blockchain in education proves to be a money-saving mechanism. 3. Data Transparency As we already know, blockchain is a decentralized and distributed ledger technology that spreads ownership across participating individuals. Thus, blockchain in the academic system can provide transparent storage of students’ records. 4. Financing and Accounting made Easy The use of blockchain in the education system can make the computation of scholarships for students and salaries for educators easier and more transparent. The adoption of bitcoin for the payment process in education systems makes it more methodical. 5.
Digital documents: a better way Most of the institutions that embrace blockchain provide online courses and digital certificates, and this makes it simple to connect students and institutes from different regions of the world. There are more benefits of using blockchain in academics, and it has proven to be a cost-effective method.
https://medium.com/@brugusoftwaresolutions/how-blockchain-will-impact-future-academics-527a00f453f3
['Brugu Software Solutions']
2020-12-30 06:09:17.918000+00:00
['Blockchain', 'Blockchain Technology', 'Blockchain In Education', 'Blockchain Startup']
1,733
7 Gift Ideas for Kids Who Want to Code
Buying gifts for a budding software engineer? These are some of the best coding toys and games to get your kids interested in STEM and spark their inner geek. By Carol Mangis & Jake Leary Kids are constantly glued to their devices: texting friends, playing games, watching (or creating) videos, and sharing selfies. But most children—whether or not they know it yet—are thrilled to have a more hands-on, under-the-hood experience with digital technology. Parents who want to encourage and inspire their kids to stretch their science, technology, engineering, and mathematics (STEM) muscles probably already know this, but there are plenty of toys and kits claiming to provide just that. Here’s a roundup of products that give kids a real, creative experience of electronics, robotics, and coding, and build interest and confidence in even the most neophyte techies. Elenco Snap Circuits Discover Coding A new kit from the venerable kit-maker Elenco offers another way to introduce kids to coding. Once they learn the basics, they’re able to control Snap Circuits projects’ lights, sounds, and motors via a phone or tablet. True beginners can start with graphical coding; once they’re ready, they can move up to Blockly coding. The kit comes with more than 30 Snap Circuits parts: colorful, rugged electronic components that snap together. Ages 8 and up. Piper Computer Kit 2 Piper’s Computer Kit 2 includes all the components you need to build your own computer: a Raspberry Pi 3 Model B, mouse, battery, blueprint, and set of screws, plus nuts, wood boards for the chassis, and a screwdriver to put the case together. The kit also comes with wires, buttons, switches, LEDs, and breadboards to use with post-build coding projects. Parents and teachers should be on standby throughout the build process, especially for kids at the low end of Piper’s recommended 8-and-up age range.
Once it’s built, you can play games in a special Minecraft mod that shows you how to build hardware projects you can use, in turn, to accomplish goals within the Minecraft universe. Even better, the Piper computer also serves as an actual computer; it goes online, has a word processor, and so on. The Piper Computer Kit remains a clever smart toy and, despite its steep price, is worth every screw-turning, breadboard-wiring, code-writing second. Kano PC The Kano PC kit is a great launchpad that can get kids ages 6 and older interested in computers and software coding. The kit contains the hardware components you need to create a working Windows 10 tablet — including an Intel Atom x5-Z8350 quad-core processor clocked at 1.44GHz, 4GB of DDR3 RAM, and 64GB of storage, as well as a MicroSD card slot and detachable keyboard. You connect everything together by following storybook instructions. Projects include coding and making art, music, and games, and kids will learn about the inner workings of a PC and how to code through onboard apps. Your DIY tablet will also work with Windows apps such as Word and Excel, along with many others. And you can visit Kano World online to safely share projects with the Kano community. Ages 6 and up. Robo Wunderkind Starter Kit Young coders will have everything they need to build their own little bots, as well as code and operate them, with the Robo Wunderkind Starter Kit. You get several types of motors and sensors, wheels, a programmable button, an RGB light, and a main block with a speaker, mic, and battery; everything snaps together easily. You use two apps: Robo Code, to set your robot’s behaviors; and Robo Play, which lets you remote-control your bot. The former has tutorials and guides you through building projects — and eventually, you can create your own robots. Even better, Lego adapters are included, so you can augment your creations exponentially. Ages 6 to 12.
Microduino Itty Bitty Buggy A builder kit that’s compatible with Lego blocks as well as with other kits from the company, the Microduino Itty Bitty Buggy comes with a base buggy to build on, as well as more than 50 snap-together modules with varying functions. Kids can build four fun projects with the kit: a Sloth, Ladybug, Dodo bird, and Alien. They can then code behaviors for their creations using either simple drag-and-drop coding (based on Scratch 3), more sophisticated Python, or even higher-level text-based Arduino IDE (C++). For those who want more, the Microduino Creative Expansion Kit ($19.99) features components for three more projects — or just invent your own. Ages 8 and up. LittleBits Code Kit We’re fans of LittleBits modular electronics kits; we’ve awarded them several Editors’ Choice awards, including the second-edition Gizmos & Gadgets Kit. The LittleBits Code Kit is designed to be used in educational settings to introduce kids from third to eighth grades to programming principles, by creating games through coding. LittleBits encourages parents to urge their schools to buy the kit. But if you really want to get your kids involved in coding and are willing to take an active role in the process, you can purchase the kit yourself. It may be overkill for some, but it’s a robust solution and an effective tool. Ages 8 and up. Lego Boost Creative Toolbox For kids who are a bit too young to handle Lego’s venerable MindStorms robotics kits, look no further than its Boost Creative Toolbox. Designed to introduce children to coding and robotics, the Boost kit provides building blocks with sensors, motors, and app-based coding to help them build a variety of robotic toys that can respond to stimuli. It’s a simple, fun, and relatively affordable approach that teaches the principles of programming, making it worthy of our Editors’ Choice designation. Ages 7 to 12.
https://medium.com/pcmag-access/7-gift-ideas-for-kids-who-want-to-code-2ec682ed493
[]
2020-12-14 17:02:35.359000+00:00
['Coding', 'Kids', 'STEM', 'Gifts', 'Technology']
1,734
Long on $NETE, (Mullen Technologies) Merger
Net Element is working on a merger deal with Mullen Technologies. Net Element also competes in point-of-sale transactions and systems, and may even be competition for Shift4 Payments. Having predicted surges like what happened with $ADOM, and having been long on $NIO, $TSLA, and other companies, I actually like some of Mullen’s tech. They also own CarHub and have been around since 2014. While Mullen is a bit late to the party compared to other EV players, who would not want this? If they increase their brand image and popularity, then you could be looking at a lot of room for growth. $NETE also currently has only a $54.85M market cap, which is quite minimal. Disclosure: Please keep in mind, everything I say is opinion-based. This is not meant to be taken as actionable financial advice. Do your own due diligence; any trades or investments you make are at your own risk. We are not responsible; proceed with caution. This is solely a non-actionable opinion, not a form of actionable advice.
https://medium.com/quantportal/long-on-nete-mullen-technologies-merger-58095c28d574
['Andrew Kamal']
2020-12-10 02:41:56.335000+00:00
['Stocks', 'Electric Vehicles', 'Tech', 'Technology', 'Stock Market']
1,735
How to Get Around Newspaper Paywalls in 2020
A paywall is a method of restricting access to content via a paid subscription. Beginning in the mid-2010s, newspapers started implementing paywalls on their websites as a way to increase revenue after years of decline in paid print readership and advertising revenue. In academia, research papers are often behind a paywall and are available via academic libraries that subscribe. Wikipedia founder Jimmy Wales has stated that he “would rather write [an opinion piece] where it is going to be read”, declaring that “putting opinion pieces behind paywalls [makes] no sense.” Without easy access to both read and share insights and opinions, the online news platform loses an essential characteristic of democratic exchange. This article is not meant to debate the commodification of information. If you use a news source regularly for work or personal use, and derive significant value from it, you should pay for it. But in an increasingly fragmented media landscape, it is not economically feasible for a casual reader to pay for costly monthly or yearly subscriptions to dozens of news sites. Below is a (nearly) comprehensive guide to the various methods that allow you to get around the paywalls, pop-ups, and adwalls common on many news sites. There will always be one or two articles that you cannot access without a purchase or without compromising your personal information, but you should be able to access at least 95% of news content for free using these tricks. These techniques will help you get around paywalls for the Wall Street Journal, New York Times, Washington Post, Financial Times, and more, without requiring login credentials or illegal hacking. One last note before you start hacking paywalls.
If you only need access to these sites for a brief period of time, you may be better off taking advantage of the free trial periods many publications offer and cancelling your subscription before it renews. Source: Paywall Hacks, “How to Get Around Almost Any Paywall Easily” (UPDATED 12/2/19 with new WSJ bypass). I vowed to find a way around the WSJ’s paywall after they sent a cease and desist to Outline (still an amazing resource for many news sites). It took me a few tries to find something that works, but here you go: 1. Use The Following Firefox Browser Add-on Link: https://github.com/iamadamdev/bypass-paywalls-firefox/blob/master/README.md Note: this add-on reportedly works on these other sites as well (although I have not tested all of them): Baltimore Sun (baltimoresun.com) Barron’s (barrons.com) Bloomberg (bloomberg.com) Caixin (caixinglobal.com) Chemical & Engineering News (cen.acs.org) Central Western Daily (centralwesterndaily.com.au) Chicago Tribune (chicagotribune.com) Crain’s Chicago Business (chicagobusiness.com) Corriere Della Sera (corriere.it) Daily Press (dailypress.com) Denver Post (denverpost.com) De Tijd (tijd.be) de Volkskrant (volkskrant.nl) The Economist (economist.com) Examiner (examiner.com.au) Financial Times (ft.com) Foreign Policy (foreignpolicy.com) Glassdoor (glassdoor.com) Haaretz (haaretz.co.il / haaretz.com) Handelsblatt (handelsblatt.com) Hartford Courant (courant.com) Harvard Business Review (hbr.org) Inc.com (inc.com) Investors Chronicle (investorschronicle.co.uk) Irish Times (irishtimes.com) La Repubblica (repubblica.it) Le Temps (letemps.ch) Los Angeles Times (latimes.com) Medium (medium.com) Medscape (medscape.com) MIT Technology Review (technologyreview.com) Mountain View Voice (mv-voice.com) National Post (nationalpost.com) New Statesman (newstatesman.com) New York Magazine (nymag.com) Nikkei Asian Review (asia.nikkei.com) NRC (nrc.nl) Orange County Register (ocregister.com) Orlando Sentinel (orlandosentinel.com) Palo Alto Online
(paloaltoonline.com) Quora (quora.com) SunSentinel (sun-sentinel.com) Tech in Asia (techinasia.com) The Advocate (theadvocate.com.au) The Age (theage.com.au) The Australian (theaustralian.com.au) The Australian Financial Review (afr.com) The Boston Globe (bostonglobe.com) The Globe and Mail (theglobeandmail.com) The Herald (theherald.com.au) The Japan Times (japantimes.co.jp) TheMarker (themarker.com) The Mercury News (mercurynews.com) The Morning Call (mcall.com) The Nation (thenation.com) The New York Times (nytimes.com) The New Yorker (newyorker.com) The News-Gazette (news-gazette.com) The Saturday Paper (thesaturdaypaper.com.au) The Spectator (spectator.co.uk) The Business Journals (bizjournals.com) The Seattle Times (seattletimes.com) The Sydney Morning Herald (smh.com.au) The Telegraph (telegraph.co.uk) The Times (thetimes.co.uk) The Toronto Star (thestar.com) The Washington Post (washingtonpost.com) The Wall Street Journal (wsj.com) Towards Data Science (towardsdatascience.com) Vanity Fair (vanityfair.com) Wired (wired.com) On the add-on's README page, scroll down to the download link. Click it and accept the Firefox permission popups that appear. You can then customize the browser extension: if you have existing logins on any of these news sites, make sure you deselect them, as this add-on will log you out of them.
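Under the hood, extensions of this kind typically rely on two tricks: presenting the request as a search-engine crawler visit (many sites serve the full article to Googlebot for SEO reasons, or to visitors referred from Google) and discarding the cookies that "metered" paywalls use to count free articles. The following is a rough illustrative sketch of that idea in Python, not the extension's actual code; the function name and header values are my own assumptions for the example:

```python
import urllib.request

# Headers that mimic how the bypass technique commonly works:
# look like Google's crawler, and like a click-through from Google search.
HEADERS = {
    "User-Agent": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
    "Referer": "https://www.google.com/",
}

def fetch_without_metering(url: str) -> str:
    # Each call builds a fresh, cookie-free request, so no stored
    # metering cookies (free-article counters) are ever sent.
    req = urllib.request.Request(url, headers=HEADERS)
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")
```

This only works against "soft" paywalls that decide client-side or per-visitor what to serve; sites that never send the article body to unauthenticated clients are unaffected, which is why the extension's supported-site list above is finite.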
https://medium.com/paywall-hacks/how-to-bypass-virtually-every-news-paywall-705602c4c2ce
['Casey Botticello']
2019-12-30 02:12:51.022000+00:00
['Technology', 'Productivity', 'Entrepreneurship', 'Privacy', 'Social Media']
1,736
US vs China in AI — A Realistic, No B.S. Assessment
With headlines like the ones below appearing in respectable publications, one could be forgiven for believing that China might already be ahead of the US in artificial intelligence (AI). But is it really? Let’s take a closer look beneath the headlines and the hype… Quantity does not equal quality Yes, China has applied for a lot of AI patents, and its filing rate is the fastest-growing in the world. But if you read this article closely, it also goes on to point out, “Though China has made more applications, it still has fewer high quality, high-value patents than world leader US and Japan…” Companies looking to secure high-value intellectual property typically apply for an international patent via the Patent Cooperation Treaty, where the US makes up 41 percent of all applications — more than any other country — indicating its national strength in the sector. — Chinese Firms Apply for More AI Patents Than Any Other Country, Yicai Global And in terms of turning those patent applications into actual businesses generating significant revenue and brand power, China is still very far behind. According to research done by one of China’s ‘Ivy League’ universities, Tsinghua, only one Chinese entity ranks among the top 10 global AI patent owners: the state-owned power grid enterprise. Source: China Institute for Science and Technology at Tsinghua University Talent is still hugely lacking and lagging China’s AI workforce is also still far behind the US in both quality and quantity. China has 18,232 AI researchers, compared with 28,536 in the US. More importantly, only 5.4% of the Chinese researchers are considered ‘outstanding’, whereas the proportion is 18.1% in the US. In case you think the study might be biased: again, the numbers were compiled by Tsinghua University.
Source: China Institute for Science and Technology at Tsinghua University To dig further: in a survey done by China Money Network, the founders of China’s top 50 AI companies were mostly educated abroad, including at Harvard, Brown, and several other US universities. Source: China Money Network Funding vs Value created Here’s where the picture starts to look a bit bleaker for the US. Research firm CB Insights found in its 2018 AI Trend Report that Chinese AI startups overtook their US counterparts in funding for the first time ever in 2017, spiking to 48% of the global total from just 11.3% in 2016. Source: CB Insights To be fair, the US still has a lot more deals than China: 50% of the global total compared to China’s 9%. So the US still has a lot more AI startups being funded, though clearly China saw some very large deals in 2017. Otherwise, it wouldn’t have been able to account for nearly half the funding amount with just 9% of the global deal volume. But that dominance in deal numbers is also trending down for the US, as shown by the chart below: Source: CB Insights And here’s a less talked about fact that should make Americans worry a bit. Almost half the money invested in AI by the three Chinese tech giants (Baidu, Alibaba, and Tencent) over the last four years went to US startups. Source: CB Insights Catching up? Nonetheless, even though China is clearly still some distance behind the US in AI, that doesn’t mean it can’t catch up, or even overtake. Indeed, if anything, money might be a key factor in China catching up. “We often believe that the US is so far ahead, that the US invented everything…And it has dominated the tech scene, but a miracle happened 10 years ago. That miracle is money, of course. 
The Chinese government has vowed to become a world leader in AI by 2030 with its strategic roadmap…Entrepreneurs and VCs started pouring cash into new AI startups.” — Kai-Fu Lee, speaking at the O’Reilly Artificial Intelligence Conference in September 2018 Lee was formerly the president of Google China. He is now one of the most famous venture capitalists in China. Wired magazine calls him “something of a rock star in the Chinese tech scene, with more than 50 million followers on social networks within the country”. Lee has helped launch five AI companies now worth a combined US$25 billion. He’s written about it in a new book called ‘My Journey into AI’. In his book, besides money, he says that China also has the advantage of very hungry entrepreneurs and abundant data on its side in the AI race. His old boss at Google seems to agree with him. Eric Schmidt, executive chairman of Alphabet (Google’s parent company), said in November 2017 that the Chinese central government’s AI development strategy should “set alarm bells ringing in America.” “… the U.S. needs to get our act together if it doesn’t want to fall behind on the technology that could determine the future of both the defense and commercial sectors… It’s pretty simple. By 2020, they will have caught up. By 2025, they will be better than us. By 2030, they will dominate the industries of AI.” — Eric Schmidt Maybe it comes as no surprise that the folks from Google are the ones ringing the alarm bells. It could be their own regrets knocking on their conscience. Their Chinese equivalent, Baidu, started working on AI long before they did… Source: CB Insights So the bottom line is this: China is still lagging in innovation and talent. But with money, highly driven young entrepreneurs, and government support, China has a good chance to dominate the world in AI. Most of the top AI startups are focused on software aspects like natural language processing, computer vision, and voice recognition. 
Given my own limited experience with some of their products, I would say their technology is already as good as, if not better than, that of their Western competitors. Here are two more useful infographics on the valuations and spread of the top Chinese AI companies.
https://medium.com/behind-the-great-wall/us-vs-china-in-ai-a-realistic-no-b-s-assessment-a9cef7909eb6
['Lance Ng']
2019-01-26 01:55:14.016000+00:00
['China', 'Technology', 'USA', 'Artificial Intelligence', 'AI']
1,737
Failed Predictions for 2020 We Wish Came True
It’s always interesting to wonder how well our ancestors, predecessors, and younger selves knew where they were going. But equally fascinating, in my opinion, are those bold predictions from the past that landed completely wide of the mark. Not only is it a neat insight into the way the minds of the past considered their place in human history, but it serves as a reminder that no matter our achievements, we can never gauge our societal momentum with any real exactness. 2020 has been an eventful and chaotic year, so I figured I would turn my ear back to the voices of the past and delve into a little alternate history. It seems that those voices had a lot of ideas about what 2020 in particular might look like. Prophecies aren’t interesting, so I made sure to stick to considered, thoughtful predictions made by futurologists, writers, engineers, scientists, and other trend forecasters. These are the unmet expectations, forgotten dreams, and unrequited wishes for the 2020 that never was. A 26-Hour Work Week Photo by You X Ventures on Unsplash This one is probably the most disappointing. In 1968, physicist Herman Kahn and futurist Anthony J. Weiner predicted that by 2020 the average American would be working 26 hours per week- about 1,370 per year. It was a pretty bold prediction considering the average American worked approximately 37 hours per week in 1968. And it speaks to the optimism of the Post-War period, which envisioned a future of linear progress and continuous economic growth. As it stands, the average American now works roughly 35 hours per week according to the Bureau of Labor Statistics- and that figure varies according to factors such as gender, age, and marital status (the average for men is 41 hours, for instance). It also doesn’t include the “side hustles” with which many modern Americans increasingly feel the need to support themselves. The U.S. also has a relatively high figure of 11% of its employees working over 50 hours per week (according to the OECD). 
Sadly, the idea of a 26-hour work week seems less realistic now than it did in the 1960s- and not just for Americans. But if any country is going to get close to making it less of a fantasy, it will be a progressive nation like Denmark, Norway, or The Netherlands. Humans Will Land On Mars Photo by Nicolas Lobos on Unsplash Although this prediction is, ultimately, wrong- it’s not far off. The idea that we would send human beings to Mars by 2020 is something I remember growing up with, in fact. Humans setting foot on Mars by the early 21st century was a recurring promise in the books and documentaries I consumed as a kid. In a 1997 issue of WIRED, Peter Leyden and Peter Schwartz gave 2020 as the year we would finally succeed in sending a manned spacecraft to the Red Planet. We’re on our way, having successfully landed several robotic craft (such as probes, rovers, and landers), but current estimates for a manned mission put it a good decade hence. What’s most interesting about Leyden and Schwartz’s prediction, however, is not that we would reach Mars by 2020, but that we would do so as part of a “joint effort supported by virtually all nations on the planet”. They describe four astronauts from a multinational team beaming images of the Martian landscape back to 11 billion people- which is also interesting, as the most recent United Nations estimates for the world population (as of September 2020) sit at 7.8 billion, with 10 billion not expected until 2057. The beaming of those images is an important part of the prediction, though, and tells us that this was as much a prediction about sociology as it was about scientific discovery. The images that were never beamed to us this year have an emotional weight to them. Leyden and Schwartz envisioned the 2020 Mars landing as being a turning point in history, a triumph of global cooperation that would put an end to an Earth divided by nations and give rise to a more collective mindset. 
“The images from Mars drive home another point: We’re one global society, one human race. The divisions we impose on ourselves look ludicrous from afar. The concept of a planet of warring nations, a state of affairs that defined the previous century, makes no sense.” It’s poignant to think that this, rather than our technical capabilities, has proven to be the most unrealistic aspect of their prediction. It makes me think of classic science fiction from the Cold War era (think Gene Roddenberry’s Star Trek or Poul Anderson’s Tau Zero) in which a future spacefaring Earth always had a single identity. Nation-states were gone, but cultural identities were never lost. Ethnic and religious conflicts were seen as archaic. Although it may seem far away right now, there is hope in the idea that through technology we can achieve social progress. The Death of Nationalism Photo by Jørgen Håland on Unsplash This one ties in quite nicely to the previous prediction. If you think about it, they’re essentially the same: through advances in technology, we can overcome national and ethnic divides, and come together as one. In 1968, political science professor Ithiel de Sola Pool confidently proclaimed that “By the year 2018 nationalism should be a waning force in the world,” due to our enhanced capabilities for translation and communication. While it’s true that the internet has facilitated a more interconnected world, our technical innovations haven’t brought about the greater empathy de Sola Pool hoped for. Quite the opposite, in fact. Trump, Brexit, Bolsonaro, Erdoğan, Orbán, the Front National, and the Alternative für Deutschland were and are driven by a viciously-xenophobic, fervently anti-intellectual brand of populist nationalism. The question that remains is whether de Sola Pool’s prediction was wrong entirely or whether it was simply premature. 
If we are to think of human history in terms of Hegelian Dialectics, then the process of nationalism’s erasure could very well be underway. It’s just not a smooth and linear process. Rather, it’s a messy, generational progression of “two steps forward, one step back”. The French Revolution deposed a tyrannical monarchy but led to a little something known as The Terror, and from that chaos emerged a new tyrant in the form of Napoleon- a political opportunist who derailed the very liberty he professed to love. It was a good half-century before the Revolution bore fruit insofar as individual liberty was concerned. By that same token, the rise of Trump, Brexiteers, and those like them could be the last fightback of populist nationalism as the world moves inexorably to a more interconnected and interdependent future. The more they swing in one direction, the likelier it is that the next generation of policymakers will move to compensate. My point being, we won’t know for certain that de Sola Pool was off the mark until many years hence. Hyper-Intelligent Apes Will Be Our Personal Slaves Photo by Margaux Ansel on Unsplash No, I’m not kidding. During my research for this article, this was the prediction for 2020 that seemed to crop up the most in my internet searches. Probably because people can’t quite believe that this was a serious prediction for the world in which we now live. In 1967 The Futurist published an article that stated “By the year 2020, it may be possible to breed intelligent species of animals, such as apes, that will be capable of performing manual labor.” According to the writer this included everything from vacuuming the house to pruning the rosebushes, and even driving our cars. These apes, which would be specially-bred and trained as chauffeurs, would supposedly reduce the number of car crashes. 
Now I’ve never seen a chimp drive a car outside of a circus, so I can’t attest as to whether or not they would be more adept at spotting potential hazards on the road than we are. But these aren’t just any old apes- the article implies they’re a kind of super-ape, bred for specific purposes in the same manner as dogs. Alas these apes don’t exist, but the basic idea that by 2020 we will use our enhanced technology to find new uses for animals is not incorrect. Scientists and mechanical engineers at Singapore’s Nanyang Technological University have recently experimented with the creation of “cyborg insects”, successfully implanting electrodes into the leg-muscles of beetles in order to control how they move. These remote-control bugs- far cheaper than robots of the same size- can theoretically be put to a number of uses- from espionage to search-and-rescue. It’s not as impressive as a baboon trying to scrub dried oatmeal from a breakfast bowl, but it’s in the spirit of things. Telepathy & Teleportation Photo by David Clode on Unsplash Perhaps the most surprising aspect of this prediction is not so much that it exists, but that it was made as recently as 2014. Michael J. O’Farrell, founder of The Mobile Institute and veteran of the tech industry, proclaimed in the 2014 book Shift 2020 that both telepathy and teleportation will have been made possible by the current year. This breakthrough was supposed to have been achieved through a process known as “nanomobility”. O’Farrell writes that “By 2020, I predict people will become incubators for personally controlled and protected Embodied Application Platforms and Body Area Networks, with a primary source-code Physical State and hyper-interactive genetically reproduced Virtual States. All states would host a mass of molecular-sized web-servers; IP domains and AP transport protocols capable of self-sustaining replication, atomically powered quantum computing and persona-patented commerce. 
I have coined the phrase nanomobility to capture and describe this new uncharted state.” So what’s the modern reality of telepathy and teleportation? Well, the truth is that they simply don’t exist- at least, not in the way we typically imagine these concepts. The closest we’ve gotten to telepathy is electro-encephalography (EEG), in which a device not dissimilar in shape to a swimming cap is outfitted with large electrodes and placed upon the scalp of the subject. These electrodes record electrical activity which is then interpreted by a computer. Scientists have used this interface to both send signals from the brain and receive electrical pulses in turn. Volunteers have been able to transmit brain activity to each other, to computer software, and even to animals- with one volunteer able to stimulate the motor area of a sedated rat’s brain in order to get it to move its tail. The closest scientists have come to something resembling teleportation is a process known as quantum teleportation, which is less an act of transportation than of communication. Quantum information has been proven capable of being transmitted from one place to another. In 2014, researchers at the Technical University Delft reported having teleported the information of two entangled quantum bits (qubits) three meters apart. These breakthroughs may not have impacted our everyday lives in the way the futurists hoped, but they are nonetheless extraordinary accomplishments that we can only hope will serve as part of a greater journey of discovery.
https://medium.com/predict/failed-predictions-for-2020-we-wish-came-true-7dba84a76bea
['Michael J. Vowles']
2020-12-11 22:43:19.768000+00:00
['History', 'Future', 'Technology', 'Science', '2020']
1,738
Autonomous Vehicle as a Social Factor: History, Benefits, Forces, Problems, and Opportunities
Abstract “Technology” is a term that pales if it is abstracted from its social context. From culture to culture, technology’s triumph is never its own triumph, because marketing interests, political decisions, social change, and economic conditions all lead to it. Massive technology itself is never an “isolated genius”. In this paper, the autonomous vehicle is articulated through the history and evolution of self-driving cars and related human involvement. The evolution of autopilot vehicles through social relations will also be discussed, along with the attendant social problems and future opportunities. Introduction: What Is an Autonomous Vehicle An autonomous vehicle (AV) is also called a self-driving car. The term describes a vehicle that is capable of sensing its environment and moving safely with little human intervention (Taeihagh, 2019, p.39). Because of the inconsistent terminology, classifications, and levels proposed by different organizations, the standards for “automatic” used in the industry vary. To be more specific, a self-driving car can also be called a Connected and Autonomous Vehicle (CAV), driverless car, robo-car, or robotic car. Combining a variety of sensors with software, the autopilot system allows a car to operate safely and stay in control by itself. The History and Evolution of Self-Driving Cars and Human Involvement “The social and cultural context in which a work was created is an integral part of it, one that cannot and should not be slighted in apprehending the work at a later time.” (Csikszentmihalyi & Robinson, 1990, p.51) Most of the time, social and historical context should be considered a significant contributor to innovation. Although modern technology does not allow us to communicate across the boundaries of time from era to era, looking back at history can give us systematic background information. 
In Automated Driving in Its Social, Historical, and Cultural Contexts, Fabian Kröger traces central elements in the pictorial and technological history of driverless cars (Kröger, 2016). Here are some of the turning points I captured: In the 1500s, the first blueprint for a self-propelled cart was sketched by Leonardo da Vinci, at a time when the concept still sounded impossibly futuristic. In the 1920s, the dream began and experiments were conducted, although 100 years ago the idea sounded abstruse or even illusory. In the 1970s, the first semi-automated car was developed by Japan’s Tsukuba Mechanical Engineering Laboratory. However, special assistance was still required, including a specially marked environment, an analog computer, and cameras. In the 1980s, CMU pioneered steering control for autonomous vehicles. Although speed and braking had to be controlled manually, the vehicles could travel 2,797 miles. In the 2000s, self-parking systems began to emerge. In the meantime, the U.S. Department of Defense’s research arm sponsored a series of challenges that pushed the technologies forward. In the 2010s, Tesla’s semi-autonomous “Autopilot”, with a hands-free control feature, was introduced in 2015. Automobile manufacturers are now all racing to build autonomous vehicles. Benefits and forces for the technology revolution Albert Borgmann (2010) mentions in his article “Reality and Technology” that although technology is economically precise, its cultural and moral force needs explication. Technology is a massive and seductive force. According to him, the technology revolution is propelled by two major forces: (1) the provision of relief and (2) the pursuit of pleasure (Borgmann, 2010, p.33). The two corresponding forces behind autonomous vehicle technology discussed here are (1) safety considerations and (2) more pleasurable, comfortable travel experiences. The facilitation of this innovation is not just due to technology breakthroughs and hardware improvements. 
The combination of economic facilitation, political decisions, insurance pushes, and traffic conditions also matters. In this way, the social contexts in which these artifacts are situated are also considered. Improved Driving Safety It is commonly believed that autonomous vehicles reduce traffic accidents and fatalities. Google’s developers state that autonomous vehicle technology could cut traffic-accident casualties to half of today’s levels (Thrun, 2010). The reasons include: (1) computers react far faster than human brains; (2) digital algorithms are not subject to temperamental, emotional, or psychological effects; (3) the operating system cannot be impaired by drugs or alcohol, and it never gets sleepy. However, if self-driving services become publicly available, economic aspects may be treated as the highest priority by individuals, insurers, and manufacturers alike. Will that create safety concerns, and will the standards still meet our safety expectations? If autonomous vehicles are as safe as expected, accidents caused by human factors would decrease, and insurance companies might rethink traditional business models to fit the novel circumstances. In the meantime, new systemic solutions to guarantee safety in organizations, including regulations, authorities, and safety culture, will also be altered. A new transformation of automation markets and business models will occur. More Convenient and Pleasurable Travel Experience It is not hard to imagine that autonomous vehicle technology will bring an improved and more comfortable driving experience. The boring driving process can become a relaxing trip, and passengers are free to multitask, doing both work and leisure-related activities while in the vehicle. In other words, people inside self-driving cars can spend that time on other, more pleasurable and productive activities. 
A new socio-technological system for autonomous vehicles will be another outcome of the new transportation plans. Although current studies show ambiguity and low predictability of the long-term impact, it can assist in achieving a radical improvement in future transportation (Aleksey & Maria, 2020). Efficient travel and ease of commuting make people care less about living far away from their workplaces. This can also lead to decreased urbanization and a new organization of urban space. What’s more, ownership may no longer be necessary. If sharing services develop and evenly spread out the population, the urban environment can be improved. Other Potential Social Problems Intentionally or unintentionally, any groundbreaking change brings social issues, and the self-driving vehicle will be no exception. However, autonomous vehicle technology should not be the thing to blame. Attributing the tragedies to the hardware itself is like attributing crimes to the victims. “When designing interactive systems, it is critical to understand the social and collaborative aspects of interaction and experience.” (Forlizzi & Battarbee, 2004, p.266) When it comes to the social determinants of technology, what matters is the social and economic system it sits within. Only if we stop victim-blaming and start to look at the social context of these man-made systems can the analysis be objective and valuable. The following paragraphs will discuss: (1) Social Structure Changes and Unemployment, (2) Intensification of Social Inequality, and (3) the Cyber-security Issue. Social Structure Changes and Unemployment Industry structure transformation comes with the evolution of self-driving cars. It is hard to tell whether it is more beneficial or harmful to our society, but to adapt to industrial innovation, factories and organizations will develop advanced routines and rules. 
“Success rests on the combination of people, skill sets, artifacts, and modes of organization.” (Matthewman, 2011, p.22) When a new industry has just emerged and a supply chain has just launched, traditional skills and modes can no longer be used. Although new career fields will be created, people who earn their living from driving will suddenly be out of their traditional jobs. Since driving is their main marketable skill, it will be difficult for such unemployed workers to quickly find new work, and the cost of providing unemployment benefits and retraining those low-skilled workers could be high. The domestic job losses also cause other social problems, including homelessness, social isolation, and increasing crime rates. Intensification of Social Inequality Technologies of transit are not neutral. “The technological deck has been stacked in advance to favor certain social interests and that some people were bound to receive a better hand than others.” (Winner, 1986) The increasing use of self-driving vehicles may also lead to new forms of social difference. In other words, the new industries and products can be distributed unevenly across class and racial lines globally. In “Autonomous automobilities: The social impacts of driverless vehicles” (Current Sociology), the authors question whose values and preferences autonomous technology might reflect (Bissell et al., 2018). Women may be marginalized in the evolution of autonomous systems, just as has been observed in other male-dominated fields, including robotics engineering and software development. “Technologies can be used in ways that enhance the power, authority, and privilege of some over others.” (Winner, L, 1986, p.25) During the 20th century, Robert Moses embedded social-class biases in the design of his low overpasses on the roads to Jones Beach, which were constructed to achieve a certain social effect. 
Some designs are intended and some are not, but artifacts serve a purpose: to favor certain social or political interests and certain groups of people. Sometimes sacrifices occur. It is likely that automobility will have similar impacts. What’s more, how old people and people with disabilities can use the driving service remains unknown. Cyber-security Issue The increasing scale of hardcore technological innovation also creates cyber-security issues. Matthewman argues that although technology affects history, it is not able to drive it, since it cannot be divorced from human desires, needs, and passions (Matthewman, 2011, p.9). Automotive software involves digital algorithms, and the algorithms are designed by companies, which leaves a possibility for surveillance. Any flaws in the tracking, data-receiving, and regulating processes can be exploited. Especially during the early stage of adoption, the driving system might be hacked through its abundance of digital infrastructure. What’s more, it remains debatable who should have access to automotive data and user information under varied conditions. This leaves self-driving innovation with a great social problem: how to generate, send, and store user data more securely. Conclusion Besides technological, ethical, regulatory, and legal challenges, a more advanced system and design for social change are still necessary. Autonomous vehicle technology offers not only a brand-new way to build a new social order in the world but also a field for designers and researchers to explore. As designers, we are still not sure how self-driving cars will transform the driving experience. There are three elements of technological normalization: institutional, contextual, and systemic (Matthewman, 2011, p.25). What will autonomous vehicles be? Will machines give labor, support humans, or take labor? 
(Automation, Augmentation, or Heteromation?) How will people spend their time on the road, and how can the experience be improved? The emotional, perceptual, and expressive dimensions of car travel remain unknown. For the design of future autonomous vehicles, first, the safety of both drivers and passengers should be the priority. Secondly, usefulness, convenience, reliability, and accessibility should be considered in order to increase the acceptance of autonomous vehicles. But what about other aspects? There are also other factors for acceptance and many challenges along the way. References Aleksey Z., & Maria R. (2020). Impact of Self-driving Cars for Urban Development. Форсайт, 14 (1 (eng)), 70–84. doi: 10.17323/2500–2597.2020.1.70.84 Bissell, David & Birtchnell, Thomas & Elliott, Anthony & Hsu, Eric. (2018). Autonomous automobilities: The social impacts of driverless vehicles. Current Sociology. 001139211881674. 10.1177/0011392118816743. Borgmann, A., 2010. “Reality and technology,” Cambridge Journal of Economics, Oxford University Press, vol. 34(1), pages 27–35, January. Csikszentmihalyi, M. C., & Robinson, R. R. (1990). The Art of Seeing. In The Major Dimensions of the Aesthetic Experience (pp. 27–72). J. Paul Getty Museum and the Getty Education Institute for the Arts. https://www.punyamishra.com/wp-content/uploads/2017/09/csikzentmihaly-dimensionsofaesthetics.pdf Forlizzi, J., & Battarbee, K. (2004, August). Understanding experience in interactive systems. In Proceedings of the 5th conference on Designing interactive systems: processes, practices, methods, and techniques (pp. 261–268). Kröger F. (2016) Automated Driving in Its Social, Historical and Cultural Contexts. In: Maurer M., Gerdes J., Lenz B., Winner H. (eds) Autonomous Driving. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-48847-8_3 Matthewman, S. (2011). Technology and Social Theory. Chapter 1, pp. 8–28 Shubbak, M., Self-Driving Cars: Legal, Social, and Ethical Aspects (December 18, 2013). 
Available at SSRN: https://ssrn.com/abstract=2931847 or http://dx.doi.org/10.2139/ssrn.2931847

Taeihagh, A., & Lim, M. (2019, January 2). Governing autonomous vehicles: Emerging responses for safety, liability, privacy, cybersecurity, and industry risks. Transport Reviews, 39(1), 103–128.

Thrun, S. (2010). What we're driving at [online]. Official Google Blog. Available from: http://googleblog.blogspot.com/2010/10/what-were-driving-at.html [Accessed 12 December 2013]

Winner, L. (1986). Do artifacts have politics? In The whale and the reactor (pp. 19–39). Chicago: University of Chicago Press.
https://medium.com/@seren980224/autonomous-vehicle-as-a-social-factor-77e1264dde7
['Qingran Ni']
2020-11-15 20:11:03.490000+00:00
['Autonomous Vehicles', 'Social', 'Robotics', 'Self Driving Cars', 'Driverless Car Technology']
1,739
How Blockchain Can Disrupt The Card Payments Industry — And Why It Hasn’t Already
Photo by Iker Urteaga on Unsplash

Blockchain is a buzzword that is starting to lose its buzz. The word is thrown around constantly, and people often have little idea what it means. But the technology does have value, and it doesn't have to remain an esoteric term. Instead, blockchain has the capability to disrupt and change a number of industries. The card payments industry is one of them. While card payments may be convenient for shoppers, merchants have been footing the bill for this convenience through the high processing fees charged by the card processing intermediaries. Since blockchain technology is based on the core principle of decentralization, it provides an opportunity to break through the multiple "middleman fees" with an alternative low-fee digital payments network. And, beyond convenience, merchants can share these savings in processing fees with their shoppers to elevate their experience to a whole new level. Let's look at how blockchain can help improve the card payments industry and, if it's so great, why it hasn't already.

Problems the industry currently faces

The card payments industry has a lot of issues right now. It's a huge industry, dominated by a handful of companies, that sees billions of transactions a month. One of the biggest payment card companies, Visa, handles around 150 million transactions every day. But although things seem to be working in order, there is a lot that can be improved for the average merchant. For example, merchants currently face high transaction fees. A card transaction requires the buyer to effectively authorize the seller to "pull" a payment from their account, passing through several financial intermediaries in the process. A typical card transaction, besides the merchant and the cardholder, involves at least four parties (commonly referred to as merchant services providers).
These include the acquirer (the financial institution that enables payments to the merchant), the payment gateway, the interchange (e.g., Visa, MasterCard or Amex), and the issuer (the cardholder's bank). With the card payment system currently in place, all fees go to these intermediaries. If a shopper buys a good worth $100 using a debit card as opposed to cash, these intermediaries will take between 1.5 and 3.5 percent, which means the merchant receives as little as $96.50. The problem is not just with card payments, either. Many companies that now offer "mobile payments" as an alternative to card payments still charge almost the same in processing fees. Again, this can prove costly over the year, especially in a world that's becoming less dependent on cash. Despite the relative efficiency of these existing systems, users are still vulnerable to manipulation and risk. Nor are these services offered for free by the processors: maintenance fees and other service charges do exist, and they add up over the course of the financial year. This brings us on to the next point: chargeback fees. Chargeback fees occur when a customer disputes a transaction with a merchant. If the customer is right in their dispute, the card company takes the money from the merchant's bank, rightly puts it back in the customer's account, and a fee is incurred. Common grounds for disputes include unauthorized transactions and goods never delivered or delivered late. Card companies can charge over $20 per dispute, which adds up over time, potentially costing merchants thousands over the year.

What solutions can blockchain bring to the table?

So let's look at how blockchain can make the merchant's life easier and potentially disrupt the card payments industry. Firstly, blockchain, as you are probably aware, is a decentralized technology. Therefore, if we were to bring blockchain to the card payments industry, many middlemen could be cut out of the picture.
And with these middlemen out of the picture, transaction fees would be, too. This would effectively slice the transaction overhead and even facilitate what are known as "micro-transactions": small transactions conducted with near-zero charges and instant verification. Likewise with chargeback fees. Payments in a blockchain system allow the customer to decide how much money is given, and merchants are unable to take more. There is no central authority; rather, everything is held in a ledger. The middleman who wants to take a cut is eliminated, which benefits not just the customer but merchants as well, and makes chargebacks a thing of the past. International payments could also be made easier, as many banks charge fees for using their services abroad. A blockchain payment system would allow shoppers to buy things with ease, whenever and wherever they are in the world. And smaller merchants, who previously wouldn't have been keen on international sales due to cross-border fees, would be given the chance to flourish.

If blockchain solutions are so great, why haven't they been taken up already?

Photo by William White on Unsplash

Well, for starters, using a blockchain system for payments would likely entail cryptocurrencies, and both retailers and shoppers are confused by such a system. As mentioned before, although it's slowly gaining traction in the mainstream, blockchain is still esoteric, and retailers are keen to stick to what they are familiar with. Volatility is the main problem putting off shoppers and retailers. The prices of cryptocurrencies are highly volatile and frequently rise or fall by over 10 percent in a single day. For this reason, using cryptocurrencies usually involves a lot of guesswork as well as trend research, something neither shoppers nor merchants want to risk. Complexity is another problem.
The current blockchain and cryptocurrency payment solutions are designed for early enthusiasts, with little emphasis on user-friendliness. Mass adoption would call for a system as seamless as a card payment, Apple Pay or Google Pay. Shoppers, and even merchants, often aren't too familiar with what is going on in the background. They wouldn't have to understand the technology; it would just have to be much more accessible. This is another reason why it hasn't quite disrupted the industry just yet. Availability (or accessibility) is yet another hurdle. Currently, cryptocurrencies aren't widely used, and places to spend them are scarcer still. If someone has money that can be spent through the blockchain, they are often unable to spend it on everyday purchases. Scalability is a final problem. The early blockchain protocols (like Bitcoin and Ethereum) are quite constrained in the number of transactions they can process concurrently, and the cost of committing a transaction on the blockchain network is too high. But this limitation is quickly fading away with the emergence of next-gen blockchain infrastructure technologies like Stellar, Ripple and IOTA, which were designed with the requirements of a highly scalable and cost-efficient payment system in mind.

There is room for it. It will just take a while to be taken on board

Blockchain technology has enormous potential as a way to securely and transparently process transactions and effectively manage data. A scalable blockchain digital payment solution can offer merchants an alternative way to accept payments without the heavy fees charged by banks and payment processing companies on transactions and chargebacks, allowing them to provide better value and a better experience to their customers.
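The fee arithmetic running through this piece is simple to make concrete. A small sketch, using the fee figures quoted above ($100 sale, 1.5–3.5% processing fee, roughly $20 per chargeback dispute); the helper name is invented for illustration, not from the article:

```python
# Illustrative sketch of the intermediary-fee math described in this article.
# Fee percentages and the ~$20 dispute fee come from the article's examples;
# the function name is a made-up helper.

def merchant_net(sale, fee_pct, chargebacks=0, dispute_fee=20.0):
    """Amount the merchant keeps after processing fees and dispute fees."""
    return sale * (1 - fee_pct / 100) - chargebacks * dispute_fee

# A $100 debit-card sale at a 3.5% processing fee nets the merchant $96.50,
# matching the article's example; a near-zero-fee blockchain rail would let
# the merchant keep close to the full $100.
```

A single disputed transaction wipes out the margin on several clean sales, which is why chargeback fees loom so large for small merchants.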
https://medium.com/hackernoon/blockchain-for-disrupting-card-payments-dff87840313c
['Sarthak Moghe']
2018-06-25 07:21:01.272000+00:00
['Tillbilly', 'Blockchain', 'Payments', 'Cryptocurrency', 'Retail Technology']
1,740
Happy Hanukkah, friends!
Cartoonist. Former Dave Letterman joke writer. Yinzer. Dad of two girls (one a T1Der). Beer drinker. Pizza lover. He/him.
https://medium.com/@chumworth/happy-hanukkah-friends-a8b20a30d8e4
['Phil Johnson']
2020-12-14 15:10:04.066000+00:00
['Kids', 'Technology', 'Family', 'Comics', 'Cartoon']
1,741
Make Money With Python — The Sports Arbitrage Project
Time to Code

We'll be scraping live data from 3 betting sites, starting with live match data from the betting site Tipico. For those who followed my previous web scraping tutorial, this will be a similar approach, but with the advanced tools necessary to get live odds data. Of the 3 betting sites, I considered this the hardest one to get data from, so once you're able to get live odds from this one, you'll easily understand the code for the other 2 sites and be able to create your own betting-site scraper. Let's start coding! The full code is available at the end of section 4.

Update March 21st, 2021: I uploaded CSV output examples of the Tipico, Betfair, and Bwin scrapers to my Github, so you have an idea of what the output of section 1 (Scraping live betting data with Selenium) looks like. However, this data still needs to be preprocessed in sections 2 and 3 before finding surebets. There was also a slight change of names in the dropdowns of the Tipico betting site: initially the name was "Both teams to score?" but now it is "Both Teams to Score." The code below uses the new names.

1. Scraping live betting data with Selenium

Importing libraries

We need to import Options to change chromedriver's default options; Select for selecting within dropdown menus; and By, WebDriverWait, EC, and time to wait for certain conditions to occur.

Changing Chromedriver default options

To keep scraping while doing something else on the computer, we need to use headless mode.
To do so, we write the following code. Breaking down the code:

- options = Options() creates an instance of the Options class
- options.headless = True turns on headless mode
- options.add_argument('window-size=1920x1080') opens the window at a customized size in the background
- webdriver.Chrome(path, options=options) applies the changes we made to the chromedriver
- driver.get opens the browser
- web represents the betting site's URL, while path represents the chromedriver's path on your computer

If you don't want to work in headless mode, write options.headless = False and use driver.maximize_window() instead of options.add_argument('window-size=1920x1080'), as shown in the full code.

Select values from dropdown menus

We need to select the betting markets we want to get data from. We do this with Select and the XPath of the dropdown. To get the XPath, do this: Image by author

But first, keep in mind that every time you open some betting sites, a cookie banner is displayed. We need to get rid of it by clicking the "accept" button. To click the "accept" button and then select from the dropdown menus, we write the following code. Breaking down the code:

- WebDriverWait(driver, 5).until(EC.element_to_be_clickable((By.XPATH, '//*[@id="_evidon-accept-button"]'))) makes the driver wait until the "ok" button of the cookie banner is clickable. If this throws an error, use "option 2" instead. To get the XPath, right-click on the "ok" button and inspect it, as we did before for the dropdown menu.
- WebDriverWait(driver, 5).until(EC.presence_of_element_located((By.XPATH, '...'))) makes the driver wait a few seconds until the dropdown menu is located
- Select('...') selects a dropdown menu
- first_dropdown is one of the three dropdowns that contain betting markets
- first_dropdown.select_by_visible_text() selects an element inside the dropdown menu by the betting market's name. I'm choosing "Both Teams to Score" and "Over/Under" in particular because they are betting markets in which I consider it easy to find surebets.

Looking for "live events" and "sports titles"

Before getting the data, we first need to look for live events. Also, to simplify the analysis, we'll choose only "football" among the sports. To do so, we write the following code. Breaking down the code:

- box represents the box that contains sports events. The site also contains upcoming events, which we don't need for this analysis.
- driver.find_element_by_xpath() helps us find an element on the website through that element's XPath. Unlike the XPaths of most elements we worked with before, the "live events" element is trickier to get, so we need to be very specific: we use the contains option to match an element whose testid contains Program_LIVE within a div tag. Check the picture below to find Program_LIVE. Program_live — Image by author
- sport_title represents the sport name of each section.
- driver.find_elements_by_class_name() helps us find elements on the website through their class name.
To obtain the class name, just follow the same steps we used before for the dropdown menus, but in this case click on the "Football" header and copy the class name 'SportTitle-styles-sport'. Since we use "elements," we obtain a list of sports names that we'll loop through later. Before we go on with the code, we need to understand the scheme that will help us scrape live data.

Scheme for scraping live games

The scheme for scraping live games is similar to the one used for scraping pre-match games. However, live games on the Tipico betting site include "double-row events" and "empty events" that we need to exclude.

- Sports title: represents the sports section. The website has many sports available, but we'll focus only on football, since it's the most popular sport.
- Double-row event: football live events in the first half (0'-45') have 2 rows. The first row contains odds for the full game and the second contains odds for only the first half. We'll only work with the first row.
- Single-row event: football live events in the second half (45'-90') have only 1 row, which contains odds for the rest of the game.
- odds_event: represents the odds available within a row. Each row has 1 "odds_event" and each "odds_event" has 3 boxes with the markets "3-way," "Over/Under" and "Handicap" by default.
- Empty events: live odds might be suspended for a couple of seconds for many reasons. To simplify the analysis, we won't consider rows with empty events.

Find "empty events"

Unlike pre-match games, odds in live games change quickly and can be suspended for a couple of seconds at any time. When betting odds are momentarily suspended, the odds are either locked or empty. Locked odds are easy to scrape, but empty odds are harder.
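The row scheme above can be modeled without a browser. In this toy sketch the match rows and odds are invented stand-in data (not scraped from Tipico): each row carries its three odds boxes, a row counts as "empty" if any box has no odds, and we keep only fully populated full-game rows, mirroring what the Selenium code does with real page elements.

```python
# Browser-free model of the filtering scheme: drop any match row that has
# at least one empty (suspended) odds box. All data here is invented.

rows = [
    {"teams": ("Team A", "Team B"),
     "boxes": [["1.8", "3.4", "4.2"], ["2.5 1.9", "1.9"], ["1.7", "2.0"]]},
    {"teams": ("Team C", "Team D"),
     "boxes": [[], ["2.5 2.1", "1.7"], ["1.6", "2.2"]]},  # suspended 3-way box
]

def has_empty_box(row):
    """True if any of the row's odds boxes is momentarily empty."""
    return any(len(box) == 0 for box in row["boxes"])

# Keep only rows whose odds are all available, as the article's scraper does.
usable = [row for row in rows if not has_empty_box(row)]
```

The real scraper expresses the same idea with Selenium element lists instead of plain Python lists.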
That's why we need to locate those empty odds with the following code. Breaking down the code:

- for sport in sport_title loops through the sports list previously obtained
- parent represents the "parent node" of the sport element, while grandparent is its "grandparent node," which represents the whole "football" container inside live events
- empty_groups represents each football game with suspended or empty odds
- grandparent.find_elements_by_class_name() gives a list of all the empty odds with the class name 'EventOddGroup-styles-empty-group' inside the football live section. You can get the class name by doing this:
- empty_events represents each row, or football game, with at least 1 empty odds market (3-way, over/under, etc.)
- [empty_group.find_element_by_xpath('./..') for empty_group in empty_groups] is a list comprehension that loops through the empty_groups list to obtain the "parent node" of each element, that is, the empty_events
- except handles errors that could be encountered in the try block. We need this for the case when there isn't any game with empty odds

Remove empty_events from single_row_events

We find single-row events and remove all the empty events inside them with the following code. Breaking down the code:

- single_row_events represents the first row of double-row events or the unique row of single-row events. As explained before, both represent odds for the full game, and we focus on them because it's more common to find surebets in odds calculated for the whole game. Find the class name of single-row events by inspecting any single row; you should find the class name 'EventRow-styles-event-row'
- [single_row_event for single_row_event in single_row_events if single_row_event not in empty_events] is a list comprehension that excludes any empty games from the single_row_events
- try/except handles errors that could be raised when there isn't any game with empty odds

Getting live odds

Each dropdown selected before will give odds for 3 markets, which are extracted with the following code. Breaking down the code:

- for match in single_row_events loops through all the matches inside the single_row_events list
- odds_events represents each event with odds available
- match.find_elements_by_class_name('EventOddGroup-styles-odd-groups') helps us find all the "odds_event" within every match. To find the class name 'EventOddGroup-styles-odd-groups', just right-click and inspect the code behind an "odds box," as we did before for the empty odds
- for team in match.find_elements_by_class_name('EventTeams-styles-titles') loops through the elements with the class name 'EventTeams-styles-titles' within the "match" node. Matches have 2 teams (home and away); that's why we loop through them and obtain their names with team.text.
Then they're stored, with the append method, in the teams list.
- for odd_event in odds_events loops over the total number of live matches on the betting site
- for n, box in enumerate(odds_events) loops through all the "odds boxes" inside a match. In the beginning, we set the dropdowns to "3-way," "Over/Under" and "Both Teams to Score," so those are the odds we'll scrape
- rows = box.find_elements_by_xpath('.//*') gives all the children nodes (odds) inside the box element
- n==0 means "only take values from the first box." In this case, the first box is the "3-way" box, and it is stored in the x12 list
- rows[0] tells Python to "only pick the first row in each odds box." With this, we ignore the second row in double-row events.

Goal-line Image by author

The same process is followed for n==1 (Over/Under) and n==2 (BTTS), but for n==1 we also need to store the "goals line." This represents the total number of goals needed to win the bet; for example, Over 2.5 goals means at least 3 goals to win the bet. Let's break down the code for n==1:

- box.find_element_by_xpath('./..') gives the "parent node" of the box element that contains the odds
- goals = parent.find_element_by_class_name('EventOddGroup-styles-fixed-param-text').text finds the goal line by giving its class name.
Just right-click and inspect the goal line (shown in the picture above) to obtain its class name. Then we obtain the data with .text.
- over_under.append(goals+' '+rows[0].text) appends the goals-line data in a standard format. The over/under odds format will look like this: 2.5 2.4 1.5
- driver.quit() closes the Chrome browser

At this point, the scraping part is done! Now we have to make the data readable and do some pre-cleaning with Pandas. Then we'll save the data with Pickle. Breaking down the code:

- dict_gambling is the dictionary that stores all the lists containing the scraped odds
- pd.DataFrame.from_dict(dict_gambling) turns the dictionary into a dataframe, df_tipico, so we can read it and work with it easily
- df_tipico.applymap(lambda x: x.strip() if isinstance(x, str) else x) cleans all the leading and trailing white spaces that the odds might have, using the strip method
- open('...', 'wb') opens a file named df_tipico in "write bytes" (wb) mode. We save this in the variable output
- pickle.dump(df_tipico, output) saves the dataframe in the file named df_tipico. We used the same name for both the file and the dataframe, but they could be named differently
- output.close() closes the file

Great! We finished scraping the first bookmaker.
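The code embeds this article walks through were stripped out, so here is a sketch reconstructed from the prose above. The class names, XPaths, and selenium-3-era calls (find_element_by_xpath, etc.) are the ones quoted in the text; the dropdown XPath is elided ('...') exactly as in the article, and the function boundaries are my own. Selenium is imported inside the scraper function so the pure Pandas/Pickle step below runs even without a browser installed.

```python
# Reconstructed sketch of the Tipico scraper skeleton described in this
# article. Not the author's exact code: function names and structure are
# assumptions; only the quoted selectors come from the text.

import pickle
import pandas as pd


def scrape_tipico(web, path):
    """Open Tipico headlessly and select the betting-market dropdowns.
    Requires chromedriver; the scraping loops described in the article
    would follow the '...' marker."""
    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support import expected_conditions as EC
    from selenium.webdriver.support.ui import Select, WebDriverWait

    options = Options()
    options.headless = True                        # scrape in the background
    options.add_argument('window-size=1920x1080')
    driver = webdriver.Chrome(path, options=options)
    driver.get(web)

    # Dismiss the cookie banner, then pick a market in the first dropdown.
    WebDriverWait(driver, 5).until(EC.element_to_be_clickable(
        (By.XPATH, '//*[@id="_evidon-accept-button"]'))).click()
    first_dropdown = Select(WebDriverWait(driver, 5).until(
        EC.presence_of_element_located((By.XPATH, '...'))))  # XPath elided in the article
    first_dropdown.select_by_visible_text('Over/Under')
    # ... scraping loops over single_row_events as described above ...
    driver.quit()


def save_odds(dict_gambling, filename):
    """Turn the scraped lists into a DataFrame, strip whitespace, pickle it."""
    df_tipico = pd.DataFrame.from_dict(dict_gambling)
    df_tipico = df_tipico.applymap(lambda x: x.strip() if isinstance(x, str) else x)
    with open(filename, 'wb') as output:
        pickle.dump(df_tipico, output)
    return df_tipico
```

A with-block replaces the explicit output.close() from the walkthrough; the effect is the same.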
Bookmakers 2 and 3 are easier to scrape. The full code for bookmakers 1, 2 and 3 is at the end of this article. Now let's work with the data we obtained to find surebets automatically!
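The surebet test the later sections build toward rests on a standard arbitrage condition: for a set of mutually exclusive outcomes, if the best available decimal odds satisfy sum(1/odds) < 1, splitting the bankroll in proportion to 1/odds guarantees the same payout whichever outcome occurs. A minimal sketch (the helper names are mine, not from the article):

```python
# Standard two-or-more-outcome arbitrage ("surebet") check on decimal odds.
# Helper names are illustrative, not from the article's full code.

def is_surebet(odds):
    """odds: best available decimal odds, one per mutually exclusive outcome."""
    return sum(1 / o for o in odds) < 1

def stakes(odds, bankroll):
    """Split the bankroll so every outcome pays out the same amount."""
    total = sum(1 / o for o in odds)
    return [bankroll * (1 / o) / total for o in odds]

# Example: "Both Teams to Score" priced Yes=2.10 at one bookmaker and
# No=2.05 at another gives 1/2.10 + 1/2.05 ≈ 0.964 < 1: a ~3.7% surebet.
```

The preprocessing in sections 2 and 3 exists precisely to line up the same match and market across bookmakers so this comparison can run on the best odds for each outcome.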
https://medium.datadriveninvestor.com/make-money-with-python-the-sports-arbitrage-project-3b09d81a0098
['Frank Andrade']
2021-03-22 01:59:14.387000+00:00
['Technology', 'Python', 'Web Scraping', 'Programming', 'Data Science']
1,742
It’s about putting your people first for peak season 2021
It's about putting your people first for peak season 2021

Alex MacPherson, Director of Solution Consultancy and Account Management at Manhattan Associates, discusses peak season and why managing a work-life tech balance is going to be critically important during this peak season too.

Whether it's freight and potential wage increases, the shortage of HGV drivers or simply the challenges of recruiting and upskilling enough people to staff warehouses and stores, supply chains have never been more topical or talked about than they are right now. With retailers gearing up for their busiest period of trading and already battling a number of external headwinds (more than simply the effects of the pandemic), this year's peak season feels like it will be more stressed, for more brands, than ever before. Beyond the material challenges being spoken about, however, there is also a welcome shift underway in softer messaging about the increasing value of managing employee well-being and engagement through periods of extended (and often excessive) stress like peak season. This is the reality of the situation many retailers find themselves in: at a time when many have only just managed to weather the storm presented by the pandemic (successfully pivoting to new processes, methods of customer communication and fulfilment strategies to cope with shifting consumer demands), warehouses now face the reality that many of their workforces have been working at consistently high levels of throughput, meaning that they need to adapt and be flexible to meet consumer demands.
If end-of-year sales targets are to be met, then it is not just supply chain processes that need to be effective; the people who make those processes work need to be considered far more too. Whether it's warehouse employees or front-of-house store associates, much of the pressure of peak season 2021 (and ultimately the annual success of retailers) rests on the shoulders of these individuals. Without wanting to continually look back, there are two key learnings we should take from the pandemic: first, scalable and agile technology is the key to navigating fast-paced, changeable industry landscapes; and secondly, for all the smart technology you might have in place within your supply chain network, it is, ultimately, people that power businesses and commerce. Employee engagement in warehouse settings has received well-deserved media attention in recent months. The challenges faced during the pandemic have shown the importance of empathy and of understanding the worries and anxieties of individual workers. They also highlighted the risks warehouse workers continue to take every day in order to make sure we, the consumers, still get what we want delivered to our door when we want it. The surge in e-commerce over the last 18 months (and indeed its continued popularity) quickly highlighted the importance of productivity and of an engaged, committed and healthy workforce. Having a loyal and engaged workforce is key to business success but, crucially, it's key to employee well-being too. Achieving a happier, more engaged workforce requires more than simply reviewing performance data and rewarding high performers. Increased expectations for fulfilment speed and volume are driving organizations to better understand and engage their workforce in order to differentiate and excel.
Manhattan Active WM uses gamification theory and behavioral science to revolutionize warehouse labour management, with a focus on providing a more individual and rewarding work experience. The result: a warehouse environment that helps promote employee productivity, satisfaction and well-being, and reduces turnover too. Managing a work-life tech balance is going to be critically important during this peak season too. Advances in consumer technology are bleeding into the business world, meaning enterprises are under increasing pressure to keep pace with the consumer market when it comes to user experience and innovation. Using outdated, archaic technology is a sure-fire way to exasperate your workforce and cause them unnecessary stress. Warehouse employees and front-of-house store associates expect to be able to use the technology they are accustomed to in their personal lives in their working space. This means having instant access to accurate information (such as in-store and online inventory, customer records or even social media purchasing and browsing trends), using technology that can connect them to other team members, and having user-friendly operating systems that are intuitive and easy to get to grips with. To switch from the back end (warehouses) to the front of house (stores) for a moment: ensuring a store point-of-sale (POS) solution is capable of handling orders and sales, managing inventory and providing customer-facing functionality (including loyalty, promotions or clienteling) will help retail associates, both at the sales desks and in flight on the shop floor, stay ahead of trends, regardless of what might happen in future to shape them. It will also arm in-store associates with the information they need to provide a seamless customer experience for shoppers.
And, as anyone who has ever worked in a high-street store over Christmas will know, being able to have accurate, real-time information on hand to deal with customer requests makes life 100% less stressful! The pandemic has provided individuals and organizations with an opportunity to rethink priorities, and in many cases it has also afforded them the chance to reset what is truly important to them: whether it's a greater emphasis on sustainability or a newfound appreciation of the true value and importance of people to organizations, there have been silver linings to the clouds of the last 18 months. It is clear that options such as pick-and-ship from store, Click and Collect and micro-fulfilment are no longer nice-to-have options; they are critical to a brand's ability to meet peak (and regular) seasonal challenges, and people are key to the success of these capabilities. As retailers continue to focus on remodeling the store, decentralizing fulfilment and managing direct-to-consumer requests, having an omnichannel proposition capable of managing an effective decentralized fulfilment network is now more critical than ever. Let us hope that, with the use of smarter supply chain and retail technology (tempered in the heat of the pandemic), peak season 2021 will see more emphasis placed on enabling, empowering and looking after the well-being of those people in warehouses and stores around the globe who continue to keep life and commerce running for us all. For more news from Top Business Tech, don't forget to subscribe to our daily bulletin! Follow us on LinkedIn and Twitter.
https://medium.com/tbtech-news/its-about-putting-your-people-first-for-peak-season-2021-ef2e4691746d
['Top Business Tech']
2021-12-20 11:34:30.144000+00:00
['It', 'Technology', 'Business', 'Leadership', 'Peak Season']
1,743
Cleantech’s Comeback
Numerous accounts have documented the collapse of venture investment in the clean-technology sector during the first fifteen years of the 21st century. In the period retrospectively known as Cleantech 1.0, investors piled $25 billion into cleantech startups from 2006–2011, funds that produced little return on capital. [1] The subsequent flight of capital from cleantech increased commercialization challenges for the struggling sector. In the latter part of the 2010s, however, the tide turned once again for cleantech startups. With $4 billion invested in the space since 2017, investors clearly have renewed interest in supporting cleantech companies. So, what have we learned from Cleantech 1.0? What are investment firms doing differently to account for this newfound knowledge? What problems may still exist, and what can be done to solve them? In short, the investment community has moved to account for the deep technical risk, long development timelines, and capital intensity associated with cleantech investing. However, while energy markets, including electricity, fuels, and transportation infrastructure, seem large, the paths to market are arduous, and value capture in those markets is challenging. For Cleantech 2.0 to be a resounding success for venture investors, a series of structural reforms and government interventions are necessary. The climate challenges facing the planet are numerous: emissions are tightly tied to global economic growth and, despite progress in reducing emissions in the electricity sector through the deployment of carbon-free electricity and efficiency gains in end use, achieving the Paris Climate Accord goal of limiting global temperature increase to 2°C will require both an extraordinary build-out of existing renewable resources and the rapid invention and diffusion of new technologies. Michael Kearney is a Senior Associate at The Engine. He holds a Ph.D.
from MIT Sloan School of Management, where his research focused on frictions in the commercialization of science, regulatory barriers to innovation and entrepreneurial strategy. Previously, he led development efforts at a cleantech startup called Ambri. Mike received an M.S. in Technology and Policy from MIT and a B.A. from Williams College There is an emerging body of evidence that in the energy sector, startups are more likely to fund high-risk, high-impact technical projects [2] compared to large incumbents with incentives to show growth on a quarterly-returns basis that is not aligned with the longer timelines associated with innovative projects. As a result, these startup projects are vital. Society must approach the existential challenges of climate change from every angle — every tenth of a degree increase in global temperature that we are able to mitigate has meaningful implications for the future of our planet. That the investment community is stepping up to this challenge is a resoundingly positive step in the right direction. Lessons from Cleantech 1.0 In a thoughtful review of financial returns during Cleantech 1.0, Gaddy, Sivaram, and O’Sullivan (2016) evaluate the returns to cleantech venture capital investments relative to those in other sectors. Of the $25 billion that investors placed in cleantech firms from 2006–2011, they lost more than $12.5 billion (over 50%). Moreover, whereas successful cleantech investments returned 8.6 times the initial investment to VC firms, similarly successful investments in software companies returned 11.6 times the initial investment, and this likely understates the overall difference as cleantech companies in the sample were more likely to fail. As we reflect on the Cleantech 1.0 period, it is important to reflect on areas where the investment community has evolved in response to the challenges that hindered returns. 
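The loss percentage and return multiples above are simple arithmetic, and a minimal back-of-the-envelope check makes the gap concrete. The figures below are the ones quoted from Gaddy, Sivaram, and O’Sullivan; the comparison of a hypothetical winning bet is my own illustration, not part of their study.

```python
# Back-of-the-envelope check of the Cleantech 1.0 figures quoted above
# (all dollar amounts in billions of USD).
invested = 25.0   # total VC investment in cleantech startups, 2006-2011
lost = 12.5       # capital lost, per Gaddy, Sivaram, and O'Sullivan (2016)

loss_share = lost / invested
print(f"Share of invested capital lost: {loss_share:.0%}")  # 50%

# Return multiples on *successful* investments only:
cleantech_multiple = 8.6   # successful cleantech exits
software_multiple = 11.6   # comparable successful software exits

# How much more a winning software bet returned than a winning cleantech bet:
premium = software_multiple / cleantech_multiple - 1
print(f"Software winners out-returned cleantech winners by {premium:.0%}")
```

And because cleantech companies in the sample failed more often than software companies, the gap in fund-level returns was wider still than this per-winner premium suggests.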
Today’s investor community has internalized these lessons, shifted focus, and launched a variety of experiments that offer hope that the returns for Cleantech 2.0 will be different. Technical Risk: Investors did not fully appreciate the technology risks inherent in clean technologies. Complete understanding of the technological advance at the root of a cleantech innovation requires accessing the frontier of a specific scientific field. Rarely do investment teams retain in-house talent able to adequately evaluate these types of technologies. During the Cleantech 1.0 period, the venture capital industry self-assembled around software-driven business innovations, with VC funds recruiting primarily from MBA programs rather than PhD programs. In response to this challenge, two divergent pathways have emerged. On the one hand, many firms have eschewed technology risk altogether and found ways to use parallel innovations in software, connectivity, and analytics to build large, impactful businesses in the energy sector. These investors look to high-profile exits like EnerNOC, Nest, and Opower as inspirational endeavors that both provide VC-quality returns and have significant potential impact in the energy sector. On the other hand, cleantech-specific firms have arisen, staffed with the internal technical and sector-specific experts needed to adequately assess industry-specific risks. (Take a moment to check out the work being done at Clean Energy Ventures and Energy Impact Partners, among others.) Technology Development Timelines: Distinct from pure technical risk, the timeline to maturity for a given technology can adversely affect investor returns. There is a mismatch between a typical 10-year close-ended fund, along with the return requirements thereof, and the plausibly five-to-ten-year time horizons for technology development, scale up, and manufacturing.
New funds and financially engineered structures have arisen to abate some of the more vexing realities related to the long development timelines. Prime Coalition creatively brings philanthropic capital into the capital stack by blending that capital with traditional LP dollars to shift the return profile of the fund as a whole. Breakthrough Energy Ventures and The Engine have extended fund lives to provide game-changing technologies the time to mature. Capital Intensity: In addition to the time it takes to develop technology, cleantech firms require significant capital investment throughout a company’s life cycle. [3] Importantly, though, it isn’t just that it takes a lot of capital to bring the technology to market but also that even in the early days of a company’s life, the necessary technical experimentation is relatively costly. This is key to the lower return multiples described above. Conditional on a comparable exit valuation, a firm that requires more money to get to scale will return less capital. However, the capital stack is diversifying to include later-stage institutional investors that play a critical role in managing the capital intensity of the cleantech sector as firms scale into commercial readiness. Large institutional investors, such as Softbank, Temasek, Fidelity, and others, are now active players, as are large corporates like ENI. More change is necessary These developments are exciting, but they could be largely immaterial if the commercialization path for cleantech companies is not streamlined. There remain significant barriers to the scale-up of clean technologies, barriers that stem not from the inherent technical challenges of innovation, but rather from the market dynamics within which these technologies have to compete. Across industry verticals, energy technologies face an uphill climb to commercialization. Consider, for example, the electricity industry.
In electricity, end users of innovative products, mostly electric utilities, are highly regulated organizations — their return profile on an investment in an innovative technology looks exactly the same as their return profile on a traditional technology, resulting in a system lacking incentives for change. Similarly, in fuels and any sectors currently dependent on fuels for transportation or high-quality process heat, carbon-free alternatives have to compete with traditional fossil fuels at cost in many cases because economies have not appropriately priced carbon emissions. Moreover, the market price for oil and gas can ebb and flow in response to competitive pressures, which is an existential threat to commodity competitors. To generalize across end-use applications, cleantech companies face a few specific hurdles getting to market: accessing capital to scale up production or deploy first-of-a-kind commercial projects, entering highly regulated markets, and working with risk-averse incumbents. These barriers reduce market opportunities for energy companies across commercialization stages, in particular at any potential exit point, which in turn depresses valuations and exit multiples. Specifically, companies face four distinct but related barriers: Funding of early-stage prototypes: Companies have to balance achieving meaningful technical progress at a relevant scale while demonstrating market traction, even though, at this scale, a prototype has little market value. Funding of first-of-a-kind commercial projects: A critical barrier to commercializing clean technologies is the asymmetry between project risk for first-of-a-kind deployments and the risk tolerance of capital providers for project finance. In non-commodity fields, a financier would be able to internalize the increased project-level risks by increasing the interest rate on the capital to be provided for a project.
However, because cleantech firms often operate in a commodity market, increasing the interest rate on provided capital decreases the economic viability of those projects. In cleantech, this challenge extends beyond the “first of a kind” as well because even after the first deployment, it can take months or years to demonstrate the lifespan and reliability of an infrastructure asset. Fractured, convoluted regulatory regimes: Regulatory environments remain particularly stubborn to new technologies. This is acute across energy subsectors but perhaps most acute in electricity. For example, 10 years after the first battery storage projects tied into the grid, wholesale electricity markets are still debating how to value energy storage in capacity markets. Regulations within the electricity sector prohibit the primary end users of new technologies, namely electric utilities, from efficiently working with the new technologies on a research or commercial basis. One game with different rules: The reality is that today, cleantech firms are competing with conventional energy sources in an economy that does not appropriately price greenhouse gas externalities. This limits the market opportunities for cleantech startups, with associated downstream effects on the investment community that reduce incentives for investment across the innovation pipeline. Innovation scholars across fields have articulated the important role of efficient commercial markets for technology as a key element of a functioning innovation system. [4] Critical to the commercialization of a new idea or product is a startup firm’s engagement in the market with customers, regulators, and larger established firms as strategic partners, exit opportunities, or both. The fluidity of these engagements is critical to building a cleantech financial system. The path forward here is not complicated — there is no shortage of good ideas about how to solve these challenges. 
The first and most obvious response is a nationwide price on carbon. However, moving beyond the obvious, focus must reside on pathways to reframe the regulation of energy technology and a national deployment effort. Considering New Regulatory Frameworks Unlike drug development, where there is a federally regulated but clear path to commercialization that delineates appropriate value inflection points across the life of technology development, the energy sector in the U.S. is regulated within each state, across collections of multi-state actors, and at the federal level. It is an opaque framework that encourages incumbents to be risk-averse and limits those incumbents’ ability to experiment with new technology. Consider the regulatory framework for electric power. The challenging role of electric utilities is to deliver power on a sub-second basis across vast distances with high reliability. Downtime is measured in the magnitude of dollars lost in the economy, often on the order of billions of dollars, as we have seen recently in the rolling blackouts in California. As a result, the industry is tightly regulated to preserve reliability and protect consumers — technology innovation and diffusion become casualties of a system that prioritizes reliability. We must move beyond the false choice of reliability or innovation by creating frameworks that enable both. New rules are needed that empower electricity providers to experiment with new technologies. The federal government could assist with this through the creation of a technology certification office that approves specific technologies for experimentation in risk-averse settings at initially modest investment levels that increase with the technology’s maturity. Moreover, a staged process provides investors with tangible value-inflection points as a company approaches commercialization, value-inflection points that draw more follow-on capital into a company. 
A National Deployment Effort Barriers to cleantech commercialization exist across stages of deployment for clean technologies, from pilot projects to broad-scale commercialization. A National Deployment Effort that nurtures technologies from pilot projects to massive impact is necessary. Historically, the U.S. government has played an active role in later-stage commercialization efforts of foundational technologies, and that same effort is required for cleantech going forward. Consider, for example, the development of the U.S. semiconductor industry in the 1950s and 1960s. [5] Government procurement efforts were at least as fundamental to the growth of the sector as government R&D efforts. In fact, from 1955 to 1977, government procurement accounted for an average of 38% of all semiconductors produced in the U.S. In 1962, the first year that integrated circuits shipped, the government purchased all of them (100%). At the time, the U.S. government was one of the largest consumers of transistors in the world, just as today, the U.S. government is the largest individual consumer of energy in the world. As noted above, a critical barrier to commercializing clean technologies is the asymmetry between project risk for first-of-a-kind deployments and the risk tolerance of capital providers for project finance. The government has played a productive role in bridging this gap in the past through tax credits and the DOE Loan Guarantee Program, among other mechanisms. To capitalize on the recent growth in cleantech innovations, however, a codified national deployment effort is necessary, one that adjusts existing programs and offers new ones that meet the scope of the climate crisis. The Loan Guarantee Program has supported the deployment of energy projects with significant capital expenses. While it continues to support large-scale efforts, it is an imprecise tool for supporting more modular, distributed, early-stage projects.
Potential improvements to the program include expanding the set of technologies that can be supported to a broad umbrella of clean technologies and reducing barriers to entry for smaller, more distributed, and higher-risk endeavors. Recently, the Clean Future Act included language for a National Climate Bank. This would be an effective tool for launching new clean technology products and projects into the market. Importantly, the Bank is structured to leverage private sector capital to cover the majority of project costs, using public capital only to cover the difference between the market value of the project and the ultimate project costs. This is a critical intervention because it is rare for novel clean technologies to be competitive for early projects — often, components/products are not yet being manufactured at scale, and the corresponding projects have to compete in commodity markets. Finally, as the largest consumer of energy in the world, the U.S. government could serve as a test bed for early-stage commercial energy projects. Procurement and testing mandates for government facilities could do for cleantech what government procurement of early computing technologies did for that hardware market. Moving Forward The data is clear: the investment community is currently rising to the challenge of supporting the next generation of clean technologies. It is doing so with new approaches and coalitions that address long-standing asymmetries between cleantech innovation and investment structures. But the efforts to align capital with the realities of technology development and the needs of financial markets at all stages will fall short without commensurate attempts to tackle the daunting challenges on the commercialization side for cleantech startups. Investors, collectively and alongside their portfolio companies, must participate in the national conversation about how best to seed the commercial landscape of our energy future.
As the economy recovers from a depression-like contraction, the success of the cleantech industry is more critical than ever. We must redouble our efforts to ensure that the story of Cleantech 2.0 is one of sustained growth, new industries, and new opportunities for the American people.
https://medium.com/@the-engine/cleantechs-comeback-38ec13507452
['The Engine']
2020-11-09 19:47:39.104000+00:00
['Climate Change', 'Climate', 'Investing', 'Technology', 'Venture Capital']
1,744
XinFin Partners With AiX, A Leading AI Trading Platform Backed By Major Regulated Financial Institutions
XinFin will provide the live beta environment for pilot testing the creation, trading, and integration of tokenized bonds. XinFin Network, an open source, enterprise-friendly hybrid blockchain platform, has partnered with London-based AiX, a leading AI trading platform backed by a FINRA-registered Wall Street brokerage firm. AiX has a team of experienced inter-dealer brokers holding FINRA Series 7, 63, 55, and 24 designations in the US, in addition to FCA registrations in the UK. As a part of this partnership, XinFin will extend its platform and technology to AiX for the creation and trading of tokenized digital bonds. The AiX platform will leverage a XinFin TestNet simulation for demonstration and integration of tokenized bonds. XinFin will provide continuous support to the AiX team for the simulation and integration of the project through to commercial production by deploying XinFin Public Network masternodes on the TestNet that recently went live. URL: http://xinfin.network/#stats The XinFin Public Network is an EVM-compliant public network with KYC-enforced masternodes and a randomized Delegated Proof of Stake consensus that achieves decentralization with high throughput. The hybrid architecture ensures the privacy of sensitive data: transaction information is handled on the private network, while a limited set of data and transactions can be relayed on the XinFin Public Network. Digital Bond Creation A PoC is available at https://www.tradefinex.org/publicv/bond_create The global infrastructure gap is projected to be $15 trillion by 2040, and XinFin’s focus is to bridge this gap by providing liquidity access to global institutional investors through investments in critical physical infrastructure projects, including power, water, and transportation.
URL: https://www.tradefinex.org/ Sharing his excitement about formalizing the AiX and XinFin collaboration, Joseph Appalsamy, Head of Business Development, said, “We look forward to this exciting collaboration with AiX as we team up to help bridge the global infrastructure gap by delivering innovatively regulated infra asset and instrument tokenization trading and market making to unlock liquidity.” AiX will help XinFin in new market making and secondary trading of digital infrastructure bonds through tokenization of IAC (Infrastructure as an Asset Class) and matching of price points, risk, and return profiles. AiX will extend its latest technological tools and platform to XinFin for new and secondary market generation and for integration with the XinFin TestNet. AiX will connect syndicate leads led by market makers, dealers, brokers, banks, and institutional investors over its AI trading platform. XinFin will use the AiX platform for OTC trading and will receive continuous support from AiX’s technology and hybrid institutional broking team. Jos Evans, CEO of AiX, said, “AiX is thrilled to be a part of this brave new world in finance. We are firm believers in the future of the security token ecosystem and will work tirelessly with the XinFin team to ensure the best execution for these new products by using our unique AI broker service to source liquidity globally.” About AiX: The AiX platform uses AI to connect traders across financial markets. AiX provides instant, trader-to-trader interaction, removing the need for inter-dealer brokerage through groundbreaking AI. Underpinned by award-winning cognitive reasoning technology and the power of blockchain technology, AiX connects traders across markets and provides unprecedented insight and control. AiX makes trading simpler, smoother, and more secure, boosting profits in the process. AiX’s team of experienced inter-dealer brokers holds FINRA Series 7, 63, 55, and 24 designations in the US and is FCA registered in the UK.
Ombrello Solutions is a trading name of Arian Financial LLP [FCA registration number 415230] — an established London-based inter-dealer broker which has been providing liquidity across all asset classes in the institutional market for 18 years. Website: https://aixtrade.com/ About XinFin Network: XinFin is an open source hybrid blockchain protocol initiated out of Singapore. XinFin Network [XDCE] is a utility network that lets enterprises deploy real-world applications on the hybrid blockchain protocol in a conducive, compliant, and regulation-friendly environment for diverse use cases in trade, finance, remittance, supply chain, healthcare, and other industrial areas to improve business efficiency. The XDC Dev Environment encourages developers to build DApps using smart contracts. The XinFin community derives its success from an ecosystem comprising developers, network utility, and long-term backers. Website: https://www.xinfin.org/ Slack: https://launchpass.com/xinfin-public Telegram: https://t.me/joinchat/IDjEOEUaNJNpbeM-c1YtZw Twitter: https://twitter.com/XinFin_Official Linkedin: https://linkedin.com/company/xinfin/ Subscribe to the XinFin Community Newsletter by clicking here.
https://medium.com/xinfin/xinfin-partners-with-aix-a-leading-ai-trading-platform-backed-by-major-regulated-financial-c670ef1eba87
['Xinfin Xdc Hybrid Blockchain Network']
2019-04-05 16:45:16.934000+00:00
['Finance', 'Technology', 'Blockchain Technology', 'Blockchain', 'Trading']
1,745
Disney plans to match Netflix in its streaming budget
According to The Economist, the Disney+ streaming service has been one of the great success stories of the year of lockdown. Launched a year ago with a target of reaching 60m-90m subscribers by 2024, it had hit that mark by this summer. Disney said that in the next few years it would release around ten more “Star Wars” series, ten based on the Marvel comic books, 15 other new original series, and 15 feature films. By 2024 its content spending will be $14bn-16bn across all its streaming channels (which include Hulu & ESPN+). To help pay for all this, the company plans to raise the subscription price of Disney+ by a dollar a month. But that dollar will be multiplied by what it now expects to be 230m-260m subscribers by 2024 — more than treble its previous target. Across all its streaming channels, Disney expects to have more than 300m paying subscribers by 2024. This could be enough to make streaming the company’s single largest business by revenues. The spending spree will also worry cinema owners. WarnerMedia, a subsidiary of the telecoms group AT&T, shocked them last week with its announcement that in 2021 all its feature films will be released on its HBO Max streaming service. The first movie to get this treatment, on Christmas Day, will be “Wonder Woman 1984”.
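The subscriber arithmetic above can be sketched in a few lines. The dollar-a-month increase and the 230m-260m subscriber range come straight from the article; the annualized revenue calculation is my own illustration, not Disney's guidance.

```python
# Annualized revenue from the $1/month Disney+ price increase, using the
# 2024 subscriber targets quoted above (230m-260m).
price_increase = 1.0                 # extra dollars per subscriber per month
subs_low, subs_high = 230e6, 260e6   # expected Disney+ subscribers by 2024

extra_low = price_increase * 12 * subs_low    # dollars per year
extra_high = price_increase * 12 * subs_high

print(f"Extra annual revenue: ${extra_low/1e9:.2f}bn to ${extra_high/1e9:.2f}bn")
```

Set against the planned $14bn-16bn annual content budget, the price increase alone covers only around a fifth of the spend, which suggests subscriber growth, not pricing, is what carries the plan.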
https://medium.com/@techrule/disney-plans-to-match-netflix-in-its-streaming-budget-b4b60790422f
['Tech Rule']
2020-12-26 05:52:48.984000+00:00
['Disney Plus', 'Marketing', 'Netflix', 'Technology', 'Streaming']
1,746
Why should you care about your MVB just as much as about your MVP?
Joanna Chilicka · Mar 19

While navigating the complex reality of a tech startup, you may tend to lose focus on who your solution is actually for — humans, who make decisions and judgments based on their emotions. It is therefore crucial for any startup to define the key brand attributes through which target users will perceive your product and your company in general. You may want to do that before they form their own opinion about you. That can be challenging, though, as the tech startup scene is very active and requires new products to pursue new ideas with agility and speed, either to be first or to seize the opportunity to be better than the others as quickly as possible. This is how, a while ago, the concept of the Minimum Viable Product (MVP) became popular. Little do those startups know, the same can be done with branding by making a Minimum Viable Brand (MVB). An MVB helps startups ensure that product hypotheses are based on market insights and strategic thinking, and that branding fits into the speedy development of the product.

WHY DO MVB?

- A quick and agile process that fits into, and does not stop, your product development process.
- Better connection with your audiences and their understanding of your vision, as well as easier comprehension of the product's functionalities.
- Consistent marketing materials and, as a result, a more put-together and professional look — both users and investors will look at your product differently when it looks good and communicates to the point!
- Instead of joining the gimmicks and price battles for clients' attention, you will gain human attributes for your brand, which develops emotions and brings you users' loyalty in the long run.
- Standing out from the crowd, or at least having enough clarity in messaging that users can appreciate the ease of communication. This is especially important for products that have hundreds of competitors.
- Great ROI, because with your brand already distinct you can spend less on raising awareness than your competitors who have not figured out their branding yet.
- A second chance from the users — when you are authentic, branding can help convince users to forgive small mistakes or bugs while you keep improving your product. Best yet, even if you have to pivot, your audience will know you and the quality you stand for.

The thing is that, especially in the case of seed-stage startups, you don't necessarily need the full branding package. Such an effort requires funds that, at that point, many of you don't have yet. Focusing on what takes the least effort and money while still conveying the brand's value to users properly is enough.
When creating an MVB we need to think of the core elements of a brand. Those basic elements should be commonly understood within an organization before launching a new product, so that the brand’s and product’s values can be clearly articulated to the masses and a cohesive narrative created throughout all communication actions. A startup with a strong brand will have a long-term advantage, through organic growth, over the competitors who do not. In most cases, startups focus on their products first, then find scalable markets for them. But the trick is in finding the right amount of brand work for the stage the company is at. An MVB creates the right balance between agility & speed and structure & flexibility. WHAT WE STAND FOR — Ask yourself why you are doing this. This is your brand’s essence — the condensed heart and soul of the brand, its fundamental nature or a certain quality. It’s usually described in 2 or 3 words and is presented across the product. In the current times, it’s crucial to be transparent about what the company/product stands for, as consumers have become very much aware of anything that even has a hint of insincerity. The essence is the constant reminder — for external and internal communication — of why you are doing this in the first place. WHAT WE BELIEVE IN — Identify the product’s values Usually, those are the values the founders started the company with — their vision of how things should work and serve others. Those are very important to identify so that internal teams can relate to them easily. When they do so, they naturally become the company’s advocates and care more about its success in general. After all, you do want your team to love your vision and their job, don’t you? :) WHAT PEOPLE WANT — Ensure that you speak to the users’ needs Realizing what people want and what they will be willing to engage with is crucial if your company is to serve them the right solutions — ones they will be willing to buy.
This step is all about understanding your target audience and imagining what their representation (persona) looks like, feels like, struggles with, and how they find solutions. Having all that information can make you realize things you have not yet discovered, or make you want to adjust the product in a way that serves people better and therefore makes the product more successful. WHAT WE OFFER — Take a comprehensive look at what your product offers This may look like an obvious one, but when you take a deeper dive into user understanding you may come to certain realizations — for example, that some features serve one target well and not another. Also, taking a good look at the experience people have when interacting with your product is very rewarding. It may help you understand the user experience your product currently offers and whether certain adjustments are needed to enhance it. WHAT DISTINGUISHES US — When people think about your product, what pops up? Those are your key differentiators. What makes you stand out among the competitors? How is your product better? How does your product serve your target audience better? This step can also help companies that are not much different from their competitors: with the power of branding, you can speak to people’s needs and emotions so well that you can still win the majority of the market. WHAT WE SAY/SHOW — Turn all that exploration into a visual representation This last step is probably what everybody is most acquainted with when they think about branding. That is because it’s about all the visual output created from the information gathered in the previous steps: a logo and visual identity, as well as the way you communicate with your audiences (your voice — spoken and written). *** The brand you create should become your compass.
Everyone internally should have a sense of what is “on brand” and what is not, to be able to execute their part correctly — the marketing team and founders especially need to make sure that communication, partnerships, and decisions are consistent with the brand. Even after all this effort is done, it is still hard to quantify the value of the brand, and that’s something we need to be aware of. You can certainly measure brand awareness by spending money on surveys, but a better way is simply to observe the market. You will start to see positive product reviews and comments from people who love your brand. As your product grows and develops, your brand should too, so that users are provided with great technology that communicates in a truly humanistic and approachable way. A tech product’s layers: core values, technology, and design on top. At Onteractive, we believe that the success of a tech startup does not come from technology alone. We collaborate with our clients to clarify the founder’s core vision and translate it into designs that communicate well with their audiences. Please feel free to contact us if you would like to discuss possible cooperation! Instagram / Facebook / hello@onteractive.eu Author: Joanna C.
https://medium.com/onteractive/why-should-you-care-about-your-mvb-just-as-much-as-about-your-mvp-4ea84d85eb4d
['Joanna Chilicka']
2021-03-19 01:32:50.858000+00:00
['Technology', 'MVP', 'Startup', 'Branding', 'Innovation']
1,747
How Apple Can Make Money Through a Search Engine
OPINION How Apple can make up for losing $12 billion Congress wants to break up Google, claiming it has illegally crushed the competition in the search engine market. Google dominates the market with an 86.86% market share as of July 2020. The argument is that Google pays billions of dollars to other companies to become the default search engine for their consumers. In 2019, Google paid $30bn for “traffic acquisition costs”, almost a third of its entire search revenue. This was up from $26.7bn the previous year, and up from just $6.2bn a decade earlier. Google reportedly paid $12 billion to Apple to make Google the default search engine on Safari for iPhones. This means that if Google is regulated, Apple would lose around $12 billion each year, which amounts to 1/5 of all its services revenue. Apple depends on Google, and their ties could be cut at any time. As an alternative, Apple is rumored to be working on its own search engine — Apple Search. So, how does Apple plan to monetize it and make up for the $12 billion?
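Working backward from the fractions the article cites gives a sense of the scale involved. The derived totals below are my own back-calculations from the quoted figures, not reported numbers.

```python
# Implied totals from the revenue fractions quoted above.
tac = 30e9            # Google's 2019 traffic acquisition costs
tac_share = 1 / 3     # "almost a third" of Google's search revenue
search_revenue = tac / tac_share          # implies roughly $90bn

apple_payment = 12e9  # reported annual Google-to-Apple payment
services_share = 1 / 5                    # "1/5 of all its services revenue"
services_revenue = apple_payment / services_share  # implies roughly $60bn

print(f"Implied Google search revenue: ${search_revenue/1e9:.0f}bn")
print(f"Implied Apple services revenue: ${services_revenue/1e9:.0f}bn")
```

In other words, the single Google payment is a material slice of Apple's entire services business, which is why losing it to regulation would sting.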
https://medium.com/datadriveninvestor/how-apple-can-make-money-through-a-search-engine-12e018a58154
['Shubh Patni']
2020-11-14 14:06:37.266000+00:00
['Technology', 'Google', 'Apple', 'Innovation', 'Business']
1,748
As We Stop the Spread, What Are We Doing for Children?
As We Stop the Spread, What Are We Doing for Children? Tech Empowerment vs. Tech Dependency in the Age of COVID-19 Photo from NME While most of us anxiously await the chance to put 2020 behind us, there is one unhurried soul merrily reveling in the bounty of this year: Baby Shark. Indeed, just this month, the beloved underwater character and his entire Pinkfong clan claimed the title of “Most Watched YouTube Video of All Time,” soaring well above 7.04 billion views. While this feat may come as a surprise to many, a quick glance at the data reveals that it isn’t astounding at all, especially considering the video’s appeal to one of the largest demographics in YouTube’s consumer base: children. In fact, in 2019, YouTube was hailed as “the most popular babysitter in the world,” and a Pew study found that videos featuring children were viewed three times more than videos in any other category. One can only imagine how much those statistics have skyrocketed since the onset of the global pandemic, with a 60% increase reported in technology use at large. Quite frankly, it would be absurd to expect otherwise, as COVID-19 has shaken our world to its core and changed the course of life as we once knew it, without a discernible end in sight. Our new normal consists of work-from-home culture, virtual learning, masks, and face shields, and at the crux of it all lies our indispensable friend, technology. Once an arbitrary tool, it is now a permanent fixture that we simply cannot do without; but as with any tool, mishandling may lead to risks, like obsession and misuse, especially for adolescents. Photo from iStock Photo Fortunately, there are ways to mitigate these risks and teach children to actively master technology, rather than passively letting it control them. And with the possibility of another shutdown on the horizon, there has never been a more opportune time for them to learn.
Teach them to Create, Not Just Consume The most notable innovators in the tech industry have spoken publicly about limiting screen time within their homes: Apple’s late and great Steve Jobs, Google’s Sundar Pichai, and Snapchat’s Evan Spiegel, among others. They all expressed a need for balance in children’s lives to embolden imagination and independent thinking. While we would be wise to follow their example, considering their expertise in the industry, COVID-19 has completely thrown any semblance of “balance” out the window. Even pediatric experts have urged parents not to obsess over the amount of time children spend using technology. What matters most is how they use that technology. A new era demands new rules, and by not only ensuring that kids ingest quality content, but also inspiring them to build content of their own, we can cultivate a generation of leaders who are ready to rule tomorrow.
https://medium.com/swlh/as-we-stop-the-spread-what-are-we-doing-for-children-5c49bb7dcee6
['Terranie Clarke']
2020-11-20 00:23:47.153000+00:00
['Technology', 'Parenting', 'Covid 19', 'Edtech', 'Education']
1,749
How Palantir renews the promise of big data
Data analytics has been around since the late 1960s, when computing and processing costs dropped. In the last decade, we saw another boom in data analytics through the rise of mobile technology, social media, and big data. The new era promised greater analytical powers, with algorithms far surpassing the human ability to process information; it expanded the need for intelligent assistance delivered through a capable system like Iron Man’s JARVIS. Big data revolutionized decision-making: businesses and organizations desire ever greater “visibility” and chase after “data-driven” insights to improve strategy, operations, and supply chains. Born in the Silicon Valley that started it all, Palantir is a uniquely positioned company catering to the renewed appetite for data analytics. Palantir started out as a service company, but its business model today is built around a futuristic technology. Despite recent growth in data analytics, few organizations have unlocked the responsive potential of big data; in fact, many are still on journeys of technological transformation that are fraught with friction. Today’s big data calls for a full suite of data systems, query, dashboarding, and optimization tools, while companies founded in the last century (e.g. IBM, Oracle, SAP) remain the dominant data solutions providers. Typical data systems and solutions. Although a number of smaller point solutions have grown to fill the gaps left by these ERP-focused players, they remain disparate systems and interrupt the native flow between end-to-end processes. Any one system relies on heavy integrations to communicate with another; when integrations are not possible, users fall back on manually importing or exporting data, reverting to the old days of operating offline. Palantir set out to solve this problem. For the last decade, technology consulting has been helping disrupted organizations transition to the new era.
Consultants help clients translate extensive knowledge and sophisticated operations into “requirements”, a list of capabilities essential for operations, which technology providers then configure in a standard solution. But not every requirement can be met, because organizations often have a vast amount of human knowledge and expertise that took decades to develop into a competitive advantage. If they want to onboard all operations onto a new system, the alternative is to customize the solution at much higher cost. However, there is a catch: the customized solution is not future-proof. It does not get upgraded, increasing the risk of obsolescence given the speed at which technology advances today. In addition to trapping the organization in a sunk-cost fallacy, customized solutions tend to reinforce existing silos and rigid processes rather than changing them. For these reasons, technology consultants and their clients tend to stick with a package of standard solutions, while using manual workarounds to accommodate unique needs that have not been addressed. More ambitious companies are aware of the limitations of off-the-shelf solutions and decide that the path to cost-efficient and integrated analytics is to do it themselves. They hire talent to build an analytics center of excellence (CoE) in-house. Expectedly, demand for data engineers and analysts has soared in recent years, and it is projected to grow by over 30% in the near future. Nonetheless, there are significant investment risks to in-housing. First, technology development is not the forte of commercial or public organizations; no matter how well staffed, non-tech companies have limited resources to match the capability of leading technology vendors. Second, the reality of internal software development is often subpar products, delayed timelines, and over-budget costs.
Typically, it takes 5 to 10 years for a company with legacy technology to build a full-fledged CoE; by the time it is fully functional and integrated into operations, the market may already be on the next generation of products and capabilities. The onset of COVID-19 signals to organizations and enterprises that the need to transform and pivot is ever more imminent. Yet the challenge remains: rapid changes in the world today often outpace the ability to adapt. This is why Palantir is a game-changer in the data solutions market. It effectively delivers an analytics CoE on a substantially shorter timeline (2 years or less) by lending a powerhouse of developers and software engineers to its clients. It propels the client to the latest capabilities, rather than leaving them to play catch-up every few years. Of course, the challenge with this type of deployment approach is the steep learning curve, which has been reported as the system being “not user friendly” and requiring “technical backgrounds” to use. Palantir has tried to mitigate these risks by lowering the bar for using the solution (e.g. with no-code workflows) and providing training services as part of the solution stack. What Palantir also changed is the bad reputation around customization. Thanks to its subscription model, Palantir is able to achieve “customization without legacy”. It co-develops a wealth (hundreds) of custom applications with the client, and sustains the delivery of updates to them via Apollo. By design, frequent updates draw clients to the product as its enhanced powers become addictive: clients “found that the more data [Palantir] touches, the more powerful and relevant and useful it becomes.” Combined, customization and the subscription model greatly increase the cost of switching, giving Palantir a massive incumbent advantage. Palantir’s final advantage is its Silicon Valley strategy: scaling.
On Investor Day, Palantir announced that its vision is to scale its solution across industries. Scaling customized solutions used to be a challenge because highly specialized industries (e.g. defense, law enforcement, advanced manufacturing, pharma) have complex operations and unique needs that they share much less with the larger base of consumer businesses. Traditionally, standard solutions scaled by being the lowest common denominator across industries, but Palantir has chosen a different path. It started in the most sophisticated environments and managed a large number of scenarios; as a result, its product is more robust and flexible in supporting a diversity of needs. It would hold a significant advantage over point-solution players in various industries today, because it brings to the table not just an ETL tool, a machine learning algorithm, or an optimizer: it seeks to replace all of those tools while offering comparable, if not better, analytical power. Iron Man gets intelligence at his fingertips, via JARVIS. As long as the promise of big data analytics is alive, Palantir will do well, because the market today still wants a smartphone for enterprises: a vertically integrated, one-stop-shop system that underpins end-to-end operations. Not to mention, the future will see the need to automate analysis with technology and programming, as much of data analysis today remains costly and repetitive. While Palantir’s corporate product, Foundry, may join the competition against current leading data solutions (such as Alteryx and Snowflake), its priority and niche will probably remain with the most complex organizations for a while. In spite of the risk of customer concentration, the best environment for Palantir to grow is where players are few and challenges are many.
The development and implementation of Palantir’s products will continue to burn capital — as is the case with any highly customized solutions — but when it succeeds in making JARVIS, it would capture the most valuable and untouchable customers for years to come.
https://medium.com/humble-shop/how-palantir-renews-the-promise-of-big-data-eaba4809570b
['Miaoling Liang']
2020-12-13 00:43:23.621000+00:00
['Technology', 'Data Analytics', 'Palantir', 'Scaleup']
1,750
Satellite-Based narrow band-IoT now a reality in India
Skylo developed an IoT solution under the Made in India scheme. It will connect with BSNL’s satellite ground infrastructure and provide coverage over all of India, including Indian seas. BSNL, in partnership with Skylo Tech India, today announced an achievement in satellite-based NB-IoT (Narrow Band Internet of Things), in pursuance of the Hon’ble Prime Minister Shri Narendra Modi’s vision of a truly Digital India. With this solution, India will now have access to an omnipresent fabric of connectivity for a huge number of as-yet unconnected machines, sensors, and industrial IoT devices. This new solution, developed by Skylo, will connect with BSNL’s satellite ground infrastructure and provide pan-India coverage, including the Indian seas. Shri P. K. Purwar, CMD, BSNL, said: “The solution is in line with BSNL’s vision to leverage technology to provide affordable and innovative telecom services and products across customer segments.” He further added: “Skylo would also help provide critical data for the logistics sector to enable effective distribution of the COVID-19 vaccine in 2021 and will be a big contributor in service to the nation.” Shri Vivek Banzal, Director (CFA), BSNL Board, said: “Successful POCs have already been conducted by BSNL and Skylo in India and we will soon approach various user groups before the New Year 2021 begins.” This announcement is timely, coming during the ongoing India Mobile Congress 2020. The new technology supports the Department of Telecom and NITI Aayog’s plan of bringing indigenous IoT connectivity to India’s core sectors. Examples where this technology has already been tested successfully include Indian Railways, fishing vessels, and connected vehicles across India.
https://medium.com/@rnssoft/skylo-developed-a-iot-solution-under-made-in-india-scheme-79e21a28fe95
['Akhil Nautiyal']
2020-12-20 18:58:09.714000+00:00
['Satellite', 'IoT', 'Internet of Things', 'Satellite Technology', 'India']
1,751
What Is Blockchain
Many people think of blockchain as the technology that powers Bitcoin. While this was its original purpose, blockchain is capable of so much more. Despite the sound of the word, there is not just one blockchain: “blockchain” is shorthand for a whole suite of distributed ledger technologies that can be programmed to record and track anything of value, from financial transactions to medical records, or even land titles. You might be thinking, we already have processes in place to track data, so what’s so special about blockchain? Let’s break down why blockchain technology stands to revolutionise the way we interact with each other. Reason number one: the way it tracks and stores data. Blockchain stores information in batches called blocks, linked together chronologically to form a continuous line (metaphorically, a chain of blocks). If you change the data recorded in a particular block, you don’t rewrite it. Instead, the change is stored in a new block, showing that x changed to y at a particular date and time. Sound familiar? That’s because blockchain is based on the centuries-old method of the general financial ledger. It’s a non-destructive way to track data changes over time. Here’s one example. Suppose there is a dispute between Ann and her brother Steve over who owns a piece of land that’s been in the family for years. Because blockchain technology uses the ledger method, there is an entry in the ledger showing that Adam first owned the property in 1900. Adam sold the property in 1930, and a new entry was made in the ledger, and so on. Every change of ownership of this property is represented by a new entry in the ledger, right up until Ann bought it from their father in 2007; she is the current owner, and we can see that whole history in the register. Now, here’s where things get interesting. Unlike the age-old ledger method (initially a book, then a database file stored on a single system), blockchain was designed to be decentralised and distributed across an extensive computer network.
This decentralising of information reduces data tampering, and it brings us to the second factor that makes blockchain unique: it creates trust in the data. Before a block can be added to the chain, a few things have to happen. First, a cryptographic puzzle must be solved, thus creating the block. The computer that solves the puzzle shares the solution with all of the other computers on the network. This is called proof of work. The network then verifies this proof of work, and if it is correct, the block is added to the chain. Combining these tricky math puzzles with verification by many computers ensures that we can trust every block on the chain, because the network does the trust-building for us. We now have the opportunity to interact directly with our data in real time. And that brings us to the third reason blockchain technology is such a game-changer: no more intermediaries. When doing business with one another, we don’t show the other person our financial or business records. Instead, we rely on trusted intermediaries, such as banks or lawyers, to view our records and keep that information confidential. These intermediaries build trust between the parties and can verify, for example, that yes, Ann is the rightful owner of this land. This approach limits exposure and risk, but it adds another step to the exchange, which means more time and money spent. If Ann’s land title information were stored in a blockchain, she could cut out the middleman: her lawyer, who would ordinarily confirm her information with Steve. As we now know, every block added to the chain has been verified to be accurate and can’t be tampered with. So Ann can show Steve her land title information secured on the blockchain, and she would save considerable time and money by cutting out the middleman. This type of trusted peer-to-peer interaction with our data can revolutionise how we access, verify, and transact with one another. And because blockchain is a type of technology, not a single network, its uses are flexible.
It can be implemented in many different ways. Some blockchains are completely public and open for everyone to view and access. Others are closed to a select group of authorised users, such as your company, a group of banks, or government agencies. There are also hybrid public-private blockchains, in which those with private access can see all the data while the public can see only selections; in others, everyone can see all the data, but only some people can add new data. A government, for example, could use a hybrid system to record the boundaries of Ann’s property, and the fact that she owns it, while keeping her personal information private; or it could allow everyone to view property records but reserve for itself the exclusive right to update them. It is the combination of all these factors (decentralising the data, building trust in the data, and allowing us to interact directly with one another and the data) that gives blockchain technology the potential to underpin many of the ways we interact with one another. Like the rise of the Internet, this technology will bring complex policy questions around governance, international law, security, and economics.
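The chain-of-blocks and proof-of-work ideas described above can be sketched in a few lines of Python. This is a toy illustration, not a production blockchain: the difficulty level, field names, and land-title entries are invented for this example.

```python
import hashlib
import json
import time

DIFFICULTY = 4  # leading zeros required in a block hash (toy value)

def hash_block(block: dict) -> str:
    # Hash a deterministic serialization of the block's contents.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def mine_block(data: str, prev_hash: str) -> dict:
    # Solve the "cryptographic puzzle": find a nonce so that the block's
    # hash starts with DIFFICULTY zeros. This is the proof of work.
    block = {"timestamp": time.time(), "data": data,
             "prev_hash": prev_hash, "nonce": 0}
    while not hash_block(block).startswith("0" * DIFFICULTY):
        block["nonce"] += 1
    return block

# Build a tiny ledger of land-title changes, like the Ann/Steve example.
chain = [mine_block("Adam owns the property (1900)", "0" * 64)]
chain.append(mine_block("Adam sells the property (1930)", hash_block(chain[0])))
chain.append(mine_block("Ann buys it from her father (2007)", hash_block(chain[1])))

def chain_is_valid(chain: list) -> bool:
    # Any node can re-verify the whole chain: every block must satisfy the
    # proof-of-work condition and reference the previous block's hash.
    for i, block in enumerate(chain):
        if not hash_block(block).startswith("0" * DIFFICULTY):
            return False
        if i > 0 and block["prev_hash"] != hash_block(chain[i - 1]):
            return False
    return True

print(chain_is_valid(chain))  # True

# Tampering with an old entry breaks every later link, which is why
# changes are appended as new blocks instead of rewriting history.
chain[0]["data"] = "Steve owns the property (1900)"
print(chain_is_valid(chain))  # False
```

Rewriting the tampered block honestly would require re-mining it and every block after it, faster than the rest of the network extends the chain, which is exactly what the distributed verification described above makes impractical.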
https://medium.com/technology-hits/what-is-blockchain-4211ed888d7
['Josh', 'Υя_Ωιѕємαη']
2020-12-27 16:34:26.119000+00:00
['Software Development', 'Business', 'Blockchain', 'Technology', 'Tech']
1,752
covid-19 articles. Infected folks may have a extensive…
Infected persons can have a wide range of illness severity, with many patients showing mild or even asymptomatic disease. However, for unknown reasons, up to 10% of asymptomatic and mild infections lead to more severe outcomes, including respiratory distress requiring hospitalization.1 Although risk factors for more severe outcomes have been described (including older age, obesity, hypertension, and underlying chronic medical conditions),2,3 the relationship between viral load and outcomes has not previously been examined in a longitudinal study. Several treatment options have been explored for hospitalized patients with Covid-19 (e.g., antimalarial drugs,4 antiviral agents,5-7 immunomodulators,8-12 glucocorticoids,13,14 and convalescent plasma15,16) with varying results. However, there have been no large randomized, controlled trials of targeted therapies that are specific for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and that are intended to minimize disease progression in patients with early disease. Preclinical studies of neutralizing-antibody therapies for SARS-CoV-2 infection in several animal models have shown promising results, with marked reductions in viral loads in the upper and lower respiratory tracts.17 SARS-CoV-2 gains entry into cells through binding of its spike protein to angiotensin-converting enzyme 2 receptors on target cells.18 LY-CoV555 (also known as LY3819253), a potent antispike neutralizing monoclonal antibody that binds with high affinity to the receptor-binding domain of SARS-CoV-2, was derived from convalescent plasma obtained from a patient with Covid-19. The antibody was developed by Eli Lilly after its discovery by researchers at AbCellera and at the Vaccine Research Center of the National Institute of Allergy and Infectious Diseases.
The discovery of LY-CoV555 and its passive protection against SARS-CoV-2 in nonhuman primates has been reported previously.19 Here, we report interim results from the Blocking Viral Attachment and Cell Entry with SARS-CoV-2 Neutralizing Antibodies (BLAZE-1) trial, an ongoing phase 2 trial to evaluate the efficacy and safety of LY-CoV555 in patients with recently diagnosed mild or moderate Covid-19 in the outpatient setting. We examined the effect of the neutralizing antibody on viral load, symptom scores, and clinical outcomes, and we also report an observed association between a persistently high viral load and disease severity. PATIENTS Characteristics of the Patients at Baseline. From June 17 through August 21, 2020, a total of 467 patients underwent randomization to receive either LY-CoV555 (317 patients) or placebo (150 patients), and the patients in the LY-CoV555 group were assigned to one of three dose subgroups. Of the patients who had undergone randomization, 452 met the criteria for inclusion in the primary analysis (309 in the LY-CoV555 group and 143 in the placebo group). LY-CoV555 was administered to these patients in doses of 700 mg (101 patients), 2800 mg (107 patients), or 7000 mg (101 patients) (Figure 1). The trial groups were well balanced with regard to risk factors at the time of enrollment (Table 1). Nearly 70% of the patients had at least one risk factor for severe Covid-19: an age of 65 years or older, a body-mass index (BMI, the weight in kilograms divided by the square of the height in meters) of 35 or more, or at least one relevant coexisting illness.
After undergoing randomization, patients received an infusion of LY-CoV555 or placebo within a mean of 4 days after the onset of symptoms; at the time of randomization, more than 80% of the patients had only mild symptoms. The observed mean PCR cycle threshold (Ct) value of 23.9 on the day of infusion (equating to about 2.5 million RNA equivalents) matched expectations that a recently diagnosed population would have a high viral burden. The conversion from Ct value to viral load is described in Section 6.10 of the statistical analysis plan. Change from Baseline in Viral Load. Americans are preparing for what will be, for many, one of the strangest and most anxiety-filled Thanksgiving holidays of their lives. They have agonized over travel plans, the size of guest lists, testing, ventilation, and the health and safety of their friends and family, all while watching coronavirus cases and deaths skyrocket around them. The holiday arrives as the surge in the Midwest this fall has grown into a coast-to-coast catastrophe and new infections soar in cities like Baltimore, Los Angeles, Phoenix, and Miami. Across the country, the number of new cases has never been higher, with more than 175,000 a day on average over the last week. Deaths topped 2,200 yesterday, the most since early May. By day 11, the majority of patients had a substantial trend toward viral clearance, including those in the placebo group. The observed mean decrease from baseline in the log viral load for the entire population was −3.81 (baseline mean, 6.36; day 11 mean, 2.56); this value corresponded to a decrease by more than a factor of 4300 in the SARS-CoV-2 burden, for an elimination of more than 99.97% of viral RNA.
For patients who received the 2800-mg dose of LY-CoV555, the difference from placebo in the decrease from baseline was −0.53 (95% confidence interval [CI], −0.98 to −0.08; P=0.02), for a viral load that was lower by a factor of 3.4 (Table 2). However, smaller differences from placebo in the decrease from baseline were observed among the patients who received the 700-mg dose (−0.20; 95% CI, −0.66 to 0.25; P=0.38) or the 7000-mg dose (0.09; 95% CI, −0.37 to 0.55; P=0.70).
https://medium.com/@swahid-sehali1/covid-news-around-the-world-1c143497a077
['Swahid Sehali']
2020-12-22 13:23:37.061000+00:00
['Technology', 'Care', 'Technews', 'All About Health', 'Healthy Lifestyle']
1,753
The Election is Over. The Winner is Mobile Voting.
Mobile Voting The Election is Over. The Winner is Mobile Voting. A company in Boston is taking elections to a whole other level to uphold our democracy. Photo by Maxim Ilyahov on Unsplash It’s time to vote by mobile device. In 2020 the election was (and still is) chaotic and uncertain despite the largest voter turnout in history. So much so that even the loser’s popular vote total (more than likely Trump’s) will break the record with almost 70 million votes. At the time of this writing, results remain outstanding in Georgia, North Carolina, Pennsylvania, Arizona, and Nevada. The anxiety across the country is palpable, as it should be. This election is the most critical we’ve seen in our 244-year history. The outcome will determine the country’s path for decades to come. Issues like the pandemic, the economy, and climate change, not to mention trade, foreign policy, and immigration, will either be pressed further, shaking up global norms, or radically altered by a new party. Needless to say, patience is a bitch. So, it’s time for the efficiency, security, and accessibility mobile voting provides. Democracy by Device This is where a small start-up out of Boston, MA, Voatz, Inc., comes in. Voatz delivers mobile voting solutions on either personal smart devices or web-based computers at the polls. No more long wait lines, manual counts, voter intimidation, or sown confusion about who’s won what where. Built In Boston’s Quinten Dol interviewed Voatz’s CEO, Nimit Sawhney, who said his company has invested heavily so voters can expect counts to be accurate, auditable, and secure, all while maintaining voter anonymity and the country’s democracy. Disenfranchised voters who are disabled, in the military, or living overseas can download an app and choose who they want in Congress, the White House, or any other office. “Only 7 percent of 3 million overseas citizens vote; studies show this rate would increase to 37 percent.” — Nimit Sawhney, CEO of Voatz, Inc.
Think about that and what casting a ballot via smart device or laptop could mean, not only for the military and overseas voters, but also for minorities and the disenfranchised who feel vulnerable standing in line for hours to exercise their most fundamental right as citizens. This includes Native Americans, who face massive obstacles getting to a polling location. Photo by Jessica Radanavong on Unsplash Voatz’s digital platform is the only one to meet four key criteria required for the vote: security, confirming voter identity, accessibility, and audits too (definitely not Twitter or Facebook). While Voatz uses end-to-end encryption for security, a digital security chamber called blockchain technology secures votes on multiple restricted-access, geographically distributed servers, which eliminates potential tampering with ballot counts. Voatz also uses biometric verification, or face-recognition technology, to verify you are who you say you are. That data is then paired to your phone’s security PIN, maintaining voter privacy. To be clear, Voatz claims your information is never stored or shared. As of today, Voatz’s technology has helped overseas voters in over 30 countries in both Democratic and Republican elections. Locations include Denver, CO; Delaware; West Virginia; New Jersey; Utah County, UT; Umatilla County, OR; Jackson County, OR; and King County, WA, to mention a few. Contests were not for president, but local run-offs, special elections, and hyper-local contests. Mobile is “the Moment” We Need to Meet An outpouring of volunteers stepped up at the polls during this election, and the stories are inspiring. A close friend volunteered in Oceanside, CA at Roosevelt Middle School, a super polling center which serves many districts. He helped an officer’s handicapped mother cast her ballot. He assisted a blind teen who was voting for the very first time (the kid thanked the poll workers with cookies).
And after the polls closed, he stayed late into the early morning hours to count the votes. That said, mobile voting will have its own story to tell, the most important of which is how a more complete electorate, including the disabled, the military, native populations, and Americans stationed around the world, brought our nation one step closer to a more perfect Union. Check out these interesting reads by Anthony Fireman: Why You Should Quit Twitter (published in Curious)* How I Motivate My Kids to Exercise (published in Illumination)* Trump Should’ve Been Muzzled. Unfortunately, Wallace is A Pro *This article was curated across Medium
https://anthony-c-fireman.medium.com/why-voatz-inc-will-be-the-simple-solution-for-future-elections-a80ba59d3067
['Anthony Fireman']
2020-11-06 23:13:57.682000+00:00
['Voting', 'Technology', 'Election 2020', 'Politics', 'America']
1,754
Blocktick — The Blockchain Powered Document Verifying Application
With the dawn of blockchain technology, the future was at once redefined like never before. At a time when cybersecurity was a blatant lie, blockchain technology gave way to a novel idea that ensured transparency, trustworthiness, and the detection and elimination of fraudulent practices in online data transactions. Simply put, a blockchain is a series of back-to-back connected data packs distributed widely over different databases around the world. Each of these individual blocks is time-stamped with the date of its creation, and any manipulation of its content is only possible by altering all the sequentially connected blocks. Since each block is owned by different individuals who don’t necessarily know each other, manipulation of each block’s content is almost impossible. This whole idea was proposed back in the 1990s, but the practical implementation came only in 2009 with the invention of Bitcoin by Satoshi Nakamoto. The decentralized state, or the lack of any single authority, is the highlight of blockchain, which in turn makes it highly trustworthy and transparent. Wielding a sceptre like this, its possibilities are so enormous that it could be applied to literally anything. With document forgery becoming a major issue, an efficient means of document verification is a necessity of our time. The problem with existing document verification services is that they consume a lot of time. Sometimes it may take days to check the genuineness of a document, as the service provider has to go through tons of documents. Blocktick, the blockchain-based document verification application, is the newest piece of innovation. Making use of the facilities provided by blockchain technology and run entirely on the EOS blockchain platform, Blocktick is a highly efficient tool for issuing and verifying documents. Why Blocktick? As mentioned before, existing document verification services have a lot of setbacks.
When it comes to Blocktick, every issue in this process is rectified. Verification time is brought down from days to seconds, thanks to its fast data processing: Blocktick can check a document against millions of records within seconds. With Blocktick, one doesn't need to depend on a third party, as the verification process is independent and free of any intermediary involvement. Institutions can issue new certificates that are practically impossible to tamper with: the cryptography used to store data on the blockchain provides a high level of security, since altering a recorded entry is computationally infeasible. The conventional certification process is tiring, as the authority has to go through a lot of paper records manually for every individual candidate. With Blocktick, everything is made simple: paperwork is minimized, as there is provision for digital documentation.
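The core pattern behind blockchain document verification can be sketched in a few lines of Python. This is not Blocktick's actual API (which the article does not describe); it is a generic sketch in which the on-chain ledger is mocked as a plain dictionary, and the names `issue`, `verify`, and `ledger` are invented for the example:

```python
import hashlib

# Mock "ledger": in a real system this would be an on-chain record
# (e.g. a table maintained by an EOS smart contract).
ledger = {}

def issue(doc_id: str, document: bytes) -> str:
    """Issuer stores the SHA-256 hash of the document on the ledger."""
    digest = hashlib.sha256(document).hexdigest()
    ledger[doc_id] = digest
    return digest

def verify(doc_id: str, document: bytes) -> bool:
    """Verifier recomputes the hash and compares it with the ledger entry."""
    digest = hashlib.sha256(document).hexdigest()
    return ledger.get(doc_id) == digest

issue("cert-001", b"Jane Doe, BSc Computer Science, 2020")
print(verify("cert-001", b"Jane Doe, BSc Computer Science, 2020"))  # True
print(verify("cert-001", b"Jane Doe, BSc Mathematics, 2020"))       # False
```

Verification is a single hash computation and lookup, which is why it takes seconds rather than days: no person has to inspect the underlying paper records.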
https://medium.com/blockchainexpert-blog/blocktick-the-blockchain-powered-document-verifying-application-11b7b7338662
['Blockchain Experts']
2020-02-20 06:12:15.659000+00:00
['Blockchain Application', 'Blocktick', 'Blockchain Development', 'Blockchain', 'Blockchain Technology']
1,755
WATCH Star Trek: Discovery (Season 3, Episode 12)
⭐ Watch Star Trek: Discovery Season 3 Episode 12 Full Episode. Film, also called movie, motion picture or moving picture, is a visual art-form used to simulate experiences that communicate ideas, stories, perceptions, feelings, beauty, or atmosphere through the use of moving images. These images are generally accompanied by sound, and more rarely, other sensory stimulations. The word “cinema”, short for cinematography, is often used to refer to filmmaking and the film industry, and to the art form that is the result of it. ❏ STREAMING MEDIA ❏ Streaming media is multimedia that is constantly received by and presented to an end-user while being delivered by a provider. The verb to stream refers to the process of delivering or obtaining media in this manner. Streaming refers to the delivery method of the medium, rather than the medium itself. Distinguishing delivery method from the media distributed applies specifically to telecommunications networks, as most of the delivery systems are either inherently streaming (e.g. radio, television, streaming apps) or inherently non-streaming (e.g. books, video cassettes, audio CDs). There are challenges with streaming content on the Internet. For example, users whose Internet connection lacks sufficient bandwidth may experience stops, lags, or slow buffering of the content.
And users lacking compatible hardware or software systems may be unable to stream certain content. Live streaming is the delivery of Internet content in real-time, much as live television broadcasts content over the airwaves via a television signal. Live internet streaming requires a form of source media (e.g. a video camera, an audio interface, screen capture software), an encoder to digitize the content, a media publisher, and a content delivery network to distribute and deliver the content. Live streaming does not need to be recorded at the origination point, although it frequently is. Streaming is an alternative to file downloading, a process in which the end-user obtains the entire file for the content before watching or listening to it. Through streaming, an end-user can use their media player to start playing digital video or digital audio content before the entire file has been transmitted. The term “streaming media” can apply to media other than video and audio, such as live closed captioning, ticker tape, and real-time text, which are all considered “streaming text”. ❏ COPYRIGHT CONTENT ❏ Copyright is a type of intellectual property that gives its owner the exclusive right to make copies of a creative work, usually for a limited time. The creative work may be in a literary, artistic, educational, or musical form. Copyright is intended to protect the original expression of an idea in the form of a creative work, but not the idea itself. A copyright is subject to limitations based on public interest considerations, such as the fair use doctrine in the United States. Some jurisdictions require “fixing” copyrighted works in a tangible form.
It is often shared among multiple authors, each of whom holds a set of rights to use or license the work, and who are commonly referred to as rights holders. These rights frequently include reproduction, control over derivative works, distribution, public performance, and moral rights such as attribution. Copyrights can be granted by public law and are in that case considered “territorial rights”. This means that copyrights granted by the law of a certain state do not extend beyond the territory of that specific jurisdiction. Copyrights of this type vary by country; many countries, and sometimes a large group of countries, have made agreements with other countries on procedures applicable when works “cross” national borders or national rights are inconsistent. Typically, the public law duration of a copyright expires 50 to 100 years after the creator dies, depending on the jurisdiction. Some countries require certain copyright formalities to establish copyright, others recognize copyright in any completed work, without a formal registration. It is widely believed that copyrights are a must to foster cultural diversity and creativity. However, Parc argues that contrary to prevailing beliefs, imitation and copying do not restrict cultural creativity or diversity but in fact support them further.
This argument has been supported by many examples such as Millet and Van Gogh, Picasso, Manet, and Monet, etc. ❏ GOODS OR SERVICES ❏ Credit (from Latin credit, “(he/she/it) believes”) is the trust which allows one party to provide money or resources to another party wherein the second party does not reimburse the first party immediately (thereby generating a debt), but promises either to repay or return those resources (or other materials of equal value) at a later date. In other words, credit is a method of making reciprocity formal, legally enforceable, and extensible to a large group of unrelated people. The resources provided may be financial (e.g. granting a loan), or they may consist of goods or services (e.g. consumer credit). Credit encompasses any form of deferred payment. Credit is extended by a creditor, also known as a lender, to a debtor, also known as a borrower. ‘Mulan’ Challenges Asian Americans in Hollywood to Overcome ‘Impossible Duality’ Between China, U.S. Disney’s live-action “Mulan” was supposed to be a huge win for under-represented groups in Hollywood. The $200 million-budgeted film is among the most expensive ever directed by a woman, and it features an all-Asian cast — a first for productions of such scale. Despite well-intentioned ambitions, however, the film has exposed the difficulties of representation in a world of complex geopolitics. Disney primarily cast Asian rather than Asian American stars in lead roles to appeal to Chinese consumers, yet Chinese viewers rejected the movie as inauthentic and American. Then, politics ensnared the production as stars Liu Yifei, who plays Mulan, and Donnie Yen professed support for Hong Kong police during the brutal crackdown on protesters in 2019.
Later, Disney issued “special thanks” in the credits to government bodies in China’s Xinjiang region that are directly involved in perpetrating major human rights abuses against the minority Uighur population. “Mulan” inadvertently reveals why it’s so difficult to create multicultural content with global appeal in 2020. It highlights the vast disconnect between Asian Americans in Hollywood and Chinese nationals in China, as well as the extent to which Hollywood fails to acknowledge the difference between their aesthetics, tastes and politics. It also underscores the limits of the American conversation on representation in a global world. In conversations with several Asian-American creatives, Variety found that many feel caught between fighting against underrepresentation in Hollywood and being accidentally complicit in China’s authoritarian politics, with no easy answers for how to deal with the moral questions “Mulan” poses. “When do we care about representation versus fundamental civil rights? This is not a simple question,” says Bing Chen, co-founder of Gold House, a collective that mobilizes the Asian American community to help diverse films, including “Mulan,” achieve opening weekend box office success via its #GoldOpen movement. “An impossible duality faces us.
We absolutely acknowledge the terrible and unacceptable nature of what’s going on over there [in China] politically, but we also understand what’s at stake on the industry side.” The film leaves the Asian American community at “the intersection of choosing between surface-level representation — faces that look like ours — versus values and other cultural nuances that don’t reflect ours,” says Lulu Wang, director of “The Farewell.” In a business in which past box office success determines what future projects are bankrolled, those with their eyes squarely on the prize of increasing opportunities for Asian Americans say they feel a responsibility to support “Mulan” no matter what. That support is often very personal amid the industry’s close-knit community of Asian Americans, where people don’t want to tear down the hard work of peers and friends. Others say they wouldn’t have given Disney their $30 if they’d known about the controversial end credits. “‘Mulan’ is actually the first film where the Asian American community is really split,” says sociologist Nancy Wang Yuen, who examines racism in Hollywood. “For people who are more global and consume more global news, maybe they’re thinking, ‘We shouldn’t sell our soul in order to get affirmation from Hollywood.’ But we have this scarcity mentality. “I felt like I couldn’t completely lambast ‘Mulan’ because I personally felt solidarity with the Asian American actors,” Yuen continues. “I wanted to see them do well. But at what cost?” This scarcity mentality is particularly acute for Asian American actors, who find roles few and far between.
Lulu Wang notes that many “have built their career on a film like ‘Mulan’ and other crossovers, because they might not speak the native language — Japanese, Chinese, Korean or Hindi — to actually do a role overseas, but there’s no role being written for them in America.” Certainly, the actors in “Mulan,” who have seen major career breakthroughs tainted by the film’s political backlash, feel this acutely. “You have to understand the tough position that we are in here as the cast, and that Disney is in too,” says actor Chen Tang, who plays Mulan’s army buddy Yao. There’s not much he can do except keep trying to nail the roles he lands in hopes of paving the way for others. “The more I can do great work, the more likely there’s going to be somebody like me [for kids to look at and say], ‘Maybe someday that could be me.’” Part of the problem is that what’s happening in China feels very distant to Americans. “The Chinese-speaking market is impenetrable to people in the West; they don’t know what’s going on or what those people are saying,” says Daniel York Loh of British East Asians and South East Asians in Theatre and Screen (BEATS), a U.K. nonprofit seeking greater on-screen Asian representation. York Loh offers a provocative comparison to illustrate the West’s milquetoast reaction to “Mulan” principal Liu’s pro-police comments. “The equivalent would be, say, someone like Emma Roberts going, ‘Yeah, the cops in Portland should beat those protesters.’ That would be huge — there’d be no getting around that.” Some of the disconnect is understandable: With information overload at home, it’s hard to muster the energy to care about faraway problems. But part of it is a broader failure to grasp the real lack of overlap between issues that matter to the mainland’s majority Han Chinese versus minority Chinese Americans.
They may look similar, but they have been shaped in diametrically different political and social contexts. “China’s nationalist pride is very different from the Asian American pride, which is one of overcoming racism and inequality. It’s hard for Chinese to relate to that,” Yuen says. Beijing-born Wang points out she often has more in common with first-generation Muslim Americans, Jamaican Americans or other immigrants than with Chinese nationals who’ve always lived in China and never left. If the “Mulan” debacle has taught us anything, in a world where we’re still too quick to equate “American” with “white,” it’s that “we definitely have to separate out the Asian American perspective from the Asian one,” says Wang. “We have to separate race, nationality and culture. We have to talk about these things separately. True representation is about capturing specificities.” She ran up against the industry’s inability to make these distinctions while creating “The Farewell.” Americans felt it was a Chinese film because of its subtitles, Chinese cast and location, while Chinese producers considered it an American film because it wasn’t fully Chinese. The endeavor to simply tell a personal family story became a “political fight to claim a space that doesn’t yet exist.” In the search for authentic storytelling, “the key is to lean into the in-betweenness,” she said. “More and more, people won’t fit into these neat boxes, so in-betweenness is exactly what we need.” However, it may prove harder for Chinese Americans to carve out a space for their “in-betweenness” than for other minority groups, given China’s growing economic clout.
Notes author and writer-producer Charles Yu, whose latest novel about Asian representation in Hollywood, “Interior Chinatown,” is a National Book Award finalist, “As Asian Americans continue on what I feel is a little bit of an island over here, the world is changing over in Asia; in some ways the center of gravity is shifting over there and away from here, economically and culturally.” With the Chinese film market set to surpass the US as the world’s largest this year, the question thus arises: “Will the cumulative impact of Asian American audiences be such a small drop in the bucket compared to the China market that it’ll just be overwhelmed, in terms of what gets made or financed?” As with “Mulan,” more parochial, American conversations on race will inevitably run up against other global issues as U.S. studios continue to target China. Some say Asian American creators should be prepared to meet it by broadening their outlook. “Most people in this industry think, ‘I’d love for there to be Hollywood-China co-productions if it meant a job for me. I believe in free speech, and censorship is terrible, but it’s not my battle. I just want to get my pilot sold,’” says actor-producer Brian Yang (“Hawaii Five-0,” “Linsanity”), who’s worked for more than a decade between the two countries. “But the world’s getting smaller. Streamers make shows for the world now. For anyone that works in this business, it would behoove them to study and understand trends that are happening in and [among] other countries.” Gold House’s Chen agrees. “We need to speak even more thoughtfully and try to understand how the world does not function as it does in our zip code,” he says. “We still have so much soft power coming from the U.S. What we say matters. This is not the problem and burden any of us as Asian Americans asked for, but this is on us, unfortunately. We just have to fight harder.
And every step we take, we’re going to be right and we’re going to be wrong.”
https://medium.com/star-trek-discovery-s3xe12-4khd-quality/watch-star-trek-discovery-series-3-episode-12-online-1080p-hd-6b78c9945f03
['Naomi Briggs']
2020-12-25 22:08:28.687000+00:00
['Technology', 'Lifestyle', 'Coronavirus', 'TV Series']
1,756
Incubators, Accelerators, Studios. What’s the Difference? And where should a Startup Go?
Hi, I’m Igor Sokolov, Partner at Pragmatech Ventures and Head of Pragmatech Studio. I’d like to help clarify the features of the different ecosystems for startups: incubators, accelerators, and studios. What are their peculiarities, and what does each of them do? Are you thinking about creating a startup? Then you’ll need to have at least a few of the elements from the following list ready: ● a team, ● an idea, ● startup capital, ● an experienced mentor, ● a business model, ● market expertise. Don’t worry ahead of time; you’re not alone! You can find many of these in the special ecosystems for startups: incubators, startup studios, and accelerators. In a Nutshell ● Incubators help entrepreneurs at an early stage, from initial idea to proof of concept. ● Accelerators usually focus on teams with a working product (a prototype) and help them scale up. ● Startup studios create businesses from scratch, mostly using internal resources and expertise, and provide maximum support to an entrepreneur. Now, let’s dig a little deeper into the details. Incubators: ‘Business Greenhouses’ What do they do? The focus of startup incubators is essentially on building a project from scratch. These organizations help founders assemble a team and find their first clients. Typically, incubators have a specific niche, such as marketplaces, hardware, B2B enterprise solutions, and so on. Simply put, an incubator is like a greenhouse: a supportive environment for aspiring entrepreneurs. Incubators do a lot to ensure that the team doesn’t give up at an early stage, and they help it create a solution to show to clients. For that, they: ● explain how to manage the administrative side of a business; ● mentor in marketing, sales, and product development; ● provide office space if needed, and so on. What’s the timeframe? An incubator’s job is to help a startup go from an idea to an MVP, or even to being ready to go to market, so it’s hard to predict an exact timeline.
An incubator works on a somewhat flexible schedule, often for three to six months, and even longer in exceptional cases. Therefore, incubators anticipate several iterations in project development, as well as possible sharp deviations, in advance. What about financing? Incubators don’t always provide capital to startups. And when they do, funds are often transferred ‘in portions’: many such organizations ask the team to show meaningful accomplishments at various stages, and progress is ‘rewarded’ with funding. An incubator provides support in exchange for a share of the project. However, some incubators, most often governmental or university-run ones, do not take a share of the project for their services. The logic behind the financing is to help startups avoid beginners’ mistakes, thereby increasing the chances of business success from the beginning. Who should go to an incubator? Startups with an idea, but no experience. Accelerators: ‘Startup Universities’ What do they do? The goal of an accelerator is to speed up the development of startups. In this case, there is no ‘greenhouse’ approach as with an incubator: ● They prefer to take projects with prototypes or already finished products; ● The selection process is often relatively rigid: you still have to ‘enter’ an accelerator, just like a university; ● Accelerators prepare startups for institutional capital, helping them form a valid and scalable business model; ● It’s no longer about the first steps, but about preparing to scale up. The best accelerators also become a platform for building connections between investors and founders. Such networking boosts synergies for further growth. What is the timeframe? In accelerators, the goal is achieved through programs with clear timelines: they support startups with mentors and experts to reach the level of a full-fledged, market-ready product. This includes handling marketing and sales, as well as legal issues.
As a rule, such programs last from a couple of months up to six months. An acceleration program ends with a demo day, where the startup ‘graduates’ present their projects to investors. In essence, this is the start of the fundraising campaign. What about financing? There are two main models of accelerators on the market: equity and non-equity. The former select companies to participate and also invest in them (a typical cheque is $50,000-$100,000) in exchange for a share of the business (usually up to 10%); the latter do not give money and do not require a stake in the project. Moreover, participation in a non-equity accelerator is usually for a fee, unless it is a not-for-profit organization (e.g., an educational institution) or sponsored by the government. Corporate accelerators are also worth mentioning: they select projects that have synergies with a large company, which then pays for the process. Who should go to an accelerator? Someone who already has a team and a prototype ready, and is preparing to make a breakthrough. Startup Studio: a ‘Business Factory’ What do they do? While incubators and accelerators are relatively common in the Eastern European market, startup studios are a different matter. This option is still little known to local entrepreneurs — and we at Pragmatech Studio are going to change that. The goal of a startup studio is to gather a team of entrepreneurs and create a business with them from scratch: ● a studio is continuously generating and validating business ideas; ● people from the outside can come in without an idea; they will be offered one from the studio to implement. Candidates are expected to have an entrepreneurial spirit and a desire to build and develop businesses; ● the studio’s expertise helps to minimize the risks for a startup and increase its chances of success; ● the studio team works with entrepreneurs to develop the idea into a viable business: from providing office space to mentoring and operational support.
Studios combine several advantages of incubators and accelerators while being willing to invest more in project development. It’s worth mentioning that studios take a larger stake in the business than an incubator or an accelerator would. Simply put, if you came up with a new Netflix or TikTok, you should go to an incubator. If you are the next Elon Musk and you lack an outlet for your potential, a startup studio is waiting for you. The latter provides entrepreneurs with a partner who will support them through all the stages of company development. Besides, startup studios often have connections to venture capital (Pragmatech Studio included), which simplifies a founder’s task at the initial stages. All the effort and time that doesn’t go into fundraising can go into creating a better product. What’s the timeframe? There are no strict accelerator-type limitations here: studios focus on creating and growing a business. Different studios have different approaches; some support the project up to the seed stage, while others support it up to the exit stage. Pragmatech Studio doesn’t aim to exit projects in the future: we support a new business at all stages. What about financing? Incubators and accelerators ‘grow’ ready-made projects. Startup studios focus on a different kind of capital: human capital. They generate ideas and create startups based on them, finding external professionals who become co-founders of the projects. The entrepreneurs are engaged in product development with the support (including financial) of the studio’s team. Pragmatech Studio works on the same principle. Who should go to a studio? Founders with an entrepreneurial spirit and a desire to grow and develop their business, with or without a ready-made idea. _____ Initially published in AIN.UA, a Ukrainian media outlet about IT and business.
https://medium.com/@pragmatechventures/incubators-accelerators-studios-whats-the-difference-and-where-should-a-startup-go-2c09fc26d57f
[]
2020-12-16 10:34:31.748000+00:00
['Startup', 'Information Technology', 'Entrepreneurship', 'Business', 'Venture Capital']
1,757
Lightning Network: the future of Bitcoin
Currently, Bitcoin is experiencing a particular problem: slow and expensive transactions. The slowness stems from the fact that validating a new block takes about 10 minutes. A transaction, which must be included in a block to be correctly recorded on the blockchain, therefore takes about 10 minutes on average to be included and validated. Furthermore, it is not prudent to consider a transaction fully confirmed as soon as the block containing it is validated; for security reasons, it is advisable to wait until at least a few more blocks have been confirmed on top of it. So we generally have to wait at least 10 minutes, and often more than half an hour, before a transaction can be considered completely confirmed. It is clear that in this respect Bitcoin transactions are too slow, especially compared with fiat payment circuits such as Visa. In addition, the maximum block size is 1 MB, which means that only about 2,000 transactions, or a little more, can fit in a block. 2,000 transactions every 10 minutes is a very small number, and it often happens that so many transactions are broadcast on the Bitcoin network worldwide that some risk being left out of the blocks, unvalidated. For this reason, the Bitcoin network uses fees: transaction commissions that convince miners to include a given transaction ahead of the others waiting in the queue. These fees become particularly expensive when many transactions are queued; at some moments it has taken $20 in fees to get a transaction into the first block to be validated! This situation has made Bitcoin, up to now, not really scalable: without an additional solution, it could not spread as a means of everyday payment.
But the solution exists, it is already operational, and it is called the Lightning Network (LN). To avoid the blockchain's slowness due to its small block size, the Lightning Network lets users perform off-chain operations, i.e. transactions that don't need to be validated by inserting them into a block. It allows the creation of temporary payment channels, within which compatible wallets can make direct one-to-one transactions. For each channel, only two transactions need to be recorded on the blockchain: one to open the channel and one to close it. All the other transactions carried out within the channel, after opening and before closing, are not recorded on the blockchain, only off-chain. This means they are immediate and their fees are significantly reduced. Therefore, after opening a Lightning Network channel, it is possible to perform as many transactions as you want within it, at negligible cost and with immediate execution. This is the future of Bitcoin! All channels have a deadline: once opened, they must eventually be closed. However, at the time of expiry the closure can be postponed, so the channel can be kept open by setting a new deadline, and so on. The great thing about LN is that all the channels are connected: they are all part of the Lightning network, so once a single channel is opened, through it you can send or receive BTC to or from any other channel, without the need to open new ones. In other words, once you open your channel, you can send or receive BTC immediately and cheaply, to or from any other wallet that currently has an active Lightning Network channel. This innovation is already live, although for now it is still considered to be in a testing phase; it will probably spread all over the world by the end of 2018. The problem, if you will, is that current wallets are not compatible.
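The channel mechanics described above can be sketched as a toy model in Python. This is purely illustrative, not real Lightning code: actual LN channels are built from multisignature transactions and hashed time-locked contracts, and every name below is invented for the example. The point it demonstrates is that only opening and closing touch the chain, while every payment in between is a cheap balance update:

```python
class PaymentChannel:
    """Toy two-party payment channel: only open and close would hit
    the blockchain; every pay() is an instant off-chain update."""

    def __init__(self, alice_deposit: int, bob_deposit: int):
        # Channel opening: one on-chain funding transaction.
        self.balances = {"alice": alice_deposit, "bob": bob_deposit}
        self.on_chain_txs = 1

    def pay(self, sender: str, receiver: str, amount: int) -> None:
        # Off-chain update: the parties simply agree on new balances.
        if self.balances[sender] < amount:
            raise ValueError("insufficient channel balance")
        self.balances[sender] -= amount
        self.balances[receiver] += amount

    def close(self) -> dict:
        # Channel closing: one final on-chain settlement transaction.
        self.on_chain_txs += 1
        return self.balances

# A thousand payments, but only 2 transactions ever reach the chain.
ch = PaymentChannel(alice_deposit=50_000, bob_deposit=50_000)
for _ in range(1000):
    ch.pay("alice", "bob", 10)
print(ch.close())        # {'alice': 40000, 'bob': 60000}
print(ch.on_chain_txs)   # 2
```

This is why channel transactions are immediate and nearly free: no miner, block, or fee is involved until the channel is settled.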
In other words, to be able to open an LN channel, a wallet must integrate an additional function that is not present in traditional wallets. Therefore, in order to use this new technology, you must update your wallet to a new version that integrates it. Electrum wallet (go to the website) To date, for example, the Electrum Bitcoin wallet, probably the most widely used in the world, does not yet integrate this feature, so you cannot use it to create your own Lightning Network channel. A new version that implements it will probably be released as soon as possible. Once LN has spread, the number of transactions that can be handled will increase enormously, and costs will drop sharply. It is probably for this reason that the major international fiat-currency payment circuits seem worried about Bitcoin's possible success: immediate transactions at practically no additional cost could make their technology obsolete and no longer competitive. CREDITS This article was originally written in Italian by Marco Cavicchioli and translated into English by Stefania Stimolo for NovaMining.
https://medium.com/novamining/lightning-network-the-future-of-bitcoin-dfbc2a13f3c0
['Marco Cavicchioli']
2018-06-01 12:21:37.099000+00:00
['Blockchain Technology', 'Bitcoin Mining', 'Blockchain', 'Bitcoin']
1,758
Leonardo’s Robot: Leonardo da Vinci’s Mechanical Knight and Other Robots
Leonardo da Vinci, one of the greatest geniuses of the Italian Renaissance, designed and constructed a mechanical knight way back in 1495. According to historical sources, this mechanical knight was capable of humanlike movements. Leonardo da Vinci — Genius of the Italian Renaissance The Italian Renaissance was one of the greatest epochs in human history. During this period, there were extraordinary developments in art, sculpture, architecture, philosophy, literature, and many other human endeavours. A rediscovery of Ancient Greek and Roman cultures — from surviving works that Islamic scholars had preserved, as well as surviving art and sculptures — influenced and inspired most of the artists, sculptors, architects, writers, philosophers, and other intellectuals. Using the Greek and Roman works as their base, they built further on these to create astonishing masterpieces. As you might expect, there was no dearth of geniuses engaged in these activities, but Leonardo da Vinci is a personality that still manages to tower over everyone. In terms of output, he was not actually as prolific as some of his contemporaries. A restless spirit, he had a mind that overflowed with so many ideas that he didn’t have the time to bring them all to fruition. He painted only a few paintings, most of which are now very well known, like the Mona Lisa, the Last Supper, and The Virgin and Child with Saint Anne. His fame rests on these, as well as on the fact that he was a very multi-faceted man. Apart from art, he could turn his hand to architecture, mechanics, armaments, and more. His many surviving notebooks are filled with designs and ideas, some of which were advanced not only for his time but come across as amazing in ours as well. Leonardo da Vinci’s mechanical knight In 1957, the researcher Carlo Pedretti discovered some of Leonardo da Vinci’s sketchbooks that contained design notes for a mechanical knight.
While there isn’t one single complete drawing of the mechanical knight — if Leonardo da Vinci made a complete drawing, it didn’t survive over the ages — there are many fragmented details of the design in various sketchbooks. Historical accounts had already mentioned da Vinci’s famous Automa cavaliere, that is, Automaton knight. However, most people assumed these stories were exaggerated. The discovery of the notebooks proved this was not so. Leonardo da Vinci designed the mechanical knight to impress his patron Ludovico Sforza, who was the ruler of Milan. In 1495, Ludovico Sforza had a pageant at his court and asked Leonardo da Vinci to oversee all the arrangements of the celebration. The latter, of course, outdid himself when he unveiled his mechanical knight at this gathering. Apparently, it left everyone there thunderstruck. They had never imagined seeing a machine that resembled an anatomically correct knight and, moreover, moved like one too.

Design of Leonardo’s robot

It is well-known that Leonardo da Vinci carried out extensive anatomical research. He even dissected cadavers and made detailed drawings of their interiors. As a result, he developed a deep understanding of the human body. He was able to figure out that it was the muscles that enabled the bones and joints to move. When he designed his mechanical knight, he made sure it was anatomically correct. The knight has proportionate limbs and joints that follow the Canon of Proportions that Leonardo noted in the Vitruvian Man. Furthermore, he tried to incorporate the way human muscles worked into the design. Clad in medieval German-Italian armour, Leonardo’s robot could move its arms, raise its visor, and wave its sword. It could also move its jaw and neck. Furthermore, it could sit, stand, lie down, and walk. These robotic movements were very advanced and human-like for that period. In fact, we can say that the mechanical knight was the first humanoid robot that the world ever knew.
How Leonardo’s robot worked

Known as Robot di Leonardo or Automa cavaliere, that is, Automaton knight, the machine used two elaborate and connected systems made up of pulleys, cables, gears, and wheels to move. In his article ‘The da Vinci Robot’, Michael Moran tells us that one of these systems was a four-factor one and was used for the movements in the hands, wrists, elbows, and shoulders. Leonardo fitted this cylindrical system into the knight’s chest. The other system was external and tri-factor, and it was used to move the ankles, knees, and hips. All the movements were performed to a drumbeat.

Modern reconstructions of Leonardo’s robot

In 1996, Mark Rosheim, a modern-day researcher and robotics expert, studied Leonardo’s different design notes for the mechanical knight. Later, in 2002, he used them as blueprints to replicate the mechanical knight. Amazingly, he found that it worked exactly like the historical accounts had said it did. In addition to creating a model of Leonardo’s robot, Mark Rosheim also used some of his design ideas to build robots for planetary exploration for NASA. You can find how Rosheim built the mechanical knight as well as others of Leonardo’s robots — a programmable cart, a bell-ringing automaton, and a lion — in his book “Leonardo’s Lost Robots.” There is also a research group of Leonardo da Vinci aficionados called Leonardo3. They have also engaged in discovering, interpreting, and reconstructing Leonardo da Vinci’s inventions. Their book “Leonardo da Vinci’s Robots” contains reproductions of Leonardo’s original designs and manuscripts. Further, along with this book, Leonardo3 offers a kit with which you can have a go at building Leonardo da Vinci’s programmable cart yourself.

Other robots designed by Leonardo da Vinci

As mentioned above, Leonardo da Vinci used his engineering skills to design other robots as well. These include a mechanical lion that, like the knight, was capable of movement.
It could walk and, supposedly, could also hold and offer flowers. While there are no eye-witness accounts of this lion, Leonardo apparently designed it for Giuliano de’ Medici to be given as a gift to the King of France. In 2009, the French museum Château du Clos Lucé built a modern version of this lion.

Leonardo da Vinci was an incredible man for any age. His ideas and designs of robots have considerably aided the development of modern robotics. It is interesting to wonder about the things he might have achieved if he had access to many of our modern conveniences.

By Sonal Panse. Read more intriguing articles on STSTW Media.
https://medium.com/@ststw/leonardos-robot-leonardo-da-vinci-s-mechanical-knight-and-other-robots-c2f31742c959
['Ststw Media']
2020-05-20 22:31:42.186000+00:00
['Automaton', 'Robotics', 'Leonardo Da Vinci', 'Robots', 'Technology']
1,759
HOW DOES MOONSHOT INSURANCE CONTEXTUAL INSURANCE WORK?
Moonshot Insurance is an Insurance-As-A-Service platform offering contextual B2B2C insurance. One of our strengths is our ability to plug your existing sales funnel quickly and efficiently into our APIs and offer our products to your customers. But how does this customized platform work?

What is an API?

An API is a service that allows two applications to talk to each other; it stands for Application Programming Interface. Moonshot Insurance’s contextual insurance solution offers an API to connect your e-commerce website to our services. The operation is simple; our API adheres to industry standards and offers state-of-the-art security. Our backend is hosted in Amazon Web Services and leverages many “serverless” technologies such as API Gateway (for API exposition), Lambda functions (for business logic) and DynamoDB (for data storage). The end result is a robust, highly scalable, low-maintenance platform.

A quick journey into our deployment process

At first, we configured and deployed the Lambda functions and API gateways manually. This way we explored the different configuration options and tools to deploy a quick proof of concept. We then sought to automate our processes. It was long work, and it is still ongoing today; as our platform and services developed, we eventually outgrew our initial toolchain and invested in more robust automation solutions. Today, using Infrastructure as Code and industry-leading tools such as Terraform, we are able to deploy new changes to production in just a few minutes. Our code is thoroughly tested, both locally and remotely, before it becomes active in production. Our architecture is still very much a work in progress, and we expect our tools and processes to keep evolving to improve our performance.
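As a rough illustration of the API Gateway → Lambda shape described above, here is a minimal Python handler sketch. The event format follows the standard API Gateway proxy integration; the `product` field and stub response are hypothetical placeholders, not Moonshot’s actual API.

```python
import json

def handler(event, context):
    """Minimal Lambda handler behind an API Gateway proxy integration.

    In the real platform this is where the insurance business logic would
    run and a record would be written to DynamoDB; here we just echo a
    stub quote so the request/response shape is visible.
    """
    body = json.loads(event.get("body") or "{}")
    product = body.get("product", "unknown")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"product": product, "status": "quoted"}),
    }
```

The value of this shape is that the handler is a plain function: it can be unit-tested locally with a dict-shaped event, which is part of what makes the locally-then-remotely testing flow described above practical.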
https://medium.com/moonshot-internet/how-does-moonshot-insurance-contextual-insurance-work-6f7ba7eacd0d
['Moonshot Insurance']
2020-12-15 14:01:23.964000+00:00
['API', 'Innovation', 'Insurance', 'Technology']
1,760
Password-Hashing-As-A-Service
PHAAS? The acronyms you get when everything is as a service

Can’t I hash my own passwords?

Companies have a vested interest in ensuring passwords in their care are handled securely. A centralized PHAAS could help by ensuring that passwords are encoded and verified according to company standards and best practices. Centralizing password hashing also removes security-sensitive code from individual applications and standardizes it in a single location. This also reveals which applications perform password hashing and which algorithms are used.

Why not Hashing-As-A-Service?

I’m not sure that hashing as a generic capability makes a lot of sense as a centralized function. Nearly every modern language has either built-in or widely-available library support for common cryptographic hashing algorithms, such as MD# and SHA#. Hashing algorithms are typically used to verify the integrity of downloaded files or to compare two files. The idea of passing files or large amounts of data to a centralized service to run SHA-256 seems unwarranted, except for the case of handling passwords.

The details… the details

Companies may have several different allowed password hashing schemes, to allow flexibility as legacy systems are brought into compliance, or to allow different levels of protection. The <scheme> path parameter in the HTTP examples below indicates that a set of rules (identified by the value of <scheme> ) will be applied to the provided password. What kinds of rules might apply to hashing passwords? Primarily the choice of hashing algorithm, such as bcrypt, MD#, SHA#, or other algorithms. Note that MD# and SHA# should never be used for hashing passwords. If we choose bcrypt, there are still several choices to make that could be decided by company standard. The bcrypt cost is often tuned so that it takes one second to hash a password — or possibly to a higher cost as per guidelines. Let’s be honest — is anyone actually doing this tuning?
The bcrypt algorithm also needs a cryptographically secure pseudo-random source to generate the salt. Companies could also specify running the password through a sequence of steps (such as pre-hashing or pre-encoding) before the cryptographic hash. Note that a centralized PHAAS would NOT store the password. It’s up to the consumer to store the hashed password and provide it back to the centralized service for verification. Note also that exposing expensive hashing as a function would need to be an authorized-only capability, to prevent denial of service or participation in a brute-force attack — whether intentional or not.

Here’s an example of a request to hash a password given scheme=bcryptplain, which in this case might indicate that the password should be hashed using bcrypt algorithm version 2a with a cost of 10 and no pre-hashing or pre-encoding. It could be possible to POST with either a request body or URL parameters, but I prefer symmetric JSON-in-JSON-out requests, and I feel that POST bodies are more appropriate for a headless service (traffic to and from a crypto-as-a-service would likely never be seen from a browser).

POST /password/hash/bcryptplain HTTP/2
Authorization: Bearer eyJhbGc...
Content-Type: application/json
Content-Length: #

{
  "clearPassword": "myPassword123"
}

< HTTP/2 200
< content-type: application/json
< content-length: #

{
  "opaquePassword": "$2a$10$FwcytWTfZQ3SzcBe.ojeD.BK1GwTi34V50wvggWUfizp.AFleYE0S"
}

Now we need a similar password verification capability, so that subsequent login attempts can be verified. I anticipate that such a function would rely on JSON responses, not HTTP status codes, to indicate verification pass or fail.

POST /password/verify/bcryptplain HTTP/2
Authorization: Bearer eyJhbGc...
Content-Type: application/json
Content-Length: #

{
  "clearPassword": "myPassword123",
  "opaquePassword": "$2a$10$FwcytWTfZQ3SzcBe.ojeD.BK1GwTi34V50wvggWUfizp.AFleYE0S"
}

< HTTP/2 200
< content-type: application/json
< content-length: #

{
  "matches": true
}

Who’s allowed to hash or verify passwords?

Both hashing and verification should require authorization (or be guarded through some other mechanism) because hashing and verification are expensive tasks, and a centralized PHAAS would be an easy denial-of-service target. I recommend that clients of PHAAS should be given access to a specific <scheme> in order to accomplish their own objectives. This could be accomplished through an OAuth scope such as phaas.bcryptplain to indicate the <scheme> shown above. This grants access to both the hash and verify capabilities.

Could an implementation be open-sourced?

Of course! Note that there’s a lot of leeway in the choice of algorithms and rulesets (such as whether to pre-hash), meaning that the open-source component would either have those rules baked in or be easily extensible. It seems quite possible that this could be open-sourced in library form with some simple rulesets and then users could register their own rulesets. In a Java+Spring library this could possibly be done by registering @Beans of type PasswordEncoder with names corresponding to the <scheme> .

Can this approach help migrate a legacy app to a better hashing scheme?

I think so! Let’s say legacy app corporateTreasure is using SHA-256 and wants to migrate to using bcrypt. One approach is to invalidate all currently-hashed SHA-256 passwords and require the user to provide a new password that can be hashed with bcrypt. Another approach that spreads risk over time is to “strangle” the userbase from SHA-256 to bcrypt as passwords naturally expire, say every 90 days.
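The hash/verify pair a PHAAS scheme exposes can be sketched in a few lines. This is only a sketch, not the article’s implementation: bcrypt is not in the Python standard library, so PBKDF2-HMAC-SHA256 stands in for it here, and the `iterations$salt$digest` opaque encoding is a hypothetical format of my own.

```python
import base64
import hashlib
import hmac
import os

# Per-scheme cost parameter, analogous to the bcrypt cost discussed above.
ITERATIONS = 200_000

def hash_password(clear_password: str) -> str:
    """Hash a password, returning an opaque string the caller stores."""
    salt = os.urandom(16)  # cryptographically secure random salt
    digest = hashlib.pbkdf2_hmac(
        "sha256", clear_password.encode(), salt, ITERATIONS
    )
    # Opaque format carries everything the verifier needs: cost, salt, digest.
    return "$".join((
        str(ITERATIONS),
        base64.b64encode(salt).decode(),
        base64.b64encode(digest).decode(),
    ))

def verify_password(clear_password: str, opaque_password: str) -> bool:
    """Re-derive the digest from the stored parameters and compare."""
    iterations, salt_b64, digest_b64 = opaque_password.split("$")
    digest = hashlib.pbkdf2_hmac(
        "sha256", clear_password.encode(),
        base64.b64decode(salt_b64), int(iterations)
    )
    # Constant-time comparison to avoid leaking timing information.
    return hmac.compare_digest(digest, base64.b64decode(digest_b64))
```

The iteration count plays the role of the bcrypt cost parameter: a real service would tune it (per scheme) so that a hash takes on the order of a second, per the guidelines discussed above, and the service would never store the opaque value itself.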
https://medium.com/@joshuatcasey/password-hashing-as-a-service-42cd64de492d
['Joshua Casey']
2020-12-03 18:13:50.680000+00:00
['Authentication', 'Security', 'Technology']
1,761
Do High Frequency Gravitational Waves Explain Li & Podkletnov’s Experimental Results?
Scientists like Dr. Ning Li & Eugene Podkletnov have claimed to see anomalous gravitational effects for decades. Could High-Frequency Gravitational Waves provide an explanation? We join Dr. Robert Baker, Jr. to discuss international HFGW research and hypothesize about what might be causing these strange experimental effects…

Robert, I understand that there are literally dozens of physicists & engineers doing research on High Frequency Gravitational Waves, and the 2003 MITRE HFGW conference was a pivotal first event in terms of bringing them together as a community. Can you describe this for me a bit?

Dr. Robert Baker, Jr.: The MITRE conference was a crucial first step in bringing scientists together to discuss HFGWs from both the perspectives of theoretical physicists and practical engineers and included scientists from all over the world. Interestingly though, scientists from China who were not able to attend the event became some of the biggest proponents of HFGW research and use the MITRE papers as background for their research. The community that took shape at the MITRE Conference later evolved into STAIF Section-F, and this was due entirely to the tireless efforts of Paul Murad and Tony Robertson. STAIF proved to be an excellent forum for the presentation and discussion of new concepts in gravitational science.

I’ve been following a few of them recently — such as Dr. Ning Li. The last I’d heard from her was a very brief message in which she claimed to have achieved an 11 kilowatt experimental result for gravitational wave production. She calls this “AC gravity”, which is poorly understood by myself and many others.

Well, at the 2003 MITRE conference, we had participants from about nine countries and about 25 papers, indicating quite a bit of interest — particularly in Russia and China.
Ning Li presented a paper there, which was theoretical in nature and perhaps not easily understood by laymen, and in it she did indicate a theoretical output of 11 kilowatts from her system. In her mind, “AC gravity” is simply alternating current — the rest of us refer to it as high frequency gravitational waves. So we’re really talking about the same thing — with frequencies on the order of a hundred kilocycles and on up to the gigacycle range and beyond.

As I understand things, Ning Li’s research uses rotating superconductors, and is similar to the experiments that Eugene Podkletnov claimed results for in the early ’90s. The biggest difference is that she’s working on a theoretical model for her experiments, right?

Well, she worked with Douglas Torr while they were at the University of Alabama and came up with a very significant paper back in 1992 which indicated that these gravitational waves could be refracted. Essentially, she theorized that there’s an index of refraction, which means you could make a gravitational wave lens out of a superconducting material. If this idea pans out, it would lead to all kinds of optics and a lot of future innovation — it would be quite an amazing feature of her studies.

Dr. Ning Li, demonstrating the superconductor with which she claims to have generated “AC Gravity”.

The superconductors that both Li & Podkletnov have been experimenting with are quite large — much larger than what NASA used during their experimental replication. Does that play a role in their results?

Yes, they’re both using discs up to almost 10 inches, I think. I’m not quite sure, but certainly 10 centimeters at the least. They’re quite large. My thesis has been that high frequency gravitational waves are the basis for the effects Li & Podkletnov have claimed.
In other words, the rotating disc, fields, high frequencies, harmonics and so on could be producing HFGWs that are perhaps just below the limits of detection at the magnitudes they work at, but may be influencing the local gravitational field, perhaps through self-rectification or something along those lines.

That makes sense. She’s been rather quiet lately — I haven’t heard from her in some time, but did see a recent funding request that appears to have been fulfilled by the Department of Defense, so it seems they may be financing part of her research?

Well, they did, and I was a part of the oversight committee. It was done by the U.S. Army and the Redstone Arsenal there in Huntsville. They did indeed have her do some research, and I was on the panel that was taking a look at that research for the Army. There was a problem, though: we couldn’t get a final report out of Ning. You know, we had to get something. She is a brilliant scientist, but like so many brilliant scientists, sometimes they’re a little less than practical. It’s not that she was hiding it or anything nefarious. In fact, I was on the panel that was talking to Ning, and I said, “well, where’s the final report?” Well, she just didn’t quite get around to it, you know, and that’s a problem. I’ve talked with her since then, however, and she would play a role in the efforts that I’m trying to promote.

Back in the 1990s, Podkletnov claimed a 3% loss in weight in a rotating superconductor, which is similar to Ning Li’s experiments. However, in the early 2000s he co-authored a paper with Dr. Giovanni Modanese claiming that a spark discharge onto a superconductor was generating over 20 pounds of force. Are you familiar with this second paper?

Podkletnov’s rotating superconductor was claimed to produce a 3% loss in weight.

Well that’s interesting, and I would love to see a replication of it. Some of their work I have seen, and it had interesting correlation with frequency.
In other words, the higher the frequency, the greater the effect, or rather with the square of the frequency. That caught my attention, because high frequency gravitational wave influence also increases with the square of its frequency, so again I can’t help but wonder if there’s an HFGW effect that might explain their claims. Please keep in mind, however, that I’m an engineer — not a theoretician. In this particular case, you’d want to follow up the experimental work with a detailed theoretical analysis. That’s not always the case. Often you have a theory and then develop an experiment to you know, to test it, but since they already claim to have experimental results I think they should work it the other way around. The Podkletnov & Modanese claims are significant, as is this 11 kilowatt effect that Ning Li claimed to generate — how is that measured? What kind of force could we anticipate from future experimental devices? Well it’s hard to tell. I mentioned Landau and Lifshitz, but they do not give any particular specifications. Now another scientist that you might get in touch with is Professor Giorgio Fontana over in Italy. Fontana has come to some of the HFGW meetings and presented papers on gravitational wave beams, interactions and so forth. He’s one of several researchers in Italy looking at gravitational wave beams — what happens when they intersect, and how they might change the gravitational field. I think Giorgio’s work is good, and he’d be a good person to talk with, but I really don’t think anybody’s theoretically come up with the magnitude. Do you have any ideas on how conservation of energy is preserved in these experimental claims? I ask because these are claims for pretty substantial results, and I wonder how input power might be coupled to output effect. I’m not sure and I don’t think anybody’s quite sure. Again, that’s why an experiment is valuable. 
I’ve not seen any of these purported gravitational change experiments myself, and of the people who are reporting claims, none have reported any kind of permanent effect or easily replicable effects that would allow for detailed coupling measurements. In some of the work that I’m doing with the Chinese, you can get up to 10²⁰ watts per square meter for a small area, and that might really open things up. After all, when Marconi developed his ship to shore radio telegraphy, I don’t think he ever thought about microwave oven applications for his work. So we just won’t know until we actually do more experimental research.

Podkletnov’s “impulse generator” was claimed to generate up to 20 lbs of force.

A lot of that research is currently being done in China, from what I understand. Can you tell us a bit about their efforts, and how many people have become involved in their research projects?

Yes, the Chinese sponsored me on a month-long lecture tour of Universities and Institutes in China on the subject of HFGWs. Right now there are probably more scientists in China working on HFGW research than in the whole of the rest of the world. I’m also working on a project with professor Fangyu Li at Chongqing University in China on an experiment attempting to emulate the gravitational waves produced by a double star system. Fangyu Li has a detector that would be well suited to detecting these gravitational waves, and we’re excited to see what kind of results we can produce & detect.

What is the anticipated timeframe before we can anticipate seeing scientific applications for High Frequency Gravitation Wave research?

It’s like anything, it really boils down to money. If the A-bomb project during World War II had proceeded at a normal pace, it probably would have taken 50 years to develop — but when the Manhattan project funded it & removed all the stops, then it happened very rapidly. You could make the same case for the rapid development of computer processors.
Basically as I see it, there’s a tipping point in terms of experimental results, and once that’s been reached then mainstream “big science” gets involved and further research will move rapidly. The question is how fast we’ll reach that tipping point — it could be a couple of years, or it may be a decade. It’s difficult to say.
https://medium.com/discourse/do-high-frequence-gravitational-waves-explain-li-podkletnovs-experimental-results-5d9f9560e1a6
['Tim Ventura']
2019-12-24 01:00:21.732000+00:00
['Physics', 'Futurism', 'Science', 'Technology', 'Gravity']
1,762
American ISPs are a Monopoly and Need to Be Controlled
My family and I live in the country where internet is scarce. There are thousands of acres where there is no internet available short of satellite, which is lackluster to say the least. I am fortunate to have an unmetered connection, but the fastest speed I can get today is 25Mbps down and 3Mbps up from a bonded DSL connection. I also pay a hefty $87/month for this “luxury”. That has come down from the $106/month when I first started with the company and had a measly 15Mbps down and a choice of 1Mbps or 2Mbps up, with the latter adding another $2 to my monthly bill. I opted to pay the extra money.

The lack of internet availability and astronomical cost where it can be gotten all came as a shock to me. We moved out of a small city where I had 60Mbps down and something like 5Mbps up for only $55/month with a guarantee the price would never increase but my speed would. I had been with the company for nearly 10 years, and in that time my speed went from 3Mbps down to 60Mbps while my price had only risen $5, but the last raise came with the guarantee.

My country ISP is in the process of running fiber, as most are. I have now been calling for 2 and a half years to see when fiber would be run. 2 weeks ago they ran a fiber trunk from their office to the nearest town which runs right in front of my house. They now say it will take another 14 months before that trunk line is operational and can provide fiber to my house!

These “internet dark age” years led me to the conclusion that ISPs are monopolies operating in America illegally, and nobody seems to care. Let’s take a look at a new subdivision for example. If ISP1 agrees with the land developer to provide their services to every single house, there is sometimes an agreement that keeps other ISPs out for a certain time. That sounds like a violation of antitrust law to me. Since each ISP has to run their own cabling, if they deem it’s not worth their effort they won’t run cabling to certain areas.
This means it’s a geographical monopoly, and the only way you can choose another provider is to sell your most expensive purchase, your house, and move to another location. Just a couple years ago in Nashville, Tennessee, Google came in and started running fiber after going through all the proper channels. Comcast and AT&T both got scared and their marketing tactics showed it. In a final act of desperation they teamed up and went after Google to stop the competition. Turns out they each were on the lease for the telephone poles and Google wasn’t, so the county was caught in a breach of terms and their hands were tied. Here is one website that has some details and is trying to raise support and have an ordinance passed: http://www.googlefibernashville.com/. I am not being compensated for this, just sharing to highlight the facts.

How can this be resolved?

In my humble opinion, if the ISPs shared the cabling I believe this would go a long way toward stopping the monopolies. The internet connects the world, but the ISPs keep the neighborhoods segregated. If there was a way that a connected house could choose their provider, then when the homeowners or renters decide that ISP doesn’t have good customer support, they could cancel and call another, merely needing to swap ISP hardware such as modems. As to who would do maintenance, that’s the million dollar question I suppose. Personally, I’d love to see the internet cabling handled like roads — leave it up to the municipalities to create the webbing, but make sure they are all connected at the borders. It’s not like we have to drive only a Chevy in one town or a Ford in another. All makes of vehicles can drive on the same road.

What’s the point anyway?

You might be asking yourself, why does any of this even matter? It matters because I’ve had to work with several ISPs, some personally, others professionally.
There are ISPs out there that have such low customer service numbers they seem truly terrified of something like this and would most likely be willing to risk just about everything to keep their monopoly alive, because without it, they wouldn’t have a business. Competition! Why is it that I am paying $87/month for internet that’s so terribly slow after paying $55/month just 10 miles away? I have the choice of either this or no internet — there is no competition. Besides, if you could choose any internet company in the nation, how would that affect your pricing? We would see nationwide internet prices drop faster than a tank free falling from a C-130! This would also boost the American economy, because one day you might only be paying $30/month for internet. That price reduction could possibly equal billions of “free” money added to the American purchasing power. Start with a population of 328,915,700 from a 2019 census estimate, and count just 60% of it, to account for rural houses with no available internet access and for those with internet availability who choose not to partake: that leaves 197,349,420 people. If those people were to save just $10/month, that would be $1,973,494,200/month saved for the American population. What do you think that would do to our economy?
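The back-of-the-envelope figures above check out; here is the same arithmetic as a quick plain-Python sketch, using only the numbers cited in the text:

```python
# Sanity check of the savings estimate above.
population = 328_915_700              # 2019 population figure cited in the text
subscribers = population * 60 // 100  # the 60% share the author assumes
monthly_saving_per_person = 10        # dollars per month

total_monthly_savings = subscribers * monthly_saving_per_person
print(subscribers)            # 197349420 people
print(total_monthly_savings)  # 1973494200 dollars per month
```

Integer arithmetic is used deliberately (`* 60 // 100` rather than `* 0.6`) so the 60% share comes out exact, matching the figures quoted in the text.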
https://medium.com/swlh/american-isps-are-a-monopoly-and-need-to-be-controlled-ed8f51e7fd68
['Matt Anderson']
2019-07-15 18:25:50.277000+00:00
['Technology', 'Change', 'Internet', 'Monopoly', 'Antitrust']
1,763
6 Best Smart Light Bulbs In 2020
Adding smart light bulbs to your home is a quick and easy way to make your place smarter without breaking the bank. Available in all sorts of sizes and styles, a smart bulb can set the mood through dimming or setting a scene with bursts of color. We have gathered the best smart light bulbs around to help you decide which is the best fit for you.

1. BERENNIS Smart Light Bulb

BERENNIS LED Smart WiFi Bulb is an easy, affordable way to add colorful, smart lighting to your home, featuring 16 million colors and thousands of whites. Connect multiple bulbs in one app without a hub and enjoy smart light in every room.

Promising review: I tried 3 different kinds but nothing could beat this one. I love the APP it was easy to install and easy to setup the light. I think it’s plenty bright enough especially at warm white at full brightness. I love that in the app I can set it to change colors at certain times so as it gets closer to bedtime it gets calmer. Setup with Alexa and Google Home super easy. I was using the big WEMO plugs but this is so much easier and smarter. I would highly recommend! They are slightly larger than normal bulbs or even their 4.5W bulbs which I did find too dim, but it’s not a big deal. They look great and work great. Hope this helps!

Price: $29.99 At Amazon

2. Smart LED Light Bulb 2.4G

Promising review: I wanted to get into smart lighting for my apartment but was discouraged by the high price and less than stellar reviews of Philips Hue. Decided to give these a shot due to the reasonable price and positive reviews… they are legit. The bulbs easily pair with the Smart-Life app and have plenty of color/brightness/temperature options. In addition, my Echo Dot links with the app for voice control, no hub needed. One of my favorite (albeit small) details is that they gradually turn on and off instead of just blinking on and off. It’s a nice touch. Also, the customer service is great and I bought more for the rest of the lights in my apartment.
Price: $22.99 At Amazon

3. WiFi LED Light Bulbs

Promising review: These bulbs are awesome. They are super easy to set up with the Smart Life app on iOS and connect to Siri and Alexa. They are really bright and you can dim them. Dimming is also easy to do with Alexa. You really can’t go wrong with these bulbs. I will definitely buy more of these!

Price: $43.99 At Amazon

4. TECKIN Smart Light Bulb

Promising review: This bulb provides more features than I expected. I got choices of cold, warm and color light. Color mode as well as brightness can be conveniently controlled by the smart app on your phone. Even though it says 13W, it’s pretty bright when I set it to the max brightness. Wifi and app connection setup was pretty easy and straightforward, just need to follow the instructions and hints. My favorite feature is the voice control part. It’s hooked up with my Google Home, so instead of turning it on & off via the app, I just need to use voice control. For me I feel like this is so convenient and saves time. The bulb looks great too and you can feel it’s great quality. If you are looking for a smart bulb with a reasonable price, this is the one I highly recommend.

Price: $20 At Amazon

5. Harmonic Smart Light Bulb

Promising review: I have never owned smart light bulbs before. I read an article highlighting the benefits of different color lights and their ability to enhance your mood and improve your health. So I wanted to try these color changing lamps. The app is simple to use and the many colors that the lamps can change to is breathtaking. I set it on green today, that is the color that will make you happy. Must be because it’s the color of money. Lol. I highly recommend these lamps.

Price: $25 At Amazon

6. Smart LED Light Bulb

Promising review: Was extremely easy to install. I already have the smart life sockets so I did not have to even do anything to connect to my Alexa, but the instruction manual will give you clear steps.
For the bulb all I had to do was manually turn it on and off 3x and then it will start blinking. You confirm in the smart life app that it is blinking and then it will connect to your wifi and that’s it! Once the light bulb was added connected into my app I am able to turn it on/off, change the brightness, change the color, and schedule times. I was also able to control it by telling my Alexa. Great product and easy to use. Price: $22 At Amazon Light all the things Having a smart light bulb opens the door to many different possibilities. Through smart home integrations, light bulbs can be automated to simulate the light of day, or they can activate when motion is sensed. Best of all, smart bulbs are the ultimate in convenience as they allow you to control them without having to get up from the comforts of your couch or bed.
https://medium.com/@tellmegist/6-best-smart-light-bulbs-in-2020-852b116be41c
['James L. Komi']
2020-01-14 08:02:17.306000+00:00
['Tech', 'Internet of Things', 'Bulb', 'Smart Home', 'Technology']
1,764
Every Utopia becomes a Dystopia
A Map of Utopia

I remember Reagan saying to Gorbachev “Tear down this wall.” Sorry, that’s fake news. Or at least white lie news. There’s no way I could have heard a live conference in West Berlin in 1987. It was probably past my bedtime in Delhi. I also have a memory of reading it in some magazine or the other. Perhaps Time. Perhaps Newsweek. Or because it was international news, I might have even read it in an Indian magazine like India Today. Frankly, since the news conference has posthumous fame — after the wall actually fell — there’s a good chance that all my memories are from reading about the event years later. When I say “I remember Reagan saying…,” I mean that the perceived importance of the event combined with my imagination has created a vivid “memory” of an event. Well, most memory is like that. We don’t store the facts as is; instead we compress and transform every event to suit our needs. Selective understanding is crucial to living a sane life today, when we are deluged with information 24/7. So what is a true memory? There’s a famous thought experiment in epistemology called the Gettier paradox. Here’s a version I like: Imagine you’re watching the 1984 Wimbledon finals with McEnroe facing Connors. Unfortunately, the broadcaster has lost contact with his TV van and doesn’t have a live feed anymore. Someone has a clever idea: why not broadcast a recording of the 1982 final instead, which had the same cast? So you’re watching the 1982 final while thinking you’re watching the 1984 final. In this version Connors wins. You go to sleep thinking Connors has won. Let’s say that Connors won the 1984 final (actually, McEnroe won in 1984; for the record, I supported Connors) and when you open the newspaper in the morning, you read the headline “Connors defeats McEnroe again.” Your belief that Connors has won is a true belief despite being arrived at via a flawed route. Something is wrong when you can arrive at true beliefs through mistaken means, isn’t it?
Of course, Gettier’s thought experiment is a contrived situation. How likely is it that exactly the same type of prior event is available as a substitute for an actual one? Tennis match twins might be hard to find, but the use of memories as evidence is all too common — in testimony, in arguments between spouses, in storytelling. When I tell the jury that I saw that man pull the trigger, what if I never saw him shoot the victim? What if I am combining the knowledge that the man is a known hoodlum, the actual experience of shots being fired and reading headlines in the local newspaper? Here’s the question: even if the man was the murderer, is my testimony valid? Further, if much testimony is confabulation, is any testimony valid? Especially in a murder trial where the jury is one color and the defendant another? And the final dystopian possibility — what if our social media feeds are full of posts that prime our memories to be one way rather than another? Can we trust our own minds? I want to explore that internal dystopia in future essays. For example: can technology help us certify memories? what would a process of certification look like? let’s say it takes the form of “bitcoin meets the brain.” Is that a techno-utopia or a techno-dystopia? But we aren’t there yet. I am still a few decades behind that brave new world. But it does seem as if every utopia becomes a dystopia sooner or later. And then replaced by the next utopia. Let’s start with 1945. The second world war had just ended. Tens of millions dead, entire populations genocided, atom bombs burst.

The Soviet Flag over Berlin

Never again, they said. Let’s form the United Nations and give a seat at the table to everyone. Some more prominently than others, i.e., those who were on the winning side of WWII. Decolonization started in earnest; India and Pakistan became independent in 1947, though that utopian moment happened in parallel with its own dystopian partition whose effects we feel to this day.
Anyway, the European powers who brought us two world wars lay defeated; even the victors. In their stead were two confident new powers: the United States and the Soviet Union. Each had its theory of progress, of delivering material prosperity to its citizens and eventually the world. When he said energy would become too cheap to meter, we believed him. Unfortunately, that energy can flow smoothly out of an outlet or burn the sky. Even more so if you have ten thousand of them. That’s what led to:

US and Soviet tanks face off

I can’t believe how close the US and the USSR brought us to the end of times, but we were lucky; the nuclear winter never came despite several close calls. And then Reagan came to Berlin and asked that the wall come down. And it did, a couple of years after he asked! When I first came to the US in the nineties, it was an unrivaled power. For twenty plus years, it ruled the world, the most powerful country that has ever existed. It expanded market capitalism everywhere, most prominently in China but also in India. Globalization as we know it is a product of American power. I owe the writing of this essay in a cafe in Bangalore to the fall of the Berlin wall. Yes Brandenburg Gate, No Foxconn. When 9/11 happened, the headlines across the world were “we are all Americans.” While that headline was meant as a mark of solidarity, it was truer than we think. The world of startups and markets, of Hollywood storytelling. The possibility of progress backed by global networks of influence and immense military power — who doesn’t want that in some form?

Fukuyama’s flawed masterpiece

So much so that it became possible to write a book called “The End of History” which claimed that market-driven liberal democracy is the final solution to the problem of political order.
In this reading, human history is a series of attempts at prosperity that collapse in violence (Rome, Han China, Gupta India) and we continue to look for a solution that combines peace and power in a manner acceptable to most. Fukuyama thought that solution was found in 1989. Let’s call it EOH (End of History) liberalism. That we can all ride into the sunset in our Cadillacs. Who would have thought in 1992 that the most powerful nation in history would elect Trump in 2016, that EOH liberalism would be replaced by ethno-nationalism in every major country in the world? That it would be possible for Vladimir Putin to declare in a recent interview that liberalism has “become obsolete.” Why did that happen? Is there an intrinsic tendency for a utopian bubble to be succeeded by a dystopian abyss? I don’t know if there’s a universal principle of that kind, but I believe it’s important to understand the internal and external contradictions that are bursting the EOH bubble. Of which two are the most important: EOH Liberalism was deployed on networks — of goods and information — and these networks became instruments of concentration and inequality instead of decentralization and democratization that we were promised. Why? EOH Liberalism hastened the exploitation of the nonhuman world that supports all human life and economic activity. If I may say so, it is a UX designed for easy extraction. Could we have predicted the two? Yes, and many did, but they weren’t heard loudly enough. Perhaps because we didn’t want to hear what they were saying or perhaps because they weren’t saying it the right way.
https://medium.com/swlh/every-utopia-becomes-a-dystopia-9f6513c7d493
['Rajesh Kasturirangan']
2019-06-29 16:51:09.296000+00:00
['Philosophy', 'Utopia', 'Politics', 'Technology', 'United States']
1,765
Replacing VBA with Java in Excel
Excel is ubiquitous in nearly every workplace. From top tier investment firms and large scale engineering companies right down to individual sole traders, people get work done using Excel. This article will look at some of the problems and advantages of using Excel, and how embedding Java in Excel can overcome those problems. You don’t have to look far to find criticism of Excel and cases where its mis-use has resulted in heavy losses to companies. Over the last few years there have been many cases where bugs in Excel spreadsheets have been at least partly to blame for embarrassing and costly errors. With such risks, why do companies still use Excel, and what can they do to prevent similar situations? The main reason why Excel is so heavily used is worker productivity. Someone using Excel can do an incredible amount of complex work far more effectively than with any other tool. Software developers often argue that the same can be achieved in their favourite programming language. Technically they may be correct, but that assumes that everyone has the time and appetite for learning to be a developer (which takes most of us many many years). Most companies don’t have the resources to have developers dedicated to each business user, and even if they did then communicating what’s needed and iterating to get the desired outcome would struggle to compete with an individual ‘knocking together’ an Excel spreadsheet.

The Problems with Excel

What is it about Excel that makes it prone to errors? Nothing in itself, but there are various things that developers are used to that reduce the same sorts of risks in non-Excel based solutions. The following is a list of a few weaknesses in how Excel spreadsheets are typically developed.

Too much complexity. A spreadsheet may start off fairly simple with a few cells and formulas. Then bit by bit it grows and grows. Ranges get duplicated to handle more and more cases or multiple sets of data until it’s hard to reason about what’s going on. The task may still be reasonably simple, but because of the duplication required and the fact that each cell can only hold one unit of data, a spreadsheet can sort of explode into something way too complex.

Minimal or no testing. Excel spreadsheets are subject to very little in the way of tests. There are typically no unit tests written for VBA code, and the only functional tests are often by the author of the spreadsheet on a limited set of inputs. When another user has to use the same spreadsheet, there’s a high chance that because they’re not familiar with the nuances of how it works they’ll stumble across some error that was never tested for.

Stale or outdated data. Connecting Excel to external data sources can be tricky. It can connect directly to a database, but what if the data you need is produced by another system, or you don’t have direct database access to the data you need? Data is often copied and pasted from reports from other systems to Excel, often with no way of knowing when the data was last copied or even if it is complete.

Bugs spread out of control. Even if you know there is an error in a spreadsheet, and you’ve figured out how to fix it, how can you find all the instances of other spreadsheets that have copied and pasted the same bit of VBA code, or even just copies of the exact same spreadsheet? The chances are that you can’t. Spreadsheets get copied and emailed around, and there’s no separation between the spreadsheet, data and code.

Version control hell. When you’re working on a large spreadsheet and you get to the point that it’s stable and working, what do you do? Most likely the answer is you save a copy — maybe with the date added to the file name. That’s about as far as version control goes in Excel. What if a change has unintended consequences? How do you know when that change was introduced? It’s almost impossible!

How can we tackle these problems?

These problems all stem from taking something that on the surface is quite simple (a grid of related numbers), and pushing it until it’s extremely hard to reason about its behaviour and correctness.
In essence, Excel is a way of expressing relationships between things. A1 is the sum of B1 and C1, for example. Where it starts to go wrong is when those relationships become more and more complex. If you wanted to compute “A1 is the variance of daily returns of time series X”, what would that look like in Excel? If you are an experienced Excel user you might be imagining a table representing time series X with extra columns for computing the returns and a formula to compute the variance. But what if now we want to compute the returns for another N time series? Copy and paste the formulas for each new time series? This is how errors start to creep in! Much better is to encapsulate the algorithm of computing the variance of the daily returns of a time series into a function. Then we can call that repeatedly for as many time series as we want without risk of one of the intermediate cells getting edited or not copied correctly. Now imagine that instead of a table of data, a time series could be represented by a single cell in Excel. If that could be achieved then we’re back at a simple relationship between two cells — “A1 = daily_returns_variance_of(B1)”. Suddenly our spreadsheet starts to look a lot less complex! We still have the problem that the time series data has to come from somewhere. Rather than copy and paste from another system or database, what if we had a function that loaded a time series from that system or database directly? That way, each time we calculated the spreadsheet we’d know that the data was up to date and complete! To continue the previous example, we might have “B1 = load_time_series(ticker, start_date, end_date)”. We’ll come on to how exactly we can store a whole data set in a single cell later. It often won’t just be the person using Excel who writes the functions they use.
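The encapsulation idea above (wrapping the variance-of-daily-returns calculation in one function rather than a block of helper cells) can be sketched in plain Java. This is an illustrative sketch only: the class and method names are made up, and a real add-in would additionally expose the method to Excel as a worksheet function.

```java
import java.util.Arrays;

// Hypothetical sketch: the whole calculation lives in one method, so it
// can be unit tested and reused for any number of series, which is the
// point of moving logic out of individual cells.
public class TimeSeriesFunctions {

    // Compute daily returns from a series of prices:
    // returns[i] = prices[i + 1] / prices[i] - 1
    public static double[] dailyReturns(double[] prices) {
        double[] returns = new double[prices.length - 1];
        for (int i = 1; i < prices.length; i++) {
            returns[i - 1] = prices[i] / prices[i - 1] - 1.0;
        }
        return returns;
    }

    // Population variance of the daily returns of a price series.
    public static double dailyReturnsVariance(double[] prices) {
        double[] r = dailyReturns(prices);
        double mean = Arrays.stream(r).average().orElse(0.0);
        return Arrays.stream(r)
                     .map(x -> (x - mean) * (x - mean))
                     .average()
                     .orElse(0.0);
    }

    public static void main(String[] args) {
        double[] prices = {100.0, 110.0, 99.0};
        // Returns are +10% and -10%, so the variance is about 0.01.
        System.out.println(dailyReturnsVariance(prices));
    }
}
```

Because the calculation is a single tested method, there are no intermediate cells to be edited or mis-copied when it is applied to another N time series.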
By providing end users with a solid set of Excel functions, technology teams can support the business more effectively than if they just write applications with only shallow Excel integration, like exporting reports.

How does Java help us?

By thinking about our spreadsheet and putting algorithms into functions, and by fetching data directly instead of copying and pasting, we’ve addressed a couple of the big issues with Excel spreadsheets. We haven’t yet touched on how those functions can be written, and the problems around testing, bug fixing and version control. If we were to decide to write all of our functions in VBA (and believe me, many people do!) then we wouldn’t be taking advantage of any of the advances in software development made in the last 20 years! Java has kept apace with modern software development and has a lot to offer over VBA.

Testing. Java has lots of different testing frameworks, all with different strengths and weaknesses. Whichever you choose though, being able to run automated test suites across your code base gives you confidence that it’s doing the right thing. This simply isn’t possible with VBA.

Extensive Library Support. Writing VBA code is often a case of writing quite standard algorithms found online and converting them to VBA. Want to do something trivial like sort an array of data? In Java that’s no problem, but in VBA you will be responsible for making sure your sorting algorithm works, and without any testing. Now imagine writing a complex derivative pricing model!

Keep code outside of Excel. VBA code is usually saved inside the workbook, which is why when sharing workbooks bugs become so hard to track down. If your spreadsheet instead references a compiled Java library (JAR), then that is external to all the spreadsheets that reference it and can be updated easily.

Version Control. Java source code is just text, and so can easily be checked into a version control system. Most Java IDEs have excellent support for this as it is a standard part of modern software development.

Development Environment. The VBA editor (VBE) hasn’t changed in years. It offers little more than a very basic text editor with rudimentary debugging capabilities. Java on the other hand has a range of excellent IDEs to choose from.

But Java isn’t part of Excel!

That’s true, but Excel has a concept of “add-ins” that allow developers to extend Excel’s functionality. One such add-in is Jinx, the Excel Java Add-In. Using Jinx, you can completely do away with VBA and write worksheet functions, macros and menus entirely in Java. Writing a worksheet function in Java is as simple as adding Jinx’s @ExcelFunction annotation to a Java method:

package com.mycompany.xladdin;

import com.exceljava.jinx.ExcelFunction;

/**
 * A simple Excel function.
 */
public class ExcelFunctions {
    /**
     * Multiply two numbers and return the result.
     */
    @ExcelFunction
    public static double multiply(double x, double y) {
        return x * y;
    }
}

You can return all the basic types you’d expect, as well as 1d and 2d arrays. For more complex types, you can write your own type converters, or you can return Java objects directly to Excel as object handles to be passed in to another Java method. Jinx is free to download. See the Jinx User Guide for more information about how you can use Java as a replacement for VBA.

What was that about returning a time series as a single cell?

Jinx functions can return all the standard types you’d expect (ints, doubles, arrays etc.), but they can also return Java objects! When a complex Java object (like a class representing a time series loaded from a database) is returned it will be returned to Excel as an object handle. That object handle can then be passed to other Jinx functions, and the Java method will be passed the original Java object returned from the first function. This is an extremely useful technique for simplifying spreadsheets to keep the complexity of the data involved away from the spreadsheet. When needed, the object can be expanded to an Excel array using a Jinx array function. You can read more about these object handles in the Jinx Object Cache section of the user guide.

Other Languages

These techniques aren’t unique to Java. The same could be said for other JVM languages like Scala or Kotlin. Jinx works with all JVM languages, not just Java. Another popular language for writing Excel add-ins is Python. This can be achieved using the PyXLL add-in for Excel.
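As a rough illustration of the object-handle pattern described above, here is a plain Java sketch. The class and method names are hypothetical; with an add-in like Jinx, the two public methods would additionally carry the @ExcelFunction annotation, so that one returns a handle to a TimeSeries object and the other receives the original object back.

```java
// Hypothetical sketch of the object-handle pattern: Excel would only ever
// see a handle for the TimeSeries object, while the real Java object is
// passed between the two functions behind the scenes.
public class TimeSeriesHandles {

    public static class TimeSeries {
        private final double[] values;

        public TimeSeries(double[] values) {
            this.values = values;
        }

        public double[] getValues() {
            return values;
        }
    }

    // In Excel, something like B1 = load_time_series(...) would show a handle.
    public static TimeSeries loadTimeSeries(double[] values) {
        return new TimeSeries(values);
    }

    // In Excel, A1 = last_value(B1) would receive the original object back.
    public static double lastValue(TimeSeries ts) {
        double[] v = ts.getValues();
        return v[v.length - 1];
    }

    public static void main(String[] args) {
        TimeSeries ts = loadTimeSeries(new double[]{1.0, 2.0, 3.0});
        System.out.println(lastValue(ts)); // prints 3.0
    }
}
```

The spreadsheet only ever deals with a single cell per time series; all the structure of the data stays on the Java side, which is what keeps the workbook simple.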
https://towardsdatascience.com/replacing-vba-with-java-in-excel-e9f5e28d4e5c
['Tony Roberts']
2019-06-13 22:24:31.694000+00:00
['Excel', 'Vba', 'Java', 'Programming', 'Enterprise Technology']
1,766
Remarkable AI Solutions that Help to Achieve Business Goals
AI is not just a techy buzzword anymore. We regularly come across AI/ML technology in both personal and business use applications. Businesses are looking at AI for ways to increase revenue, improve operations, and to enhance productivity. If you are looking for readily available AI solutions that could accelerate your business, then dive straight in.

What is Democratization of AI?

According to a recent Gartner report, 77% of the organizations surveyed planned to either increase or retain their AI investments in spite of the global pandemic. Additionally, the report also talks about the “Democratization of AI”. This implies that businesses want everyone linked to the organization to experience the benefits of AI solutions. This includes customers, business partners, business executives, salespeople, assembly line workers, application developers and IT operations professionals. How exactly can this sort of democratization be achieved when every aspect of the business functions differently? In spite of the differences, there are many common practices across departments which are tedious and still carried out manually. For example, why do we still need someone to jot down meeting notes? While most businesses appreciate the value that AI solutions can provide, they are unaware of the common areas where automation can increase productivity. Fear of the unknown acts as a barrier to get started. Also, acquiring the required expertise or knowledge to build and subsequently work with AI solutions is another challenge. Let us find out the relationship between the different business goals and available AI solutions.

How to transform your business with AI

There are several ways in which AI can be integrated into your day-to-day operations. Let us look at how different AI solutions can be used to achieve specific business goals.

Increase revenue: Your sales revenue largely depends on better customer understanding and improved customer service. AI solutions that analyze data from both new sources like CCTV and beacons, in combination with existing sources like POS, can provide a deeper understanding of customers/users. This can help to influence customer spending habits and increase stickiness. AI solutions that suggest optimal product placements in retail stores or online product recommendations are results of such analysis.

Improve operations: To improve operations, you need to identify redundant or lengthy processes that can be optimized. Again, operational data analyses can be used to identify and gain insights into operational inefficiencies. These can be subsequently addressed using problem-specific AI solutions that eventually result in a reduction in unnecessary operational costs.

Enhance productivity: AI solutions provide intelligent tools that can reduce effort, time, and financial investments by automating manual or expertise-heavy processes. They can be trained to deliver an equal or similar quality of output at a fraction of the initial resource investments.

Let us now look at a few common business scenarios that can be optimized with the use of AI.
AI Applications that address your mundane problems

We will now look in detail at five challenges that are common to many businesses and AI solutions that can be used to address these challenges.

Meeting Notes

The process of taking down meeting notes usually does not cover all spoken content and is prone to human errors. Without recorded texts of the meeting, it is impossible to run a textual search to find or reference discussions that happened in the meeting. The result is lost productivity (in recalling or listening through the recorded audio, if any) and errors (in an incorrect recall or misinterpretation due to imperfect memory). Voice AI solutions like automatic speech recognition from Sentient.io can process such recordings and transcribe them to textual meeting notes with high accuracy. This text may be further processed to add context, before sharing with all participants and stakeholders.

Customer Profiling

In the retail industry, customer profiling helps businesses to understand spending habits of customers and design targeted marketing campaigns aimed at a particular type of customer. Customer profiling is usually unidimensional, from a single data source that belies the multifaceted nature of customer behaviour. Such data may not be enough to provide a good understanding of customers and subsequently result in poorly informed business decisions. Data about customers may be available from other sources which can help to build an accurate picture about the customer. For example, a retail chain may combine CCTV data with POS data to better profile their target customers at different locations. This would help to increase revenue while allowing for better management of inventories. A possible solution could be to count the density of people in specific departments of the store, helping to identify items which were seen, but not purchased by customers.
Podcast Production

The podcast production process requires an effective voice and speaking capabilities, in addition to a lengthy process of metadata creation. You may also want the podcasts to be delivered in specific accents depending on the audience demographics. Cost of production is directly proportional to human effort involved in most cases. However, AI based text to speech services, like from Sentient.io, can help you to not only edit but also create podcasts from scratch. Such services are available as affordable SaaS solutions.

Extraction of Financial Information

Businesses may be required to extract financial information about individuals in situations where background checks are required. Extraction of such information manually is a laborious process because the information available is unstructured and distributed across different types of sources. An individual search solution combines data from different sources and provides it in a readable and manipulatable format thereby addressing the operational inefficiencies of the extraction process.

Video Editing

Creating short-form videos from long-form videos is a laborious and time-consuming process that requires adequate skills and experience. There is a high cost attached if you engage internal or third-party resources to perform this task. An AI based video processing software could provide the solution to this problem by analyzing the video content to identify key scenes in the video based on certain criteria and using these to build a shorter video. This solution would eliminate the requirement for human effort and provide businesses with an affordable SaaS solution. To read more on AI solutions, click here.
https://brandzasia.medium.com/remarkable-ai-solutions-that-help-to-achieve-business-goals-6956567fc94
[]
2020-12-10 06:45:38.004000+00:00
['Machine Learning', 'Artificial Intelligence', 'Business', 'Technology', 'AI']
1,767
Harnessing new technologies for strengthening transport connectivity for sustainable development
Harnessing new technologies for strengthening transport connectivity for sustainable development

By Sandeep Raj Jain, Economic Affairs Officer

Technology is helping the world stay more connected. Photo: Shutterstock

International road transport requires agreements among countries on traffic rights to enable movement of vehicles of one country into another. The signing of a transport agreement that allows movement of foreign vehicles indicates significant steps forward to reduce cross-border and transit transport costs. However, while signing is a first step, real benefits materialize on implementation of the agreement. Implementation is normally fraught with numerous challenges as it involves unlearning the old ways of facilitating and instituting control measures and learning the new ways. Experience in the Asia-Pacific region indicates that some transport agreements signed many years ago have still not been fully implemented. One of the major reasons for less than full implementation of these agreements has been the lack of appropriate operational tools with border agencies to implement the provisions of the agreements confidently.

Technology is helping improve the efficiency of traditional forms of transport. Photo: Shutterstock

Fortunately, new technologies provide the possibility to develop tools that could help border agencies manage increasing traffic while ensuring compliance with regulations. In 2012, ESCAP Member States adopted a Regional Strategic Framework for Facilitation of International Road Transport that identifies six fundamental issues and seven modalities to support international road transport and possible solutions to address them. As a part of the Framework, ESCAP developed Transport Facilitation Models to support countries to implement transport facilitation agreements. One of the models that demonstrates use of new technologies in transport and transit facilitation is the Secure Cross-Border Transport Model.
The model provides a conceptualization of an intelligent transport system involving the design of a cross-border vehicle and cargo monitoring system using new technologies, including ICT, satellite positioning and electronic seals. Ever since ESCAP developed the model, many countries have expressed interest in its implementation. The Bhutan, Bangladesh, India and Nepal Motor Vehicle Agreement that was signed in June 2015 contains provision for the electronic tracking of vehicles. Pending enactment of the above agreement, ESCAP, jointly with the Asian Development Bank, has been providing technical assistance for pilot implementation of the Model on selected corridors in the countries and established techno-economic feasibility of such technologies for cross-border and transit transport facilitation. Apart from South Asia, many countries in Southeast Asia have also evinced interest in applying the model along the specified corridors. ESCAP will soon undertake trial runs along the transport corridor among these countries.

Secure Cross Border Transport Model

In addition, an Intergovernmental Agreement on International Road Transport among China, Mongolia and Russian Federation was signed in December 2016. To implement the agreement, Mongolia has sought technical assistance from ESCAP for a pilot application of a secure cross-border transport model on the transit traffic passing through Mongolia. Electronic tracking technologies have been emerging rapidly on the back of developments in ICT. Used effectively, they have potential to reduce transit costs by obviating the need for guarantees due to reduced risk perception by customs, as these technologies make real time enforcement of violations by transiting vehicles possible. The availability of diverse solutions for the tracking of vehicles, however, could complicate cross-border electronic tracking due to lack of interoperability among the software/electronic seals.
Therefore, it might be appropriate to define and explicate minimum requirements for the key components of cross-border electronic tracking systems to ensure maximum facilitation.

Transportation is a crucial aspect of sustainable development. Photo: Shutterstock

ESCAP has developed guidelines on an automated customs transit system that combines electronic tracking of vehicles with customs transit procedures. Based on the exchange of electronic messages among stakeholders, the system is fully electronic, and its implementation could reduce transit transport costs, particularly for landlocked developing countries, in line with the Vienna Programme of Action. Under the 2030 Agenda for Sustainable Development, international road transport, in particular, is being scrutinized closely due to renewed calls for shifting to more sustainable modes of transport, such as railways. However, given its comparative advantages, road transport currently plays and will continue to play a vital role in overall transportation systems. Moving forward, there is an urgent need to make international road transport more efficient by further reducing non-physical barriers and integrating it with other transport modes to provide sustainable transport solutions. Innovative technologies could pave the way in this direction.
https://medium.com/@UNESCAP/harnessing-new-technologies-for-strengthening-transport-connectivity-for-sustainable-development-c66be3351b51
['United Nations Escap']
2019-07-16 04:32:20.445000+00:00
['Trade', 'Development', 'Asia Pacific', 'Technology', 'Transportation']
1,768
Node.js Tips — Environment Variables, Downloads, DNS, and More
Photo by Nicolas Tissot on Unsplash

Like any kind of app, Node apps present difficult issues to solve. In this article, we'll look at some solutions to common problems when writing Node apps.

Node.js Require All Files in a Folder

We can export all modules in a folder by creating a module that exports all the modules. Then we can import them all at once. For instance, we can write:

index.js:

exports.foo = require("./routes/foo.js");
exports.bar = require("./routes/bar.js");

app.js:

const routes = require("./routes");

We can also write:

const fs = require("fs");
const normalizedPath = require("path").join(__dirname, "routes");

fs.readdirSync(normalizedPath).forEach((file) => {
  require(`./routes/${file}`);
});

Stringify an Error

We can log an error by looping through the properties of the error object. For instance, we can write:

const error = new Error('error');
const propertyNames = Object.getOwnPropertyNames(error);

for (const prop of propertyNames) {
  const descriptor = Object.getOwnPropertyDescriptor(error, prop);
  console.log(prop, descriptor);
}

We use the Object.getOwnPropertyNames method to get the properties of the error object. Then we get the descriptor of each property.

Difference Between "process.stdout.write" and "console.log" in Node.js

console.log calls process.stdout.write with formatted output. A simplified version of console.log is just:

console.log = function (d) {
  process.stdout.write(d + '\n');
};

They are pretty much equivalent except for the extra line break that console.log appends.

Set NODE_ENV to production/development in macOS or Windows

We can use the export command to set environment variables in macOS. For instance, we can set NODE_ENV by running:

export NODE_ENV=production

In Windows, we can use the SET command:

SET NODE_ENV=production

We can also set process.env.NODE_ENV directly:

process.env.NODE_ENV = 'production';

We can set the environment when we run the file by running:

NODE_ENV=production node app.js

It also works in the package.json scripts:

{
  ...
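The property-walk shown in the "Stringify an Error" tip can be extended into a small serialization helper. This is a minimal sketch; the helper name errorToJSON is ours, not part of any library:

```javascript
// JSON.stringify(new Error('boom')) yields '{}' because an Error's own
// properties (message, stack) are non-enumerable. Copying them onto a
// plain object first makes them serializable.
function errorToJSON(err) {
  const plain = {};
  for (const prop of Object.getOwnPropertyNames(err)) {
    plain[prop] = err[prop];
  }
  return JSON.stringify(plain);
}

const serialized = errorToJSON(new Error('boom'));
const parsed = JSON.parse(serialized);
console.log(parsed.message); // 'boom'
```

The round trip through JSON.parse confirms that both message and stack survive serialization, which plain JSON.stringify on the error itself would silently drop.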
  "scripts": {
    "start": "NODE_ENV=production node ./app"
  }
  ...
}

Using require in Express Templates

require comes from CommonJS. It's not natively part of browser-side JavaScript. To include JavaScript files in the browser, we can use the script tag. Browserify, Webpack, and Rollup can bundle CommonJS or async modules into build artifacts usable by the browser. We can also use ES6 modules natively in browsers with the type attribute set to module. For instance, we can write:

script.js:

export const hello = () => {
  return "Hello World";
};

We can then include script.js by writing:

<script type="module" src="script.js"></script>

Measure the Execution Time of JavaScript Code with Callbacks

We can use the console.time and console.timeEnd methods to measure the time a script takes. For instance, if we have:

for (let i = 1; i < 10; i++) {
  const user = { id: i, name: "name" };
  db.users.save(user, (err, saved) => {
    if (err || !saved) {
      console.log("error");
    } else {
      console.log("saved");
    }
  });
}

We can call the methods by writing:

console.time("save");
for (let i = 1; i < 10; i++) {
  const user = { id: i, name: "name" };
  db.users.save(user, (err, saved) => {
    if (err || !saved) {
      console.log("error");
    } else {
      console.log("saved");
    }
    console.timeEnd("save");
  });
}

We have to pass the same label string to time and timeEnd.

Download a File from a Node.js Server Using Express

We can download a file with the res.download method in an Express route. For instance, we can write:

const path = require('path');

app.get('/download', (req, res) => {
  const file = path.join(__dirname, 'folder', 'img.jpg');
  res.download(file);
});

We just build the path and pass it to res.download.

URL Encode a String in Node.js

We can URI encode a string in Node.js with the querystring module. For instance, we can write:

const querystring = require("querystring");
const result = querystring.stringify({ text: "hello world" });

Then result is 'text=hello%20world'.
Get Local IP Address in Node.js

We can get the local IP address in a Node app with the dns module. For instance, we can write:

const dns = require('dns');
const os = require('os');

dns.lookup(os.hostname(), (err, addr, fam) => {
  console.log(addr);
});

We call lookup with the hostname to get the IP address from the hostname. Then addr holds the IP address.

Conclusion

We can get the IP address with the dns module. console has methods that let us measure the timing of code. We can require files dynamically, and environment variables can be set dynamically.
https://medium.com/javascript-in-plain-english/node-js-tips-environment-variables-downloads-dns-and-more-87412946cf31
['John Au-Yeung']
2020-06-25 16:24:16.398000+00:00
['JavaScript', 'Web Development', 'Software Development', 'Technology', 'Programming']
1,769
Good News or Better News: Bitcoin is Making a Move
Suddenly, Bitcoin is up to almost $7,800, and now everyone wants to get back on the train, hoping it will continue to chug up that mountain and return cryptocurrency to some of the former glory that faded away soon after the first of the year. There has been speculation (depending on who you're listening to) for weeks that cryptocurrency would spike again soon, with some saying that Bitcoin would hit crazy high numbers by the end of the year, but the waters have been relatively calm and depressing until this bit of positive activity. So, what made the market jump, and is this the beginning of the price hikes some have predicted? The overall market cap hit $295 billion on July 18th and has stayed in this trading range over the last few days. Since Bitcoin is the largest cryptocurrency by market cap, when it rises the other altcoins follow. This was a fairly significant event, and some might wonder what changed to cause this price spike. According to Forbes, the new CEO of Goldman Sachs is friendly toward cryptocurrency and will be in full control of their new trading desk by October. Speculators are betting that this will help shift more institutional money into the space. On top of that, BlackRock might have played a big part, and its $6.3 trillion fund is interested in starting a cryptocurrency "task force." Combine this with the positive news from the SEC saying that interest in a Bitcoin ETF is higher now than it was in January, and hope is high that the SEC will approve one soon. This is exciting because it will expose players from the traditional markets to cryptocurrency, and time will tell whether that will bring about the positive results everyone is hoping for. But does this explain the sudden jump in price? Considering that this recent increase happened over a short period, one has to think there was more to it than these positive speculations alone driving the price up so high.
Some feel that this hike is connected to the liquidation of "shorts." Shorts are positions where traders bet the price of cryptocurrency will go down. These types of players are betting against the crypto market and, in most cases, leveraging money, as in borrowing funds to make their bet. When the market goes up, this forces traders to buy back the short position at current market value, which in turn drives the price even higher. This would explain the recent vertical line on the charts. But will this trend continue upward? One might speculate, should this explanation be the true cause of the radical spike, that the price increase doesn't have what it takes to be sustained. That, however, is for the investor to decide. One thing is for sure: if you had a substantial investment in Bitcoin on the 16th, you were certainly a happy camper by the 18th. Bitcoin's gains are always made on just a few days out of the year, so it's important to have a position when that spike happens. — Not all blockchains are going to be profitable; find out which cryptocurrency to invest in that's setting up to change the future.
https://medium.com/datadriveninvestor/good-news-or-better-news-bitcoin-is-making-a-move-f33e833227f1
['Chris Douthit']
2018-07-23 23:29:49.826000+00:00
['Blockchain', 'Finance', 'Technology', 'Bitcoin', 'Investing']
1,770
What makes Python the “Best of the Best?”
Python is a popular object-oriented, interpreted, high-level programming language with dynamic semantics. Its high-level built-in data structures, combined with dynamic typing and dynamic binding, make it desirable for rapid application development, as well as for use as a scripting or glue language to connect existing components together. The cost of program maintenance is reduced because Python's easy, simple-to-learn syntax emphasizes readability. Python is a popular and much-loved programming language globally because of the increased productivity it provides. A quick look at Google Trends shows Python's popularity all over the world. In this blog, here are some of the most noteworthy Python features that make it the best tool for professionals of all skill levels:

1. Open-source language

Python is open source, so you don't need to pay any charges to install and use it. That means the source code of Python is freely available to the public. You can easily download it from Python's official website. Python supports the Free/Libre and Open Source Software (FLOSS) model, so you can distribute and change it. This allows the Python community to improve its features continuously.

2. High-level language

You do not need to perform memory management or remember the system architecture, since Python is a high-level language. This feature contributes to Python's user-friendliness.

3. Python = Simplicity

Being quick to learn is not enough for any programming language, right? Python is not only easy to learn but also easy to use and implement, and that is the point about Python that makes developers fall in love with it. With syntax similar to English, you can master the nitty-gritty of Python coding in a few days. Python is dynamically typed and makes indentation mandatory, further enhancing its readability.

4.
Object-oriented and functional

Python supports both object-oriented and functional features. Unlike Java, Python also supports multiple inheritance. An object-oriented programming language can model real-world data, while a functional language focuses on functions (code that can be reused).

5. It's interpreted

Unlike compiled languages such as Java and C++, where you need to compile the code and then run it, Python executes the source code line by line instead of all at once. This makes it easier to debug Python code, because you can do it while writing the code.

6. Vast collection of libraries

Python comes with a vast collection of libraries that you download automatically along with Python itself. You don't need to write individual code for every single thing, because these libraries are built in. Python has packages and libraries for threading, web browsers, regular expressions, unit testing, email, CGI, databases, image manipulation, documentation generation, and many more.

7. Portable

Python is highly portable, meaning a Python program written for a Linux machine can also run on a Windows machine or on iOS, and vice versa; you don't need to make any alterations to the code. Python eliminates the need to write different code for different machines (just make sure there's no system-dependent feature in your Python code).

8. Extensible and embeddable

Python is an extensible language: it allows you to write specific parts of your Python code in other programming languages, such as C++. Similarly, you can embed your Python code in the source code of other languages. This allows you to integrate Python's scripting functionality into code written in another language.

Conclusion

After reading about what makes Python the best of the best, it is safe to conclude that Python is a highly capable programming language for handling any development requirement.
In the fields of Data Science and, specifically, Machine Learning, businesses increasingly hire Python developers from Python development companies to build the Python applications that have gained newfound traction in the last few years.
https://medium.com/devtechtoday/what-makes-python-the-best-of-the-best-55e285adea37
['Binal Prajapati']
2020-09-29 07:52:13.195000+00:00
['Python', 'Technology', 'Development', 'Developer', 'Python Programming']
1,771
The Librarians
The champagne bottle popped. A universal wave of semi-flinch filled the room’s bodies in a wave of excitement despite the expectation the cork would pop. We watched from the side of the crowd. Another year. Another drama. Another opportunity. As they downed their glasses and replenished them, they made sure to snap shots. Just as everyone else, they lost sight of their originality and ability to be present. It was as any other photo of any other person in that same time. Maybe others were in more interesting locations, but they were still dissociated from those who surrounded them, disconnected from a true celebration. We somehow managed attendance to a more opulent party that year. Friends of friends were helpful in that way, during the covert hypnotism and before the onset of sudden decay. We looked at each other with an internal wink and instead rolled our eyes then quickly, instinctively took what we could. Too sloshed and into their phones, nobody noticed we took a case of champagne with the meat and cheese platters. Out the front door, we went not bothering to cloak our theft in our purses or coats. That’s what we did. We scavenged when we could, sometimes out of necessity but mostly because we could. My bucket Guatemalan purse was perpetually in tow, despite my fashion sense, for it was useful in carrying forties and books. Four of either could easily fit or a six-pack clinking with each footstep. Instead of a band of thieves jingling with treasure, we were simply a clan of deviants clanking their way out of parties. All that I see, I steal. Anything consumable we took. Check the cupboards. Check the medicine cabinet. If we could get into a bedroom, what’s in the top drawer? My most cherished prize was discovering a canister of nitrous oxide in a closet. We rolled that big score out the back door. We each took a hit and then sold it as full at the next party, making sure we didn’t know anybody there well. They couldn’t track us down if they wanted. 
We had watches and each other. If we got separated, we’d meet at the library. We left our phones behind years ago in another act of defiance. We relied on our innate connection to navigate the world with each other. The library was our source if we absolutely had to go online. Yet, most of what we were looking for could be found in the books. More could be found in the books. With time, people in the library increased to get a wifi signal for superficial reasons rather than acquiring knowledge. We knew we were slightly silly in our strange rebellion. Yet, we had all we needed; it didn’t matter. Coincidentally, that immature refusal insurmountably saved us from the decline. It didn’t mean we were free from the attack, but we were immune from being it. Later, as we huddled in the library, by candlelight we read the Reader’s Digest Collection of the Supernatural. There were cases of zombies in Louisiana when plantation farmers would kidnap people by drugging them into zombies, making them mindless slaves as their families thought they were dead. Those same books which filled the building were our outlet for maintaining sanity or destroying it as well as building our survival. How to stave off infection. How to butcher a deer translated into how to butcher a cat. What is the meaning of life? What is starvation? Since the library moved everything into a digital Dewey Decimal system, it was difficult to find all the zombie fiction works. We started with the comic books and moved on to the horror authors we knew. Three of us scoured the fiction looking for tricks, maintaining strength. The other two were tasked with the easily found computer science books. It was a new year. There was little thought nor feeling but glee as we packed our new haul into the car that night. Despite the robust amount of books the library had, it was low on public funding and thus lacked sufficient security. We discovered how to break in a couple of years before that night. 
Deviant geeks that we were, it seemed appropriate we pop bottles by the Millers and Oates…and then return on January 2nd to see the staff baffled by the champagne stains and hors d'oeuvre crumbs. Yet, there was no 2nd to come in the standard sense, the digital sense confined by humankind. Reminiscent of the plantation farmers zombifying by drugging others into servitude, there was the drugging. Yet, it took place just short of mass extinction. Also unlike the Deep South, these zombies were meant for destruction rather than fieldwork. Who these new farmers were was information too distant for all those books to tell us. Yet, we speculated after we digested the 000's, 600's, and 900's as well as the most recent magazines and newspapers. We found dark humor in most of the fiction works describing zombies' consumption habits as either solely brains or whole bodies. Those speculating novelists' minds adhered to an outline passed through the modern age. They were wrong. Real zombies ate your heart out. That's all they wanted, the seat of your soul. For what is life without the feeling that makes your brain form memories as good or bad? It wasn't that zombies ate brains because they were mindless. It was that they ate hearts because they were soulless. The poetry section helped me see this lyrical juxtaposition as I sought comfort within its pages. For a week, if there was a thing called a week anymore, I went into a deep bout trying to find meaning in our descent within the spirituality section. I paced around spouting destruction equals creation as my friends wearily overlooked my insanity. Would I take one for the team if I couldn't live in this madness anymore? Would they eat me? Not my heart, please. I'll tell them to not eat my heart. How could my friends eat me? I'd eat them if I had to. Or not. It causes neurological disease. Those fuckers can't have my heart.
After I said I was going to burn myself alive like a monk as a fuck you to whatever assholes who started it, my friends banned me from religion, philosophy and most of the self-help. I was allotted one hour every other day listening to Dennis read from a curated pile of thoughtfully selected books. We started to make notes until we realized it was winter. We didn’t want to burn the books. It was agreed the first to go would be the romance novels, the children’s books and then all of the fiction, if it came to that. It did, all the way into the beginning of poetry. We spent the end of the season in the basement reading Dickinson and Collins out loud, tearing out the pages and watching them burn as an offering for our safety. We sacrificed the wonders of the mind and beauty of its art for the spark from which it originates, life. Poetry in motion as its words flickered above our heads, sustaining us for another night. One of the basement windows remained cracked on and off for ventilation. Zombies only knew if you were there if they saw you. Deaf, dumb but not blind. As we took turns opening and closing it, we all noticed the smell of other fires but did not dare to find the source. We wondered why no other human life attempted entry into the building. Surely, somebody out there needed knowledge. Perhaps they had the same fear we had, smelling the embers of our literary bonfire. For what had all the zombie films and novels taught us just like Mary Shelley did? Living people are the biggest monsters. They are the true threat in a zombie apocalypse. We all read Frankenstein to remind ourselves of the monstrosity of mankind, lest we forget. It took each of us less than two days to read it. Then it burnt. The history books further cemented that fact which encompasses many works of art: man as a predator, man as the destructor. I begged to not burn each of my favorite books as they came in sequence. Or at least let me read it one last time before I died. 
Some I was permitted until I became too obsessive and frequent with my requests. You can’t burn Robbins because he’s too funny. You can’t burn Oates; she’s too prolific. Dickens is too classic. We didn’t sleep much out of terror but also from the coffee beans we roasted from the library’s cheap cafe. Once that supply ended, our fervent digestion of literature paced itself to a still high volume but less rapid approach. Amy, who always enjoyed math, occupied herself with alternating between functional knowledge for our situation to mathematics works. She calculated problems in the margins to remain calm. The others had their own devotions to certain pieces. We all had assignments dictated to each other for survival while other times we spent reading what we liked, what made us human. My friends were lucky their inclinations gravitated towards non-fiction whereas I saw what I loved burn. The doors were barricaded with the tables. The blinds were all pulled. Zombies were lazy. If they can’t see you, you’re not there. Out of sight, out of whatever minds they had, perhaps that of a mosquito. There wasn’t discovery for a zombie, just happenstance. We sometimes peeked through the windows and saw them standing there, dead eyes fixated on the stars. I found it odd for the undead to gaze into the heavens and wondered if they were crying up to their souls. They traded in their phone chargers for the cosmic space powering what brutal battery they had left. They were already zombies before someone made them one. Jordan sobbed in horror one day as he saw his little sister in that inactive zombie pose and damned her Snapchat addiction. In outrage, he smashed most of the computers into remnants of plastic and wire disarrayed about the floor. Physically, they could not be killed. We first encountered them when we attempted leaving the library. 
In tipsy merriment, we thought he was a drunken pervert when he lunged at my chest, but something more sinister became evident as he cannibalistically tore at my breast. Jordan tried to knock him out, but that whole folktale of zombies not becoming winded was true: strong, soulless, with an appetite never satiated. The others started to beat on him until he was on the ground with his face and brains splattered on the pavement. I didn't think we had the savagery to curb someone, yet we quickly learned what adrenaline and instinct could do. We simultaneously panicked that we had killed a person until we noticed his feet and arms were still moving. Looking at each other in disbelief and then down the street to another with the same dead-set eyes on our chests, we retreated back into the library and locked the door. Soon we heard screams from outside. Peeking out, we saw a waif sorority girl type pull the heart out of a local homeless man's chest and feverishly clench her jaws around the organ like a dog shaking its prey dead. Over the next few days, we heard similar screams and witnessed the same killing. Eventually, we stopped wondering, for we knew. The screams became less frequent to the point that we began to look again at the horror because it had become rare. We grew up on a fictional basis that zombies created more zombies. Not true; they kill you for your soul. They weren't going to bite you and your doom was set. They could do damage, mostly to your breast bone. They would grab you, maybe dismember you if you struggled incorrectly, but they went for the heart. Fixated on the stars and craving a human heart, that was their unlife. I found it comforting to know I would never be a zombie. There would be no begging my friends to kill me if I got bitten. It was dead, and I would die someday anyway. Amy's scientific mind wanted to catch one and study it. How are you going to study it? With office supplies and cafe equipment? She refrained from the endeavor.
We concluded it was the phones by elimination. As we witnessed former teachers and acquaintances ravage one homeless or elderly person after another, it became clear. What happened to the babies and toddlers? What about the kids? I contemplated the terror, yet we didn't have children, just each other. I thought I heard a child's faint scream one night and tightly held my hands over my ears. We discussed looking for children, yet our own shortcomings and lack of bravery prevented us from being heroes. When I still believed in God, I prayed someone out there was doing what I couldn't do. I took the rage from that constraint out on one of the computers Jordan hadn't destroyed. A remarkable snowstorm hit midwinter. We saw this as an opportunity to go outside with better coverage against the zombies. Jordan and Dennis dressed in layers from the forgotten scarves and mittens in the Lost and Found Box. They weren't going to venture far. The apartment building next door used to house a few families. We were familiar with those tenants and saw most of them out in the street, undead. We knew it was unlikely any of them were in the building, except what was left of the children. We had already covered World War 2 by that time. We knew how essential the death of future generations was. Yet, these remote enemies were cowards by using the parents to terminate their own offspring. They didn't toss the babies on the bayonet. They fed them to their wolves without witnessing the terror. Off Jordan and Dennis went hidden behind the snow peaks. Although they both returned with bags full of supplies over a few days, Jordan never came back the same. We didn't ask what he saw, yet my mind filled with images of bloody cribs and other scenes of hell. We each went mad in our own ways. Our insanity waxed and waned. When one of us recovered, another filled the psychotic void in constant rotation. Whose turn will it be today? We individually possessed our own mental disintegration niches.
Mine was grief over the destruction of the art encapsulated in books, the words which made us human. Jordan's was the hatred for technology while Amy's was a wonder of it. Dennis occasionally became silent and spent hours away from us ripping through the storage closets and weeping in the mess. Joanne couldn't bear the silence. One day she sat looking at the CD collection. She read over the booklets then stacked them in a pile for us to burn. "I miss music," she looked up and cried to me as I walked over to collect the paper. She used to sing and dance when she was feeling jovial, which was often. Breaking out in song and dance was her way of entertaining herself and others. Later in the day, Dennis surprised us with tambourines, cowbells, and triangles from the children's section closet. We gathered around the fire that night not knowing what song to sing. We eased into our makeshift group music therapy with "This Little Light of Mine." Our set list varied as it turned into irony. Joanne began to sing "Don't Fear the Reaper" then wanted some pop. She sang Lady Gaga's "Monster" by herself. He ate my heart. We laughed at our demented humor. We finished off with "8 Days a Week" on repeat. The Beatles song used to be one of my favorite cheery, light love songs in my prior life. Love. It capsized from the models we once knew. It was what kept us together but also what we lost as our respective ideals and dreams of it diminished. Before the zombies, Joanne and Dennis were hooking up without telling any of us. Yet, we all sensed it. They never spoke a word of it. As we parted ways every night, those two went together. As groups of young adults typically do, we had our caveats of more complicated, almost incestuous relationships, considering how much of a family we were. I had hooked up with Jordan on a drunken night. Afterward, we agreed to it being a mistake and never spoke of it again. Dennis and Joanne were more serious despite the secrecy.
They slept clutched together in the library’s basement as not one of us remarked on their closeness. “O load of stress and bother, / Lie on the shells of our backs in a great heap: / It will but press us closer, one to the other. / We are asleep.” Let them have it. As for me, after each night of mayhem, I had my own habit concerning love in my prior life. Kyle was home most of the time. Of course, he was. He was in bed sleeping or in pretend slumber, waiting on me. I’d tip tap on his bedroom window. Promptly, he signaled for me to go to the door. Rarely, I slept there. I’d eventually leave our strewn sheets after he fell asleep and embark on my walk to my own bed into the quiet street hours slightly before dawn. And miles to go before I sleep. Shortly before Christmas, I had coffee with a friend who was not part of my regular unit. She began the conversation like many initiated in that age. Those three words which prompted discussions concerning gossip and drama, “I saw on Facebook…” I saw on Facebook so and so had a baby. I saw on Facebook so and so broke up. I saw on Facebook so and so hates so and so. I saw on Facebook ten photos of the same thing on five different profiles. “I saw on Facebook that Kyle is ‘in a relationship.’ With who? He’s such a player. It’s shocking.” I didn’t know with who either. I discovered who that girl was that night as we had a post-coital spliff. “I heard you’re seeing someone,” I coughed out after an intense hit. “Yea, you.” “You know I’m not into the whole social media thing, but somebody told me you are ‘in a relationship’ on Facebook. What does that mean?” At this, he got out of bed in a rush of confusion and anger. “I’m in a relationship with you.” “No, you’re not.” What followed was a one-sided discussion of how could I not realize what I meant to him and my lack of realizing it because I was “immature.” “You and your friends are something else thinking you are revolting with no phones. I never know where you are. 
I can’t reach you. You guys just go about like you’re stuck in the ‘90s,” he rambled on, “I can get you a cell if you want, if you don’t have the money.” I had the money. He didn’t understand me or my friends. I felt free, unchained from societal standards and boyfriends. I knew I was too wild as much as I knew I’d make a terrible girlfriend for that same reason. I thought I was considerate sparing guys the perplexity that was me by exempting myself from serious relationships. Yet, I had turned into a player in my own right with that logic. That was the last time I saw him. His relationship status probably just as quickly went blank. The guy who broke many hearts got his own broken by me before he started to eat them. Maybe I deserve this for breaking his heart. Those romantic squabbles were something from another time as I reflected on my life before the death of society. How tragic we refused society and got what we wanted in such a terrible way. Those tiny yet loud misunderstandings which annoyed and confused me turned into humanistic interactions I craved and missed. I’d take breaking your heart or you breaking mine over this any day, Kyle. By the time spring came, we went through the rations we made out of the cafe and the sparse supplies in the lunchroom. Jordan discovered a nest of rats in a storage room. We ate them one by one. They were the occasional treat. The guys brought back seeds during their winter exploration and carefully went outside to gather dirt once the ground became unfrozen. We closed off the room with the most sunlight from the rest of the library and planted the seeds in makeshift pots of children’s toy bins. We watered them and eventually gathered the produce by night, making sure to move slowly so as not to catch the sight of whatever lurked outside. We also fortified any entrances since our food was in plain view to anybody who walked past. Every morning I woke expecting the windows smashed in our greenhouse.
Yet, perhaps, there really weren’t that many people left. If there ever could be a happy time in this age, it was summer. We had food, and we didn’t need to burn the books unless we came across the occasional meat. The only gripe we had was the few intense heat waves. We simply went into the basement when it became too hot to bear. The guys became more confident with scavenging around the block. Sometimes they returned more startled than usual, yet they always arrived with supplies. Jordan surprised us during this time with marijuana he saved from our previous lives. When we initially ransacked the library we found some strong pain killers in desk drawers and a couple wine bottles. We went through them in the first week. Soon, the rush of getting high and drunk became a distant memory as we reluctantly sobered. Gleefully, we took hits from Jordan’s bowl and felt the high melt over us. I got the idea to act out a play. The rest were feeling as playful. I chose A Streetcar Named Desire from the book stacks. We read the lines with our best acting efforts and finished the night high, sweaty and maniacally screaming “Stella!” at each other. Our primal, tortured collective scream reverberated against the walls of our domain of tattered literature and knowledge. It was a still morning filled with smiles of what inventive comfort we attained by late summer. Joanne had a tendency to partially sunbathe through a sliver of sunlight illuminating a corner in the former children’s picture books wing. From downstairs I heard her jubilantly shout, “Chicken!” This alerted the rest of us to her excitement yet did not alarm us as it should until we saw a flash of her body dashing out the door. Dennis and Jordan followed immediately knowing the danger of her going out alone. I thought it was a silly thing among many silly tendencies Joanne had. Why did the chicken cross the road? To be ignored by zombies and not be eaten by Joanne. 
My comical outlook on the fiasco altered to bereavement when the guys returned with only the chicken. With tears in their eyes, they flung the squawking thing loose in the library and paced. Dennis punched the wall and retreated elsewhere as Amy and I followed Jordan with our eyes pleading for an answer we already knew. A still morning was suddenly disturbed and permanently altered by a wail of despair. It somehow resolved into a quiet deeper than before. The hit of sudden death ricocheted off the walls and then deafened us into a newer, deeper state of desperate solitude. There would be no more music. Our singer was gone. The only sound was the chaotic chicken, which I wanted to drop kick into the window pane and see its guts smashed for its meager promise of sustenance to Joanne. I replaced this anguish with Joanne’s ill-received intention. She wanted chicken. Maybe she wanted eggs. The hysterics I felt inside matched the behavior of the animal, although I had learned by then that showing my pain and grief did not change anything. I could scream. I could trash something, anything. But it would not make the coffin that was the streets fade, nor would it produce anything tenable for survival. Life had become how to overcome one nuisance or heartbreak after the other, finding rugged joy in being alive efficiently with my friends. One of them may be gone, yet we still had to eat. Time, even in the cruel degradation of it we now experienced, continued to lapse. We still had to survive. Capturing the chicken was not a difficult task. I chased it into the bathroom and shut the door. I knew I had to wait for the others to overcome the shock before we made a decision. Not one word was spoken or read for the rest of that day. That day paused and its loss echoed into the following months. Joanne’s chicken endeavor proved fruitless and punctured our weakened spirits more. The hen was egg bound. It produced one egg.
By the time we realized it had an ailment and found the correct book to address the issue, it died. It was a mere hour between knowing it was sick and finding the solution, but it died somewhere in the interim. Fall quickly came. Our grief made us lazy and indifferent in maintaining the routine we formerly had. We might have been able to stock up on more food and supplies. Much like an animal knowing it was going to die and refraining from eating, we barely tried to gather food, much less ration what we had. We didn’t eat. The zombies weren’t eating. Our dead world was a stagnant, nutrition-less existence. The pangs of hunger were the last reminders we were alive. The books surrounding us had more life in them. All the dreams discovered and adventures recorded within them smote us into forgetting them. We would never get to write our own stories, live our own poetry because of the vicious, imposed catastrophe created by some other author. Days without food lapsed into what seemed like an eternity. I took what energy I had to practice yin yoga. I learned it in the summer when I was in a better state. It provided the movement I had lost from being confined in the library. I spent most of my short adult life on foot, busy with destinations. The yoga was the only makeshift self-care I could summon that didn’t make me crazed. It was not without emotion when I stretched and breathed. Towards the end, I curled into an earnest child’s pose in complete surrender, prayer to a higher power, and cried into my third eye until it was clear the future was blind. Delirium was slow. Each sunrise came to burden us in another day of burnt pages and words that formerly filled our minds but not our bodies. We became apathetic to all the work and art of men. All those crafts created from the same organ from which the destruction was sourced made us bitter to how terrible the mind could be. We were imprisoned by and with the duplicitous power of the brain.
I began to wonder what was worse after we read about starvation. What was worse? My heart destroying itself? Or the undead sinking their ravenous teeth into it? Maybe it’s better to be the consumer rather than the food… Briefly, after high school, I worked in a strip club. I mostly thought it was comical as my friends worried for my well-being. I came out to the song “Living Dead Girl.” Years later, locked in the library, I reflected on my ill-conceived stripper days and wondered if the song was a self-prophesying clue into what the future held. Was I truly living as I watched the outside world’s demise and all the words I cherished given to flame and smoke just to keep a dead girl alive for another day of tragedy? We were living but dead in all the ways we thought made us human, walking corpses without the innate violence of those beings who waited outside the doors. Perhaps it was a collective hallucination. Dennis paced as he manically recited scripture. A glow we hadn’t seen since the old life slightly illuminated past the bookcases to the table we habitually used since we were kids. Huddled with our swollen bellies and worn out minds, we froze in disbelief at that unique glow we had not forgotten but disregarded in our previous lives. Slowly, we rose up in wonder and dread. Among the trashed computer aisles remained one laptop we never trashed. It beamed like a welcoming hello, that mysterious light most men knew less about than zombies. And it flickered goodbye, a farewell for all of us. Control. Alt. Delete.
https://medium.com/champagne-and-zombies/the-librarians-6f9668997ecc
['Robyn G.']
2019-04-07 23:03:56.725000+00:00
['Short Story', 'Technology', 'Fiction', 'Books', 'Zombies']
1,772
What Is Chi-Square Test & How Does It Work?
As a data science engineer, it’s imperative that the sample data set you pick from the data is reliable, clean, and well tested for its usability in machine learning model building. So how do you do that? Well, we have multiple statistical techniques, such as descriptive statistics, with which we measure the data’s central value and how it is spread around the mean/median. Is it normally distributed, or is there a skew in the data spread? Please refer to my previous article on the same topic for more clarity. The first thing we do is visualize the data using various data visualization techniques, to make some early sense of any data skewness or discrepancies and to identify any kind of relationship between the data set variables. Data has so much to say, and we data engineers give it a voice to express and describe itself using descriptive statistical techniques. But to make any prediction, or to infer something beyond the given data and find any hidden probability, we rely on inferential statistical techniques. Inferential statistics are concerned with generalizing from relations found in the sample to relations in the population. Inferential statistics help us decide, for example, whether the differences between groups that we see in our data are strong enough to support our hypothesis that group differences exist in general, in the entire population. Today we will cover one of these inferential statistical mechanisms to understand the concept of hypothesis testing, using the popular Chi-Square test. What is the Chi-Square Test? Remember that it is an inferential statistical test that works on categorical data. The Chi-Squared test is a statistical hypothesis test that assumes (as the null hypothesis) that the observed frequencies for a categorical variable match the expected frequencies for that variable. The test calculates a statistic that has a chi-squared distribution, named for the Greek capital letter Chi (Χ), pronounced “ki” as in kite.
We test the likelihood of the test data (sample data) to find out whether the observed distribution of the data set is a statistical fluke (due to chance) or not. The “goodness of fit” statistic in the chi-square test measures how well the observed distribution of data fits the distribution that is expected if the variables are independent. How Does Chi-Square Work? Generally, in this test we try to establish whether a relationship exists between the given categorical variables. Chi-square evaluates whether given variables in a data set (sample) are independent; this is called the Test of Independence. Chi-square tests are used for testing hypotheses about one or two categorical variables and are appropriate when the data can be summarized by counts in a table. The variables can have multiple categories. Types of Chi-Square Test: For one categorical variable, we perform the Chi-Square Goodness-of-Fit Test. The chi-square goodness-of-fit test begins by hypothesizing that the distribution of a variable behaves in a particular manner. For example, in order to determine the daily staffing needs of a retail store, the manager may wish to know whether there is an equal number of customers each day of the week. For two categorical variables, we perform the Chi-Square Test for Association. Another way we can describe the Chi-square test: it tests the null hypothesis that the variables are independent. The test compares the observed data to a model that distributes the data according to the expectation that the variables are independent. Wherever the observed data doesn’t fit the model, the likelihood that the variables are dependent becomes stronger, thus proving the null hypothesis incorrect!
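To make the goodness-of-fit case concrete, here is a minimal Python sketch of the retail store example. The daily customer counts are made up for illustration, since the article does not supply any numbers:

```python
# Hypothetical customer counts, Monday through Sunday (assumed data,
# not from the article's example).
observed = [45, 38, 52, 41, 60, 78, 86]
total = sum(observed)                               # 400 customers
expected = [total / len(observed)] * len(observed)  # equal traffic per day

# Goodness-of-fit statistic: sum over categories of (O - E)^2 / E
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = len(observed) - 1                              # k - 1 = 6 here

print(chi2, df)
```

With these invented counts the statistic comes out near 36.3, far above the 0.05 critical value for df = 6 (about 12.59 from a standard chi-square table), so the manager would reject the hypothesis of equal customers each day.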
Hypothesis In Chi-Square: The first thing you need to establish as a data engineer, before performing any inferential statistical test like Chi-Square, is: H0: Null Hypothesis, and H1: Alternate Hypothesis. For one categorical variable: Null hypothesis: The proportions match an assumed set of proportions. Alternative hypothesis: At least one category has a different proportion. For two categorical variables: Null hypothesis: There is no association between the two variables. Alternative hypothesis: There is an association between the two variables. Before we jump into understanding how Chi-square works with an example, we need to understand what the Chi-square distribution is, along with some other related concepts. This Chi-squared distribution is what we will analyze going forward in the chi-square or χ2 test. What Is the Chi-Square Distribution? The chi-square distribution (also chi-squared or χ2-distribution) with k degrees of freedom is the distribution of a sum of the squares of k independent standard normal random variables. It is one of the most widely used probability distributions in inferential statistics, notably in hypothesis testing and in the construction of confidence intervals. The primary reason that the chi-square distribution is used extensively in hypothesis testing is its relationship to the normal distribution. An additional reason that the chi-square distribution is widely used is that it is a member of the class of likelihood ratio tests (LRTs). LRTs have several desirable properties; in particular, LRTs commonly provide the highest power to reject the null hypothesis. Degrees of Freedom in the Chi-Squared Distribution: The degrees of freedom in the Chi-Squared distribution equal the number of standard normal deviates being summed. The mean of a Chi-square distribution is its degrees of freedom.
A chi-square distribution constructed by squaring a single standard normal distribution is said to have 1 degree of freedom. The degrees of freedom (df or d) tell you how many numbers in your grid are actually independent. For a Chi-square grid, the degrees of freedom can be said to be the number of cells you need to fill in before, given the totals in the margins, you can fill in the rest of the grid using a formula. The degrees of freedom for a Chi-square grid equal the number of rows minus one times the number of columns minus one: that is, (R-1)*(C-1). Remember! As the degrees of freedom (df) increase, the Chi-square distribution approaches a normal distribution. Chi-Square Statistic: The formula for the chi-square statistic used in the chi-square test is Χ²c = Σ (O − E)² / E. The subscript “c” here denotes the degrees of freedom. “O” is your observed value and E is your expected value. The summation symbol means that you’ll have to perform the calculation for every single data item in your data set. E = (row total × column total) / sample size. The Chi-square statistic can only be used on counts. It can’t be used for percentages, proportions, means, or similar statistical values. For example, if you have 10 percent of 200 people, you would need to convert that to a number (20) before you can run the test statistic. The Chi-Square test involves calculating a metric called the Chi-square statistic mentioned above, which follows the Chi-square distribution. Let’s see an example to get clarity on all the above-covered topics related to Chi-Square. P-Value: The null hypothesis provides a probability framework against which to compare our data. Specifically, under the proposed statistical model, the test statistic follows a known probability distribution if the null hypothesis is true; the P-value is then the probability of obtaining a result at least as extreme as the one observed, assuming the null hypothesis is true. It is a probabilistic representation of our expectations under the null hypothesis.
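The two ingredients just described, the expected counts E = (row total × column total) / sample size and the degrees of freedom (R-1)*(C-1), can be sketched in a few lines of Python. The 2×2 table of counts below is invented purely for illustration:

```python
def expected_counts(table):
    """Expected cell counts under independence: (row total * column total) / grand total."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    return [[r * c / grand for c in col_totals] for r in row_totals]

def degrees_of_freedom(table):
    """(number of rows - 1) * (number of columns - 1)."""
    return (len(table) - 1) * (len(table[0]) - 1)

# A made-up 2x2 table of counts (counts, not percentages or proportions)
obs = [[20, 30],
       [30, 20]]
print(expected_counts(obs))     # [[25.0, 25.0], [25.0, 25.0]]
print(degrees_of_freedom(obs))  # 1
```

Note that both margins of 50 and the grand total of 100 give every cell the same expectation of 25, and a 2×2 grid leaves only one independently fillable cell, hence df = 1.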
Chi-Square Test Explained With an Example: We will cover the following important steps in our journey through the chi-square test for independence of two variables: state the hypothesis, formulate the data analysis plan, analyze the sample data, and interpret the outcome. Problem: This problem has been sourced from Stat Trek. A public opinion poll surveyed a simple random sample of 1000 voters. Respondents were classified by gender (male or female) and by voting preference (Republican, Democrat, or Independent). The results are shown in the contingency table below. We have to infer: is there a gender gap? Do the men’s voting preferences differ significantly from the women’s preferences? Use a 0.05 level of significance. Let’s try to solve this problem using the Chi-Square test to find out the P-value. The test type we will employ here is the chi-square test for independence. So let’s get started by first stating our hypothesis. Step 1: State The Hypothesis: We start by establishing a null hypothesis and a counter hypothesis (alternative hypothesis) as given below. Null Hypothesis, Ho: Gender and voting preferences are independent. Alternate Hypothesis, H1: Gender and voting preferences are not independent. Step 2: Build Our Data Analysis Plan: Here we will try to find the P-value and match it against the significance level. Let’s take the standard and accepted level of significance to be 0.05. Given the sample data in the table above, let’s employ the chi-square test for independence and deduce the probability value. Step 3: Analyze the Sample Data: Here we will analyze the given sample data to compute the degrees of freedom, the expected frequency counts, and the chi-square test statistic value. All of the above values will help us find the P-value.
Degrees of Freedom Calculation: Let’s calculate df = (r − 1) * (c − 1). In the given table, we have r (rows) = 2 and c (columns) = 3, so df = (2−1)*(3−1) = 1*2 = 2. Expected Frequency Count Calculation: Let Eij represent the expected count in cell (i, j) if the two variables are independent of one another: Eij = (ith row total × jth column total) / grand total. Let’s calculate the expected value for each row and column combination using the above formula. Let me copy the table again below to help you make the calculation easily. Here, row 1 total = 400, column 1 total = 450, and total sample size = 1000. So, E1,1 = (400 * 450) / 1000 = 180000/1000 = 180. Similarly, let’s calculate the other expected values as shown below: E1,2 = (400 * 450) / 1000 = 180000/1000 = 180 E1,3 = (400 * 100) / 1000 = 40000/1000 = 40 E2,1 = (600 * 450) / 1000 = 270000/1000 = 270 E2,2 = (600 * 450) / 1000 = 270000/1000 = 270 E2,3 = (600 * 100) / 1000 = 60000/1000 = 60 Time to calculate the chi-square contribution for each expected value above using the formula. Calculating Chi-Square: As already discussed above, the formula for calculating the chi-square statistic is Χ²c = Σ [ (O − E)² / E ]. The subscript “c” here denotes the degrees of freedom. “O” is your observed value (the actual values given in the table above) and E is your expected value (which we just calculated). The summation symbol means that you’ll have to perform the calculation for every single data item in your data set.
Χ² = Σ [ (Oi,j − Ei,j)² / Ei,j ] Using the above formula, our chi-square value comes out as given below: Χ² = (200−180)²/180 + (150−180)²/180 + (50−40)²/40 + (250−270)²/270 + (300−270)²/270 + (50−60)²/60 Χ² = 400/180 + 900/180 + 100/40 + 400/270 + 900/270 + 100/60 So our final chi-square statistic value is Χ² = 2.22 + 5.00 + 2.50 + 1.48 + 3.33 + 1.67 = 16.2. Having calculated the chi-square value and degrees of freedom, we consult a chi-square table to check whether the chi-square statistic of 16.2 exceeds the critical value of the Chi-square distribution. The intent is to find the P-value, which is the probability that a chi-square statistic having 2 degrees of freedom is more extreme than 16.2. How to calculate the P-value? Given degrees of freedom = 2 and a chi-square statistic value of 16.2, we can easily find the P-value using the given Chi-Square Calculator link: simply enter the chi-square statistic value and the degrees of freedom as inputs, keep your significance level at 0.05, and you will find the result as given below. P-value = 0.000304. The result is significant at p < .05. You can also find the P-value using the Chi-Square table given below; you can get this table from this source. Having calculated the chi-square value to be 16.2 and the degrees of freedom to be 2, we consult the chi-square table given above to check whether the chi-square statistic of 16.2 exceeds the critical value of the Chi-square distribution. The critical value for an alpha of .05 (95% confidence) at df=2 comes out to be 5.99. Step 4: Interpreting the result. A: Inference From The P-value: Since the P-value (0.000304) is less than the significance level (0.05), we have to reject the null hypothesis, which says gender and voting preferences are independent, and accept the alternate hypothesis, which says gender and voting preferences are not independent.
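As a cross-check of the hand calculation above, here is a short Python sketch that recomputes the statistic for the voting table. One convenient fact: for df = 2 the chi-square survival function has the closed form P(Χ² > x) = exp(−x/2), so the P-value can be computed without a lookup table or SciPy:

```python
import math

# Observed counts from the contingency table of this example:
# rows = gender (male, female), columns = Republican, Democrat, Independent
observed = [[200, 150, 50],
            [250, 300, 50]]

row_totals = [sum(r) for r in observed]        # [400, 600]
col_totals = [sum(c) for c in zip(*observed)]  # [450, 450, 100]
grand = sum(row_totals)                        # 1000

chi2 = 0.0
for i, row in enumerate(observed):
    for j, o in enumerate(row):
        e = row_totals[i] * col_totals[j] / grand  # expected count Eij
        chi2 += (o - e) ** 2 / e

df = (len(observed) - 1) * (len(observed[0]) - 1)  # (2-1)*(3-1) = 2

# Closed-form survival function of the chi-square distribution for df = 2
p_value = math.exp(-chi2 / 2)

print(chi2, df, p_value)
```

This reproduces Χ² ≈ 16.2 with df = 2 and a P-value of roughly 0.0003, matching the calculator result up to rounding.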
Hence we can conclude that there is a relationship between gender and voting preference. B: Interpreting from the Chi-Square Table: Since the critical value for an alpha of .05 (95% confidence) at df=2 is 5.99, and our chi-square statistic value of 16.2 is much larger than 5.99, we have sufficient evidence to reject our null hypothesis, which we covered above. So we accept the alternate hypothesis, which says gender and voting preferences are not independent. Hence we conclude that there is a relationship between gender and voting preference. What’s Next? We will understand how to perform the Chi-Square test using Python and a Jupyter notebook in part 2 of this series on inferential statistics: hypothesis testing using Chi-Square. We will further explore the Normal Deviate Z Test, the Two-Sample T-Test and the ANOVA Test, and will also introduce one of the key topics: the “Power of a Statistical Test.” The power of any test of statistical significance is defined as the probability that it will reject a false null hypothesis. Summing up this part with a very helpful infographic which guides you in choosing your hypothesis test type: So choose your test data wisely and make sure you are interpreting the sample data right, so that you can go ahead and design your ML models with the required accuracy and confidence.
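The interpretation step reduces to a single comparison, done two equivalent ways. A tiny sketch with the numbers from this example (P-value, statistic, and critical value as computed above):

```python
alpha = 0.05              # chosen significance level
p_value = 0.000304        # from the chi-square calculator / table
chi2_stat = 16.2          # computed chi-square statistic
critical_value = 5.99     # critical value for df = 2 at alpha = 0.05

reject_by_p = p_value < alpha                     # route A: P-value
reject_by_critical = chi2_stat > critical_value   # route B: critical value

# Both routes agree here: reject H0, so gender and voting
# preference are not independent.
print(reject_by_p, reject_by_critical)
```

The two routes always agree for the same test and alpha; the P-value route simply carries more information (how far into the tail the statistic falls).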
https://medium.com/swlh/what-is-chi-square-test-how-does-it-work-3b7f22c03b01
[]
2020-08-22 19:01:58.759000+00:00
['Machine Learning', 'Data Science', 'Statistics', 'Data Analysis', 'Technology']
1,773
No matter where you are on the journey, in some way, you are continuing on Nadal vs Tsitsipas live stream
Life is a journey of twists and turns, peaks and valleys, mountains to climb and oceans to explore. Good times and bad times. Happy times and sad times. But always, life is a movement forward. No matter where you are on the journey, in some way, you are continuing on — and that’s what makes it so magnificent. One day, you’re questioning what on earth will ever make you feel happy and fulfilled. And the next, you’re perfectly in flow, writing the most important book of your entire career. https://www.deviantart.com/ncflive/commission/liVe-Nadal-vs-Tsitsipas-Live-STREAM-FrEe-1410744 https://www.deviantart.com/ncflive/commission/Rafael-Nadal-vs-Stefanos-Tsitsipas-Live-Stream-1410746 https://www.deviantart.com/ncflive/commission/StREAMS-Tennis-Rafael-Nadal-vs-Stefanos-Tsitsipas-Live-Stream-1410747 https://www.deviantart.com/ncflive/commission/Watch-Nadal-vs-Tsitsipas-Live-Stream-free-1410748 https://www.deviantart.com/ncflive/commission/StreamS-watch-Tsitsipas-vs-Nadal-Live-Stream-Reddit-1410749 What nobody ever tells you, though, when you are a wide-eyed child, are all the little things that come along with “growing up.” 1. Most people are scared of using their imagination. They’ve disconnected with their inner child. They don’t feel they are “creative.” They like things “just the way they are.” 2. Your dream doesn’t really matter to anyone else. Some people might take interest. Some may support you in your quest. But at the end of the day, nobody cares, or will ever care about your dream as much as you. 3. Friends are relative to where you are in your life. Most friends only stay for a period of time — usually in reference to your current interest. But when you move on, or your priorities change, so too do the majority of your friends. 4. Your potential increases with age. As people get older, they tend to think that they can do less and less — when in reality, they should be able to do more and more, because they have had time to soak up more knowledge. 
Being great at something is a daily habit. You aren’t just “born” that way. 5. Spontaneity is the sister of creativity. If all you do is follow the exact same routine every day, you will never leave yourself open to moments of sudden discovery. Do you remember how spontaneous you were as a child? Anything could happen, at any moment! 6. You forget the value of “touch” later on. When was the last time you played in the rain? When was the last time you sat on a sidewalk and looked closely at the cracks, the rocks, the dirt, the one weed growing between the concrete and the grass nearby. Do that again. You will feel so connected to the playfulness of life. 7. Most people don’t do what they love. It’s true. The “masses” are not the ones who live the lives they dreamed of living. And the reason is because they didn’t fight hard enough. They didn’t make it happen for themselves. And the older you get, and the more you look around, the easier it becomes to believe that you’ll end up the same. Don’t fall for the trap. 8. Many stop reading after college. Ask anyone you know the last good book they read, and I’ll bet most of them respond with, “Wow, I haven’t read a book in a long time.” 9. People talk more than they listen. There is nothing more ridiculous to me than hearing two people talk “at” each other, neither one listening, but waiting for the other person to stop talking so they can start up again. 10. Creativity takes practice. It’s funny how much we as a society praise and value creativity, and yet seem to do as much as we can to prohibit and control creative expression unless it is in some way profitable. If you want to keep your creative muscle pumped and active, you have to practice it on your own. 11. “Success” is a relative term. As kids, we’re taught to “reach for success.” What does that really mean? Success to one person could mean the opposite for someone else. Define your own Success. 12. You can’t change your parents. 
A sad and difficult truth to face as you get older: You can’t change your parents. They are who they are. Whether they approve of what you do or not, at some point, no longer matters. Love them for bringing you into this world, and leave the rest at the door. 13. The only person you have to face in the morning is yourself. When you’re younger, it feels like you have to please the entire world. You don’t. Do what makes you happy, and create the life you want to live for yourself. You’ll see someone you truly love staring back at you every morning if you can do that. 14. Nothing feels as good as something you do from the heart. No amount of money or achievement or external validation will ever take the place of what you do out of pure love. Follow your heart, and the rest will follow. 15. Your potential is directly correlated to how well you know yourself. Those who know themselves and maximize their strengths are the ones who go where they want to go. Those who don’t know themselves, and avoid the hard work of looking inward, live life by default. They lack the ability to create for themselves their own future. 16. Everyone who doubts you will always come back around. That kid who used to bully you will come asking for a job. The girl who didn’t want to date you will call you back once she sees where you’re headed. It always happens that way. Just focus on you, stay true to what you believe in, and all the doubters will eventually come asking for help. 17. You are a reflection of the 5 people you spend the most time with. Nobody creates themselves, by themselves. We are all mirror images, sculpted through the reflections we see in other people. This isn’t a game you play by yourself. Work to be surrounded by those you wish to be like, and in time, you too will carry the very things you admire in them. 18. Beliefs are relative to what you pursue. 
Wherever you are in life, and based on who is around you, and based on your current aspirations, those are the things that shape your beliefs. Nobody explains, though, that “beliefs” then are not “fixed.” There is no “right and wrong.” It is all relative. Find what works for you. 19. Anything can be a vice. Be wary. Again, there is no “right” and “wrong” as you get older. A coping mechanism to one could be a way to relax on a Sunday to another. Just remain aware of your habits and how you spend your time, and what habits start to increase in frequency — and then question where they are coming from in you and why you feel compelled to repeat them. Never mistakes, always lessons. As I said, know yourself. 20. Your purpose is to be YOU. What is the meaning of life? To be you, all of you, always, in everything you do — whatever that means to you. You are your own creator. You are your own evolving masterpiece. Growing up is the realization that you are both the sculpture and the sculptor, the painter and the portrait. Paint yourself however you wish.
https://medium.com/@rafaelnadalvsstefanostsitslive/no-matter-where-you-are-on-the-journey-in-some-way-you-are-continuing-on-nadal-vs-tsitsipas-live-8b27eea88c61
['Rafael Nadal Vs Stefanos Tsitsipas Live Tv']
2020-11-19 17:08:37.501000+00:00
['Technology', 'Sports', 'Social Media', 'News', 'Live Streaming']
1,774
Knowledge Base — Your New Bitbon System Guide
Interesting fact: the blockchain operation principle was developed back in 1991 by research scientists Stuart Haber and W. Scott Stornetta as a solution for storing digital documents with timestamps. Documents could not be forged or drawn up post factum in this way. However, for various reasons, the idea failed at that time, and the patent was not obtained. After a while, this technology found its place in the world and set a brand new trend owing to cryptocurrencies, starting with the well-known bitcoin by Satoshi Nakamoto. Since 2008, blockchain has become a real hit applicable in many areas, including the infamous ICOs. In turn, this attracted the attention of government regulators and showed there was a need for a different approach to using distributed ledger technologies. Back at the stage of developing the Bitbon System concept, we eliminated the shortcomings of the existing blockchain platforms using the related world experience gained over the years, as well as the results of our own research. Every year, the information base on distributed ledger technologies grows, as modern society obviously needs complex and multifunctional systems for the organization and development of the virtual currency market. Realizing this, we created a knowledge base — a special section on the Bitbon Space website, which contains structured information on the development of the distributed ledger virtual asset market, as well as on blockchain-based platforms, in particular the Bitbon System, where distributed ledger tokens and their accounting units are the object of relations between participants. New useful materials will be regularly added to this section, which you can learn about from the news on the official information resources of the Bitbon System. Thus, you now have a convenient source of information for studying the Bitbon System features, which you can use to write your own articles or scientific papers or to make presentations or videos.
The knowledge base will be useful not only for lawyers, scientists and entrepreneurs, but also for everyone who wants to keep up with the times and be aware of the progressive trends of the modern world. Click this link to go to the knowledge base: https://www.bitbon.space/en/knowledge-base/
https://medium.com/bitbon/knowledge-base-your-new-bitbon-system-guide-9f6987180617
['Bitbon System']
2020-12-18 15:47:37.178000+00:00
['Blockchain', 'Knowledge', 'Satoshi Nakamoto', 'Bitbon', 'Blockchain Technology']
1,775
Scalability Tradeoffs: Why “The Ethereum Killer” Hasn’t Arrived Yet
Lately I’ve seen a lot of crypto-enthusiasts on Reddit and Telegram making comments like: “Bitcoin is slow. It is expensive. There are many new coins, modern ones that are much better. They are fast and inexpensive.” Or the very popular CryptoKitties argument: “Ethereum couldn’t even handle CryptoKitties, how do you expect it to be Web3.0?” Or about how Blockchain X is here to turn the tables: “<insert coin ticker> is king, it can handle 60,000 transactions per second, has no fees and it can do smart contracts.” The popular opinion is that the current leaders by market cap are not good enough, and that new projects are offering better features or alternative architectures (Tangle, Hashgraph) that are going to define a new standard and bring the capabilities of blockchains to new levels. While I do not dismiss the possibility of Bitcoin being dethroned in the upcoming years, or that the top-5 might change radically in the future, I believe that we need to be skeptical when a project advertises itself as a do-it-all solution, and rigorously investigate it before jumping to conclusions. There is no silver bullet that will solve all problems. “Touka Koukan” (等価交換) is a Japanese phrase which roughly translates to “equivalent exchange”. Nothing comes for free. There will always be trade-offs. Below is the Scalability Trilemma as described by Vitalik Buterin: A blockchain that claims to have solved the trilemma has either bent the laws of physics (highly unlikely), or it has discovered a breakthrough method that solves the major blockchain scalability problems that have stumped top mathematicians and computer scientists for the past decade. While this is not impossible, a more likely explanation is that the blockchain has sacrificed either decentralization, security, or both. What characterizes a blockchain or a cryptocurrency? 
In my debut article, A rant about Blockchains, I provide the following definition of a blockchain: A blockchain is a database that can be shared between a group of non-trusting individuals, without needing a central party to maintain the state of the database. And a cryptocurrency, from the Google dictionary: A digital currency in which encryption techniques are used to regulate the generation of units of currency and verify the transfer of funds, operating independently of a central bank. Note that both definitions (blockchain and cryptocurrency) emphasize the need to operate independently of a central party.
https://medium.com/loom-network/scalability-tradeoffs-why-the-ethereum-killer-hasnt-arrived-yet-8f60a88e46c0
['Georgios Konstantopoulos']
2020-02-06 04:25:33.648000+00:00
['Technology', 'Blockchain', 'Cryptocurrency', 'Ethereum', 'Bitcoin']
1,776
15 Minutes or Less: Set Up Over the Air (OTA) Updates for Your NodeMCU and ESP32 Modules
In three quick steps (and less than 15 minutes), you can set up OTA updates for your NodeMCU and ESP32-based IoT projects. In 30 Minutes or Less: Build a Wireless Sensor Network with NodeMCU, we showed you how to easily create a wireless sensor network that can send information to a Universal Sensor Hub built with a Raspberry Pi. But what happens when you need to update the software on these modules? Climbing into building rafters or traveling to remote (or outdoor) destinations to update software can be time-consuming, costly and, well, just a real pain. In this article, we will show you how to enable Over the Air (OTA) updates for your NodeMCU, ESP8266, or ESP32 project in three quick steps. Step One — Download and Install the WebOTA .zip File At the core of this functionality is the WebOTA library. Download the .zip library. Now install the .zip library to your Arduino IDE. Step Two — Download the updated Arduino Sketch to the NodeMCU The code listing is at the bottom of this article. Make sure you switch to “raw” before copying, or just download the .zip; copying and pasting can add unwanted characters to the sketch, which drives the Arduino IDE crazy. There are really just a few lines of code to add the OTA capability:

#include <WebOTA.h>

// Initialize WebOTA
webota.init();
Serial.flush();
delay(500);

// Replace all delay() calls with webota.delay(), which
// allows WebOTA to listen for update requests during delay()
// For example: webota.delay(1000);

// Finally, at the end of your sketch loop:
webota.handle();

Step Three — Test out the OTA capability Once the sketch is running, export it as a binary file (you will find it in the sub-directory for the project, under Arduino). If you are testing with the target NodeMCU still connected to the PC via USB, open a Serial Monitor to see the OTA “in action”. 
To test out the OTA capability, use the following curl command to send the binary file (for this example our filename is "Attic-NodeMCU-OTA.ino.nodemcu.bin" and the target NodeMCU IP address is 192.168.1.6):

curl -F "file=@Attic-NodeMCU-OTA.ino.nodemcu.bin" 192.168.1.6:8080/webota

Note that the quotes must be straight ASCII quotes; curly quotes will break the shell command. Here’s a short video of an OTA in action: Source code: With Machinechat’s JEDI One software you can quickly enable cost-effective on-prem professional data monitoring, visualization and storage for your IoT ESP32 and NodeMCU projects. Now you can make sure that your project stays up-to-date without having to leave the comfort of your desk. Additional resources: Machinechat.io
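For readers scripting the upload instead of using curl, here is a dependency-free Python sketch of the same request: it builds the multipart/form-data POST that the curl command sends to WebOTA's /webota endpoint. The host, port, and filename are the ones from this example; the firmware payload bytes are placeholders.

```python
import io
import uuid
import urllib.request

def build_ota_request(host: str, firmware_name: str, payload: bytes) -> urllib.request.Request:
    """Build the multipart/form-data POST that `curl -F "file=@..."` sends
    to the WebOTA /webota endpoint on port 8080."""
    boundary = uuid.uuid4().hex
    body = io.BytesIO()
    body.write(f'--{boundary}\r\n'.encode())
    body.write(f'Content-Disposition: form-data; name="file"; '
               f'filename="{firmware_name}"\r\n'.encode())
    body.write(b'Content-Type: application/octet-stream\r\n\r\n')
    body.write(payload)  # the compiled .bin firmware image
    body.write(f'\r\n--{boundary}--\r\n'.encode())
    return urllib.request.Request(
        url=f'http://{host}:8080/webota',
        data=body.getvalue(),
        headers={'Content-Type': f'multipart/form-data; boundary={boundary}'},
        method='POST',
    )

req = build_ota_request('192.168.1.6', 'Attic-NodeMCU-OTA.ino.nodemcu.bin',
                        b'\x00placeholder-firmware-bytes\x00')
# urllib.request.urlopen(req) would perform the actual upload
```

This only constructs the request; calling urllib.request.urlopen(req) on a network with the module online would perform the upload, just like the curl command.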
https://medium.com/@machinechat/15-minutes-or-less-set-up-over-the-air-ota-updates-for-your-nodemcu-and-esp32-modules-48eb19ef57b7
['Machinechat Jedi']
2020-12-18 20:09:32.933000+00:00
['Raspberry Pi', 'Technology', 'IoT', 'Programming', 'Nodemcu']
1,777
The Importance of Shipping — For IT Leaders
The Importance of Shipping — For IT Leaders IT Leaders… Take a look in the mirror and ask yourself this question: “Am I delivering difference-making technology to my organization, or am I maintaining the status quo?” If your answer is the former: Congrats…you are doing your job as an IT Leader. If your answer is the latter: Read on. As an IT leader, one of the most critical things you can do is deliver innovative technologies to your organization. Maintaining the “status quo” or “keeping the lights on” is not going to do the trick. You have to innovate, and … You Have to Ship! What Does it Mean to Ship? Shipping means delivering your ideas to the world. It is not enough to have great ideas. Everyone has great ideas floating around in their head. But if you do not deliver those ideas to the world…they are meaningless. Innovative thinking is great and can lead to amazing things. But only if you do something about it. If you are someone who Ships, it means that you are someone who can take your most innovative ideas and turn them into reality. It’s the most valuable thing you can do as an IT Leader. How to Become a Better “Shipper” OK, if you want to boost your Shipping powers a little, here are some suggestions on how to become a better Shipper. Avoid being a Perfectionist A lot of people delay starting a big project because they are waiting for the “perfect time” or waiting for things to be “just right”. They are being perfectionists. Perfectionism is just a dressed-up form of procrastination. There is no “perfect time” and you will never be 100% ready. Gather the minimal amount of info you need…and go! Stay Strategic Challenge yourself to think strategically. Make time for strategic thinking. A useful habit I adopted is setting aside time at the beginning of my day as my strategic thinking time. I sit quietly, let my mind wander, and write down my thoughts. 
Don’t be afraid to think big or come up with “crazy ideas.” If you want to Ship, you have to cultivate the ideas to act on. Begin with the End in Mind What are your organization’s strategic goals? What are the biggest problems? Start there. Start Shipping the projects that help achieve the most important goals or solve the biggest problems. Manage your Time — Think like a Lion! A lot of people fail to Ship because they simply do not manage their time and priorities well. Here is a mindset that helps me stay focused on big, strategic projects. Think of yourself like a lion. Your big projects are antelope. Your smaller tasks are mice. What would a lion hunt? Antelope, of course. So be a lion and keep your focus on hunting antelope. Permissionless Leadership Don’t wait for permission to innovate and Ship. The world needs leaders. Not people waiting for permission to be leaders. If you work for an organization with a culture that frowns on acting independently … it might be time for a change if you want to be a Shipper. Quotes on Shipping Here are some great quotes to inspire you to become a better Shipper. “If you don’t ship you haven’t done your work. We’re not waiting for you to tell us about your notebook filled with ideas.” — Seth Godin A great “gut punch” quote. A strong reminder that in the end, it is your work that counts…not your ideas. “No one should know what you’re going to do until you’ve done it. The likes hit different when it’s for an accomplishment instead of a plan.” — Ed Latimore Don’t bring half-baked ideas to people to try to impress them. Share your creations instead. “Real Artists Ship.” — Steve Jobs A great reminder from the master of shipping himself. You are not an artist until you deliver your art to the world…until you Ship. “The pain that comes with action is acute, scars you, and makes you grow. 
The pain that comes from inaction is low-grade, softens you, and decays your soul.” — Kyle Eschenroeder Don’t let the fear of change hold you back from delivering innovation. Procrastination is the enemy of creation. And in the end, the pain of procrastination is greater than the pain of action. “IT does not wait around to be told what to do. IT is consultive and proactive and drives innovation, value and results.” — Martha Heller A reminder of the importance of permissionless leadership. Great IT Leaders don’t wait for permission to start bringing their most innovative ideas to life. Conclusion Don’t get caught up in trivial tasks. Don’t use your time and energy maintaining the status quo. Don’t let your best ideas die in your head. An amateur comes up with ideas. A professional creates. They Ship. Be an IT PRO fessional…and Ship!
https://medium.com/@michaelmcgill_29045/the-importance-of-shipping-for-it-leaders-658156cea02e
['Michael Mcgill']
2020-12-20 19:09:37.957000+00:00
['Information Technology', 'Cio', 'Tech', 'Leadership', 'Technology']
1,778
6 Programming Habits That (Surprisingly) Not Many Developers Have
6 Programming Habits That (Surprisingly) Not Many Developers Have Distinguish yourself from the herd When it comes to being a good programmer, there are certain habits that immediately pop up in your mind. There are some habits that most programmers would agree are great to have, but in reality, most of them don’t have these habits themselves. As we all know, we are defined by our habits. To become a better programmer, we should try to build great programming habits. Here are six great programming habits that you should try to build to stand out from the pack.
https://medium.com/better-programming/6-programming-habits-that-surprisingly-not-many-developers-have-c58acd9a67f3
[]
2020-05-19 13:56:59.506000+00:00
['Programming', 'Software Development', 'JavaScript', 'Technology', 'Startup']
1,779
Daily Blockchain Use Cases and News #18
Public blockchain Lisk commenced the release of Lisk Core 1.0 to its Mainnet
Decentralized cloud startup Dfinity raised $102 million from Andreessen Horowitz’s crypto fund (a16z) and other investors
Australian federal agency is working on a national blockchain, enabling business transactions to work with smart legal contracts
Banking company Standard Chartered is cooperating with the financial arm of Siemens on a blockchain pilot for bank guarantees for trade finance
https://medium.com/blockchaincircle/daily-blockchain-use-cases-and-news-18-53b1830a7dea
['Pavel Romanenko']
2018-08-30 10:28:25.510000+00:00
['Technology', 'Blockchain', 'Bitcoin', 'Cryptocurrency', 'News']
1,780
Here’s Where People Shell Out the Most and the Least for Internet
The state that pays the most for internet access forks out more than ten times as much as the state with the lowest internet costs. By Sherin Shibu The price differences from state to state for internet access, a modern necessity, can be staggering. And internet access is especially needed in these pandemic times, with the rise of online school, remote work, video chatting, job searching, and gaming. HighSpeedInternet.com released a report on how much the cost of internet varies across the US in 2020. The data analysts took information from 350,000 internet customers spanning the US, found the average monthly price for internet in each state, and then broke down those values to average price per Mbps (megabits per second). The price per Mbps better reflects the internet speed quality, according to the analysts. The top ten most expensive states for internet access, according to the report, are Wyoming ($7.84), North Dakota ($7.57), Montana ($7.28), South Dakota ($7.17), Virginia ($6.74), Iowa ($6.34), New Mexico ($6.23), North Carolina ($5.87), Alabama ($5.82), and Nebraska ($5.43). The least expensive states for internet access are Rhode Island ($0.63), D.C. ($0.84), Massachusetts ($1.13), Georgia ($1.65), New York ($1.72), California ($1.86), Maryland ($1.99), Kentucky ($2.00), Connecticut ($2.06), and Texas ($2.29). Costs range widely; notice the considerable difference between Wyoming ($7.84) and Rhode Island ($0.63). The national average cost per megabit for internet was $3.91, and the average cost of an internet plan per month in the US is $60. Since COVID-19 arrived, most states have seen an increase in average internet speed, and even some rural areas have impressive access. States with rural areas generally tend to pay more for internet than states with more urban areas, say HighSpeedInternet.com researchers. 
The type of internet used can also affect costs: Fiber-optic internet and cable internet tend to have lower costs per Mbps, while DSL and satellite internet tend to cost more.
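The headline claim (the most expensive state pays more than ten times as much per Mbps as the cheapest) can be sanity-checked in a couple of lines of Python; only the extreme states from the report are included here.

```python
# Average price per Mbps (USD) from the HighSpeedInternet.com report
most_expensive = {"Wyoming": 7.84, "North Dakota": 7.57, "Montana": 7.28}
least_expensive = {"Rhode Island": 0.63, "D.C.": 0.84, "Massachusetts": 1.13}

# Wyoming vs Rhode Island: the "more than ten times" claim
ratio = max(most_expensive.values()) / min(least_expensive.values())
print(f"Wyoming pays {ratio:.1f}x more per Mbps than Rhode Island")  # 12.4x
```

With Wyoming at $7.84 and Rhode Island at $0.63, the ratio works out to roughly 12.4, consistent with the article.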
https://medium.com/pcmag-access/heres-where-people-shell-out-the-most-and-the-least-for-internet-57290f25fa1b
[]
2020-11-17 14:03:14.574000+00:00
['USA', 'Technology', 'Internet', 'Computing']
1,781
Explore any data with a custom, interactive web app: Data science with sports
Most good data projects start with the analyst doing something to get a feel for the data that they are dealing with. They might hack together a Jupyter notebook to look at data summaries, first few rows of data and matplotlib charts. Some might look through the data as an Excel sheet and fidget with pivot tables. The ones truly one with the data might even prefer to stare directly at the raw table of data. None of these are ideal solutions. Some of these solutions might be only suitable for the masochistic among us. So what is a person to do? For me, I prefer to build a web app for data exploration. There’s something about the ability to slice, group, filter, and most importantly — see the data, that helps me to understand it and formulate questions and hypotheses that I want answered in the analysis. It allows me to interact with the data visually. My toolkit of choice for this task these days is Plotly and Streamlit. I’ve written enough about Plotly over the last while — I think it’s the best data visualisation package out there for Python. But Streamlit has really changed the way I work. Because it is so terse, it takes almost no extra effort to turn my plots and comments in a python script into a web app with interactivity as I tinker. (FYI — I wrote a comparison between Dash and Streamlit here) I prefer to build a web app for data exploration So in this article, I’d like to share a simple example of building a data exploration app with these tools. Now, for a data project – we need data, and here I will be using stats from the NBA. Learning programming can be dry, so using something relatable like sports data helps me to stay engaged, and hopefully it will for you too. (Don’t worry if you don’t follow the NBA as the focus is on the data science and programming!) Before we get started To follow along, install a few packages — plotly, streamlit and pandas. Install each (in your virtual environment) with a simple pip install [PACKAGE_NAME]. 
The code for this article is on my GitHub repo here, so you can download/copy/fork away to your heart’s content. The script is called data_explorer_app.py — so you can run it from the shell with: streamlit run data_explorer_app.py Oh, this is the first in a set of data science / data analysis articles that I plan to write about using NBA data. It’ll all go to that repo, so keep your eyes peeled! If you are following along, import the key libraries with:

import pandas as pd
import plotly.express as px
import streamlit as st

And we are ready to go. Data Deep Diving Streamlit-ing We use Streamlit here, as it is designed to help us build data apps quickly. So what we are going to build is a Streamlit app that will then run locally. (For more information — you can check out my Dash v Streamlit article here.) If you’ve never used Streamlit, this is all you need to build a bare-bones app:

import streamlit as st
st.write("Hello, world!")

Save this as app.py, and then execute it with the shell command streamlit run app.py: Look, ma, it’s a web app! And you have a functioning web app! Building a streamlit app is that easy. Even more amazingly, though, building a useful app isn’t much harder. Oh, by the way, you don’t need to stop and restart the server every time the script is changed. Whenever the underlying script file is updated, you will see a button pop up on the top right corner like so: Look for this prompt to refresh the app Just keep the script running, and hit Rerun here every time you want to see the latest version at work. Ready? Okay, let’s go! Raw data exploration What I like to do initially is to look at the entire raw dataset. As a first step, we load the data from a CSV file: df = pd.read_csv("data/player_per_game.csv", index_col=0).reset_index(drop=True) Once the data has been loaded, simply typing st.write(df) creates a dynamic, interactive table of the entire dataframe. 
Explore the entire dataset as an interactive table And the various statistics for columns can be similarly plotted with st.write(df.describe()). Two dynamic tables in two lines of code I know you can plot a table in Jupyter notebooks, but the difference is in the interactivity. For one, tables rendered with Streamlit are sortable by columns. And as you will see later, you can incorporate filters and other dynamic elements that aren’t as easy to incorporate in notebooks — which is where the real power comes in. Now we are ready to start adding a few charts to our app. Distribution visualisations Statistical visualisation of individual variables is extremely useful, to an extent that I think it’s an indispensable tool above and beyond looking at the raw data. We will begin the analysis by visualising the data by one variable, with an interactive histogram. A histogram can be constructed with Plotly like so: hist_fig = px.histogram(df, x=hist_x, nbins=hist_bins) Traditionally, we would have to manually adjust the x and nbins variables to see what happens, or create a huge wall of histograms from various permutations of these variables. Instead, let’s see how they can be taken in as inputs to interactively investigate the data. The histogram will analyse data from one column of the pandas dataframe. Let’s render it as a drop-down box by calling the st.selectbox() module. We can just grab a list of the columns as df.columns, and additionally we provide a default choice, whose column number we get using the df.columns.get_loc() method. Putting it together, we get: hist_x = st.selectbox("Histogram variable", options=df.columns, index=df.columns.get_loc("mp_per_g")) Then, a slider can be called with the st.slider() module for the user to select the number of bins in the histogram. The module can be customised with minimum/maximum/default and increment parameters as you see below. 
hist_bins = st.slider(label="Histogram bins", min_value=5, max_value=50, value=25, step=1) These parameters can then be combined to produce the figure: hist_fig = px.histogram(df, x=hist_x, nbins=hist_bins, title="Histogram of " + hist_x, template="plotly_white") st.write(hist_fig) Putting it together with a little heading st.header("Histogram"), we get: Histogram portion of the app I recommend taking a second here to explore the data. For example, take a look at different stats like rebounds per game: Histogram of rebounds per game Or positions: Histogram of positions The interactivity makes for easier, dynamic, active exploration of the data. You might have noticed in this last graph that the histogram categories are not in any sort of sensible order. This is due to the fact that this is a categorical variable. So without a provided order, Plotly is (I think) plotting these categories based on the order that it starts to encounter each category for the first time. So, let’s make one last change to fix that. Since Plotly allows for a category_orders parameter, we could pass a sorted order of positions. But then it wouldn’t be relevant for any of the other parameters. Instead, what we can do is to isolate the column based on the chosen input value, and pass the values on after sorting them alphabetically like so: df[hist_x].sort_values().unique() All together, we get: hist_cats = df[hist_x].sort_values().unique() hist_fig = px.histogram(df, x=hist_x, nbins=hist_bins, title="Histogram of " + hist_x, template="plotly_white", category_orders={hist_x: hist_cats}) Histogram of positions — alphabetically sorted This way, any categorical (or ordinal) variables would be presented in order. Now we can go another step and categorise our data with boxplots. Boxplots do a similar job as histograms in that they show distributions, but they are really best at showing how those distributions change according to another variable. 
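The category-ordering step above boils down to sorting the de-duplicated values of the selected column. A minimal, dependency-free sketch of what `df[hist_x].sort_values().unique()` produces (the position labels here are just standard basketball positions, not rows from the actual dataset):

```python
# Stand-in for the df[hist_x] column: a plain list of position labels
positions = ["SG", "PG", "C", "PG", "SF", "PF", "C", "SG"]

# Equivalent of df[hist_x].sort_values().unique():
# unique categories in alphabetical order
hist_cats = sorted(set(positions))
print(hist_cats)  # ['C', 'PF', 'PG', 'SF', 'SG']
```

Passing such a list as category_orders gives Plotly a deterministic axis order instead of first-encounter order.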
So, the boxplot portion of our app is going to include two pulldown menus like below. box_x = st.selectbox("Boxplot variable", options=df.columns, index=df.columns.get_loc("pts_per_g")) box_cat = st.selectbox("Categorical variable", ["pos_simple", "age", "season"], 0) And it’s just a matter of passing those two inputs to Plotly to build a figure: box_fig = px.box(df, x=box_cat, y=box_x, title="Box plot of " + box_x, template="plotly_white", category_orders={"pos_simple": ["PG", "SG", "SF", "PF", "C"]}) st.write(box_fig) Then… voila! You have an interactive box plot! Interactive boxplot You will notice here that I manually passed an order for my simplified positions column. The reason is that this order is a relatively arbitrary, basketball-specific order (from PG to C), not an alphabetical order. As much as I would like everything to be parametric, sometimes you do have to resort to manual specifications! Correlations & filters Another big thing to do in data visualisation, or exploratory data analysis, is to understand correlations. It can be handy, for example, for some manual feature engineering in data science, and it might actually point you towards an investigative direction that you may not have considered. Let’s just stick to three dimensions in our scatter plot for now. No, not in x, y and z directions. I am not a monster. I’ve got an example of one below — can you make sense of what’s going on? I’m not a huge fan of 3-D Scatter plots (matplotlib) Not for me, thanks. Colour will be the third dimension here to represent data. I’ve left all columns available for the first two variables, and just a limited selection for colours — but you can really do whatever you want. 
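Under the hood, each box that px.box draws per category is a five-number summary of that category's values. A stdlib-only sketch of that summary (the points-per-game numbers are made up for illustration):

```python
from statistics import median

def five_number_summary(values):
    """The numbers a box plot draws: min, Q1, median, Q3, max."""
    s = sorted(values)
    mid = len(s) // 2
    lower = s[:mid]                                  # half below the median
    upper = s[mid + 1:] if len(s) % 2 else s[mid:]   # half above the median
    return min(s), median(lower), median(s), median(upper), max(s)

# Hypothetical per-game scoring values for one position group
pts_per_g = [5.1, 7.4, 9.8, 12.0, 14.3, 18.6, 25.2]
print(five_number_summary(pts_per_g))  # (5.1, 7.4, 12.0, 18.6, 25.2)
```

Note that quartile conventions vary slightly between libraries; this uses the common "exclude the median" rule, which may differ at the margins from Plotly's internal calculation.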
corr_x = st.selectbox("Correlation - X variable", options=df.columns, index=df.columns.get_loc("fg3a_per_g")) corr_y = st.selectbox("Correlation - Y variable", options=df.columns, index=df.columns.get_loc("efg_pct")) corr_col = st.radio("Correlation - color variable", options=["age", "season", "pos_simple"], index=1) Correlate away! And the chart can be constructed as follows: fig = px.scatter(df, x=corr_x, y=corr_y, template="plotly_white", color=corr_col, hover_data=['name', 'pos', 'age', 'season'], color_continuous_scale=px.colors.sequential.OrRd) So tell me — are they correlated? But this chart is not ideal. For one, the data is dominated by outliers. See the lonely dots on the top left? Those folks with effective FG% of 1.5 are not some gods of basketball, but it’s a side effect of extremely small sample sizes. So what can we do? Let’s put a filter into the data. I’m going to put in two interactive portions here, one to choose the filter parameter, and the other to put the value in. As I don’t know what the parameter is here, I will simply take an empty text box that will take numbers as inputs. corr_filt = st.selectbox("Filter variable", options=df.columns, index=df.columns.get_loc("fg3a_per_g")) min_filt = st.number_input("Minimum value", value=6, min_value=0) Using these values, I can filter the dataframe like so: tmp_df = df[df[corr_filt] > min_filt] And then, passing the temporary dataframe tmp_df into the figure instead of the original dataframe, we get: Correlations between efficiency and number of shots (filtered for high-volume shooters) This chart could be used to take a look at correlations between various stats. For example, to see that great 3 pt shooters are also typically great free throw shooters: Free throw accuracy vs 3pt shot accuracy Or that great rebounders tend to be shot blockers as well. It’s also interesting that the game has changed so that no modern players average many blocks per game. 
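The filter expression df[df[corr_filt] > min_filt] is just a row-wise boolean selection. A dependency-free sketch of the same logic over plain dicts (the player rows and stat values are made up, including a tiny-sample outlier like the ones discussed above):

```python
# Stand-in for the dataframe: one dict per player row (made-up values)
rows = [
    {"name": "A", "fg3a_per_g": 8.2, "efg_pct": 0.55},
    {"name": "B", "fg3a_per_g": 0.3, "efg_pct": 1.50},  # tiny-sample outlier
    {"name": "C", "fg3a_per_g": 6.8, "efg_pct": 0.52},
]

# Equivalent of tmp_df = df[df[corr_filt] > min_filt]
corr_filt, min_filt = "fg3a_per_g", 6
tmp_rows = [r for r in rows if r[corr_filt] > min_filt]
print([r["name"] for r in tmp_rows])  # ['A', 'C']
```

The low-volume outlier ("B", with its implausible 1.50 efg_pct) drops out, which is exactly why the filtered scatter plot becomes readable.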
Rebounds per game vs blocks per game Plotting rebounds and assists, they show something of an inverse correlation, and are quite nicely stratified according to position here. Assists per game vs rebounds per game Already we can see quite a lot of trends and correlations from our app. Lastly, let’s create some heatmaps to view general correlations between sets of columns of data. Generalised correlations with heatmaps Scatter plots are useful for seeing individual data points, but sometimes it’s good to just visualise datasets such that we can immediately see which columns might be well correlated, not correlated, or inversely correlated. Heatmaps are perfect for this job, by setting it up to visualise what are called correlation matrices. Since a heatmap is best at visualising correlations between sets of input categories, let’s use an input that will take multiple categories. As a result, st.multiselect() is the module of choice here, and df.corr() is all we need to create the correlation matrix. The combined code is: hmap_params = st.multiselect("Select parameters to include on heatmap", options=list(df.columns), default=[p for p in df.columns if "fg" in p]) hmap_fig = px.imshow(df[hmap_params].corr()) st.write(hmap_fig) And we get: It’s so clear which of these columns are positively correlated or not correlated. I also suggest playing with different colour scales / swatches for extra fun! That’s it for today — I hope that was interesting. For my money, it’s hard to beat interactive apps like this for exploration, and Plotly and Streamlit make it so easy to build these customised apps for my purposes. And keep in mind that these are just basic suggestions, and I am sure you could build something far more useful for your own purposes and preferences. I look forward to seeing them all!
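The df.corr() call above computes pairwise Pearson correlation over the selected columns. A stdlib-only sketch of what that matrix contains, using made-up toy columns rather than the real NBA data:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy columns (made up): df.corr() computes this for every column pair
cols = {
    "fga": [10, 12, 15, 18, 20],
    "pts": [11, 14, 16, 20, 23],   # rises with fga -> strong positive
    "age": [31, 24, 28, 22, 25],   # unrelated -> weaker correlation
}
corr_matrix = {a: {b: pearson(cols[a], cols[b]) for b in cols} for a in cols}
```

The diagonal is always 1.0 (each column with itself), and the symmetric off-diagonal cells are what px.imshow colours in the heatmap.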
https://towardsdatascience.com/explore-any-data-with-a-custom-interactive-web-app-data-science-with-sports-410644ac742
['Jp Hwang']
2020-09-06 18:43:43.276000+00:00
['Technology', 'Data Visualization', 'Data Science', 'Programming', 'Python']
1,782
What the world needs now…
Alt Text has been included as one of 10 weblogs in a study/project developing an open source search tool based on latent semantic indexing (LSI). From what I have seen, it is pretty promising. Some of the results may not contain the keywords you entered, but use the LSI algorithm to determine relevancy based on like words and topics. There are still some holes in it, as it is in development. It works with keywords only — there are no phrase searches allowed. Words that appear in only one post are not indexed, and the interface leaves a lot to be desired, but it is a very cool idea that I hope spreads.
https://medium.com/alttext/what-the-world-needs-now-e04462f6ce52
['Ben Edwards']
2017-04-23 06:13:41.952000+00:00
['Technology', 'Electric Car', 'Weblogs', 'Blogging', 'The Web']
1,783
The 5 most popular types of apps you can create with Overwolf
Gaming is on the rise, and user-generated content is becoming bigger than ever. More than 30,000 creators build gaming apps and mods with the Overwolf platform, and millions of players enjoy them every day. The demand for new and exciting gaming apps is huge, and creating them is now easier than ever before. So you might already have an established gaming website or product, and you might be asking yourself, “Would it make sense to bring it in-game?” Or, perhaps you have some cool idea for a gaming app, but you’re not sure if it can be built. In this post we’ll introduce the five most popular types of apps that people build using the Overwolf framework, and we’ll briefly go over some of the tools and resources they use. And who knows? It might just be that little spark you need to start building the gaming app of your dreams. 1. Analytics and Stats apps Many popular apps provide in-game statistics, track game progress, and analyze performance. Porofessor, for example, is a well-known app for League of Legends. It provides players with a variety of detailed statistics: from build recommendations before the game starts; through real-time information about their team members and opponents; and all the way to post-match analysis. Another popular app you may have heard of is R6Tracker, which provides real-time statistics for Rainbow Six Siege. Players who use it can view in-game information such as level, ranking, number of kills/deaths, win rates, and victory chances. Both apps started as websites widely used by players, so it made a lot of sense to bring their value in-game, using the Overwolf framework. Main tools used: Overlay windows One of the primary Overwolf tools used by these apps, the App Window API, provides the ability to display overlay windows inside a game. Analytics and Stats apps use it to display statistics and relevant in-game information to players, based on their current match and context. 2. 
Esports and Tournaments apps Many esports and tournament organizers turn to Overwolf to build apps for governing the competitions they’re running — apps that monitor who won and who lost, and when a game starts or ends. These apps can then report the outcome to the organizers, without relying on players and teams to report on their own results. G-Loot, for example, is an esports platform for PC. It lets players complete challenges and compete for prizes, in more than 25 popular games. Players can either compete against other players or play solo, challenging themselves to get better and better. Main tools used: in-game events tracker One of the main Overwolf tools used for Esports and Tournaments apps is the Game Events API. This tool tracks key events that occur during the game: match start, match end, kills, deaths, victories, defeats, damage dealt, in-game currency spent, and many more — depending on the game itself. This means that you can easily build an app for holding tournaments across multiple games. With match events being tracked and reported, all that’s left is for players to compete. 3. In-Game Guide apps In-game Guide apps are another type of app that utilizes the Game Events API. These apps can understand exactly what players are facing or experiencing at specific points along the game, and can display the most relevant guide content to assist them. It’s basically like looking up a guide for a game challenge online, just without the looking up part — and without leaving the game. A great example of this would be Icy Heroes, built by Icy Veins — a leading content website for Heroes of the Storm, Diablo games, World of Warcraft, and Hearthstone. The website offers extensive guides and strategies for Heroes of the Storm, but to find the content they need, players must leave the game and look it up on the website. The dev team behind Icy Veins recognized the opportunity and saw great value in having the content available to players in-game. 
So they decided to use the Overwolf framework and create an In-game Guide app to do just that. The Icy Heroes app supports Heroes of the Storm — and provides players with exactly the right guides and the most relevant information for their chosen heroes. The app also recommends which talent players should choose when they level up. 4. Companion / Utility apps Apps in this category allow adding sound customization, map layers, and many other special in-game features. One app that does exactly that is Apollo — an interactive audio app for League of Legends, Fortnite, and Rocket League. It allows players to add custom announcements and music clips on various game events, giving their own personal touch to the game and enriching their overall gaming experience. Among Map is another interesting example, its main feature being a high-quality map added as an in-game layer. This allows players to easily navigate through the map and add important notes and markers. 5. Replay & highlight apps This category features apps that can automatically capture and record game highlights, or enable players to manually do so themselves. One of the prominent apps in this category is called Outplayed, and that’s exactly what it does — capturing players’ top moments and biggest plays in dozens of games. It also lets players manually record on demand. Once all highlights are recorded, players can browse through the captured clips and easily share them on social media. Main tools used: OBS game capture, in-app media player and social share To make all this happen, three Overwolf tools are primarily utilized: the Media Replays API, the Media Player API, and the Social Share API. First, the Media Replays API. This tool is basically what enables capturing short, OBS-based video replays of the game that’s currently running. Then, the Media Player API comes into play. This tool allows adding a media player element to an app window and playing video files. 
In this case, it allows players to play the clip they’ve just captured. Last for this category, the Social Share API. This is the API that allows players to share the video clips on Reddit, YouTube, Discord or other social media channels.
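To make that pipeline concrete, here is a minimal sketch of the capture, play, and share steps. The real Media Replays, Media Player, and Social Share APIs are asynchronous and only exist inside the Overwolf client, so stubbed stand-ins are used here; the file names and the "Discord" channel are made up for illustration.

```javascript
// Sketch of the capture → play → share pipeline, with stubbed
// stand-ins for the Media Replays, Media Player, and Social Share
// APIs (the real ones are asynchronous and only exist inside the
// Overwolf client). File names and the share channel are made up.
const replays = {
  captured: [],
  // Stand-in for capturing a short replay around a highlight event.
  capture(eventName) {
    const clip = { file: `${eventName}-${this.captured.length + 1}.mp4` };
    this.captured.push(clip);
    return clip;
  },
};
// Stand-in for playing a clip in an in-app media player window.
const player = { play: (clip) => `playing ${clip.file}` };
// Stand-in for posting a clip to a social channel.
const share = { post: (clip, channel) => `shared ${clip.file} on ${channel}` };

const clip = replays.capture("triple_kill");
console.log(player.play(clip)); // playing triple_kill-1.mp4
console.log(share.post(clip, "Discord")); // shared triple_kill-1.mp4 on Discord
```

In a real app, each of these calls would take a callback and return a result object rather than a plain value, but the overall shape of the flow is the same.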
https://medium.com/overwolf/the-5-most-popular-types-of-apps-you-can-create-with-overwolf-78970491781e
['Liri Katz']
2020-12-24 14:50:10.537000+00:00
['Startup', 'Apps', 'App Development', 'Technology', 'Gaming']
1,784
The ‘Flying V’ Aerodynamic Airplane Makes Successful Maiden Flight
The Flying-V In the Flying-V — originally an idea of TU Berlin student Justus Benad during his thesis project at Airbus Hamburg — the passenger cabin, cargo hold, and fuel tanks are integrated into the wing structure. The design is shorter than an Airbus A350, yet it has the same wingspan. This allows the Flying-V to use the existing infrastructure at airports, such as gates and runways. The Flying-V carries about the same number of passengers — 314 in the standard configuration — and the same amount of cargo, 160 m3. Project leader at TU Delft, Dr. Roelof Vos: “The Flying-V is smaller than the A350 and has less inflow surface area compared with the available amount of volume. The result is less resistance. That means the Flying-V needs less fuel for the same distance.” “One of our worries was that the aircraft might have some difficulty lifting off, since previous calculations had shown that ‘rotation’ could be an issue,” Roelof Vos, assistant professor at the aerospace engineering faculty of Delft University of Technology, who led the project, explained in a statement. “The team optimized the scaled flight model to prevent the issue, but the proof of the pudding is in the eating. You need to fly to know for sure,” he said. Remotely controlling the aircraft, researchers managed to take off at a speed of 80 km/h, while the aircraft’s flight speeds, angles, and thrust were as planned, they noted. Marleen Hillen, MSc Aerospace Engineering / Photo by (www.tudelft.nl) “We will continue to hear and see a lot of the Flying-V in the near future.” Marleen Hillen, MSc Aerospace Engineering, MSc Track: Flight Performance and Propulsion Researchers also worked to optimize the plane: to improve telemetry, the team had to shift the aircraft’s center of gravity and modify its antenna.
There is still work to be done to refine the aircraft before it could take to the skies with passengers aboard: researchers said that the test flight showed that the aircraft’s current design allows for too much “Dutch roll,” which causes a rough landing. The student found the practical part of the project particularly enjoyable. First, he had to laminate the nacelle, the housing of the engine, himself, after which he could take the measurements in the wind tunnel. Van Empelen: “One of the biggest challenges was the strict deadline. You only have a short time slot in the wind tunnel, so you have to prepare every single detail in advance.” Van Empelen’s research provides a solid basis for further extensive research into the aerodynamic behavior of the Flying-V’s landing gear and winglets. Meanwhile, the Flying-V has been further developed and the first test flight has successfully taken place. Sjoerd Van Empelen, MSc Aerospace Engineering / Photo by (www.tudelft.nl) “Hopefully, we will see the plane fly full-scale someday in the future.” Sjoerd Van Empelen, MSc Aerospace Engineering, Track: Flight Performance and Propulsion Researchers intend to use the data gathered from the test flight to build an aerodynamic model of the aircraft, allowing them to program it into a flight simulator for future tests and to improve future flights. The team will conduct more tests on the model and hopes to give the Flying-V sustainable propulsion, given that the design lends itself to carrying liquid hydrogen rather than kerosene.
https://medium.com/datadriveninvestor/the-flying-v-aerodynamic-airplane-makes-successful-maiden-flight-47af91371154
['Jesús Salazar']
2020-10-30 08:12:45.888000+00:00
['Ideas', 'Transportation', 'Technology', 'Energy', 'Innovation']
1,785
Slack Channels For Developers
The way we communicate has changed and is still changing day by day. For developers, there are many applications, and Slack channels have become a new way for developers to communicate. Slack is a chat application for whole companies, designed to reduce email by providing chatrooms for discussion. This is great for developers, as many people have started Slack channels for programming where they can discuss technologies or ask questions. Below I will list all the great communities that you can join. NYC CODERS If you live in New York City, then this is for you. A community of developers prepping for coding interviews, participating in mini-hackathons, building portfolio projects, and attending software engineering panels TOGETHER. If you are not from New York, they also do online meetups. Meetup Page: https://www.meetup.com/nyc-coders/ Slack: https://join.slack.com/t/nyc-coders-meetup/shared_invite/enQtODEzMDQ3ODcxNDc2LWQ0NDNiZDRlYjllOGU2OGRmNDUxZDVmZWZmYmM2NDQ3ODg5N2M2NDg2NDIyYmU0YzA5Y2Q2MWE2ODNiOTQ0YjM ReactJS A community of React and React-Native developers. Slack: https://reactjsnews.slack.com/#/ NYCTech NYCTech is a Slack account for the New York-based tech community. Website: https://www.nyctechslack.com/ DEVOPS Find DevOps jobs, events, and articles, or make connections with 17,418 members in the Slack community. Website: https://devopschat.co/ DevChat DevChat is a community of developers asking and answering questions, solving challenges, and having a good time learning together. It’s a Slack team, which means we talk in real time, share links and resources, and keep things organized. Website: https://devchat.dev/ iOS Developers They’re about three things: being open, helping each other, and sharing knowledge. If this sounds like you, then request an invite.
Website: https://ios-developers.io/ Ruby on Rails Link 13,114 Ruby on Rails developers from all over the world, including avid OSS contributors, full-stack engineers, startup founders, backend engineers, and people just learning Ruby on Rails. Website: https://www.rubyonrails.link/ WeLearnJS A self-motivated group of individuals who support each other through the process of learning JavaScript with an ongoing series of projects/challenges. We believe that the best way to learn and become an expert in something is by doing, and we also believe that you truly cement your knowledge when you can teach others. Our group is based on this simple concept — learn by doing, then teach others. Website: https://slofile.com/slack/learnjs Data Science Community The biggest data science Slack community, with over 3,700 members — chat about anything and everything to do with data & machine learning. Website: https://slofile.com/slack/dscommunity Microsoft Developer Chat on Slack The Slack community for developers working with Microsoft technologies. Website: https://slofile.com/slack/msdevchat Node.js Anyone with any amount of interest in Node.js is welcome to join. This is meant to be a very open and collaborative community. A real-time discussion place for literally anything Node.js. Website: https://slofile.com/slack/node-js I hope you found this blog helpful. Please do suggest any missing channels via my email singhamritpal49@gmail.com or connect with me on LinkedIn: https://www.linkedin.com/in/amritpal-singh-2108b4168/
https://medium.com/@singhamritpal49/slack-channels-for-developers-c50ff9aec929
['Amritpal Singh']
2020-05-18 20:26:48.922000+00:00
['Techcommunity', 'Slack', 'Technology', 'Programming', 'Nyccoders']
1,786
Chatbots: Where Are They Now?
What They Claimed: Chatbots would replace human workers. Early on, some painted a picture in which chatbots replaced human workers en masse. As recently as fall of 2018, some tech insiders continued to speculate about the potential of intelligent machines to eliminate jobs. For some, this was a point of concern; others saw it as a potential to save on labor costs. The Reality So have chatbots changed the face of the labor market? Overall, it’s complicated. Chatbots may have reduced the need for humans to work shifts outside of normal business hours, such as after-hours and weekend support. And companies may overall need fewer front-line employees to serve their customers. But humans are still essential to many customer-facing functions. Like many groundbreaking technologies, chatbots’ true value has come not from replacing humans but from augmenting them. This is especially true for teams where quickly executing repetitive tasks is a key concern, such as customer support. “By using bots to deal with lower-level inquiries, support teams can spend more time answering complex questions that are more valuable to the business,” explains Mike Murchison, CEO of Ada Support. What the Future Holds Some of the most exciting chatbot use cases, in fact, are the ones that intentionally factor in human input. At Tenable, a leading cybersecurity company, bots play a crucial role in directing helpdesk questions to the correct experts. If the bot can’t answer an employee’s question with existing knowledge base articles, they can use the bot to contact a Subject Matter Expert. “There’s an ‘Ask An Expert’ button that posts to a different Slack channel, and all of the channel members are SMEs who could potentially help with the question,” explains Bill Olson, a Product Manager who helped design this intelligent helpdesk. “If two people ask for expert help, their questions will appear in that Slack channel. 
Another employee can go into that Slack channel, see the two questions, and say, ‘I know the answer to this one.’ They can hit the ‘Claim’ button, which will open a thread inside of Slack so they can have a conversation [with the person who asked the question].” Other companies like Nutanix use chatbots to facilitate approval workflows such as approving the provisioning of virtual machines. By pulling these processes into a chat app — where today’s employees spend the majority of their time — bots can make it easier for human workers to accomplish everything they need to do more efficiently. What They Claimed: Chatbots would create better experiences. Perhaps the most exciting claim about chatbots was that they would totally redefine common experiences like returning an item or requesting time off of work. They’d offer instant, intelligent help to both employees and customers — and at a lower cost to businesses. The Reality From the start, however, designing a useable chatbot interface proved challenging. “There are technical and UX problems that limit the efficacy of a text-based, conversational UI,” says Dave Feldman, Vice President of Product Design at Heap. Without sufficient AI to power things like Natural Language Processing — where users can talk to a chatbot the way they would talk to another human, instead of with rigid commands — it can be hard to feel like you’re having a high-quality experience with a chatbot. Similarly, in order to actually improve experiences, chatbots have to alleviate some of the work that would otherwise fall to humans. It’s nifty if you can ask a chatbot to reschedule your flight, but if a human still has to input the request into a system or make the change manually, the chatbot is relatively useless. Achieving this requires a fairly sophisticated degree of automation and integration, something that enterprises still struggle with. 
Only 16% of enterprises have deployed multiple automation use cases at scale, according to a Capgemini study. This lack of automation may be to blame for the failure of high-profile chatbot projects like Facebook’s M. “Facebook’s goal with M was to develop artificial-intelligence technology that could automate almost all of M’s tasks,” writes Alex Konrad of Forbes. “But despite Facebook’s vast engineering resources, M fell short: One source familiar with the program estimates M never surpassed 30% automation.” What the Future Holds Thankfully, this is one area where the technology is actually quite promising — especially as more automation platforms see bots as a fundamental part of their promise to help lines-of-business staff work more efficiently. “To understand how bots and automation go hand-in-hand, you have to jump into the day-to-day lives of your target users,” says Ee Shan Sim, a product manager for automation platform Workato. “You have to ask, ‘Okay, if I were a sales manager, what functionality would I want?’ Or ‘As a project manager, which of my daily tasks could a bot make easier?’” This process also involves understanding the way language impacts the user experience — especially for first-time chatbot users. “[With chatbots], user inputs are required, but you want them to be intuitive. The challenge is finding a balance between how powerful a chatbot’s automations should be vs. how intimidating it is to the user to go through the workflow. [You have to] continually ask, ‘What makes sense for a first-time user? What’s going to look weird to them?’,” Sim continues. Similarly, businesses should consider the power of an instant, if imperfect, answer. Despite not offering perfect knowledge all the time, chatbots can still play a valuable role in a world where customers and employees alike expect fast, seamless experiences. “There’s a misconception that chatbots aren’t good enough to be customer-facing,” says Murchison.
“In reality, customers are more likely to interface with a bot, because they know they’ll get an instant answer.” What They Claimed: Everyone would love chatbots, and adoption would skyrocket. Experts predicted that because of their potential to help cut costs and deliver top-notch experiences, they’d be the hottest new enterprise tech. In fact, a 2017 Deloitte report indicated that 67% of professionals expected chatbots to outperform mobile apps in the next five years. The Reality As businesses have realized that (like any new technology) chatbots come with their own challenges, adoption has slowed. Many companies have looked to tech industry leaders like Facebook, which has given up on its chatbot initiatives, and followed suit or at least scaled down their chatbot efforts. But others aren’t ready to abandon ship. It really depends on what line of work you’re in — for example, 95% of content management professionals surveyed said they still planned to adopt chatbots by 2019. Similarly, the banking sector continues to debut high-profile chatbot projects like Bank of America’s Erica, which managed to attract 1 million users in just three months. What the Future Holds As we move into 2019, chatbot adoption will probably continue to boom in some sectors and slow in others. Some experts believe that adoption ultimately boils down to how well you communicate the purpose of the bot to your prospective users, whether they’re customers or employees. “[Bank of America] had email campaigns for some time saying Erica is coming; here’s what it is [and what it can do],” says Emmett Higdon, director of digital banking at Javelin Strategy & Research. “They did a good job prepping the audience for its introduction.” Experts also agree that successful projects like Erica have high user growth because of how well-integrated they are with other services, like content libraries, search tools, and AI services.
So as the automation and cognitive technologies surrounding chatbots improve, adoption will, too, as long as companies can keep pace with user demands. That’s one thing experts broadly agree on: whether a chatbot is customer-facing or internal, its long-term success depends on how well it can anticipate what users want — and then go above and beyond their expectations. For example, Higdon imagines a scenario where a customer asks Erica how much they spent on Uber last month. To really improve adoption and retention, the bot can’t just name a dollar amount. “[It should be able to say] ‘By the way, that’s twice as much as you’ve spent in the last three months, is there something wrong here?’ Something that gets the customer to go ‘Hmmm’ and think more about their financial health overall,” he says. Moving Forward With Chatbots: Patience Is the Answer Overall, chatbots may have evolved differently than we expected. They haven’t turned into a revolutionary tool as quickly as many thought they would, leaving businesses disenchanted and disappointed with marginal improvements. But as Intercom CEO Eoghan McCabe points out, this lack of buzz around chatbots is par for the course when it comes to emerging technologies. “I don’t think there’s ever been a new technology that hasn’t followed that cycle,” he comments. “We insiders who get so excited about the future will always jump on the hype and excitement ahead of its practical reality. Virtual reality, self-driving cars — [all of these] technologies will get less sexy before they get real.” Curious about how bots and automation can change your business processes? Download our free ebook.
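As a closing illustration, the Tenable-style escalation flow described earlier (answer from the knowledge base when possible, otherwise post the question to an expert channel where an SME can claim it) can be sketched in a few lines. The article titles, keywords, and the simple keyword match below are all hypothetical stand-ins for a real bot's knowledge base and language understanding.

```javascript
// Hypothetical knowledge base; a real bot would search actual articles.
const knowledgeBase = [
  { title: "Reset your VPN password", keywords: ["vpn", "password"] },
  { title: "Request a new laptop", keywords: ["laptop", "hardware"] },
];

// Answer from the knowledge base if possible; otherwise escalate the
// question to the subject-matter-expert channel.
function handleQuestion(question, smeChannel) {
  const q = question.toLowerCase();
  const hit = knowledgeBase.find((a) => a.keywords.some((k) => q.includes(k)));
  if (hit) return { type: "article", title: hit.title };
  smeChannel.push({ question, claimedBy: null });
  return { type: "escalated", channel: "ask-an-expert" };
}

// Mirrors the "Claim" button: an expert takes ownership of a question.
function claimQuestion(smeChannel, index, expert) {
  smeChannel[index].claimedBy = expert;
  return smeChannel[index];
}

const smeChannel = [];
console.log(handleQuestion("How do I reset my VPN password?", smeChannel).type); // article
console.log(handleQuestion("Why is the wifi slow on floor 3?", smeChannel).type); // escalated
console.log(claimQuestion(smeChannel, 0, "bill").claimedBy); // bill
```

In the real flow, the claim step would open a Slack thread between the expert and the person who asked; here it just marks the question as taken.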
https://medium.com/@Workato/chatbots-where-are-they-now-550ac9b9f8
[]
2019-02-01 14:32:59.869000+00:00
['Innovation', 'Slack', 'Technology', 'Bots', 'Chatbots']
1,787
Smart Manufacturing Market Size Worth $514.3 Billion By 2027
The global smart manufacturing market size is estimated to reach USD 514.3 billion by 2027, registering a CAGR of 11.8% over the forecast period, according to a new study by Grand View Research, Inc. The growing adoption of digital technologies such as industrial IoT, autonomous robots, and big data analytics to enable the fourth industrial revolution is the prime driving factor for the market’s growth. Moreover, growing emphasis on increasing production efficiency and gaining visibility across the entire value chain will also boost the prospects of smart manufacturing. In addition, the availability of advanced technologies such as 3D printing, manufacturing execution systems (MES), and plant asset management solutions to small and medium enterprises is further accelerating the market growth. The positive impact of government initiatives and investments to promote smart manufacturing adoption has been one of the most influential factors driving the market. The fact that both industrialized countries and developing economies are aggressively pursuing this avenue is expected to further propel the growth. For example, China is reportedly investing over USD 3 billion in advanced manufacturing under the Made in China 2025 program. Similarly, the SAMARTH Udyog Bharat (Industry 4.0) initiative is being promoted by the Indian government to transform the manufacturing industry in the country. The automotive and aerospace and defense industries are the leading growth avenues for solution providers, with industries such as oil and gas and industrial equipment manufacturing rapidly scaling their digitalization efforts. Moreover, by implementing smart technologies such as 3D printing, American auto manufacturer General Motors (GM) claims to have saved more than USD 300,000 in 2019. To drive this technology, GM has entered into a partnership with Autodesk Inc. to produce economical and lighter vehicle parts using 3D printers.
With the proliferation of 3D printing, simulation, and modeling in manufacturing and design, these industries are expected to continue to maintain a significant growth rate over the forecast period. Though numerous solutions are available in the market, digital twin and real-time analytics are anticipated to spearhead the penetration of digitalization in these industries. Click the link below: https://www.grandviewresearch.com/industry-analysis/smart-manufacturing-market Further key findings from the report suggest:
https://medium.com/@marketnewsreports/smart-manufacturing-market-dddd44a6f63c
['Gaurav Shah']
2020-10-13 09:14:34.140000+00:00
['Technology', 'Software', 'Hardware', 'IoT', 'Big Data']
1,788
The Iteration Imperative
The Iteration Imperative Why digital products should be created iteratively Back when people first started building complex software, the natural approach was to do this in a planned methodical way that resembles physical engineering. You make a detailed plan, validate the plan in theory, build it, test it in practice, and then deliver the final product to the customer. While there are variations of this approach, it is commonly called the “waterfall” model, since you have a cascade of stages that the process goes through, and once you’ve “fallen” from one stage to the next, you generally don’t go back. Over time, people realized that this approach was not optimal for the development of digital products. After all, software is more malleable than physical items. You can’t easily add another story to a house once it’s built. It’s much easier to add new functionality to software — especially in today’s world of software that’s either run directly in the cloud or at least delivered and updated via the Internet, where you don’t have to physically ship an updated software package to your customers. Additionally, developing and improving digital products is risky. It is generally unknown what the best way to deliver value through the product will be. More concretely: when starting out, we don’t know what functionality and user experience the product should provide in order to be most beneficial to its users. The more innovative a product is, the more risk there is. This is another key difference to many physical engineering projects, that are often more similar to previous constructions. Today, many companies — big and small, incumbents and challengers, old and new — have adopted iterative software development practices. They use Scrum or similar practices, and ship new versions of their software more frequently than as one big annual release. On the surface, it seems they are now working iteratively… but they are not. 
To truly develop products iteratively, you have to start earlier. The scrumfall trap The trap that many companies fall into, whether they transitioned to an iterative, “Agile” development methodology or started out with it, is the “scrumfall”. In the “scrumfall”, software is delivered incrementally, often using the Scrum methodology, but that delivery is embedded in a waterfall-like process. The backlog items that the Scrum team works on to deliver get specified and prioritized top-down without participation of the Scrum team. There might be a “refinement” meeting in which the Scrum team asks clarifying questions and ensures that they can confidently deliver the item, but that’s it. These items were first ideated and prioritized (by business stakeholders or product managers), and then designed (by product managers, business analysts, and/or UX designers), before they landed on the Scrum team’s board. Once delivered, they will often go through separate, waterfall-like test and release stages. The different stages are “owned” by different stakeholders, which necessitates a handover and therefore enforces a waterfall-like structure. Often, this process will be overall more iterative than a traditional waterfall in the sense that what moves through the waterfall isn’t the whole software system, but rather individual features or “projects”. At any given time, there will likely be some features being ideated, some in design, some in development, and so on. Accordingly, releases happen more frequently than in traditional waterfall. For each individual feature, however, the process is still very waterfall-like. There are few feedback loops before something is built and rolled out. The scrumfall has iterative delivery, but not iterative discovery. Iterative discovery Product discovery is the process of understanding the needs of users and customers and determining how value can be best delivered to them. 
It includes researching customers and their needs, identifying opportunities or problems, ideating solutions, and validating them. Product discovery starts with generating insights into the problems to be solved. This could be done by reviewing existing information or conducting original research. In any case, based on this information, a product vision is formed, describing beneficial outcomes if this problem is solved in a better way. The vision provides the north star for then discovering the best solution. This discovery process should be iterative. You should “think big, start small” (one of Intercom’s product principles), meaning you should have the long-term vision in mind but make progress in small steps, not all at once. The reason the discovery process should be iterative is that most of the ideas we have to solve the problem will turn out not to work. We therefore need to fail early and often in order to double down on the ideas that work and kill the ones that don’t. If we were certain that our ideas would work out, if there was no uncertainty whether our ideas will deliver the value we hoped for, then a waterfall-like process could work. However, since a lot of them won’t, iteration is crucial. This means that for every problem definition and every solution idea, you should identify what the most critical assumption is behind it, and then validate or invalidate that assumption as quickly and cheaply as possible. With that learning, you can then either double down or change course. This loop is at the heart of iterative discovery. The misunderstood MVP Product people who don’t fully understand this need for iteration often ask “what should be the scope for the Minimum Viable Product (MVP) for this idea?” This is the wrong question, though. You shouldn’t just have one MVP. You should have a series of MVPs, each testing another assumption. The first of these MVPs are likely not going to be real products, but rather prototypes of some sort. 
This is at the heart of the Lean Startup build — measure — learn loop. Identify the most critical assumption, build the smallest thing possible to test it, learn from it, iterate. In Eric Ries’s own words: To apply the scientific method to a startup, we need to identify which hypotheses to test. I call the riskiest elements of a startup’s plan, the parts on which everything depends, leap-of-faith assumptions. The two most important assumptions are the value hypothesis and the growth hypothesis. (…) Once clear on these leap-of-faith assumptions, the first step is to enter the Build phase as quickly as possible with a minimum viable product (MVP). In contrast to these MVPs, the scope for the first “real”, shipped version of the product should be determined in a different way. Here, you don’t just want to test a single assumption. You want to ship a product that makes sense from end to end and that customers will want. Now, I am not saying that you should gold-plate your v1. The scoping for a v1 is likely going to be more of an 80:20 exercise: can we get 80% of the potential customer value with 20% of the effort? If you’ve validated the critical assumptions using a series of prototype MVPs before you ever ship v1, you should be pretty confident in how to provide customer value, allowing you to make this tradeoff decision. The elusive v2 Another way that lots of organizations fail to iterate enough is not revising features once their initial iteration has been shipped. This means that the build — measure — learn cycle stops when v1 is shipped. This can break the trust of team members who had been promised that something would be fixed in v2 (often designers and engineers). It is also not a good way to build a great product. It means you are leaving a “good enough”, 80:20 scoped feature in the product in order to chase the next shiny thing.
The reason organizations do this is because they feel they never have enough time and resources to do all the things they would want to be doing. So once something has been shipped as “good enough”, let’s not waste more time with it, right? After all, we’ve already built the 20% that deliver 80% of the value! This thinking is a fallacy, however. It is caused by not focusing enough. A great product isn’t one that does a lot of things “well enough” (meaning “mediocre”). A great product focuses on a few things and really nails them. So if you’ve found something that works, consider doubling down instead of moving on. Make a good idea a great one — by iterating. Another reason to avoid always moving on to the next thing instead of iterating is that it increases the complexity of the product. Every feature that is added multiplies the complexity of the product, which incurs additional efforts in design and engineering down the line. Improving an existing feature by iterating might add some complexity, but rarely as much as adding something new. Iterative does not mean incremental One potential risk when embracing the concept of iterative product development from discovery through delivery is to settle for only incremental changes. After all, it’s easy to be iterative by continuously making small improvements to the product. This approach is great if you want to optimize an existing experience. However, you can’t build a great, novel product that way. Iterative doesn’t have to mean incremental. Think about the development of the first airplane, clearly a completely novel product. However, the development was very much iterative: prototype after prototype was built and tested, until finally a working model was identified. Of course, if we take this analogy further, what these prototypes were testing was the feasibility (is there a technology that can achieve what we want to achieve?), because the riskiest assumption was the feasibility of manned flight. 
The same iterative approach is also possible to test assumptions of value, usability, and business viability. This way, even a big, long-term vision can be realized incrementally. It’s important in this approach to move away from the idea that every iteration has to produce working software. That is true only in delivery mode. During discovery, every iteration should produce learning. This could be achieved for example through creating prototypes or conducting research. In order to achieve more than incremental progress in an iterative way, it is paramount to have a long-term vision and a strategy to achieve that vision. That strategy can then be broken down into assumptions, and they can be tested iteratively, starting with the most critical ones.
https://jefago.medium.com/the-iteration-imperative-46d1dad21b2a
['Jens-Fabian Goetzmann']
2020-06-21 19:48:31.703000+00:00
['Technology', 'Agile', 'Product Management', 'Leadership', 'Startup']
1,789
Making the Most of Data for Conservation
You’ve probably heard it a million times, but I’ll say it again anyway: We’re in the information age, where data and technology are king. They are the keys to most of the countless innovations we’ve seen in society over the past couple of decades. As part of the Center for Conservation Innovation team at Defenders working at the intersection of science, technology and policy to improve conservation, the information age makes us question: Are conservationists using data and technology to their full potential? The answer is… complicated. There is a lot of data out there — often even more than researchers and practitioners can process. Conservationists collect and generate a ton of that data, but making that data widely available so that it can be easily used remains a challenge. © Tom Egan/Defenders of Wildlife The Mojave desert tortoise is a good example. It’s a species we know quite a bit about: We know where it lives, what laws impact its habitat, what it eats, etc. What is often the more difficult part of protecting and restoring this species (and many others) is combining and putting all that knowledge to use. The two keys to solving this problem are to make that data more accessible and then to make sure it is used effectively. One of the best ways we can do that is by creating more publicly accessible APIs (Application Programming Interfaces). That’s a lot of tech jargon, but all it really means is an application/website that you access without the user interface. In other words, a public API allows outside parties to request information from a server without accessing it through a web page on a screen. Lian Law/NPS For example, the U.S. Fish and Wildlife Service has a website called ECOS that displays information about threatened and endangered species in tabular format. However, the ECOS website isn’t the only way to access that information.
There is also an ECOS API which allows outside parties to access the same data that populates those tables. At CCI, we use that API to automatically add species ranges to web maps in our Collaborative Mapper (formerly Refined Range Mapping) app. Being able to quickly and easily add the range for the desert tortoise to a map helps users collaboratively refine it so that it’s more accurate. More accurate range maps help policy and other decision makers make more informed choices on how best to conserve the species. APIs can contribute a lot more than simply providing data. They can also do “work”. At CCI, we use the Google Earth Engine API to help us detect landscape change over time. For example, we can use the ECOS API to allow users to import the Mojave desert tortoise’s range (or that of any other species). The user can then run an analysis of this range, and using the Earth Engine API we can visually demonstrate what changes happened to its habitat over a specified amount of time. There are endless ways to apply the vast amount of conservation data we have at our disposal. And the best way to maximize the number of applications that data finds is by making it accessible through public APIs as often as possible. Making functionality and data available through this channel gives the greatest amount of freedom to developers who want to use it in ways never imagined. Once data and functionality are out there, even novice developers can make effective use of them through an API. USFWS For obvious reasons, some work and data are proprietary. But whenever they aren’t, the best way to make sure we can use what we know about desert tortoises and other species is to make that information available through APIs when possible. That way, we can find the best solutions to the conservation challenges faced by the desert tortoise and all other imperiled wildlife, and give them a chance to survive and recover.
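To make the public-API idea concrete, here is a minimal sketch of how a client might build a query URL and pull mapping-relevant fields out of a JSON response. The endpoint path and response field names below are hypothetical stand-ins, not the actual ECOS schema, and a canned object stands in for a live HTTP call:

```javascript
// Hedged sketch: "/species", "name", "commonName", and "listingStatus"
// are invented placeholders, not the real ECOS API schema.
function buildSpeciesQuery(baseUrl, commonName) {
  const params = new URLSearchParams({ name: commonName, format: "json" });
  return `${baseUrl}/species?${params.toString()}`;
}

// Pull out just the fields a mapping app would need from a JSON response.
function extractRecords(responseJson) {
  return responseJson.results.map(r => ({
    species: r.commonName,
    status: r.listingStatus,
  }));
}

// A canned response stands in for a live HTTP call.
const sample = {
  results: [{ commonName: "Mojave desert tortoise", listingStatus: "Threatened" }],
};
console.log(buildSpeciesQuery("https://example.org/api", "tortoise"));
console.log(extractRecords(sample));
```

The point of the sketch is the shape of the workflow: any developer, given only the URL pattern and the response format, can wire species data into their own tool without ever touching the agency’s website.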
https://medium.com/wild-without-end/making-the-most-of-data-for-conservation-a25c58a3300c
['Defenders Of Wildlife']
2020-12-23 22:19:18.884000+00:00
['Technology', 'Conservation', 'Hear From Our Experts', 'Data', 'Wildlife']
1,790
This LED Smart Mask Is What’s Distracting Me Right Now
Whatever the outcome of this year’s presidential election, we’re all going to be wearing masks for a while. So you might as well wear one that you can control with your smartphone. In The Bold Italic this week, Sophia Smith spoke to designer Timothy Cochran, whose company created both a fiber optic mask and an LED “smart mask” that you control with an app. Read the full interview.
https://debugger.medium.com/this-led-smart-mask-is-whats-distracting-me-right-now-eaf60135dc21
['Megan Morrone']
2020-11-06 20:13:22.085000+00:00
['Masks', 'Technology']
1,791
How to Find the Right Technology Partners for Your Startup
Every successful business is built on successful partnerships. Particularly in the IT sector, choosing the right technology partner for a startup is a big decision. You need to ensure they add value to your business and offer services that will help your business grow. There are several factors to consider while choosing a technology partner for startups. What to Look for in Your Technology Partner Strong Business Understanding A suitable tech partner for your business will have plenty of experience helping clients across multiple industries meet their business goals. They have a strong understanding of their role: supporting business agility and reducing costs by modernizing clients’ IT infrastructure and applications. The technology partner you choose should have knowledge and awareness of your business domain and your target audience, and know how to help you grow through their expertise. Working Experience Entrepreneurs choosing technology partners for their startups tend to look at the cost first. However, the truth is experience matters more than cost because, with an experienced partner, you can eventually recover your partnership expenses. An experienced tech partner understands how technology engages with existing infrastructure and systems and can provide a firm plan for your business. To understand the duration and quality of experience a technology firm has, you can research them a little to find out about their past customers and clients and understand the types of successes they have achieved. If you find them on the first page of Google and they have a list of completed projects with businesses similar to yours, that is a good sign. Go through their feedback and reviews to get a better understanding of their service offerings and quality.
Designing Solutions on a Budget The technology partner you choose should understand the importance of budgets and costs while building solutions for you. Small businesses and startups usually have a lean budget, so the design needs to be broken into smaller chunks. This way, you can at least roll out your minimum viable product within your budget. As business and audience size increase, the solution can be scaled up to match them. Deadlines and TAT A technology partner should get along with your team quickly and turn deliverables around in a short time. While screening partners, check how quickly the tech firm developed past projects and solved business problems, and how they respond to requests for changes. Also, look into the original project deadlines and whether the technology partner adhered to those timelines. What could some of the causes for delay have been? Speedy delivery alone is not the goal. The aim is to understand requests, deliver high-quality outputs and respond to change requests in the shortest time. Trust and Transparency You should have open conversations with your technology partners. This would include conversations about resources, time management, timelines, and business goals. Every tech partner you choose should keep you updated on any changes in timelines or deliverables through status reports. It helps if your tech partner has a simple roadmap of project milestones to aid non-technical clients in understanding how the implementation and different stages will look in the end. Customer Support Unexpected downtime and app failure are huge problems in today’s digital-first businesses. They can frustrate customers and negatively impact services and your brand. A good technology partner for startups will ensure that they can offer product support even after the development is complete.
This includes maintenance services to ensure operations run smoothly, along with analytics that can help you scale up or make strategic decisions. They understand that their solutions need to match the pace of your business. Considering each of these factors will help you make the right choice for your business.
https://medium.com/@burgerdoreen0/how-to-find-the-right-technology-partners-for-your-startup-c4dbd8c17491
['Doreen Burger']
2021-07-05 04:58:03.469000+00:00
['Startup', 'Technology Companies', 'Technology', 'Technology Partner']
1,792
How to Learn Cybersecurity
Cybersecurity is a Journey Let’s face it, cybersecurity can be really hard to learn. Not only is it broad and deep, but it consists of hundreds of different fields in technology and computing. I get a lot of questions asking: “What is the best course to take for learning cybersecurity?” That’s a tough one to answer, because the real answer is: there is no course! Cybersecurity is a journey. It’s a field that is constantly changing, where you’re always having to learn new technologies, tools, and methods to stay in the game. Everybody you ask is going to give you a different answer. And the reason for that is everyone’s journey in cybersecurity is different. It’s almost like asking a bunch of martial artists how to fight. They will all give you very different answers and recommendations depending on where they came from and how they were trained. With that in mind, I’m going to show you some approaches you can use to overcome the challenge of learning cybersecurity, as well as the overall mindset you will need to maintain if you want to be successful on your own journey in the field. Why Cybersecurity is So Hard to Learn The number one reason why cybersecurity is hard to learn is because it consists of so many different fields, each with its own unique stack of skills. Think of it this way: every component within each skill stack could be its own concept, tool, or even an entirely new field. A good example of this is network security. In network security, you may encounter iptables, which lets you set packet-filtering rules in Linux; PCAPs, or packet captures, which are used to take static snapshots of data in motion; TCP, or Transmission Control Protocol, which is used to segment data into conversations between devices; BGP, or Border Gateway Protocol, which is used to govern the routes between autonomous systems on the internet; or switches, which are used to connect physical devices together through cables and relay Ethernet frames between them.
There are so many different components and concepts that fall under network security, and the examples I gave you are only a few of them. The list goes on and on. Not only that, each of the concepts that I’ve mentioned can itself be broken down into smaller bundles of knowledge, which can then be broken down into even further bundles of knowledge. You get the idea. Use Skill Stacks for Your Cyber Training The idea of skill stacks applies to all the different subfields in cybersecurity. What makes it even more complicated is that all the stacks are interrelated to one another, kind of like a skill matrix. If you want to learn a high-level skill, like penetration testing, you will first have to master many different skill stacks before having a solid enough baseline to really understand penetration testing well. This applies to almost all cyber-specific areas of concentration like privilege escalation, security monitoring, incident response, threat hunting, data protection, insider threat, Open-Source Intelligence (OSINT), malware analysis, and so on. It’s impossible to know everything about cybersecurity because there are just too many different subfields and concepts to know. It can very well take you 10 to 20 years to master just a few of them, at which point you just don’t have the time or interest to learn any of the other fields. The reality is, if you want to be good at cybersecurity, I recommend starting off by focusing on just one or two areas first and expanding out from there. There are many different journeys you can take. You can choose to become well-rounded in a few different skill stacks, or you can choose to become an elite master in just one. There is no right or wrong path. Each path offers its own unique challenges and rewards. Personally, I began my cybersecurity journey in network and system security and then branched out into cybersecurity engineering.
This gave me broad exposure to a variety of tools, techniques, and methods for not only securing and defending computer networks and operating systems, but also for building out enterprise-wide security infrastructures. Eventually, I landed in the realm of data protection, where I became highly focused and specialized in conducting cyber technical investigations involving insider threat, internal fraud, and data exfiltration for large financial institutions and multinational organizations. As part of this skill stack, I’ve learned how to master querying Splunk to further investigations, how to build my own investigation tools using Python and PowerShell, and how to conduct OSINT (Open-Source Intelligence) operations on targets of interest, to name a few. As you’re asking yourself the question “How do I learn cybersecurity?” and wondering where to begin, the very first step I would take is to start discovering all the different topics in cybersecurity and how they connect on a broad level. From there you can begin to narrow down the learning scope to just those topics you might be interested in starting off with. Let’s go over some techniques you can use for learning cybersecurity. The Top-Down Approach to Learning Cybersecurity Top-down is probably the most common approach, where you pick a subject to tackle and go after the resources specifically tailored towards learning that topic. A good example of this in cybersecurity might be pursuing a specific certification such as “ethical hacking”. Unfortunately, there’s a misconception by newcomers to the field that “ethical hacking” is as simple as loading up Kali Linux and running its variety of security tools on targets. Others think they just need to grab some courses and books on a subject, and then “brain dump” everything just to pass the certification exam.
Once you get your certification, you walk around thinking that you’re a Jedi master, but the reality is that your baseline fundamentals are still very weak, and your true abilities aren’t good enough to operate in most real-world scenarios. I’m not saying certifications are bad. Far from it. I think they play a useful role in helping develop your skills and baseline fundamentals, and they show you’ve already attained some level of knowledge about the subject. Especially if you actually take the time to really learn the subject matter, rather than just “brain dump” to pass the certification exam. I like to refer to people at this stage in their journeys as “skiddies”, which is short for “script kiddies”. Basically, skiddies are young, aspiring beginners that only know how to run tools written by other people, but do not understand the principles behind why or how those tools work. In my opinion, the best way to acquire deep-level cybersecurity skills using the top-down learning method is through an apprenticeship. If you look back in history before education was institutionalized through schools, the only real way to learn a specific craft or skill was via an apprenticeship under a master — someone with a few decades of experience under their belt. In the old days, the knowledge transfer process was rigorous and methodical. This was necessary to ensure the apprentice was actually teachable and would be useful in adding value to the craft. The primary advantage of an apprenticeship is that masters can point you to the skill stacks that are relevant to the field you are learning while filtering out the ones that aren’t. Not to mention you have somebody who is available to answer all your questions and guide you on your journey to make sure you stay the course. Another benefit of being an apprentice is that it saves you a huge amount of time in the learning process, which in my experience can reduce what would take years to learn on your own into a matter of months.
Top-down learning through an apprenticeship can be great, but finding someone who can teach you is not always an easy task. Many journeyman-level and master-level practitioners are either way too busy or not interested in coaching you if you don’t already have a solid baseline to begin with. This is especially true in cybersecurity. It’s a huge time investment on their part to teach students, since it takes them away from actual work and carries a high risk of failure. Nobody wants to teach someone who doesn’t have much grit or the drive to succeed in their chosen profession. If a senior practitioner doesn’t see much potential in you, they’ll probably just walk on by. If you work for a large corporation, chances are you’ll receive on-the-job training. On-the-job training for cybersecurity professionals is extremely helpful because you’re surrounded by co-workers you can learn from, most of whom are likely better than you in one or more areas. If you’re lucky enough to find yourself in this situation, then try to identify the most technical people on your team and spend time learning as much as you can from them. Once you’ve developed a decent relationship with them, find out which experts they personally look up to. Then reach out to those people. If you’re not able to get mentorship through professional circles, you might consider building solid baseline knowledge through the bottom-up approach. The Bottom-Up Approach to Learning Cybersecurity The bottom-up learning approach is where you pick a subject to tackle and decompose it into its most basic principles, definitions, methods, and tools. You learn each of those components first before you dive into the actual target subject. Take UFC fighters, for example. They go through countless amounts of conditioning and training combined with simple exercises that build muscle memory and situational agility. Each of these exercises indirectly improves their fighting abilities over time.
Even though the bottom-up approach takes a lot longer to do, you build a very solid foundation that becomes useful when you make the switch to more skill-oriented training. In the case of cybersecurity where you literally become a mental athlete, bottom-up learning translates into tons of reading. And by tons of reading, I mean you start out by finding all the books you can that are related to computer and network security and just marathon away. What’s good about traditional book learning is that you tend to get much higher quality content than the average Internet post and you get to learn about each author’s style and approach to cybersecurity, most of whom are active practitioners themselves. Many authors also maintain blogs or tweet links to resources that you can follow. When you’re reading, remember to jot down all the different vocabulary and concepts you’re learning. You can create a mind map or use spaced repetition software like Anki. I highly recommend Anki for studying new cybersecurity concepts. It’s a free and open-source tool that lets you build flashcards to learn just about any concept you happen to be interested in. Unlike normal flashcards, Anki utilizes a scheduling algorithm that decides when to show you concepts based on how well you already know them. This is great for active recall, where you’re asked questions and forced to remember the answers. Research shows that using active recall is a much more effective learning method than passive study for building a strong memory. When you take this process and distribute it consistently over increasing periods of time, it further cements your knowledge by forcing your brain to retrieve it with even deeper levels of recall. Using the bottom-up approach for cybersecurity sets you up for success and makes learning new concepts much easier. 
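To illustrate the kind of scheduling that makes spaced repetition work, here is a toy sketch in JavaScript. The grading scale and ease adjustments below are invented for illustration; Anki's real algorithm (derived from SM-2) is more involved:

```javascript
// A simplified spaced-repetition scheduler. Illustrative only --
// the constants and grading scale are made up, not Anki's actual algorithm.
function nextInterval(prevIntervalDays, ease, grade) {
  // grade: 0 = forgot, 1 = hard, 2 = good
  if (grade === 0) {
    // Forgotten cards come back tomorrow and get a lower ease factor.
    return { interval: 1, ease: Math.max(1.3, ease - 0.2) };
  }
  const newEase = grade === 2 ? ease + 0.1 : ease; // "good" recalls raise ease
  return { interval: Math.round(prevIntervalDays * newEase), ease: newEase };
}

// Review the same card three times with "good" recalls:
let card = { interval: 1, ease: 2.5 };
for (let i = 0; i < 3; i++) {
  card = nextInterval(card.interval, card.ease, 2);
}
console.log(card.interval); // intervals grow roughly geometrically
```

Each successful recall multiplies the gap before the next review, which is exactly the distributed-practice effect the research on active recall describes.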
One of the cool things about cybersecurity is that many of the concepts you learn will keep showing up repeatedly, since almost everything in cybersecurity is interconnected one way or another. One downside to bottom-up learning, however, is that it can get monotonous at times, since doing any activity for its own sake without a clear goal or purpose can certainly get boring. The Project-based Approach to Learning Cybersecurity One of my favorite approaches to learning cybersecurity is through projects. Project-based learning is a bit of a hybrid approach between the top-down and bottom-up learning methods and gives you more flexibility using both. Unlike the bottom-up approach, where you don’t really have any specific objectives that need to be achieved, with a project you define what the technical outcome will be from the get-go, which allows you to gather the resources you need to work toward meeting your project goal. Speaking of goals, it’s always good to set SMART goals when working on projects. SMART goals are specific, measurable, achievable, relevant, and time-bound. Saying something like “I want to hack” wouldn’t qualify as a SMART goal. You need to get more specific, like “I want to learn how to crack WEP encryption on my home wireless network by the end of the month”. It may take you longer than a month, and that’s okay. The main thing is you have a clear objective in mind, and the process will expose you to all sorts of different skill stacks, from aircrack-ng to layer 2 networking to the 802.11 protocol and much more. Project ideas often fall into one of four categories: making things, breaking things, fixing things, and knowing things. For instance, if you wanted to learn different forensic methods for detecting and eradicating malware, you could have a project where you build your own virtual machine and then intentionally install publicly available malware on it to hone your skills.
Always make sure to document your entire process and workflow, as this can help solidify the entire learning experience and be used later to analyze areas for improvement. You Need to be in it for the Long Haul A final thought I’d like to leave you with that will help you get better at cybersecurity: you need to go into it with the right mindset and time horizon for picking it up. The reality is that some areas of cybersecurity can take a long time to master, much like becoming a doctor or a lawyer. In the United States it takes four years of medical school followed by three to seven years of residency to become a doctor. What’s interesting is medical residencies are basically apprenticeships that involve working 60+ hours a week. Many doctors I know work 80 or more hours a week and only sleep five or six hours each night. Depending on your residency of choice, this is anywhere from 10,000 to 20,000 hours of training. Assuming you’re only working 40 hours a week, it would take you roughly 5 to 10 years on the job in a cybersecurity role to attain an equivalent number of hours as a doctor. In his book Mastery, the author Robert Greene describes mastery of any skill as a function of time and intense focus applied to a particular field of knowledge. In our age of two-second attention spans and instant gratification, most people lack the discipline to dedicate the time necessary to truly master a specific skill. It’s much easier to take a simple crash course or quick tutorial that claims to teach you everything you want to know. However, seeking out only a surface-level education will keep you at an unconsciously incompetent level of learning, where you feel super confident that you know everything there is to know, but you’re not actually skilled in the subject matter.
As your cyber skills are put to the test under real-world scenarios, you soon come to the realization that you actually suck at what you thought you knew, and you eventually reach a point where you must decide whether or not you want to continue on the path. If you do take the time to push through it, though, you will begin to feel more comfortable with and accepting of the concepts you know and the ones you don’t know. Over time, you will reach the most mature stage of unconscious competence, where you’ve become a true Jedi master and can execute a skill without even thinking about it. In a field like cybersecurity, where there’s no clear institutionalized path to becoming a professional, you really have to self-educate using a combination of the different learning approaches available and be in it for the long haul. Taking shortcuts won’t “cut it” if you truly want to become a master practitioner in your specialized area of interest. Remember, cybersecurity is not a course — it’s a journey that can be very rewarding for those dedicated to its path. Good hunting!
https://medium.com/@michatkins/how-to-learn-cybersecurity-8ac4f356887f
['Michael Atkins']
2020-12-27 04:47:36.359000+00:00
['Cybersecurity', 'Technology', 'Education', 'Learning', 'Hacking']
1,793
D3 in 5 Minutes, Create Bar Chart & Funnel Chart Visualizations using D3 scales with example using React js
Basic knowledge of SVG is needed; think of SVG as similar to HTML elements. Working in D3 is pretty much like working with HTML plus a pinch of jQuery, with all that chaining and selection of elements.

Basics first:

1. Selection and manipulation

Using D3 we can select DOM elements using their CSS selectors or the name of the element itself. D3 provides us two methods to select DOM elements: d3.select(), which returns the first matching selection, and d3.selectAll(), which returns all the elements matching the criteria.

// select h1, add a style of color red, and then insert the text 'h1 tag'
d3.select('h1')
  .style('color', 'red')
  .text('h1 tag');

// select the body tag and append a `p` tag to it with the text 'First Paragraph'
d3.select('body').append('p').text('First Paragraph');

// select all `p` tags & add a style of blue color to them
d3.selectAll('p').style('color', 'blue');

Code Playground Link for D3 basics: https://scrimba.com/c/c36r67S8

2. Data loading and binding

var dataset = [1, 2, 3, 4, 5];
d3.select('body')
  .selectAll('p')
  .data(dataset)                    // loop over the dataset
  .join('p')                        // appends a paragraph for each data element
  .text('D3 is awesome!!')          // this adds the same text for each data entry
  .text(function(d) { return d; }); // this adds the bound datum as text, overriding the line above

3. SVG elements

SVG, or Scalable Vector Graphics, is a powerful tool to define vector graphics for the web. Using SVG we can create different shapes and apply different styles to them.
var svgWidth = 600, svgHeight = 500;

var svg = d3.select("svg")
  .attr("width", svgWidth)
  .attr("height", svgHeight);

var line = svg.append("line")
  .attr("x1", 100)
  .attr("x2", 500)
  .attr("y1", 50)
  .attr("y2", 50)
  .attr("stroke", "red")      // stroke is like border in SVG
  .attr("stroke-width", 5);

var rect = svg.append("rect")
  .attr("x", 100)
  .attr("y", 100)
  .attr("width", 200)
  .attr("height", 100)
  .attr("fill", "#9B95FF");   // fill is like background color in SVG

var circle = svg.append("circle")
  .attr("cx", 200)
  .attr("cy", 300)
  .attr("r", 80)
  .attr("fill", "#7CE8D5");

Code Playground Link for SVG Elements: https://scrimba.com/c/c36r67S8

4. Scales

Scales 📏 are functions which transform your data by either increasing or decreasing its values for better visualizations 📈. Multiple types of scales are available, and depending on the input data you can choose between them. For example, d3.scaleLinear() constructs a new continuous scale with the specified domain and range; linear scales are a good default choice for continuous quantitative data because they preserve proportional differences. Each range value y can be expressed as a function of the domain value x: y = mx + b. Say your linear scale's domain is [0, 100] and your range is [500, 1000]; then input values of 0, 50, and 100 map to outputs of 500, 750, and 1000. That is, the min value of the domain corresponds to the min value of the range, the max value of the domain corresponds to the max value of the range, and the mid-value of the domain corresponds to the mid-value of the range; the same holds for any other value, so the input is mapped proportionally based on the domain & range.
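Since d3.scaleLinear() is just the proportional mapping y = mx + b described above, the same behavior can be sketched by hand in plain JavaScript (an illustration of the math, not D3's actual implementation):

```javascript
// A hand-rolled linear scale illustrating what d3.scaleLinear() computes,
// using the domain [0, 100] and range [500, 1000] from the text.
function makeLinearScale([d0, d1], [r0, r1]) {
  const m = (r1 - r0) / (d1 - d0); // slope: range span over domain span
  return x => r0 + m * (x - d0);   // map x proportionally into the range
}

const scale = makeLinearScale([0, 100], [500, 1000]);
console.log(scale(0), scale(50), scale(100)); // 500 750 1000
```

The same mapping, with the dataset's max as the domain end and the SVG height as the range end, is what the bar chart below relies on.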
var dataset = [1, 2, 3, 4, 5];

var svgWidth = 500, svgHeight = 300, barPadding = 5;
var barWidth = svgWidth / dataset.length;

var svg = d3.select('svg')      // select the svg element
  .attr("width", svgWidth)      // set its width attribute
  .attr("height", svgHeight);   // set its height attribute

var yScale = d3.scaleLinear()   // create a linear scale
  // set the domain with min = 0 & max as the max value of the dataset;
  // D3 even provides us with mathematical helpers like d3.max, d3.min, etc.
  .domain([0, d3.max(dataset)])
  .range([0, svgHeight]);       // set the range of the scale

var barChart = svg.selectAll("rect")  // 📊 select all `rect`
  .data(dataset)                      // loop over the dataset
  .join("rect")                       // append a `rect` for each data element
  // run each datum through the scale so the bars fill the SVG height
  .attr("y", function(d) { return svgHeight - yScale(d); })
  .attr("height", function(d) { return yScale(d); })
  // set the width with padding so the bars don't stick together and have a nice gap in between
  .attr("width", barWidth - barPadding)
  // set the transform attribute, else the bars would stack on top of each other
  .attr("transform", function(d, i) {
    var translate = [barWidth * i, 0];
    return "translate(" + translate + ")";
  });

Code Playground Link for Bar chart 📊: https://scrimba.com/c/ceqVqLCW

5. Funnel chart
https://javascript.plainenglish.io/learning-d3-in-5-minutes-and-creating-bar-funnel-chart-visualization-12b71142414e
['Shobhit Singh']
2020-08-29 19:36:01.169000+00:00
['Technology', 'JavaScript', 'D3js', 'Visualization', 'Charts']
1,794
Quantum Computing Explained — It’s Rocket Science
One of the fundamental differences between quantum and classical computers lies in how they store data. This is a great place to start understanding the power of quantum computers.

In classical computers: a single unit of data is stored as a 1 or 0, called a bit.

In quantum computers: a single unit of data can be stored as a 1 or 0, but can also be a 1 and 0. This is called a qubit.

Wait, what does that mean? A 1 and 0? This phenomenon is known as superposition. You have a superposition of both a 1 and 0 at the same time. This is where things get weird. So, we'll go through the easy way of understanding this (instead of learning all the math) by taking an analogy.

*Sidenote — Under other interpretations, superposition can be understood as a probabilistic wave function. I may write a more technical article in the future, but this interpretation is easier to understand; more on that in the second sidenote.

How superposition enables Quantum Computers (for dummies)

The age-old problem of dinner seatings 😮

Let's say you've organized a dinner and need to decide where to seat your guests. For simplicity's sake, let's say you only have 3 guests and 2 tables to place them at. The thing is, some of these guests don't like each other, but others do. Let's say:

Person A and C are friends 😆
Person A and B are enemies 😡
Person B and C are enemies 😡

In this situation, you want to have the highest number of friends together, while having the lowest number of enemies together.

How this would work on a regular computer

Well, we only have 2 tables, so we can assign tables to each guest in binary, with a 1 or 0. So we have Table 1 and Table 0.
For example, one combination of placements could be 001 (Person A and B are placed at Table 0, and Person C is placed at Table 1). Here are all the possible combinations:

+----------+----------+----------+
| Person A | Person B | Person C |
+----------+----------+----------+
|    0     |    0     |    0     |
|    0     |    0     |    1     |
|    0     |    1     |    0     |
|    0     |    1     |    1     |
|    1     |    0     |    0     |
|    1     |    0     |    1     |
|    1     |    1     |    0     |
|    1     |    1     |    1     |
+----------+----------+----------+

Now, we want to optimize for friends, and against enemies, placed at the same table. With this knowledge, we can create a score algorithm. It could be something like this:

Score = (# Friends) - (# Enemies)

With this metric, the scores would be:

+----------+----------+----------+-------+
| Person A | Person B | Person C | Score |
+----------+----------+----------+-------+
|    0     |    0     |    0     |  -1   | 😡
|    0     |    0     |    1     |  -1   | 😡
|    0     |    1     |    0     |   1   | 😆
|    0     |    1     |    1     |  -1   | 😡
|    1     |    0     |    0     |  -1   | 😡
|    1     |    0     |    1     |   1   | 😆
|    1     |    1     |    0     |  -1   | 😡
|    1     |    1     |    1     |  -1   | 😡
+----------+----------+----------+-------+

As you can tell, there are 2 arrangements that lead to the highest score: 010 & 101. In this situation, a conventional computer would have to try out each of these configurations separately using 3 bits (one at a time), compute each score, then select the highest. Now of course, this is a really simple problem that would take almost no time on a conventional computer. However, what if we increased the number of people?

With 3 people, there are 8 (2*2*2 = 2³) configurations. 😜
With 20 people, there are 1,048,576 (2²⁰) configurations. 😤
With 200 people, there are 2²⁰⁰ combinations?! 😱

(Video: Vsauce explains the ridiculousness of 52 factorial.)

It's absurd… 2²⁰⁰ is about 10⁶⁰, a 1 followed by 60 zeros. The world's fastest computers can only perform about 200,000 trillion calculations per second (only…), meaning it would take ~10⁴³ seconds, which is WAY too long. Actually longer than the age of the universe.
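The brute-force search a classical computer performs here can be sketched in a few lines of Python. The guest list and scoring rule are taken from the example above; the code itself is only illustrative:

```python
from itertools import product

# Relationships from the example: A-C are friends; A-B and B-C are enemies.
guests = ["A", "B", "C"]
friends = [("A", "C")]
enemies = [("A", "B"), ("B", "C")]

def score(table_of):
    """Score = (# friend pairs at the same table) - (# enemy pairs at the same table)."""
    same = lambda x, y: table_of[x] == table_of[y]
    return sum(same(*p) for p in friends) - sum(same(*p) for p in enemies)

# A classical computer must try all 2^n configurations one at a time.
results = {}
for bits in product([0, 1], repeat=len(guests)):
    results["".join(map(str, bits))] = score(dict(zip(guests, bits)))

best = max(results.values())
winners = sorted(k for k, v in results.items() if v == best)
print(winners, best)  # ['010', '101'] 1
```

Each extra guest doubles the number of configurations the loop must visit, which is exactly the 2²⁰⁰ blow-up described above.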
https://medium.com/hackernoon/quantum-computing-explained-its-rocket-science-55d7766edac2
['Jerry Qu']
2019-04-12 11:36:59.333000+00:00
['Innovation', 'Science', 'Quantum Computing', 'Technology', 'Hackernoon Top Story']
1,795
Introducing Cryptosheets: Real-time Excel Add-in for Cryptocurrencies
Why does this matter? ❓ We have an entire generation of brilliant financial analysts, traders, investors, portfolio managers (the list goes on and on) who live and die within Excel. These talented individuals are wizards in Excel from many years of use, but will never learn how to code.

Cryptosheets use cases:

Backtesting trading strategies
Portfolio valuation and optimization
Risk management
Research and analysis
Third-party integrations
Technical analysis
Much, much more

Cryptocurrencies are a new asset class. A mountain of knowledge from trading and investing is immediately portable to digital assets. Cryptosheets aims to ease this transition of knowledge.

The solution: A smarter, faster add-in built specifically for cryptocurrencies

We want to save thousands of hours of Copy->Paste->Spot Check->Pray. A few simple clicks through our interface and you will get whatever data your heart desires. ❤️ Initially we will connect CoinMarketCap, Cryptocompare, and CoinAPI. A large portion of our development resources will be spent integrating the data providers that our users request. 💨

Now ready for your feedback 🎧

Cryptosheets is live on Microsoft AppSource, and during phase 1 we will be working relentlessly on speed, efficiency, reliability, and enhancements, for all of which your feedback is absolutely critical. Please do not hesitate to report any bugs you find, or give us any feature requests that you want to see implemented. Feature requests are voted on by users and built based on demand.

How to Install

Follow the steps below for your version of Excel and operating system.

Download from AppSource

Click here and click the button "Get it now".

Excel 2016 for Windows
https://medium.com/cryptosheets/introducing-cryptosheets-real-time-excel-add-in-for-cryptocurrencies-e8fe12950ec5
['John Young']
2018-08-03 12:53:53.974000+00:00
['Bitcoin', 'Cryptocurrency', 'Blockchain Technology', 'Blockchain Startup', 'Cryptocurrency Investment']
1,796
Why Paper Beats a Screen
On a rusty ferry steaming towards the southern tip of Lake Malawi, I managed to crack my Kindle’s screen — accidentally elbowing it as I climbed from sitting to standing. I was bereft but not because I was terribly fond of it — indeed, in the five years I’d had it, I had barely used it. No, I was bereft because I was stuck on board for at least another 24 hours without anything (except for an old British GQ) to read. I kept on toggling the buttons, hoping that Lolita or the Michela Wrong book I’d been so excited to read would miraculously appear. They didn’t — the screen remained a mess. Bereavement had given way to crabbiness (and slight panic) by the time we reached our destination. I was beached on a desert island (well, almost) with nothing to read. I scrounged the meagre shelf of abandoned paperbacks, attempting Stieg Larsson and then Jeffrey Archer, quickly bored and unimpressed by them both. When we got to Blantyre, I made a beeline for its best bookshop, happily poring over titles for hours, weighing up what I should read on the bus back to Johannesburg. In the end I managed to narrow my choice to two books (White Mischief and Dead Aid, if you must know), forking out a hefty premium for them both — but that was OK, I could breathe again: I had something to read. It took me a whole year to replace my Kindle. To be fair, one of the reasons why I didn’t miss it very much is that I’ve often received review hard copies from publishers. But, more than that, I simply far prefer reading ink on paper. One of The Guardian’s journalists compares reading your favourite novel as an e-book to sucking vintage wine through a straw. I couldn’t agree more. The act of reading a physical book exudes a magic that an e-reader simply cannot recreate. I love the tactile qualities, how each book has a slightly different smell.
Then there’s the comforting heft of the thing in your lap, the quiet satisfaction of turning a page, of shutting it with a bookmark in place when you’re done for the night. Printed books aren’t indestructible, but they’re pretty resilient. If you drop one (or elbow it like I did), it’s likely to be unscathed. It might even survive a brief encounter with a pool if you put it in the sun immediately afterwards. To function, they’re not reliant on batteries that can go flat. The text is easier to read. And their physicality, the way they take up space, means they can serve as mini (and often beautifully designed) monuments: sitting on your shelf, they remind you either of the stories they contain or of the time in which you were reading them. So why did I get a new Kindle then? Because it’s darn useful when travelling to have a single slim-line object containing lots of titles, as opposed to a stash heavier than Pilgrim’s burden that you have to lug around on planes and buses. There is something to be said, too, for the immediacy with which you can purchase a new title — particularly if it’s one that your local bookshop might not have in stock. So, for me, how I read is not about either/or; it’s about context. I’ll use the Kindle when I’m on the go, but I’ll always prefer paper. I’m far from being alone in this. Some might think it’s a generational thing: that older people haven’t adapted to newer technologies. They’re wrong: the allure of paper cuts across generations — indeed, research by Nielsen shows that 75% of British kids prefer proper books, with 35% going so far as to shun digital ones entirely. Five years or so ago, dramatic headlines were heralding the end of paper books. A surge in the sales of e-readers and e-books (and the closure of many bookshops, including the gargantuan US chain Borders) seemed to bear this out. Today it’s looking rather different. E-book sales have stagnated.
In 2015, sales in the UK shrank by 1.6% compared to the previous year — the first drop in seven years, according to the Publishers Association — while print sales rose. In the same year, Waterstones, the major UK book retailer, removed e-readers from most of its stores (after its MD, James Daunt, claimed they were “getting virtually no sales”), making way for more physical books. “People talked about the demise of physical books as if it was only a matter of time, but even 50 to 100 years from now, print will be a big chunk of our business,” Markus Dohle, the CEO of Penguin Random House, told the New York Times back in 2015. Solid growth since then is bearing out his prediction. In the US, more than one billion paperbacks were sold in 2017, out of an estimated 2.7 billion books sold, according to Association of American Publishers figures cited in this pithy summary by Publishing Perspectives. And, according to Observer: E-book sales have slipped by 3.9 percent so far this year [the first three quarters of 2018], according to data from the Association of American Publishers, while hardback and paperback book sales grew by 6.2 percent and 2.2 percent, respectively. During the first nine months of 2018, hardback and paperback sales generated nearly $4 billion combined; comparatively, e-books only raked in $770.9 million. For my fellow lovers of the printed word, that’s excellent news.
https://alexandermatthews.medium.com/why-paper-beats-a-screen-d7fef1dfddeb
['Alexander Matthews']
2019-06-21 12:12:19.419000+00:00
['Print', 'Reading', 'Books', 'Kindle', 'Technology']
1,797
The best of home security gadgets-April 2021 edition
Your home is your sanctuary, and no one should get in uninvited. So keeping your home and family safe has to be a priority. Luckily, in April 2021, there are some great home security gadgets out there. Check out this roundup to see what we mean. It’s always a good time to think about home security. And whether you’re looking to beef up your current setup or want to go for a new one entirely, the products on this best of home security gadgets roundup have you covered. From a whole-home system to a camera that doubles as a floodlight, these are some of our favorite home security gadgets this month. Related: Arlo vs. Ring — which home security system should you buy? Concerned about privacy? Some of the indoor cameras below offer monitoring features that are easy to activate and deactivate. Or maybe you want a smart doorbell that opens with your fingerprint? With these gadgets, it’s easier than ever to sit back and relax at home. Ring Floodlight Cam Wired Pro Stop intruders in their tracks with the Ring Floodlight Cam Wired Pro, the first item on our best of home security gadgets roundup. This advanced home security camera uses ultra-bright LED floodlights and keeps your home secure with 3D Motion Detection. The Bird’s Eye View lets you monitor everything from above, and the two-way talk feature lets them know you’re watching. Nooie Cam Indoor Security Camera Every home security setup needs an indoor camera, and the Nooie Cam Indoor Security Camera is a great one. It works with Alexa and Google Assistant. And with 1080p Full HD livestreaming, you can keep an eye on everything as it’s happening. The night vision capability even lets you use it as a baby monitor. Hex Home DIY Security System For a comprehensive home security system, look no further than the Hex Home DIY Security System. Its patented technology wraps around corners and sees through walls for complete coverage. But you can adjust the sensitivity levels to ignore things like a pet or robot vacuum.
Nooie Smart Cam Doorbell The Nooie Smart Cam Doorbell adds style and security to your front door. It comes with an antitheft locking mechanism, alarm siren, two-way talk audio, and more. Live view even lets you see who’s at the door in real time. Porch Pod Delivery Safes Since you’re getting more deliveries than ever, why not protect them with the Porch Pod Delivery Safes? These gadgets accept deliveries from major carriers and provide a safe, waterproof area for your packages. Vivint Ping Indoor Security Camera The Vivint Ping Indoor Security Camera made our best of home security gadgets roundup because it lets you check in on your home from anywhere. It’s like having your own 24/7 security guard since it captures footage when motion is triggered. It also lets you chat with family members or pets right through the camera. Kangaroo Motion + Entry Sensor Home Security Device The Kangaroo Motion + Entry Sensor Home Security Device makes it easy to keep tabs on your home’s doors, windows, and hallways. Just place one on your door for a notification anytime someone opens it. Best of all, its peel-and-stick design means you don’t even have to take out any tools to set it up. Ring Video Doorbell Pro 2 The Ring Video Doorbell Pro 2 is what you want for a high-quality smart doorbell. It lets you see and hear like never before, thanks to the boosted HD+ video and crisp audio quality. With Alexa Greetings, this doorbell can even speak to visitors and accept deliveries when you can’t get to the door. Arlo Essential Indoor Camera Another great indoor camera is the Arlo Essential Indoor Camera. The cool thing about this home security device is that you can easily stop it from recording and detecting motion and audio. These functions won’t work again until you open the shield in the Arlo app. Netatmo Smart Indoor Siren The Netatmo Smart Indoor Siren is the gadget you want if there’s an intruder in your house.
When the camera sees someone it doesn’t recognize, it sets off a 110-decibel alarm, making it one of the best home security gadgets in April. ecobee SmartCamera Indoor Security Camera See what’s going on at home, easily, with the ecobee SmartCamera Indoor Security Camera. It’s compatible with Alexa and Apple HomeKit so that you can manage it via voice control. An eight-core processor analyzes your footage locally, and it even comes with a baby monitor mode. HeimVision Assure B1: 2K Ultra HD Camera and Smart Hub If you’re looking for one home security gadget that ticks all the boxes, go for the HeimVision Assure B1: 2K Ultra HD Camera and Smart Hub. It connects up to four wireless cameras to its base station, so it covers every corner. And the human face detection can tell the difference between an intruder and the neighborhood cat. LOCKLY Vision Doorbell Camera Smart Lock The LOCKLY Vision Doorbell Camera Smart Lock made our best of home security gadgets roundup because it gives you so many ways to access your front door: app control, 3D fingerprint reader, online or offline access code, voice command, and more. It even has a camera for real-time streaming. Cync Indoor Smart Camera The Cync Indoor Smart Camera is another great way to protect your home and ensure your privacy. That’s because it’s easy to turn off both the camera and microphone when you’re at home. Plus, all content gets stored on the SD drive, so it never leaves your home. Arlo Ultra 2 Spotlight Security Camera Feel safer indoors and while you’re away with the Arlo Ultra 2 Spotlight Security Camera. It has auto-zoom and tracking features that take the lead on monitoring your home. Best of all, it automatically adjusts to movement to track irregular activity in an instant. As you can see, there are plenty of great home security gadgets in April 2021. From whole-home systems to indoor cameras that can protect your privacy, these devices offer some pretty high-tech ways to protect your home. 
Which of these gadgets would you like to own? Let us know in the comments.
https://medium.com/the-gadget-flow/the-best-of-home-security-gadgets-april-2021-edition-ad92df10d884
['Gadget Flow']
2021-04-17 18:33:30.412000+00:00
['Home Improvement', 'Smart Home', 'Home Automation', 'Gadgets', 'Technology']
1,798
Healthcare Information Systems Evolve and Embrace the API-led Development Approach
The stage is set for Healthcare Information Systems to evolve and embrace the API-led development approach in order to create clinical applications for use across the enterprise. The primary drivers for evolution in Healthcare Information Systems are:

- Patient-centric healthcare
- CMS-mandated meaningful use of EHRs
- Better decision making
- Transaction modernization solutions

The Healthcare Information Systems of today are not ready to realize these shifts. A new architectural approach is needed to move Healthcare Information Systems away from heavy reliance on HL7 and custom integrations, and toward an API-driven architecture model. This is where Fast Healthcare Interoperability Resources (FHIR) APIs have introduced a new standard for exchanging Electronic Health Records (EHRs). FHIR’s use of APIs allows implementers to easily integrate, request, and transfer data between various healthcare systems. The basic building blocks of FHIR APIs are base resource pools that are categorized and interact with each other to build a complete picture. This defines the way information can be exchanged between different healthcare systems. The ultimate goal of FHIR APIs is easy exchange of information not only within the organization, but also making Hospital Information Systems accessible to third-party applications that provide more engaging patient experiences. These applications, called SMART (Substitutable Medical Applications and Reusable Technologies) apps, run on FHIR APIs and consume information from Hospital Information Systems to deliver patient-centric healthcare.

How organizations can use API management platforms to leverage FHIR APIs securely:

TIBCO’s API management platform, Mashery, has the ability to enable healthcare providers to host their FHIR APIs. Mashery’s authentication support for OAuth2 and reduced coding requirements enable healthcare providers to accelerate their implementation of FHIR APIs.
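As a rough sketch of what these resource building blocks look like in practice: a FHIR resource is plain JSON with a declared resourceType, so any REST client can fetch and parse one. The Patient below is shaped like what a server would return from GET [base]/Patient/123, but the values are invented for illustration, not taken from any real system:

```python
import json

# A minimal FHIR R4 Patient resource. The id, name, and birthDate here
# are made-up example values, not real patient data.
patient_json = """
{
  "resourceType": "Patient",
  "id": "123",
  "name": [{"family": "Example", "given": ["Jane"]}],
  "birthDate": "1980-04-01"
}
"""

patient = json.loads(patient_json)
assert patient["resourceType"] == "Patient"  # every FHIR resource declares its type

name = patient["name"][0]
full_name = f'{name["given"][0]} {name["family"]}'
print(full_name)  # Jane Example
```

Because the representation is uniform JSON over REST, a SMART app can consume Patient, Observation, or any other resource the same way, regardless of which Hospital Information System serves it.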
This implementation provides true flexibility to healthcare providers and allows them to accelerate implementations of modern clinical applications. Mashery allows healthcare providers to expose their Electronic Health Records (EHRs) seamlessly to third-party applications as well as manage partner applications. Effectively, the healthcare provider can offer access to an App Store of SMART applications that a customer can use to interact with the Hospital Information System, putting the patient at the center of the healthcare experience. All of this can be done without compromising security, using Mashery’s support for OAuth2.

Roadmap to reaching complete FHIR API compliance:

HL7’s prominence in Healthcare Information Systems makes it difficult for providers to shift stable and operational systems to a new architecture overnight. Hospital Information Systems of today have many critical systems and processes which are interdependent and already running on HL7 integrations. These interfaces and systems are mission critical and can cause a lot of headaches when considering the switch to FHIR APIs. This is where TIBCO’s best-in-class integration solutions can help, by packaging existing integrations as REST interfaces without any disruption to critical processes. TIBCO BusinessWorks comes with a plug-in for HL7 messaging which allows organizations to deploy cloud-native healthcare integrations. This allows for a seamless, staggered implementation of FHIR across the Healthcare Information System. The implementation generally goes through five different levels, from basic framework to clinical reasoning, which can be achieved using TIBCO’s enterprise solutions.
TIBCO can help healthcare systems on their journey from API creation to API management.

Integration capabilities:

- On-premise with TIBCO BusinessWorks
- In public and private PaaS with TIBCO BusinessWorks Container Edition
- In the cloud with TIBCO Cloud Integration and TIBCO Mashery

Enterprise API management and gateway capabilities:

- As a SaaS solution with TIBCO Mashery
- On-premise with TIBCO Mashery Local

Learn more about the TIBCO Mashery API management platform and request a free 30-day Mashery trial.
https://medium.com/tech-weekly/healthcare-information-systems-evolve-and-embrace-the-api-led-development-approach-e5d913270709
['Ashwin Datla']
2019-05-29 06:10:45.497000+00:00
['Technology', 'Data Science', 'Healthcare', 'Digital Transformation', 'API']
1,799
Valuing rooftop solar is tricky
Coauthored by Josh Smith How much is rooftop solar worth? This simple question has stimulated a raging debate between electric utilities and solar advocates, both in Utah and around the country. The issue comes down to how utilities should account for electricity that rooftop solar owners feed onto the electrical grid. Utah’s largest utility, Rocky Mountain Power (RMP), wants to pay just 1.5 cents per kilowatt-hour (kWh) to rooftop solar owners. RMP argues that it is around the price they would pay for solar electricity from larger, utility-scale, providers. They also point out that rooftop solar production creates costs for the utility that owners should be responsible for. Rooftop solar advocates, on the other hand, argue that the environmental benefits and other advantages of rooftop solar amount to a value of 24 cents per kWh. This difference in perspective puts solar advocates and utilities about 22 cents apart on the value of rooftop solar. That 22-cent difference alone is double Utah’s average electricity rate of 11 cents per kWh. To understand what is driving this large discrepancy between two well-informed groups requires a clear understanding of what goes into pricing electricity. What’s in a price? The price that you see on your utility bill each month is made up of several components. There’s the cost of generating the electricity in the first place, the cost of transmission to you, and maintenance costs for keeping the electrical grid available for all users to draw on at any moment. The federal Energy Information Administration (EIA) shows that Utah’s households pay, on average, about 11 cents per kWh. The US average is a bit higher at around 13 cents per kWh. Only a small portion of these prices are tied to generation costs. On the wholesale market, where utilities purchase electricity based largely on the cost of generation, prices roughly vary between 2 cents and 5 cents on average. What’s important, however, is the “on average” point. 
In reality, there’s a lot of variation. Electricity generation prices jump up and down dramatically by time of day and can even vary significantly from minute to minute. Heatwaves can generate substantial price swings as everyone turns on their air conditioners, and demand for electricity shoots up. When prices are peaking on hot days, the wholesale price of electricity can skyrocket. As the chart below shows, California hit a price of $600/MWh, which is 60¢/kWh, on a hot August day surrounding the state’s recent blackouts. In contrast, sometimes, too much electricity is generated, which drives prices below zero — meaning generators have to pay users to take their electricity. With these possibilities in mind, a flat compensation rate for rooftop solar cannot adequately account for the changing value of the energy it produces. This is a fundamental flaw in the current compensation system for rooftop solar. The value of solar isn’t either 1.5 cents per kWh or 24 cents per kWh. It’s not any single price all the time. Rooftop solar could be worth a lot in certain places and at certain times. In the same vein, there are times when electricity generation isn’t needed and the value of rooftop solar is close to nothing. RMP’s proposal attempts to get at this idea with a rough cut between on-peak and off-peak hours. They suggest paying $0.013/kWh off-peak and $0.026/kWh for on-peak production, which is an hourly average of $0.015/kWh. The peak is simply the time when the grid is being used the most. Peaking hours are generally the most expensive times for running the electrical grid, making them the most valuable time for energy generation. Paying a higher on-peak price and a lower off-peak price is not a perfect solution — there will be times when the value of solar energy at peak moments is higher than 2.6 cents per kWh or at off-demand times is lower than 1.5 cents per kWh. It is a step towards a better valuation for solar, but it’s not going far enough.
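For what it's worth, the quoted "hourly average of $0.015/kWh" is consistent with on-peak hours being a small slice of the day. A quick sanity check, assuming roughly 4 on-peak hours out of 24 (the actual peak window is my assumption; the filing's split isn't given here):

```python
on_peak_rate = 0.026   # $/kWh, RMP's proposed on-peak rate
off_peak_rate = 0.013  # $/kWh, RMP's proposed off-peak rate
on_peak_hours = 4      # assumed peak window; not stated in the article

# Hour-weighted average across a 24-hour day.
avg = (on_peak_rate * on_peak_hours + off_peak_rate * (24 - on_peak_hours)) / 24
print(round(avg, 3))  # 0.015
```

A real-time scheme would replace these two fixed tiers with a rate that moves with the wholesale price hour by hour.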
A better way to price rooftop solar’s value would be to tie it to real-time pricing. Electricity’s value isn’t constant, so regulators should reject rigid valuations for electricity generation. A dynamic pricing system tied to the value of electricity that emerges from the market can even give rooftop solar owners an incentive to pay attention to the broader system’s needs. That is, on a hot day, when prices are peaking, as in the example from California, rooftop solar owners could cut back on their consumption to take advantage of the rising prices. They can then feed the extra energy onto the electrical grid to make more money from their solar panels. Without designing regulations to create this incentive, we’re missing out on rooftop solar’s potential to keep the lights on. The future of electricity is moving toward easier and expanded control for consumers, as is clear from the success of smart thermostats like Nest and ecobee. A real-time system will better take advantage of those technological advancements. And it does so by reducing the risk of overestimating the value of solar compared to a less realistic on-peak and off-peak system, or the current system’s netting of total consumption against total production. What about solar’s environmental benefits? One benefit of rooftop solar that is not considered in the cost of generation, but that advocates argue justifies higher rates, is the production of clean energy. Rooftop solar cleans up the environment when it replaces carbon-based energy sources like coal or natural gas. That’s a benefit to us all. Still, as utilities begin to build more and more utility-scale green energy projects, requiring them to pay more for rooftop solar energy than they would otherwise pay for utility-scale renewable energy is forcing them to make a sub-optimal investment. Compare the cost of powering a home on rooftop solar electricity with powering a home with utility-scale solar.
An average home requires about 10,972 kWh of energy to power it for a full year. At 24 cents per kWh, RMP would be paying $2,633.28 just to buy the energy needed to power a home for an entire year using rooftop solar. That’s compared to between $219.44 and $548.60 for utility-scale solar. In reality, homes are powered by a mix of electricity sources, so this overestimates the costs. Still, it’s a useful illustration of why a change to 24 cents per kWh will mean paying much more for electricity just because it’s from a rooftop installation, not a utility-scale source. That higher cost not only impacts the price ratepayers pay for their electricity but competes with other renewable investments that could be made. There’s growing economic and environmental value in focusing on utility-scale solar and efficient energy production. The cheaper we can make renewable energy, the more likely it will be to continue to grow as a portion of overall US electricity generation. The best way to account for the value of solar’s environmental benefit is through wider policy changes, like accounting for the social cost of carbon through a carbon tax, rather than supporting specific technologies. That allows more flexibility for producers and consumers to focus on whatever technology reduces our carbon footprint in the most efficient way possible. The costs of overestimating solar’s value Setting a high fixed price for solar will certainly make it more likely that people will adopt rooftop solar, but there are downsides to that type of incentivized development. If the fixed price is set too high, it incentivizes rooftop solar installations everywhere, not just where they are valuable to the operation of the electrical grid. The resulting situation would be one where rooftop solar owners are paid a lot for the energy they produce despite that energy having little value or failing to be a useful substitute for fossil fuels.
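The cost comparison above is straightforward to reproduce. Using the article's own figures (10,972 kWh per year, 24¢/kWh for rooftop solar, and a rough 2-5¢/kWh wholesale range for utility-scale generation):

```python
annual_kwh = 10_972           # average annual household consumption (from the article)
rooftop_rate = 0.24           # $/kWh, the advocates' proposed rooftop value
utility_range = (0.02, 0.05)  # $/kWh, rough wholesale generation range

rooftop_cost = annual_kwh * rooftop_rate
low, high = (annual_kwh * r for r in utility_range)
print(f"Rooftop: ${rooftop_cost:,.2f}, utility-scale: ${low:,.2f} to ${high:,.2f}")
# Rooftop: $2,633.28, utility-scale: $219.44 to $548.60
```

The roughly fivefold-to-tenfold gap is the premium a 24¢/kWh rooftop rate would impose over buying the same energy from utility-scale sources.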
That increases the cost of producing solar energy at a time when lowering costs is the key to increasing renewable generation. Moving away from fixed pricing and towards variable pricing that changes with demand will provide a better mechanism for approximating the value of rooftop solar energy. Additionally, when rooftop solar owners are compensated at overly-generous rates, they don’t cover their share of the costs for maintaining the electrical grid. Instead, those costs are shifted onto other consumers. The cost-shift happens because the costs of maintaining the wires and lines that make up the electrical grid are spread out across every household. When rooftop solar customers don’t pay an electric bill because their energy generation exceeds their use, the costs of those wires and lines fall to other consumers. That’s a problem because rooftop solar owners still use the grid. If nothing else, the grid operates as a battery for rooftop solar owners, giving them the option to draw on it whenever they need — like at night when solar is not producing. This option value is a service everyone should pay for equally. When there are few rooftop solar installations, the burden of shifting grid maintenance costs and other fixed costs onto non-solar households is not that significant. But those costs begin to matter as more and more people adopt solar. That is why it’s smart for utilities to start considering better ways to compensate rooftop solar owners. The cost of solar panels has fallen dramatically over the last 20 years and we should expect that trend to continue. As costs decline, more and more people will turn to rooftop solar as a viable investment for their homes. If policies aren’t set up to handle the deployment of more rooftop solar, then we’ll have missed an opportunity to open up the possibility of a greener future powered by new technologies. What we should do The ideal system for rooftop solar compensation would be a price tied to real-time rates. 
Like other generators, the chief value of rooftop solar for the electrical grid is in generation. We should treat rooftop solar like other generators and pay producers for the value they create. Real-time pricing would do just that by reflecting the needs on the electrical grid as they really are. And compensating solar owners based on generation ensures that rooftop solar owners are also paying their share of grid operation and maintenance rather than passing those costs onto non-solar households. The right way to reward solar for its environmental benefits isn’t through arbitrarily paying rooftop solar substantially more than larger solar producers receive, but through a direct tax on carbon that reflects the negative impacts of emissions. Instead of over-incentivizing rooftop solar, taxing carbon ensures energy pricing more accurately accounts for environmental impacts. Clean technologies like rooftop solar can then compete against other low-carbon technologies and fossil fuel sources like coal and natural gas. We need smart reform today in order to lay the groundwork for the next evolution of energy technologies. Rooftop solar is just the start. As new technologies like smart meters and thermostats become commonplace, they bring more opportunities to promote a cleaner future. A change to real-time pricing and compensation for rooftop solar owners would be a positive shift toward this future and a huge opportunity for new innovation in electricity markets.
https://medium.com/cgo-benchmark/valuing-rooftop-solar-is-tricky-68604f9bf7dc
['Brian Isom']
2020-10-06 19:17:50.583000+00:00
['Technology', 'Solar Energy', 'Policy', 'Environment']